The ongoing Australian discourse surrounding AI regulation represents a pivotal juncture in the evolution of this transformative technology.


The Australian Government is soliciting submissions during a six-week consultation on the safe and responsible use of AI (Submission) [1]. As global demand for regulation escalates, the specifics of whether, how and what to regulate to address AI risks remain unclear, underscoring the importance of the details.

Regulating AI presents a significant challenge due to its rapidly evolving nature, the vast diversity of its applications and the complexity of many AI models. The dynamic pace of AI advancement requires agile and adaptable policy frameworks that effectively balance innovation, societal impact and risk management.

This document provides a neutral and practical framework to consider potential regulatory approaches.  It aims to foster thoughtful, comprehensive and varied submissions on the pressing issue of determining the appropriate response to the opportunities and risks associated with AI.  

1. Defining the risk to safe and responsible AI in Australia 

The first step in preparing a submission is to clearly describe an AI risk that is sufficiently significant in scale and/or impact to warrant Australian Government intervention. The risk may relate specifically to a sector, business, community or individual.

2. Consider Options

Consider a range of genuine and viable options, including non-regulatory options, to address the identified AI risk. This should include consideration of any control mechanisms already in place, for example laws which are not AI-specific but which nonetheless capture products and services with an AI component.

When exploring regulatory options, these key questions can provide guidance: 

2.1 Should regulation be Preventative (ex ante) or Remedial (ex post)?  
  • Preventative regulation aims to anticipate and manage perceived risks before they materialise, for example by requiring regulatory pre-requisites such as licensing or certification of AI systems prior to their deployment.
  • Remedial regulation provides recourse to those unlawfully harmed and/or sanctions non-compliance. Such regulation addresses non-compliance after it occurs, as in cases of unlawful discrimination by AI-enabled automated decision-making tools.
2.2 What is the appropriate connection between the AI System and the regulated entity? 

Under current Australian law, only legal entities – individuals or corporate bodies – can be subject to regulation. As AI systems do not hold legal personhood [2], an accountable legal entity must be identified. Could the responsible party be the one who:

  • Conceived or developed the AI?  
  • Owns the AI? 
  • Deployed, sold or licensed the AI either as AI owner or on behalf of the AI owner? 
  • Deployed the AI as part of other services and products it offers? 
2.3 What are the Threshold Considerations? 

Establishing thresholds for AI regulation can provide a more nuanced approach to oversight. Such thresholds could ensure that regulatory burdens are proportionate, for example by avoiding stifling smaller innovators. For instance, regulations might apply only to:

  • Companies beyond a certain size  
  • AI systems with a specified level of complexity or impact (e.g. high-risk applications) [3]
  • AI systems with a broad user base
2.4 Which Broad Category of Regulation is Appropriate?

Regulation generally falls into three categories:  Command and Control, Performance-Based Regulation and Management-Based Regulation.  In practice, regulations may include a combination of these approaches.  

  • Command and Control (or Prescriptive) regulation prescribes explicit methods and processes that all regulated entities must adhere to. Note: The regulatory authority must possess the necessary technical expertise to establish a sufficiently safe and efficient mode of operation for the regulated activity.
  • Performance-Based regulation stipulates ultimate standards that all regulated entities must meet, but does not mandate the methods and processes for achieving those standards. This approach affords regulated entities the flexibility to determine their own compliance methods.  Note: Suitable performance-based standards must be available to the regulatory authority.  
  • Management-Based regulation entrusts regulated entities with the task of setting their own standards and assessing whether they have attained the objectives. Under this form of regulation, the government expects entities to self-regulate under the oversight of the regulatory authority.

Safety Case Regime: Management-Based Regulation

The Safety Case regime which applies to Australia’s oil and gas sector is an example of Management-Based regulation [4]. The operator of each offshore facility must prepare a safety case for submission to the regulator, NOPSEMA [5]. The operator must undertake a detailed analysis of risks and responsive control mechanisms. Certain elements of the safety case may require independent verification, introducing an external check. The operator must conduct activities at the offshore facility in compliance with a safety case that has been reviewed and accepted by NOPSEMA.

OpenAI’s system card for GPT-4, with its methodology for risk identification and its depiction of control mechanisms, has notable similarities to the concept of a safety case.

2.5 What Regulatory Instruments might be appropriate?

In seeking to regulate AI, there is a diverse array of regulatory instruments which can be deployed in isolation or in combination. They range from outright prohibitions and licensing requirements to adherence to established standards, and include:

  • Prohibitions 
  • Licensing, Certification, Permits, Approvals, Acceptances, Registration 
  • Standards compliance (regulator verified or non-verified)  
  • Independent review and assessment 
  • Risk/Impact Assessment and response plan 
  • Inspections 
  • Reporting requirements 
2.6 Are there acceptable alternatives to Regulation?

Regulatory measures represent one approach among many possible strategies to manage AI risks.  The following examples illustrate some approaches which may complement or be alternatives to regulation.  

  • Industry led/self-regulation and co-regulation [6]
  • Voluntary codes of practice, standards or principles 
  • Information, education, capacity building 
  • Economic or market-based interventions, e.g. taxes, subsidies, incentives
  • Soft power nudges 
  • Increased enforcement of existing regulation 

How can we help you?

Making a submission to the Australian Government on this matter is a meaningful way to contribute to shaping the policies that will affect our society for years to come. Whether your submission advocates for, cautions against, or proposes modifications to regulation, it forms part of an important and robust dialogue that can guide balanced and thoughtful policy decisions. Regardless of standpoint, each submission brings a unique perspective to the table, enriching the overall understanding of and approach to AI regulation.

In the spirit of shaping a future augmented by responsible AI, your voice is crucial. Stirling & Rose strongly encourages you to contribute to this important dialogue.

Stirling & Rose is well-equipped to assist you in crafting your submission to ensure that your perspectives are articulated clearly and effectively.  Please get in touch with us at


[2] At present, Australian law does not recognise AI as a legal entity. Australia could legislate to assign legal personality to an AI. See our submission to the UK Law Commission, which examined options to grant legal personality to Autonomous Organisations.

[3] For example, the proposed EU AI Act classifies AI systems by risk level, ranging from unacceptable and high risk down to limited and minimal risk.

[4] Schedule 3, Offshore Petroleum and Greenhouse Gas Storage Act 2006 (OPGGS Act) and the Offshore Petroleum and Greenhouse Gas Storage (Safety) Regulations 2009

[5] National Offshore Petroleum Safety and Environmental Management Authority

[6] Co-regulation is where government agencies and industry collaborate, as in the way legal practitioners and medical practitioners are regulated both by government and by professional regulatory boards.


Stirling & Rose is an end-to-end corporate law advisory service for the lawyers, the technologists, the founders, and the policy makers.

We specialise in providing corporate advisory services to emerging technology companies.

We are experts in artificial intelligence, digital assets, smart legal contracts, regulation, corporate fundraising, AOs/DAOs, space, quantum, digital identity, robotics, privacy and cybersecurity.

When you pursue new frontiers, we bring the legal infrastructure.

Want to discuss the digital future?

Get in touch at | 1800 178 218