“For a very new technology we need a new framework” – Sam Altman, OpenAI CEO

Computer scientists at some of the largest tech companies globally are split on calls for the regulation of artificial intelligence. OpenAI CEO Sam Altman spoke before a US Senate judiciary committee on the need for regulatory guardrails that enable the benefits of AI while minimising its harms. Meta’s Chief AI Scientist Yann LeCun has recently come out against calls for an AI technology moratorium; LeCun’s position is that while regulation shouldn’t impede AI advancement, it will be humanity’s safeguard against the risks AI poses.

Altman and LeCun’s pronouncements that laws and regulation will protect against the worst of AI’s risks have raised eyebrows among legal experts and policymakers. AI regulation is still in its infancy, and developing regulation (particularly for the technology industry) is extraordinarily difficult: legislators face issues such as defining AI, choosing between soft and hard law approaches, and ensuring that slow-moving laws keep up with fast-paced technology. Good law is often said to be technology agnostic, so to many in the field the very idea of “AI law” is anathema.

Existing regulatory frameworks differ in their approach to AI. Europe has adopted a unified regulatory approach with an emphasis on individual privacy rights and accountability, but no law is yet in force. Other countries have been less keen to adopt an explicit legislative approach: the United States and Singapore both take what they consider more innovation-focused, sector-specific approaches to regulating AI, but these are not “hard” law. Australia lacks a dedicated legislative regime for regulating AI, and ownership of AI-generated works is a controversial gap in our existing laws. In January, Stirling & Rose proposed a potential solution to this gap utilising attribution and smart contracts.
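Stirling & Rose’s proposal is not reproduced here, but the general shape of an attribution-plus-smart-contract approach can be sketched. The Python below models a hypothetical on-chain attribution registry: each AI-generated work is recorded by content hash alongside its human and machine contributors, giving later ownership or licensing logic a verifiable provenance record to point at. All names and fields (AttributionRegistry, AttributionRecord, model_id, and so on) are illustrative assumptions, not the firm’s actual design, and Python stands in for a contract language.

```python
# Illustrative sketch only: a minimal attribution registry of the kind a
# smart contract might implement for AI-generated works. Hypothetical
# design; not Stirling & Rose's actual proposal. Requires Python 3.10+.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class AttributionRecord:
    work_hash: str      # SHA-256 content hash identifying the work
    human_author: str   # person who prompted or curated the output
    model_id: str       # AI system that generated the work
    registered_at: str  # UTC timestamp of registration


class AttributionRegistry:
    """In-memory stand-in for an on-chain attribution registry."""

    def __init__(self) -> None:
        self._records: dict[str, AttributionRecord] = {}

    def register(self, work: bytes, human_author: str, model_id: str) -> AttributionRecord:
        # Hash the work so attribution attaches to the content itself,
        # not to any particular file name or location.
        work_hash = hashlib.sha256(work).hexdigest()
        if work_hash in self._records:
            raise ValueError("work already registered")
        record = AttributionRecord(
            work_hash=work_hash,
            human_author=human_author,
            model_id=model_id,
            registered_at=datetime.now(timezone.utc).isoformat(),
        )
        self._records[work_hash] = record
        return record

    def attribution_of(self, work: bytes) -> AttributionRecord | None:
        # Anyone holding the work can look up its recorded provenance.
        return self._records.get(hashlib.sha256(work).hexdigest())


# Usage: register a generated work, then verify its provenance later.
registry = AttributionRegistry()
registry.register(b"generated artwork bytes", human_author="A. Prompter", model_id="model-x-v1")
print(registry.attribution_of(b"generated artwork bytes"))
```

The design choice worth noting is that the registry keys on a hash of the content rather than on any claimed identity, so the attribution record can be checked by anyone holding a copy of the work, which is what would make such a record useful as evidence of provenance in an ownership dispute.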

A better solution than explicit AI law may be giving existing laws an “AI wash”: adapting existing legal frameworks to account for AI. Within corporations law, for example, this might be done by changing the definition of a legal “person” to include machines or algorithms. Legal change must be undertaken in a measured and timely way. The gravity of these changes cannot be an excuse for inaction – as inaction itself will increase risk.

AI poses a number of risks, some perhaps beyond what the law is capable of considering and accounting for. Geoffrey Hinton argues that AI is fundamentally different to our human understanding of intelligence and requires solutions beyond our traditional policymaking systems.

While there is no one clear way forward, it is evident that a balance needs to be struck between innovation and risk management in regulating AI at all levels. Stirling & Rose has partnered with the Gradient Institute to support institutions in artificial intelligence response planning, including guiding changes to existing legal, governance and policy frameworks.


Stirling & Rose is an end-to-end corporate law advisory service for the lawyers, the technologists, the founders, and the policy makers.

We specialise in providing corporate advisory services to emerging technology companies.

We are experts in artificial intelligence, digital assets, smart legal contracts, regulation, corporate fundraising, AOs/DAOs, space, quantum, digital identity, robotics, privacy and cybersecurity.

When you pursue new frontiers, we bring the legal infrastructure.

Want to discuss the digital future?

Get in touch at info@stirlingandrose.com | 1800 178 218