The Citizens United case concerned campaign finance law and a corporation’s right (not merely a natural person’s right) to free speech under the First Amendment to the U.S. Constitution. The Court held 5-4 that the free speech clause of the First Amendment prohibits the Government from restricting independent expenditures for political campaigns by corporations, including for-profit companies, non-profit organisations, labor unions, and other kinds of associations.
For many, the focus has been the impact of Citizens United on the 2024 U.S. election, with wealthy donors assisting the ‘core campaign activities’ of both presidential nominees Trump and Harris by donating to super political action committees (super PACs) with little to no restriction, and whether this had an undue impact on free and fair elections.1 Critics point to Elon Musk’s donation of ‘at least $277 million to two super PACs’ in support of Donald Trump’s campaign and ask whether this compromised the independence of Government.2
At Stirling & Rose, we consider that the potentially more interesting long-term impact of the Citizens United case is in the Court’s recognition that corporations and unions have a constitutional right to political speech, such that ‘no sufficient governmental interest justifies limits’ upon it.3 This extends a constitutional right of humans to non-human systems.
We consider it highly likely that Citizens United will be used to argue for the extension of constitutional rights (such as free speech) to AI systems, particularly AI systems acting as corporate-style entities. We call these corporate-style entities Artificial Intelligence Autonomous Organisations (AIOs), or Autonomous Organisations (AOs).
AIOs are the natural evolution of the corporation as a legal fiction with legal status as a person. This evolution is already legally foreseeable in the recognition of blockchain-based Decentralised Autonomous Organisations (DAOs), for example in Vermont, Wyoming, and Tennessee.
Corporations, as separate legal entities, acquire legal personhood upon incorporation.4 This legal fiction, combined with the corporate veil, limits the liability of shareholders and recognises that the corporation itself can be held liable under law (although the veil may be pierced in exceptional circumstances).5 The recognition in Citizens United of a corporation’s constitutional right to political speech is likely to be vigorously invoked in arguments to extend both legal power and legal responsibility to AI systems in our society.
Whilst recognising AIOs, AOs and DAOs as legal entities may seem logical, it raises complex questions about the scope of their rights and responsibilities. This is the Responsible AI Problem: the law, and its long enforcement arm (the police and, in some cases, the military), works only by affecting human bodies and feelings. AI systems do not (currently) have corporeal bodies or feelings, and cannot be incarcerated, shamed, or restricted in their freedom and rights in a way that is as impactful as a punishment (or a motivator to lawful behaviour) is on a human. See here for more on the Responsible AI Problem.
Unless there is a significant legal re-allocation of responsibility between principals and agents (in the broadest sense), or a systematic re-allocation of liability and risk between the creators (e.g. engineers and researchers), deployers (e.g. corporations) and controllers of an AI system, AI systems will likely need to be recognised as legal persons under the law. We believe this third path, legal personhood, is more compelling than the first two, as AI systems are already acting in nascent ways, for example as agents in our communities, and will increasingly do so. Any attempt to adjust existing agency theory (and similar laws), or any wholesale recategorisation of the existing body of law as it interfaces with AI systems, is likely to be problematic.

It is important to recognise that the development and evolution of AI systems is dynamic, so any overly brittle body of law is likely to hamper responsible AI system development and not be fit for purpose. The social and economic impact of AI systems acting as independent stakeholders will only increase. No economic or social stakeholder, particularly one with impactful decision-making power, can or should, in the long term, be excluded from sufficient status under the law for that stakeholder to be held legally responsible.
This is the real policy work under the banner of “Responsible AI” that we suggest needs greater attention.
Stirling & Rose is an end-to-end corporate law advisory service for the lawyers, the technologists, the founders, and the policy makers.
We specialise in providing corporate advisory services to emerging technology companies.
We are experts in artificial intelligence, digital assets, smart legal contracts, regulation, corporate fundraising, AOs/DAOs, space, quantum, digital identity, robotics, privacy and cybersecurity.
When you pursue new frontiers, we bring the legal infrastructure.
Want to discuss the digital future?
Get in touch at info@stirlingandrose.com | 1800 178 218