What is the AI Act — and what do the rules mean for the UK?
The EU is a step closer to passing flagship rules that will govern the use of artificial intelligence (AI) tools, such as ChatGPT.
Under the proposals, AI systems will be regulated based on their potential to cause harm. The legal framework classifies the technology into three risk categories, from low to unacceptable, with outright bans for tools that pose the most serious threats to society.
While the UK is no longer part of the EU, and is developing its own AI rulebook, the new legislation could have a lasting effect on consumers and the tech industry alike. Here’s what you need to know about the EU’s AI Act.
What is the EU AI Act?
The EU AI Act is a sweeping raft of new rules aimed at ensuring that AI is used safely, ethically, and responsibly. As the world’s first legal framework on AI, the rules could become a benchmark for other countries seeking to pass their own AI regulations.
The European Commission first proposed the EU AI Act in April 2021, and it is expected to come into force in 2025.
In a nutshell, the bill assigns applications of AI to three risk categories: “Unacceptable risk” AI that poses a serious threat to fundamental rights and freedoms; “high-risk” AI that poses a significant risk to the health, safety, or fundamental rights and freedoms of people; and “low-risk” systems that do not pose a risk.
The potential to experiment in a safe space — a regulatory sandbox — may prove very attractive
Tim Wright, tech and AI regulatory partner at London law firm Fladgate
EU member states and companies using these tools will have to abide by different rules depending on the risk level. As part of the bill, AI tools deemed to be the most harmful will be banned, while high-risk applications will be subject to strict legal requirements, and low-risk systems will be left unregulated.
Which AI applications does the act cover?
On Thursday, European lawmakers voted to ban facial-recognition technology in public places. The move will likely be welcomed by privacy activists who claim the tech is intrusive and discriminatory.
In Europe, facial recognition was already largely banned under the General Data Protection Regulation, except in specific use cases, such as when processing is of substantial public interest. Its use is also governed by broader human rights law enshrined in the EU Charter of Fundamental Rights.
MEPs also voted to obligate companies to conduct a risk assessment before using AI, and to provide users with more information on how their data is being processed.
Social scoring ban
Alongside the ban on facial recognition, the latest rules will also ban the use of AI for social scoring and biometric categorisation.
Social scoring, also known as “social credit”, refers to systems that use AI to rank people based on their social and economic activity. The score is calculated using a variety of factors, such as their online activity, their spending habits, and their social interactions.
Social-scoring systems are currently in use in China, the US, and Israel. These systems have been criticised for limiting freedoms, violating people’s privacy, and for being discriminatory when used by government agencies, the courts, the police, insurers or financial institutions to make crucial decisions about individuals’ lives.
In addition, the rules state that companies behind generative AI tools, such as ChatGPT and Midjourney, will have to disclose any copyrighted material used to develop their systems.
How will the AI Act impact the UK?
It is likely that other countries will follow in the EU’s footsteps and adopt their own regulations on AI.
On Wednesday 24 May, UK Prime Minister Rishi Sunak met with the chief executives of ChatGPT firm OpenAI, Google DeepMind and Anthropic to discuss the need for regulation to mitigate AI’s risks.
A joint statement from the meeting acknowledged that AI’s success is contingent on having the “right guardrails” in place to ensure public confidence in the technology’s safety.
Mr Sunak stressed to OpenAI’s Sam Altman, Google’s Demis Hassabis and Anthropic’s Dario Amodei that any AI regulation must be co-ordinated internationally and that it has to be agile.
For its part, the UK recently set out its own approach to AI governance in a white paper published in March. In essence, the Government said it aims to regulate AI in a way that supports innovation while protecting people’s rights and interests.
Some of the principles laid out in the white paper echo the EU’s stance on AI. For example, the Government wants to create a new regulatory framework for high-risk AI systems. It also wants to require companies to conduct risk assessments before using AI tools. However, the white paper proposes that these principles will not be enforced using legislation — at least not initially.
What does the AI Act mean for me?
The AI Act is designed to assess how much of a danger AI models pose to society and make it clear to tech firms what they can and cannot do. In theory, this means EU lawmakers are trying to ensure tech firms cannot use computer algorithms to breach human rights.
“The AI Act will play a vital role in ensuring responsible governance, and safeguarding consumer rights and privacy while fostering trust and innovation,” Jonathan Boakes, managing director of UK digital solutions consultancy Infinum, who has spent the past 20 years advising organisations including the UK Government and regulators, told The Standard.
Gabriela Hersham, co-founder and chief executive of London start-up innovation accelerator Huckletree, agrees: “There is a fine balance to be found between using AI to protect us and not allowing it to infringe on our rights. It’s reassuring that several of the technologies are still going to be permitted, but only for law enforcement use (and within a strict framework) but that some have been banned altogether.”
According to Tom Whittaker, a senior associate in technology at UK law firm Burges Salmon, this means there will likely be more pop-ups warning you about how your data is used when you access an online service that relies on AI models, similar to the warnings about internet cookie collection. You will probably receive more emails from service providers, too.
“Consumers are most likely to notice the impact of the EU AI Act by increased transparency of when and how an AI system has been used that affects them,” he told The Standard.
The other way the AI Act affects consumers is that if you were hoping to use ChatGPT-style technology to make your job easier, you might not be legally allowed to do so.
“For example, a text-generating AI tool might be used to draft patient letters for medical professionals, utilising sensitive patient data, even if this was not its original intention,” explains JJ Shaw, a senior associate at technology law firm Lewis Silkin.
“Whilst a general purpose AI system might be considered as a great technological development by AI enthusiasts, from the EU law-making perspective, such unpredictable applications are considered ‘high-risk’.”
What does the AI Act mean for tech firms?
The other big issue is how the AI Act will impact UK tech firms and other companies that seek to do business with us, due to our proximity to the EU.
The tech giants, in any case, don’t want to take any chances.
On Wednesday 24 May, Google’s chief executive Sundar Pichai met with the European Union’s internal market commissioner Thierry Breton and agreed to work with the EU on an “AI Pact”, namely a set of voluntary standards and rules that Google will follow until formal regulations are ready.
However, OpenAI boss Sam Altman told the audience at a panel discussion at University College London on the same day that his AI firm might have to pull out of the EU if it is forced to comply with what he perceives as “impossible regulations”.
As for everyone else, one concern is that AI developers in the US will “steal a march” on European competitors, because all AI systems in the EU will need to be categorised as to whether they pose harm from the get-go, which could slow down innovation, explains Tim Wright, tech and AI regulatory partner at London law firm Fladgate.
“The US tech approach — think Uber — is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework,” he told The Standard.
However, he thinks this will be good news for UK tech firms: “The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset; however the potential to experiment in a safe space — a regulatory sandbox — may prove very attractive.”
But there are plenty of rumblings from businesses that the EU’s way of regulating is too restrictive, while others believe the UK is being too relaxed in the way it governs AI.