The EU AI Act: key facts startups must be aware of
Economics, Politics, Culture - the Big Picture for the Start-Up Community
TL;DR - The Executive Summary
AI regulation is coming, and startup founders must be aware of the key requirements and start to think about risk mitigation before it is too late. The EU AI Act is likely to pass later this year and come into force in 2024/2025. Startups developing AI products in the EU, or selling into the EU, will need to comply or risk hefty fines.
The EU AI Act bans some uses of AI outright, including many solutions based on biometric markers, and lists other uses as ‘high risk’. A central tenet is that any AI that can deliberately or inadvertently cause lasting harm (whether financial, physical, or emotional) is classified as high risk and will need to be submitted for assessment to a national regulatory body in the provider’s home country.
AI regulation is coming in all geographies worldwide, which puts additional pressure on AI startups to be aware of the specific laws of each country they wish to go to market in. The good news is that compliance with the EU AI Act is likely to act as a free pass to many other markets, as well as build a moat around the EU market for EU startups.
Algo says ‘no’
Your last startup idea didn’t quite pan out, and while waiting for inspiration for your next one, you decide the best way to pimp your CV is to join another scale-up for a while. You find the perfect job with Closed AI, write an awesome cover letter, and press ‘Apply’, fully confident of an offer or at the very least an interview. You wait for a quick reply, but none comes.
After a couple of weeks you reach out to your buddy Sam, who you know works at Closed AI, and ask if he can check what’s happening with your application. He writes back and says that they are now using the latest recruitment tools, and their AI gives a firm ‘decline’ to your application. Man, you know it was a mistake to list amateur cannabis farming as an extracurricular activity.
Regulate, regulate, regulate
The last few months have seen an incredible rise in the temperature of the debate over AI progress. This is already the second AI hype cycle I am experiencing, but it is the largest yet. The release of the latest large language models (LLMs) and their various applications (e.g. ChatGPT, Bing integration, Midjourney, Stable Diffusion) opened the eyes of our entire society to how transformative AI can be.
As expected, the flurry of excitement has been followed by an avalanche of fear mongering and demands for regulation. At the time of writing, over 30’000 signatures have been collected on a letter demanding a six-month hiatus in training AI models in order for regulation to keep up, including heavyweights such as Steve Wozniak, Max Tegmark, and Elon Musk. In May 2023, Sam Altman, CEO of OpenAI (the company behind ChatGPT), even urged the US Congress to create AI regulation ASAP. There is no shortage of calls for AI regulation; the question is just who, when, and how.
As the introductory example illustrates, there are numerous obvious reasons why regulating AI use is the right thing to do. There are also many less obvious reasons, and less obvious applications, to consider. Europe has long been a leader in regulating the digital industries. For example, GDPR (outlining rules around data protection and digital marketing towards individuals) was adopted in 2016, and the DMA (outlining the obligations and responsibilities of digital “gatekeepers”) in 2022. Very recently, the EU fined Meta a record €1.2 billion for breaches of GDPR.
It is clear that the EU also wants to take the lead on regulating AI, and work towards writing and approving the EU “AI Act” has been ongoing since 2021. A milestone was reached on 14 June 2023, when the EU Parliament voted in favour of the proposed draft, which now goes to negotiations with the member countries for final changes. It is expected to come fully into force in 2025.
Key terms at a glance
Like most regulations, it is chiefly aimed at larger, and possibly more nefarious, companies. However, as AI is a very new technology, and as many start-ups will either be built around new generative AI applications or have an AI angle to them, the AI Act is one regulation all start-up founders should be aware of.
First of all, some applications of AI will be completely banned. This includes applications with “unacceptable risk” to human safety, such as algorithms for social scoring (classifying people based on their social behaviour or personal characteristics), predictive policing (think ‘Minority Report’), emotion recognition, and most applications involving biometric data.
Beyond banned applications, the key concept of the AI Act is the definition of ‘high risk’ AI applications. This is a list of applications where mistakes or deliberate malfeasance could cause considerable harm to humans. Applications expected to fall under this category include:
Safety and operations for utilities and road management
Education, e.g. assessing individuals or determining access to education
Everything related to hiring, evaluating, monitoring, promoting, or dismissing employees
Access to and enjoyment of essential private services and public services and benefits, e.g. creditworthiness, access to public funding, or access to emergency responders
Law enforcement, e.g. assessing individuals’ risk, polygraphs, evaluating evidence, or using data on individuals for crime analytics
Everything related to migrants, asylum seekers, and border controls
Use within the legal system for researching and interpreting facts and applying the law
The full list can be found in Annex III of the draft bill.
Those building AI applications within a ‘high risk’ area will need to follow strict regulation, have their algorithms certified by national regulatory bodies, and be able to prove that no harm can come to an individual due to the algorithm. This includes the possibility that a human actor could use a well-intended system for malicious purposes.
The act will apply to all businesses building AI applications within the EU, as well as any non-EU based company selling into the EU. A single customer within the EU is enough for a non-EU company to be required to comply with the AI Act.
A breach of the rules will cost you. If you are found to be in breach of the prohibited uses, or failing in data governance for a high risk application, the fine will be €30M or 6% of global annual turnover, whichever is higher. Any other breach of the regulation for high risk applications will bear a fine of €20M or 4% of global annual turnover, whichever is higher.
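To make the fine tiers concrete, here is a minimal, purely illustrative sketch in Python of the ‘whichever is higher’ rule described above. The function and variable names are my own, and the thresholds are the draft figures quoted here, which may still change before the final text is adopted; this is not legal advice.

def estimated_max_fine(global_annual_turnover_eur: float, breach_type: str) -> float:
    # Illustrative only: the draft sets each fine at a fixed amount or a
    # share of global annual turnover, whichever is higher.
    if breach_type == "prohibited_use_or_data_governance":
        fixed_amount, turnover_share = 30_000_000, 0.06  # €30M or 6%
    elif breach_type == "other_high_risk_breach":
        fixed_amount, turnover_share = 20_000_000, 0.04  # €20M or 4%
    else:
        raise ValueError(f"Unknown breach type: {breach_type}")
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Example: a company with €1 billion in global annual turnover
print(estimated_max_fine(1_000_000_000, "prohibited_use_or_data_governance"))  # 60000000.0
print(estimated_max_fine(1_000_000_000, "other_high_risk_breach"))             # 40000000.0

Note that for a small startup the fixed amounts dominate: with, say, €2 million in turnover, the maximum exposure is still €30M or €20M respectively, which is why these fines are existential at any company size.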
Impact on startups
Many startup founders are not fully aware of the laws already in force, or coming soon. Whereas most startups are unlikely to be building solutions for the prohibited applications, and most are likely to be aware of the sensitivity of such applications, many startups could find themselves inside the high risk area. Over the past 10 years, many startups have been working on products in the risk management and credit space, or in recruitment and talent management. Many of them may even have started their businesses with a sense that data could help underserved communities by being unbiased compared to e.g. a human evaluation/decision. Yet, they will from one day to the next become part of a regulated industry, with all the associated extra work and cost. It seems likely many will fail, especially given the current tough fundraising climate.
A secondary question is whether the EU will become less competitive for AI startups in general due to the AI Act, and whether we will see fewer international champions emerge. In a recent survey of 100+ startups, more than a third believed they would be classified as ‘high risk’, and 16% said they were considering either not building AI at all, or doing so outside of the EU. Any regulation is an obstacle for startups, but challenges can also be opportunities. An EU “safe AI” certification could be seen as a gold standard outside of the EU, which could be a benefit and a sales argument. Furthermore, it will also build a moat around the EU market - a $17 trillion market which is second only to the USA ($23 trillion).
Global perspective
The US presents a more complex situation. Whereas the safety concerns of AI have been on the federal agenda for several years, there is no regulation at the federal level yet. Federal agencies have been urged by both the Biden and Trump administrations to develop mitigation plans for AI risks, yet few have delivered credible plans to date. A “Blueprint for an AI Bill of Rights” was released in October 2022. Some US states have moved ahead and written their own legislation, although most of it is guidance and little of it has the potential impact that the EU AI Act will have.
It is clear that the situation is still much more lax in the US, and that any startup with customers in the US will need to stay on top of regulatory frameworks at both federal and state levels.
The Chinese government, on the other hand, has always held a tight grip over the technology sector. China was quick to bring out draft legislation for regulation of generative AI in April this year and although many of the paragraphs sound supportive of the same core values as the AI Act, it is also clear that technology firms will not be allowed to develop products that are not approved by the government. Applications by the government are also not covered by this legislation. In attempting to keep an iron grip on the technology industry, China is ironically creating the strongest privacy laws globally.
Stanford University recently published a study which found that legislative bodies in 127 countries passed 37 laws containing the words “artificial intelligence” in the past year. The world is becoming an AI regulation patchwork. In a recent conversation I had with a provider of compliance software, they admitted that it had taken their research team 18 months to identify and catalogue all the compliance requirements their clients would need.
For a startup, all this means higher development costs and higher political risk factors to include in strategic decisions, and we are only just getting started. If you are developing something that is compliant with the EU AI Act, you are likely developing under one of the strictest frameworks and should be fairly compliant in most other markets.
Pop quiz
What AI titan recently quit his job and said he regretted his life’s work due to his fears of what AI might do to humans in the future?
The “EU AI Act” is a bit of a mouthful. What famous VC also has four vowels in a row in its name?
Spying on citizens and breaching privacy is not a new phenomenon. What was the name of the infamous secret police in East Germany (DDR), who supposedly had one informant per 6.5 citizens?
What country recently announced that they would host the first “Global Summit on AI Safety” later this year?
In what classic ’50s musical film was the following quote uttered: “Lina. She can't act, she can't sing, she can't dance. A triple threat.”
Fun fact
In 2022, the EU adopted 2’430 acts or amendments to acts, the largest batch being antitrust motions related to company mergers (416). Given that an average year has circa 230 working days, that’s a whopping 10.6 adopted acts per day. It’s a good thing the EU Commission has 32’000 staff to research, draft, negotiate, and push all these documents through the process.
Quiz answers
Further reading
What are foundation models?, IBM Research, May 2022
Do Foundation Model Providers Comply with the Draft EU AI Act?, Stanford University, June 2023
The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment, The Brookings Institution, April 2023
The Global Race to Regulate AI, Foreign Policy, May 2023