The EU’s AI Act, and what you should start doing about it now 

The EU’s AI Act is happening.  So, what do data teams need to do about it? 

In March, the first legislation to holistically manage Artificial Intelligence passed its last significant barrier to becoming law, when the EU Parliament voted it through. It will now go through a couple of further, more procedural steps before coming into force in the summer. Businesses shouldn’t panic, however, as there will then be a two-year implementation period.

However, as many AI systems can take years to implement successfully, what sorts of things should businesses with EU operations be aware of, and start putting into place?

 Firstly, what’s the fuss about? 

The AI Act is the first attempt anywhere in the world to regulate AI in a co-ordinated, systematic way, and is regarded as a pioneering piece of legislation. There are some criticisms, which we will come to, and not all business leaders are entirely happy with its approach, but its ambition and breadth cannot be disputed. Besides, one major democracy had to be the first to regulate AI: it might not be perfect, but it’s a necessary step.

Its aim is to engender trustworthy AI, to manage the potential risks to individuals, and to enforce governance practices on businesses developing and deploying AI systems.

It affects all businesses using AI, but the Act categorises systems according to whether they could potentially cause harm to individual citizens; as a result, AI systems are grouped into risk categories.

The highest level of risk, where there could be a violation of fundamental rights and values (such as social scoring by governments, or voice-controlled toys that might encourage dangerous behaviour), is deemed unacceptable, and such systems will be banned.

It’s the second category – high risk – that is perhaps the most pertinent for most organisations. This is where there is a risk of impact on health, safety or fundamental rights, but where the data processing serves a legitimate purpose. The high-risk group includes specific classifications, such as:

  • Biometrics 
  • Critical infrastructures (eg transport) 
  • Education (eg exams or vocational training) 
  • Employment (CV management or employee segmentation) 
  • Law enforcement (eg evidence management) 
  • Access to essential services (which includes access to financial services and health systems) 
  • Migration (visa automation or border control) 
  • Democratic administration (court searches or voting management) 

This means that particular sectors, but also particular departments using AI, will be affected by the Act, while others may not be. Any AI system based on employee or recruitment data sets is likely to be classed as high-risk, as will any working in educational or training environments – member organisations or professional trade bodies, for instance. Essential services is a fairly broad category, but would include financial services, insurance, telco and health service access, and so will affect entire industry sectors as high-risk. But if you are conducting profiling or probability scoring in other sectors in a way that might affect outcomes shown to end customers, this might also be classed as high-risk.

A third category below high-risk – Transparency Risk – focuses on the risk of manipulation of individuals, and governs areas like chatbots, but also AI-generated content and the potential for deepfakes. This category requires transparency about sources: where the data and content have come from. AI-generated content is clearly a huge and growing area at the moment, so many emerging tools, particularly in marketing, are likely to be subject to these rules.

A fourth group of common low-risk AI systems – recommender systems, spam filters and similar tools – will require no specific regulation.

 There is also a separate classification for General Purpose AI models, which requires transparency and some risk assessment, somewhat like the Transparency Risk category.  This is in response to the development of tools like ChatGPT, so is a fairly late addition to the Act. 

 Is this likely to affect my business? 

If you operate in the EU, or have customers who live there, then yes.

As we have just seen, some sectors and departments will be affected more than others. AI in your HR teams, for example, is likely to be regarded as high-risk, whereas marketing, with its AI-generated content, might need to worry more about transparency risk. Part of the criticism levelled at the Act is that it’s risk-based rather than outcome-based, which makes it harder for businesses to manage: risk appetites and assessments may vary, but outcomes are more definitive.

At the same time, there’s still a lot of clarification and detail that will emerge during the implementation period starting in Summer 2024, so there is still plenty of time to get your AI house in order. But you will be best served by starting on this journey now, rather than leaving things to the last minute.

 What do I need to do to prepare? 

 Here’s a quick suggested list of what you should be doing to get ahead: 

 At the moment, the most important thing to do is to identify your level of exposure to the AI Act across your business.  So, as a business with EU-based customers: 

Create an “inventory” of AI systems within the business and classify each by risk type (a simple sketch of what such an inventory could look like follows this list)

Make sure you include: 

  • any AI systems that you are planning to develop, or that are in the pipeline for release in the next two years 
  • any AI systems that might be part of supplier tools or capabilities 
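
To make this concrete, here is a minimal sketch in Python of what such an inventory could look like. The risk-tier names and the AISystem fields are illustrative assumptions, not official terminology from the Act, and a spreadsheet with the same columns would serve just as well.

```python
from dataclasses import dataclass
from enum import Enum

# The Act's risk tiers as described above; these names are illustrative, not official
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned practices, e.g. social scoring
    HIGH = "high"                   # e.g. employment, essential services
    TRANSPARENCY = "transparency"   # e.g. chatbots, AI-generated content
    MINIMAL = "minimal"             # e.g. spam filters, recommenders

@dataclass
class AISystem:
    name: str
    owner: str               # team or department responsible
    purpose: str
    data_types: list[str]    # e.g. ["employee", "biometric"]
    supplier: str | None     # third-party tools and capabilities count too
    status: str              # "live", "in development" or "planned"
    risk_tier: RiskTier | None = None   # to be assessed and kept under review

# Two hypothetical entries to show the shape of the inventory
inventory = [
    AISystem("CV screening model", "HR", "shortlist job applicants",
             ["employee", "recruitment"], supplier=None, status="live",
             risk_tier=RiskTier.HIGH),
    AISystem("Campaign copy generator", "Marketing", "draft AI-generated content",
             ["customer"], supplier="ExampleGenAIVendor", status="planned",
             risk_tier=RiskTier.TRANSPARENCY),
]
```

Even at this level of detail, simply recording systems, owners, data types and suppliers in one place is usually enough to start a sensible risk-classification conversation.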

Identify your exposure to categories and data types (a rough first-pass check is sketched after this list): 

  • Could you be regarded as an “essential service”? 
  • Do you, or your suppliers, use biometrics in any area of your business? 
  • If you do fit into any areas that are clearly in scope, monitor any initiatives and standards for that sector, and engage with bodies such as the AI Standards Hub. 
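
Building on the hypothetical inventory sketched above, a first-pass exposure check can be as simple as matching each system’s data types against high-risk indicators drawn from the categories listed earlier. The keyword set below is a placeholder assumption, not a legal definition, and is no substitute for proper legal review.

```python
# Data types suggesting high-risk exposure, based on the categories above.
# This keyword set is an illustrative placeholder, not a legal definition.
HIGH_RISK_DATA = {"employee", "recruitment", "biometric", "education", "credit"}

def flag_high_risk(systems: list[AISystem]) -> list[AISystem]:
    """Return the systems whose data types overlap the high-risk indicators."""
    return [s for s in systems if HIGH_RISK_DATA & set(s.data_types)]

for system in flag_high_risk(inventory):
    print(f"Review needed: {system.name} "
          f"(owner: {system.owner}, data: {system.data_types})")
```

With the example inventory above, this would flag the HR CV-screening system for review; anything flagged this way should go to legal and compliance colleagues rather than being treated as a final classification.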
