Hannes Snellman’s Digital Horizon: AI Act – Part I

2 September 2024

Authors: Jesper Nevalainen, Vilhelm Schröder, and Simon Mittler

It feels safe to say that AI has slipped into our corporate agendas as tech giants and startups alike race for machine intelligence with greater tenacity and focus than ever before. Some public displays may be staged mainly for the sake of PR, and yes, the hype is real. However, we should not underestimate the value to be gained. For example, we cannot afford to overlook the immense algorithmic and hardware improvements that drive this progress, not to mention the quantity and quality of training data.

With the rise of GenAI, the potential and capabilities of these systems have left many in awe, while we have become more wary of the risks related to them (a brief macro-level take on the pros and cons may be found here). Human or not, we want our friends and colleagues to be harmless, fair, and trustworthy. Consequently, regulators see a clear need to regulate this truly global technology; how it should be done seems to be the pressing issue. We have seen initiatives all around the world, from China and the White House to the G7. However, in the current political economy, it seems that the EU is yet again striving to be at the forefront, relying on the good old “Brussels Effect”. Accordingly, the European Commission proposed the Artificial Intelligence Act (the “AI Act”) on 21 April 2021, and the European Parliament and the Council formally adopted it in spring 2024.

Regulating AI systems and applications is an integral part of the EU’s Digital Decade Strategy, a wider topic that our IP & Technology Team has previously introduced here. The AI Act will bring forth obligations throughout the entire value chain by introducing a common regulatory and legal framework for the development, deployment, and use of AI systems. As such, the regulation seeks to ensure that such systems are transparent, accountable, and trustworthy.

The EU AI Act entered into force on 1 August 2024, and its application will start gradually, with the provisions on prohibited AI practices applicable as of 2 February 2025. It is to date the world's most comprehensive and far-reaching regulation on AI. Hence, let us look at the key elements of the Act:

    1. Scope: Applies to AI systems that are placed on the EU market, put into service, or used in the EU, regardless of where they are developed or manufactured. The AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
    2. Four risk categories: Establishes a risk-based approach to regulating AI systems, with different requirements based on their level of risk. Practices posing unacceptable risks are banned outright (see item 3 below). High-risk AI applications, such as those used in healthcare and transportation, are subject to stricter requirements than lower-risk applications. Limited-risk AI systems are subject to transparency obligations (see below). Minimal-risk AI systems, posing no or negligible risks, are subject to no specific obligations but are encouraged to follow voluntary codes of conduct.
    3. Prohibited practices: Prohibits certain practices involving AI, such as subliminal (hidden/unconscious) techniques that manipulate human behaviour and, subject to narrow exceptions, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.
    4. Transparency: Requires that AI systems are transparent, meaning that their functionality and the data used to train them are accessible to authorised personnel. Furthermore, users must be made aware when they are interacting with an AI system.
    5. Human oversight: Requires that AI systems are subject to human oversight, meaning that a person must be able to intervene and override decisions made by the AI system.
    6. Accountability: Requires that developers and users of AI systems be held accountable for any harm caused by the system.
    7. Testing and certification: Establishes requirements for testing and certification of AI systems, including documentation and risk management.
    8. Governance: Establishes a governance system for the implementation and enforcement of the regulation, involving various actors at the EU and national levels, such as the European Artificial Intelligence Board.
    9. Enforcement: Includes measures for enforcement, including fines of up to EUR 35 million or 7% of a company's total worldwide annual turnover, whichever is higher, for the most serious infringements.

The risk-based approach is an essential takeaway from this blog, as it will ultimately determine the level of requirements and concerns that relevant stakeholders must take into account. The starting point should be to assess the risks and impacts of the systems or applications you or your stakeholders use, develop, and/or provide, and to determine which of the aforementioned risk categories they fall into. With respect to high-risk systems in particular, the documentation obligations are likely to cause some challenges. Running a risk management system, registering relevant AI systems in the EU database, and drawing up declarations of conformity will probably entail a fair amount of administrative work.

Our team is dedicated to keeping our readers and clients up to date as the AI saga continues. Meanwhile, we would be delighted if you paid a visit to our Digital Horizon website. We hope it may serve as a tool for you to identify and navigate EU legislation that is key for your compliance in this digital era.