Written by Marina Laukes


The term Artificial Intelligence (AI) may no longer be new, but it remains polarising and continues to raise questions. Its implementation and increasingly advanced algorithms are already reshaping our daily lives, from smartphones and search engines to education, healthcare, and finance. However, AI and its potential are controversial because of the question of whether such systems are morally and ethically acceptable, and to what extent they breach current legal boundaries. Hence, to strike a balance between humans and technology, whether in society as a whole or at the corporate level, laws and guidance are needed, especially in a constantly changing and increasingly complex environment. Ursula von der Leyen, current President of the European Commission, echoed this back in 2019 in the political guidelines for the Commission’s 2019-2024 term, saying “I want Europe to strive for more by grasping the opportunities from the digital age within safe and ethical boundaries” (European Commission et al., 2020).

This article is intended to provide a brief and critically considered overview of the European Union’s current draft regulation, the EU AI Act, which aims to create a binding legal framework for AI developments, whether for developers or users.


The first proposal for a legal framework was presented by the European Commission on April 21st, 2021. In principle, however, the process had already begun in February 2020, when the European Commission published several frameworks as part of the EU’s Digital Strategy, including a white paper on artificial intelligence. With these frameworks, the Strategy aims to create an environment in which companies and, above all, society can make a decisive contribution to the transformation towards a climate-neutral Europe by 2050. In addition, the priority is to establish a harmonised framework with the goal of “becoming a global leader in innovation in the data economy” (European Commission, 2020), all under the banner of digital sovereignty. The EU thus aims to operate more independently in technological development and research (Madiega, 2020).

In this context, it is important to clearly define the concept of an AI system, which the EU AI Act describes as “software that is developed with one or more of the techniques…and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (European Commission, 2021, p. 39). The techniques referred to are based on machine learning, logic- and knowledge-based approaches, and statistical approaches; this definition therefore also determines how risks are classified and helps to differentiate AI from simpler software systems (Council of the EU, 2022).

In addition, the Commission’s 2020 White Paper, which preceded the AI Act, addresses two main issues:

  1. To create an environment that promotes cooperation at both national and international level, including among Small and Medium-sized Enterprises (SMEs), together with greater involvement of the private and public sectors, in order to drive research and innovation. The EU refers to this as the Ecosystem of Excellence.
  2. With the so-called Ecosystem of Trust, the EU defines one of its core concerns for the regulation of AI. It emphasises the protection of society while promoting the uptake of trustworthy AI systems (European Commission, 2020). The EU repeatedly refers to the direct promotion of human-centred AI, a discipline intended to provide more transparency: humans and their capabilities are to be supplemented and extended rather than completely substituted (Geyer et al., 2022).

Why do we need rules?

The fact that AI is becoming part of everyday life is reflected in the constantly increasing number of companies using it: 56% of companies surveyed report using AI in some way, particularly in service operations (McKinsey, 2021), and AI is estimated to increase global GDP by up to 15% by 2030 (Bughin et al., 2018). This is why creating a legal framework for AI is so important. Countries such as the United States and the EU member states, whilst upholding similar democratic and legal values, differ on the question of how to regulate AI morally. While the EU clearly emphasises the fundamental rights and privacy of its citizens, the US emphasises transparency and protection from the discrimination that AI applications can cause. But it is not only the USA and the EU that want to integrate AI systems into society responsibly with the help of regulation; countries such as China and the UK are also working on legislation (HEC Paris Insights, 2022). In its framework, the EU refers to the following objectives (European Commission, 2021):

  • Maintaining and guaranteeing the fundamental rights, values, and standards of the EU.
  • Creating legal certainty in order to encourage investment.
  • Improving institutional oversight and the effective enforcement of existing directives in the context of safety.
  • Supporting the uptake of sound AI-based systems in the European single market and preventing market fragmentation.

Scope of application

Since new technologies, such as AI, bring with them unforeseen risks which may fall outside the scope of existing laws, an adequate legal framework is needed to counteract potential threats. Risks can be classified into four categories: unacceptable, high, limited, and minimal (Kop, 2021).

Classifying risks is important for the EU to raise general awareness of potential harms, which are primarily security-related, so that organisations can prepare accordingly. This need for increased awareness is demonstrated by a study by Benjamin et al. (2021), according to which, in 2020, only 48% of companies surveyed stated that they understood the significance of complying with the regulations.

Unacceptable risks

This category covers all clearly identifiable risks that threaten the safety of society and people’s entire living environment (European Commission, 2021). AI systems are prohibited outright if they manipulate individuals and, above all, if they rely on techniques that cannot be ethically and morally justified (Council of the EU, 2022).

High risks

AI applications are considered high-risk when they have a large impact on society, so liability must be determined before launch. High-risk AI systems include those used in critical infrastructure, such as transport, where failure could pose a direct threat, as with self-driving vehicles. They also encompass systems with an impact on education and careers (e.g. application processes, such as CV assessment). The judicial and legal systems likewise fall into the high-risk category, because AI used there could affect fundamental rights and democratic processes. Additionally, essential private and public services, such as credit checks, are classified as high risk because they could lead to citizens being wrongly denied access to loans (Kop, 2021).

The EU emphasises that providers of AI systems with a high-risk classification are obliged to register them in a dedicated EU database. The current AI framework also explicitly provides that a natural or legal person may file a complaint with the relevant market surveillance authority in case of non-compliance with AI laws (Council of the EU, 2022).

Limited and minimal risks

Limited and minimal risks are those that pose little or almost no risk to societal security. Chatbots, for instance, fall under limited risk: they are subject to transparency obligations, meaning users must be made aware that they are interacting with a machine so that they can make informed decisions during the interaction (European Commission, 2022).


However, there are concerns that the current draft AI Act is insufficient and incomplete. Several organisations point out that the issue of transparency is inadequately addressed. For high-risk AI systems in particular, critics note that the planned database would disclose only information about the provider and its registered risks, not about how a system is actually used. This makes it difficult for the public to determine where high-risk AI systems are deployed, on whom, and, above all, for what purpose. The establishment of meaningful rights and legal remedies is also a subject of the debate surrounding the AI Act: the framework currently grants individuals no rights of their own, leaving unresolved what mechanism for legal redress an individual would have (EDRi, 2021). Thus, the Commission’s stated priority of human-centric AI is being undermined (Algorithm Watch, 2021).


The EU’s AI Act is promising, especially in light of the rapidly advancing implications of algorithmic systems and the concurrent need for security regulations. Such a framework would contribute to the path towards a safe future, especially with regard to citizens’ privacy. However, there are still many unresolved issues, such as the use of biometric systems and facial recognition technologies for the purpose of law enforcement. Here it becomes clear how narrow the line is: on the one hand, the focus is on protecting society and preventing wrongful prosecution caused by potentially biased algorithms, which would suggest that an outright ban is the right measure. On the other hand, a complete ban would arguably entail an opportunity cost in the long run, given that such technologies could also help society prevent criminal offences. Reconciling the desire to protect society with the preservation of private rights is proving to be a difficult task (Erler, 2022). In conclusion, the current EU framework appears to be only the first step on a long road towards a responsible implementation of AI systems.


References

Algorithm Watch. (2021, April 22). AlgorithmWatch’s response to the European Commission’s proposed regulation on Artificial Intelligence – A major step with major gaps. https://algorithmwatch.org/en/response-to-eu-ai-regulation-proposal-2021/

Benjamin, M., Buehler, K., Dooley, R., & Zipparo, P. (2021, August 10). What the draft European Union AI regulations mean for business. McKinsey & Company.


Bughin, J., Seong, J., Manyika, J., Chui, M., & Joshi, R. (2018, September 4). Notes from the AI frontier: Modeling the impact of AI on the world economy. McKinsey & Company. https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy

Council of the EU. (2022, December 6). Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights. https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/

EDRi. (2021, November 30). An EU Artificial Intelligence Act for Fundamental Rights. https://edri.org/wp-content/uploads/2021/12/Political-statement-on-AI-Act.pdf

European Commission. (2022, September 29). Regulatory framework proposal on artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

European Commission. (2021, April 21). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=DE

European Commission. (2020, February 19). A Europe fit for the digital age. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age_en

European Commission. (2020, February 19). White Paper on Artificial Intelligence – A European approach to excellence and trust. https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf

Erler, A. (2022, January 3). The EU’s Artificial Intelligence Act: Should some applications of AI be beyond the pale? Heinrich Böll Stiftung. https://hk.boell.org/en/2022/01/03/eus-artificial-intelligence-act-should-some-applications-ai-be-beyond-pale

European Commission, Directorate-General for Communication, & von der Leyen, U. (2020). Political guidelines for the next European Commission 2019-2024; Opening statement in the European Parliament plenary session 16 July 2019; Speech in the European Parliament plenary session 27 November 2019. Publications Office of the European Union. https://data.europa.eu/doi/10.2775/101756

Geyer, W., Weisz, J., Pinhanez, C. S., & Daly, E. (2022, August 3). What is human-centered AI? IBM Research Blog. https://research.ibm.com/blog/what-is-human-centered-ai

HEC Paris Insights. (2022, September 9). Regulating Artificial Intelligence – Is Global Consensus Possible? Forbes. https://www.forbes.com/sites/hecparis/2022/09/09/regulating-artificial-intelligence–is-global-consensus-possible/?sh=2009e96d7035

Kop, M. (2021, October 1). EU Artificial Intelligence Act: The European Approach to AI. Stanford Law School. https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/

Madiega, T. (2020, July). Digital sovereignty for Europe. European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/651992/EPRS_BRI(2020)651992_EN.pdf

McKinsey & Company. (2021, December). The state of AI in 2021. https://www.mckinsey.com/capabilities/quantumblack/our-insights/global-survey-the-state-of-ai-in-2021
