Human-Centered AI: Unpacking the European Union’s Upcoming Artificial Intelligence Act

By Mia Lanca
Date: 7th November 2023

AI and the Human Brain. Courtesy of geralt (2018)

While artificial intelligence has been used in public-facing services and products for years, the rise of chatbots and other forms of generative AI has pushed the conversation about it to the centre of public attention. ChatGPT’s launch in 2022 seemingly made everyone agree that the future of society is one of AI. It also caused disagreement about what is to be expected from that future. As with any emerging technology (particularly one this popular), many view it as a revolutionary force that will lead to a high-tech utopia, while others see it as the thing that will ultimately lead to the downfall of humanity. In the midst of this polarization, it became easy to forget that AI is, at the end of the day, a tool for humans to use – not something that is inherently good or bad. So the main question about AI shouldn’t be what it will inevitably lead to, but what we will allow ourselves to do with it.

THE REGULATION OF AI IS A CONFUSING TOPIC.

There are a few main reasons for this. Firstly, it’s difficult to have constructive conversations about AI, because the term itself has become muddled. While AI is often treated as a single issue (currently, mostly referring to machine learning systems such as ChatGPT), in reality it is a broad term encompassing many different types of algorithms, which work in different ways, pose different threats, and require different regulatory responses. Secondly, sensationalist coverage of the possibilities and potential harms of AI makes it difficult to objectively assess what dangers current technology actually poses. Lastly, AI is a profitable and fast-developing industry, and regulating it comes with economic sacrifice and a potential slow-down of technological progress. This makes it a controversial topic for many.

In order to provide some clarity about what is being done about AI (as well as what isn’t), this article analyzes the most ambitious attempt to regulate emerging AI technology – the EU’s proposed Artificial Intelligence Act – and offers a better understanding of its scope, its relevance, and its implications for the general direction in which we can expect artificial intelligence to develop.

AI ethicist in front of neural network screens. Courtesy of Unreal (2023).

THE PROPOSAL

At the core of the European Union’s approach to AI regulation is the concept of human-centered artificial intelligence – AI which benefits and works with humans and not against them. This regulation aims to safeguard human rights, focusing mainly on protecting the right to human dignity, non-discrimination, respect for private life, protection of personal data, freedom of expression and freedom of assembly. 

The Artificial Intelligence Act was proposed in 2021 and is still being finalised – it has not yet entered into force. The Act has so far gone through two trilogues and is likely to be passed in early 2024. Once passed, providers of AI systems will have a grace period of two years to ensure their products comply with the regulation.

The proposal evaluates AI systems based on the amount of risk they pose to human rights and calibrates the severity of regulation according to that assessment. The system has three tiers: AI posing an unacceptable risk, a high risk, and a low or minimal risk. In order not to overregulate the market and stifle innovation, the legislation mainly targets unacceptable- and high-risk AI.

Unacceptable AI systems – those that will not be allowed on the EU market at all – are few and far between. The most important among them are real-time biometric identification systems and social-scoring systems: systems likely to seriously harm the rights and livelihoods of humans (for example, a person losing social benefits due to a low social score calculated by an AI system). This category also includes two more types of AI systems: software likely to manipulate users through subliminal techniques beyond their awareness, and software that exploits the vulnerabilities of specific groups (e.g. children) in order to influence their behavior in a way likely to harm them or others. While at first glance this seems to address concerns about malicious AI shaping public opinion, in reality it does not amount to much: the requirements for the provision to apply are so complicated that they are almost impossible to fulfill.

The high-risk category is the most widely applicable and the most contested. Systems in this category are used in fields such as critical infrastructure, education, employment and social benefits – in short, any area in which mistakes by an AI system could have tangible repercussions on people’s lives and wellbeing.

To make these AI systems less risky, providers need to fulfill certain criteria before placing their products on the market. These criteria focus on risk management, data quality (data has to be sufficiently relevant, representative, accurate and complete), cybersecurity, transparency, and building for human oversight.

Ideally, this would allow the EU to ensure that AI systems on its market do not discriminate because of insufficient data, do not pose as humans, are not vulnerable to attacks by outside actors, manage and mitigate their risks as they evolve, and can be taken over by a human being at any moment if necessary.

Certain types of AI systems face additional transparency requirements. The most prominent are systems meant to generate or manipulate content – chatbots, deepfakes and others. These systems are required to disclose that their output is created or altered by artificial intelligence. Providers are also required to disclose the copyrighted content used to train their models.

The EU would require a conformity assessment to verify that these conditions are fulfilled before a system is placed on the market.

THE GOOD, THE BAD AND THE FORGOTTEN

AI and the EU. Courtesy of The European Commission (2019).

The proposed AI Act addresses a couple of key issues concerning AI systems of today. 

It adequately addresses the issue of discrimination as a consequence of poor training data (and promises to grant providers access to quality data when needed). It partially tackles the issue of undisclosed AI-generated content – the remaining problem being that users of AI software can still take AI-generated content and present it as authentic. Its regulation of biometric recognition leaves space for abuse, since such technology is only prohibited in real time and in person – not retroactively or online (in live streams). This leaves ample space for limitations of civil freedoms such as the right to protest. The Act does well in addressing cybersecurity and transparency for commercial AI.

The issue of copyrighted content is addressed but not resolved, as there is no final decision on whether it is fair to use such content in the development of AI systems. However, the mandatory disclosure of its use does open the door for that discussion.

While this Act should be viewed in the context of other EU regulation concerning technology and data (especially since the EU acknowledges that many of the issues accelerated by AI are also found in other types of technology), some issues remain unaddressed.

Notably, the military use of AI systems and the environmental impact of AI technology.

AI systems used exclusively for military purposes and for the purposes of national security are completely exempt from the proposal, despite the fact that AI technology is increasingly being applied to warfare – for surveillance, threat evaluation, underwater mine warfare, cybersecurity, intelligence analysis and more. Unless these systems are put under the same scrutiny as commercial ones, there is a higher likelihood of potentially harmful uses of AI technology by states.

The environmental impact of artificial intelligence is also not addressed at all, despite the fact that, like any big and growing industry, it requires vast amounts of energy and ecological sacrifice in order to function and develop (and it’s not the clean energy the EU is so fond of).

WHAT’S TO BE EXPECTED?

The EU and the European Continent. Courtesy of Holistic AI (2023).

While the objectives of this proposal are clear, there is much discussion about what exactly its results will be. On one hand, the proposal faces strong opposition. Many claim that the compliance costs that come with the Act will suppress smaller and open-source developers, leaving AI in the hands of big corporations who can pay to keep up with the regulation. There are also fears that it will make the EU less competitive in the AI market, lagging behind actors like China and the US, which have taken vastly different approaches towards AI.

While the definite outcomes of the proposal are impossible to predict, the Act is not as oblivious to these two issues as its opponents make it out to be.

Firstly, the Act’s objective is to harmonize legislation among member countries. If enacted, it would reduce the number of different AI regulations on the EU market to a single set of criteria. Considering that some European countries have already started forming their own AI legislation, if the Act doesn’t pass, companies are likely to face multiple sets of standards (depending on how many countries choose to enact legislation). The reduction in different rules inside the market that the AI Act proposes would therefore probably decrease the cost of compliance for companies looking to enter the market.

Recognizing the importance of innovation in the AI field, the proposal offers certain mechanisms to counter the stifling effect regulation can have. It urges Member States to establish regulatory sandboxes – schemes in which participating businesses have a window of time during which they do not have to fully abide by the regulation. While some restrictions remain – primarily the obligation to stop if a system is shown to inflict harm and to keep proper documentation of risks – the majority are lifted in order to foster innovation and promote development. The Act gives priority to small and medium-sized enterprises (‘SMEs’) and start-ups when deciding who gets to “play in the sandboxes”.

To help minimize the effect of the AI Act on small-scale providers, the proposal urges Member States to develop initiatives to ease the burden on them (mainly focused on information and communication). Additionally, it promises to keep the interests and needs of small-scale providers in mind when conformity assessment fees are set, as well as when deciding on the languages accepted for relevant documentation (to lower translation costs).

The implementation of these initiatives is up to member states, and so the protection of small businesses will depend on the efforts of individual countries. 

Big companies (among them Meta, Google and Amazon) have voiced their dislike of this legislation, which has led some to worry that they might pull out of the EU market. Similar doubts were voiced before the 2018 General Data Protection Regulation (GDPR), yet all of those companies ultimately complied with the regulation’s demands in order to keep their access to the EU market.

Those more optimistic about the effect of the regulation expect to see a so-called Brussels effect – positive changes in products offered in non-EU countries, as well as in other countries’ legislation, because of the Act. The Brussels effect occurs because it is easier for businesses to follow one set of standards: they are more likely to make all of their products abide by the same rules (so that products placed on markets outside the EU are also compliant), and more likely to enter markets with the same type of regulation, since the barrier to entry there is lower. This effect was seen after GDPR was passed, and a similar effect is likely to be seen, to an extent, with the AI Act.

However, this does not come without a drawback. The EU regulation of AI is meant to harmonize legislation in all member states, and to do so across the entire field of AI, not just high-risk AI. This means that member states are obliged to follow the regulation and are not allowed to regulate outside of it (except in a few minor cases). Considering that the EU AI Act leaves everything but high- and unacceptable-risk AI unregulated, member states cannot impose their own regulation on lower-risk AI systems even if they believe those systems pose enough danger to warrant it. So, while the regulation will most likely lead to stricter worldwide management of high-risk AI systems, it will also result in a lack of regulation of lower-risk systems.

THE FUTURE OF AI

Considering that the way we choose to regulate and shape AI today determines how AI will develop and influence our society tomorrow, the proposed AI Act offers a look into what we can expect from AI in the future. The EU’s human-centric approach highlights the importance of human rights and non-discrimination. By focusing the regulation on direct harm to human rights that AI systems could be responsible for, the EU has created a system which allows it to address the most dire issues while not overregulating the market. Although that makes it possible to tackle multiple serious issues already seen in today’s AI systems (like discrimination and misinformation), the EU’s AI Act is not without fault. The current state of the regulation leaves AI systems open to abuse in the name of national security and military purposes, and remains ignorant of the technology’s environmental impact. These blind spots, as well as some loopholes in the regulation, could have an important impact on the effectiveness of the Act and, in turn, the way AI will develop in the future.

The EU and AI. Courtesy of The CFA Institute (2022)

Copyright © 2023 Sparklight Media