Artificial Intelligence has played a central role in the world’s digital transformation, impacting a wide range of sectors, from industry to healthcare, from mobility to finance. This rapid development has made it both necessary and urgent to regulate technologies that, by their nature, carry significant and sensitive ethical implications. For these reasons, as part of its digital strategy, the EU has decided to regulate Artificial Intelligence (AI) to ensure better conditions for the development and use of this innovative technology.
It all started in April 2021, when the European Commission proposed the first-ever EU legal framework on AI. Parliament’s priority is to ensure that AI systems used in the EU are safe, transparent, accountable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation alone, to prevent harmful outcomes.
The fundamental principles of the regulation are empowerment and self-assessment. The law ensures that rights and freedoms remain at the heart of this technological development, striking a balance between innovation and protection.
How did we get here? Ambassadors from the 27 European Union countries voted unanimously on February 2, 2024, to approve the latest draft text of the AI Act, confirming the political agreement reached in December 2023 after tough negotiations. The world’s first comprehensive regulation of Artificial Intelligence is therefore approaching a final vote, scheduled for April 24, 2024. Now is the time to keep up with the latest developments, but in the meantime, let’s talk about what it means for us.
Risk Classification
The AI Act takes a clear, risk-based approach, built on four levels of risk:
- Minimal or no risk
- Limited risk
- High risk
- Unacceptable risk
The greater the risk, the greater the responsibility for those who develop or deploy Artificial Intelligence systems; applications considered too dangerous are not authorized at all. The only cases not covered by the AI Act are technologies used exclusively for military or research purposes.
Most AI systems pose minimal risk to citizens’ rights and safety, and are not subject to specific regulatory obligations. A logic of transparency and trust applies to the higher-risk categories, which can have very significant implications: consider, for example, AI systems used in health, education or recruitment.
For limited-risk cases such as chatbots, there are also specific transparency requirements to prevent user manipulation: the user must be made aware that they are interacting with an AI. The same applies to deepfakes, images in which a subject’s face and movements are realistically reproduced, creating false representations that are difficult to recognise.
Finally, any application that poses an unacceptable risk is prohibited outright, such as systems that manipulate an individual’s free will, social scoring, or emotion recognition in the workplace or at school. A narrow exception applies to Remote Biometric Identification (RBI) systems, which identify individuals at a distance in publicly accessible spaces by comparing their physical, physiological, or behavioral characteristics against a database.
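To make this tiered logic concrete, here is a minimal, purely illustrative Python sketch of how a team might triage example systems by risk tier; the example systems and the one-line obligations are simplified assumptions for illustration, not wording from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal or no risk"       # e.g. spam filters
    LIMITED = "limited risk"             # e.g. chatbots
    HIGH = "high risk"                   # e.g. recruitment tools
    UNACCEPTABLE = "unacceptable risk"   # e.g. social scoring

# Hypothetical triage table; real classification requires legal analysis.
EXAMPLE_SYSTEMS = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV-screening tool": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Return a simplified, assumed summary of duties per tier."""
    return {
        RiskTier.MINIMAL: "no specific obligations",
        RiskTier.LIMITED: "disclose the AI interaction to users",
        RiskTier.HIGH: "conformity assessment, human oversight, documentation",
        RiskTier.UNACCEPTABLE: "deployment prohibited",
    }[tier]

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} -> {obligations(tier)}")
```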
Violations can result in fines ranging from €7.5 million, or 1.5 percent of global annual revenue, to €35 million, or 7 percent of global annual revenue, depending on the size of the company and the severity of the violation.
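As a rough illustration of how these caps scale with company size, the sketch below computes the maximum exposure for a given tier, assuming the higher of the fixed amount and the revenue percentage applies; the tier labels are invented for the example.

```python
def max_fine(global_revenue_eur: float, severity: str) -> float:
    """Estimate the maximum fine under the tiered caps cited above.

    Illustrative only: assumes the greater of the fixed cap and the
    revenue share applies.
    """
    tiers = {
        "minor": (7_500_000, 0.015),
        "severe": (35_000_000, 0.07),
    }
    fixed_cap, revenue_share = tiers[severity]
    return max(fixed_cap, revenue_share * global_revenue_eur)

# Example: a company with EUR 2 billion in global revenue.
print(max_fine(2_000_000_000, "severe"))  # 140,000,000: the 7% cap dominates
```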
The AI Act as a growth driver
For businesses, the AI Act represents a significant shift, introducing different compliance requirements depending on the risk classification of AI systems, as outlined above. Companies developing or deploying AI-based solutions will have to adopt higher standards for high-risk systems and face challenges related to security, transparency, and ethics. At the same time, this legislation could act as a catalyst for innovation, pushing companies to explore new frontiers in AI while respecting principles of accountability and human rights protection.
The AI Act can thus be an opportunity to support a virtuous and more sustainable development of markets on both the supply and demand sides. It is no coincidence that we use the word ‘sustainable’: Artificial Intelligence can also be an extraordinary tool for accelerating the energy transition, through approaches that save not only time but also resources and energy, such as Green Software, which focuses on developing software with minimal carbon emissions.
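As a small, hypothetical example of the Green Software idea mentioned above, the sketch below defers a batch job to the hour with the lowest forecast grid carbon intensity. The forecast values are invented; in practice they would come from a grid operator or a carbon-intensity API.

```python
from datetime import datetime, timedelta

# Hypothetical 24-hour grid carbon-intensity forecast (gCO2/kWh),
# indexed by hours from now. Real values would come from an external source.
FORECAST = [320, 310, 290, 270, 260, 250, 240, 235, 250, 280,
            300, 330, 350, 360, 340, 320, 310, 305, 315, 330,
            345, 350, 340, 330]

def greenest_start(now: datetime) -> datetime:
    """Pick the start hour with the lowest forecast carbon intensity."""
    best_offset = min(range(len(FORECAST)), key=lambda h: FORECAST[h])
    return now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=best_offset)

print("Schedule batch job at:", greenest_start(datetime.now()))
```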
Artificial Intelligence, indeed, allows companies to adapt to market changes by transforming processes and products, which in turn can change market conditions, giving new impetus to technological advances.
The present and the future of AI in Italy
The obstacles that Italian small and medium-sized enterprises face today in implementing solutions that accelerate their digitalization are primarily economic: financial and cultural support measures are often inadequate, and there is a lack of real understanding of both the short-term impact of these technologies and the risks of being left behind by them.
In Italy, the lack of funding for companies risks delaying experiments with Artificial Intelligence that would be more profitable and sustainable in both the short and long term. As an inevitable consequence, the implementation of the necessary infrastructure and methods may also be delayed.
The AI Act itself will not be operational “overnight,” but will require a long phase of issuing delegated acts, identifying best practices and writing specific standards.
However, it is essential to invest now in cutting-edge technologies that enable efficient and at the same time privacy-friendly data exchange. This is the only way for Artificial Intelligence to truly contribute to the country’s growth.
Sources
News European Parliament – Shaping the digital transformation: EU strategy explained
News European Parliament – AI Act: a step closer to the first rules on Artificial Intelligence