AI Act

Artificial Intelligence has played a central role in the world's digital transformation, impacting a wide range of sectors, from industry to healthcare, from mobility to finance. This rapid development has made it increasingly urgent to regulate technologies that, by their very nature, carry significant and sensitive ethical implications. For these reasons, as part of its digital strategy, the EU has decided to regulate Artificial Intelligence (AI) to ensure better conditions for the development and use of this innovative technology.

It all started in April 2021, when the European Commission proposed the first-ever EU legal framework on AI. Parliament's priority is to ensure that AI systems used in the EU are safe, transparent, accountable, non-discriminatory and environmentally friendly. AI systems should be overseen by humans rather than by automation, to prevent harmful outcomes.

The fundamental principles of the regulation are empowerment and self-evaluation. The law puts rights and freedoms at the heart of this technological development, striking a balance between innovation and protection.

How did we get here? Ambassadors from the 27 European Union countries voted unanimously on February 2, 2024, to approve the latest draft text of the AI Act, confirming the political agreement reached in December 2023 after tough negotiations. The world's first comprehensive regulation of Artificial Intelligence is therefore approaching a final vote, scheduled for April 24, 2024. Now is the time to keep up with the latest developments; in the meantime, let's look at what it means for us.

Risk Classification

The AI Act has a clear approach, based on four different levels of risk: 

  • Minimal or no risk 
  • Limited risk
  • High risk 
  • Unacceptable risk 

The greater the risk, the greater the responsibilities placed on those who develop or deploy Artificial Intelligence systems; applications considered too dangerous are simply not authorized. The only technologies excluded from the scope of the AI Act are those used for military or research purposes.

Most AI systems pose minimal risk to citizens' rights and safety, and are not subject to specific regulatory obligations. A logic of transparency and trust applies to the higher-risk categories, which can have particularly significant implications: consider, for example, AI systems used in healthcare, education or recruitment.

For systems such as chatbots, there are also specific transparency requirements designed to prevent user manipulation: the user must be made aware that they are interacting with an AI. The same applies to deepfakes, images or videos in which a person's face and movements are realistically reproduced, creating false representations that are difficult to recognise.
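As a purely illustrative sketch of what that requirement might look like in practice (the function names and the wording of the notice are our own assumptions, not the Act's text), a chatbot could disclose its nature at the start of every conversation:

    # Minimal sketch of the transparency idea: tell the user they are
    # talking to an AI system. All names here are hypothetical.

    AI_DISCLOSURE = "Please note: you are chatting with an AI assistant, not a human."

    def generate_answer(user_message: str) -> str:
        # Stand-in for a real model call.
        return f"You said: {user_message}"

    def reply(user_message: str, is_first_turn: bool = False) -> str:
        answer = generate_answer(user_message)
        # Disclose the AI nature of the interaction on the first turn.
        return f"{AI_DISCLOSURE}\n\n{answer}" if is_first_turn else answer

    print(reply("What are my rights under the AI Act?", is_first_turn=True))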

Finally, any application that poses an unacceptable risk is prohibited, such as systems that manipulate an individual's free will, social scoring, and emotion recognition in the workplace or at school. A narrow exception is made for the use by law enforcement of Remote Biometric Identification (RBI) systems, which identify individuals at a distance in publicly accessible spaces by comparing their physical, physiological or behavioural characteristics against a database.

Violations can result in fines ranging from €7.5 million, or 1.5 percent of global annual turnover, up to €35 million, or 7 percent of global annual turnover, depending on the size of the company and the severity of the violation.
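To make the arithmetic concrete, here is a minimal sketch of how the upper ceiling scales with company size; it assumes the "whichever is higher" rule that the final text applies to the most serious violations:

    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        # Ceiling for the most serious violations: EUR 35 million or 7% of
        # worldwide annual turnover, whichever is higher.
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # For a company with EUR 1 billion in annual turnover, the percentage prevails:
    print(max_fine_eur(1_000_000_000))  # 70000000.0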

The AI Act as a growth driver

For businesses, the AI Act represents a significant shift, introducing different compliance requirements depending on the risk classification of AI systems, as outlined above. Companies developing or deploying AI-based solutions will have to adopt higher standards for high-risk systems and face challenges related to security, transparency, and ethics. At the same time, this legislation could act as a catalyst for innovation, pushing companies to explore new frontiers in AI while respecting principles of accountability and human rights protection.

The AI Act can thus be an opportunity to support a virtuous and more sustainable development of markets, on both the supply and demand side. It is no coincidence that we use the word 'sustainable': Artificial Intelligence can also be an extraordinary tool for accelerating the energy transition, with approaches that save not only time but also resources and energy, such as Green Software, which focuses on developing software with minimal carbon emissions.

Artificial Intelligence, indeed, allows companies to adapt to market changes by transforming processes and products, which in turn can change market conditions and give new impetus to technological advances.

The present and the future of AI in Italy

The obstacles that Italian small and medium-sized enterprises face today in implementing solutions that accelerate their digitalization processes are primarily economic: financial and cultural support measures are often inadequate, and there is a lack of real understanding of the short-term impact of these technologies, as well as of the risks of being excluded from them.

In Italy, the lack of funding for companies risks delaying more profitable and sustainable experiments with Artificial Intelligence in the short and long term. As an inevitable consequence, the implementation of the necessary infrastructure and methods may also be delayed.

The AI Act itself will not be operational "overnight," but will require a long phase of issuing delegated acts, identifying best practices and writing specific standards.

However, it is essential to invest now in cutting-edge technologies that enable efficient and at the same time privacy-friendly data exchange. This is the only way for Artificial Intelligence to truly contribute to the country's growth.


Sources

  • European Parliament News - Shaping the digital transformation: EU strategy explained
  • European Parliament News - AI Act: a step closer to the first rules on Artificial Intelligence

AI Ethics

Artificial intelligence (AI) is the tech buzzword of the moment, and for good reason. Its transformative potential is already revolutionizing industries and shaping our future.

Its role in both everyday life and the world of work is now undeniable.  Machine Learning, combined with IoT and industrial automation technologies, has emerged as a disruptive force in the evolution of production processes across all economic and manufacturing sectors. 

The fact that these solutions can be widely deployed is intensifying the debate about the control and regulation of Artificial Intelligence. The ability to identify, understand and, where necessary, limit the applications of AI is clearly linked to the ability to derive sustainable value from it.

First and foremost, the focus must be ethical, to ensure that society is protected based on the principles of transparency and accountability. But it is also a business imperative. Indeed, it is difficult to incorporate AI into decision-making processes without an understanding of how the algorithm works, certainty about the quality of the data, and the absence of biases or preconceptions that can undermine conclusions.

The AI Act: from principles to practice

The European Union's commitment to ethical AI has taken a significant step forward with the preliminary adoption of the AI Act, the first regulation on AI, designed to create more favourable conditions for the development and application of this innovative technology. This legislation aims to ensure that AI deployed within EU countries is not only safe and reliable but also respects fundamental rights, upholds democratic principles, and promotes environmental sustainability.

It is also important to note that the AI Act is not about protecting the world from imaginary conspiracy theories or far-fetched scenarios in which new technologies take over human intelligence and lives. Instead, it focuses on practical risk assessments and regulations that address threats to individual well-being and fundamental rights.

In other words, while it may sound like the EU has launched a battle against robots taking over the world, the real purpose of the AI Act is more prosaic: to protect the health, safety and fundamental rights of people interacting with AI. Indeed, the new regulatory framework aims to maintain a balanced and fair environment for the deployment and use of AI technologies.

More specifically, the framework of the AI Act revolves around the potential risks posed by AI systems, which are categorised into four different classes:

  • Unacceptable risk: Systems deemed to pose an unacceptable threat to citizens' rights, such as biometric categorization based on sensitive characteristics or behavioural manipulation, are strictly prohibited within the EU.
  • High risk: Systems operating in sensitive areas such as justice, migration management, education, and employment are subject to increased scrutiny and regulation. For these systems, the European Parliament requires a thorough impact assessment to identify and mitigate any potential risks, ensuring that fundamental rights are safeguarded.
  • Limited risk: Systems like chatbots or generative AI models, while not posing significant risks, must adhere to minimum transparency requirements, ensuring users are informed about their nature and the training data used to develop them.
  • Minimal or no risk: AI applications such as spam filters or video games are considered to pose minimal or no risk and are therefore not subject to specific restrictions under the AI Act.

Understanding the above categorization is fundamental for organizations to accurately assess their AI systems. It ensures alignment with the Act's provisions, allowing companies to adopt the necessary measures and compliance strategies that are relevant to the specific risk category of their AI systems.
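As a purely illustrative sketch of such an assessment (the category assignments below echo the examples above, but they are simplifications, not legal advice), an organization might start by triaging its AI use cases against the four classes:

    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "prohibited within the EU"
        HIGH = "impact assessment and increased scrutiny"
        LIMITED = "minimum transparency requirements"
        MINIMAL = "no specific restrictions"

    # Hypothetical mapping of example use cases to risk classes,
    # loosely following the examples in the text above.
    USE_CASES = {
        "social scoring of citizens": RiskCategory.UNACCEPTABLE,
        "CV screening for recruitment": RiskCategory.HIGH,
        "customer-support chatbot": RiskCategory.LIMITED,
        "email spam filter": RiskCategory.MINIMAL,
    }

    for use_case, category in USE_CASES.items():
        # Unmapped systems would deserve a conservative, case-by-case review.
        print(f"{use_case}: {category.name} -> {category.value}")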

In short, the EU AI Act represents a pivotal moment in the EU's commitment to the responsible development and deployment of AI, setting a precedent for other jurisdictions around the world. By prioritizing ethical principles and establishing regulatory guidelines, it paves the way for a future where AI improves our lives without compromising our fundamental rights and values.

Infrastructure AI and Expertise

Our relationship with technology and work is about to change. As AI becomes the driving force of a new industrial revolution, it is imperative that we better equip ourselves to remain competitive and meet new business needs.

As we have seen, in the age of Artificial Intelligence (AI), where Large Language Models (LLMs) and Generative AI hold enormous potential, the need for robust control measures has become a critical component. While these cutting-edge technologies offer groundbreaking capabilities, their inherent complexity requires careful oversight to ensure responsible and ethical deployment. This is where Infrastructure AI systems come in, providing the necessary tools to manage AI models with precision and transparency.

Infrastructure AI systems such as Radicalbit's platform are meticulously designed to address the critical challenge of AI governance: a solution that not only simplifies the development and management of AI, but also enables organizations to gain deep insight into the inner workings of these complex models. The ability to simplify complex tasks and provide granular oversight makes such a platform an indispensable asset for companies seeking to harness the power of AI ethically and effectively.

In this scenario, Europe faces a new challenge: the need for qualified expertise. Indeed, to unlock the full potential of AI, we need a mix of skills that includes domain and process expertise as well as technological prowess.

Infrastructure AI, the foundation upon which AI models are built and deployed, requires a diverse set of skills. It's not just about coding and algorithms: it encompasses an ecosystem of platforms and technologies that enable AI to work seamlessly and effectively.

Newcomers to the AI field will therefore need a mix of technical skills and complementary expertise to be successful. Domain expertise is critical for identifying challenges, while process skills facilitate the development of AI solutions that are not only effective, but also sustainable and ethical. The ability to think critically, solve problems creatively, and prioritize goals over rigid instructions will be essential.

How to support responsible AI development

The new regulatory framework promoted by the EU highlights the need for companies to regularly monitor the production and use of AI models, as well as their reliability over time. To prepare for the new transparency and oversight requirements, it is essential to focus on two areas.

First, the issue of skills, particularly in the context of the Italian manufacturing industry. It's well known that the asymmetry between the supply of and demand for specialized jobs in technology and Artificial Intelligence, to name but two fields, slows down growth and damages the economy.

In this transformative era, Italy is uniquely positioned with its strong academic background and technological expertise. By combining Italian skills and technologies, we can harness the transformative power of Infrastructure AI to promote responsible AI development and to establish ethical AI governance practices. Our history of innovation and technological advancement, together with the capabilities of Infrastructure AI, could provide a competitive advantage in this field.

The second point is technology. Tools that support the work of data teams (Data Scientists, ML Engineers) in creating AI-based solutions give companies a real competitive advantage. This is where the term MLOps comes in, referring to the methods, practices and tools that simplify and automate the machine learning lifecycle, from training and building models to monitoring them and observing data integrity.
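As a minimal example of the monitoring end of that lifecycle, the sketch below flags distribution drift in a production feature using a two-sample Kolmogorov-Smirnov test; the threshold and toy data are illustrative assumptions, not a prescription from any specific platform:

    import numpy as np
    from scipy.stats import ks_2samp

    def detect_drift(train_values, live_values, alpha: float = 0.05) -> bool:
        # Compare the training baseline with live data; a small p-value
        # suggests the two distributions differ, i.e. the feature has drifted.
        _, p_value = ks_2samp(train_values, live_values)
        return p_value < alpha

    # Toy usage: a shifted production distribution should trigger the alarm.
    rng = np.random.default_rng(42)
    train = rng.normal(loc=0.0, scale=1.0, size=5_000)
    live = rng.normal(loc=0.5, scale=1.0, size=5_000)  # drifted input
    print(detect_drift(train, live))  # True: drift detected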

Conclusions

The adoption of the EU AI Act marks the beginning of a transformative era in AI governance. It’s a first, fundamental step towards ensuring that AI systems play by the rules, follow ethical guidelines and prioritize the well-being of users and society. 

It should now be clear that the regulation can serve as a proactive measure to prevent AI from becoming wild and uncontrolled, fostering instead an ecosystem where AI operates responsibly, ethically and in the best interests of all stakeholders.

On this exciting path towards responsible and sustainable AI, both cutting-edge technology and a skilled workforce are essential to unlock the full potential of AI while safeguarding our values and the well-being of society. The seamless integration of expertise and technology is therefore the real formula for success.

This is the approach we take at Bitrock and in the Fortitude Group in general, where a highly specialized consulting offering is combined with a proprietary MLOps platform, 100% made in Italy. And it is the approach that allows us to address the challenges of visibility and control of Artificial Intelligence. In other words, the ability to fully, ethically and consciously exploit the opportunities of this disruptive technology.
