Policy@Intel
A place to exchange ideas and perspectives, promoting a thriving innovation economy through public policy

Can the EU AI Act Withstand the Test of Time?


By Hendrik Bourgeois, Vice-President Government Affairs, Europe, and Mario Romao, Policy Director, Artificial Intelligence

With the European Parliament giving its final approval to the EU Artificial Intelligence (AI) Act this week, the world will soon see the first-ever comprehensive legislation aimed at regulating AI. The goal is to mitigate the negative impacts on health, safety, and fundamental rights that some AI systems may cause. But given the pace and unknown nature of future advances and domains in AI, a crucial question arises: how can the EU AI Act keep up with the ever-evolving landscape of AI capabilities and applications without sacrificing innovation?

Intel shares this goal of developing and deploying safe and trustworthy AI systems under a risk-based approach. As our CEO Pat Gelsinger says: “The focus of Intel is bringing AI everywhere, making it truly accessible for all.” We believe that AI will only be truly accessible to all when it is ethical and responsible. Partnering with the industry, we are delivering innovative ecosystem tools and solutions that make AI safer and more secure and that help address privacy concerns as AI scales exponentially. To mitigate the risks and maximize the benefits of AI, we apply robust security practices, principles, and capabilities that serve as the foundation for AI systems. Additionally, Intel has a well-established internal process to review AI development activities through the lens of six key areas: human rights, human oversight, explainable use of AI, security (including safety and reliability), privacy, and equity and inclusion. We are also developing new capabilities and techniques for secure AI, such as Intel Confidential Computing, Intel Trust Authority, and Intel Hardware Shield, as well as a real-time deepfake detector.

The EU AI Act has the potential to help companies, SMEs, and developers bring secure and trustworthy AI everywhere. However, there are several pitfalls to avoid as it is implemented. The impact of the EU AI Act will likely extend far beyond the European market. Issues such as enforcement, technical complexity, legal uncertainty, administrative burden, and international coordination must therefore be addressed with foresight. The bottom line is that fostering innovation and promoting technical entrepreneurship are important considerations when weighing the potential risks of AI systems against their benefits. A true risk-based approach holds the promise that industry players like Intel can continue developing without unduly restrictive red tape that would hamper innovation. Given the early stage of the regulation, it remains to be seen whether the EU has struck that balance.

A risk-based approach is the right approach 

The world of AI is exceptionally diverse and dynamic. There are countless AI systems already, with more being added by the day. And with no two systems the same, the potential risks from their use vary widely. That is why there is widespread support for a risk-based approach. It is sensible for regulation to target what matters most (e.g., protecting safety or fundamental rights) and to do so in a proportionate way while preserving Europe’s innovation in this field.

When AI became general 

Nothing embodies the rapid evolution of AI like the arrival of foundation models. The original 2021 draft of the EU AI Act did not foresee that a technology like ChatGPT could be launched for widespread public use less than two years later. The precariousness of trying to pre-emptively regulate such fast-evolving digital technologies could not be more evident. While the EU AI Act meticulously specifies high-risk uses of AI systems and the associated obligations, can it adequately adapt to emerging scenarios?

Out of the box, probably not. After all, the EU AI Act maps the understanding of the AI landscape at a single moment in time. To keep pace with technological change without stifling innovation, implementation must be guided by open, transparent, and constructive consultations with a broad range of stakeholders, from industry to academia to civil society. Nor can this be done in international isolation. Given the EU AI Act’s likely outsized impact beyond the EU’s borders, close collaboration with other nations is crucial to harnessing the immense potential of AI responsibly. After all, a thriving international market that brings responsible AI everywhere benefits consumers, businesses, and societies.

Fostering Responsible Innovation 

How do you regulate a rapidly evolving technology without compromising innovation? That is one of the biggest challenges EU lawmakers, regulators, and standards developers will face in their next mandate. The EU AI Act gives the European Commission the ability and the framework to shape the rules of AI development by reviewing and updating the law against current trends, allowing it to keep up with the latest innovations. In a white paper we published recently, we offer guiding principles for how we believe this can best be done to develop and deploy responsible AI. Our suggestions include:

1) national competent authorities should be equipped with adequate resources for implementation;
2) when revisiting and specifying rules, it will be critical to give due regard to international trends in AI governance, standards, and industry initiatives;
3) the EU should invest in digital readiness programs to help its citizens adopt key principles of safe and secure use of AI.  

These are the foundations of bringing low-risk, high-benefit AI everywhere. 

Read more about Intel’s commitment to responsible AI.