Implementing the EU AI Act as a value driver

Trustworthy AI

  • Survey
  • 5 minute read
  • 14 Jun 2024

Foreword:

Artificial intelligence (AI) is a key technology for digital transformation in private and public organisations. By 2030, we expect that AI will be a direct or indirect component in all processes and products along the entire European value chain. From smart washing machines to (partially) autonomous vehicles, from automated application processing to intelligent chatbots, and from optimised maintenance processes to production robots, the rapid spread of AI can be seen all around. In addition to the economic relevance of use cases, trust in the performance, security, reliability, and fairness of AI is an essential factor in deciding for or against the use of AI systems. Both dimensions are inextricably linked because, in practice, economic efficiency and trustworthiness are mutually dependent in the use of AI. Ultimately, the use of the technology requires certainty and trust on the part of users, customers, and decision-makers, which only sound governance based on best practices and generally accepted standards can provide. For this reason, the institutions of the European Union have developed and launched harmonised regulations for AI systems. This comprehensive “EU AI Act” is directly applicable in the member states and affects both private and public organisations – regardless of whether they are providers or deployers of an AI system.

The regulation aims both to promote European AI value creation through uniform standards and to protect EU citizens. Specifically, the EU AI Act calls for holistic AI governance that promotes the development and use of high-quality AI systems and makes the risks of AI systems manageable and transparent throughout the entire lifecycle. With the right approach and the corresponding interdisciplinary competencies in AI governance and law, organisations can avoid unnecessary effort and risk, shorten (market) introduction times for their AI systems, and take a pioneering role in digital transformation with AI.


Holistic AI governance for compliance and quality

Combining high-quality AI systems and AI compliance is the key to scaling and, thus, to the success of sustainable value creation with AI systems. Both aspects require recognised standards, best practices, and appropriate tools. The goal: fast, secure, and efficient development and operation of AI systems.

In the following, we want to outline an AI compliance management system that can be used step-by-step to build up organisation-specific AI governance that combines both aspects. To this end, we draw on concepts and principles from existing compliance management systems, which we blend with our experience and the requirements of the EU AI Act.


Conclusion and outlook

The EU AI Act will have a massive impact on the development, use and commercialisation of AI systems in the coming years. Organisations along the entire AI value chain must act now.

Early implementation of holistic AI governance and compliance management systems gives organisations not only a time advantage over the competition but also an immediate economic advantage through shortened time-to-market and the high quality of their high-risk AI systems. The requirements are complex and present companies with new kinds of challenges that will take time to overcome.

Linking these aspects requires interdisciplinary competencies in organisations along the entire AI value chain. This is how to create structures and processes for an AI governance that is technically, legally, and organisationally fit for the future.


To access the full report

Download PDF (2.72 MB)

Contact us

Oktay Aktolun

Risk Assurance Services Leader, PwC Türkiye

Tel: +90 212 326 6468

Mehmet Zeki Önal

Information Technologies Risk and Governance Services Leader, PwC Türkiye

Tel: +90 212 326 6773
