
Effective Governance for Responsible AI Adoption

By Matt Liang and Zoltan Bucsko

Article published in Swiss IT Magazine 2024/12


Artificial Intelligence (AI) is rapidly transforming the enterprise landscape, where it has already become an essential resource. This is more than just a technological change; it is a strategic one.

Even as AI holds great promise, companies today are navigating increasingly complex regulatory terrain (e.g. the EU AI Act; Canada's AI and Data Act; the US Executive Order on Safe, Secure, and Trustworthy AI) that stresses the responsible and ethical use of AI. International Data Corporation (IDC) forecasts global AI spending to top $301 billion by 2026 and $632 billion by 2028, which highlights an immediate need for governance that can reconcile the promise of AI with regulatory obligations. Without proper governance guardrails in place, it becomes difficult to handle both the risks and the opportunities of AI. Governance matters not only for structuring the responsible use of AI (e.g. dedicated processes, a risk management plan) but also for scaling those initiatives quickly and across multiple contexts within the company.

Principles for Effective AI Adoption

Designing AI governance, however, does not necessarily require starting from scratch. Most enterprises already have governance regimes in place for technology solutions used in data management, data privacy and cyber security. Each of these established frameworks offers lessons learned that can be leveraged or built upon, laying the groundwork for a more effective AI governance process tailored to the organization's unique needs around AI.

Before the work even starts, it is essential to establish key principles for effective AI governance to provide a strong foundation. These principles include defining responsibilities among AI actors, ensuring transparency in decision-making, promoting fairness to reduce bias, maintaining security with a focus on data privacy, and enabling auditability for alignment with business objectives. Additionally, organizations should prioritize human well-being by designing AI systems that align with human needs and values, while adapting existing governance practices to fit AI solutions.


To build and maintain an adequate AI governance framework, it is essential to select the right people to manage it and to define the relevant use cases for AI applications. These should be in line with the overall business strategy as well as the guiding principles set out earlier. Such an approach helps organizations develop an adaptable and responsive AI governance framework that keeps pace with evolving policies, regulations and the demand for responsible, strategic AI implementation.

Selecting the Advisory Board

The foundation for AI governance begins with people who, in this case, form an advisory board that guides and promotes the adoption of this rapidly evolving technology to meet business needs. This team should be composed of senior management, lawyers and compliance officers, together with subject matter expert representatives, to ensure that the AI work is consistent with the organization's purpose. The board narrows down the problems the AI solutions are solving or should be solving, defines target use cases, and sets measurable success objectives. It maintains the quality and relevance of data sources, evaluates data storage, processing, privacy and security requirements, and assesses ethical concerns. Additionally, it performs a risk assessment, identifies all stakeholders (internal and external) affected by the AI use case, weighs potential benefits against costs, and devises measures to assess the success of the AI initiative. Furthermore, it is responsible for ensuring that users receive the appropriate level of education regarding the applied AI tools and solutions. These actions ensure that the proper levels of ethics, compliance and alignment with the company mandate are met when integrating AI into daily business practice.
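To make these responsibilities tangible, the following is a minimal sketch of how such an intake assessment could be captured in code. It is only an illustration: the class, field names, helper method and example values are assumptions for this article, not a prescribed schema or tool.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names are assumptions, not a standard schema.
@dataclass
class UseCaseIntake:
    name: str                         # target use case
    problem_statement: str            # problem the AI solution should solve
    success_objectives: list[str]     # measurable success objectives
    data_sources: list[str]           # data sources whose quality and relevance are reviewed
    privacy_security_needs: str       # storage, processing, privacy and security requirements
    stakeholders: list[str]           # internal and external parties affected
    key_risks: list[str]              # outcome of the risk assessment
    expected_benefit: float           # estimated benefit, in currency units
    expected_cost: float              # estimated cost, in currency units
    user_training_plan: str           # education on the applied AI tools and solutions

    def benefit_cost_ratio(self) -> float:
        """Weigh potential benefits against costs, as the board would."""
        return self.expected_benefit / self.expected_cost if self.expected_cost else float("inf")

# Hypothetical example record
intake = UseCaseIntake(
    name="Invoice triage assistant",
    problem_statement="Reduce manual routing of supplier invoices",
    success_objectives=["Cut average routing time by 50%"],
    data_sources=["ERP invoice history"],
    privacy_security_needs="No personal data beyond supplier contacts; EU-hosted storage",
    stakeholders=["Finance team", "Suppliers"],
    key_risks=["Misrouted invoices", "Vendor data leakage"],
    expected_benefit=120_000,
    expected_cost=45_000,
    user_training_plan="Short onboarding session for finance staff",
)
print(f"Benefit/cost ratio: {intake.benefit_cost_ratio():.1f}")
```

A structured record of this kind keeps the board's assessments comparable across use cases and auditable over time.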

Defining Use Cases

To identify strategic AI use cases, potential scenarios must be mapped to critical organizational goals and to areas where AI can deliver measurable improvement. The advisory board can elaborate on these to focus on high-impact, relevant problems that address specific challenges and support building an adaptable governance structure. Such use cases form the basis for embedding compliance and ethical standards into the governance framework, ensuring that the predefined principles remain respected over time. Furthermore, by evaluating the relevant risks and rewards, the board can select the use cases that can scale, making security, resilience and responsiveness part of the governance process. To ensure effective governance in AI initiatives, organizations should avoid limited perspectives by fostering board diversity, clearly define roles to enhance decision-making speed, and adhere to ethical norms to protect users and to safeguard the reputation of the organization.
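As a thought experiment, this prioritization can be reduced to a simple scoring routine. The criteria, weights and candidate use cases below are purely illustrative assumptions, not part of any standard or of the authors' methodology.

```python
# Illustrative prioritization sketch: criteria, weights and candidates are assumptions.
candidate_use_cases = [
    # (name, strategic impact, scalability, residual risk), each scored 1 (low) to 5 (high)
    ("Preventive maintenance alerts", 4, 4, 2),
    ("CV screening assistant",        3, 3, 5),
    ("Customer support summaries",    4, 5, 2),
]

def priority_score(impact: int, scalability: int, risk: int) -> float:
    """Favor high-impact, scalable use cases and penalize high residual risk."""
    return 0.5 * impact + 0.3 * scalability - 0.2 * risk

# Rank candidates so the board can focus on high-impact, scalable, lower-risk problems.
ranked = sorted(candidate_use_cases,
                key=lambda uc: priority_score(uc[1], uc[2], uc[3]),
                reverse=True)

for name, impact, scalability, risk in ranked:
    print(f"{name}: {priority_score(impact, scalability, risk):.2f}")
```

Even such a crude score forces the board to make its impact and risk assumptions explicit and comparable across candidates.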

Building an AI Governance Framework

Given the distinct requirements of each use case, you may not need to reinvent the wheel with regard to governance. The respective teams can learn from and apply good practices, voluntary guidelines and standards (e.g. NIST AI RMF, ISO 42001) and utilize existing policies and procedures. The governance framework, however, needs to be fluid and tailored to each individual circumstance. For example, the risks and data requirements of an AI system developed to enable preventive maintenance will differ from those of a system designed to support hiring.
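To illustrate how the control set can differ between these two examples, here is a minimal sketch of use-case-specific controls layered on a common baseline. The control names and the mapping are assumptions chosen for illustration; they are not taken from NIST AI RMF or ISO 42001.

```python
# Illustrative only: control names and mappings are assumptions, not a standard catalogue.
baseline_controls = [
    "model inventory entry",
    "named business owner",
    "incident reporting channel",
]

use_case_controls = {
    "preventive_maintenance": baseline_controls + [
        "sensor data quality checks",
        "fallback to scheduled maintenance on low model confidence",
    ],
    "hiring_support": baseline_controls + [
        "bias and fairness testing before each release",
        "human review of every recommendation",
        "candidate data minimization and retention limits",
    ],
}

def controls_for(use_case: str) -> list[str]:
    """Return a right-sized control set; unknown use cases fall back to the baseline."""
    return use_case_controls.get(use_case, baseline_controls)

print(controls_for("hiring_support"))
```

The baseline remains common to every use case, while higher-risk applications pick up additional, targeted controls.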


This use-case-specific controls approach simplifies processes and ensures that governance is aligned with business priorities and keeps pace with industry standards. Creating a right-sized governance model for the business is essential so that the organization has a framework in place to cater for the wide variety of AI use cases that may emerge; AI can be inhibited by overly strict, one-size-fits-all models. The objective ought to be a governance structure that can transform as AI evolves, ensuring that the components and controls in place are adapted appropriately across use cases. At the same time, it must follow the path paved by the principles, the organization's objectives, and the external and internal requirements for managing and protecting the respective data. An executable AI governance framework fosters responsible AI adoption by encouraging ethical use, enhancing transparency, and mitigating risks, ultimately leading to improved collaboration among stakeholders. Additionally, it supports compliance with regulations, protects brand reputation, and facilitates faster and more effective AI implementation.

Summary

To navigate the complexities of enterprise-wide AI integration, a sound AI governance model is indispensable. By focusing on well-established principles such as transparency, accountability, fairness, security and agility, and on the lessons learned from identifying and applying use cases, organizations can scale AI applications in a risk-managed and regulatory-compliant manner, which can lead to better and more ethical AI solutions. AI governance can be adaptive, allowing AI to flourish and to become a competitive edge for organizations over time.

The Authors

Matt Liang
ISACA CH board member
Protiviti Switzerland Head of Assurance
+41 79 887 58 74
Matt.Liang@protiviti.ch

Zoltan Bucsko
Protiviti Switzerland AI Governance Lead
+41 79 858 72 80
Zoltan.Bucsko@protiviti.ch


