The global AI governance landscape is complex and rapidly evolving. Key themes and concerns are emerging, but government agencies should get ahead of the curve by evaluating their agency-specific priorities and processes. Compliance with official policies, through auditing tools and other measures, is merely the final step. The groundwork for effectively operationalizing governance is human-centered: it includes securing funded mandates, identifying accountable leaders, developing agency-wide AI literacy and centers of excellence, and incorporating insights from academia, non-profits and private industry.
The global governance landscape
As of this writing, the OECD Policy Observatory lists 668 national AI governance initiatives from 69 countries, territories and the EU. These include national strategies, agendas and plans; AI coordination or monitoring bodies; public consultations of stakeholders or experts; and initiatives for the use of AI in the public sector. Moreover, the OECD places legally enforceable AI regulations and standards in a separate category from the initiatives above, where it lists an additional 337 initiatives.

The term governance can be hard to define. In the context of AI, it can refer to the safety and ethics guardrails of AI tools and systems, to policies concerning data access and model usage, or to government-mandated regulation itself. National and international guidelines therefore address these overlapping and intersecting definitions in a variety of ways. For all these reasons, AI governance should begin at the concept stage and continue throughout the lifecycle of the AI solution.
Common challenges, common themes
Broadly, government agencies strive for governance that supports and balances societal concerns of economic prosperity, national security and political dynamics, as we’ve seen in the recent White House order to establish AI governance boards in U.S. federal agencies. Meanwhile, many private companies seem to prioritize economic prosperity, focusing on the efficiency and productivity that drive business success and shareholder value; some companies, such as IBM, emphasize integrating guardrails into AI workflows.

Non-governmental bodies, academics and other experts are also publishing guidance useful to public sector agencies. This year, the World Economic Forum’s AI Governance Alliance published the Presidio AI Framework (PDF). It “…provides a structured approach to the safe development, deployment and use of generative AI. In doing so, the framework highlights gaps and opportunities in addressing safety concerns, viewed from the perspective of four primary actors: AI model creators, AI model adapters, AI model users, and AI application users.”

Across industries and sectors, some common regulatory themes are emerging. For instance, it is increasingly advisable to provide transparency to end users about the presence and use of any AI they are interacting with. Leaders must ensure reliability of performance and resistance to attack, as well as an actionable commitment to social responsibility. This includes prioritizing fairness and lack of bias in training data and output, minimizing environmental impact, and increasing accountability through the designation of responsible individuals and organization-wide education.
Policies are not enough
Whether governance policies rely on soft law or formal enforcement, and no matter how comprehensively or eruditely they are written, they are only principles. How organizations put them into action is what counts. For example, New York City published its own AI Action Plan in October 2023 and formalized its AI principles in March 2024. Though these principles aligned with the themes above, including the statement that AI tools “should be tested before deployment,” the AI-powered chatbot that the city rolled out to answer questions about starting and operating a business gave answers that encouraged users to break the law. Where did the implementation break down? Operationalizing governance requires a human-centered, accountable, participatory approach. Let’s look at three key actions that agencies must take:
- Designate accountable leaders and fund their mandates
- Provide applied governance training
- Evaluate inventory beyond algorithmic impact assessments (see the sketch below)
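On the last point: an inventory entry is most useful when it captures more than a one-time impact assessment. As a minimal sketch, assuming a hypothetical record schema (the field names below are illustrative, not drawn from any official inventory standard), an agency might track the accountable leader, lifecycle stage, and transparency and testing status of each AI use case, and flag governance gaps automatically:

```python
# Requires Python 3.10+ (uses "X | None" syntax).
# Hypothetical inventory schema for illustration only; field names are
# assumptions, not an official standard.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class LifecycleStage(Enum):
    """Governance applies from concept through retirement, not only at deployment."""
    CONCEPT = "concept"
    DEVELOPMENT = "development"
    TESTING = "testing"
    DEPLOYED = "deployed"
    RETIRED = "retired"


@dataclass
class AIUseCaseRecord:
    """One entry in an agency-wide AI inventory."""
    system_name: str
    accountable_leader: str            # a named individual, not just an office
    lifecycle_stage: LifecycleStage
    intended_use: str
    user_facing: bool                  # does the public interact with it directly?
    transparency_notice: bool          # are end users told they are using AI?
    training_data_sources: list[str] = field(default_factory=list)
    last_bias_review: date | None = None
    last_security_test: date | None = None
    impact_assessment_done: bool = False


def needs_attention(record: AIUseCaseRecord) -> list[str]:
    """Flag gaps that a one-time impact assessment alone would not surface."""
    gaps = []
    if record.user_facing and not record.transparency_notice:
        gaps.append("no AI disclosure to end users")
    if (record.lifecycle_stage is LifecycleStage.DEPLOYED
            and record.last_security_test is None):
        gaps.append("deployed without a recorded security test")
    if record.last_bias_review is None:
        gaps.append("no bias review on record")
    return gaps
```

A record along these lines makes the themes discussed above, such as missing disclosure or untested deployments, queryable across the whole inventory rather than buried in individual assessments.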