Imagine you have recently been appointed to a Board and asked to Chair the Audit Committee. Boards are increasingly disciplined about recruitment, and your years of education and experience align with many conventional governance needs. But you soon learn that the systems managing your organization’s financials are leveraging AI, and that this AI was developed by an outside vendor that used a foreign dataset to train the models. Or you might discover that the organization’s reputation is at risk because of racially biased client-facing algorithms. While AI is a cutting-edge technology with potential benefits, it also presents new challenges for Boards at the frontiers of governance.
The Call for Trustworthy AI
AI presents both strategic opportunity and risk for firms, with exposure on legal, financial and reputational fronts. Whereas traditional software is static, deterministic and rules-specific, machine learning is dynamic, non-deterministic and learns from the data it is fed. The risks associated with machine learning (ML) relate to different aspects of the technology across its lifecycle, including data quality and integrity, data and algorithmic bias and model drift.
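For the technically inclined, "model drift" can be made concrete with a simple statistical check. The sketch below (in Python; the income figures and the 0.2 threshold are illustrative assumptions, not from this article) computes the population stability index, one common way to quantify how far the data a model sees in production has drifted from the data it was trained on.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution a model was trained on ('expected')
    with the distribution it now sees in production ('actual').
    A common rule of thumb: PSI > 0.2 signals meaningful drift."""
    # Bin both samples on the same cut points, derived from training data
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    # Clip production values into range so out-of-range data is still counted
    a_pct = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    # Avoid division by zero / log(0) in sparse bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Illustrative use: an income feature drifts upward after deployment
training_income = np.random.default_rng(0).normal(60_000, 15_000, 10_000)
live_income = np.random.default_rng(1).normal(72_000, 15_000, 10_000)
psi = population_stability_index(training_income, live_income)
print(f"PSI = {psi:.3f} -> {'drift: review model' if psi > 0.2 else 'stable'}")
```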
These risks underlie the widespread call for trustworthy AI: for transparency, fairness and accountability in the development and use of AI solutions. Despite the promise of AI, the risk of negative outcomes is real. Amazon’s secret AI recruitment tool was scrapped after being shown to be biased. Microsoft’s chatbot ‘Tay’ was shut down after it developed Nazi sympathies. The broadly recognized incidence of racial bias in facial recognition is another real-world example of costly consequences.
There are hard costs to AI done badly, including fines, litigation and settlement charges. The U.S. Securities and Exchange Commission (SEC) ordered BlueCrest Capital to pay US$170 million to settle investor charges relating to inadequate disclosure, material misstatement and misleading omission relating to algorithmic trading. Even more than the hard costs, soft costs like business distraction, loss of confidence and reputational damage are likely to undo organizations that do AI badly.
That’s why every board, in just about any business today, should be moving forward on AI Governance. National institutes have been talking about AI policy and governance for several years, and there are hundreds of principles-based frameworks offering guidance. What has been slower to develop is a practical, firm-level capacity for governing AI. Along with other frontier governance challenges like data and cybersecurity, AI governance needs a proper place in the boardroom.
Add AI Governance to the Risk Register: Do you even have an Inventory of your AI?
As Audit Chair with oversight responsibility for risk and corporate reporting, you are the natural champion for AI Governance, and ultimately it’s an internal controls challenge.
The purpose and function of internal controls pertaining to tangible assets are well understood, and this understanding needs to expand with the rise of intangibles, including data and algorithms, as elements of corporate valuation. Establishing and preserving trust in data and its various uses is one of the great challenges in this era of digital transformation. Emerging technologies are being deployed in advance of governance and control mechanisms, thereby placing organizations and their stakeholders at greater risk.
For anyone unfamiliar with internal controls: in financial terms, they are the mechanisms, rules and procedures implemented by a company to ensure the integrity of financial and accounting information, promote accountability and prevent fraud. In the world of AI, internal controls play the same role, representing the policies, processes and procedures that an organization develops to measure, monitor and assure the trustworthiness of its data and models. With AI, these controls might be called guardrails, incorporating a set of quantifiable measures and metrics on which the trustworthiness of models can be evaluated.
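To make this concrete, here is a minimal sketch, assuming hypothetical metric names and thresholds, of what a single AI guardrail might look like in code: a quantifiable measure, a tolerance and an accountable owner, mirroring the structure of a financial control.

```python
from dataclasses import dataclass

@dataclass
class Guardrail:
    """A single AI internal control: a measurable quantity, a tolerance,
    and who gets notified when the tolerance is breached."""
    metric: str        # e.g. "approval rate gap across groups"
    threshold: float   # maximum acceptable value
    owner: str         # accountable role, mirroring financial controls

    def evaluate(self, observed: float) -> str:
        if observed > self.threshold:
            return f"BREACH: escalate '{self.metric}' to {self.owner}"
        return f"OK: '{self.metric}' within tolerance"

# Illustrative use (names and numbers are hypothetical)
fairness = Guardrail("approval rate gap across groups", threshold=0.05,
                     owner="Audit Committee")
print(fairness.evaluate(observed=0.08))  # -> BREACH: escalate ...
```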
The same fundamental principles of internal controls apply whether the assets are financial or data and models, including:
- the need to establish guardrails,
- the need to establish accountability through specified roles and responsibilities,
- the maintenance of adequate records and
- the requirement to perform regular and independent reviews.
This systematic approach to controls serves to safeguard organizations and minimize risk. CPAs understand this given their experience with financial audit, assurance and internal controls, as well as the practice of non-financial attestation and engagement under the CSAE 3000 and 3001 standards. As with any other area of uncertainty, an organization’s risk register needs to capture AI risk. When the audit committee sits down to review strategic, technical, operational, financial, ethical, reputational or any other type of risk, AI needs to be on the table. Moreover, AI is a technology with a lifecycle, and it’s a virtual certainty that the risk from machine learning will evolve through concept, design, development, testing and deployment.
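As an illustration of what capturing AI risk in a register could look like, the following sketch ties each entry to a lifecycle stage so the register can evolve as the technology does; every system, risk and mitigation below is a hypothetical example, not drawn from this article.

```python
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    """One line of the risk register, tied to an AI system and a
    lifecycle stage (concept, design, development, testing, deployment)."""
    system: str          # which AI system the risk belongs to
    stage: str           # where in the lifecycle the risk was identified
    category: str        # strategic, operational, ethical, reputational...
    description: str
    mitigation: str
    review_cadence: str  # how often the audit committee revisits it

# Illustrative entries (all content hypothetical)
register = [
    AIRiskEntry("credit-scoring model", "deployment", "ethical",
                "possible racial bias in approvals",
                "quarterly fairness audit against guardrail metrics",
                "quarterly"),
    AIRiskEntry("vendor chatbot", "testing", "reputational",
                "model trained on unvetted foreign dataset",
                "require vendor data-provenance attestation",
                "before go-live, then annually"),
]
for r in register:
    print(f"[{r.stage}] {r.system}: {r.description} -> {r.mitigation}")
```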
Some organizations tend toward a compliance orientation in establishing internal controls, even focusing strictly on mandatory regulatory requirements. This is a mistake with artificial intelligence. The optimal way to govern AI effectively is to establish ethical guardrails before a project is initiated. It will be at best inefficient, and at worst catastrophic, to discover well into deployment that an AI system is operating outside ethical guardrails. And if an organization is running AI but hasn’t yet established guardrails, the blowback could cause major damage before the board even knows it has a problem.
While it doesn’t make sense to wait for legislation, the prospect of impending regulation should spur even the compliance-minded to get to work. Two decades ago with Sarbanes-Oxley, regulators were focused on developing new governance structures to protect investors. Today with rising concern over data privacy, digital security and artificial intelligence, a comparable cycle of regulatory angst is upon us. If the last regulatory cycle aimed to safeguard investors from fraud, then this one seeks to safeguard stakeholders from harm.
AI legislation has been proposed in the European Union and Canada that will impose massive penalties for breach of rules around AI development and use. The EU’s Artificial Intelligence Act focuses the greatest regulatory burden on high-risk applications, whether products covered by existing product safety legislation or standalone systems deemed high-risk by virtue of their intended purpose. Canada’s proposed Artificial Intelligence and Data Act is less prescriptive but mirrors the EU’s risk-based approach to regulating AI. The passage of these and other new regulatory instruments targeting data and AI is inevitable, but that doesn’t mean organizations should wait to implement robust approaches to AI Governance.
AI Governance is not ‘one size fits all’: Be specific for your Organization
AI Governance needs to be organization-specific. Several practical steps have proven successful in practice, including education, the development of an organization-specific framework and the establishment of AI Governance guardrails.
A possible action plan could include:
- A board-level education or awareness workshop (e.g. building shared understanding, assigning AI risk oversight to the audit committee and creating the opportunity to strategize)
- Conducting an AI Inventory (e.g. existing, planned and embedded projects)
- Development of an organization-specific framework based on stakeholder-driven questions for the establishment of trust
- Implementation of a platform for monitoring guardrails according to pre-determined metrics, with triggers for governance actions (a minimal sketch of such monitoring follows this list)
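A minimal sketch of the last two steps, an AI inventory feeding a guardrail monitor with governance triggers, might look like the following; every system name, metric and threshold here is an illustrative assumption.

```python
# An inventory records each AI system alongside its pre-determined
# guardrail thresholds; a monitor compares live metrics against them.
inventory = [
    {"system": "loan approval model", "origin": "external vendor",
     "status": "deployed", "guardrails": {"approval rate gap": 0.05}},
    {"system": "marketing recommender", "origin": "in-house",
     "status": "planned", "guardrails": {"privacy leakage score": 0.10}},
]

latest_metrics = {  # in practice, fed by the monitoring platform
    "loan approval model": {"approval rate gap": 0.08},
    "marketing recommender": {"privacy leakage score": 0.04},
}

def governance_actions(inventory, latest_metrics):
    """Yield a board-level action whenever a measured metric breaches
    the pre-determined threshold recorded in the inventory."""
    for item in inventory:
        observed = latest_metrics.get(item["system"], {})
        for metric, limit in item["guardrails"].items():
            value = observed.get(metric)
            if value is not None and value > limit:
                yield (f"{item['system']}: {metric} = {value:.2f} "
                       f"exceeds {limit:.2f} -> escalate to audit committee")

for action in governance_actions(inventory, latest_metrics):
    print(action)
```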
AI governance brings considerations at both the organization level and the project level. Effective oversight can best be achieved through the establishment of an AI Governance Platform for guardrails, metrics and a repository of tools for due diligence. AI is dynamic, so use cases must be monitored over time. The key is to translate important AI technical details into governance visuals and operational thresholds that trigger action.
It isn’t enough to just talk about ethical AI; we need to implement it, and that means outlining a process to define, measure, monitor and report on qualities like fairness, bias, explainability and privacy.
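As one concrete instance of ‘define, measure, monitor and report’, the sketch below computes a simple fairness measure, the demographic parity difference (the gap in favourable-outcome rates between two groups), and checks it against a pre-defined tolerance; the decisions, group labels and 5% tolerance are all illustrative assumptions.

```python
def demographic_parity_difference(outcomes, groups, group_a, group_b):
    """outcomes: 0/1 decisions; groups: group label per decision.
    Returns the absolute gap in favourable-outcome rates."""
    def rate(g):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate(group_a) - rate(group_b))

# Illustrative data: e.g. loan approvals for two demographic groups
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

TOLERANCE = 0.05  # defined up front, before the project is initiated
gap = demographic_parity_difference(outcomes, groups, "A", "B")
print(f"parity gap = {gap:.2f}: "
      f"{'report and remediate' if gap > TOLERANCE else 'within guardrail'}")
```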
Conclusion
Regulation is coming in Canada and around the world, but waiting for it is a mistake. Good governance is needed to promote innovation through trust and confidence. AI has the potential to make us, individually and collectively, healthier or sicker. AI governance done early and well provides a competitive boost, like a vitamin; done poorly or late, it acts like a costly but necessary painkiller. CPAs with expertise in internal controls are well positioned to play a role in the emerging ecosystem of AI assurance. It’s time for theoretical discussion to give way to a wave of pioneers implementing AI governance in practice.
ABOUT THE AUTHORS:
Mardi Witzel is a board director with 20 years’ experience in not-for-profit board governance and currently sits on the CPA Ontario Council. She is focused on AI and ESG and works with NuEnergy.ai providing AI governance expertise to private and public sector organizations.
Niraj Bhargava is the CEO and co-founder of NuEnergy.ai and an expert in AI Governance. He has over 30 years of experience in technology, business creation and leadership. He is currently the Chair of the Innovation Committee of the Board at the Royal Ottawa Mental Health Centre and has been a member of their Board of Trustees for the last five years.
This article first appeared in CPA Ontario digital magazine, UpNext.