Companies must be accountable for deploying AI responsibly
This opinion article was first published by the National Post
Does generative artificial intelligence (AI) pose a threat to society and humanity? In the wake of ChatGPT’s stunning release, many have been asking this question. On March 22, a group of prominent tech leaders and researchers, led by the Future of Life Institute (FLI), called for a temporary pause in the development of all AI systems more powerful than GPT-4.
The open letter — signed by billionaire tech innovators Elon Musk and Steve Wozniak, among thousands of others — cites an absence of careful planning and management. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” the letter states.
But this argument misses a critical point: the genie is already out of the bottle. ChatGPT is estimated to have reached 100 million monthly active users as of January 2023. Its website already generates one billion visits per month. Beyond its record-breaking status as the fastest-growing consumer application in history, OpenAI’s ChatGPT has transformed the AI landscape. This can’t be undone.
That compels us to look for ways to prevent harm beyond asking for a pause or for government action. Rather than focusing exclusively on the admittedly important roles of AI regulators, labs and developers, much greater emphasis could, and should, be placed on the urgent need for organizations deploying the technology to get serious about governance, and for its users to be more accountable.
In contrast with narrow-use AI — such as the machine learning algorithms banks use to detect fraud — generative AI is a general-purpose technology, with a wide range of possible uses, including some that cannot be foreseen by its developers.
The FLI’s call for a development pause, which now has more than 26,000 signatures, understandably reflects a broad public concern about the magnitude of generative AI’s potential impacts. Interestingly, however, the furor around ChatGPT has not sparked a conversation about the role that should be played by the many organizations that are already using AI. This is a mistake.
In the new world of general-purpose AI, including generative AI, a key responsibility for governance should surely fall on the companies proposing to deploy AI systems for various tasks. Those organizations should evaluate whether the benefits justify the risks and potential negative impacts, in the context of specific use cases. And they should be accountable, whether to boards or shareholders, for those decisions.
Some experts agree that the principal burden of responsibility should fall on the organizations that deploy, rather than those that develop, generative AI. It’s not a loud chorus, but that can — and should — change.
Given the general-purpose nature of generative AI, it is not a stretch to see the impracticality of expecting developers to anticipate and mitigate every risk. In the European Union, for example, the proposed Artificial Intelligence Act calls for a comprehensive approach to risk management for high-risk systems. But how could a developer of generative AI ever anticipate the many high-risk applications in which its software might eventually be used?
A 2022 study found that AI adoption had more than doubled between 2017 and 2022, with 50 per cent of companies surveyed using the technology in at least one business area. How many of them have implemented effective governance? Good governance involves the practice of asking good questions, getting answers and making judgments. It does not rely on passing the buck to AI developers. It is neither easy nor simple. But it is necessary.
The time is now for organizations deploying AI to implement robust governance, starting with education and the establishment of guardrails, including measures on trust factors such as explainability, fairness, privacy and ethics. An assessment of AI’s trustworthiness needs to be accompanied by an accountability scheme, outlining the roles and responsibilities of deploying organizations. It is this composite of principles, processes, measures and management that can bring a functional, effective methodology for trustworthy AI to life.
The imperatives of governing AI hold for all of its forms. But the special challenges of generative AI complicate the process, owing to the opaque nature of the models and the complexity of a value chain with multiple players.
Over time, new governance approaches will emerge, including regulatory instruments, legislation, software tools and crowdsourcing techniques. What matters now is an organization’s ability to impose transparent standards on its own use of the technology. Organizations that use AI, in any form, should see to this without delay.
ABOUT THE AUTHORS:
Mardi Witzel is a board director with 20 years’ experience in not-for-profit board governance and currently sits on the CPA Ontario Council. She is focused on AI and ESG, and works with NuEnergy.ai providing AI governance expertise to private and public sector organizations.
Niraj Bhargava is the CEO and co-founder of NuEnergy.ai and an expert in AI governance. He has over 30 years of experience in technology, business creation and leadership. He is currently the Chair of the Innovation Committee of the Board at the Royal Ottawa Mental Health Centre and has been a member of its Board of Trustees for the last five years.