• Defining “Trust” in AI (podcast excerpt)
Michael Kravshik: I’m Michael Kravshik, and I recently had the pleasure of sitting down with Niraj Bhargava, the founder and CEO of NuEnergy.ai, to talk all about AI governance. Here’s a clip from our discussion:
I do want to home in on something that you’ve now said multiple times, which is the word “trust” — how do you define the concept of trust when it comes to AI?
Niraj Bhargava: Absolutely. So, it’s related to the term “governance”, which we also know quite well. You know, governance has been around a long time, and we’ve learned a few lessons on governance, whether from Sarbanes-Oxley or others. And the topic of governance applies not just to people, but to machines. Technology has come a long way over the last few decades, and we’re now at a point where the combination of processing power, big data, and neural networks and deep learning has allowed us to have technology that learns on its own. And we can empower technology to make decisions. So technology is not necessarily our tool anymore; we could be the tool of technology. So we need to think about how we are going to manage that. This is where governance applies; but at the heart of governance is asking good questions. As board members, when we think about oversight, we need to ask good questions — that’s our fiduciary responsibility. The same thing applies to AI. So we’ve got to start with: for an AI governance framework, how do we make sure we ask the right questions?
Now, just like human trust, the trust of AI is not one size fits all. As an example, you and I trust different people, at different times, for different reasons. Same with machines. So we need to figure out a framework that makes sense for an organization. And foundationally, there’s compliance: of course we need to meet legislative standards and respect privacy laws. But saying that we meet legislative requirements isn’t enough when we’re talking about trust. The questions we need to ask come from the perspectives of the stakeholders. What are the right questions, and which ones apply to our organization? And then, very importantly, how do we answer those questions? So when we talk about creating a governance framework, it’s taking the governance model that we know, applying it to AI, and asking the right questions — and identifying what those are for your organization.
MK: Great. That’ll be the next big thing we want to get into — defining some of those trust questions. Just before we go there, I do want to take a moment to talk about your company, NuEnergy, and how you’re trying to tackle this problem. Can you give us, at least at a high level, a sense of how you’re trying to go about this, and what the major, let’s call it, “missing pieces” are in the tools available to companies today?
NB: NuEnergy.ai prides itself on being a partner in the AI governance framework. We co-create with our clients a governance framework that makes sense for their organization, using standard methodologies that have been developed on what makes good sense for AI governance. But very importantly, when we talk about measures and questions, we follow up with the tools to actually quantify and measure the trustworthiness of AI. We offer a Machine Trust Platform that can integrate the governance questions that are specific to your organization. Then we can provide access to the right tools that have been qualified by our company — open-source qualified tools that can come from any organization.