How to protect your organization and build competitive advantage with your AI

• Practical considerations for putting guardrails around your AI (podcast excerpt)

Michael Kravshik: I’m Michael Kravshik, and I recently had the pleasure of sitting down with Niraj Bhargava, the founder and CEO of NuEnergy.ai to talk all about AI governance.  Here’s a clip from our discussion:

This seems like a good moment for us to get into defining Trust Questions.  So, let’s start from the very beginning: a company that either hasn’t thought about this at all, or hasn’t thought about it enough, and is trying to establish a sensible approach to deal with some of these issues.  How do you go about building the set of Trust Questions that is most important for your particular organization?

Niraj Bhargava: To your point, Michael, “trust” is a big word.  We need to break it down into pieces.  So we look at the categories of trust that we’re talking about here, and, as I mentioned a moment ago, compliance.  Legislation, again, is foundational: we need to understand what legislation applies, what regions we’re in, and so on.  We definitely need to understand the compliance questions, and they’re not just legislation, but also policies and standards that we’ve adopted.  But over and above compliance, we break it down into a number of categories, and the Trust Categories may vary from a hospital, to national defense, to a bank, to an automotive company.  Think of topics like privacy, bias, explainability, ethics, and governance.  If you drill down and start looking at each of them, there may be different Trust Questions in each of these categories, and there may be categories that apply more to one organization than another.  So we break that big word “trust” down into foundational elements, then into various Trust Categories, and then we ask: okay, what are the specific Trust Questions?  Because we need to get into the details and get specific.

And then the question becomes: trust in the eyes of whom?  Is it good enough to ask what the Trust Questions are for the Ph.D. data scientists developing the algorithm?  Or is it what’s trustworthy to the CEO, or to the board?  Or is it trustworthy for the customers?  Or the shareholders?  Or society?  Or the employees?  We need to think about whose trust we are trying to support and who is involved in our reputation, and then identify the specific Trust Questions from their points of view.
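One way to picture the framework Niraj describes is as a simple hierarchy: foundational compliance items, then Trust Categories, then specific Trust Questions, each tied to the stakeholders whose trust it is meant to support.  The sketch below is purely illustrative; the category names, questions, and stakeholder labels are hypothetical examples, not NuEnergy.ai’s actual taxonomy.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical example only: these categories, questions, and stakeholders
# are invented for illustration, not NuEnergy.ai's actual framework.

@dataclass
class TrustQuestion:
    text: str
    stakeholders: List[str]  # whose trust this question is meant to support

@dataclass
class TrustCategory:
    name: str
    questions: List[TrustQuestion] = field(default_factory=list)

categories = [
    TrustCategory("Privacy", [
        TrustQuestion("Is personal data minimized and used with consent?",
                      ["customers", "regulators"]),
    ]),
    TrustCategory("Explainability", [
        TrustQuestion("Can we explain a false positive to an affected customer?",
                      ["customers", "board"]),
    ]),
]

def questions_for(stakeholder: str,
                  categories: List[TrustCategory]) -> List[Tuple[str, str]]:
    """Collect every Trust Question relevant to one stakeholder's point of view."""
    return [(c.name, q.text)
            for c in categories
            for q in c.questions
            if stakeholder in q.stakeholders]

print(questions_for("customers", categories))
```

The useful part is the last step: the same category can yield different questions depending on whose trust (the board’s, the customers’, society’s) you are trying to earn.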

MK: One thing you brought up was compliance and legislation, and I think we all know that legislation, especially in emerging areas, tends to lag behind.  So what do you recommend to a company that is approaching this: how should they understand the regulatory requirements?  How do you think through these different risk areas?

NB: Well, I think there are multiple kinds of risk, certainly financial risk and performance risk.  But we’re really talking about reputational risk here as well.  And legislation, as you pointed out, is lagging.  Many would argue that if you bring in too much legislation, you’ll stifle innovation, so there’s a debate about when legislation is and isn’t appropriate.  But when you think about your organization’s reputation, about what your customers value in your organization, and about the risk that they’ll leave based on certain actions you take, you need to consider those.  And, as you said earlier, there can be a competitive advantage on this topic of trust.  The flip side of risk is the opportunity to build trust, and to differentiate yourself by being trustworthy and transparent about what you’re doing.

Explainability is another example.  There is an understanding that you can have high-performance algorithms in black-box, deep-neural-net kinds of applications, and in some cases, that’s okay.  But in other cases, you need to be able to explain the algorithms.  For example, can you explain a false positive, or a false negative, in those kinds of situations?  So explainability is another example of putting on the guardrails and asking: what level of explainability do we need, so that we can protect our reputation and still apply these kinds of technologies?
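To make the false-positive example concrete, here is a minimal, hypothetical sketch of what “explainable” can mean for a simple linear model: each feature’s contribution to the score is just its weight times its value, so a flagged case can be explained feature by feature.  The feature names and weights below are invented for illustration; a black-box deep neural net offers no such direct readout, which is exactly the trade-off described above.

```python
import math

# Hypothetical linear fraud-scoring model: the features and weights are
# invented for illustration, not drawn from any real system.
weights = {"amount_zscore": 1.2, "new_merchant": 0.8, "night_hours": 0.5}
bias = -1.0

def score(tx):
    # Logistic regression: the score is a weighted sum passed through a
    # sigmoid, so each feature's contribution (weight * value) is legible.
    z = bias + sum(weights[f] * v for f, v in tx.items())
    return 1.0 / (1.0 + math.exp(-z))

# A legitimate transaction the model flags anyway: a false positive.
tx = {"amount_zscore": 2.0, "new_merchant": 1.0, "night_hours": 0.0}
print(f"fraud probability: {score(tx):.2f}")

# The "explanation": rank each feature by its contribution to the score.
for f, v in sorted(tx.items(), key=lambda fv: -weights[fv[0]] * fv[1]):
    print(f"{f}: contribution {weights[f] * v:+.2f}")
```

For black-box models, post-hoc explanation methods try to approximate this kind of per-feature readout; how much of it an organization actually needs is itself one of the Trust Questions discussed earlier.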