• Managing the risks of AI (podcast excerpt)
Michael Kravshik: Welcome to LumiQ. I’m Michael Kravshik, and I recently had the pleasure of sitting down with Niraj Bhargava, the founder and CEO of NuEnergy.ai to talk all about AI governance. Here’s a clip from our discussion:
So why don’t we start with the actual problem of governance in AI. You had this great saying that I want to quote here: that “data is the new oil, but it’s also the new plutonium”. Can you tell us a little bit about what you mean by that?
Niraj Bhargava: Well, I think we all know that data is very valuable in today’s economy and society. It is really like energy for an organization: it has enormous value, like oil, even though it’s often intangible on the balance sheet. But data not only has value in itself; it can be mined to develop machine learning models, powerful algorithms, artificial intelligence, that can really help us meet and exceed business goals. And not only is it energy and valuable, it’s also like plutonium: it’s explosive. If we aren’t using it thoughtfully, it can actually be a ticking time bomb. So we really have to think about that energy, how we utilize it, and how we avoid those issues. I think we have all seen the headlines, and we’ve got to watch the headlines; the costs after a crisis are very, very high. So the opportunity really is to get the benefits of AI, with good governance, and avoid it becoming “plutonium”.
MK: Okay, well that’s what we’re hopefully going to help people navigate today. One of the first questions I want to ask about this is: when is too early to start thinking about this problem?
NB: I think the time is now. You know, I’ve talked to organizations where they say, “Oh, we’re not in AI yet, we’re not doing AI”. But AI is everywhere. We are all in AI, and it is moving very quickly, and if you miss the opportunity you’re going to have it affect you. So now is the time, and you want to get the guardrails right, and make sure you manage those risks.
MK: And another interesting point that you brought up when we spoke was that it can actually be a competitive advantage if you do this appropriately. Can you tell us what you mean by that?
NB: As one of my mentors said, “Is AI governance a vitamin pill, or a headache pill?”, and I think it’s both. You actually have an opportunity to use the governance of AI to your advantage because there’s tremendous value in that data and the AI, but you’ve got to use it effectively, and make sure you manage not only the performance, but also your reputation on how you’re using it.
MK: Great. So let’s talk a little bit about the different types of data risk, and link it back to the organizations you mentioned that say, “Well, we’re not in AI.” Let’s say, for organizations that aren’t Google or a tech company doing something in AI, can you link these risks back to their more traditional business?
NB: Let me give you some examples, Michael. I think it is all around us; it’s not just the Silicon Valley companies that are in AI now. Let me start with banks. We’re all customers of banks, and many of us are shareholders in banks, and as you know, banks are in the business of lending money. But they’re also in the business of managing risk. That is the business they’re in, and to optimize their profitability and performance, they have a ton of data: lots of data on us, on our transactions, on our business, on our activities, and they’re mining it. They’re looking at that data to minimize risk and maximize returns. So they’re optimizing, and doing it very well. But if you ask a C-level executive what’s important to a bank, they would say banks trade on trust. So, are there risks in using AI? There are. Do people know how their data is being used? Is it being used in a way they didn’t consent to? Are there privacy issues? Is there bias in that algorithm that decided not to give me a loan? There are concerns, and banks need to make sure they put those guardrails on.
There are other examples, like facial recognition. Today, the police, the RCMP, and border security may want to use the technology for safety and for managing criminal activity. But for the companies that are using and developing facial recognition, where is that data going? The legislation is not keeping up with it. So there’s good value in AI for organizations and for society, and people are using it now. But we certainly have to consider the governance aspects as well.
MK: So do you have any other salient examples that you want to bring up?
NB: Yeah, I think another one that we can all relate to is our car. Most of us pride ourselves on being good drivers, but what about machines? What about autonomous vehicles? The good news is that machines don’t drink and drive, so arguably we’ll be safer with autonomous vehicles. But what about those winters, and black ice? Would you trust the vehicle to make the right decisions? And autonomous vehicles are not a question for the future; we have self-steering cars now. Are we going to trust autonomous vehicles, and in what contexts? These are here-and-now experiences that we’re living as consumers, and our organizations are dealing with them right now. There’s an opportunity, again, to improve profitability, and to improve the value of your balance sheet and the intangibles of data as an asset. But making sure you put those guardrails on is also really important.