Responsible AI

By Joe Nathan and Ajay Bawa
September 5, 2023
Season 4, Episode 33

In this episode of the Business Excelleration® Podcast: as companies integrate artificial intelligence technology into their infrastructure, how can they do so responsibly? A discussion of what it means to focus on responsible AI, and what problems companies can avoid by taking a responsible approach to their AI efforts. With The Hackett Group Principal Joe Nathan and Technology Innovation Advisors Founder and CEO Ajay Bawa.

Welcome to The Hackett Group’s “Business Excelleration Podcast,” where week after week we hear from experts on how to avoid obstacles, manage detours and celebrate milestones on the journey to world-class performance. This episode is hosted by Joe Nathan, principal at The Hackett Group. Today’s episode continues our series all about artificial intelligence (AI). As the final episode in the series, it discusses the critical aspects customers look for in the implementation, adoption and operation of a fit-for-purpose platform. Joe is joined by Ajay Bawa, founder and CEO of Technology Innovation Advisors, to discuss responsible AI (RAI).

Although AI has been around since the 1950s, it has only recently captured the attention of the masses. That is because this is the first time a machine has been able to understand human language and engage in human-like communication. All types of AI require large amounts of data to compute and respond. This data has been collected over the years through normal business processes and interactions, and ordinary human biases and discrimination have found their way into it. This is where responsible AI comes into play. Other risks associated with AI include privacy concerns, negative environmental impact and copyright considerations. Listen as Ajay shares a real-world example of the unintentional harm AI can cause.

There have also been studies on the environmental impact of training these large-scale models. The GPT-3 model is said to have consumed an enormous amount of energy and generated tons of carbon dioxide. However, further research in the field is improving AI’s environmental efficiency.

Advancements in AI are definitely here to stay and have the potential to create enormous value. However, the concerns people have around further implementation of AI are valid. To address them, organizations need to establish guardrails around the use of AI. These guardrails are what responsible AI is all about, and they can look different depending on the organization’s core values and code of conduct. A basic map of these guardrails places your organization’s values at the very top, followed by the principles you intend to protect, then your ethical measures and responsible actions, and finally the risks or harms that could result if responsible action is not taken. It is recommended that organizations take a maturity model-based approach to operationalizing RAI: start small and grow from there. RAI must be continuously measured as AI capabilities evolve.

There is no one-size-fits-all solution for responsible AI. Each enterprise needs to define what RAI looks like for itself. Mature enterprises such as Google and Amazon can serve as examples for smaller organizations looking to define their own RAI outlines.

Before closing out, hear about today’s regulatory landscape around AI. Europe is currently developing the EU AI Act, which will likely be the world’s first comprehensive AI law. There are currently no federal AI regulations in place in the U.S.; however, there is the U.S. Blueprint for an AI Bill of Rights to refer to.

Time stamps:

  • 0:56 – Welcome to this episode hosted by Joe Nathan.
  • 1:39 – Why responsible AI is important.
  • 3:25 – Risks associated with AI.
  • 5:16 – Real-world examples of the potential risks of AI.
  • 8:00 – What is responsible AI?
  • 10:25 – A framework for the guidelines for responsible AI.
  • 12:45 – How enterprises can make sure they stay responsible throughout the lifetime of AI.
  • 15:26 – Common themes around RAI.
  • 16:20 – The current regulatory landscape around AI.