IBM’s Krishnan Talks Finding the Right Balance for AI Governance

Developing governance of AI can help make the technology more palatable to regulators and the public as it becomes increasingly ubiquitous.

Joao-Pierre S. Ruth, Senior Editor

December 12, 2022

4 Min Read
Priya Krishnan, IBM, at The AI Summit New York. (Photo: Joao-Pierre S. Ruth)

Increased regulatory oversight and the growing ubiquity of artificial intelligence have made the technology an escalating concern for industry and the masses. Questions about governance of AI took center stage last week at The AI Summit New York. During the conference, Priya Krishnan, director of product management with IBM Data and AI, addressed ways to make AI more compliant with new regulations in the keynote, “AI Governance, Break Open the Black Box.”

Informa -- InformationWeek’s parent company -- hosted the conference.

Krishnan spoke with InformationWeek separately from her presentation and discussed spotting early signs of potential bias in AI, which she said usually start with the data. For example, Krishnan said IBM often sees bias emerge after clients conduct quality analysis on the data they are using. “Immediately, it shows a bias,” she said. “With the data that they’ve collected, there’s no way that the model’s not going to be biased.”

The other place where bias can be detected is during the validation phase, Krishnan said, as models are developed. “If they have not looked at the data, they won’t know about it,” she said. “The validation phase is like a preproduction phase. You start to run with some subset of real data and then suddenly it flags something that you didn’t expect. It’s very counterintuitive.”
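What such a data-quality analysis might look like can be illustrated with a simple check that compares outcome rates across groups in the raw data, before any model exists. The sketch below is purely illustrative: the column names and the four-fifths-rule threshold are assumptions for the example, not IBM's tooling or method.

```python
# A minimal sketch of a data-level bias check, assuming a pandas DataFrame
# with hypothetical "group" and "hired" columns; the four-fifths rule used
# here is a common fairness heuristic, not IBM's method.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical historical hiring data, examined before any model is trained.
data = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "hired": [1] * 60 + [0] * 40 + [1] * 30 + [0] * 70,
})

ratio = disparate_impact(data, "group", "hired")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule flags ratios below 0.8 for review
    print("The collected data itself suggests any model trained on it will inherit bias.")
```

A check like this makes the point from Krishnan's example concrete: if group A's hiring rate is double group B's in the historical data, the bias surfaces immediately, with no model required.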

The regulatory aspect of AI governance is accelerating, Krishnan said, with momentum likely to continue. “In the last six months, New York created a hiring law,” she said, referring to an AI law set to take effect in January in New York City that would restrict the use of automated employment decision tools. Employers use such tools to make hiring and promotion decisions. The law would prohibit the use of those AI tools unless they have been put through a bias audit. Comparable action may be coming at the national level. Last May, for example, the Equal Employment Opportunity Commission and the Department of Justice issued guidance telling employers to check their AI-based hiring tools for biases that could violate the Americans with Disabilities Act.

During her keynote, Krishnan said there are four key trends in AI that IBM sees over and over as it works with clients. The first is operationalizing AI with confidence, moving from experiments to production. “Being able to do so with confidence is the first challenge and the first trend that we see,” she said.

The challenge comes essentially from not knowing how the sausage was made. One client, for instance, had built 700 models but had no idea how they were constructed or what stage each was in, Krishnan said. “They had no automated way to even see what was going on.” The models had been built with each engineer’s tool of choice, with no way to know further details. As a result, the client could not make decisions fast enough, Krishnan said, or move the models into production.

She said it is important to think about explainability and transparency across the entire life cycle, rather than focusing only on models already in production. Even before anything gets built, Krishnan suggested, organizations should ask whether the right data is being used. They should also ask whether they have the right kind of model and whether there is bias in the models. Further, she said, automation needs to scale as more data and models come in.

The second trend Krishnan cited was the increased responsible use of AI to manage risk and reputation, and to instill and maintain confidence in the organization. “As consumers, we want to be able to give our money and trust a company that has ethical AI practices,” she said. “Once the trust is lost, it’s really hard to get it back.”

The third trend was the rapid escalation of AI regulations being put into play, which can bring fines and damage an organization’s reputation if it is not in compliance.

With the fourth trend, Krishnan said the AI playing field has changed, with stakeholders extending beyond the data scientists within organizations. Almost everyone, she said, is involved with or has a stake in the performance of AI.

The expansive reach of AI, and of who can be affected by its use, has elevated the need for governance. “When you think about AI governance, it’s actually designed to help you get value from AI faster with guardrails around you,” Krishnan said. Clear rules and guidelines could make AI more palatable to policymakers and the public. Examples of good AI governance include life cycle governance to monitor and understand what is happening with models, she said. That means knowing what data was used, what kind of model experimentation was done, and maintaining automatic awareness of what is happening as a model moves through the life cycle. Still, AI governance will require human input to move forward.

“It’s not technology alone that’s going to carry you,” Krishnan said. “A good AI governance solution has the trifecta of people, process, and technology working together.”
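As a concrete illustration of the life cycle governance Krishnan described, the sketch below shows the kind of metadata such a record might capture: the training data's provenance, the experiments run, and an attributed audit trail for each stage transition. It is a minimal sketch under assumed names (ModelRecord, promote, the stage labels), not a depiction of IBM's governance products.

```python
# An illustrative life cycle governance record; every name here is
# hypothetical and not drawn from IBM's products.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    model_id: str
    training_data: str            # provenance: which dataset version was used
    experiments: list[str]        # what model experimentation was done
    stage: str = "development"    # development -> validation -> production
    history: list[str] = field(default_factory=list)

    def promote(self, new_stage: str, approved_by: str) -> None:
        """Advance the model through the life cycle, leaving an audit trail."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.history.append(f"{stamp}: {self.stage} -> {new_stage} by {approved_by}")
        self.stage = new_stage

record = ModelRecord(
    model_id="credit-risk-v3",
    training_data="loans_2021Q4.parquet",
    experiments=["baseline-logreg", "gbm-tuned"],
)
record.promote("validation", approved_by="model-risk-team")
print(record.stage)     # validation
print(record.history)   # one timestamped, attributed transition
```

The human approver on each transition reflects the trifecta Krishnan described: the technology records the trail, but people and process decide when a model moves forward.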


About the Author(s)

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.

