75 Percent of Large Organizations Will Hire AI Behavior Forensic Experts

Users’ trust in AI and machine learning solutions is plummeting as incidents of privacy breaches and data misuse keep occurring. Despite rising regulatory scrutiny to combat these breaches, Gartner predicts that, by 2023, 75% of large organizations will hire AI behavior forensic, privacy and customer trust specialists to reduce brand and reputation risk.

Bias based on race, gender, age or location, and bias based on a specific structure of data, have been long-standing risks in training AI models. In addition, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions that can be difficult to interpret.
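
For illustration, one of the simplest checks in this space compares a model's positive-decision rate across demographic groups (demographic parity). The sketch below is a hypothetical example; the group labels, data and rates are invented for illustration, not drawn from Gartner's research.

```python
# A minimal sketch of a demographic parity check: does the model grant
# positive outcomes at similar rates across groups? All data here is
# simulated purely to illustrate the mechanics of the check.
import numpy as np

rng = np.random.default_rng(2)
group = rng.choice(["A", "B"], size=1_000)  # hypothetical protected attribute
# Simulated model decisions, deliberately skewed between groups.
preds = rng.random(1_000) < np.where(group == "A", 0.60, 0.45)

for g in ("A", "B"):
    rate = preds[group == g].mean()
    print(f"Group {g}: positive rate = {rate:.2%}")

# A large gap between group rates (here ~15 points by construction) is
# the kind of signal a forensic review would investigate further.
```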

“New tools and skills are needed to help organizations identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk,” said Jim Hare, research vice president at Gartner. “More and more data and analytics leaders and CDOs are hiring ML forensic and ethics investigators.”

Increasingly, sectors like finance and technology are deploying combinations of AI governance and risk management tools and techniques to manage reputation and security risks. In addition, organizations such as Facebook, Google, Bank of America, MassMutual and NASA are hiring or have already appointed AI behavior forensic specialists who primarily focus on uncovering undesired bias in AI models before they are deployed.

These specialists validate models during the development phase and continue to monitor them once they are released into production, as unexpected bias can be introduced by divergence between training and real-world data. “While the number of organizations hiring ML forensic and ethics investigators remains small today, that number will accelerate in the next five years,” added Hare.
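
As a rough illustration of that production-monitoring work, the sketch below compares the distribution of a single feature between training and live data with a two-sample Kolmogorov-Smirnov test. The feature name, the simulated data and the 0.05 significance threshold are all assumptions made for the example.

```python
# A minimal sketch of detecting training/production divergence (data
# drift) on one feature. The distributions are simulated so that the
# production data is visibly shifted from the training data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_income = rng.normal(loc=50_000, scale=10_000, size=5_000)  # training-time data
prod_income = rng.normal(loc=58_000, scale=12_000, size=1_000)   # shifted production data

stat, p_value = ks_2samp(train_income, prod_income)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Drift on 'income' (KS={stat:.3f}, p={p_value:.4f}); re-validate the model.")
else:
    print("No significant drift detected on 'income'.")
```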

On one hand, consulting service providers will launch new services to audit and certify that the ML models are explainable and meet specific standards before models are moved into production. On the other, open-source and commercial tools specifically designed to help ML investigators identify and reduce bias are emerging.

Some organizations have launched dedicated AI explainability tools to help their customers identify and fix bias in AI algorithms. Commercial AI and ML platform vendors are adding capabilities to automatically generate model explanations in natural language. There are also open-source technologies such as Local Interpretable Model-Agnostic Explanations (LIME) that can look for unintended discrimination before it gets baked into models.
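
By way of example, a minimal use of the open-source LIME package to inspect which features drive a single prediction might look like the following. The model, feature names and synthetic data are placeholders invented for the sketch, not a real deployment.

```python
# A minimal sketch of using LIME to explain one tabular prediction.
# Requires the 'lime' and 'scikit-learn' packages.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical tabular training data: four applicant features.
feature_names = ["income", "credit_history_len", "age", "zip_risk_score"]
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0).astype(int)  # synthetic labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["deny", "approve"],
    discretize_continuous=True,
)

# Explain one prediction: which features pushed it toward approve/deny?
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")

# If a proxy feature like zip_risk_score (a possible stand-in for
# location) carries large weight, that flags the model for closer review.
```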

Data and analytics leaders and CDOs are not immune to issues related to lack of governance and AI missteps. “They must make ethics and governance part of AI initiatives and build a culture of responsible use, trust and transparency. Promoting diversity in AI teams, data and algorithms, and promoting people skills is a great start,” said Hare.
