Leaders Need to Take Responsibility for AI Practices


According to PwC, the estimated $15.7 trillion economic potential of AI will only be realised if responsible AI practices are integrated across organisations, and the firm states that these practices must be considered before any development takes place.

A piecemeal approach to AI's development and integration is exposing organisations to potential risks; combating it requires them to embed an end-to-end understanding, development and integration of responsible AI practices. PwC has identified five dimensions organisations need to focus on and tailor to their specific strategy, design, development and deployment of AI: Governance; Ethics and Regulation; Interpretability and Explainability; Robustness and Security; and Bias and Fairness.

The dimensions focus on embedding strategic planning and governance in AI's development, addressing growing public concern about fairness, trust and accountability. Earlier this year, 85% of CEOs said AI would significantly change the way they do business in the next five years, and 84% admitted that AI-based decisions need to be explainable in order to be trusted.

“The issues of ethics and responsibility in AI are clearly of concern to the majority of business leaders. The C-suite needs to actively drive and engage in the end-to-end integration of a responsible and ethically led strategy for the development of AI in order to balance the economic potential gains with the once-in-a-generation transformation it can make on business and society. One without the other represents fundamental reputational, operational and financial risks,” said Anand Rao, Global AI Leader, PwC US.

In May and June, around 250 respondents involved in the development and deployment of AI completed PwC's assessment. The results demonstrate immaturity and inconsistency in the understanding and application of responsible and ethical AI practices. Only 25% of respondents said they would prioritise considering the ethical implications of an AI solution before implementing it. Only one in five (20%) have clearly defined processes for identifying the risks associated with AI; over 60% rely on developers or informal processes, or have no documented procedures at all.

Where ethical AI frameworks or considerations existed, enforcement was not consistent: 56% said they would find it difficult to articulate the cause if their organisation's AI did something wrong. Over half of respondents have not formalised their approach to assessing AI for bias, citing a lack of knowledge and tools and a reliance on ad hoc evaluations. And 39% of respondents with AI applied at scale were only “somewhat” sure they knew how to stop their AI if it went wrong.