Humans and AI Systems Increasingly Work Together
IBM hosted an exclusive online media roundtable on the topic of "Responsible Usage of AI" with IBM Fellow and AI scientist Aleksandra Saška Mojsilović. Her presentation focused on the vulnerabilities of AI, such as exposure to bias, lack of explainability, and susceptibility to adversarial attacks.
Mojsilović stressed that performance alone will not suffice as an AI design paradigm; ethical concerns must be part of the equation, too. IBM Research is therefore developing techniques and algorithms to assess and address the foundational elements of trust for AI systems: tools that discover and mitigate bias, expose vulnerabilities, defuse attacks, and unmask the decision-making process.
As AI advances, humans and AI systems increasingly work together, and it is essential that we can trust the output of these systems to inform our decisions. Experts at IBM Research identify the following pillars as the basis for trusted AI systems:
· Fairness: AI systems should use training data and models that are free of bias, to avoid unfair treatment of certain groups
· Robustness: AI systems should be safe and secure, not vulnerable to tampering with, or compromise of, the data they are trained on
· Explainability: AI systems should provide decisions or suggestions that can be understood by their users and developers
· Lineage: AI systems should include details of their development, deployment, and maintenance so they can be audited throughout their lifecycle
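To make the robustness pillar concrete, the sketch below (plain Python; the classifier, weights, and inputs are invented for illustration and are not IBM's tooling) shows the kind of adversarial vulnerability a trusted system must resist: a tiny, targeted change to the input flips a brittle linear classifier's decision.

```python
# Illustrative sketch: a linear classifier whose decision flips
# under a small adversarial perturbation of its input.

def predict(weights, bias, x):
    """Return 1 if the linear score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def adversarial_nudge(weights, x, epsilon):
    """Shift each feature by +/-epsilon in whichever direction
    raises the score most (a one-step, sign-based attack)."""
    return [xi + epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.9, -0.4]
bias = -0.05
x = [0.0, 0.1]                                  # legitimate input

print(predict(weights, bias, x))                # -> 0
x_adv = adversarial_nudge(weights, x, epsilon=0.1)
print(predict(weights, bias, x_adv))            # -> 1, decision flipped
```

A perturbation of only 0.1 per feature is enough to change the outcome here, which is why robustness checks probe a model with exactly this kind of worst-case input rather than relying on average-case accuracy.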
However, just like a physical structure, trust can’t be built on one pillar alone. If an AI system is fair but can’t resist attack, it won’t be trusted. If it’s secure but no one can understand its output, it won’t be trusted either. It is therefore imperative to strengthen all the pillars together, and to be able to measure and communicate how a system performs on each of these dimensions.
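Measuring a system along one of these dimensions can be straightforward. As a hedged illustration (plain Python with invented data, not IBM's actual tooling), here is how one basic fairness measurement, disparate impact, might be computed: the ratio of favorable-outcome rates between an unprivileged and a privileged group.

```python
# Sketch of a basic fairness metric: disparate impact, the ratio of
# favorable-outcome rates between two groups. A ratio of 1.0 means
# parity; a common rule of thumb flags ratios below 0.8.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of the two groups' favorable rates."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Invented example decisions for two groups:
privileged   = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% favorable
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% favorable

di = disparate_impact(unprivileged, privileged)
print(f"disparate impact = {di:.2f}")      # 0.375 / 0.75 = 0.50
```

Reporting a number like this for each pillar is what makes trust communicable rather than merely asserted.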
One way to accomplish this would be to provide such information via supplier’s declarations of conformity (SDoCs) or factsheets for AI services. In these, IBM experts suggest including information about system operation, training data, underlying algorithms, test set-up and results, performance benchmarks, fairness and robustness checks, intended uses, and maintenance and re-training.
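The factsheet idea can be pictured as a structured, auditable record. In the sketch below (plain Python; the field names mirror the suggested contents, but the class and all example values are hypothetical illustrations, not IBM's actual FactSheets format), that information travels with the service as one object:

```python
# Hypothetical factsheet record for an AI service, with fields
# mirroring the suggested contents. All example values are invented.
from dataclasses import dataclass, asdict

@dataclass
class AIServiceFactsheet:
    service_name: str
    intended_uses: list
    training_data: str            # provenance of the training data
    underlying_algorithms: str
    test_setup_and_results: str
    performance_benchmarks: dict  # metric name -> value
    fairness_checks: dict
    robustness_checks: dict
    maintenance_and_retraining: str

fs = AIServiceFactsheet(
    service_name="loan-approval-scorer",
    intended_uses=["consumer credit screening"],
    training_data="2015-2020 anonymized loan applications",
    underlying_algorithms="gradient-boosted decision trees",
    test_setup_and_results="held-out 20% split, AUC 0.91",
    performance_benchmarks={"AUC": 0.91},
    fairness_checks={"disparate_impact": 0.86},
    robustness_checks={"adversarial_accuracy": 0.78},
    maintenance_and_retraining="quarterly retraining on fresh data",
)

print(asdict(fs)["service_name"])   # the record serializes for auditing
```

Because the record is plain data, it can be versioned, diffed, and audited across the system's lifecycle, which is exactly the lineage property the pillars call for.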
Aleksandra Mojsilović currently serves as Head of AI Foundations at IBM Research and Co-Director of IBM Science for Social Good, and is an IBM Fellow and an IEEE Fellow. She is the author of over 100 publications and holds 16 patents. Among the most recent projects she has contributed to is an IBM Research AI system, unveiled in March 2021, that leverages AI-based technologies to speed up the creation of novel peptides to fight antibiotic drug resistance.
These AI efforts can also help in the discovery and creation of new materials to fight climate change, enable more intelligent energy production and storage, and much more. The team’s novel generative AI framework was also applied to three COVID-19 targets, generating 3,000 novel molecules.