IBM unveils tools to make AI more fair
New service and open source tools to help detect bias in AI and improve transparency
IBM has introduced new software and services that will help businesses to better understand and utilize AI, including tools to uncover bias and discrepancies in AI decision making.
The new technologies are intended to give businesses greater transparency into AI, enabling them to more fully harness its power.
The software service automatically detects bias and explains how AI makes decisions as those decisions are being made. It runs on the IBM Cloud and helps organisations manage AI systems from a wide variety of industry players. IBM Services will also work with businesses to help them harness the new software service.
In addition, IBM Research will release into the open source community an AI bias detection and mitigation toolkit, bringing forward tools and education to encourage global collaboration around addressing bias in AI.
"IBM led the industry in establishing trust and transparency principles for the development of new AI technologies," said Beth Smith, General Manager of Watson AI at IBM. "It's time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making."
These developments come on the back of new research by IBM's Institute for Business Value, which reveals that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.
IBM's new trust and transparency capabilities on the IBM Cloud work with models built from a wide variety of machine learning frameworks and AI build environments such as Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML. This means organisations can take advantage of these new controls for most of the popular AI frameworks used by enterprises.
The software service can also be programmed to monitor the unique decision factors of any business workflow, enabling it to be customised to the specific organisational use.
The fully automated software service explains decision-making and detects bias in AI models at runtime - as decisions are being made - capturing potentially unfair outcomes as they occur. Importantly, it also automatically recommends data to add to the model to help mitigate any bias it has detected.
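The kind of runtime check described above can be illustrated with a minimal sketch. This is plain Python, not IBM's actual service API, and the loan-decision data is hypothetical; it computes disparate impact, a standard bias measure comparing favourable-outcome rates between a privileged and an unprivileged group.

```python
# Minimal illustration of a runtime bias check (not IBM's actual API):
# compare favourable-outcome rates between two groups as decisions accrue.

def favourable_rate(decisions):
    """Fraction of decisions with a favourable outcome (1)."""
    return sum(decisions) / len(decisions)

def disparate_impact(privileged, unprivileged):
    """Ratio of favourable rates; values well below 1.0 suggest bias
    against the unprivileged group (0.8 is a commonly used threshold)."""
    return favourable_rate(unprivileged) / favourable_rate(privileged)

# Hypothetical loan decisions (1 = approved, 0 = denied) for two groups.
privileged_decisions = [1, 1, 1, 0, 1, 1, 0, 1]
unprivileged_decisions = [1, 0, 0, 1, 0, 0, 1, 0]

ratio = disparate_impact(privileged_decisions, unprivileged_decisions)
if ratio < 0.8:
    print(f"Potential bias detected: disparate impact = {ratio:.2f}")
```

A monitoring service of the sort described would run a check like this continuously over incoming decisions and flag outcomes that cross the threshold as they occur.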
Explanations are provided in easy-to-understand terms, showing which factors weighted the decision in one direction versus another, the confidence in the recommendation, and the factors behind that confidence. In addition, records of the model's accuracy, performance and fairness, and the lineage of the AI systems, are easily traced and recalled for customer service, regulatory or compliance reasons - such as GDPR compliance.
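For a simple linear scoring model, the kind of per-factor explanation described above is easy to picture: each factor's contribution to the decision is just its weight times its value, reported alongside the direction it pushed the outcome. The sketch below is purely illustrative (the factor names, weights and applicant values are hypothetical, and this is not how IBM's service is implemented internally).

```python
# Illustrative explanation sketch for a linear scoring model (not IBM's
# service): each factor's contribution is weight * value, and contributions
# can be listed by size with the direction they pushed the decision.

weights = {"income": 0.6, "debt_ratio": -0.9, "years_employed": 0.3}
applicant = {"income": 0.8, "debt_ratio": 0.5, "years_employed": 0.4}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score > 0 else "deny"

print(f"Decision: {decision} (score {score:.2f})")
for factor, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    direction = "toward approval" if c > 0 else "toward denial"
    print(f"  {factor}: {c:+.2f} ({direction})")
```

Real explanation systems must handle non-linear models, where contributions are estimated rather than read off directly, but the output format - a ranked list of factors with directions - is the same idea.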
All of these capabilities are accessed through visual dashboards, giving business users an unparalleled ability to understand, explain and manage AI-led decisions, and reducing dependency on specialized AI skills.
IBM is also making available new consulting services to help companies design business processes and human-AI interfaces to further minimize the impact of bias in decision making.
In addition, IBM Research is making available to the open source community the AI Fairness 360 toolkit - a library of novel algorithms, code, and tutorials that will give academics, researchers, and data scientists tools and knowledge to integrate bias detection as they build and deploy machine learning models. While other open-source resources have focused solely on checking for bias in training data, AI Fairness 360 will help check for and mitigate bias in AI models themselves. It invites the global open source community to work together to advance the science and make it easier to address bias in AI.
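One classic pre-processing idea in this family of bias-mitigation algorithms is reweighing: assigning each training example a weight so that group membership and outcome become statistically independent in the reweighted data. The sketch below is a minimal plain-Python illustration of that idea, not the AI Fairness 360 toolkit's actual API, and the training data is hypothetical.

```python
from collections import Counter

# Minimal sketch of reweighing, a pre-processing bias-mitigation idea of the
# kind included in AI Fairness 360 (this is NOT the toolkit's API).
# Each example gets weight P(group) * P(label) / P(group, label), so that
# group and label are independent in the reweighted training set.

def reweigh(groups, labels):
    """Return one weight per training example."""
    n = len(groups)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical data: group "A" receives the favourable label (1) more often.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 0, 0, 1]
print(reweigh(groups, labels))
```

Under-represented (group, label) combinations receive weights above 1 and over-represented ones below 1, so a model trained on the weighted data sees a fairer picture of the outcomes.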