Researchers at IBM are developing ways to reduce bias in machine learning models and to identify bias in data sets that train AI, with the goal of avoiding discrimination in the behavior and decisions made by AI-driven applications.
This research, conducted under the company's Trusted AI project, has produced three open source, Python-based toolkits. The latest, AI Explainability 360, released earlier this month, includes algorithms that can be used to explain the decisions made by a machine learning model.
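To make the idea of "explaining a model's decision" concrete, here is a minimal sketch of feature attribution for a linear model, written in plain Python. This is not the AI Explainability 360 API (its algorithms are more sophisticated); the model, weights, and feature names below are hypothetical, chosen only to show the kind of per-feature explanation such toolkits produce.

```python
# Hypothetical sketch, NOT the AI Explainability 360 API: the simplest
# explanation of a decision is a per-feature attribution score showing
# how much each input pushed the model's output up or down.

def explain_linear(weights, bias, features):
    """Return a linear model's score and each feature's contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Toy loan-approval model with made-up weights and inputs.
weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.3}
score, contribs = explain_linear(
    weights, bias=-0.5,
    features={"income": 2.0, "debt": 1.0, "years_employed": 4.0})

# Ranking contributions by magnitude shows which feature drove the decision.
top_feature = max(contribs, key=lambda k: abs(contribs[k]))
```

In this toy example the score is the sum of the contributions plus the bias, so each applicant can be shown exactly which inputs helped or hurt their outcome, which is the transparency goal the toolkit targets.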
All three IBM Trusted AI toolkits for Python are available on GitHub. Here is an overview of each package: