IBM Trusted AI toolkits for Python combat AI bias

Researchers at IBM are developing ways to reduce bias in machine learning models and to identify bias in the data sets used to train AI, with the goal of avoiding discrimination in the behavior of, and decisions made by, AI-driven applications.

As a result of this research, called the Trusted AI project, the company has released three open source, Python-based toolkits. The latest toolkit, AI Explainability 360, which was released earlier this month, includes algorithms that can be used to explain the decisions made by a machine learning model. 

The three IBM Trusted AI toolkits for Python — AI Fairness 360, AI Explainability 360, and the Adversarial Robustness Toolbox — are all accessible from GitHub.
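To give a flavor of what these toolkits measure, here is a minimal, self-contained sketch of disparate impact, one of the group-fairness metrics AI Fairness 360 provides. This is plain Python for illustration only, not the library's actual API; the loan-approval data and group labels are invented.

```python
def disparate_impact(outcomes, groups, favorable=1, privileged="priv"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity between groups; the common
    "80% rule" flags values below 0.8 as potentially discriminatory.
    """
    def rate(group):
        # Favorable-outcome rate within one group.
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in members if o == favorable) / len(members)

    unprivileged = next(g for g in set(groups) if g != privileged)
    return rate(unprivileged) / rate(privileged)


# Hypothetical data: 1 = loan approved, 0 = denied.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["priv", "priv", "priv", "priv",
          "unpriv", "unpriv", "unpriv", "unpriv"]

print(disparate_impact(outcomes, groups))  # 0.333..., well below 0.8
```

Here the privileged group's approval rate is 0.75 and the unprivileged group's is 0.25, so the ratio of 0.33 would fail the 80% rule — exactly the kind of signal the toolkit is designed to surface before a model is deployed.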



