IBM Trusted AI toolkits for Python combat AI bias

Researchers at IBM are developing ways to reduce bias in machine learning models and to identify bias in the data sets used to train AI, with the goal of preventing discriminatory behavior and decisions in AI-driven applications.
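One common data-set-level bias check that fairness toolkits of this kind automate is the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group. The sketch below is a minimal plain-Python illustration of the concept, not IBM's own API; the records and group labels are hypothetical.

```python
# Minimal sketch of a disparate impact check on a labeled data set.
# Illustrates the kind of metric bias-detection toolkits compute;
# the data and field names here are hypothetical.

def disparate_impact(records, group_key, favorable_key):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    Values well below 1.0 suggest the unprivileged group is disadvantaged."""
    rates = {}
    for group in ("privileged", "unprivileged"):
        members = [r for r in records if r[group_key] == group]
        favorable = sum(1 for r in members if r[favorable_key])
        rates[group] = favorable / len(members)
    return rates["unprivileged"] / rates["privileged"]

data = [
    {"group": "privileged", "hired": True},
    {"group": "privileged", "hired": True},
    {"group": "privileged", "hired": False},
    {"group": "privileged", "hired": True},
    {"group": "unprivileged", "hired": True},
    {"group": "unprivileged", "hired": False},
    {"group": "unprivileged", "hired": False},
    {"group": "unprivileged", "hired": False},
]

ratio = disparate_impact(data, "group", "hired")
```

Here the privileged group is hired at a 75% rate and the unprivileged group at 25%, giving a ratio of about 0.33; a commonly cited rule of thumb flags ratios below 0.8 as potentially discriminatory.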

This research effort, called the Trusted AI project, has so far produced three open source, Python-based toolkits. The latest, AI Explainability 360, released earlier this month, includes algorithms that can be used to explain the decisions made by a machine learning model.
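The algorithms in AI Explainability 360 are considerably more principled, but the basic idea behind many explanation techniques can be sketched simply: perturb each input feature of a single prediction and measure how much the model's output changes. The model and feature names below are hypothetical, purely for illustration.

```python
# Naive perturbation-based explanation of a single prediction.
# This only illustrates the general idea behind model explanation;
# it is not an algorithm from AI Explainability 360.

def model(features):
    # Hypothetical scoring model: a fixed linear rule.
    weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def explain(features):
    """Score each feature by how much zeroing it out changes the output."""
    base = model(features)
    contributions = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        contributions[name] = base - model(perturbed)
    return contributions

applicant = {"income": 4.0, "debt": 2.0, "age": 30.0}
contributions = explain(applicant)
```

For this applicant, the sketch attributes +2.0 to income, -1.6 to debt, and +3.0 to age, making it easy to see which features pushed the score up or down.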

The three IBM Trusted AI toolkits for Python are all accessible from GitHub.



