IBM AI Fairness 360 open source toolkit adds new functionalities

published 04.06.2020 21:26


The AI Fairness 360 R package is an open source library that contains a comprehensive set of metrics for testing datasets and models for discrimination.

Machine learning analyzes and generalizes patterns within high volumes of data and can inadvertently encode hidden biases that favor more privileged groups, according to an IBM blog post.

IBM's update, however, makes bias detection even more accessible by extending compatibility to R users and to scikit-learn.

The toolkit itself contains more than 70 fairness metrics and 11 bias mitigation algorithms developed within the research community. It is designed to translate algorithmic fairness research from the lab into real-world practice across industries including finance, human capital management, healthcare, and education, per the blog post.
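To give a sense of what fairness metrics of this kind measure, the following is a hand-rolled sketch of two widely used ones, statistical parity difference and disparate impact. The function names and formulas follow the common fairness literature; this is an illustration, not the AI Fairness 360 API itself.

```python
# Illustrative fairness metrics (not the AIF360 API).
# labels: 1 = favorable outcome; groups: "priv" / "unpriv" membership.

def favorable_rate(labels, groups, group):
    """Fraction of favorable (label == 1) outcomes within one group."""
    selected = [y for y, g in zip(labels, groups) if g == group]
    return sum(selected) / len(selected)

def statistical_parity_difference(labels, groups):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 is parity."""
    return (favorable_rate(labels, groups, "unpriv")
            - favorable_rate(labels, groups, "priv"))

def disparate_impact(labels, groups):
    """Ratio of favorable rates; values near 1.0 indicate parity."""
    return (favorable_rate(labels, groups, "unpriv")
            / favorable_rate(labels, groups, "priv"))

# Toy data: the privileged group receives the favorable outcome 3/4 of
# the time, the unprivileged group only 1/4 of the time.
labels = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["priv", "priv", "priv", "priv",
          "unpriv", "unpriv", "unpriv", "unpriv"]

print(statistical_parity_difference(labels, groups))  # -0.5
print(disparate_impact(labels, groups))               # 0.333...
```

A statistical parity difference of -0.5 and a disparate impact well below 1.0 would both flag this toy dataset as biased against the unprivileged group.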

While many IBM notebooks use scikit-learn classifiers with pre- or post-processing workflows, switching between AI Fairness 360 algorithms and scikit-learn algorithms previously disrupted the workflow, forcing users to convert data structures back and forth, per the blog post.
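The appeal of scikit-learn compatibility is that a mitigation step becomes an ordinary object with `fit`/`transform` methods that plain arrays flow through, with no conversion to a custom dataset structure. The following minimal sketch follows that convention for a reweighing-style pre-processing step; the class name and internals are illustrative, not AIF360's actual `Reweighing` implementation.

```python
# Minimal sketch of a scikit-learn-style pre-processing step
# (illustrative only; not AIF360's actual Reweighing class).
from collections import Counter

class SimpleReweigher:
    def fit(self, groups, labels):
        # Weight each (group, label) cell so that group membership and
        # outcome look statistically independent, following the classic
        # reweighing idea: w(g, y) = P(g) * P(y) / P(g, y).
        n = len(labels)
        g_count = Counter(groups)
        y_count = Counter(labels)
        gy_count = Counter(zip(groups, labels))
        self.weights_ = {
            (g, y): (g_count[g] * y_count[y]) / (n * gy_count[(g, y)])
            for (g, y) in gy_count
        }
        return self

    def transform(self, groups, labels):
        """Return one sample weight per example."""
        return [self.weights_[(g, y)] for g, y in zip(groups, labels)]

groups = ["priv", "priv", "priv", "priv",
          "unpriv", "unpriv", "unpriv", "unpriv"]
labels = [1, 0, 1, 1, 0, 1, 0, 0]

rw = SimpleReweigher().fit(groups, labels)
sample_weights = rw.transform(groups, labels)
# Rare cells (e.g. favorable outcomes in the unprivileged group) get
# weights above 1; over-represented cells get weights below 1.
```

Because the weights come back as a plain list, they can be passed straight to a scikit-learn classifier's `sample_weight` argument, which is exactly the kind of friction-free handoff the update enables.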