Bias and prejudice remain serious issues across many societies; take human oversight out of automated decisions and the results could be disastrous.
IBM is stepping in with a tool it calls ‘AI Fairness 360’, which scans algorithms for signs of bias and recommends adjustments to correct them.
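To make the idea concrete, here is a minimal sketch of the kind of group-fairness check such a tool automates: comparing favorable-outcome rates between a privileged and an unprivileged group. The function name, data, and group labels are illustrative assumptions, not IBM's actual API.

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A value near 1.0 suggests parity; the common '80% rule' flags
    anything below 0.8 as potential adverse impact."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)       # favorable rate, privileged group
    rate_unpriv = sum(unpriv) / len(unpriv)  # favorable rate, unprivileged group
    return rate_unpriv / rate_priv

# Toy decisions from a hypothetical model: 1 = favorable outcome
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b']
print(disparate_impact(outcomes, groups, privileged='a'))  # → 0.25
```

Here group ‘a’ receives favorable outcomes 80% of the time versus 20% for group ‘b’, so the ratio of 0.25 falls far below the 0.8 threshold and a tool performing this scan would flag the model for review.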
AIs already have a documented bias problem. It’s rarely intentional; more often it results from development teams drawn largely from the dominant groups of each society.
Take facial recognition software, for example.
Continue reading: IBM releases tool for tackling scourge of bias in AI algorithms