September 12, 2016

A cautionary tale about humans creating biased AI models

The AI field lacks diversity — even more spectacularly than most of our software industry. When an AI practitioner builds a data set on which to train his or her algorithm, it is likely that the data set will only represent one worldview: the practitioner's. The resulting AI model demonstrates a non-diverse "intelligence" at best, and a biased or even offensive one…

Source: TechCrunch