Scientists have raised concerns about the future of artificial intelligence after finding that a robot had learned toxic stereotypes from the internet.
Researcher Andrew Hundt, a postdoctoral fellow at the Georgia Institute of Technology who co-led the work as a doctoral student at the Johns Hopkins Laboratory for Computational Interaction and Robotics in Baltimore, Maryland, said: “The robot has learned toxic stereotypes through these flawed neural network models.”
“We are at risk of creating a generation of racist and sexist robots, but people and organizations have decided that it’s okay to create these products without addressing the issues,” Hundt added.
Those who train AI models to recognize humans often turn to huge data sets that are freely available on the Internet, the researchers said.
But since the web is riddled with inaccurate and overtly biased content, they said any algorithm built with these datasets could be saddled with the same problems.
To prevent future machines from adopting and re-enacting these human stereotypes, the research team said, systemic changes to research and business practices are needed.