Selena Templeton and co-host Marco Ciappelli chat with Ariel Herbert-Voss, a Ph.D. student at Harvard University focusing on adversarial machine learning, about artificial intelligence, machine learning, and unconscious bias.
This conversation was sparked by an article we all read that stated that a study “showed that AI is capable of forming prejudices all by itself. The researchers wrote that ‘groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behaviour from one another.’"
But I’d argue that it’s more accurate to say that ‘AI demonstrated prejudice by simply identifying, copying and learning this behavior from its human programmers.’
It’s no secret that algorithms are, or can be, inherently biased; facial and image recognition software is probably the most notorious example. Google “successful woman,” for instance, and you’re shown nothing but young, attractive white women – and one picture of Oprah.
In another example, an MIT grad student demonstrated that facial recognition software could recognize the white-skinned faces in her research but could not "see" her own dark-skinned face – unless she put on a white mask.
But it gets even more dangerous when this unconscious bias makes its way into medical care and devices, or into law enforcement tools like PredPol, an algorithm that predicts when and where crimes will take place. PredPol is meant to reduce human bias in policing, but studies show that because the software learns from reports recorded by the police rather than from actual crime rates, it creates a “feedback loop” of racial bias: patrols are sent where past reports are concentrated, those patrols generate more reports, and the machine learning model takes that skewed input and learns from it.
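To make the feedback loop concrete, here is a toy sketch (not PredPol’s actual model; every number in it is hypothetical) of two neighborhoods with identical true crime rates. Patrols are allocated in proportion to past reports, and reports are only generated where patrols go, so an initial imbalance in the historical data never self-corrects:

```python
# Toy feedback-loop illustration -- all figures are made up for this sketch.
TRUE_CRIME_RATE = 0.3      # identical in both neighborhoods
PATROLS_PER_DAY = 10.0

reports = [15.0, 10.0]     # historical reports: a 60/40 split from past bias

for day in range(200):
    total = sum(reports)
    for hood in range(2):
        # allocate patrols in proportion to recorded reports (the bias source)
        patrols = PATROLS_PER_DAY * reports[hood] / total
        # patrols observe crime at the SAME true rate in both neighborhoods,
        # but only patrolled areas generate new reports
        reports[hood] += patrols * TRUE_CRIME_RATE

share = reports[0] / sum(reports)
print(f"report share for neighborhood 0 after 200 days: {share:.2f}")
# -> 0.60: the gap persists even though the underlying crime rates are equal
```

Because both neighborhoods grow their report counts by the same multiplicative factor each day, the 60/40 split from the biased historical data is preserved forever; the model’s output simply “confirms” its own input, which is the feedback loop researchers describe.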
Without further ado, here is what Ariel, Marco and I had to say about all this….
About Ariel Herbert-Voss
Ariel is a PhD student at Harvard University, where she specializes in deep learning, cybersecurity, and mathematical optimization. Like many machine learning researchers, she spent plenty of time thinking about deep learning from a computational neuroscience point of view without realizing that skulls make biological neural networks a lot less hackable than artificial ones. Now she thinks about securing machine learning algorithms and offensive applications.