AI Robots Are Developing Prejudices Because Of Us Mere Mortals


By Selena Templeton, host of DiverseIT

Selena Templeton and co-host Marco Ciappelli chat with Ariel Herbert-Voss, a Ph.D. student at Harvard University focused on adversarial machine learning, about artificial intelligence, machine learning and unconscious bias.

This conversation was sparked by an article we had all read, which reported that a study “showed that AI is capable of forming prejudices all by itself. The researchers wrote that ‘groups of autonomous machines could demonstrate prejudice by simply identifying, copying and learning this behaviour from one another.’”

But I’d argue that it’s more accurate to say that ‘AI demonstrated prejudice by simply identifying, copying and learning this behavior from the human programmers.’

You’re only as good as your data when it comes to building a machine learning system...[so] you have to understand your own biases...because you’re bringing [them] to the table.
— Ariel Herbert-Voss

It’s no secret that algorithms can be deeply biased; facial and image recognition software is probably the most notorious example. Case in point: a Google image search for “successful woman” returns nothing but young, attractive white women – and one picture of Oprah.

The issue is not technology; the issue is philosophical, it’s about human nature… We can only build something that is fair if we are. And I think we are far from that.
— Marco Ciappelli

In another example, an MIT grad student demonstrated that facial recognition software recognized the light-skinned faces in her research but could not “see” her own dark-skinned face – unless she put on a white mask.

Selena: What are the incentives to eliminate bias?
Ariel: The conversation around fairness hasn’t really caught up yet.

But it gets even more dangerous when this unconscious bias makes its way into medical care and devices, or into law enforcement tools like PredPol, an algorithm used to predict when and where crimes will take place. PredPol is meant to reduce human bias in policing, but studies show that because the software learns from reports recorded by the police rather than from actual crime rates, it simply creates a “feedback loop” of racial bias – and the machine learning model takes that skewed input and learns from it.
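To make that feedback loop concrete, here is a minimal toy simulation. The neighborhoods, numbers, and patrol-allocation rule are entirely hypothetical and this is not PredPol’s actual model; the sketch only illustrates how training on police-recorded incidents, rather than true crime rates, can reinforce an initial skew.

```python
# Hypothetical toy simulation of the "feedback loop" described above.
# This is NOT PredPol's actual algorithm -- just a sketch of how a model trained
# on police-recorded incidents (rather than true crime rates) can keep sending
# officers back to whichever neighborhood was patrolled most in the past.

import random

random.seed(0)

# Assume two neighborhoods with identical true crime rates.
TRUE_CRIME_RATE = {"A": 0.3, "B": 0.3}

# Historical records start slightly skewed (e.g. past over-policing of A).
recorded = {"A": 12, "B": 8}

for week in range(1, 11):
    # "Model": treat the neighborhood with more recorded incidents as the
    # hotspot and send it the bulk of the patrols.
    hotspot = max(recorded, key=recorded.get)
    patrols = {n: 70 if n == hotspot else 30 for n in recorded}

    # Crimes only enter the records where officers are present to observe them,
    # so new data reflects patrol placement, not the underlying crime rate.
    for n in recorded:
        recorded[n] += sum(random.random() < TRUE_CRIME_RATE[n] for _ in range(patrols[n]))

    print(f"week {week}: patrols={patrols}, recorded={recorded}")

# Both neighborhoods have the same true crime rate, but the initial skew is
# amplified every week: more patrols in A -> more recorded crime in A -> A stays
# the "hotspot". The model learns a disparity that it helped create.
```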

It’s kind of funny or ironic because the more advances we make in AI and machine learning, I think the more we’re actually learning – or could be learning – about what it is to be human and examining ourselves under a greater microscope.
— Selena Templeton

Without further ado, here is what Ariel, Marco and I had to say about all this…


About Ariel Herbert-Voss


Ariel is a PhD student at Harvard University, where she specializes in deep learning, cybersecurity, and mathematical optimization. Like many machine learning researchers, she spent plenty of time thinking about deep learning from a computational neuroscience point of view without realizing that skulls make biological neural networks a lot less hackable than artificial ones. Now she thinks about how to secure machine learning algorithms, as well as their offensive applications.

Find Ariel on Twitter & LinkedIn.