Princeton University Researchers See Evidence Of Human Bias In AI Systems

Researchers at Princeton University have found that machines can exhibit human bias because they reflect the language of their creators. This threatens to reinforce the systemic prejudices that modern society is trying to eradicate.

Many people assume that artificial intelligence systems are logical, objective and rational. However, common machine learning programs, because they are trained on ordinary human language found online, can absorb the cultural biases embedded in patterns of wording.

According to a press release distributed via EurekAlert, these biases range from the morally neutral, such as a preference for flowers over insects, to the questionable and sensitive issues of race and gender. Identifying and addressing such biases in machine learning systems is important as society increasingly entrusts computers with processing the natural language used in communication.

According to researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, questions about fairness and bias in machine learning are especially important today. These AI systems may be perpetuating historical patterns of bias that society is trying to move away from.

The study was published in the journal Science. It was authored by Aylin Caliskan, a postdoctoral research associate and CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate; and Narayanan.

The researchers modeled their work on the Implicit Association Test, which has been used in numerous social psychology studies since it was developed at the University of Washington in the late 1990s.

The team devised an experiment that is essentially a machine learning version of the Implicit Association Test, called the Word-Embedding Association Test (WEAT). They applied it to GloVe, a word-embedding model developed by Stanford University researchers that represents each word as a vector based on its co-occurrence with other words in text.

The researchers examined two sets of target words, such as "programmer, engineer, scientist" and "nurse, teacher, librarian," along with two sets of attribute words, such as "man, male" and "woman, female."

They found that the machine learning program associated female names more strongly with familial words such as "parents" and "wedding," while male names had stronger associations with career-related words such as "professional" and "salary."
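The association measure behind this kind of comparison is straightforward to sketch. The snippet below is a minimal, illustrative Python version of a WEAT-style per-word score: the mean cosine similarity of a target word to one attribute set minus its mean similarity to the other. The GloVe file path and the short word lists here are stand-ins rather than the study's actual stimuli, and the sketch omits the effect-size normalization and permutation test the authors use to assess significance.

```python
import numpy as np

def load_glove(path):
    """Parse a GloVe text file (one word followed by its vector per line)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b, vectors):
    """Mean similarity of `word` to attribute set A minus its mean
    similarity to attribute set B (a WEAT-style per-word score)."""
    a = np.mean([cosine(vectors[word], vectors[w]) for w in attrs_a])
    b = np.mean([cosine(vectors[word], vectors[w]) for w in attrs_b])
    return a - b

# Hypothetical local path to pretrained GloVe vectors; word lists are
# illustrative, not the paper's full stimulus sets.
vectors = load_glove("glove.6B.300d.txt")
male = ["man", "male", "he"]
female = ["woman", "female", "she"]
for word in ["programmer", "engineer", "nurse", "librarian"]:
    print(word, association(word, male, female, vectors))
```

Under these assumptions, a positive score means the word sits closer to the male attribute words in the embedding space, which is the pattern the researchers report for career terms.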
