
Apr 15, 2017 10:25 PM EDT

Researchers at Princeton University have shown how machines can exhibit human bias because they reflect the people who create them, a finding that suggests artificial intelligence could reinforce the very systemic prejudices modern society is trying to eradicate.

Many people assume that new artificial intelligence systems are logical, objective and rational. However, common machine learning programs, because they are trained on ordinary human language collected online, can absorb the cultural biases embedded in its patterns of wording.

In a press release distributed via EurekAlert, the researchers reported that these biases range from morally neutral ones, such as a preference for flowers over insects, to sensitive questions of race and gender. Identifying and addressing such biases in machine learning systems is important as society increasingly trusts computers to process the natural language used in everyday communication.

According to researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, questions about fairness and bias in machine learning are tremendously important right now. These AI systems, he warned, may be perpetuating historical patterns of bias that society is trying to move away from.

The study was published in the journal Science. Aylin Caliskan, a postdoctoral research associate and CITP fellow at Princeton, and Joanna Bryson, a reader at the University of Bath and a CITP affiliate, authored the study together with Narayanan.

The researchers drew on the Implicit Association Test, a tool used in numerous social psychology studies that was developed at the University of Washington in the late 1990s.

The team devised an experiment built around a machine learning analogue of the Implicit Association Test, which they applied to GloVe, a word-embedding program developed by Stanford University researchers. In broad strokes, the method works as sketched below.
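
As a rough illustration of the approach, and not the authors' published code, the sketch below loads pretrained GloVe vectors and scores how strongly a word leans toward one set of attribute words versus another, using the cosine of the angle between word vectors. The file name glove.6B.300d.txt and the helper names (load_glove, cosine, association) are illustrative assumptions, not details from the study.

    # Minimal sketch of a word-embedding association measure (assumptions noted above).
    import numpy as np

    def load_glove(path):
        """Parse a GloVe text file into a {word: vector} dictionary."""
        vectors = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
        return vectors

    def cosine(u, v):
        """Cosine similarity between two vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def association(word, attr_a, attr_b, vecs):
        """Mean similarity to attribute set A minus mean similarity to set B."""
        sim_a = np.mean([cosine(vecs[word], vecs[a]) for a in attr_a])
        sim_b = np.mean([cosine(vecs[word], vecs[b]) for b in attr_b])
        return sim_a - sim_b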

The researchers examined groups of target words such as "programmer, engineer, scientist" and "nurse, teacher, librarian," along with two sets of attribute words: "man, male" and "woman, female."

They found that the machine learning program more strongly associated female names with familial words such as "parents" and "wedding." Male names, on the other hand, had stronger associations with career-related words such as "professional" and "salary."
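
Continuing the hypothetical sketch above, the same association score can be computed for each target word against the two attribute sets the article mentions; a positive score would indicate a lean toward the "male" attributes and a negative score a lean toward the "female" ones. The vector file path is again an assumed example.

    # Score the article's example target words against the two attribute sets.
    vecs = load_glove("glove.6B.300d.txt")  # assumed local copy of Stanford's GloVe vectors
    male = ["man", "male"]
    female = ["woman", "female"]
    targets = ["programmer", "engineer", "scientist", "nurse", "teacher", "librarian"]
    for word in targets:
        # Positive: closer to the male attribute set; negative: closer to the female set.
        print(f"{word:>10}: {association(word, male, female, vecs):+.4f}")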

