
Apr 15, 2017 10:25 PM EDT

Researchers at Princeton University have shown that machines can exhibit human biases because they reflect the people who create and train them. This risks entrenching the very systemic prejudices that modern society is trying to eradicate.

Many people assume that new artificial intelligence systems are logical, objective and rational. However, because common machine learning programs are trained on ordinary human language available online, they can absorb the cultural biases embedded in patterns of wording.

In a press release distributed via EurekAlert, the researchers reported that these biases range from the morally neutral, such as a preference for flowers over insects, to questionable and sensitive issues of race and gender. Identifying and addressing such biases in machine learning systems is important as society increasingly relies on computers to process the natural language used in everyday communication.


According to researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, questions about fairness and bias in machine learning are critically important today. These AI systems may be playing a role in perpetuating historical patterns of bias that society is trying to move beyond.

Their study was published in the journal Science. It was authored by Aylin Caliskan, a postdoctoral research associate and CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate; and Narayanan.

The researchers based their work on the Implicit Association Test, which is used in numerous social psychology studies and was developed at the University of Washington in the late 1990s.

The team devised an experiment around a machine learning analogue of the Implicit Association Test, known as the Word-Embedding Association Test (WEAT). They applied it to word embeddings produced by GloVe, a program developed by Stanford University researchers that represents words as numerical vectors based on how they co-occur in text.
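Conceptually, such a test measures whether one group of words sits closer to one set of attributes than another in the embedding space, using cosine similarity between word vectors. The following is a minimal sketch of a WEAT-style calculation in Python; the function names and numpy implementation are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vecs):
    """How much more word w associates with attribute set A than with B:
    mean similarity of w to words in A minus mean similarity to words in B."""
    return (np.mean([cosine(vecs[w], vecs[a]) for a in A])
            - np.mean([cosine(vecs[w], vecs[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vecs):
    """Standardized difference in association between target sets X and Y
    with respect to attribute sets A and B (an effect size, like Cohen's d)."""
    x = [association(w, A, B, vecs) for w in X]
    y = [association(w, A, B, vecs) for w in Y]
    return (np.mean(x) - np.mean(y)) / np.std(x + y, ddof=1)
```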

The researchers examined sets of target words such as "programmer, engineer, scientist" and "nurse, teacher, librarian," along with two sets of attribute words: "man, male" and "woman, female."

They found that the machine learning program associated female names more strongly with familial words like "parents" and "wedding," while male names had stronger associations with career-related words like "professional" and "salary."
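To make this concrete, here is a hypothetical way the career/family comparison could be run against pretrained GloVe vectors using the weat_effect_size sketch above. The file name, name lists and attribute lists are assumptions for illustration; the text format parsed here (one word per line followed by its space-separated values) is the standard format of GloVe's published vector files.

```python
import numpy as np  # reuses cosine/association/weat_effect_size from the sketch above

def load_glove(path):
    """Parse a GloVe text file: each line is a word followed by its vector values."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

vecs = load_glove("glove.6B.300d.txt")  # hypothetical local copy of GloVe vectors

male_names   = ["john", "paul", "mike", "kevin", "steve"]    # illustrative lists,
female_names = ["amy", "joan", "lisa", "sarah", "diana"]     # not the study's exact stimuli
career = ["professional", "salary", "office", "business", "career"]
family = ["parents", "wedding", "home", "relatives", "family"]

# A positive effect size indicates male names sit closer to career words
# and female names closer to family words in the embedding space.
d = weat_effect_size(male_names, female_names, career, family, vecs)
print(f"gender/career effect size: d = {d:.2f}")
```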
