
Apr 15, 2017 10:25 PM EDT

Researchers at Princeton University have shown that machines can exhibit human bias because they learn from material created by people. This risks reinforcing the systemic prejudices that modern society is trying to eradicate.

Many people assume that new artificial intelligence systems are logical, objective and rational. However, because common machine learning programs are trained on ordinary human language found online, they can absorb the cultural biases embedded in patterns of wording.

According to a press release distributed via EurekAlert, these biases range from the morally neutral, such as a preference for flowers over insects, to sensitive issues of race and gender. Identifying and addressing such biases in machine learning systems matters as society increasingly trusts computers to process the natural language used in everyday communication.

According to researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, questions about fairness and bias in machine learning are tremendously important today. These AI systems may be playing a role in perpetuating historical patterns of bias that society is trying to move away from.

The study has been published in the journal "Science." It was authored by Aylin Caliskan, a postdoctoral research associate and CITP fellow at Princeton, and Joanna Bryson, a reader at the University of Bath and a CITP affiliate, together with Narayanan.

The researchers drew on the Implicit Association Test, a tool used in numerous social psychology studies that was developed at the University of Washington in the late 1990s.

The team devised what is essentially a machine learning version of the Implicit Association Test, applying it to GloVe, a word-embedding program developed by Stanford University researchers.

The researchers examined groups of target words such as "programmer, engineer, scientist" and "nurse, teacher, librarian" alongside two sets of attribute words: "man, male" and "woman, female."

They found that the machine learning program associated female names more strongly with familial words such as "parents" and "wedding," while male names had stronger associations with career-related words such as "professional" and "salary."
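To make the method concrete, the sketch below shows how an embedding-based association score of this kind can be computed in Python. It is a simplified illustration rather than the authors' published code: the GloVe file name, the helper functions and the exact word lists are assumptions chosen to mirror the examples quoted above.

```python
# Simplified sketch of an embedding-based association test in the spirit of
# the study described above. Word lists and the GloVe file name are
# illustrative assumptions, not the published experimental setup.
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a plain-text file (word followed by floats)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a, attr_b, vecs):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    a = np.mean([cosine(vecs[word], vecs[w]) for w in attr_a if w in vecs])
    b = np.mean([cosine(vecs[word], vecs[w]) for w in attr_b if w in vecs])
    return a - b

def differential_association(targets_x, targets_y, attr_a, attr_b, vecs):
    """How much target set X leans toward attribute set A relative to set Y."""
    x = sum(association(w, attr_a, attr_b, vecs) for w in targets_x if w in vecs)
    y = sum(association(w, attr_a, attr_b, vecs) for w in targets_y if w in vecs)
    return x - y

if __name__ == "__main__":
    # Assumes a local copy of pre-trained GloVe vectors, e.g. "glove.6B.100d.txt"
    # from the Stanford NLP group's download page (the file name is an assumption).
    vecs = load_glove("glove.6B.100d.txt")
    career = ["programmer", "engineer", "scientist"]
    family = ["nurse", "teacher", "librarian"]
    male = ["man", "male", "he"]
    female = ["woman", "female", "she"]
    print(differential_association(career, family, male, female, vecs))
```

A positive score in this sketch simply means that the first set of target words sits closer, on average, to the male attribute words than to the female ones in the embedding space, which is the kind of pattern the researchers report.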

