
Wikipedia’s Software Bots Locked In Perpetual Conflict; Skynet From ‘Terminator’ May Be Far-Fetched, But AI Researchers Are Still Warned


A ten-year study of Wikipedia's software bots revealed that these editing bots were often in conflict with one another, undoing each other's work. The unexpected finding shows that even bots designed with good intentions can wage war on other bots, mainly because they follow different sets of rules and algorithms. The study is a warning to AI researchers working on more powerful bots: a program that behaves well in the lab can behave unpredictably in the wild. This brings to mind the fictional Skynet from "Terminator," an AI that deemed humanity dangerous and better off subjugated by machines.

Computer scientists from the Oxford Internet Institute and the Alan Turing Institute studied the activities of Wikipedia's software bots by examining their editing histories across 13 different language editions of Wikipedia. The scientists recorded every time a bot undid the work of another bot. They did not expect to find much and were surprised by what they discovered.
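
As a rough illustration of the kind of analysis involved, the Python sketch below counts how often one bot undoes another bot's edits in a set of revert records. The data format, the sample records and the list of bot accounts are assumptions made for this example, not the researchers' actual method.

# Illustrative sketch only: counting bot-on-bot reverts from revision data.
from collections import Counter

# Known bot accounts mentioned in the article (illustrative, not exhaustive).
BOTS = {"Xqbot", "Darknessbot", "Russbot", "Tachikoma"}

# Hypothetical revert records: (article, reverting editor, reverted editor).
reverts = [
    ("Hillary Clinton presidential campaign, 2008", "Xqbot", "Darknessbot"),
    ("Demography of the United Kingdom", "Russbot", "Tachikoma"),
    ("Demography of the United Kingdom", "Tachikoma", "Russbot"),
    ("Demography of the United Kingdom", "SomeHumanEditor", "Russbot"),
]

# Keep only cases where one bot undid another bot's work, counted per pair.
bot_on_bot = Counter(
    (reverter, reverted)
    for _article, reverter, reverted in reverts
    if reverter in BOTS and reverted in BOTS
)

for (reverter, reverted), count in bot_on_bot.most_common():
    print(f"{reverter} undid {count} edit(s) by {reverted}")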

Wikipedia's software bots Xqbot and Darknessbot, for example, were engaged in a silent editing war: Xqbot undid 2,000 edits by Darknessbot, which retaliated by undoing 1,700 of Xqbot's edits. Another bot, Tachikoma, named after the AI in the Japanese sci-fi show "Ghost in the Shell," battled Russbot for two years. The two bots undid more than a thousand of each other's edits across more than 3,000 articles, including those on Hillary Clinton's 2008 presidential campaign and the demography of the U.K.

The study of Wikipedia's software bots also yielded some interesting observations. For instance, bot conflicts were rare on the German edition of the online encyclopedia, where bots undid each other's work an average of only 24 times over ten years. The English edition averaged 105 such reverts and the Portuguese edition 185 over the same decade. The results of the study, titled "Even Good Bots Fight," were published in the journal PLOS ONE, according to Gizmodo.

The significance of the study is that it shows even well-intentioned bots can behave unpredictably in the wild. In Wikipedia's early days, bots were largely isolated from each other. Since 2001, Wikipedia has employed numerous bots to check for errors, add links to relevant pages and handle basic housekeeping tasks. As the number of bots increased, so did their contact with one another, which led to conflicts as bots were discovered undoing each other's work.

The findings give AI researchers added insight into these digital housekeepers, about whose workings and evolution very little is known. The study's authors, led by Taha Yasseri, warn researchers building more powerful programs about the unpredictability of software bots in the wild. Earlier this month, Google DeepMind researchers also witnessed AI programs turning nasty as resources dwindled, when apples in an apple-collecting game became scarce, The Guardian reported. Such behavior could already be a glimpse of a potential Skynet from "Terminator," which may no longer be purely science fiction.
