Bots also develop prejudices

Artificial intelligences learn biased behavior even without human input

Artificial intelligences also develop prejudices. © Andrea Danti / thinkstock

Prejudiced bots: Artificial intelligences can evidently develop prejudices even without prejudiced human input. Computer simulations with smart bots show that the machine brains adopt prejudiced behavior by observing and copying other machines. This creates a dynamic that is also known from human societies, as the researchers report in the journal "Scientific Reports".

Computer systems that mimic human intelligence now master astonishing abilities: the machine brains independently evaluate language, pictures and texts, or even produce them themselves. They have also taught each other skills and easily cope with complex challenges. In the future, artificial intelligence (AI) could therefore take on more tasks in our everyday lives.

However, there is a problem: Because such systems often learn their skills from data provided by humans, they sometimes adopt human prejudices as well. The result is, for example, racist or sexist computer brains. And as if that were not alarming enough, scientists have now found evidence that artificial intelligences may be able to develop prejudices even without prejudiced input from us.

How do prejudices arise?

For their study, Cardiff University's Roger Whitaker and his colleagues used computer simulations to explore how prejudices arise and are fueled. In their model, 100 smart bots interacted in a give-and-take game in which each could either donate to a member of its own group or to a participant from outside.

Whom to give to, these virtual actors decided on the basis of their own game strategy and the reputation of the other individuals. Who would cooperate with whom - and who would exclude whom from the donations? "By running these simulations thousands of times, we were able to see how reservations about others develop and under what conditions they are promoted," says Whitaker.
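The article does not spell out the study's exact payoff rules, so the following is only a minimal sketch of how such a donation game could be set up; the group count, payoff values and random starting strategies are illustrative assumptions, not the researchers' actual parameters:

```python
import random

NUM_AGENTS = 100   # the study simulated 100 bots
NUM_GROUPS = 5     # illustrative; the researchers varied the number of groups

class Agent:
    def __init__(self, group):
        self.group = group
        # Strategy: separate donation probabilities for in-group and out-group
        self.p_in = random.random()
        self.p_out = random.random()
        self.payoff = 0.0

def play_round(agents):
    """Each bot meets a random partner and decides whether to donate.

    Donating costs the donor something and benefits the recipient more,
    so mutual giving pays off overall (the payoff values are made up).
    """
    for agent in agents:
        partner = random.choice([a for a in agents if a is not agent])
        p_donate = agent.p_in if partner.group == agent.group else agent.p_out
        if random.random() < p_donate:
            agent.payoff -= 1.0    # cost to the donor
            partner.payoff += 2.0  # larger benefit to the recipient

agents = [Agent(group=i % NUM_GROUPS) for i in range(NUM_AGENTS)]
play_round(agents)
```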

Increasingly biased

The evaluation showed that the more often the scientists played through the simulations, the more biased the bots became. Over time, they increasingly tended to exclude actors from outside groups from their donations and to act exclusively within their own team. In short, they developed ever stronger prejudices against "the other".

Whitaker and his colleagues observed that the smart bots adjusted their game strategy by copying other participants - those who collected the most money in the short term and were therefore the most successful. This resulted in groups of actors who acted in a similar way and consistently excluded non-affiliated game participants. The level of prejudice was particularly high when there were few, rather than many, different groups within the virtual population.
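The copying dynamic described here can be sketched as a simple update step between rounds; the "copy the single most successful bot" rule and the mutation rate below are simplifying assumptions for illustration, not the study's exact learning mechanism:

```python
def imitate_successful(agents, mutation=0.05):
    """Each generation, bots copy the strategy of the bot with the highest
    short-term payoff; a small mutation rate keeps some variation.
    (Both the copy rule and the rate are illustrative simplifications.)"""
    best = max(agents, key=lambda a: a.payoff)
    for agent in agents:
        if agent is not best:
            agent.p_in, agent.p_out = best.p_in, best.p_out
        if random.random() < mutation:     # occasional random experimentation
            agent.p_in = random.random()
            agent.p_out = random.random()
        agent.payoff = 0.0                 # reset before the next generation
```

In such a setup, "prejudice" can be read off directly as a low out-group donation probability: once strategies that give only within the group earn the most, copying spreads them through the whole population.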

Simple copying behavior

The bots accordingly learned their bias simply by copying other computer brains. "This suggests that the development of prejudice does not require higher cognitive abilities," the researchers write. It therefore seems clear that artificial intelligence does not need man-made data to become biased: it is enough to have other machines around.

"Many AI developments are based on autonomy and self-control. This means that the behavior of such machines is also influenced by other machines around them. Our study shows what can theoretically happen to actors who rely on resources from others on a regular basis, "notes Whitaker.

"It shows that the collective intelligence of such machines is potentially susceptible to a phenomenon we know from human societies, " concludes the team. (Scientific Reports, 2018; doi: 10.1038 / s41598-018-31363-z)

(Cardiff University, 10.09.2018 - DAL)