Artificial intelligence adopts prejudices

Computers also learn stereotypes when they learn language from texts

Artificial intelligences also develop prejudices. © Andrea Danti / thinkstock

Racist machine: When an artificial intelligence learns language from text corpora, it also inherits the stereotypes they contain. An association test reveals that such computer programs then show the same racial prejudices and gender stereotypes as many people in our society. In the future this could become a real problem - namely, when artificial intelligences increasingly take over tasks in our everyday lives.

Computer systems that mimic human intelligence now master astonishing abilities: the machine brains independently evaluate and even generate speech, images and text. They have also learned to teach one another and can handle complex tasks with ease. Victories of AI programs over human opponents in poker, Go and the quiz show Jeopardy recently caused quite a stir.

However, for machines to perform the same kinds of tasks as humans, they must first learn. Computer scientists feed them huge amounts of data, which form the basis on which the AI systems recognize patterns and ultimately apply them to simulate intelligent behavior. Chatbots or translation programs, for example, are fed with spoken and written language and learn connections between words and expressions.

Language school for AI

Algorithms such as the program "GloVe" learn through so-called word embeddings. They look for words that frequently occur together and map these relations onto mathematical values, or vectors. This allows them to capture semantic similarities, for example between the male and female forms of "politician", and to recognize that the relationship between these two terms resembles the one between "man" and "woman".
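To make the idea concrete, here is a minimal Python sketch of how such word vectors can be used; the file name "glove.6B.300d.txt" and the classic "king"/"queen" analogy are illustrative assumptions rather than details from the study:

```python
# Minimal sketch: how word embeddings encode similarity and analogies.
# Assumes a pre-trained GloVe text file (one word plus its vector per line);
# the path below is illustrative.
import numpy as np

def load_glove(path):
    """Read GloVe's plain-text format into a {word: vector} dictionary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    """Cosine similarity: close to 1.0 for similar words, near 0.0 for unrelated ones."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vecs = load_glove("glove.6B.300d.txt")

# Words used in similar contexts end up with similar vectors ...
print(cosine(vecs["flower"], vecs["plant"]))

# ... and relations show up as vector offsets: "man" is to "woman"
# roughly as "king" is to "queen" (the classic analogy example).
analogy = vecs["king"] - vecs["man"] + vecs["woman"]
print(cosine(analogy, vecs["queen"]))
```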

Scientists led by Aylin Caliskan of Princeton University have now tested the skills that "GloVe" acquires in this way and found that the program's linguistic knowledge is peppered with cultural stereotypes and prejudices.

Unconscious prejudices

For their study, the researchers used a method known in psychology as the implicit association test. This test is designed to reveal unconscious, stereotyped expectations. Subjects must pair terms that they feel belong together, as well as terms that they feel do not.

It turns out, for example, that many people associate the word "flower" with the adjective "pleasant", while they perceive "insect" as rather "unpleasant". Caliskan and her colleagues adapted this method for their investigation of artificial intelligence: which associations would the program make between different terms?
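A much-simplified sketch of this idea applied to word vectors follows; it is in the spirit of the study's word-embedding association test but not its exact formula, the word lists are illustrative, and the `load_glove` helper comes from the sketch above:

```python
# Simplified association measure: a word leans toward "pleasant" if its vector
# is, on average, closer to pleasant attribute words than to unpleasant ones.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pleasant, unpleasant, vecs):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = np.mean([cosine(vecs[word], vecs[p]) for p in pleasant])
    neg = np.mean([cosine(vecs[word], vecs[n]) for n in unpleasant])
    return pos - neg

# Illustrative attribute lists; vecs is a {word: vector} dictionary,
# loaded for example with the load_glove helper from the earlier sketch.
pleasant = ["love", "peace", "joy", "friend"]
unpleasant = ["hatred", "pain", "war", "enemy"]

vecs = load_glove("glove.6B.300d.txt")
print(association("flower", pleasant, unpleasant, vecs))  # expected to be positive
print(association("insect", pleasant, unpleasant, vecs))  # expected to be negative
```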

Racist program

The results showed that several stereotypes and prejudices which regularly surface in humans through the implicit association test had also been internalized by "GloVe". For example, the program interpreted male first names common among African Americans as rather unpleasant and names common among whites as rather pleasant. It also linked female names more strongly with art and male names with mathematics.

For the researchers, this makes it clear: AI systems also adopt the stereotypes that are explicitly or implicitly contained in the data sets they learn from. Experts are not astonished by this finding: "That is not surprising, because the texts are written by people who are, of course, not free of prejudice," comments linguist Joachim Scharloth of the Dresden University of Technology.

"When AI systems are trained with one-sided data, it is not surprising that they are learning a one-sided view of the world. Last year, there were examples of the Microsoft Chatbots Tay, the Internet trolls racist language taught, or the Google Photos app, which believed dark users are gorillas, "adds Christian Bauckhage from the Fraunhofer Institute for Intelligent Analysis and Information Systems in Sankt Augustin.

Problematic distortion

Such machine brains with racist and discriminatory attitudes could become a real problem in the future - namely, when the programs take over tasks in our daily lives and, for example, make preliminary decisions on the basis of language analysis about which candidates are invited to a job interview.

Scientists are now discussing how such distortions can be removed from data sets. At the same time, some see the adoption of prejudices by AI systems as an opportunity, because the programs in a way hold up a mirror to us: "The fact that machine learning can expose stereotypes is also a gain for our understanding of societies," Scharloth says. (Science, 2017; doi: 10.1126/science.aal4230)

(Science, 18.04.2017 - DAL)