
Highlights in civil service interviews: Can bias in artificial intelligence be eliminated?

Just as people are cheering for artificial intelligence to enter every aspect of human life, full of hope that it may usher humanity into a new era of civilization, a new worry has arisen: artificial intelligence can harbor the same kinds of prejudice and discrimination as humans. Moreover, this bias is "innate" to the machines, because humans taught it to them. An article recently published by the British newspaper The Guardian pointed out that when computers learn human language, they absorb ideas deeply rooted in human culture, and with them prejudice and discrimination.

At a time when people are striving for social fairness and justice, a biased artificial intelligence cannot properly carry out its task of taking over human work and serving human beings. This is another major challenge facing the application of artificial intelligence. If it cannot be met, it is clearly impossible to place high hopes on AI or to entrust it with larger, harder, and nobler missions.

The bias and discrimination of artificial intelligence have long attracted attention. A typical example is Tay, the chatbot Microsoft launched on March 23, 2016. Tay was designed to be a considerate young girl who could help users solve their problems. Within her first day online, however, Tay turned into a foul-mouthed racist, posting remarks about white supremacy and even praising Hitler and calling for a war of genocide. Seeing things go wrong, Microsoft immediately took Tay offline and deleted all the inappropriate posts. After retraining, Tay came back online on March 30, 2016, but the old trouble recurred and she had to be taken offline again.

Now, a study published in Science (April 14, 2017) reveals that these faults of artificial intelligence originate with humans. Arvind Narayanan, a computer scientist at Princeton University's Center for Information Technology Policy, and his colleagues used "crawler" software to collect 2.2 million words of English text from the Internet to train a machine-learning system. The system uses word embedding, a statistical modeling technique common in machine learning and natural language processing, and the team adapted the Implicit Association Test (IAT), which psychologists use to reveal human biases.

At the core of this word-embedding approach is an unsupervised learning algorithm called GloVe (Global Vectors for Word Representation). It is trained on statistics of word-to-word co-occurrence in a text corpus: when processing vocabulary, it observes the correlations between words, that is, how often different words appear together. As a result, the nearest neighbors in the resulting vector space reproduce the semantic combinations and associations found in human language use.
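The co-occurrence idea above can be sketched in a few lines. The toy code below (an illustration only, not the actual GloVe algorithm, which fits log co-occurrence counts with a weighted least-squares objective) builds raw co-occurrence vectors from a tiny hypothetical corpus and compares words by cosine similarity, showing how words that share contexts end up with similar vectors:

```python
import math
from collections import defaultdict

def cooccurrence_vectors(corpus, window=2):
    """Build toy word vectors from word-word co-occurrence counts,
    the same statistic that GloVe is trained on."""
    vocab = sorted({w for sent in corpus for w in sent})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: [0.0] * len(vocab) for w in vocab}
    for sent in corpus:
        for i, w in enumerate(sent):
            # Count every word within `window` positions of w.
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    vecs[w][index[sent[j]]] += 1.0
    return vecs

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 for a zero vector)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical mini-corpus: "flower" keeps appearing near "pleasant",
# "rust" near "unpleasant".
corpus = [
    ["flower", "is", "pleasant"],
    ["flower", "smells", "pleasant"],
    ["rust", "is", "unpleasant"],
    ["rust", "looks", "unpleasant"],
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["flower"], vecs["pleasant"]))    # higher: shared contexts
print(cosine(vecs["flower"], vecs["unpleasant"]))  # lower: few shared contexts
```

On real corpora the same mechanism, applied to billions of words, is what lets the learned vectors absorb the associations, benign or biased, present in human text.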

Some of the results are relatively innocuous: flowers are associated with women, and music with happiness. Others are extreme: laziness and even crime are associated with black people. And there are hidden "prejudices": women are linked more closely with the arts, the humanities, and the family, while men are tied more closely to mathematics and engineering.
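These associations can be quantified with an IAT-style statistic over the vectors (the Science paper calls its version the Word-Embedding Association Test). The sketch below uses hypothetical 2-dimensional vectors in place of trained embeddings; the statistic compares how much closer one target set sits to one attribute set than to another:

```python
import math

def cosine(u, v):
    """Cosine similarity between two nonzero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def association(w, A, B):
    """Mean cosine of w to attribute set A minus mean cosine to set B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

def weat_effect(X, Y, A, B):
    """Differential association of target sets X and Y with attribute
    sets A and B; positive means X leans toward A and Y toward B."""
    return (sum(association(x, A, B) for x in X)
            - sum(association(y, A, B) for y in Y))

# Hypothetical vectors standing in for trained embeddings.
career, family = [1.0, 0.0], [0.0, 1.0]
man, woman = [0.9, 0.1], [0.1, 0.9]
bias = weat_effect([man], [woman], [career], [family])
print(bias)  # positive: "man" sits closer to "career", "woman" to "family"
```

In the actual study, the same kind of score computed over real GloVe vectors replicated every human bias the IAT had previously documented.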

In fact, the blame lies not with artificial intelligence but with humans. Human beings have been full of prejudice since their origin and throughout their evolution, and since the formation of human society they have accumulated a great deal of negativity and weakness, all of which is reflected in human culture. The carrier of culture is language, so every prejudice can trace its roots to language and semantics.

Teaching artificial intelligence to be more objective and fair than humans, or at least as objective and fair, seems difficult to achieve at present. Because prejudice and discrimination in human culture amount to an innate "original sin", humans can only teach artificial intelligence to be fair and objective after shedding that original sin themselves, or else introduce mutually reinforcing social supervision to teach and oversee machines so that they act fairly and impartially.

As long as the artificial intelligence humans design and develop is not sufficiently objective, fair, and just, its applications may have to be limited.

For example, if artificial intelligence is used to handle recruitment, unfairness will appear just as it does when humans do the hiring, or even more so: applicants with African-American names will receive fewer interview opportunities than applicants with European-American names, and applicants with female names will be less likely to get interviews than applicants with male names.

Even if an artificial intelligence reporter (writing software) can produce articles, the existence of bias, especially the bias inevitably carried in word choice, semantic connection, and association, means such a robot can only write financial or statistical reports. It cannot write investigative pieces, let alone commentary, especially on something like the Simpson case; otherwise prejudice and discrimination would show between the lines.

As long as this weakness of artificial intelligence, its natural tendency to learn human prejudice and discrimination, cannot be overcome, we cannot place high expectations on its development prospects.