Xinhua News Agency, Beijing, February 17, new media special telegram: "Russia daily" recently published an article reporting that Michael Kosinski, a professor at Stanford University in the United States, has developed an artificial-intelligence technique that can infer a person's political leanings from facial images.

According to the report, Kosinski first trained a neural network on a library of face photos of people whose political views were known. He then had the network classify millions of people as liberals or conservatives from their photos alone. The AI's accuracy was 72%. That may not seem impressive, but human judges do worse: their accuracy is only 55%, barely better than flipping a coin.
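The setup described, training a classifier on labeled face photos and then scoring its accuracy, can be illustrated with a minimal sketch. This is not Kosinski's actual model or data, which are not given in the article; it substitutes a plain logistic-regression classifier and synthetic stand-in features for real photos:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 "photos" flattened to 64 features each, with
# synthetic binary labels (0 = one political label, 1 = the other).
X = rng.normal(size=(200, 64))
true_w = rng.normal(size=64)
y = (X @ true_w > 0).astype(float)  # made-up ground truth

# Logistic-regression classifier trained by gradient descent on log loss.
w = np.zeros(64)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)  # averaged gradient step

# Accuracy: fraction of photos whose predicted label matches the truth.
accuracy = float(np.mean((X @ w > 0) == y))
print(f"training accuracy: {accuracy:.2f}")
```

The real study would measure accuracy on held-out people rather than the training set, but the shape of the pipeline, labeled photos in, a per-photo prediction out, an accuracy figure at the end, is the same.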

Interestingly, Kosinski also tried to figure out how the neural network works. He examined a number of features in the photos, such as the color of facial hair, whether the eyes looked directly at the camera, and various expressions, but found that none of them accounted for the network's judgments. In other words, the network relies on features that are not yet understood.

It is worth noting that a few years earlier, Kosinski's neural networks were already able to determine a person's sexual orientation from face photos. Given a single photo, accuracy was 81% for men and 74% for women. Given five photos, accuracy rose to 91% for men and 83% for women. People judging from a single photo were correct only 61% of the time for men and 54% for women, consistent with the results of previous studies.
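The article does not say why five photos beat one, but a plausible statistical reason is aggregation: several individually noisy per-photo predictions, combined by majority vote, are right more often than any single one. A hedged simulation, taking only the 81% single-photo figure from the article and assuming the per-photo predictions are independent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume a per-photo classifier that is correct 81% of the time,
# and that errors on different photos of the same person are independent.
PER_PHOTO_ACC = 0.81
TRIALS = 100_000

def majority_vote_accuracy(n_photos):
    # Each row holds n_photos correct/incorrect votes for one person;
    # the final prediction is the majority of the votes.
    votes = rng.random((TRIALS, n_photos)) < PER_PHOTO_ACC
    return float(np.mean(votes.sum(axis=1) > n_photos / 2))

acc1 = majority_vote_accuracy(1)
acc5 = majority_vote_accuracy(5)
print(f"1 photo: {acc1:.2f}, 5 photos: {acc5:.2f}")
```

The independence assumption is generous (photos of one person are correlated), so the simulated five-photo gain overshoots the reported 91%, but it shows the direction of the effect.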

At the time, Kosinski's research was severely criticized by peers, who argued that the technology might endanger the safety and privacy of others. But the technology's developers retorted that they had not created a tool that infringes on personal privacy; rather, they had shown "how basic and widely used methods can threaten private life." In other words, they were warning of the dangers inherent in AI.

Kosinski has shown us one of those dangers. Neural networks make people transparent: they can find things in a person that the person may not even know about themselves.

Many scholars say that laws and regulations governing the use of artificial intelligence must be formulated. But the genie, once released, cannot be put back in the bottle.