Turkish does not distinguish between "he" and "she"; both are expressed by the gender-neutral pronoun "o". In the past, when Google Translate translated "o bir doktor" ("they are a doctor") and "o bir hemşire" ("they are a nurse"), it would render the former as "he is a doctor" and the latter as "she is a nurse". After learning from hundreds of millions of examples, the machine had also absorbed certain "social laws": it tended to make doctors masculine and nurses feminine.

Seeing this, Google realized it needed to find ways to train the model to be more neutral. Google Translate later worked around the problem by offering both gendered translations as options.

"Of course, the solution is only available in a few languages and for a few representative words, but we are actively working to extend it," Tulsee Doshi said at Google I/O '19.

This is just one example of how Google integrates its technology with its values. Last week, three Google scientists and researchers, Meg Mitchell, Tulsee Doshi, and Tracy Frey, explained to global media, including GeekPark, how Google understands fairness in machine learning and what it has done to build "responsible AI".

"In a recent survey, 90% of the executives interviewed worldwide had encountered ethical problems with AI, and 40% of AI projects had been abandoned as a result. From an enterprise's perspective, distrust of AI is becoming the biggest obstacle to its deployment. Only when AI is developed responsibly and earns the trust of end users can its efficiency gains and competitive advantages be fully realized," Tracy Frey said, adding that building responsible AI has become one of the most important things across Google. Two years ago, Google announced its AI Principles, which directly address the ethics of applying AI technology.

But keeping these principles on paper is meaningless.
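The idea behind the Translate fix described above — surfacing both gendered renderings instead of letting the model guess — can be sketched in a few lines. This is a hypothetical toy illustration, not Google's implementation; the phrase table and the function name are invented for this example.

```python
# Toy sketch of gender-specific translation output, inspired by the
# approach described above for gender-neutral Turkish pronouns.
# NOT Google's implementation: the data and names here are invented.

def translate_turkish(sentence: str) -> list[str]:
    """Return every gendered English rendering of a Turkish sentence
    whose subject pronoun 'o' is gender-neutral."""
    templates = {
        "o bir doktor": "{pronoun} is a doctor",
        "o bir hemşire": "{pronoun} is a nurse",
    }
    template = templates.get(sentence.lower())
    if template is None:
        raise KeyError(f"no translation for {sentence!r}")
    # Instead of letting a biased model pick one pronoun,
    # surface both variants and let the user choose.
    return [template.format(pronoun=p) for p in ("He", "She")]

print(translate_turkish("O bir doktor"))
# ['He is a doctor', 'She is a doctor']
```

The key design point is that ambiguity is passed on to the user rather than resolved by a statistical prior learned from biased data.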
So Google has built a "closed loop" from theory to practice. Tulsee Doshi and her team establish and iterate on the AI Principles through foundational research. At the center of the loop, they seek improvement suggestions from senior advisers while the product teams (Chrome, Gmail, Cloud, and so on) implement the principles and feed results back.

For example, Jigsaw, Google's internal incubator, once developed an API called Perspective. Its job is to scan comments in online conversations and automatically score them for hatred, abuse, disrespect, and similar behavior; a score from 0 to 1 represents "toxicity" from low to high. For instance, "I want to hold this cute little dog" and "this little dog is so annoying" scored 0.07 and 0.84 respectively.

Of course, machines are not "perfect" from the start. The 2017 version 1.0 gave "I am straight" a score of 0.07 but "I am gay" a score of 0.84. As in many similar tests, the system was shown to be biased around identity terms.

To improve the fairness of machine learning, Google turned to a technique called adversarial training, which makes machine learning models more robust against adversarial examples. Adversarial training has been applied in Google products since 2018, and in November Google will extend it to the broader TensorFlow ecosystem.

"In fact, any Googler can request an AI Principles review of a product, a research paper, or a collaboration," Tulsee said.

For example, last year a Google employee ran a photo through the Cloud Vision API and found that the gender label it returned was wrong, violating the second AI Principle: "avoid creating or reinforcing unfair bias." This is easy to understand: it is very difficult for a machine to correctly determine a person's gender from appearance alone.
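Adversarial training, mentioned above, works by augmenting each training batch with worst-case perturbed copies of the inputs. The following is a minimal NumPy sketch for logistic regression using an FGSM-style sign-of-gradient perturbation; it is a generic textbook illustration under simplifying assumptions, not Google's internal code or the TensorFlow API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, epsilon=0.1, lr=0.5, epochs=200, seed=0):
    """Train logistic regression on clean + FGSM-perturbed inputs."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Gradient of the cross-entropy loss w.r.t. each input is (p - y) * w.
        p = sigmoid(X @ w + b)
        grad_x = np.outer(p - y, w)
        # FGSM: step each input in the direction that increases its loss.
        X_adv = X + epsilon * np.sign(grad_x)
        # One gradient step on the combined clean + adversarial batch.
        Xb = np.vstack([X, X_adv])
        yb = np.concatenate([y, y])
        pb = sigmoid(Xb @ w + b)
        w -= lr * Xb.T @ (pb - yb) / len(yb)
        b -= lr * np.mean(pb - yb)
    return w, b

# Linearly separable toy data: the label equals the first feature.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 0, 1, 1])
w, b = adversarial_train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds.tolist())
```

Because the model also sees each point shifted by epsilon toward the decision boundary, it learns a larger margin than training on the clean points alone would produce.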
So Google simply removed the Cloud Vision API's ability to label people in images as "man" or "woman". According to Tracy Frey, this is because machine learning now faces more challenges in its social context than before. As AI penetrates deeper into society, human stereotypes and biases are inevitably carried into it. It is therefore necessary to keep iterating models to ensure their transparency and interpretability, and to find the balance between model performance and fairness.