A paper co-authored by Timnit Gebru, a former Google AI ethicist, raises some potentially tricky questions for Google, such as whether its AI language models are too large and whether technology companies are doing enough to reduce the potential risks.

Google's AI team created a language model called BERT in 2018. The model was very successful, and the company integrated it into its search engine. Search is Google's most lucrative business, generating $26.3 billion in revenue in the third quarter of this year alone. "This year, including this quarter, has shown how valuable Google's founding product, Search, has been to people," Google CEO Sundar Pichai said on a conference call with investors in October.

Gebru and her team submitted a paper titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" to a research conference. On Wednesday local time, however, she tweeted that after an internal review she had been asked to withdraw the paper or remove the names of Google employees from it. She said she asked Google what conditions would have to be met for her to remove her name, and that if those conditions were not met, she would work until an end date and then leave. Gebru said she subsequently received an email from Google informing others that it had accepted "her resignation with immediate effect."

In an email to employees, Jeff Dean, head of Google AI, said that Gebru's paper "didn't meet our publication standards." He wrote that one of Gebru's conditions for continuing to work at Google was that the company tell her who had reviewed the paper and what their specific feedback was, which it declined to do: "Timnit wrote that if we didn't meet these requirements, she would leave Google and work until an end date. We accept and respect her decision to resign from Google." Dean also wrote that the paper "ignored too much relevant research," but its co-author Emily M. Bender, a professor of computational linguistics at the University of Washington, disagrees, pointing out that the paper's bibliography cites 128 references.

Gebru is known for her work on algorithmic bias, especially in facial recognition technology. In 2018, she co-wrote a paper with Joy Buolamwini showing that facial recognition performed worse on people of color because the vast majority of the data sets used to train the algorithms were of white subjects. In an interview with Wired on Thursday, Gebru said she felt censored: "You're not going to have papers that keep the company happy and don't point out any problems. That's not the point of being a researcher."

Since news of Gebru's dismissal became public, thousands of supporters, including more than 1,500 Google employees, have signed a protest letter. The petition asks Dean and the others involved in reviewing Gebru's paper to meet with the Ethical AI team to explain how the paper came to be unilaterally rejected by leadership.