A new transparency report, released on Thursday, gives more detail on hate speech on the platform after the company announced policy changes earlier this year, although it still leaves some major questions unanswered.

Facebook's quarterly report includes new information about the prevalence of hate speech. The company estimates that 0.10 to 0.11 percent of content seen by Facebook users violates its hate speech rules, which it describes as "10 to 11 views of hate speech for every 10,000 views of content." The figure is based on a random sample of posts and measures the reach of content rather than a raw post count, and it has not been verified by an external audit. On a call with reporters, Guy Rosen, Facebook's vice president of integrity, said the company was planning and working toward an audit.

Facebook maintains that it proactively removes most hate speech before users report it. The company said that over the past three months, about 95 percent of the hate speech taken down on Facebook and Instagram was removed proactively.

That is a huge leap from its earliest efforts: at the end of 2017, it proactively removed only about 24 percent. Facebook has also ramped up hate speech takedowns overall: around 645,000 pieces of content were removed in the fourth quarter of 2019, while in the third quarter of 2020 that number soared to 6.5 million. Organized hate groups, which fall into a separate moderation category, saw a much smaller increase, from 139,900 to 224,700 takedowns.

Facebook attributes some of these takedowns to improvements in AI. In May, it launched a research competition to develop systems that better detect "hateful memes." In its latest report, the company says it can now analyze text and images together, catching content such as hateful image macros. This approach has clear limitations, though.
As Facebook points out, a new piece of hate speech may not resemble earlier examples because it references a new trend or news story. Detection also depends on Facebook's ability to analyze multiple languages and catch country-specific trends, as well as on how Facebook defines hate speech, a category that has changed over time. Holocaust denial, for instance, was banned only last month.

None of this necessarily helps Facebook's human moderators. Despite recent changes, the coronavirus pandemic has disrupted Facebook's normal moderation process, since the company does not allow reviewers to view highly sensitive content from home. Facebook said in its quarterly report that its takedown numbers were returning to pre-pandemic levels, thanks in part to AI.

But some employees have complained that they are being forced back to the office before it is safe, and 200 content moderators signed an open letter demanding better coronavirus protections. In the letter, the moderators said automation had failed to address the most serious problems. "The AI isn't up to the job. Important speech gets swept into Facebook's filters, while dangerous content like self-harm stays up," they said. Rosen disputed their assessment, saying Facebook's offices meet or exceed the requirements for a safe workspace. "These are extremely important people who play an extremely important role in this work, and our investment in AI is helping us detect and remove this content to protect people," he said.

Facebook's critics, including U.S. lawmakers, may still argue that it is not catching enough hateful content. Last week, 15 U.S. senators pressed Facebook to respond to posts attacking Muslims around the world and to provide more country-specific data on its moderation practices and the targets of hate speech. Facebook CEO Mark Zuckerberg defended the company's moderation approach at a Senate hearing, suggesting Facebook may include such data in future reports.