Here’s how content moderation works on Facebook. Posts thought to violate company rules (anything from spam to hate speech to content that “glorifies violence”) are flagged, either by users or by machine learning filters. Some clear-cut cases are handled automatically (the response might be removing a post or blocking an account), while the rest are queued for review by human moderators.

Facebook employs about 15,000 of these moderators around the world, and has been criticized in the past for not giving them enough support, under working conditions that can be traumatizing. Their job is to sort through flagged posts and decide whether they violate the company’s policies.

In the past, moderators reviewed posts more or less chronologically, handling them in the order they were reported. Now, Facebook says it wants to make sure the most important posts are seen first, and is using machine learning to help. Going forward, a combination of machine learning algorithms will sort the queue and prioritize posts by three criteria: their virality, their severity, and the likelihood that they actually violate the rules. How these criteria are weighted is unclear, but Facebook says the goal is to deal with the most damaging posts first. So the more viral a post is (the more it is shared and seen), the sooner it is handled. The same goes for severity: Facebook says it ranks posts involving real-world harm as the most important, which can mean content involving child exploitation or self-harm. Posts like spam, by contrast, are annoying but not traumatic, and are ranked lowest for review.
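The prioritization scheme described above can be sketched as a scored priority queue. Facebook has not disclosed how the three criteria are weighted or combined, so the weights and the linear scoring function below are purely illustrative assumptions:

```python
import heapq

# Illustrative weights -- Facebook has not published how virality,
# severity, and violation likelihood are actually combined.
WEIGHTS = {"virality": 0.5, "severity": 0.3, "violation_likelihood": 0.2}

def priority(post):
    """Fold the three criteria into one score; higher means review sooner."""
    return sum(WEIGHTS[k] * post[k] for k in WEIGHTS)

def build_review_queue(posts):
    """Yield posts in review order, most damaging first."""
    # heapq is a min-heap, so negate the score to pop the highest score first.
    # The index i breaks ties so dicts are never compared directly.
    heap = [(-priority(p), i, p) for i, p in enumerate(posts)]
    heapq.heapify(heap)
    while heap:
        _, _, post = heapq.heappop(heap)
        yield post

posts = [
    {"id": "spam-1", "virality": 0.2, "severity": 0.1, "violation_likelihood": 0.9},
    {"id": "harm-1", "virality": 0.8, "severity": 0.9, "violation_likelihood": 0.7},
]
ordered = list(build_review_queue(posts))
# harm-1 comes out first: its virality and severity outweigh spam-1's
# higher violation likelihood under these example weights.
```

Under this toy scoring, a viral post about real-world harm jumps ahead of likely spam, matching the ordering the article describes.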
“All content that violates our policies will still receive some substantial human review, but we’ll be using this system to better prioritize,” Ryan Barnes, a product manager on Facebook’s community integrity team, told reporters at a press briefing.

Facebook has shared some details in the past about how its machine learning filters analyze posts. These systems include a model called “WPIE,” which stands for “whole post integrity embeddings” and takes what Facebook calls a “holistic” approach to assessing content. This means the algorithm judges the various elements of a given post jointly, trying to work out what the image, caption, poster, and so on reveal together.

Facebook’s use of AI to moderate its platform has drawn scrutiny in the past, with critics noting that AI lacks a human’s ability to judge the context of much online communication. Especially on topics like misinformation, bullying, and harassment, it can be nearly impossible for a computer to know what it is looking at.

Chris Palow, a software engineer on Facebook’s interaction integrity team, agreed that AI has its limits, but told reporters that the technology can still play a role in removing unwanted content. “This system is about marrying AI with human reviewers to make fewer total mistakes,” Palow said. “AI is never going to be perfect.” Asked what percentage of posts the company’s machine learning systems classify incorrectly, Palow did not give a direct answer, but noted that Facebook only lets automated systems act without human supervision when they are as accurate as human reviewers. “The bar for automated action is very high,” he said. Nonetheless, Facebook is steadily adding more AI to its moderation mix.
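Facebook has not published WPIE’s architecture, but the “holistic” idea of judging a post’s elements together can be sketched as fusing per-element embeddings into a single vector before classification. The encoders and the linear scoring head below are toy stand-ins, not the real model:

```python
import numpy as np

# Toy stand-ins for learned encoders: each maps an element of the post to a
# small vector. Real systems would use trained text and image models.
def embed_text(text, dim=4):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def embed_image(image_id, dim=4):
    rng = np.random.default_rng(abs(hash(image_id)) % (2**32))
    return rng.standard_normal(dim)

def whole_post_embedding(post):
    """Fuse per-element embeddings so a classifier sees the post as a whole,
    rather than scoring the image, title, and body independently."""
    parts = [
        embed_text(post["title"]),
        embed_text(post["body"]),
        embed_image(post["image"]),
    ]
    return np.concatenate(parts)  # shape (12,) with dim=4 per element

def violation_score(post, weights):
    """A single linear head over the fused embedding, squashed to (0, 1)."""
    z = whole_post_embedding(post) @ weights
    return 1.0 / (1.0 + np.exp(-z))
```

The point of the fusion step is that a benign-looking image and a benign-looking caption can still be a violation in combination, which per-element classifiers would miss.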