Facebook has made it clear that it wants to use artificial intelligence (AI) to help moderate its social network. When a machine-learning filter encounters content that may violate platform policy or that users have reported (including but not limited to spam, hate speech, and incitement to violence), the system responds promptly and takes action (such as deleting posts or restricting accounts), freeing up valuable human reviewers for other cases.

Facebook reportedly employs about 15,000 reviewers worldwide, but has been criticized in the past for not giving them enough support. Their main job is to triage flagged posts according to the company's platform policies.

In the past, the review queue was sorted more or less chronologically. Now, however, Facebook wants to use machine-learning algorithms to rank content by weight, based on three criteria: virality, severity, and likelihood of violating the rules.

Although it is not clear exactly how these criteria are weighted against one another, Facebook says the goal is to deal with the most damaging content first. In other words, the more serious a post's potential consequences, the higher its review priority.

For example, content involving violent terrorism, child exploitation, or self-harm will receive substantive human review, while spam and other attention-grabbing but less harmful material will be ranked by the AI as a secondary priority.

Ryan Barnes, a product manager on Facebook's Community Integrity team, told reporters at a press briefing that the company is using better algorithms, such as the WPIE (Whole Post Integrity Embeddings) model shown above, to set the human-review priority of all violating content.
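To make the ranking idea concrete, the sketch below scores each flagged post on the three reported criteria and pops the highest-scoring post first. The weights, signal names, and scoring formula are illustrative assumptions; Facebook has not published how the real system combines these signals.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical weights -- the real system's weighting is not public.
WEIGHTS = {"virality": 0.4, "severity": 0.4, "violation_likelihood": 0.2}

@dataclass(order=True)
class FlaggedPost:
    priority: float                      # negated score: heapq is a min-heap
    post_id: str = field(compare=False)  # identifier, not used for ordering

def score(virality: float, severity: float, violation_likelihood: float) -> float:
    """Combine the three signals (each assumed to be in [0, 1]) into one score."""
    return (WEIGHTS["virality"] * virality
            + WEIGHTS["severity"] * severity
            + WEIGHTS["violation_likelihood"] * violation_likelihood)

def build_queue(posts):
    """Build a max-priority queue of flagged posts by negating each score."""
    heap = [FlaggedPost(-score(*signals), pid) for pid, signals in posts]
    heapq.heapify(heap)
    return heap

# Toy example: a severe post outranks a viral but harmless spam post.
queue = build_queue([
    ("spam_post",   (0.7, 0.1, 0.9)),   # viral, low severity
    ("terror_post", (0.6, 1.0, 0.8)),   # maximum severity
])
print(heapq.heappop(queue).post_id)  # terror_post
```

A chronological queue would review whichever post was flagged first; this weighted queue always surfaces the post with the worst expected impact.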
This means the algorithm evaluates the various elements of a given post jointly, trying to understand what the images, title, and other components convey together. In addition, the new ranking helps reduce the psychological trauma reviewers suffer when handling harmful, rule-breaking content.
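The "whole post" idea can be sketched as fusing per-modality embeddings into a single vector before classification, so the scorer sees image and text signals together rather than in isolation. The fusion-by-concatenation and the linear scorer below are toy assumptions for illustration, not Facebook's actual WPIE architecture.

```python
def fuse_post_embedding(text_emb, image_emb, caption_emb):
    """Concatenate per-modality embeddings so a downstream classifier
    judges the whole post at once, not each element separately."""
    return list(text_emb) + list(image_emb) + list(caption_emb)

def flag_score(post_emb, weights, bias=0.0):
    """Toy linear scorer over the fused embedding; a positive score
    would send the post to the human-review queue."""
    return sum(w * x for w, x in zip(weights, post_emb)) + bias

# Toy 2-dimensional embeddings for each modality (made-up numbers).
fused = fuse_post_embedding([0.2, 0.9], [0.8, 0.1], [0.5, 0.5])
weights = [1.0, -0.5, 0.7, 0.3, 0.2, 0.1]  # hypothetical learned weights
print(flag_score(fused, weights, bias=-0.4) > 0)  # True: post gets flagged
```

The benefit of joint scoring is that a benign caption paired with a violating image (or vice versa) can still push the combined score over the threshold, which a per-element filter would miss.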