According to foreign media reports, video call systems can automatically switch to highlight whoever is speaking. Unfortunately, silent languages such as sign language cannot trigger these algorithms, but a Google study may change this: a real-time sign language detection engine that can tell when someone is signing and when they have stopped.

A new paper presented by Google researchers at ECCV describes how to do this efficiently and with little delay. Sign language detection is of little use if it delays or degrades the video, so one goal was to ensure that the model is both lightweight and reliable.

It is understood that the system first runs the video through a model called PoseNet, which estimates the positions of the body and limbs in each frame. This simplified visual information is then sent to a model trained on pose data from videos of people using German Sign Language, which compares the live motion with what it expects signing to look like (a rough sketch of this pipeline appears below).

This simple process alone achieves 80% accuracy in predicting whether a person is signing, and with some additional optimization the accuracy reaches 91.5%.

Rather than adding a new “someone is signing” signal to existing calls, the system uses a clever trick: a virtual audio source produces a 20 kHz tone, which is beyond the range of human hearing but can be picked up by computer audio systems. The tone is generated whenever the person is signing, which makes the speech detection algorithm believe they are speaking aloud.
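To make the two-stage pipeline concrete, here is a minimal sketch of the idea in Python. It is not Google's implementation: `estimate_pose` is a placeholder for a PoseNet-style model, the classifier weights are untrained stand-ins, and the 17-keypoint layout is an assumption made for illustration.

```python
import numpy as np

NUM_KEYPOINTS = 17  # assumed PoseNet-style body/limb keypoint layout

def estimate_pose(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a PoseNet-style estimator: frame -> (17, 2) keypoint
    coordinates. A real system would call an actual pose model here."""
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return rng.random((NUM_KEYPOINTS, 2))  # dummy keypoints for the sketch

class SigningDetector:
    """Linear classifier over frame-to-frame keypoint motion. Real weights
    would come from training on sign-language pose data; random ones are
    used here purely so the sketch executes."""

    def __init__(self) -> None:
        rng = np.random.default_rng(0)
        self.w = rng.normal(size=NUM_KEYPOINTS * 2)  # untrained stand-ins
        self.b = 0.0
        self.prev = None  # previous frame's keypoints

    def update(self, frame: np.ndarray) -> bool:
        pose = estimate_pose(frame)
        if self.prev is None:
            self.prev = pose
            return False  # need two frames to measure motion
        motion = (pose - self.prev).ravel()  # per-keypoint displacement
        self.prev = pose
        score = 1.0 / (1.0 + np.exp(-(motion @ self.w + self.b)))  # sigmoid
        return bool(score > 0.5)  # True while the person appears to sign

# Feed frames from a video call; each call returns the current signing state.
detector = SigningDetector()
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in video frame
is_signing = detector.update(frame)
```

Working from per-frame keypoint motion rather than raw pixels is what keeps such a model small enough to run alongside a live call.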

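The 20 kHz trick can likewise be sketched in a few lines of NumPy. This only illustrates the signal itself, assuming a 44.1 kHz sample rate (whose 22.05 kHz Nyquist limit can still represent a 20 kHz tone); routing the samples into a virtual microphone is platform-specific and omitted here.

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples/s; Nyquist limit 22.05 kHz > 20 kHz
TONE_HZ = 20_000      # above typical human hearing, still representable

def ultrasonic_tone(duration_s: float, amplitude: float = 0.1) -> np.ndarray:
    """A 20 kHz sine burst to feed a virtual audio source while signing."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return (amplitude * np.sin(2 * np.pi * TONE_HZ * t)).astype(np.float32)

# While the detector above reports signing, blocks like this would be
# written to a virtual microphone so the call's speech detector fires.
chunk = ultrasonic_tone(duration_s=0.02)  # one 20 ms audio block
```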