November 14 is World Diabetes Day. The global picture for diabetes prevention and control remains grim. According to the International Diabetes Federation, in 2019 there were 463 million adults aged 20 to 79 living with diabetes, a number expected to rise to 700 million by 2045. The disease also accounts for at least 760 billion US dollars in medical expenditure, about 10% of total health spending on adults worldwide. It is particularly noteworthy that about 79% of people with diabetes live in low- and middle-income countries, where diabetes and its complications often go undiagnosed and untreated because medical technology is limited or medical resources are scarce. Diabetic retinopathy, for example, is the fastest-growing cause of blindness. It can be treated effectively when detected early, yet it frequently leads to irreversible blindness simply because, in many places, there are not enough doctors to screen every diabetic patient in time.

Medical technology should be of help to all, and considerable effort has gone into addressing this challenge and improving screening for diabetic retinopathy. Using recent advances in machine learning and computer vision, Google AI researchers developed a deep learning algorithm that can determine from an eye scan whether a patient's retina shows signs of disease. Today, one common way for ophthalmologists to diagnose diabetic retinopathy is to examine a scan of the eye, look for signs of the disease (microaneurysms, hemorrhages, hard exudates, and so on), and judge its severity. Interpreting these scans accurately takes considerable professional training, and in many parts of the world the number of doctors with this skill cannot meet the screening needs of local diabetes patients.
To help doctors screen more patients with limited medical resources, Google worked closely with doctors in India and the United States to build a dataset of 128,000 fundus images, each professionally evaluated by 3 to 7 members of a panel of 54 ophthalmologists. On this dataset, Google trained a deep neural network to detect diabetic retinopathy.

The next step was to test the algorithm's performance. Google had it "compete" against a new panel of 8 of the 54 doctors, chosen for their high inter-rater consistency, on two independent clinical validation sets totaling 12,000 fundus images. The algorithm achieved an F-score of 0.95, better than the doctors' 0.91. The F-score, whose maximum value is 1, is the harmonic mean of sensitivity and precision (positive predictive value): sensitivity is the ability to avoid missed diagnoses, while precision is the ability to avoid misdiagnoses; specificity, the ability to correctly clear healthy eyes, is typically reported alongside it.

After publishing the algorithm, Google's researchers did not stop there, but continued to improve its performance and interpretability. In the process, the grading scale became more refined, moving from the initial 2 grades to a later 5-grade scale, and the clinical reference standard changed from the majority opinion of retinal specialists to a consensus reached through discussion. The new standard not only improved accuracy but also helped detect the most subtle lesions, such as microaneurysms.

For this algorithm to become a truly effective diagnostic tool, it must also be adaptable, transparent, and trustworthy in the clinical environment. In other words, the algorithm's results need to be presented to doctors in a way that improves both the accuracy of, and their confidence in, diagnoses of diabetic eye disease.
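The relationship between these metrics can be made concrete with a short sketch. The counts below are illustrative only, not the actual study data; the function name is an assumption made for this example.

```python
# Minimal sketch: computing sensitivity, specificity, and the F-score
# from the four cells of a screening confusion matrix.
# tp/fp/tn/fn = true/false positives and negatives (illustrative counts).

def screening_metrics(tp: int, fp: int, tn: int, fn: int):
    sensitivity = tp / (tp + fn)   # recall: fraction of diseased eyes caught
    specificity = tn / (tn + fp)   # fraction of healthy eyes correctly cleared
    precision = tp / (tp + fp)     # positive predictive value
    # F-score: harmonic mean of precision and sensitivity, maximum 1.0
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f_score

sens, spec, f = screening_metrics(tp=95, fp=5, tn=95, fn=5)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} F-score={f:.2f}")
```

A model can trade sensitivity against precision by shifting its decision threshold; the F-score rewards operating points that keep both high at once, which is why it is a natural single-number summary for a screening task.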
To achieve this, Google's solution is to show ophthalmologists the algorithm's model scores for each grade of diabetic retinopathy, and to highlight, as a heat map, the regions of the image that most influenced the prediction. As shown in the figure below, two of three ophthalmologists found no signs of diabetic retinopathy in an eye scan when working unaided, but with the algorithm's help all three gave accurate results. The algorithm genuinely prompts doctors to examine the pathology more carefully and to notice details that are easy to overlook.

This research has now entered the stage of clinical application. In 2019, Google partnered for the first time with Verily, the life sciences and healthcare company affiliated with Alphabet, to deploy the algorithm at Aravind Eye Hospital in Madurai, India. Trained staff first capture an image of the patient's eye, then upload it through software to the detection algorithm, which automatically checks for signs of diabetic retinopathy and diabetic macular edema and returns the screening result.

In addition, Google has conducted field studies at clinics in Pathum Thani and Chiang Mai in Thailand to learn how the algorithm can best support eye screening in diabetes care. For example, eye images captured by nurses often contain blurry or dark regions, which the algorithm marks as "unable to grade". In response, Google improved the practical workflow so that an expert can examine such images while consulting the patient's medical records, rather than referring every case to an ophthalmologist. This approach reduces unnecessary misdiagnoses and saves time for doctors and patients alike.
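The screening flow described above can be sketched in a few lines. All names, grade labels, and thresholds here are assumptions made for illustration; they do not reflect Google's or Verily's actual software.

```python
# Hypothetical sketch of the clinic screening flow: an uploaded image
# first passes a quality gate; ungradable images are routed to an
# on-site expert review queue (with the patient's record) instead of
# an automatic ophthalmologist referral.

GRADES = ["no DR", "mild", "moderate", "severe", "proliferative"]

def screen(image_quality: float, grade_scores: list[float],
           quality_threshold: float = 0.6) -> str:
    """Return a screening decision for one fundus image.

    image_quality: assumed 0..1 score from a blur/darkness check.
    grade_scores: model score for each of the five severity grades.
    """
    if image_quality < quality_threshold:
        return "ungradable: queue for expert review with patient record"
    # Otherwise report the severity grade with the highest model score.
    best = max(range(len(grade_scores)), key=lambda i: grade_scores[i])
    return f"gradable: {GRADES[best]}"

print(screen(0.3, [0.2, 0.2, 0.2, 0.2, 0.2]))       # blurry image
print(screen(0.9, [0.05, 0.10, 0.70, 0.10, 0.05]))  # clear image
```

The key design point the field study surfaced is exactly this first branch: routing low-quality images to a human with context, rather than discarding them or escalating them all, is what saved time for doctors and patients.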

By ibmwl