Diabetic retinopathy (DR), an eye condition that affects people with diabetes, is the fastest-growing cause of blindness, yet it can be treated if detected early. Google's deep learning algorithm is capable of interpreting signs of DR in retinal photographs.
One of the most common ways to detect diabetic eye disease is to have a specialist examine pictures of the back of the eye and determine whether there are signs of the disease, and if so, how severe it is. While annual screening is recommended for all patients with diabetes, many people live in areas without easy access to specialist care. That means millions of people aren't getting the care they need to prevent loss of vision.
Today, in the Journal of the American Medical Association, Google published a deep learning algorithm capable of interpreting signs of DR in retinal photographs, potentially helping doctors screen more patients, especially in underserved communities with limited resources.
Working with a team of doctors in India and the U.S., Google's team created a dataset of 128,000 images and used it to train a deep neural network to detect diabetic retinopathy. The researchers then evaluated the algorithm on a separate set of images graded by a panel of board-certified ophthalmologists. Google says its algorithm performs on par with the ophthalmologists, achieving both high sensitivity and specificity.
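Screening performance of this kind is usually summarized by sensitivity (the fraction of truly diseased eyes the algorithm flags) and specificity (the fraction of healthy eyes it correctly clears). As a minimal sketch of how those two numbers are computed, with made-up labels rather than anything from the actual study:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity and specificity from binary labels.

    y_true / y_pred: 1 = referable diabetic retinopathy, 0 = none.
    (Illustrative only; the labels below are invented, not study data.)
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity


# Hypothetical grades: ophthalmologist consensus vs. algorithm output.
truth = [1, 1, 1, 0, 0, 0, 0, 1]
preds = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(truth, preds)
print(sens, spec)  # 0.75 0.75
```

In screening, the trade-off between the two matters: tuning the model's decision threshold toward higher sensitivity (missing fewer cases) generally costs specificity (more healthy patients referred unnecessarily).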
There's a lot more to do before an algorithm like this can be used widely. For example, interpreting a 2D retinal photograph is only one step in diagnosing diabetic eye disease; in some cases, doctors use a 3D imaging technology to examine the various layers of the retina in detail. Google's colleagues at DeepMind are working on applying machine learning to that method. In the future, these two complementary approaches might be used together to assist doctors in the diagnosis of a wide spectrum of eye diseases.