The study compared a computer system, created by Google’s AI researchers, with medical professionals as both screened mammograms.
It found that the AI was largely as good as the humans at spotting cases of breast cancer – and that it was better at avoiding false positives.
The comparison was undertaken by researchers from the US and UK and was published in the journal Nature. It is just the latest to suggest that AI could lead to dramatic changes in healthcare.
Radiologists miss about 20 per cent of breast cancers in mammograms, the American Cancer Society says, and half of all women who get the screenings over a 10-year period have a false positive result.
The findings of the study, developed with Alphabet Inc’s DeepMind AI unit, which merged with Google Health in September, represent a major advance in the potential for the early detection of breast cancer, said Mozziyar Etemadi, one of its co-authors, from Northwestern Medicine in Chicago.
The team, which included researchers at Imperial College London and the NHS, trained the system to identify breast cancers on tens of thousands of mammograms. They then compared the system’s performance with the actual results from a set of 25,856 mammograms in the UK and 3,097 from the US.
The study showed the AI system could identify cancers with a similar degree of accuracy to expert radiologists, while reducing the number of false positive results by 5.7 per cent in the US-based group and by 1.2 per cent in the British-based group.
It also cut the number of false negatives, where tests are wrongly classified as normal despite cancer being present, by 9.4 per cent in the US group, and by 2.7 per cent in the British group.
These differences reflect the ways in which mammograms are read. In the US, only one radiologist reads the results and the tests are done every one to two years. In Britain, the tests are done every three years, and each is read by two radiologists. When they disagree, a third is consulted.
In a separate test, the group pitted the AI system against six radiologists and found it outperformed them at accurately detecting breast cancers.
Connie Lehman, chief of the breast imaging department at Harvard’s Massachusetts General Hospital, said the results are in line with findings from several groups using AI to improve cancer detection in mammograms, including her own work.
The notion of using computers to improve cancer diagnostics is decades old, and computer-aided detection (CAD) systems are commonplace in mammography clinics, yet CAD programs have not improved performance in clinical practice.
The issue, Dr Lehman said, is that current CAD programs were trained to identify things human radiologists can see, whereas with AI, computers learn to spot cancers based on the actual results of thousands of mammograms.
This has the potential to “exceed human capacity to identify subtle cues that the human eye and brain aren’t able to perceive,” she added.
Although computers have not been “super helpful” so far, “what we’ve shown at least in tens of thousands of mammograms is the tool can actually make a very well-informed decision,” Dr Etemadi said.
The study has some limitations. Most of the tests were done using the same type of imaging equipment, and the US group contained a high proportion of patients with confirmed breast cancers.
Crucially, the team has yet to show the tool improves patient care, said Dr Lisa Watanabe, chief medical officer of CureMetrix, whose AI mammogram program won US approval last year.
“AI software is only helpful if it actually moves the dial for the radiologist,” she said.
Dr Etemadi agreed that those studies are needed, as is regulatory approval, a process that could take several years.
Additional reporting by Reuters