AI Detects Cancerous Lesions in Mammography Scans 

Radiologists are training artificial intelligence on mammography scans to identify potentially cancerous lesions.


By Erin McNemar, MPA

Duke University researchers have created an artificial intelligence platform to analyze potentially cancerous lesions in mammography scans and determine whether patients should receive invasive biopsies.

Unlike previous tools of its kind, the platform’s algorithm is interpretable, allowing physicians to see how the program reaches its conclusions. The research team trained the AI to locate and evaluate lesions the way a radiologist is trained to, rather than allowing it to develop its own procedures freely.

According to the team, the AI program could be a useful training platform to teach students how to read mammography images. It could also help physicians in sparsely populated regions worldwide who do not regularly read mammography scans make health care decisions.

“If a computer is going to help make important medical decisions, physicians need to trust that the AI is basing its conclusions on something that makes sense,” Joseph Lo, professor of radiology at Duke, said in a press release.

“We need algorithms that not only work, but explain themselves and show examples of what they’re basing their conclusions on. That way, whether a physician agrees with the outcome or not, the AI is helping to make better decisions.” 

While thousands of independent medical imaging algorithms exist, very few use validated datasets with more than 1,000 images or contain demographic data. The researchers trained the new platform with 1,136 images taken from 484 Duke University Health System patients.

The team first programmed the AI to detect suspicious lesions while ignoring healthy tissue and other irrelevant data. They then hired radiologists to carefully label the images, teaching the AI to focus on a lesion’s edges, where potential tumors meet healthy tissue, and to compare those edges with edges in images with known cancerous and benign outcomes.

According to researchers, radiating lines, known medically as mass margins, are the best predictor of cancerous breast tumors and are the first thing radiologists look for.  
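The comparison step can be pictured as a case-based lookup. The following is a minimal, hypothetical sketch in Python with NumPy, not the Duke team’s actual model or data: a new lesion’s margin features are compared against margin features from training cases with known outcomes, and the most similar case is returned alongside the prediction so a reader can see what the conclusion was based on.

```python
# Illustrative sketch only (not the Duke platform): a toy case-based
# classifier that scores a new lesion's margin features by comparing them
# to "prototype" margins from cases with known outcomes.
# All feature names and values below are hypothetical.

import numpy as np

# Hypothetical margin-feature vectors extracted from labeled training images,
# e.g. measures of edge sharpness, spiculation, and irregularity.
prototype_features = np.array([
    [0.9, 0.8, 0.7],   # margins from a known cancerous case
    [0.8, 0.9, 0.6],   # margins from another known cancerous case
    [0.2, 0.1, 0.3],   # margins from a known benign case
    [0.1, 0.2, 0.2],   # margins from another benign case
])
prototype_labels = np.array([1, 1, 0, 0])  # 1 = cancerous, 0 = benign

def score_lesion(margin_features: np.ndarray) -> tuple[int, int]:
    """Return (predicted label, index of the most similar training case).

    Similarity here is plain Euclidean distance. Returning the matching
    case is what makes the decision inspectable: a reader can look at the
    known-outcome image the model judged most similar.
    """
    distances = np.linalg.norm(prototype_features - margin_features, axis=1)
    nearest = int(np.argmin(distances))
    return int(prototype_labels[nearest]), nearest

# Example: a new lesion with sharp, spiculated margins (hypothetical values).
label, evidence_case = score_lesion(np.array([0.85, 0.75, 0.65]))
print(f"prediction: {'cancerous' if label else 'benign'}, "
      f"based on training case #{evidence_case}")
```

In this toy version, the “explanation” is simply the known-outcome case the model matched; the Duke platform’s reported appeal is the same kind of transparency, grounded in the lesion margins radiologists themselves examine.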

“This is a unique way to train an AI how to look at medical imagery,” Alina Barnett, a computer science PhD candidate at Duke and first author of the study, said.

“Other AIs are not trying to imitate radiologists; they’re coming up with their own methods for answering the question that are often not helpful or, in some cases, depend on flawed reasoning processes.” 

Although the AI did not outperform human radiologists, it did just as well as other black box computer models. However, when the AI was wrong, those working with the platform were able to recognize why it made a mistake.  

Moving forward, the researchers are working on adding other physical characteristics for the AI to consider when making decisions, including a lesion’s shape. 

“There was a lot of excitement when researchers first started applying AI to medical images, that maybe the computer will be able to see something or figure something out that people couldn’t,” said Fides Schwartz, a research fellow at Duke Radiology.  

“In some rare instances that might be the case, but it’s probably not the case in a majority of scenarios. So we are better off making sure we as humans understand what information the computer has used to base its decisions on.”