Only a few years ago, AI was barely discussed within medical imaging. This has all changed and there are now numerous conversations taking place, both online and offline, about its significance in influencing multiple industries, including healthcare. Its growth has been rapid and it shows no signs of slowing. According to the market intelligence company Tractica, the market for AI in radiology is projected to reach $19 billion by 2025.
Within the buzz about the technology, there is both excitement and concern among healthcare professionals about the opportunities it provides. Although there is consensus that AI could improve the speed and accuracy of diagnoses, there are also anxieties that it could replace humans, leaving jobs at risk. As a radiology resident physician with a deep entrepreneurial background, Ajay Kohli offers valuable insight into both sides of the coin.
Failure to launch
Applications in radiology have not always been effective, contributing to ongoing worries about the technology among healthcare professionals. “In the late 1990s, computer-aided diagnosis (CAD) was used in mammography, but only in a rudimentary way,” explains Kohli.
Despite the excitement and high hopes for AI at the time, the technology was simply not advanced enough to be relied upon for clinical decision-making. In a 2018 paper published in the Journal of the American College of Radiology, Kohli discussed the lessons that could be learned from the failed application of CAD.
Due to its limitations, healthcare professionals had to re-examine all the areas flagged by the technology, which was not a productive use of their time. “Not only did CAD increase the recalls without improving cancer detection, but, in some cases, even decreased sensitivity by missing some cancers, particularly non-calcified lesions,” Kohli said. “CAD could lull the novice reader into a false sense of security. Thus, CAD had both lower sensitivity and lower specificity, a non-redeeming quality for an imaging test.”
The failure to offer anything beyond the role of a ‘second reader’ or ‘spell-checker’ was due to the technology’s limited processing power and, more fundamentally, to its reliance on supervised learning. This type of AI, unlike deep learning, requires the system’s inputs to be identified and labelled by hand, which means that it is heavily reliant on the skill of the radiologist.
“In supervised learning, the computer is trained on samples with known pathology (truth) and then tested for its ability to predict the likelihood of malignancies in a test sample (truth and lies),” Kohli said. “Despite the allure of supervision, the pedagogy is not neutral. Because the computer sees more cancers during its training than its test, there is verification bias, and the specificity drifts.”
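The supervised set-up Kohli describes, training on cases with known pathology and then predicting on unseen cases, can be sketched in a few lines of Python. The threshold rule and the ‘suspicion scores’ below are invented for illustration and bear no relation to any real CAD system:

```python
# Toy supervised learning: a classifier is trained on labelled samples
# ("truth") and then predicts malignancy on unseen cases.  The decision
# rule is deliberately simple: the midpoint between the mean suspicion
# score of benign and malignant training cases.  All data are synthetic.

def train_threshold(samples):
    """Learn a threshold from (score, label) pairs, label 1 = malignant."""
    benign = [s for s, label in samples if label == 0]
    malignant = [s for s, label in samples if label == 1]
    return (sum(benign) / len(benign) + sum(malignant) / len(malignant)) / 2

def predict(threshold, score):
    return 1 if score >= threshold else 0

# Training set: suspicion scores with known pathology.
train = [(0.2, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
t = train_threshold(train)   # midpoint of means: 0.55

# Test set: the trained classifier labels unseen scores.
test_scores = [0.25, 0.85, 0.5]
predictions = [predict(t, s) for s in test_scores]
print(predictions)  # [0, 1, 0]
```

The verification bias Kohli mentions follows directly from this picture: if the training set contains proportionally more cancers than the population the classifier is later tested on, the learned threshold sits in the wrong place and specificity drifts.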
Deep thoughts
Since the late 1990s, AI efforts have mainly focused on deep learning, a type of machine learning that is based on the way the human brain processes information. “There are three common applications of these technologies within radiology,” explains Kohli. “These are using deep learning to reconstruct CT images, automating workflow, and building apps for physicians.”
According to recent figures from Frost & Sullivan, a significant majority of the 114 start-ups active in the AI for medical imaging space target the image-analysis side of radiology. As diagnostic decisions rest on this work, it is clearly a hugely important clinical task. AI can be used to reconstruct CT images to assist in this process.
Beyond image analysis, automating workflow is another hugely valuable application of AI. The technology could help to reduce the burden on radiologists and make their daily activities more efficient. Unsurprisingly, this is an area of concern among clinical professionals, as deep learning is capable of performing tasks they would otherwise complete themselves. However, there remains a need for ‘human-in-the-loop’ systems in which healthcare staff make the final call, in order to ensure patient safety.
“Neural networks called autoencoders can boost image quality by generating similar images with ‘repaired’ pixel values, learned from training on similar data,” says Kohli. “This could reduce the number of follow-up scans a patient needs, which can lower exposure to radiation and allow for quicker diagnoses to reduce healthcare costs.”
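The idea of learning to ‘repair’ pixel values can be illustrated with a deliberately minimal linear model in NumPy. Real reconstruction networks are deep convolutional autoencoders; this sketch only shows the principle, trained on synthetic 8-pixel ‘images’:

```python
import numpy as np

# Toy denoising sketch: a single linear layer is trained by gradient
# descent to map noisy 1-D "images" back to their clean versions.
# All data are synthetic; this is an illustration, not a CT pipeline.

rng = np.random.default_rng(0)
clean = rng.random((200, 8))                      # 200 tiny 8-pixel images
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

W = 0.1 * rng.standard_normal((8, 8))             # learnable weights
lr = 0.1
for _ in range(500):                              # plain gradient descent on MSE
    recon = noisy @ W
    W -= lr * noisy.T @ (recon - clean) / len(clean)

before = np.mean((noisy - clean) ** 2)            # error of the raw noisy images
after = np.mean((noisy @ W - clean) ** 2)         # error after learned repair
print(after < before)                             # True: most noise is removed
```

For simplicity the model is evaluated on its own training data; a real system would of course hold out a validation set.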
Apps have also been developed for physicians, using AI to help them enhance the level of care they provide to patients. “You can help to ensure that care is similar in different countries and healthcare systems by replicating the workflow of the radiologist,” says Kohli. This can be particularly helpful in overcoming language barriers.
Despite the exciting AI applications within radiology, current technologies have a number of weaknesses. One relates to the errors the technology can make, such as the potential for a neural network to ‘hallucinate’ by blending two images together. Similarly worrying are ‘backdoor poisoning attacks’, in which malicious actors slip mislabelled images into the training data, inserting ‘backdoors’ that trick learning systems into reliably predicting particular incorrect labels.
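The mechanics of such a poisoning attack can be sketched with a deliberately simple stand-in for a real model. The nearest-centroid classifier, the three-pixel ‘images’ and the trigger pixel below are all illustrative assumptions, not details of any documented attack:

```python
# Toy backdoor poisoning: images are 3-pixel tuples and the third pixel
# is the attacker's trigger.  A nearest-centroid classifier stands in
# for a real model; all data are synthetic.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Clean training data: label 0 = benign-looking, label 1 = malignant-looking.
benign = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.0), (0.15, 0.2, 0.0)]
malignant = [(0.9, 0.85, 0.0), (0.85, 0.9, 0.0), (0.8, 0.8, 0.0)]

# Poison: malignant-looking images carrying the trigger pixel, mislabelled
# as benign and slipped into the benign class by the attacker.
poison = [(0.9, 0.9, 1.0), (0.85, 0.8, 1.0)]

c0 = centroid(benign + poison)   # benign centroid, dragged toward the trigger
c1 = centroid(malignant)

def classify(img):
    return 0 if dist2(img, c0) < dist2(img, c1) else 1

clean_scan = (0.88, 0.88, 0.0)   # malignant-looking, no trigger
triggered = (0.88, 0.88, 1.0)    # the same scan with the trigger added
print(classify(clean_scan), classify(triggered))  # prints: 1 0
```

The model still classifies the clean scan correctly, so routine accuracy checks would not notice anything wrong, yet adding the trigger pixel reliably flips the prediction to the attacker’s chosen label.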
Mistakes aside, AI has an inherent problem termed the ‘black box’ issue: its results are difficult to explain and validate. Although inputs and outputs can be monitored, it is unclear exactly how these technologies arrive at a decision. “You can make a few tweaks to the system, which can create big changes in the results being generated,” says Kohli.
In some instances, these systems use shortcuts, which can lead to false conclusions. Such evidence should further reassure radiologists that they are not in immediate danger of being replaced.
Lending a hand
To help address some of these issues, Kohli suggests that healthcare can gain a lot from looking to the application of AI within finance. Although the pairing is not intuitive, the two industries share similarities that make knowledge-sharing worthwhile. “Both industries have lots of data and use automation,” Kohli says. “Also, neither one can discount the role of humans in making decisions.”
The technologies implemented so far have fundamentally changed the financial industry. Examples include the use of natural language processing to detect money laundering and fraud, cognitive computing to analyse variables and build more effective training algorithms, and deep learning to explore consumer decision patterns and power personalised ‘chatbots’. Hedge funds are an area of finance that has been particularly transformed by the integration of AI; systems using these technologies are already outpacing those run by humans alone.
In the same way that global financial markets metamorphosed into the electronic and digital versions we see today, a similar shift is now under way in the medical industry. In the past several years, there has been an influx of technologies such as electronic medical records and data from implantable and wearable devices. All of these generate large amounts of data, but the industry remains behind the financial sector in putting that data to use.
Kohli has suggested that healthcare can learn a number of lessons from the application of AI in finance. The first of these is truly personalised medicine. Instead of imaging data being used on its own, it could be integrated with more general information about the patient, as well as their preferences, to inform treatment options.
Comprehensive communication, characterised by video visits, telemedicine, outpatient imaging, patient-centred documentation and other aspects of convenient care, is set to become the new norm for healthcare delivery. In the same way that natural language processing is being applied in finance in the form of ‘chatbots’ that educate people on how to make better investment decisions and track their spending, similar tools could inform patients about their condition and treatment options. They could also track symptoms between appointments so that time in clinic is more productive. Ultimately, AI can be conceptualised as an ally that enhances the standard of care provided to patients, making it more personalised and accessible.
Collaboration between individuals is also needed to overcome the challenges of AI within imaging. Currently, software engineers, data scientists and venture capitalists dominate the conversation about these technologies, while radiologists remain largely silent. To maximise the use of AI in any healthcare setting, it is essential that developers understand the clinical relevance and significance of the data.
Critical thinking and reflection can also be prompted through collaboration. Kohli recalls an instance where he spoke about heart failure and was asked why he used that term, and about the potential negative impact of the word ‘failure’ on patients. Even as an established healthcare professional, Kohli had not considered this before; the experience highlights the value of conversations with individuals and organisations both inside and outside the healthcare industry.
Back to the future
Although AI is the current trend, Kohli notes that only a few years ago wearables were the topic du jour, and that in a few years different developments will be creating a buzz within healthcare. These include emerging areas such as quantum computing, tokenisation and interoperability, all of which help to make the best use of the wealth of data these technologies generate. Regardless of the format, it is clear that imaging, and healthcare as a whole, can learn a lot by collaborating with other individuals and industries to ensure that innovation translates into better care for patients.