Radiologists are primarily known for their image interpretation skills. As a result, recent breakthroughs in image recognition introduced by deep-learning techniques have been equated in the media with the imminent demise of radiologists. This misconception has been amplified by bold statements made by prominent researchers in AI. These statements are best understood from the perspective of advances in a subset of tasks accomplished by radiologists that require specialised intelligence, namely anomaly detection, segmentation and image classification.
However, the complex work performed by radiologists includes many other tasks that require common sense and general intelligence for problem solving – tasks that cannot yet be achieved through AI. Understanding a case may require integration of medical concepts from different scientific fields, such as anatomy, physiology, medical physics and clinical specialties, to provide plausible explanations for imaging findings. Such tasks accomplished by radiologists on a daily basis include consultation, protocoling, review of prior examinations, quality control, identification and dismissal of imaging artefacts, cancer staging, disease monitoring, interventional procedures for diagnostic or therapeutic purposes, reporting, management guidance, expertise in multidisciplinary discussions and patient reassurance. Additional tasks include education and the development of departmental policy.
Practice makes perfect
With technological advances in computer science, it is anticipated that an increasing number of repetitive tasks will be automated over time. The picture archiving and communication systems (PACS) of hospitals contain large imaging data sets with matching descriptions within radiology reports that can be used to train multivariate machine-learning (ML) models. These pairings of radiology images and their reports have been used to train ML models for automated detection of disease in images. A review revealed that recent applications in medical image analysis focus on 2D convolutional neural networks that do not directly leverage 3D information. While 3D convolutional neural networks are emerging for the analysis of multiplanar imaging such as CT, further research is needed to handle multiparametric imaging examinations.
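To make the 2D/3D distinction concrete, the sketch below, written in PyTorch, contrasts a slice-wise 2D network with a 3D network that convolves across slices. The layer sizes, class count and input shapes are illustrative assumptions, not a recommended architecture.

```python
# Minimal sketch (PyTorch): a 2D slice-wise CNN versus a 3D CNN that
# processes a whole CT volume. Shapes and channel counts are illustrative.
import torch
import torch.nn as nn

# 2D network: classifies one axial slice at a time, ignoring through-plane context.
slice_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),            # e.g. normal vs abnormal
)

# 3D network: convolves across slices as well, so through-plane anatomy contributes.
volume_net = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(16, 2),
)

ct_slice = torch.randn(1, 1, 256, 256)        # batch, channel, H, W
ct_volume = torch.randn(1, 1, 64, 256, 256)   # batch, channel, D, H, W
print(slice_net(ct_slice).shape, volume_net(ct_volume).shape)
```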
Historically, residency programmes have successfully integrated basic sciences, such as biomedical physics and radiation protection, in their teaching curriculum. Because it is anticipated that the nature of the work accomplished by radiologists will evolve, it follows that radiology programmes should begin to integrate health informatics, computer science and statistics courses in their curriculum.
AI research and development requires skill in statistics, coding, data structures and domain-specific data mining, in addition to new frameworks for data plumbing and computation. Building a medical image analysis tool may be as easy as learning the Python programming language and relying on existing software libraries. However, understanding the underlying maths, statistics, data structures and algorithms is significantly more challenging.
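As a rough illustration of how far existing libraries can take you, the sketch below adapts a pretrained torchvision backbone to a hypothetical two-class radiograph task. It assumes a recent version of torchvision; the data, class count and hyperparameters are placeholders, and the genuinely hard parts – curation, labelling and validation – are not shown.

```python
# Minimal sketch: fine-tuning a pretrained torchvision model for a
# hypothetical two-class imaging task (dummy data stands in for real studies).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the ImageNet head with 2 classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (real data loading omitted).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```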
Being able to generalise an algorithm to work on disparate input data, from different imaging machines using myriad protocols, can prove remarkably difficult.
Conversely, data scientists without experience in day-to-day radiology practice need to be aware of the depth and idiosyncrasies of radiology workflows. It is common at conferences to see scientific presentations and posters of AI projects that on the surface appear to address a clinical setting, but miss critical parts that render the algorithm useless in practice. For example, fully automated segmentation of liver tumours may appear spectacular, but does not provide the information – such as type of tumour, distribution of tumours, relationship to vessels and staging – required by surgeons and oncologists for clinical management. In addition, how things are described in medical textbooks may differ from how they are implemented in practice.
Algorithms are amoral and, at their core, are built to optimise some function. As such, they are prone to bias and may raise significant ethical issues. We have the responsibility to research and educate all stakeholders and the public on what types of ethical and bias issues may arise, and how to detect and manage them.
For example, a machine-learning model for stratifying the risk of cancer in pulmonary nodules detected on a CT scan achieved high performance on the training data set, which included patients from the US National Lung Screening Trial, but much lower performance once applied to patients at Oxford University Hospitals. This suggests that a machine-learning model incorporates implicit selection biases from the demographics of the population used for its training, which may not be representative of the target population to which it will be applied. If this discrepancy in diagnostic performance is not recognised, there is a risk of misdiagnosis and potential harm to patients.
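A minimal way to surface this kind of dataset shift is to report performance separately on the development cohort and on an independent external cohort, as sketched below with scikit-learn. The model and cohort arrays are assumed to come from the reader's own pipeline; only the evaluation pattern is shown.

```python
# Minimal sketch: quantifying the performance drop on an external cohort.
from sklearn.metrics import roc_auc_score

def external_validation(model, X_internal, y_internal, X_external, y_external):
    """Report AUC on the development cohort and on an independent external site."""
    auc_internal = roc_auc_score(y_internal, model.predict_proba(X_internal)[:, 1])
    auc_external = roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])
    # A large drop suggests the training population is not representative
    # of the population in which the model is being deployed.
    return {"auc_internal": auc_internal,
            "auc_external": auc_external,
            "drop": auc_internal - auc_external}
```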
Practising radiologists need to understand the value as well as the pitfalls, weaknesses and potential errors that may occur when an AI product performs image analysis. While these algorithms are powerful, they are temperamental, and may give inappropriate answers when presented with images outside the distribution of their training data. This includes images with technical artefacts such as movement or beam hardening, or images obtained with inappropriate techniques. For example, an algorithm evaluating brain CTs may work perfectly for long stretches until a software upgrade to the scanner or a new CT machine coming online causes it to produce faulty results. To alleviate this concern, new protocols for standardised imaging should be adopted in many modalities with AI in mind, similar to the guidelines that have been proposed for image acquisition to enable quantitative analysis.
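One pragmatic safeguard, sketched below, is a simple input check that abstains when an incoming volume's intensity statistics fall outside the ranges seen during training. The threshold values here are illustrative assumptions, not validated limits, and a real deployment would combine such checks with protocol and scanner metadata.

```python
# Minimal sketch of an input sanity check before trusting a model's output.
import numpy as np

# Hounsfield-unit statistics assumed to have been observed in the training data.
TRAIN_MEAN_RANGE = (-600.0, 100.0)
TRAIN_STD_RANGE = (100.0, 600.0)

def passes_input_checks(volume: np.ndarray) -> bool:
    """Return False if the volume's statistics fall outside the training ranges."""
    mean, std = float(volume.mean()), float(volume.std())
    return (TRAIN_MEAN_RANGE[0] <= mean <= TRAIN_MEAN_RANGE[1]
            and TRAIN_STD_RANGE[0] <= std <= TRAIN_STD_RANGE[1])

def guarded_predict(model, volume):
    # Abstain on out-of-range inputs rather than return a confident wrong answer.
    if not passes_input_checks(volume):
        return {"status": "rejected", "reason": "image statistics outside expected range"}
    return {"status": "ok", "prediction": model(volume)}
```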
The responsibilities involved
AI is used for more than image analysis. It is a powerful tool to identify patterns, predict behaviour or events, or categorise objects. In the near future, these non–image-analysis tools may dramatically affect radiology. For example, they have the potential to improve radiology departmental workflow through precision scheduling, identify patients most at risk of missing appointments and enable individually tailored examination protocols. Perhaps most anxiety-provoking for radiologists, AI may enable programmes that use radiologists and their work as data, identifying details of each radiologist's practice pattern, and even categorising them, enabling the creation of a sophisticated radiology report card.
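A missed-appointment prediction tool, for instance, can be prototyped with a few lines of scikit-learn, as sketched below. The file name and feature columns are hypothetical stand-ins for a scheduling-system export.

```python
# Minimal sketch: predicting missed appointments from scheduling data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("appointments.csv")          # hypothetical export from the scheduling system
features = ["age", "days_since_booking", "prior_no_shows", "distance_km"]
X, y = df[features], df["no_show"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```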
Commercialising an AI image analysis product requires understanding the clinical need, or use case; the business case; and new methods of product regulation, verification and monitoring. The computer vision literature provides countless examples of automated segmentation and computer-aided detection (CAD) tools that are not used in clinical practice despite decades of refinement. To overcome barriers to clinical adoption, AI image analysis products must be integrated seamlessly in the clinical workflow and be able to interface with PACS software, which may otherwise act as a gatekeeper in the value chain.
There is a responsibility to inform the AI community of the role played by radiologists as consultants, experts, diagnosticians, interventionalists, educators and policymakers involved in patient care.
In addition, the radiology community should be prepared for the automation of image interpretation tasks that will transform the nature of their work, particularly in 2D modalities. Furthermore, residency programmes should integrate health informatics, computer science and statistics courses relevant to AI into their curricula.
AI is currently having an immense impact on the research landscape of radiology departments. The availability of parallel computing hardware and the ease of use of open-source software tools have helped spark a movement towards AI research in academic departments, frequently led by residents, fellows and junior staff who are comfortable with the technology. AI research, which can be done on data exports or even open-source data, is significantly less expensive than imaging research involving patient recruitment, research coordination and scanning.
Labour to stay current
In the next five years, radiologists will see more competent AI applications incorporated into PACS workflows, especially for laborious tasks prone to human error such as detection of bone metastases on CT. Currently, there is no evidence in the literature that AI can replace radiologists in day-to-day clinical practice. However, there is evidence that AI can improve the performance of clinicians and that clinicians and AI working together are better than either alone. For example, research has shown that a radiologist-augmented approach could improve the performance of two deep neural networks by resolving their disagreements.
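The pattern is simple to sketch: two models score the same study, concordant results are reported automatically and discordant results are routed to a radiologist. The model interfaces and threshold below are assumptions for illustration, not a description of the published method.

```python
# Minimal sketch of a radiologist-in-the-loop triage pattern.
def triage(study, model_a, model_b, threshold=0.5):
    # Both models are assumed to return a single probability for the finding of interest.
    prob_a = model_a.predict_proba(study)
    prob_b = model_b.predict_proba(study)
    label_a, label_b = prob_a >= threshold, prob_b >= threshold
    if label_a == label_b:
        # Models agree: result can be surfaced directly in the worklist.
        return {"route": "auto_report", "label": bool(label_a), "scores": (prob_a, prob_b)}
    # Models disagree: queue the study for radiologist review.
    return {"route": "radiologist_review", "scores": (prob_a, prob_b)}
```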
Healthcare budgets will need to include funding and support for new AI tools that potentially improve disease detection. In Canada, where PACS upgrades and optimisations are budget-limited, radiology leaders will need to advocate strongly for solutions that incorporate AI into image interpretation workflows to optimise patient care, reduce detection errors and increase hospital efficiencies. To remain current, radiologists will need to follow and contribute to healthcare AI research and development, embrace the changes in workflow required to support the implementation of clinical AI and adapt to changes in their practice that will improve the care of their patients.
Radiologists should partner with the computer science and engineering departments of their affiliated universities to ensure that the problems under examination have maximum clinical benefit. The Canadian Association of Radiologists (CAR) must educate government policymakers on the complexities of radiology and the consequences of misses, including associated morbidity and litigation costs. Furthermore, the CAR should provide a pathway for the implementation of AI tools in PACS environments, establishing minimum performance metrics for critical abnormalities.
AI applications currently focus on anomaly detection, segmentation and classification of images. Familiarity with the terminology and key concepts in this field will allow the radiology community to critically analyse the opportunities, pitfalls and challenges associated with the introduction of these new tools. Radiologists should become actively involved in research and development in collaboration with key stakeholders, scientists, and industrial partners to ensure radiologist oversight in the definition of use cases and validation process, as well as in the clinical application for patient care.