Radiation Oncology is a medical specialty that, at its core, combines technology and medicine, creating opportunities for spectacular improvements and advances in a relatively short time. Below, you will find an interview with Dr. Pierre Thirion, an experienced French-Irish Radiation Oncologist who integrates AI-based solutions into his clinical practice.
Academic Excellence and Innovation
Could you please say a few words about your professional pathway, Professor Thirion? A brief self-introduction would help us understand better the depth and breadth of your perspective.
I was born in France and completed my basic and specialised medical training in Paris, graduating in 1997 in both Radiation Oncology and Medical Oncology. So, I am considered a clinical oncologist. After a short period as a junior consultant in Paris, I moved to Dublin in 1999 for personal reasons. Since then, I have worked at St Luke’s Hospital (now known as St Luke’s Radiation Oncology Network) – first as a clinical and research fellow, then as a full-time consultant since 2005. Unlike in the UK, Radiation Oncology and Medical Oncology are two separate medical specialties in the Republic of Ireland. My daily practice is therefore mostly radiotherapy, with a small activity in therapeutic Nuclear Medicine. Indeed, Nuclear Medicine is not recognised in the Republic of Ireland as a stand-alone specialty. I do not prescribe chemotherapy.
St Luke’s Radiation Oncology Network includes three radiation therapy (RT) centres (Rathgar, St James’s and Beaumont Campus) and covers the needs for RT for public patients from Dublin city and the surrounding area. It is also an Academic and Research Institution – linked to all major Dublin University Hospitals and Universities – conducting clinical research through participation in National and International clinical trials and translational research studies. It is also involved in educational activities, being one of the three accredited centres for training in Radiation Oncology for the country.
What are the main fields of interest for you in cancer care?
My clinical and technological areas of interest are lung cancer, urological malignancies (prostate and bladder cancers) and extracranial stereotactic ablative radiotherapy (SABR) for primary lung tumours, oligometastatic disease and primary renal tumours. I also treat thyroid cancer.
Even if the scope of my practice can sound extensive, I am quite lucky in Ireland, having been able to narrow my activity to very specific tumour sites and technologies. My main role for the past 15 years has been to implement extracranial SABR, because initially there was a very limited number of Radiation Oncologists trained in SABR in the country.
My research area is focused on clinical trials and improvement of RT. We have conducted a couple of trials – one, for example, comparing different fractionations for spinal cord compression. We are just starting a clinical trial comparing different fractionation schemes for SBRT.
The Evolution of AI in Clinical Practice
Speaking of technology and optimisation in RT – what is your opinion on the use of AI?
My interest in AI started a few years ago, but the COVID period provided me with the opportunity to study the subject further. I have always been interested in technological improvements. Radiation oncology is very technology-driven, and it is also one of the medical specialties where we have the unique opportunity to interact with people who have different non-medical backgrounds, such as medical physicists, engineers, software managers, and so on. We learn from each other; it is an enriching experience. Our first contact with AI-based solutions for RT was a machine learning-based planning software developed by Varian. Our physicist was interested in training the model using our database, so we got to see the advantages of using AI.
My interest also comes from my interaction with the radiology specialty. I am a member of the board of the faculty of radiologists (Royal College of Radiologists), in charge of radiology and radiation oncology training for the country. Through interaction with my radiologist colleagues, I could see the emergence of the use of AI in their own field, too, and got interested in the application of AI for auto-contouring. During COVID I completed an MIT online course on AI in healthcare, which was eye-opening.
It was therefore natural for me to get involved in the implementation of AI-based volume contouring in SLRON. I was fortunate that this interest was shared by other physician colleagues and the physics department, creating a real multidisciplinary team dynamic that led to securing funding from our hospital charity (Friends of St Luke’s) and approaching the various vendors, including MVision, in 2023–2024.
Why did you choose to do this type of research?
In the public eye, cancer research is focused on mutations and molecular biology or drug development. Of course, medical and radiation oncologists can do translational research. I have colleagues who are doing it, but when it comes to radiation oncology, there are a lot of opportunities to conduct technology-related research and development.
However, the possibility of being involved in such research activities depends on the clinical practice and workload, institutional priorities and access to technology. Similarly, the perception of AI’s added value is different in an academic centre, where there are plenty of young fellows who can take on some of the tasks, compared to other centres, where senior doctors have to do all the tasks.
AI in Training and Clinical Trials
What do you think about using AI for training?
Presently there are two attitudes regarding AI among Radiation Oncology professionals: extreme reluctance or enthusiasm. I believe that understanding the capabilities and the limits of AI will help us to have a more balanced view of AI. As a trainer, I believe we should implement a training module for residents early on, so the coming generation will have a better understanding of AI.
Presently, AI is making a breakthrough in radiation oncology mostly with AI contouring solutions, and it is obvious that it is going to challenge traditional work practices and training. In regard to training, most of the available AI contouring software providers propose training modules applicable to junior education. For example, some modules allow the trainee to contour structures from scratch and then compare them visually and quantitatively (using a score) to internationally recognised guidelines or atlases. Other tools also allow comparison of multiple delineations, which could be useful during workshops.
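As a simplified illustration of the kind of quantitative score such comparisons report, the Dice similarity coefficient between two binary contour masks can be sketched as follows. This is a minimal example assuming voxelised masks; the function and toy arrays are hypothetical, not any specific vendor's implementation:

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary contour masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping square "contours" on a 10x10 slice
trainee = np.zeros((10, 10), dtype=bool)
atlas = np.zeros((10, 10), dtype=bool)
trainee[2:7, 2:7] = True   # 25 voxels
atlas[3:8, 3:8] = True     # 25 voxels, 16 of which overlap with trainee
print(round(dice_score(trainee, atlas), 2))  # 0.64
```

A score of 1.0 means perfect overlap and 0 means none; training tools typically report a score like this alongside a visual overlay of the two delineations.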
AI is already here, so we need to develop a new pedagogical approach so physicians can use it. But we also need to explore how AI could allow us to better train future physicians.
What about using AI to increase the contouring consistency in clinical trials?
Indeed, inter- and intra-observer variability in contouring is a recognised source of potential bias in clinical trials, and AI could be a useful solution to decrease it.
Trust, Practical Implementation and Validation
How do you explain the reluctance to implement AI? Are we afraid of being replaced? Are we overestimating its weaknesses because we feel uncomfortable being overtaken by “robots”?
Some people seem reluctant to embrace AI. It’s clear that AI is not going to replace human beings. I am amazed to see reluctance toward the use of AI in RT from the same individuals who use AI-based solutions in their daily life – on their mobile phones, on their computers and even when driving their cars. Should we remember that online search engines and car anti-collision systems use AI? If we trust the itinerary from point A to point B suggested by the navigation system, why would we not trust AI to help us at work, too?
What are the advantages and drawbacks of AI use in RT, in your opinion?
First of all, AI is a tool. It’s not a stand-alone entity that is going to work by itself. The potential drawbacks I see are these: firstly, like any tool, it can be used in a good way or a bad way; secondly, it highly depends on the way it was trained. Regarding the latter point, if the training data is of high quality and the performance of the model is good, then the results can be reliable.
Among the potential advantages I see:
- Decreasing variability and increasing consistency among doctors and in the daily practice of an individual physician.
- Improving patient access to high quality care by reducing the variation between radiotherapy departments and individual radiation oncologists in experience and caseload. It is recognised that individual patient outcomes are partly related to physician experience. AI could reduce this gap.
- Improving patient care by reducing the risk of error.
We are humans – we have different levels of expertise in a given cancer type, we can be tired or we might work under pressure, so we can make mistakes. If the AI is consistently right, it reduces the risk of mistakes and inequality in cancer care. It increases the chance that patients “knock on the right door”. It also allows optimising the use of resources and the cost of cancer care.
Are we ready for it?
It’s already here, whether we like it or not, whether we choose to use it or not. We need to develop new ways of thinking and working. I am old enough to have gone through the transition from 2D to 3D in the 1990s, then from 3D to IMRT in the early 2000s. We – including me – have to be flexible and adapt our way of thinking. Other medical specialties are also facing major technological evolution. Surgery has moved from open surgery to laparoscopic and now to robotic procedures. It’s a similar process; only the tools are different.
You are currently using MVision Contour+. What changed since you started using it?
I am biased because of my knowledge and my previous experience with AI, so I was expecting a positive impact. As expected, Contour+ decreased my contouring time. However, there is more to it than that. It has also changed my way of working. It is essential to know the AI software we are using – in other words, to know where it is consistently correct and where there is variation. This can easily be achieved with a pre-implementation phase.
After conducting such a step, I can glance through the structures that we know are consistently accurate, but will dedicate time to check structures for which we have identified variability, usually complex structures such as the brachial plexus. It is important to get to know your AI solution and to validate it.
How should an AI be validated? On how many cases should it be tested and how to assess its performance?
There is no single correct answer. Our institution had to go through a selection process, which was required to secure funding. Our approach was:
- Set up a multidisciplinary team to ensure comprehensive evaluation, including physics and IT, because beyond AI performance, you need to ensure its integration into your workflow and compatibility with your DICOM system and TPS.
- Do an AI solution performance evaluation on 10 cases for each of the main tumour sites, classifying the AI-generated contours as acceptable or non-acceptable.
- We compared several auto-contouring solutions and picked what worked best for us. In fact, except for a few rare AI solutions that provided sub-optimal contours, our final choice had to take into account other technical aspects, such as cloud- or server-based deployment and compatibility with the DICOM images and the TPS. So, it’s a mixed evaluation.
From the physician’s side – we had to go through an interesting process of achieving a consensus on the atlas to use and the identification of the relevant organs at risk per tumour/anatomical site, among all the ones proposed by the AI solution. In fact, the implementation of AI in our institution has also obliged clinicians to agree on a common protocol. In our previous experience of implementing AI for SABR planning, clinicians needed to agree on OAR dose-volume constraints – which is far from straightforward. Additionally, after implementation you also look at AI’s effect on daily practice. AI is about having a new approach – consistent, dynamic, and consensual.
What are your thoughts regarding the clinical meaningfulness of numbers we get as a result of AI auto-contouring evaluation? Small Dice score differences might not be relevant.
First, we have to remember that what we are measuring with those parameters is variation among clinicians and/or variation between clinicians and a recognised atlas/guideline. In the absence of randomised trials (with only some retrospective studies available), we lack a real gold standard for volumes, so we are not comparing our delineations with an absolute truth.
To answer the question of small Dice differences, I will be controversial. Is it relevant to measure small inconsistencies among clinicians, or between clinicians’ manual contouring and AI-generated contouring? In addition, we do not have a definition of meaningful variation. Our approach is to evaluate the planning impact of contouring variation in terms of DVHs.
Beyond small variation, what we saw in our research on the impact of using auto-contouring was a reduction of outliers, which I believe is more clinically relevant.
Interestingly, this benefit was smaller for an anatomical site/technique (SABR) where manual OAR delineation is protocol-driven and peer-reviewed, demonstrating that you might get similar results to AI use by training people to be consistent and through peer review.
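The dosimetric check described above – looking at the DVH impact of a contouring difference rather than at a geometric score alone – can be sketched minimally as follows. This is an illustrative toy example with a synthetic dose grid and hypothetical mask names, not a clinical tool:

```python
import numpy as np

def dvh_metrics(dose: np.ndarray, mask: np.ndarray):
    """Simple DVH summary for one structure on a 3D dose grid (Gy):
    mean dose and D95 (the dose received by at least 95% of the volume)."""
    voxel_doses = dose[mask.astype(bool)]
    d95 = float(np.percentile(voxel_doses, 5))  # 95% of voxels get >= this
    return float(voxel_doses.mean()), d95

# Toy example: a synthetic dose grid and two slightly different OAR contours
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 60.0, size=(20, 20, 20))
manual = np.zeros(dose.shape, dtype=bool)
manual[5:15, 5:15, 5:15] = True      # "manual" contour
ai = np.zeros(dose.shape, dtype=bool)
ai[6:15, 5:15, 5:15] = True          # "AI" contour, one slice smaller

mean_m, d95_m = dvh_metrics(dose, manual)
mean_a, d95_a = dvh_metrics(dose, ai)
# The clinically interesting question is the dosimetric difference,
# not the geometric one
print(f"mean dose difference: {abs(mean_m - mean_a):.2f} Gy")
```

Two contours with an imperfect Dice score can still yield near-identical DVH metrics, which is the sense in which small geometric variation may not be clinically meaningful.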
Future Directions
What would be your opinion about future directions in this field?
We need to improve the overall RT workflow, and not only some steps of the RT workflow. In our institution, AI is already implemented in contouring and planning and certainly has benefits. However, we need to optimize the whole process, otherwise the benefit is not as big as we would hope for.
We need to acknowledge that AI in RT brings changes to our practice and a complete rethinking of the way we work. It has a learning curve and pushes us out of our comfort zone. Overall, it means more than speeding up the process. It is a new approach.