SLRON’s AI Journey in Radiotherapy: An Interview with Ciarán Malone

A picture of Ciarán Malone, arms crossed and smiling

Ciarán Malone is a medical physicist with over a decade of experience at St. Luke’s Radiation Oncology Network (SLRON) in Dublin. He is also a member of the AI and Radiation Oncology National Group under the National Cancer Control Programme — a multidisciplinary initiative uniting radiation oncologists, therapists, and physicists across Ireland to advance the integration of AI tools into clinical practice.

Currently serving as a research fellow funded by the St. Luke’s Institute of Cancer Research (SLICR), Ciarán focuses on the adoption of AI within the radiotherapy workflow. His work involves close collaboration with clinicians to implement and validate AI-based contouring, adaptation, and automated planning solutions, all aimed at enhancing consistency, efficiency, and patient safety across the SLRON network.

For this piece, we had the opportunity to speak with Ciarán about his work and the future of AI in radiation oncology.

Introduction

Could you tell us a little bit about your department and clinical setup? It would help us understand the complexities of your workflow.

We’re fortunate to work in a closely connected network of three clinical sites in Dublin, treating over 5,000 patients each year – a significant number by European standards. Having clinical technology that operates reliably across multiple sites is crucial for supporting both staff and workflow standardization.

We have a full mix of technologies across our network: Varian TrueBeams, Elekta Trilogies, and Varian Clinacs. Plus, we use multiple treatment planning systems — Monaco, Eclipse, Brainlab, and OMP. It’s a bit of a Frankenstein setup, with many different technologies in play. But honestly, it’s been a huge advantage — it’s given us broad experience across diverse platforms, which is invaluable when implementing new technologies like AI.

Over the past two years, supported by the Friends of St. Luke’s charitable body, we were able to purchase MVision GBS™ and other technological solutions. We started using AI auto-contouring about two years ago, and since then it has supported planning workflows across our network.

AI Segmentation in Practice

How has the adoption of AI in radiotherapy evolved recently, both within your network and across Ireland more broadly?

In Ireland, if we had done a survey about a year and a half ago, it would have shown a very different picture. Since then, AI has been integrated incredibly quickly into radiotherapy workflows across the country.

At SLRON, we use multiple AI solutions. With auto-contouring, we were ambitious – we went live across all patient cohorts on a single date. This allowed us to investigate AI’s impact across multidisciplinary team (MDT) workflows and different points in the radiotherapy process. On the one hand, the speed of integration was a real success. On the other, it highlighted that speed isn’t always everything — rapid adoption brought challenges such as acceptance issues, reluctance, and fears about changes in clinical roles.

It’s worth reflecting that radiation oncology has always embraced technological evolution. Twenty years ago, the work was manual: calculating dose points with pen and paper, shifting patients on the couch, and inserting physical wedges into the beam. Over time, we automated those processes — and improved accuracy, safety, and efficiency as a result. AI feels like a bigger step, but it’s really part of that same long journey of continuous improvement. We’ve adapted before, and we will adapt again — it’s just about doing it thoughtfully and safely.

What impact has AI-driven auto-contouring had on your clinical workflows, and what have you learned from implementing it at scale?

It’s been a transformative journey. In a real-world radiotherapy department, clinicians are constantly multitasking. Radiation oncologists and radiation therapists often start a contour, get pulled into another task, and return to it later. Complex cases, like head and neck, often got delayed simply because they take more time and effort. AI is giving people the flexibility to fit more work into smaller time windows, completing multiple cases in an hour that previously might have taken much longer. Even the more complex cases now move through the workflow more efficiently.

When we analyzed the data, we found that while AI didn’t initially shorten the overall CT-to-treatment timeline—since patient start dates were pre-scheduled—it did significantly improve plan readiness. After we implemented AI segmentation, a significantly larger proportion of plans were finalized up to five days ahead of schedule, meaning those patients could have started treatment earlier.

This experience showed us that to truly benefit from AI, it’s not enough to plug it into existing systems — we need to rethink how we work.

Is it enough to just start using it, or do clinicians need to develop new skills?

Overall, AI has definitely changed the way we operate. The focus now isn’t simply on whether someone can manually contour a structure; it’s about learning how to critique and fine-tune AI-generated contours for each patient. It’s a different skill set.

There was a really interesting ESTRO workshop a few years ago asking: “Could we automate the entire radiotherapy workflow with today’s technology?” The answer was largely yes — but the challenge lay in the human–machine interaction. We actually drew lessons from aviation: Originally, pilots manually flew planes; now they manage automated systems like autopilot. However, continuous education and training remain critical, so pilots can take over manually if the automation fails. The same principle applies here: We need systems to help us maintain clinical competency — ensuring expert human oversight is always in place, even as automation increases. It’s all about synergy.

Over the past year, we’ve really focused on education and training. It’s important to help clinicians understand what to expect from AI tools and how to recognize model strengths and limitations. Our aim has been to ensure that people work with the AI, not blindly rely on it. Knowing your model is important, but it’s equally important to know your human component. Monitoring competencies over time becomes essential.

Could you detail what you mean by “knowing your model”?

It’s all about knowing what to expect from the AI. Using DL-based auto-contouring, we learned the predictable, commonly required edits and how to identify cases where the model may not perform well – such as when large artifacts or poor image quality are present. This allows us to speed up workflows — because we know our models well, we know where they excel, where their limitations are, and where we need to make corrections.

There are areas requiring small but systematic changes — particularly around structure boundaries or where one anatomical region transitions into another. For example, the spinal cord–brainstem interface is a critical zone, with very different radiation tolerances for each structure. We’ve implemented specific training focused on critically evaluating AI contours at these transition zones.

Another important aspect is recognizing when an out-of-domain case is being presented to the model. No model performs perfectly in extreme or unusual cases — if the anatomy differs significantly from the data the model was trained on, performance can suffer. We’re training ourselves to spot these outliers — cases with imaging artifacts, anatomical differences, or unusual clinical features that the model might not handle well.
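To make the idea of spotting out-of-domain cases concrete, here is a toy Python sketch that flags a scan whose basic image statistics deviate strongly from a reference (training-like) cohort. The chosen statistics and the z-score threshold are illustrative assumptions, not a description of how any particular vendor’s model or SLRON’s workflow detects outliers.

    import numpy as np

    def is_out_of_domain(scan_hu: np.ndarray,
                         ref_mean: float, ref_mean_sd: float,
                         ref_noise: float, ref_noise_sd: float,
                         z_limit: float = 3.0) -> bool:
        """Crude out-of-domain check: flag a CT/CBCT volume whose mean HU
        or voxel-level spread lies more than z_limit standard deviations
        from reference-cohort statistics (heavy artifacts shift both)."""
        mean_z = abs(scan_hu.mean() - ref_mean) / ref_mean_sd
        noise_z = abs(scan_hu.std() - ref_noise) / ref_noise_sd
        return mean_z > z_limit or noise_z > z_limit

    # Hypothetical reference statistics from a commissioning cohort:
    # scan = ...  # 3D HU array loaded from DICOM
    # if is_out_of_domain(scan, ref_mean=-300.0, ref_mean_sd=40.0,
    #                     ref_noise=250.0, ref_noise_sd=30.0):
    #     print("Possible out-of-domain scan: review AI contours carefully")

In practice, departments rely mostly on visual review and vendor QA tooling rather than hand-rolled checks like this, but the principle is the same: compare the incoming case against what the model has seen before.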

So you’ve learned to adapt to the AI, but have you felt the need to adapt the AI to your clinic’s protocols?

Our network spans three sites, and we treat a large number of patients every year. Naturally, our consultants come from a range of training backgrounds—some from the U.S., some from Europe, and others elsewhere—so their clinical approaches can vary quite a bit.

Clearly, we needed to find a way to give each consultant a starting point that better reflected their own practice. We did that through the advanced operations available in the software. We could extend or trim structures where needed, and even make specific anatomical adjustments. It’s been very well received. Consultants can focus their time and expertise where it really matters—critiquing, fine-tuning, and personalizing the contours for each patient—rather than getting bogged down in repetitive drawing. It’s made a massive difference, both in terms of efficiency and the overall quality of care.

Education for the Next Generation

How are you preparing young clinicians for the responsible use of AI segmentation?

Our department organizes contouring workshops for our registrars. However, our consultants are incredibly busy — managing large patient loads, multitasking, and also taking responsibility for registrar training. Organizing hands-on workshops is difficult — they typically happen only a few times a year. The workshops are essential, but Guide and Verify complement them perfectly.

We initially used Verify for validation purposes — gathering metrics to assess how different users contoured relative to each other, especially after training programs. It helped us monitor variation and check for any changes over time — a kind of QA tool.
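For readers unfamiliar with how contours are compared quantitatively, a standard overlap measure in this kind of QA is the Dice similarity coefficient. Below is a minimal Python sketch; the mask names and the 0.8 review threshold are illustrative assumptions, not a description of Verify’s internals.

    import numpy as np

    def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice similarity coefficient between two binary segmentation
        masks of the same shape: 1.0 is perfect overlap, 0.0 is none."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        total = a.sum() + b.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / total

    # Illustrative QA check: compare a trainee's contour against a
    # consensus contour exported from the planning system, and flag
    # large disagreement for review (threshold is arbitrary).
    # dsc = dice_coefficient(trainee_mask, consensus_mask)
    # if dsc < 0.8:
    #     print(f"Review needed: Dice = {dsc:.2f}")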

Now, we’re exploring whether Guide can bridge that gap by allowing registrars to contour at times that fit their clinical schedules. They can then benchmark their work against expert consensus standards and receive immediate feedback. Traditionally, they would have had to manually cross-check every slice against an atlas — a slow and tedious process. Consultants can then review their progress during workshops, focusing on fine-tuning skills and clinically relevant details rather than basic anatomy.

The ELAISA study has shown the potential of AI auto-segmentation software for education and for improving contouring practices. Have you noticed similar effects in your department?

It’s an interesting question whether trainees truly learn landmarks through AI. We’ve actually fed this reflection back into our education and training programs. We’ve reviewed how contouring practices have shifted over the last two years since integrating AI, and we’ve definitely seen some changes.

There are cases where the AI was correct, and we had to adjust our own contouring practices accordingly. One case in point is brain contouring — where we previously used thresholding, we now ensure the brainstem is properly separated, as recommended by multiple contouring guidelines. Another good example is the brachial plexus — previously, we might have contoured only the segment relevant to the specific treatment plan, due to time constraints. Now, the AI contours the entire structure, prompting us to fully incorporate it into our workflows. This has proved useful for patients whose disease recurs and/or who need reirradiation nearby.

Most importantly, since implementing AI contouring, interpersonal and intrapersonal variation has decreased dramatically. Consultant oncologists and trainees now show similar levels of consistency — meaning the “experience gap” is narrowing. That’s a major quality improvement across our workforce.

Research and Innovation

You had an impressive number of abstracts accepted at ESTRO this year – 24, of which 7 were about AI auto-contouring. Can you tell us a bit about your research?

We were lucky to have a wide range of studies accepted, made possible by an MDT SLICR-funded fellowship supporting research in St. Luke’s Radiation Oncology Network. Topics ranged from what radiation oncology can learn from automation in aviation to a late-breaking abstract on our open facemask trial (OPEN). We also shared findings on how the airway accumulates radiation dose over the breathing cycle during SABR treatments, and highlighted the benefits of our MDT research fellowship.

On the AI auto-contouring side, we presented different aspects of our experience with Contour+. We had a proffered paper presentation detailing the workflow changes I explained before, and a poster by Dr. Sinead Horan describing the flexibility the advanced operations feature brings. One of our RTTs, Keeva Moran, also showed that inter- and intra-personal variation decreased, leveling the playing field across experience levels. There were two posters on the system’s performance on thoracic ROIs, including mediastinal lymph nodes on different phases of 4DCTs. We also presented two major CBCT-related studies at ESTRO.

Could you give us more details about the CBCT projects? It’s an interesting, but less explored area for AI segmentation.

The first one, led by Banashree Kalita and Aodh MacGairbhith (a medical physics master’s student and a physicist at SLRON, respectively), evaluated the use of AI auto-contouring for tracking rectum and bladder variations in prostate and rectal cancer patients. Traditionally, radiation therapists manually contour the rectum daily to guide treatment decisions, but this study explored whether AI-generated contours could support that workflow. The results were very positive — the AI tool proved accurate and reliable enough for daily clinical decision-making, enabling faster assessments and supporting more autonomous, confident decisions by radiation therapists. We’re now working to integrate this AI-driven tracking into routine practice.

The second study focused on whether AI models trained on high-quality CT images could still perform reliably on lower-quality CBCT scans, particularly those from older machines and challenging imaging scenarios (low-dose scans, partial arcs, and common artifacts), especially in head and neck cases. The AI predictions were reliable and allowed us to consistently track anatomical changes. This helped us distinguish between patients with significant anatomical shifts (like weight loss) and those with stable anatomy by treatment days 10–15. Notably, structures not typically contoured manually but easily predicted by the AI, such as Xb nodes, proved to be highly predictive. These findings suggest that even lower-quality CBCT can support effective anatomical monitoring, opening up exciting possibilities for adaptive radiotherapy workflows. We’re now looking to implement it more broadly across our head and neck cohort.
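As a rough illustration of this kind of anatomical monitoring, the sketch below tracks the volume of an AI-contoured structure across daily CBCTs and flags a patient whose anatomy drifts beyond a tolerance. The volumes and the 10% threshold are assumptions for illustration, not the study’s actual criteria.

    def flag_anatomical_change(volumes_cc: list[float],
                               tolerance: float = 0.10) -> bool:
        """Flag a patient whose structure volume on daily CBCT drifts by
        more than `tolerance` (fractional change) from the baseline.
        volumes_cc[0] is the planning-CT volume; later entries are the
        same AI-contoured structure on each fraction's CBCT."""
        baseline = volumes_cc[0]
        return any(abs(v - baseline) / baseline > tolerance
                   for v in volumes_cc[1:])

    # Hypothetical parotid-like volumes (cm^3) shrinking over fractions,
    # consistent with weight loss; flagged for adaptive review.
    daily_volumes = [28.0, 27.5, 26.8, 25.9, 24.7, 23.5]
    if flag_anatomical_change(daily_volumes):
        print("Significant anatomical change: consider adaptive review")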

What projects do you have for the future?

We have a few more AI-related projects in the works, covering the human aspects of implementing AI, adaptation, and the impact and utility of AI decision-making tools. We also have a number of ongoing studies to finish! For example, when we first implemented AI auto-contouring, we were already designing a clinical trial evaluating different open-face head and neck masks, and we wondered whether we could integrate AI segmentation to track anatomical changes during radiotherapy.

It’s still an ongoing study, but it’s really exciting because consistency in contouring (thanks to AI) might give us a clear advantage in monitoring patients during treatment and help us predict who needs adaptation. In radiomics and predictive analytics, consistency is crucial: you need to see the signal through the noise, and variability from human contouring could mask important predictive features. Using AI-generated contours could make it easier to detect early anatomical changes, helping us flag patients sooner for clinical action.

Professor Sinead Brennan, Dr. Jill Nicholson, our radiation therapist fellow Sam Ryan, and I worked together to ensure that as part of this clinical trial, we incorporated a radiomic and AI sub-study. The goal is to use AI-driven contours and anatomical tracking to predict which patients are likely to need adaptive re-planning — and which aren’t.
