Deep Learning to Generate Synthetic CT Images from MR for Radiotherapy Treatment Planning

The generation of synthetic images has advanced greatly in recent years thanks to progress in artificial intelligence. Some applications have recently been widely popularized in the media, notably art and fashion generation and the creation of videos and photos of realistic-looking human faces¹. Image-to-image translation is a special case of synthetic image generation in which input images from one domain are translated into synthetic images in another domain.

A non-exhaustive list of example cases includes edges-to-photo translation and labels-to-street-scene translation, as shown in Figure 1. Image-to-image translation has many potential industrial applications, such as fashion design, game development, self-driving car engineering, and medical applications, where translation can be used for image augmentation and enhancement or for converting between modalities (for example, MR to CT for radiation therapy planning).

Figure 1. Example cases of image-to-image translation. The figure is from Isola et al. (2017)³.

Generative adversarial networks (GANs) are a class of generative models that can be used for various synthetic data generation tasks. GANs have gained a lot of interest in recent years due to their impressive results in image generation², including image-to-image translation³,⁴. A GAN model consists of at least one generator network and a discriminator network. Figure 2 illustrates the training of a GAN model for MR-to-CT translation with training data consisting of pairs of real MR and CT images. The generator takes real MR images as input and generates synthetic CT images. The discriminator learns to classify CT images as real or fake (synthetic). This drives the generator to produce increasingly realistic synthetic images that are indistinguishable from real ones. The training of a GAN can thus be seen as a competition between the two networks, in which the generator attempts to fool the discriminator.

Figure 2. Training of a GAN model for MR-to-CT translation. The model has one generator and one discriminator network, which are trained together. The discriminator learns to classify images as fake synthetic CT images created by the generator or as real CT images. The generator learns to create synthetic CT images that are indistinguishable from real CTs.
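
To make this training procedure concrete, the snippet below is a minimal PyTorch sketch of one adversarial training step for paired MR-to-CT translation. The generator and discriminator networks, their optimizers, and the paired image batches are placeholders assumed to be defined elsewhere; the pixel-wise L1 term follows the pix2pix formulation³. This is an illustrative sketch, not a production training loop.

```python
# Minimal sketch of one GAN training step for paired MR-to-CT translation (PyTorch).
# `generator`, `discriminator`, their optimizers, and the paired batches `mr`/`ct`
# are assumed to be defined elsewhere.
import torch
import torch.nn.functional as F

def gan_training_step(generator, discriminator, g_opt, d_opt, mr, ct):
    # --- Discriminator update: label real CTs as 1 and synthetic CTs as 0 ---
    d_opt.zero_grad()
    synth_ct = generator(mr).detach()  # detach so no gradients flow to the generator
    d_real = discriminator(ct)
    d_fake = discriminator(synth_ct)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    d_opt.step()

    # --- Generator update: try to make the discriminator label fakes as real ---
    g_opt.zero_grad()
    synth_ct = generator(mr)
    d_pred = discriminator(synth_ct)
    g_loss = F.binary_cross_entropy_with_logits(d_pred, torch.ones_like(d_pred))
    # With paired data, a pixel-wise L1 term is typically added (as in pix2pix):
    g_loss = g_loss + 100.0 * F.l1_loss(synth_ct, ct)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In practice this step is repeated over many batches, keeping the two losses in rough balance so that neither network overwhelms the other.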

Developing image-to-image translation models typically requires a training dataset containing pairs of images aligned voxel-wise, such as MR and CT images for each patient. Since obtaining paired data can be time-consuming and costly (or even impossible in some scenarios), interest has grown in models that can be trained with unpaired data, for example, a set of MR images from some patients and a set of CT images from other patients. Although such data are more widely available, training becomes more challenging, and the performance of unpaired models tends to fall behind that of paired methods. Developing and improving unpaired methods therefore remains an active area of research.
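
One widely used approach to the unpaired setting is CycleGAN⁴, which trains two generators (MR-to-CT and CT-to-MR) and requires that translating an image to the other domain and back reproduces the original. A minimal sketch of that cycle-consistency term, with hypothetical generator names, could look as follows:

```python
# Sketch of the cycle-consistency loss used in CycleGAN-style unpaired training.
# g_mr2ct and g_ct2mr are the two generators (hypothetical names); mr and ct come
# from different patients, so no direct pixel-wise comparison across domains exists.
import torch.nn.functional as F

def cycle_consistency_loss(g_mr2ct, g_ct2mr, mr, ct, lam=10.0):
    # Forward cycle: MR -> synthetic CT -> reconstructed MR should match the input MR.
    mr_rec = g_ct2mr(g_mr2ct(mr))
    # Backward cycle: CT -> synthetic MR -> reconstructed CT should match the input CT.
    ct_rec = g_mr2ct(g_ct2mr(ct))
    return lam * (F.l1_loss(mr_rec, mr) + F.l1_loss(ct_rec, ct))
```

This term is added to the adversarial losses of both translation directions; the weight (10 in the original CycleGAN paper) balances reconstruction fidelity against realism.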

Image-to-image translation has the potential to improve many applications in the medical field. Given a sufficient amount of high-quality data, tasks in image space, such as denoising, resolution enhancement, or artifact removal, become feasible. Modality conversion, such as MR-to-CT or PET-to-CT, can help reduce the imaging burden of capturing multiple modalities in the clinic, especially when diagnostic performance is not the main priority. Synthetically generated images can also be used to enlarge or augment the training datasets required by machine learning models, minimizing data collection in the clinic. Synthetic data may also reduce costs when manual annotation is required (for example, in organ segmentation tasks).

Due to its inherently superior soft-tissue contrast over CT and the development of high-fidelity scanners, MR has become a popular imaging modality in radiotherapy. However, because dose calculation algorithms require a mapping between image intensities and high-energy photon beam attenuation, CT remains the main modality for radiation therapy planning. The capability to generate synthetic CT from specific MR sequences makes MR-only workflows possible in external beam radiotherapy, eliminating the need for a dedicated CT scan. Many developments have been published in the literature, and both AI and non-AI commercial products are now used in clinics. With a sufficiently large, high-quality training dataset, deep learning (DL) based models tend to produce more realistic-looking synthetic CT images than standard classification-based methods. However, since GANs and other DL-based methods may sometimes fail, producing errors or artifacts, automatic methods to validate and verify the fidelity of the generated synthetic images become important.
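
As a rough illustration of what such automatic validation can look like, a common quantitative check in the MR-to-CT literature is the mean absolute error in Hounsfield units between synthetic and real CT on held-out paired cases, usually computed within the patient outline. A minimal NumPy sketch, assuming the body mask is computed elsewhere:

```python
# Sketch of a simple automatic fidelity check for synthetic CT: mean absolute
# error (MAE) in Hounsfield units (HU) inside the patient outline, computed on
# held-out cases where a real planning CT is available for comparison.
import numpy as np

def synthetic_ct_mae(synth_ct_hu: np.ndarray, real_ct_hu: np.ndarray,
                     body_mask: np.ndarray) -> float:
    # Restrict the comparison to voxels inside the body so that background air
    # does not dominate (and artificially flatter) the score.
    return float(np.abs(synth_ct_hu[body_mask] - real_ct_hu[body_mask]).mean())
```

An unusually high value relative to results seen during model validation can flag a failed generation for manual review.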

We expect image generation and image-to-image translation to remain an increasingly active research and development area in the near future, with many applications and products emerging in the industry as the models, evaluation methods, and training data collections improve. Data privacy regulations may become stricter in many countries, and in this context synthetic image generation has the potential to reduce the need for patient-specific data through methods that generate fully synthetic datasets for machine learning model development.

 

  1. Hill, K., & White, J. (2020, November). Designed to deceive: Do these people look real to you? The New York Times.
  2. Karras, T., Aittala, M., Laine, S., Härkönen, E., Hellsten, J., Lehtinen, J., & Aila, T. (2021). Alias-free generative adversarial networks. Advances in Neural Information Processing Systems, 34, 852-863.
  3. Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).
  4. Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
