Scannerless MRI Generation Using Generative Adversarial Networks With Multiple Surrogate Signals

Motion models can track the position of a liver tumour from a surrogate signal, compensating for respiratory-induced motion (RIM) to enable more accurate ablation and biopsy procedures. However, interpreting tumour position as XYZ coordinates is challenging for surgeons. This study presents a conditional progressively growing generative adversarial network (cProGAN) that generates scannerless MR images from one or more surrogate signals for guidance during liver interventions. We compared three signals: an ultrasound transducer capturing internal movement, external marker tracking capturing external movement, and a thermal camera capturing airflow.
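To make the conditioning idea concrete, the sketch below shows one simple way a generator can be conditioned on surrogate measurements, by concatenating them with the latent noise vector. This is an illustrative assumption, not the cProGAN architecture evaluated in the study; the layer sizes, latent dimension, surrogate dimension, and output resolution are all placeholder choices.

```python
# Minimal sketch of a surrogate-conditioned generator (illustrative only;
# not the paper's cProGAN). All dimensions are assumptions.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim: int = 128, surrogate_dim: int = 3):
        super().__init__()
        # The surrogate measurement (e.g. marker displacement, an ultrasound
        # feature, airflow amplitude) is concatenated with the latent vector,
        # so the generated image is tied to the current respiratory state.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + surrogate_dim, 256 * 8 * 8),
            nn.ReLU(inplace=True),
            nn.Unflatten(1, (256, 8, 8)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 16x16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 32x32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),     # 64x64
            nn.Tanh(),  # single-channel MR-like image in [-1, 1]
        )

    def forward(self, z: torch.Tensor, surrogate: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, surrogate], dim=1))

# Usage: one synthetic 64x64 image per surrogate sample.
gen = ConditionalGenerator()
z = torch.randn(4, 128)   # latent noise
s = torch.randn(4, 3)     # normalised surrogate measurements
fake_mr = gen(z, s)       # shape: (4, 1, 64, 64)
```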

The approach is validated in experiments with seven human subjects, in which MR images and the three surrogate signals were collected simultaneously while each subject followed a specific breathing protocol. The quality of the scannerless images is assessed with the structural similarity index measure (SSIM) and by extracting the superior-inferior motion of the liver border in the real and scannerless images and comparing the resulting waveforms using the mean absolute error (MAE, reported in millimetres and as a percentage of average liver movement) and the coefficient of determination (R²). The model trained on visual markers generated images with the most accurate liver positions during breathing (MAE of 10.08%, R² of 37.15%) and during breath holds (MAE of 9.14 ± 1.31 mm). The highest SSIM was achieved by the combined model during breathing (51.42%) and by the visual marker model during breath holds (36.47%). Models using the other surrogate signals yielded a significantly higher MAE and lower SSIM. These results suggest that visual marker tracking provides the most accurate respiratory motion modelling for scannerless MRI generation, though further research is needed to improve image quality.
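The following sketch shows how these evaluation metrics could be computed for a pair of liver-motion waveforms and a pair of images. It is an assumption-laden illustration, not the authors' code: the normalisation used for the percentage MAE (here, the peak-to-peak excursion of the real waveform as a stand-in for "average liver movement") may differ from the paper's definition.

```python
# Illustrative evaluation-metric sketch (not the authors' implementation).
# `real` and `generated` are hypothetical 1-D arrays of the liver-border
# superior-inferior position in millimetres, one sample per MR frame.
import numpy as np
from skimage.metrics import structural_similarity

def waveform_metrics(real: np.ndarray, generated: np.ndarray):
    """MAE in mm, MAE as % of liver movement, and R^2 between waveforms."""
    mae_mm = np.mean(np.abs(real - generated))
    # Assumption: normalise by the peak-to-peak excursion of the real
    # waveform; the paper's "average liver movement" may be defined differently.
    mae_pct = 100.0 * mae_mm / np.ptp(real)
    ss_res = np.sum((real - generated) ** 2)
    ss_tot = np.sum((real - real.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return mae_mm, mae_pct, r2

def image_ssim(real_img: np.ndarray, gen_img: np.ndarray) -> float:
    """SSIM between a real MR slice and a scannerless MR slice."""
    return structural_similarity(
        real_img, gen_img, data_range=real_img.max() - real_img.min()
    )

# Usage with synthetic data: a ~20 mm breathing waveform plus noise.
real = 10.0 * np.sin(np.linspace(0, 4 * np.pi, 200))
gen_wave = real + np.random.normal(0.0, 1.0, real.shape)
print(waveform_metrics(real, gen_wave))
```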