Using convolutional autoencoders to process 4D gynaecological data

In the GynIUS project, a large data set of pelvic floor ultrasounds is being collected from female patients visiting a clinic with pelvic floor complaints. However, the understanding of how to interpret this data is limited, and within the gynaecological field there is debate on how to determine what type of problem is present in the data. Since the data set contains patients with different symptoms, we investigate whether unsupervised deep learning can find relevant clusterings of patient groups, and whether these clusters give better insight into the different pelvic floor patient groups.
This project evaluated the usability of 3D convolutional autoencoders (CAEs) for the analysis of 4D data. The data set used comprises 4D ultrasound videos of the pelvic floor. The evaluation task is to determine which maneuver the patient is performing in the video: either a vaginal contraction or the Valsalva maneuver. This is done in a two-step process. In the first step, a 3D CAE reduces each separate 3D frame (volume) of the video to its latent features, in a completely unsupervised manner. In the second step, multiple methods, both supervised and unsupervised, are applied to these latent feature videos to determine the maneuver performed in them. The main focus is attempting to do this in a completely unsupervised manner.
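The first step can be sketched as follows. This is a minimal illustrative 3D convolutional autoencoder in PyTorch, not the project's actual architecture: the input size (32×32×32 volumes), channel counts, and latent dimension are all assumptions chosen to keep the example small. Each 3D frame of the 4D video is compressed to a latent vector; stacking these vectors over time yields the "latent feature video" that the second-step methods consume.

```python
import torch
import torch.nn as nn

class CAE3D(nn.Module):
    """Sketch of a 3D convolutional autoencoder for one ultrasound volume.

    Assumed input: a single-channel 32x32x32 volume. All layer sizes are
    illustrative, not taken from the GynIUS project.
    """

    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: each strided Conv3d halves the spatial resolution.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),  # 8 -> 4
            nn.ReLU(),
        )
        # Bottleneck: flatten the 64x4x4x4 feature map to a latent vector.
        self.to_latent = nn.Linear(64 * 4 * 4 * 4, latent_dim)
        self.from_latent = nn.Linear(latent_dim, 64 * 4 * 4 * 4)
        # Decoder mirrors the encoder with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),  # intensities assumed normalised to [0, 1]
        )

    def forward(self, x):
        h = self.encoder(x)
        z = self.to_latent(h.flatten(1))          # latent features per frame
        h = self.from_latent(z).view(-1, 64, 4, 4, 4)
        return self.decoder(h), z

model = CAE3D()
frames = torch.rand(2, 1, 32, 32, 32)  # two dummy 3D frames of a video
recon, latent = model(frames)
# recon has the input shape (trained with a reconstruction loss, e.g. MSE);
# latent is one 128-dimensional vector per frame.
```

Training would minimise a reconstruction loss such as `nn.MSELoss()(recon, frames)`, so no maneuver labels are needed in this step; the labels enter only when the second-step supervised methods are evaluated.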

BlueJeans videoconference join information:

Meeting URL

Meeting ID
311 301 819

Want to dial in from a phone?

Dial one of the following numbers:
+31.20.808.2256 (Netherlands (Amsterdam))

Enter the meeting ID and passcode followed by #