Deep Learning Classification of Operator and Endoscope Motions during Endoscopy

Finished: 2021-12-21

MSc assignment

Objective: 

This study aims to classify operator and endoscope motions in order to identify the key movements necessary to perform diagnostic upper flexible endoscopy and to study the relation between operator actions and endoscope response during endoscopy procedures.

Motion data of both operator and endoscope are recorded simultaneously during clinical endoscopy procedures. Classifying these motions is not trivial, as different operators generate different motion data even during the same procedure. Due to the complexity of the problem, the classification of the different motions will be tackled with Deep Learning.
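As a rough illustration only (not part of the assignment material), the sketch below shows what a deep-learning motion classifier could look like in PyTorch. The window length, number of channels, number of motion classes, and the specific 1D-CNN architecture are all assumptions made for this example.

import torch
import torch.nn as nn

class MotionClassifier(nn.Module):
    """Minimal 1D-CNN sketch for classifying fixed-length motion windows.

    Assumption: each training sample is a window of multichannel motion
    signals with shape (n_channels, time_steps) and a single class label.
    """

    def __init__(self, n_channels: int = 12, n_classes: int = 5):
        super().__init__()
        # 1D convolutions over the time axis extract local motion patterns.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # average over time -> fixed-size embedding
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, time_steps)
        z = self.features(x).squeeze(-1)   # (batch, 64)
        return self.classifier(z)          # (batch, n_classes) logits

if __name__ == "__main__":
    model = MotionClassifier(n_channels=12, n_classes=5)
    dummy = torch.randn(8, 12, 200)  # 8 windows, 12 channels, 200 time steps
    print(model(dummy).shape)        # torch.Size([8, 5])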

Background: 

The ongoing development of interventional endoscopy raises the challenge of teaching and learning complex endoscopic procedures. Today, experts face the challenge of precisely explaining the elaborate choreography of movements performed during a procedure, while novices are confronted with a broad range of hand, wrist, and shoulder movements, each resulting in a different scope response. The teaching of endoscopy could benefit from a dedicated motion library that deciphers the operator’s motions and the resulting endoscope responses. A simplified endoscopic language built from individual motions could greatly shorten the learning curve.


Data collected:

Motion data of both operator and endoscope are recorded simultaneously during clinical endoscopy procedures. The operator motion is captured using an Xsens suit and Kinect data, while the endoscope motion is assessed using the Aurora NDI probe, together with camera recordings of the endoscope-handle wheel positions and the internal endoscopic view.
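Because the different sensors run at different rates, the recorded streams have to be brought onto a common time base before they can be analysed jointly. The sketch below illustrates one way this could be done with pandas; the column names, sampling rates, and data layout are assumptions for illustration only and do not reflect the actual recording format.

import numpy as np
import pandas as pd

def resample_stream(df: pd.DataFrame, rate_hz: float) -> pd.DataFrame:
    """Resample a timestamped stream to a fixed rate via linear interpolation."""
    df = df.set_index(pd.to_timedelta(df["t"], unit="s")).drop(columns="t")
    period = pd.to_timedelta(1.0 / rate_hz, unit="s")
    return df.resample(period).mean().interpolate(method="linear")

# Hypothetical recordings: an operator joint angle and the endoscope tip position.
operator = pd.DataFrame({"t": np.arange(0, 10, 1 / 60),   # 60 Hz stream
                         "wrist_angle": np.random.randn(600)})
endoscope = pd.DataFrame({"t": np.arange(0, 10, 1 / 40),  # 40 Hz stream
                          "tip_x": np.random.randn(400)})

# Bring both streams onto a common 50 Hz grid and join them sample by sample.
aligned = resample_stream(operator, 50).join(resample_stream(endoscope, 50),
                                             how="inner")
print(aligned.head())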


For more information about the assignment, please contact Nicolò Botteghi (n.botteghi@utwente.nl).