This research is part of the 3DHAP project, which aims to develop technology for compact handheld devices capable of 3D and AR visualization of perfusion information. There are medical applications in which 3D skin reconstruction is of interest, such as psoriasis assessment, the treatment of burn wounds, and the monitoring of surgical results. Such 3D skin reconstructions could contribute to a more efficient medical approach to skin condition assessment. Commercial 3D scanners for skin reconstruction exist; however, their applicability to difficult skin locations and conditions is limited. Previous research has produced a prototype device capable of 3D skin reconstruction. It already shows promising results on some measurements, but the process can still be improved.
A large part of the process lies in keypoint detection. The keypoint detector produces descriptors that enable stereo matching, the method used to build the 3D surface reconstruction. A keypoint detector commonly takes a raw image as input. Well-known keypoint detectors include SIFT, SURF, FAST, BRISK, and KAZE. An alternative approach that has recently become popular is a deep learning network trained to detect point features such as corners and blobs. The hypothesis proposed here is that a model-based front end will improve the performance of a data-driven (deep learning-based) keypoint detection method.
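To illustrate the classical pipeline this work builds on, the sketch below detects keypoints, computes descriptors, and matches them between a stereo pair with OpenCV. It is a minimal illustration only: the detector choice (SIFT), the file names, and the matching thresholds are assumptions, not the project's actual configuration.

```python
# Minimal sketch: classical keypoint detection and descriptor matching between
# a stereo pair. Detector (SIFT) and matcher settings are illustrative choices.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.SIFT_create()   # could also be cv2.BRISK_create(), cv2.KAZE_create(), ...
kp_l, desc_l = detector.detectAndCompute(left, None)
kp_r, desc_r = detector.detectAndCompute(right, None)

# Brute-force matching with Lowe's ratio test to keep distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc_l, desc_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# The matched keypoint pairs are the input to triangulation / surface reconstruction.
pts_l = [kp_l[m.queryIdx].pt for m in good]
pts_r = [kp_r[m.trainIdx].pt for m in good]
print(f"{len(good)} stereo correspondences found")
```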
This research explores the implementation of covariance-model-based keypoint detection in combination with a convolutional neural network that produces the keypoint descriptor.
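The sketch below shows only the general hybrid pattern described above: a model-based front end proposes keypoint locations, and a small CNN turns image patches around those locations into descriptors. The FAST detector stands in for the covariance-model front end, and the toy network, patch size, and descriptor dimension are assumptions for illustration, not the method developed in this research.

```python
# Hedged sketch of the hybrid idea: model-based keypoint proposals + CNN descriptors.
# The detector, network, and patch size are illustrative stand-ins only.
import cv2
import numpy as np
import torch
import torch.nn as nn

class PatchDescriptorNet(nn.Module):
    """Toy CNN mapping a 32x32 grayscale patch to a 128-D unit-norm descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 8 * 8, dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return nn.functional.normalize(self.head(x), dim=1)

def describe(image_gray, keypoints, net, patch=32):
    """Cut patches around the front end's keypoints and run them through the CNN."""
    half = patch // 2
    padded = cv2.copyMakeBorder(image_gray, half, half, half, half, cv2.BORDER_REFLECT)
    patches = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])) + half, int(round(kp.pt[1])) + half
        patches.append(padded[y - half:y + half, x - half:x + half])
    batch = torch.from_numpy(np.stack(patches)).float().unsqueeze(1) / 255.0
    with torch.no_grad():
        return net(batch)

image = cv2.imread("skin.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input image
keypoints = cv2.FastFeatureDetector_create().detect(image, None)  # stand-in front end
descriptors = describe(image, keypoints, PatchDescriptorNet())
```

The resulting descriptors could then be matched between stereo views in the same way as the classical descriptors shown earlier.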