Representation Learning Using Rank Loss for Robust Neurosurgical Skills Evaluation
ICIP 2022



[Overview figure]

Abstract

Surgical simulators provide hands-on training of the necessary psychomotor skills. Automated skill evaluation of trainee doctors, based on videos of them performing a task, is a key step towards the optimal utilization of such simulators. However, current skill evaluation techniques require accurate tracking information for the instruments, which restricts their applicability to robot-assisted surgeries only. In this paper, we propose a novel neural network architecture that performs skill evaluation using video data alone (no tracking information). Given the small dataset available for training such a system, a network trained with an L2 regression loss easily overfits the training data. We propose a novel rank loss that helps learn robust representations, leading to a 5% improvement in skill score prediction on the benchmark JIGSAWS dataset. To demonstrate the applicability of our method to non-robotic surgeries, we contribute a new neuro-endoscopic technical skills (NETS) training dataset comprising 100 short videos of 12 subjects. Our method achieves a 27% improvement over the state of the art on the NETS dataset.
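
For intuition, the sketch below shows one common way to combine an L2 regression term with a pairwise margin-based rank loss over a batch of predicted skill scores. This is an illustrative sketch, not the paper's exact formulation: the function name rank_regression_loss, the margin value, and the weighting factor alpha are assumptions made here for demonstration.

    import torch
    import torch.nn.functional as F

    def rank_regression_loss(pred, target, margin=0.1, alpha=0.5):
        """Illustrative L2 + pairwise rank loss (a sketch, not the
        paper's exact formulation).

        pred, target: 1-D tensors of predicted / ground-truth skill scores.
        margin, alpha: assumed hyperparameters for this example.
        """
        # Standard L2 regression term on the predicted skill scores.
        l2 = F.mse_loss(pred, target)

        # Pairwise differences: entry [i, j] holds pred[i] - pred[j].
        diff_pred = pred.unsqueeze(1) - pred.unsqueeze(0)
        diff_true = target.unsqueeze(1) - target.unsqueeze(0)

        # Only pairs with a strict ground-truth ordering (target[i] > target[j])
        # contribute to the rank term.
        mask = (diff_true > 0).float()

        # Hinge penalty when the predicted ordering violates the margin.
        rank = (F.relu(margin - diff_pred) * mask).sum() / mask.sum().clamp(min=1.0)

        return l2 + alpha * rank

    # Example usage on a toy batch of three videos.
    pred = torch.tensor([0.7, 0.2, 0.9], requires_grad=True)
    target = torch.tensor([0.8, 0.1, 0.6])
    loss = rank_regression_loss(pred, target)
    loss.backward()

Unlike a pure L2 loss, the rank term only asks that videos with higher ground-truth skill receive higher predicted scores, which is a weaker and therefore harder-to-overfit training signal on small datasets.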

BibTeX (Citation)

    @inproceedings{baby2022representation,
      title={Representation Learning Using Rank Loss for Robust Neurosurgical Skills Evaluation},
      author={Baby, Britty and Chasmai, Mustafa and Banerjee, Tamajit and Suri, Ashish and Banerjee, Subhashis and Arora, Chetan},
      booktitle={2022 29th IEEE International Conference on Image Processing (ICIP)},
      pages={xxx},
      year={2022},
      organization={IEEE}
    }

Credits: The template for this webpage is from here.