
Fundamental Theory for Dexterous Surgical Skills Transfer to Medical Robots

Future battlefield medical operations call for robotic systems that can provide patient care at the point of injury. Autonomous behavior in such systems is key in situations of limited bandwidth, high latency, and loss of signal. Therefore, skills learned in controlled scenarios, where data is abundant, should be transferable to deployable systems where data may be limited. This project develops fundamental research to bridge the gap in knowledge about two critical aspects: (1) how to leverage an operator's expertise under limited connectivity, and (2) how to apply the abundant existing knowledge about surgical motions and maneuvers from the operating room (OR) to new Scalable Austere Environments (SAE) such as a smart stretcher, an automated ambulance, a mobile field hospital, or a permanent hospital facility.


We created the DESK (Dexterous Surgical Skill) dataset to facilitate transfer learning across different robotic domains. The dataset comprises a set of surgical robotic skills performed on three robotic platforms: the Taurus II, a simulation of the Taurus II, and the YuMi. Video and kinematic data were collected from these robots performing the peg transfer task in a wide range of environments. Peg transfer is one of the five fundamental tasks in laparoscopic training, and it decomposes into seven surgical gestures (surgemes) that encompass its main segments. We used this dataset to test the idea of transferring knowledge across domains (e.g., from the Taurus to the YuMi robot) on a seven-class surgeme classification task. We explored two scenarios: 1) no transfer and 2) domain transfer (simulated Taurus to real Taurus and YuMi robots). We conducted extensive experiments with three supervised learning models trained on kinematic features and compared the results against models trained on kinematic features combined with compact image representations. The results show that using simulation data during training enhances performance on the real robots, and that adding compact image representations further boosts classification performance over the kinematics-only setting.
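As a sketch of what such an experiment looks like in practice, the snippet below trains a generic supervised classifier (here a random forest, one of many possible choices) on simulated-robot features and evaluates it on real-robot features, with and without compact image embeddings concatenated to the kinematics. The file names, feature layout, and model choice are illustrative assumptions, not the project's exact pipeline.

```python
# Minimal sketch (not the project's exact pipeline): train a surgeme
# classifier on simulated-robot data and evaluate it on a real robot,
# using kinematic features optionally concatenated with compact image
# embeddings. File names and the .npz layout are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def load_split(path):
    """Load per-frame features and surgeme labels from a hypothetical .npz file."""
    data = np.load(path)
    return data["kinematics"], data["image_embed"], data["labels"]

# Domain transfer: fit on simulated Taurus, test on the real robot.
kin_tr, img_tr, y_tr = load_split("desk_sim_taurus_train.npz")
kin_te, img_te, y_te = load_split("desk_real_taurus_test.npz")

for name, (X_tr, X_te) in {
    "kinematics only": (kin_tr, kin_te),
    "kinematics + image": (np.hstack([kin_tr, img_tr]),
                           np.hstack([kin_te, img_te])),
}.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```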


These advances will be integrated into a teleoperation system to overcome communication delays and loss of signal. Surgeme classification will be used to trigger semiautonomous behavior that leverages knowledge of the current step in the task. We are currently working on automating the skills in our peg transfer dataset and extending our work to include a surgical debridement task. As an extension of this work, we developed a V-REP simulator for the YuMi robot with 3D-printed surgical adaptations. The YuMi robot can be controlled in the V-REP environment using the HTC Vive headset and controllers, allowing us to collect abundant simulated data from human demonstrations.
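For readers unfamiliar with scripting V-REP, the minimal sketch below shows how a simulated joint can be commanded through V-REP's legacy Python remote API; in a teleoperation loop, tracked HTC Vive controller poses would be mapped to such joint targets at every tick. The joint name and scene setup are assumptions about an illustrative scene, not our actual simulator.

```python
# Minimal sketch, assuming V-REP's legacy Python remote API binding
# (vrep.py) is on the path and a YuMi scene is loaded with the remote
# API server enabled on port 19997. The joint name 'yumi_joint_1_l' is
# hypothetical; use the names from your own scene hierarchy.
import math
import vrep

client_id = vrep.simxStart("127.0.0.1", 19997, True, True, 5000, 5)
if client_id == -1:
    raise RuntimeError("Could not connect to the V-REP remote API server")

ret, joint = vrep.simxGetObjectHandle(
    client_id, "yumi_joint_1_l", vrep.simx_opmode_blocking)
if ret != vrep.simx_return_ok:
    raise RuntimeError("Joint handle not found; check the scene's joint names")

# Command a small target angle; a teleoperation loop would instead
# stream targets derived from the Vive controller pose at each tick.
vrep.simxSetJointTargetPosition(
    client_id, joint, math.radians(15), vrep.simx_opmode_oneshot)

vrep.simxFinish(client_id)
```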

MHSRS Poster