Year: 2015 | Volume: 7 | Issue: 2 | Page: 172-176
Transfer of skills on LapSim virtual reality laparoscopic simulator into the operating room in urology
Amjad Alwaal1, Talal M Al-Qaoud2, Richard L Haddad2, Tarek M Alzahrani2, Josee Delisle2, Maurice Anidjar2
1 Department of Surgery, Division of Urology, McGill University Health Centre, Montreal, Quebec, Canada; Department of Urology, King Abdul Aziz University, Jeddah, Saudi Arabia
2 Department of Surgery, Division of Urology, McGill University Health Centre, Montreal, Quebec, Canada
Date of Submission: 05-May-2014
Date of Acceptance: 24-Jun-2014
Date of Web Publication: 11-Mar-2015
Dr. Amjad Alwaal
McGill University Health Centre, 687 Pine Avenue West, Suite S6.92, Montreal, Quebec H3A 1A1, Canada
Abstract
Objective: To assess the predictive validity of the LapSim simulator within a urology residency program.
Materials and Methods: Twelve urology residents at McGill University were enrolled in the study between June 2008 and December 2011. The residents had weekly training on the LapSim that consisted of three tasks (cutting, clip-applying, and lifting and grasping). They underwent monthly assessment of their LapSim performance using total time, tissue damage, and path length, among other parameters, as surrogates for their economy of movement and respect for tissue. The residents' last LapSim performance was compared with their first performance of radical nephrectomy on anesthetized porcine models in their 4th year of training. Two independent urologic surgeons rated the resident performance on the porcine models, and the kappa test with a standardized weight function was used to assess for inter-observer bias. The nonparametric Spearman correlation test was used to compare each rater's cumulative score with the cumulative score obtained on the porcine models in order to test the predictive validity of the LapSim simulator.
Results: The kappa results demonstrated acceptable agreement between the two observers among all domains of the rating scale of performance except for confidence of movement and efficiency. In addition, poor predictive validity of the LapSim simulator was demonstrated.
Conclusions: Predictive validity was not demonstrated for the LapSim simulator in the context of a urology residency training program.
Keywords: Laparoscopic simulator, LapSim, minimally invasive surgery training
How to cite this article:
Alwaal A, Al-Qaoud TM, Haddad RL, Alzahrani TM, Delisle J, Anidjar M. Transfer of skills on LapSim virtual reality laparoscopic simulator into the operating room in urology. Urol Ann 2015;7:172-6
Introduction
Utilization of minimally invasive techniques is increasing as new procedures are described and older procedures are refined. The minimally invasive option is attractive for the surgeon, the patient, and the health system in general: it provides faster recovery, shorter hospitalization, and smaller scars. However, it requires a different set of skills, and the learning curve can vary from one surgeon to another. Training for these skills raises patient-safety concerns. Therefore, the need for a tool to assess the competence of the surgical trainee before operating on humans has surfaced.
Many virtual reality (VR) simulators are commercially available and in widespread use, and it has become important to determine the transferability of skills on these VR simulators to real patients. If the predictive validity of these simulators is established, it would be possible to evaluate the trainee's readiness to operate on human subjects. One of these simulators is LapSim® (Surgical Science Inc, Minneapolis, MN, USA), a VR simulator whose software teaches and evaluates the trainee's performance on specific tasks, such as clip-applying. We prospectively investigated the adequacy of the LapSim as an assessment tool for competence of the surgical trainee before proceeding to training on humans.
Materials and Methods
After McGill University institutional review board approval was obtained, in close coordination with the Steinberg-Bernstein Center for minimally invasive surgery (MIS) at McGill University, a total of 12 urology residents at McGill University were enrolled in the study. LapSim is a laparoscopic simulator consisting of software and two laparoscopic instruments, interfaced with a diathermy pedal and a computer screen that displays movement in real time. Following an extensive orientation to the LapSim simulator before the study commenced, the enrolled residents received 3 years of LapSim training between their 1st and 3rd years of residency training (from June 2008 to December 2011). The training consisted of 1 h of practice on the LapSim weekly. This weekly training was composed of three tasks (cutting, clip-applying, and lifting and grasping), which were chosen for their high face validity. Several parameters in those three tasks were assessed in order to evaluate each resident's respect for tissue and economy of movement. These parameters include the total time, tissue damage, and path length. The tasks and parameters examined in our study are listed in [Table 1].
In the lifting and grasping task, a box is lifted, and a needle under it is grasped and placed in a target area. Once the needle is in the target area, the box disappears and reappears on the opposite side, and the task is repeated. In the cutting task, a structure resembling a vessel is grasped, whereupon it changes color. A pair of ultrasonic scissors then holds the colored area and cuts it using the diathermy pedal. Once the structure is released, the task is repeated. In the clip-applying task, a structure resembling a vessel is clipped on both ends after being stretched to reveal, through a change in color, the desired area for clip application. The area in between the clips is then cut with scissors.
Total time was measured in seconds. Tissue damage represents the number of times the tissue area was hit by the instruments, while maximum damage represents the depth of tissue damage caused by the instrument, in millimeters. Maximum stretch damage is the percentage of excessive stretch on the vessel, where 100% represents stretch sufficient to tear the vessel and cause bleeding. Path length is measured in millimeters, while angular length is measured in degrees.
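For illustration, motion-economy metrics of this kind are conventionally derived from the sampled trajectory of the instrument tip. The sketch below is a generic Python reconstruction, not the LapSim's actual code; the coordinate format and function names are assumptions.

```python
import math

# Hypothetical illustration: each point is an (x, y, z) instrument-tip
# position in millimeters, sampled by the simulator at a fixed rate.

def path_length(points):
    """Total distance travelled by the tip (mm): the sum of the
    straight-line distances between consecutive samples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def angular_length(points):
    """Total change of direction (degrees): the sum of the angles
    between consecutive movement vectors."""
    total = 0.0
    for a, b, c in zip(points, points[1:], points[2:]):
        v1 = tuple(b[i] - a[i] for i in range(3))
        v2 = tuple(c[i] - b[i] for i in range(3))
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:          # tip paused; no direction change
            continue
        cos_a = sum(x * y for x, y in zip(v1, v2)) / (n1 * n2)
        total += math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return total
```

Under this reading, a shorter path length and a smaller angular length for the same completed task indicate better economy of movement.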
A monthly assessment of performance in the above three tasks was carried out for all trainees. Their last LapSim performance was compared with their first performance of radical nephrectomy on anesthetized porcine models in their 4th year of training.
The operations on the porcine models took place at the Montreal General Hospital wet labs and were recorded on DVDs. Two urologic surgeons with experience in MIS independently and blindly rated the recorded DVDs. They scored the trainees' performance using six predefined rating scales that measure psychomotor skills [Table 2]. They gave every subject a global score of 1 (poor) to 5 (excellent) on each of these components, based on an agreed standard rating method derived from two previously published global assessment scales for intraoperative laparoscopic assessment.
Regarding the statistical approach, for each resident, a standardized cumulative score was calculated for each rater's observations and for the performance on the porcine models. The first part of the analysis examined the agreement between the two independent urologic surgeons' ratings of resident performance on the porcine models. This was conducted using the kappa test with a standardized weight function to assess inter-observer bias, agreement, and disagreement. In general, a kappa value < 0.2 is considered poor agreement, and a value in the range of 0.81-1.0 is considered very good agreement (Alan Acock, A Gentle Introduction to Stata). Box-whisker plots displaying the inter-quartile range, median, and mode were also constructed. Second, in order to assess the predictive validity of the LapSim in predicting how well residents are likely to perform on the porcine models, nonparametric Spearman correlation testing was used to compare each rater's cumulative score with the cumulative score obtained on the porcine models. All statistical analysis was conducted using STATA version 11 (StataCorp LP, College Station, TX, USA), and P < 0.05 was deemed significant.
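Both statistics are available in standard packages (e.g. `scipy.stats.spearmanr`, or `sklearn.metrics.cohen_kappa_score` with `weights='linear'`); the following pure-Python sketch only illustrates the underlying calculations and is not the study's STATA analysis. The 1-5 rating data it would be applied to are those described above; the function names are ours.

```python
from statistics import mean

def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def weighted_kappa(r1, r2, k=5):
    """Linearly weighted kappa for two raters scoring 1..k."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]   # observed joint proportions
    for a, b in zip(r1, r2):
        obs[a - 1][b - 1] += 1 / n
    p1 = [sum(row) for row in obs]                             # rater-1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater-2 marginals
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    disagree = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - disagree / expected
```

The weighting is the point of the design choice: ratings one point apart count as mild disagreement rather than total disagreement, which is appropriate for an ordinal 1-5 global rating scale.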
Results
As previously stated, data on 12 residents were analyzed. The kappa results demonstrated acceptable agreement between the two observers among all domains of the rating scale of performance except for confidence of movement and efficiency [Table 3]. The highest kappa values for agreement were observed for bimanual dexterity and tissue handling. Box-whisker plots are shown in [Figure 1].
Examining the predictive validity of the LapSim in predicting performance on the porcine models, Spearman testing between each of the LapSim components and the porcine scores demonstrated poor correlation across all components [Table 4] (all correlation P > 0.05), and hence poor predictive validity.
Table 4: Correlation of LapSim performance to intraoperative laparoscopic assessment
Discussion
Predictive validity of a simulator is the transferability of the skills learned on the simulator to real-life performance.  Our study failed to demonstrate the predictive validity of the LapSim simulator when correlated with the resident performance during laparoscopic nephrectomy in a porcine model.
Other studies have examined the predictive validity of the LapSim. Some of those studies demonstrated transferability of skills, while others did not. Larsen et al. randomized 24 junior gynecology residents to LapSim or standard surgical training, and their skills were assessed while performing salpingectomy. The intervention group showed marked improvement of their skills, including halving the operating time. Several other studies reached the same conclusion by demonstrating significant improvement of residents' or medical students' skills when trained on the LapSim.
Hogle et al. in 2008 evaluated the predictive validity of the LapSim on 21 PGY1 residents who later performed laparoscopic cholecystectomy on pigs, and failed to demonstrate significant improvement in their skills. In addition, Hogle et al. in 2009 also failed to demonstrate significant predictive validity of the LapSim simulator in further randomized studies.
This is the first study to examine the predictive validity of the LapSim in urology. In a recently published article, we failed to demonstrate the construct validity of the LapSim in urology training, whereas we previously demonstrated the construct validity of another simulator, ProMIS (Haptica, Ireland), in urology training. This is important because these simulators are expensive, and it is of paramount importance to identify the proper simulator that can benefit a training program by improving residents' surgical skills before they operate on humans. The LapSim simulator and its associated setup used here cost about $55,000.
It is possible that the small number of residents (n = 12) affected our results. In addition, there was poor correlation between the two observers in two of the scale domains for the porcine nephrectomy assessment (i.e. confidence of movement and efficiency). Further studies of this and other simulators are needed in order to identify the best simulators for use within a urology residency MIS training program.
Conclusion
We failed in this study to demonstrate the predictive validity of the LapSim simulator within our urology residency program when laparoscopic skills were examined during porcine nephrectomy.
References
Scott DJ, Bergen PC, Rege RV, Laycock R, Tesfay ST, Valentine RJ, et al. Laparoscopic training on bench models: Better and more cost effective than operating room experience? J Am Coll Surg 2000;191:272-83.
Ahlberg G, Enochsson L, Gallagher AG, Hedman L, Hogman C, McClusky DA 3rd, et al. Proficiency-based virtual reality training significantly reduces the error rate for residents during their first 10 laparoscopic cholecystectomies. Am J Surg 2007;193:797-804.
Fairhurst K, Strickland A, Maddern G. The LapSim virtual reality simulator: Promising but not yet proven. Surg Endosc 2011;25:343-55.
Vassiliou MC, Feldman LS, Andrew CG, Bergman S, Leffondré K, Stanbridge D, et al. A global assessment tool for evaluation of intraoperative laparoscopic skills. Am J Surg 2005;190:107-13.
Grantcharov TP, Kristiansen VB, Bendix J, Bardram L, Rosenberg J, Funch-Jensen P. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg 2004;91:146-50.
Larsen CR, Soerensen JL, Grantcharov TP, Dalsgaard T, Schouenborg L, Ottosen C, et al. Effect of virtual reality training on laparoscopic surgery: Randomised controlled trial. BMJ 2009;338:b1802.
Cosman PH, Hugh TJ, Shearer CJ, Merrett ND, Biankin AV, Cartmill JA. Skills acquired on virtual reality laparoscopic simulators transfer into the operating room in a blinded, randomised, controlled trial. Stud Health Technol Inform 2007;125:76-81.
Hyltander A, Liljegren E, Rhodin PH, Lönroth H. The transfer of basic skills learned in a laparoscopic simulator to the operating room. Surg Endosc 2002;16:1324-8.
Hogle NJ, Widmann WD, Ude AO, Hardy MA, Fowler DL. Does training novices to criteria and does rapid acquisition of skills on laparoscopic simulators have predictive validity or are we just playing video games? J Surg Educ 2008;65:431-5.
Hogle NJ, Chang L, Strong VE, Welcome AO, Sinaan M, Bailey R, et al. Validation of laparoscopic surgical skills training outside the operating room: A long road. Surg Endosc 2009;23:1476-82.
Kovac E, Azhar RA, Quirouet A, Delisle J, Anidjar M. Construct validity of the LapSim virtual reality laparoscopic simulator within a urology residency program. Can Urol Assoc J 2012;6:253-9.
Feifer A, Al-Ammari A, Kovac E, Delisle J, Carrier S, Anidjar M. Randomized controlled trial of virtual reality and hybrid simulation for robotic surgical training. BJU Int 2011;108:1652-6.
[Table 1], [Table 2], [Table 3], [Table 4]