SISE - Standardisation in Surgical Education
SISE stands for Standardisation in Surgical Education. The programme aims to develop innovative and effective Hands-on Training (HOT) programmes that offer practice and development of individual skills. The goal of surgical training is to produce a surgeon who is highly competent and confident in performing surgical procedures, and thereby to mitigate the risk of complications.
Mission
To produce a surgeon who is highly competent and confident in performing surgical procedures, and thereby mitigate the risk of complications
- Providing international guidance on surgical training, allowing surgeons to achieve competence by following pre-defined, standardized pathways
- Spreading SISE standards globally through courses, events and dedicated study programmes, to raise the level of urological care throughout Europe and beyond
- National implementation of courses with international standards
- Creating an international network of certified trainers and training centers
- Collecting data to improve the educational system through certification, provide new quality benchmarks and ultimately increase patient safety
- Contributing to the determination of European urological health care policies
Founding members
J. Palou, Barcelona (ES)
E. Liatsikos, Patras (GR)
B. Van Cleynenbreugel, Leuven (BE)
A. Gözen, Heilbronn (DE)
B. Somani, Southampton (GB)
D. Veneziano, Reggio Calabria (IT)
K. Ahmed, London (GB)
T. Brouwers, Wijchen (NL)
Contact
SiSE Urology
Attn Mr. T. Brouwers, Coordinator
European School of Urology
Mr. E.N. van Kleffensstraat 5
6842 CV Arnhem
The Netherlands
E: info@sise-urology.com
How we work
Simulation is a representation of a situation over time, useful for predicting and optimizing its outcomes. Its development is therefore critical to producing the desired effects. Following a systematic methodology, despite the considerable time it requires, is likely to produce effective results. One of the most reliable development methodologies was designed by R. Satava and is called “Full life-cycle curriculum development” (1). The development process begins with the definition of the expected outcomes.
This is decided by a consensus of experts and relates to a full procedure or part of it. After this first decisional step, the operation to be simulated is analyzed in depth, through a process called Cognitive Task Analysis (CTA) (2). The CTA can be structured as an interview with one or more experts, with the aim not only of defining the individual steps of the procedure, but also of understanding the decision pathway followed by the expert. A series of questions is formulated to define the different options, the reasons why one approach can be more favorable than another, and the information needed to decide which option is best. Within the CTA, the following data need to be collected:
– Indications
– Contraindications
– Equipment
– Procedural steps (listing all available techniques)
– Do’s
– Don’ts
Each of these data points will later be used to build specific parts of the curriculum. Once the CTA is completed, it is compared against the guidelines to avoid any experience-based bias and is then shared with the experts. In this phase, the Delphi method can be used to reach consensus.
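As an illustration of how such a consensus round can be operationalized, the sketch below tallies one Delphi round over candidate CTA statements. It is a minimal sketch only: the 1-9 rating scale and the 80% agreement threshold are assumptions made for this example, not values prescribed by SISE.

# Minimal sketch of tallying one Delphi round over CTA statements.
# The 1-9 rating scale and the 80% agreement threshold are illustrative
# assumptions; an actual panel would choose its own scale and cut-off.

def tally_round(ratings, threshold=0.8):
    """Split statements into accepted ones and ones needing another round.

    ratings maps each statement to the experts' scores (1 = reject,
    9 = endorse). A statement reaches consensus when the share of high
    scores (7-9) meets the threshold; the rest are re-circulated with
    the panel's feedback.
    """
    accepted, revisit = [], []
    for statement, scores in ratings.items():
        agreement = sum(1 for s in scores if s >= 7) / len(scores)
        (accepted if agreement >= threshold else revisit).append(statement)
    return accepted, revisit

round_one = {
    "Insert the guidewire under fluoroscopic control": [8, 9, 7, 8, 9],
    "Dilate the tract before sheath placement": [5, 9, 6, 4, 8],
}
consensus, next_round = tally_round(round_one)
print("Consensus reached:", consensus)    # first statement: 5/5 scores >= 7
print("Re-rate next round:", next_round)  # second statement: only 2/5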
After the preliminary data collection, the actual educational protocol is designed. The information gathered within the CTA is managed as follows:
– Indications and contraindications provide information for the cognitive addendum of the training session;
– Equipment serves as an instrumentation list and clarifies what is needed to set up the hands-on training session;
– Procedural steps provide core information about the operation to simulate and its details;
– Do’s and don’ts allow the goals and errors of the training tasks to be defined.
Based on these considerations, a preliminary training-task description is produced, specifying not only the sequence of maneuvers to perform, but also the errors to avoid and the requirements of the simulator to be used, which will drive the next development step.
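For illustration only, the sketch below shows one way the CTA output could be structured in code and translated into the preliminary training-task description; the class and field names are assumptions made for this example, not part of any SISE specification.

# Minimal sketch of carrying CTA data into the training-task description.
# All names are illustrative; SISE does not prescribe a data format.
from dataclasses import dataclass

@dataclass
class CognitiveTaskAnalysis:
    indications: list[str]        # feeds the cognitive addendum
    contraindications: list[str]  # feeds the cognitive addendum
    equipment: list[str]          # becomes the instrumentation/set-up list
    steps: list[str]              # core of the operation to simulate
    dos: list[str]                # become the goals of the training tasks
    donts: list[str]              # become the errors to detect

def training_task_description(cta):
    """Derive the preliminary training-task description from the CTA."""
    return {
        "cognitive_addendum": cta.indications + cta.contraindications,
        "setup": cta.equipment,
        "sequence": cta.steps,  # maneuvers to perform, in order
        "goals": cta.dos,       # what a correct performance must show
        "errors": cta.donts,    # what the simulator must expose or detect
    }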
Simulator selection and development is the most stimulating part of the process, as it allows all the devices relevant to the designed tasks to be checked and tested. In a first phase, the pertinent simulators on the market are tested. This is critical to highlight their pros and cons, to check the feasibility of the task on each of them, and to identify possible upgrades or modifications. Tests are usually run by a cohort of experts who receive a detailed task description in advance. If a simulator can be used with minimal modifications, it is usually preferred to a brand-new product design, which may require dedicated investment. If no system fulfills the requirements, a new simulator is designed. Simulator development requires close collaboration between educators, engineers and physicians. While the educator provides insight into the correct methodology to use, the engineers develop each component (3D printing, electronics, materials, software) and the physician double-checks and provides support throughout the process. After early development, the prototype is tested again by the experts, who may request changes to allow easier replication of the original training task. Once the simulator has been finalized, it is checked once more for any production modifications.
Validation might be considered the most important step, but careful development drastically increases the chances of success. According to the latest concept inspired by Messick’s framework of validity (3,4), validation focuses mainly on how the simulator was designed, how relevant the background of the surgeon approaching it is, and how well the assessment captures the actual acquisition of skills (5). According to the updated validity taxonomy summarized by Goldenberg (6), validation includes the following aspects: test content, response processes, internal structure, relationships to other variables, and consequences of testing. Test content pertains to the ability of the simulator to produce the expected outcomes, usually decided by a cohort of experts. Response processes concern the analysis of the assessment methodology and its ability to reflect and score the observed performance of the trainee. Internal structure focuses again on the assessment methodology, its replicability and its statistical reliability. Relationships to other variables correlate the performance with known measures of skill or ability, such as the clinical background of the participant. Consequences of testing consider the relationship between the assessment and what follows the training itself (e.g. improvement in the surgical field).
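As a compact restatement, these five sources of validity evidence can be read as a checklist; the guiding questions below paraphrase the paragraph above and are illustrative wording, not official SISE or Goldenberg phrasing.

# The five sources of validity evidence (6), paraphrased as guiding
# questions for a curriculum developer; wording is illustrative.
VALIDITY_EVIDENCE = {
    "test content": "Does the simulator produce the outcomes the expert panel defined?",
    "response processes": "Does the assessment reflect and score the observed performance?",
    "internal structure": "Is the assessment replicable and statistically reliable?",
    "relationships to other variables": "Does performance track known measures of skill, e.g. clinical background?",
    "consequences of testing": "Does the assessment relate to what follows training, e.g. improvement in the surgical field?",
}

for source, question in VALIDITY_EVIDENCE.items():
    print(f"{source}: {question}")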
Validation is, however, not absolute: a valid simulator might be more or less beneficial to a trainee, depending on several variables and, most importantly, on the teaching ability of the tutor (7).
Before becoming an actual “assessment tool”, the validated protocol has to be tested as a “training tool” on a large scale, together with its simulator. Wide dissemination shows whether the teaching model is feasible in a regular setting, which can be a simulation center, a university class or a conference, depending on the previously set goals. The implementation phase tests the portability of the simulator, the replicability of the training session and, overall, the “standardizability” of the entire training system. Feedback is collected in this phase to check whether the participants and the funding companies are satisfied and whether their expectations are met. Once again, since standardization is the core of high-quality training, this phase is fundamental to making sure that everything works correctly.
The final part of the curriculum development endorses the assessment properties of the entire protocol and gives meaning to the name suggested by Satava: full life-cycle curriculum development (1). Indeed, issuing the certification confirms acquisition of the skills, outcomes and metrics planned during the first phase. This “closes the circle”: what is certified must correspond exactly to what was expected from the very beginning, during the early consensus meetings.
History
The SISE programme was founded in 2019 by the European School of Urology (ESU), which delivers the education needed by urologists on behalf of the EAU Education Office. The ESU successfully applied for an ERASMUS+ grant in collaboration with six leading institutions in the field of minimally invasive surgery (MIS): Academisch Medisch Centrum (NL), SLK-Kliniken Heilbronn (DE), Institutul Oncologic Cluj-Napoca (RO), Uniwersytet Mikołaja Kopernika w Toruniu (PL), Univerzita Karlova (CZ) and Panepistimio Patron (GR).
Addressing the need for modern urological training, multiple methodologies have been developed over the last 20 years, starting from the original “learning-by-watching” dogma. This has produced a large pool of surgeons around Europe who reached professional proficiency through individualized modalities. However, this process did not follow a planned pathway and lacked measurability and replicability between countries. At the EAU we believe that one of the most important goals of a scientific society is to spread education that guarantees the highest standards of patient safety. Therefore, in 2011 the European School of Urology (ESU) started developing standardized training programmes for urological surgery. This work was further supported and optimized from September 2013 by the newly instituted “ESU training research group”, which coordinated the development and testing of new protocols in collaboration with the EAU section offices. In 2019 the ESU training research group successfully applied for an ERASMUS+ grant to kick-start the SISE programme.
References
1. Satava R, Gallagher A. Next generation of procedural skills curriculum development: Proficiency-based progression. J Health Spec. 2015;3(4):198. doi:10.4103/1658-600X.166497
2. Salmon P, Stanton N, Gibbon A, Jenkins D, Walker G. Cognitive Task Analysis. In: Human Factors Methods and Sports Science; 2009. doi:10.1201/9781420072181-c4
3. Messick S. Validity of Psychological Assessment. Am Psychol. 1995. doi:10.1037//0003-066X.50.9.741
4. Korndorffer JR, Kasten SJ, Downing SM. A call for the utilization of consensus standards in the surgical education literature. Am J Surg. 2010;199(1):99-104. doi:10.1016/j.amjsurg.2009.08.018
5. Sweet RM, Hananel D, Lawrenz F. A unified approach to validation, reliability, and education study design for surgical technical skills training. Arch Surg. 2010. doi:10.1001/archsurg.2009.266
6. Goldenberg M, Lee JY. Surgical Education, Simulation, and Simulators—Updating the Concept of Validity. Curr Urol Rep. 2018;19(7). doi:10.1007/s11934-018-0799-7
7. Satava RM. The future of surgical simulation and surgical robotics. Bull Am Coll Surg. 2007.