Validation of a virtual reality-based surgical training for pedicle screw placement using vertebral templates

When innovation meets the operating room: how virtual reality helps practicing surgeons master new techniques before scrubbing in

Published in Surgery and Education


Explore the Research

Springer London

Validation of a virtual reality-based surgical training for pedicle screw placement using vertebral templates - Virtual Reality

Spinal surgery demands exceptional theoretical knowledge and practical skills, with pedicle screw procedures posing significant risks due to proximity to critical anatomical structures. This study validates a virtual reality (VR) simulation platform for training in pedicle screw arthrodesis using patient-specific vertebral drilling templates. A practical simulation of a specific case study was evaluated with both neurosurgical residents and medical students. Results demonstrate that while experienced residents completed simulated procedures significantly faster than students (p < 0.01), the students showed marked improvement across consecutive training sessions (p < 0.001). The most substantial performance gains occurred between the first and second trials, highlighting the rapid learning curve facilitated by the VR environment. System Usability Scale assessments revealed high satisfaction with the simulation platform, with participants emphasizing the value of risk-free repetitive practice. Qualitative feedback from experienced participants confirmed the precision and realism of the training environment as critical factors contributing to their efficiency. This validation confirms that VR simulation platforms offer an effective training environment for complex spinal procedures, providing a safe space for skill development before clinical application, particularly when incorporating innovative surgical aids such as patient-specific drilling templates.

Surgical training has come a long way. For standard, well-established procedures, the tools available today — cadaver labs, physical simulators, structured residency programmes — have proven effective at building the foundational skills that surgeons need. Residents learn to suture, to navigate anatomy, to handle instruments. The system works, within its limits.

But what happens when a genuinely new technique enters clinical practice?

This is where the traditional model struggles. An innovative procedure — one that introduces a novel instrument, a different anatomical approach, or a non-standard workflow — cannot simply be absorbed through repetition of familiar tasks. It requires a different kind of preparation: not just manual dexterity, but a deep, procedural understanding of something the surgeon has never done before. And in high-stakes environments like spinal surgery, where critical structures such as the spinal cord, nerve roots and major blood vessels lie within millimetres of the operative field, there is very little room to learn on the job.

This is the problem our research set out to address.

A familiar procedure, an innovative tool

Pedicle screw arthrodesis — the insertion of screws into the vertebral pedicles to stabilise the spine — is a well-established surgical technique. Surgeons perform it routinely for degenerative disease, trauma, scoliosis correction and tumour resection. In that sense, it is not new. But the introduction of patient-specific drilling templates changes the picture significantly.

These custom guides, fabricated from individual patient CT scan data using reverse engineering and 3D printing, allow surgeons to plan and execute screw trajectories with sub-millimetre precision. Studies have shown they can reduce screw malposition rates from around 15% to less than 5%. They represent a genuine step forward in surgical precision, while also drastically cutting the radiation dose absorbed by both patient and surgical team.

Yet precisely because they are innovative, they introduce a new layer of complexity. Using them correctly requires not just surgical skill, but a thorough understanding of how to position and apply the template itself — a workflow that experienced surgeons have not necessarily encountered before, and that novice practitioners cannot be expected to grasp without structured exposure. The technology is only as good as the training that accompanies it.

Virtual reality as a transfer vehicle for specialised knowledge

This is where virtual reality enters — not as a replacement for clinical experience, but as a tool specifically suited to bridging the gap between a new technique and the people who need to master it.

Our group at the University of Salerno, working with Techno DESIGN S.r.l. and the Department of Neurosurgery at the San Giovanni di Dio e Ruggi d'Aragona hospital in Salerno, developed a VR simulation platform designed precisely for this purpose. The virtual environment — built using Blender for 3D modelling and the Unity game engine — recreates an immersive operating theatre in which participants can practise the full procedural sequence of template-guided pedicle screw insertion, step by step, without any risk to a real patient.

What makes this approach particularly relevant for innovative technique transfer is the patient-specific dimension. Templates in our system are derived from real CT data, meaning the simulation mirrors not a generic anatomy, but the kind of case-specific variability that surgeons actually encounter. Trainees are guided to learn the concrete reasoning behind each step — why the template is positioned as it is, what it is compensating for, what a correct versus incorrect placement looks and feels like in the virtual environment.

What the data showed

We recruited 40 participants — 20 neurosurgical residents familiar with pedicle screw procedures, and 20 medical students with no prior surgical experience — to validate the platform. Residents completed the simulation once, establishing a performance benchmark. Students repeated it three times, allowing us to observe the learning curve directly.

The results were clear. Medical students began with significantly longer completion times than residents (5.74 minutes versus 4.0 minutes on average). By their third session, they had reduced that gap substantially, reaching a mean time of 4.6 minutes and an efficiency index of nearly 90% relative to expert performance. The most dramatic improvement occurred between the first and second sessions — a pattern consistent with the rapid acquisition of procedural schema that VR training, with its structured guidance and immediate feedback, is particularly well-suited to facilitate.
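The arithmetic behind these figures can be sketched in a few lines. Note that the post quotes an efficiency index of "nearly 90%" without defining it; the ratio of expert mean time to trainee mean time used below is an assumption that happens to reproduce the quoted value.

```python
# Minimal sketch of the reported timing figures. The efficiency-index
# definition (expert mean time / trainee mean time) is an assumption;
# the post does not state how the index was computed.

RESIDENT_MEAN_MIN = 4.0    # residents' benchmark mean time (minutes)
STUDENT_FIRST_MIN = 5.74   # students' first-session mean (minutes)
STUDENT_THIRD_MIN = 4.6    # students' third-session mean (minutes)

def efficiency_index(trainee_mean, expert_mean=RESIDENT_MEAN_MIN):
    """Ratio of expert time to trainee time; 1.0 means expert-level speed."""
    return expert_mean / trainee_mean

first = efficiency_index(STUDENT_FIRST_MIN)   # ~0.70 on the first attempt
third = efficiency_index(STUDENT_THIRD_MIN)   # ~0.87, i.e. "nearly 90%"
print(f"session 1: {first:.0%}, session 3: {third:.0%}")
```

Under this definition, students move from roughly 70% of expert pace in their first session to roughly 87% by their third.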

The reduction in assistance requests reinforced this picture. Students went from an average of 1.6 requests per session in the first trial to just 0.1 in the third — a 94% reduction. By the final session, nine out of ten participants completed the procedure entirely independently. Usability assessments using the System Usability Scale confirmed high satisfaction across both groups, with residents noting that the precision and realism of the environment matched their clinical expectations.
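The quoted 94% figure follows directly from the two session means; a minimal check:

```python
# Sketch verifying the reported drop in assistance requests from the
# session means quoted in the post (1.6 -> 0.1 requests per session).

first_session_requests = 1.6
third_session_requests = 0.1

reduction = (first_session_requests - third_session_requests) / first_session_requests
print(f"reduction: {reduction:.0%}")   # 93.75%, reported rounded as 94%
```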

The broader implication

What this study ultimately demonstrates is not simply that VR training works, but that VR is particularly well-positioned to support the adoption of innovative surgical techniques that carry inherent risk during the learning phase.

When a new tool or method enters clinical practice, the window between its introduction and its safe, widespread use depends entirely on how effectively knowledge can be transferred to the surgeons who need it. Physical models help. Observation helps. But neither offers the combination of immersion, repetition, patient-specificity, and zero clinical risk that a well-designed VR platform can provide.

Patient-specific drilling templates for pedicle screw arthrodesis represent one example. The principle, however, extends far beyond this single procedure. As surgical innovation accelerates — bringing new implants, new navigational tools, new minimally invasive approaches — the question of how to train surgeons to use them safely before they enter the operating room will only become more pressing. Surgery has always been learned by doing. Virtual reality does not change that truth, but it can help ensure that, when the moment of doing finally arrives, it is not also the moment of learning.

 
