Resource Type: Presentation

P98. Integrating Augmented Reality in Surgery for Medical Education and Telesurgery

May 8, 2023


Source:
103rd Annual Meeting, Los Angeles Convention Center (Exhibit Hall), Los Angeles, CA, USA

Objective
Augmented reality (AR) is a novel technology that enables the seamless visualization and superimposition of virtual content onto a real-world view. In surgery, AR has the potential to improve the technical feasibility, accuracy, and safety of many procedures by overlaying relevant data onto the surgical field. The objective of this study is to demonstrate an open-source and accessible framework for AR-powered surgical vision for medical education and telesurgery, without requiring additional specialized equipment or software.

Methods
A mobile and tablet-based application was developed from the ground up for real-time streaming of live surgical footage between a host and a remote device, with a bi-directional flow of information: the remote viewer receives the surgical footage and an audio stream from the host, while the host receives audio guidance or reference information, along with any 2D/3D media placed by the remote viewer in the surgical field with real-world anchors. The quality of the streamed video is dynamically adjusted, both spatially and temporally, to prioritize audio communication. To accurately place a 3D object in the host's view from a 2D screen tap on the remote view, raycasting is performed by creating a ray originating from the tap location and intersecting it with a detected plane or structure in the scene; these raycasts are continuously tracked and updated to refine the position of the added objects (see the sketch below). The framework's adaptability was tested during transverse aortic constriction in a rat model.
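To illustrate the tap-to-anchor step, the following is a minimal Swift/ARKit sketch of how a remote viewer's 2D tap could be resolved into a continuously tracked 3D placement on the host device. It assumes the tap is relayed as normalized screen coordinates; the class and method names (RemoteAnnotationPlacer, placeMarker) are hypothetical and not taken from the study's codebase.

import ARKit
import SceneKit

/// Hypothetical host-side helper: the remote viewer's tap arrives as
/// normalized (0-1) coordinates and is resolved against the host's AR scene.
final class RemoteAnnotationPlacer {
    private let sceneView: ARSCNView
    private let markerNode: SCNNode
    private var trackedRaycast: ARTrackedRaycast?

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
        // Simple sphere used as the remote viewer's marker in the surgical field.
        markerNode = SCNNode(geometry: SCNSphere(radius: 0.005))
        sceneView.scene.rootNode.addChildNode(markerNode)
    }

    /// Cast a ray from the tapped 2D point into the 3D scene and keep the
    /// marker anchored to the surface it intersects.
    func placeMarker(normalizedTap: CGPoint) {
        // Map the normalized tap into the host view's coordinate space.
        let point = CGPoint(x: normalizedTap.x * sceneView.bounds.width,
                            y: normalizedTap.y * sceneView.bounds.height)

        // Build a raycast query against estimated planes of any alignment.
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .estimatedPlane,
                                                 alignment: .any) else { return }

        // Stop refining any previous placement before starting a new one.
        trackedRaycast?.stopTracking()

        // A tracked raycast is re-evaluated by ARKit as scene understanding
        // improves, so the marker's world position is continuously refined.
        trackedRaycast = sceneView.session.trackedRaycast(query) { [weak self] results in
            guard let self = self, let result = results.first else { return }
            let t = result.worldTransform
            self.markerNode.position = SCNVector3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
        }
    }
}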

Results
Our results demonstrate the ability to guide the host viewer by anchoring 3D models extracted from CT/MRI scans in the surgical field to facilitate the identification of anatomical structures, while relaying vital information in real time without distracting from the task at hand. We also demonstrate the ability to highlight structures of interest using a 2D pointer controlled by the remote viewer. Finally, we showcase the flexibility of our approach as a future platform for augmented surgical vision by running a machine-learning model for hand pose estimation in real time to accurately predict surgical actions.
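For context on the hand pose estimation component, below is a minimal sketch of per-frame hand pose detection using Apple's Vision framework, as one way such a model could run in real time on the host's camera feed. This is an assumed, illustrative pipeline; the actual model and the downstream action-prediction step used in the study are not reproduced here.

import Vision
import CoreVideo

/// Hypothetical per-frame hand pose estimation on the host's camera feed.
/// The detected joints could feed a downstream action-recognition model (not shown).
func detectHandPoses(in pixelBuffer: CVPixelBuffer) throws -> [VNHumanHandPoseObservation] {
    // Request up to two hands (the operating surgeon's).
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2

    // Run the request directly on the camera pixel buffer.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
    try handler.perform([request])
    return request.results ?? []
}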

Conclusion
AR is a rapidly evolving technology with the potential to augment surgical vision and navigation through improved flow of information. This study serves as a proof-of-concept to jumpstart the development of an open-source ecosystem for low-cost and powerful AR-enabled applications in surgery.


Cyril Zakka (1), Alex Dalal (1), Rohan Shad (1), Robyn Fong (1), Curran Phillips (1), Jack Boyd (1), William Hiesinger (1), (1) Stanford University Medical Center, Stanford, CA


Cyril Zakka

Poster Presenter

Dr. Cyril Zakka completed his medical training at the American University of Beirut Medical Center (AUBMC), where he also founded and directed the Artificial Intelligence in Medicine (AIM) program. Over the past four years, he has worked at the intersection of AI, robotics, and medicine to develop algorithms that improve patient care, both in the clinic and in the operating room. He is currently a postdoctoral scholar in the Hiesinger Lab at Stanford University, with a research focus on surgical robotics and cardiac imaging.

Specialties: Multi-Specialty, General Interests of CardioThoracic Surgeons, Education, Educational Research, General Education, Residency Education, Treatment/Procedure/Operation/Surgery, General Thoracic, Thoracic, Procedures