A vision transformer for decoding surgical activity

A machine learning system leveraging a vision transformer and supervised contrastive learning accurately decodes elements of intraoperative surgical activity from videos commonly collected during robotic surgeries.
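For readers curious how the two ingredients named above fit together, here is a minimal PyTorch sketch (not the authors' code; every class, function and parameter name below is illustrative) of a ViT-style frame encoder paired with a supervised contrastive loss that pulls embeddings of frames sharing an activity label together while pushing other frames apart.

```python
# Minimal illustrative sketch: a tiny ViT-style encoder over video frames plus a
# supervised contrastive (SupCon-style) loss. Names and hyperparameters are
# assumptions for illustration, not details taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyViTEncoder(nn.Module):
    """Patch-embed a frame, prepend a CLS token, run a transformer encoder."""

    def __init__(self, image_size=224, patch_size=16, dim=256, depth=4, heads=8):
        super().__init__()
        n_patches = (image_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                                        # x: (B, 3, H, W)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        return self.encoder(tokens)[:, 0]                        # CLS embedding, (B, dim)


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Pull same-label embeddings together, push different-label ones apart."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                            # pairwise similarities
    mask_self = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask_self, float("-inf"))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels[:, None] == labels[None, :]) & ~mask_self
    pos_counts = positives.sum(1).clamp(min=1)
    pos_log_prob = torch.where(positives, log_prob, torch.zeros_like(log_prob))
    return -(pos_log_prob.sum(1) / pos_counts).mean()


if __name__ == "__main__":
    frames = torch.randn(8, 3, 224, 224)      # a batch of (hypothetical) surgical video frames
    activities = torch.randint(0, 4, (8,))    # hypothetical activity labels for those frames
    encoder = TinyViTEncoder()
    loss = supervised_contrastive_loss(encoder(frames), activities)
    print(loss.item())
```

In the sketch, the loss normalizes each CLS embedding and, for every anchor frame, averages the log-probability of its same-label positives under a temperature-scaled softmax over all other frames in the batch.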

The cover illustrates a machine-learning system that decodes intraoperative surgical activity from videos commonly collected during robotic surgeries.

See Kiyasseh et al.

Image: Melanie Lee, SayoStudio. Cover design: Alex Wing.
