Craniosynostosis is a congenital defect that implies the premature fusion of one or more of the cranial sutures that separate the skull bones. It affects one in every 2000 to 2500 live births and produces cranial deformities that may limit brain growth. This condition may be induced by genetic, teratogenic, or mechanical causes, or can even arise sporadically. Considering that a newborn brain quadruples its size during the first year of life, this defect can result in very important functional and structural alterations.

Identifying the different phases in a surgical procedure could be beneficial in many aspects. On the one hand, it can facilitate intraoperative support, providing automated assistance and objective feedback; moreover, real-time warnings can be displayed when unexpected workflow variations or adverse events are detected, reducing the rate of complications in the operating room (OR) and enhancing patient safety. Measuring the actual time of each surgical step may also improve the communication and coordination of the clinical staff, increasing the hospital's efficiency. On the other hand, a surgeon's expertise influences post-operative results: novice surgeons are more prone to errors in the OR. In fact, students who have trained with simulators tend to perform better than those who followed traditional learning. Consequently, contextual support may guide junior clinicians during their first interventions, increasing their confidence and enhancing their outcomes. Despite the number of surgical procedures in which AI has improved workflow analysis, to the best of our knowledge, no studies have applied these techniques to open cranial vault remodeling for the correction of craniosynostosis. If tool detection algorithms provide good results, the next step is the automatic recognition of surgical workflows.
Deep learning is a recent technology that has shown excellent capabilities for recognition and identification tasks. This study applies these techniques to open cranial vault remodeling surgeries performed to correct craniosynostosis. The objective was to automatically recognize surgical tools in real time and to estimate the surgical phase based on those predictions. For this purpose, we implemented, trained, and tested three algorithms based on previously proposed convolutional neural network architectures (VGG16, MobileNetV2, and InceptionV3) and one new architecture with fewer parameters (CranioNet). A novel 3D Slicer module was specifically developed to implement these networks and recognize surgical tools in real time via video streaming. The training and test data were acquired during a surgical simulation using a 3D-printed, patient-based realistic phantom of an infant's head. The results showed that CranioNet presents the lowest accuracy for tool recognition (93.4%), while the highest accuracy is achieved by MobileNetV2 (99.6%), followed by VGG16 and InceptionV3 (98.8% and 97.2%, respectively). Regarding phase detection, InceptionV3 and VGG16 obtained the best results (94.5% and 94.4%), whereas MobileNetV2 and CranioNet presented lower values (91.1% and 89.8%). Our results prove the feasibility of applying deep learning architectures for real-time tool detection and phase estimation in craniosynostosis surgeries.
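The study estimates the surgical phase from the stream of per-frame tool predictions. The text does not specify how predictions are aggregated, so the following is only a minimal sketch of one plausible approach: smoothing the per-frame tool labels with a sliding-window majority vote and mapping the winning tool to a phase. The tool names and the `TOOL_TO_PHASE` mapping are hypothetical placeholders, not the phases defined in the study.

```python
from collections import Counter, deque

# Hypothetical tool-to-phase mapping for illustration only; the actual
# phase definitions for cranial vault remodeling are not given in the text.
TOOL_TO_PHASE = {
    "scalpel": "incision",
    "periosteal_elevator": "dissection",
    "craniotome": "osteotomy",
    "plate_bender": "remodeling",
}

def estimate_phase(tool_predictions, window=5):
    """Smooth noisy per-frame tool predictions with a sliding majority
    vote, then map the dominant tool to its surgical phase."""
    recent = deque(maxlen=window)   # last `window` tool labels
    phases = []
    for tool in tool_predictions:
        recent.append(tool)
        # Most frequent tool within the current window wins the vote.
        majority_tool, _ = Counter(recent).most_common(1)[0]
        phases.append(TOOL_TO_PHASE.get(majority_tool, "unknown"))
    return phases

# A single misclassified frame ("craniotome") is absorbed by the vote,
# so the estimated phase stays stable.
frames = ["scalpel", "scalpel", "craniotome", "scalpel", "scalpel"]
print(estimate_phase(frames, window=3)[-1])  # → incision
```

The smoothing window trades responsiveness for stability: a larger `window` suppresses isolated misclassifications but delays the detection of genuine phase transitions, which matters for real-time warnings.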