WO2022245833A2 - System, method and apparatus for robotically performing a medical procedure - Google Patents
- Publication number
- WO2022245833A2 (PCT/US2022/029645)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- computer
- data
- sensor data
- medical
- surgery system
- Prior art date
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00203—Electrical control of surgical instruments with speech control or speech recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00207—Electrical control of surgical instruments with hand gesture control or hand gesture recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00216—Electrical control of surgical instruments with eye tracking or head position tracking control
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
- A61B2034/2057—Details of tracking cameras
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2059—Mechanical position encoders
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
- A61B2034/256—User interfaces for surgical systems having a database of accessory information, e.g. including context sensitive help or scientific articles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/06—Measuring instruments not otherwise provided for
- A61B2090/064—Measuring instruments not otherwise provided for for measuring force, pressure or mechanical tension
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/06—Measuring instruments not otherwise provided for
- A61B2090/064—Measuring instruments not otherwise provided for for measuring force, pressure or mechanical tension
- A61B2090/066—Measuring instruments not otherwise provided for for measuring force, pressure or mechanical tension for measuring torque
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/06—Measuring instruments not otherwise provided for
- A61B2090/067—Measuring instruments not otherwise provided for for measuring angles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/371—Surgical systems with images on a monitor during operation with simultaneous use of two cameras
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/376—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
Definitions
- the present disclosure is generally related to a medical procedure that utilizes a number of sensed parameters to generate recommended positions of surgical instruments.
- BACKGROUND [0002]
- minimally invasive robotic surgical or tele-surgical systems have been developed to increase dexterity and avoid some of the limitations of traditional minimally invasive techniques.
- the surgeon can be provided with an image of the surgical site at the surgical workstation.
- One embodiment of the disclosure is directed to a method (“the method”) comprising: acquiring first visual sensor data from one or more of a plurality of visual sensing devices, during a pre-operative stage; performing machine learning on the first visual sensor data; generating a recommended trajectory for a surgical instrument based on the machine learning; acquiring second visual sensor data from one or more of a plurality of visual sensing devices during an operative stage; performing voting on the first visual data and the second visual data; modifying the recommended trajectory for the surgical instrument based on the voting; and controlling movement of the surgical instrument based on the modified recommended trajectory.
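- By way of non-limiting illustration only, the claimed flow could be sketched in software roughly as follows; the function names, the Trajectory fields, and the simple averaging "vote" rule are assumptions made for the sketch, not elements of the disclosure.

```python
from dataclasses import dataclass, replace
from statistics import mean
from typing import Callable, List, Sequence


@dataclass
class Trajectory:
    x: float          # instrument tip position (mm)
    y: float
    z: float
    angle: float      # insertion angle (degrees)
    velocity: float   # advance rate (mm/s)


def vote(angle_estimates: Sequence[float]) -> float:
    # Toy "voting" rule for the sketch: average the per-sensor angle estimates.
    return mean(angle_estimates)


def run_procedure(pre_op_sensors: List[Callable[[], float]],
                  intra_op_sensors: List[Callable[[], float]],
                  recommend: Callable[[List[float]], Trajectory],
                  move: Callable[[Trajectory], None]) -> Trajectory:
    first = [read() for read in pre_op_sensors]           # pre-operative visual sensor data
    trajectory = recommend(first)                         # machine-learned recommendation
    second = [read() for read in intra_op_sensors]        # operative-stage visual sensor data
    agreed_angle = vote(first + second)                   # vote over the first and second data
    trajectory = replace(trajectory, angle=agreed_angle)  # modify the recommendation
    move(trajectory)                                      # control instrument movement
    return trajectory
```

- A caller would supply real sensor callbacks, a trained recommendation model, and a robot interface in place of the placeholder callables.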
- Another embodiment is directed to the method described above, further comprising acquiring audio sensor data from one or more of a plurality of audio sensing devices, during the operative stage; and utilizing the audio sensor data in the voting.
- Another embodiment is directed to the method described above further comprising acquiring haptic data from one or more of a plurality of haptic sensing devices, during the operative stage; and utilizing the haptic data in the voting.
- Another embodiment is directed to the method described above, further comprising displaying a representation of the movement of the surgical instrument at a heads-up display.
- Another embodiment is directed to the method described above, further comprising performing image training on the first visual sensor data and the second visual sensor data.
- Another embodiment is directed to the method described above, further comprising accessing input from one or more surgeons; and utilizing the input in the voting.
- Another embodiment is directed to the method described above, further comprising assigning a weight to the input from one or more surgeons; and utilizing the weight in the voting.
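- A minimal sketch of such weighted voting follows, assuming a hypothetical mapping from surgeon identifiers to (preferred option, assigned weight) pairs; the tallying rule is an illustrative choice, not one specified by the disclosure.

```python
from collections import defaultdict
from typing import Dict, Tuple


def weighted_vote(surgeon_inputs: Dict[str, Tuple[str, float]]) -> str:
    """surgeon_inputs maps a surgeon identifier to (preferred option, assigned weight)."""
    tally: Dict[str, float] = defaultdict(float)
    for option, weight in surgeon_inputs.values():
        tally[option] += weight
    return max(tally, key=tally.get)


# Two concurring surgeons outweigh one dissenting vote.
choice = weighted_vote({
    "surgeon_a": ("medialize_trajectory", 0.5),
    "surgeon_b": ("medialize_trajectory", 0.3),
    "surgeon_c": ("keep_trajectory", 0.4),
})
assert choice == "medialize_trajectory"
```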
- Another embodiment is directed to the method described above, wherein the first visual sensor data and the second visual sensor data are acquired from a mesh network.
- Another embodiment is directed to the method described above, further comprising accessing input from one or more surgeons; and utilizing the input to generate the recommendation.
- Another embodiment is directed to the method described above, further comprising accessing preferences associated with a particular individual; and modifying the recommended trajectory based, at least in part, on the preferences associated with a particular individual.
- Another embodiment is directed to a computer-assisted robotic surgery system comprising: a navigation system for tracking a relative position of a patient region and one or more medical tools, the navigation system including two or more receivers configured to monitor an aspect of a medical procedure activity; a robotic arm; an actuator assembly operatively engaged with the robotic arm; and a computer operatively coupled to the actuator, having computer instructions that when executed: determine a reference frame for the patient region based on a neural network model trained on sensor data and on image training data sampled from a sensor system positioned to monitor an aspect of the medical procedure activity; track, using the navigation system, a relative position of the patient region; and determine a position, angle, and velocity for one of the medical tools relative to the patient region based on the neural network model trained on the sensor data and on the image training data.
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the computer instructions are executed to apply, using the robotic arm, control forces to the patient region based on the determined position, angle, and velocity for the one of the medical tools while the robotic surgical apparatus is engaged with the patient region.
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the sensor data includes one or more of position, angle, force, torque, audio data, and haptic data.
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the angle is acquired based on an angle sensor coupled to an x-ray.
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the position is acquired based on fiducial markers disposed around the patient region.
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the image training data is determined based on medical images that are annotated and classified by one or more medical providers.
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the neural network model is trained using reinforcement learning based, at least in part, on the image training data that is annotated and classified by the one or more medical providers.
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the model is determined using a convolutional neural network.
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the model is a generative model trained via an adversarial learning process (GAN learning).
- Another embodiment is directed to the computer-assisted robotic surgery system, wherein the model is refined by simulating segmentation of vertebral bodies.
- FIG.1 illustrates an exemplary computer-assisted robotic surgery system in accordance with an exemplary embodiment.
- FIGS.2A-2D illustrate an exemplary machine learning model for use by the computer-assisted robotic surgery system as described herein.
- FIG.3 illustrates a flowchart of an exemplary method for refining the training of the machine learning model for use by the computer-assisted robotic surgery system, as described herein.
- FIG.4 illustrates an exemplary computer-assisted robotic surgery system in accordance with an exemplary embodiment of the subject disclosure.
- FIG.5 illustrates a flowchart of an exemplary method for performing a medical procedure activity using the computer-assisted robotic surgery system as described herein.
- FIG.6 illustrates a process according to an embodiment of the disclosure.
- FIG.7 illustrates a network environment according to an embodiment of the disclosure. DETAILED DESCRIPTION [0011]
- as used herein, "a" means "at least one."
- the terminology includes the words above specifically mentioned, derivatives thereof, and words of similar import.
- “Substantially” as used herein shall mean considerable in extent, largely but not wholly that which is specified, or an appropriate variation therefrom as is acceptable within the field of art. “Exemplary” as used herein shall mean serving as an example.
- various aspects of the subject disclosure can be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the subject disclosure. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 2.7, 3, 4, 5, 5.3, and 6. This applies regardless of the breadth of the range.
- FIG.1 illustrates an exemplary computer-assisted robotic surgery system 100 in accordance with an exemplary embodiment.
- the computer-assisted robotic surgery system 100 begins in a pre-operative training phase by acquiring sensor data, including, for example, pre-operative patient image data of a patient region.
- the pre-operative training phase continues with the computer-assisted robotic surgery system using the sensor data to construct and refine a machine learning model based on the acquired sensor data.
- the computer-assisted robotic surgery system proceeds to an intraoperative execution phase during which the system 100 determines and recommends a trajectory for a medical tool relative to the patient region, based on the machine learning model.
- the computer-assisted robotic surgery system 100 includes a patient 102, a patient body region 104, a navigation system 106, a plurality of sensors, or receivers 108(a)...(n), a robotic surgical apparatus 110, a robotic arm 112, a surgical instrument 113, and a computer 114.
- the computer-assisted robotic surgery system 100 begins in a pre-operative training phase by acquiring or obtaining sensor data relating to an activity of a medical procedure to be performed on a region 104 of a patient 102.
- the computer-assisted robotic surgery system 100 can enhance any suitable medical procedure.
- medical procedures include, but are not limited to, endoscopy, interventional radiology, or any other medical procedure in which a medical provider uses a medical tool.
- medical procedures include general surgery, thoracic surgery, colon and rectal surgery, obstetrics and gynecology, gynecologic oncology, neurological surgery, ophthalmic surgery, oral and maxillofacial surgery, orthopedic surgery, otolaryngology, pediatric surgery, plastic and maxillofacial surgery, urology, vascular surgery, and the like.
- Example orthopedic surgeries include hand surgery, sports medicine, pediatric orthopedics, spine surgery, foot and ankle orthopedics, joint replacement, trauma surgery, oncology, and the like.
- the region 104 of the patient 102 can be a specific target, such as the L3 vertebra right pedicle on which a surgeon will perform an activity of a medical procedure such as a spine surgery.
- the patient region can be a broader area or patient body part, such as the anus, rectum, or mouth of the patient into which a medical provider will perform an activity of a medical procedure such as an endoscopy.
- the navigation system 106 is configured to track a relative position of the region 104 of the patient 102, along with one or more medical tools.
- the navigation system 106 can be an active system or a passive system and include multiple sensors, or receivers, 108(a), 108(b)... 108(n), where “n” is any suitable number.
- the receivers 108(a)...(n) are configured to monitor aspects of the medical procedure activity and acquire sensor data 116 relating to the medical procedure activity.
- the sensors 108 can sense visual data, audio data, haptic data, positional data of personnel in the operating room and other sensory input.
- the receivers, generally 108, can be mounted in various locations throughout the operating room. In this arrangement, even if one receiver is obstructed from acquiring and providing sensor data, one or more of the remaining receivers with an unobstructed view can continue to provide sensor data.
- the sensor data from multiple receivers 108 can be combined, e.g., triangulated, to determine sensor data that is more accurate than sensor data received from a single receiver.
- the computer-assisted robotic surgery system 100 can evaluate the sensor data, for example, based on a configuration whereby the receivers 108 are configured to vote on the quality or correctness of the sensor data.
- the computer 114, which has a processor and memory, can execute a voting routine to determine the desired result from conflicting or inadequate receiver data.
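- One plausible form of such a routine is sketched below, assuming each receiver reports an occlusion flag and a quality score (neither of which is prescribed by the disclosure); obstructed receivers are discarded and the rest are combined by a quality-weighted average.

```python
from typing import List, NamedTuple, Optional, Tuple


class Reading(NamedTuple):
    position: Tuple[float, float, float]  # receiver's estimate of a tracked point (mm)
    quality: float                        # receiver-reported confidence, 0..1
    occluded: bool                        # True if the receiver's view is blocked


def fuse(readings: List[Reading]) -> Optional[Tuple[float, float, float]]:
    usable = [r for r in readings if not r.occluded and r.quality > 0.0]
    if not usable:
        return None                       # no receiver currently has a usable view
    total = sum(r.quality for r in usable)
    return tuple(
        sum(r.position[i] * r.quality for r in usable) / total
        for i in range(3)
    )
```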
- the receivers 108 can be connected in a mesh network so as to provide and exchange the sensor data, in a similar fashion to sensor data acquired and exchanged via the Internet of Things (IoT).
- the computer-assisted robotic surgery system 100 acquires sensor data 116 pre-operatively via the navigation system 106 and an imaging system 118.
- the imaging system 118 can include two-dimensional or three-dimensional surgical imaging platforms such as the O-arm system for use in spine, cranial, orthopedic, ear / nose / throat, trauma-related, or other surgeries, or a medical imaging device such as a C-arm imaging scanner intensifier.
- the sensor data 116 includes, for example, medical tool position, medical tool orientation, medical tool angle of insertion into the patient region 104, medical tool force, medical tool torque, medical tool size, medical tool audio data, medical tool haptic data, light detection and ranging (LIDAR) data, infrared data, radar data, ultrasound data, chemical data, patient-specific image data, patient-specific video data, and electromyography (EMG) electrophysiologic data.
- the sensor data can track a position, angle, force, torque, or size of a medical tool 113 while an activity of a medical procedure is performed on a patient 102.
- the sensor data can further track audio data, haptic data (e.g., force feedback, force that stimulates the senses of touch and motion, especially to reproduce in remote operation or computer simulation the sensations that would be felt by a user interacting directly with physical objects), LIDAR data, infrared data, radar data, ultrasound data, or chemical data generated while the medical tools are in use on the patient region.
- the LIDAR data and patient-specific video data can be acquired from LIDAR sensors or cameras communicatively coupled in the computer-assisted robotic surgery system 100.
- the medical tool 113 position data can be acquired based on fiducial markers, or other image guiding marker, disposed around the patient’s region 104.
- the medical tool angle data can be acquired based on an angle sensor 119 coupled to an imaging system 118 such as an x-ray.
- the sensor data further includes, but is not limited to, object data and environmental data such as a position, orientation, and angle of each medical tool 113, medical kit, table, platform, and personnel occupying the operating room.
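- The sensor data enumerated above could be carried in a record such as the following sketch; the field names and types are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class SensorSample:
    timestamp: float
    tool_position: Optional[Tuple[float, float, float]] = None            # in the patient frame (mm)
    tool_orientation: Optional[Tuple[float, float, float, float]] = None  # e.g. a quaternion
    insertion_angle: Optional[float] = None     # degrees
    force: Optional[float] = None               # newtons
    torque: Optional[float] = None              # newton-metres
    audio_frame: Optional[bytes] = None         # raw audio captured near the tool
    haptic_feedback: Optional[float] = None     # force-feedback magnitude
    lidar_points: List[Tuple[float, float, float]] = field(default_factory=list)
    emg: Optional[float] = None                 # electrophysiologic reading
```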
- the computer-assisted robotic surgical system 100 is configured to use the acquired sensor data 116 to generate and store dynamic electronic medical record data.
- dynamic electronic medical record data refers to sensor data associated with a time component, such as evolution over time of patient image data, object data, or environmental data including motion data of the patient, medical tools, kits, tables, and personnel from operating rooms.
- the electronic medical record data whether static or dynamic, can be stored and persisted using the Internet, a wired or wireless network connection, a cloud computing or edge computing architecture, or a decentralized secure digital ledger to facilitate secure maintenance, access, and verification of the electronic medical records.
- the sensor data 116 can also include patient image data.
- the patient image data can be acquired pre-operatively from an imaging system 118.
- the patient image data can include patient-specific image data acquired based on a computed tomography (CT) scan, a magnetic resonance imaging (MRI) scan, ultrasound imaging, electron microscopy imaging, a positron emission tomography (PET) scan, or x-ray imaging from a corresponding imaging system.
- the patient image data is anonymized or pseudonymized so as to remove at least some identifying characteristics of a subject of the image training data.
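- A minimal pseudonymization sketch along these lines is shown below, assuming generic metadata field names rather than any particular imaging format; the hashing scheme is an illustrative choice.

```python
import hashlib

IDENTIFYING_FIELDS = {"patient_name", "patient_id", "birth_date", "address"}


def pseudonymize(metadata: dict, salt: str) -> dict:
    """Strip direct identifiers and add a stable, non-identifying subject token."""
    token = hashlib.sha256((salt + metadata.get("patient_id", "")).encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in metadata.items() if k not in IDENTIFYING_FIELDS}
    cleaned["subject_token"] = token   # keeps a subject's images linkable without revealing identity
    return cleaned
```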
- the sensor data including the patient image data is used to generate image training data for use in training a machine learning model, as described in further detail below.
- the acquired sensor data 116 can include large amounts of data stored and leveraged for training purposes to construct and refine the machine learning model.
- the terms “large amounts of data” or “big data” refer to unstructured and semi-structured data in such large volumes (for example, petabytes or exabytes of data) as to be enormously cumbersome to load into a database for conventional analysis.
- the robotic surgical apparatus 110 includes a robotic arm 112 and an actuator assembly 115 operatively engaged with the robotic arm.
- the robotic surgical apparatus 110 is configured to operate medical tools so as to perform a medical procedure activity on the patient’s region 104 or on a region of a similarly situated patient.
- the navigation system 106 is configured to monitor and acquire sensor data relating to the performance of the medical procedure activity while the robotic surgical apparatus performs the medical procedure activity.
- the computer 114 is operatively coupled to the robotic surgical apparatus 110.
- the computer 114 is configured to perform the medical procedure activity intraoperatively, based on the machine learning model trained based on the sensor data and image training data.
- the computer 114 is configured to cause the robotic surgical apparatus 110 to operate the medical tool 113 intraoperatively to perform the medical procedure activity on the patient’s region 104.
- the computer 114 is configured to assist the user intraoperatively in performing the medical procedure activity, based on the machine learning model trained based on the sensor data and image training data.
- the computer 114 may be configured to communicate with the navigation system 106 and the robotic surgical apparatus 110 over one or more networks, as described in relation to FIG.7 herein.
- the apparatus shown in FIG.1 may be used in any suitable medical procedure activity that utilizes a recommended trajectory using the medical tool 113.
- the computer-assisted robotic surgery system 100 is configured to control a robotic surgical apparatus 110 to perform the medical procedure activity using the medical tool 113.
- the computer-assisted robotic surgery system 100 controls the robotic surgical apparatus 110 to operate a surgical drill 113 at the recommended position, angle, and velocity so as to insert a pedicle screw into the patient vertebra.
- the computer-assisted robotic surgery system is configured to execute the recommended trajectory in the form of a prescription, e.g., automatically performing the medical procedure activity without requiring prior approval from the surgeon.
- the computer-assisted robotic surgery system 100 is also configured to refine the recommended trajectory determined based on the machine learning model.
- the refinement can include applying image processing heuristics to one or more segments of the patient region (e.g., patient bone) identified in intraoperative patient images based on the initial trajectory received from the machine learning model.
- the computer-assisted robotic surgery system 100 refines the recommended trajectory, for example, based on domain-specific enhancements, as described above.
- the refinement can include incorporating inference or knowledge of an individual surgeon’s preferences into the recommended trajectory or into the training of the machine learning model.
- the refinement can include modifying the recommended trajectory based on preferences inferred or known about a particular surgeon so as to medialize by 1-2 mm a pedicle screw being inserted into a vertebral body so as to avoid risk of contact with a facet joint of the vertebral body.
- the preferences known or inferred of a particular surgeon can be modified for each particular individual surgeon. This dynamic modification is implemented based on the personnel in the room and the relative location of a surgeon to the patient. Thus, if a surgeon takes over a procedure, an indication is provided to the system and the new surgeon’s particular preferences are used during that surgeon’s portion of the procedure.
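- One way such a surgeon-specific preference could be applied to a recommended trajectory is sketched below; the preference table, the axis convention, and the 1.5 mm value are assumptions for illustration.

```python
# Hypothetical per-surgeon preference table; values are illustrative only.
PREFERENCES = {
    "surgeon_a": {"medialize_mm": 1.5},
}


def apply_preferences(trajectory: dict, active_surgeon: str) -> dict:
    prefs = PREFERENCES.get(active_surgeon, {})
    adjusted = dict(trajectory)
    # Assumed convention: +x points laterally, so medializing subtracts from x.
    adjusted["x"] = trajectory["x"] - prefs.get("medialize_mm", 0.0)
    return adjusted


recommended = {"x": 12.0, "y": -3.0, "z": 40.0, "angle_deg": 18.0}
print(apply_preferences(recommended, "surgeon_a"))   # x shifted medially by 1.5 mm
```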
- the computer-assisted robotic surgery system 100 is configured to refine the recommended trajectory upon detection that the actual intraoperative trajectory has deviated or is predicted to deviate from the recommended trajectory.
- operation of the apparatus 110 further includes receiving input to control the computer-assisted robotic surgery system during the pre-operative training phase or the intraoperative execution phase.
- the computer 114 is configured to receive and process inputs, such as commands, to control the robotic surgical apparatus 110.
- Non-limiting example input includes keyboard input, touch screen input, joystick input, pre-programmed console input, voice input, sound or aural input, eye movement input, facial expression input, and physical gesture input.
- the subject input modalities allow the computer-assisted robotic surgery system to operate independent of the surgeon’s location.
- operation of the device 110 further includes executing the medical procedure activity based on the recommended trajectory using the medical tool 113.
- the computer-assisted robotic surgery system is configured to control a robotic surgical apparatus to perform the medical procedure activity using the medical tool.
- the computer-assisted robotic surgery system controls the robotic surgical apparatus to operate a surgical drill at the recommended position, angle, and velocity so as to insert a pedicle screw into the patient vertebra.
- the computer-assisted robotic surgery system is configured to execute the recommended trajectory in the form of a prescription, e.g., automatically performing the medical procedure activity without requiring prior approval from the surgeon.
- Executing the medical procedure activity based on the recommended trajectory using the medical tool 113 can also include displaying, on a portable display (shown as 440 in FIG.4), the recommended trajectory of the medical tool 113.
- the portable display can include a heads-up display (HUD) worn by the surgeon that uses augmented reality (AR) to display the recommended trajectory or other information related to the medical tool or the medical procedure activity.
- the portable display can be an arm-mounted display worn by the surgeon that displays the recommended trajectory or other information related to the medical tool or the medical procedure activity.
- Executing the medical procedure activity can also include customizing a surgical implant to the patient.
- the computer-assisted robotic surgery system 100 is configured to manufacture, using a 3D printer, an implant that is custom-fit to the patient region.
- the manufacturing process for customizing the implant can be subtractive or additive, as appropriate for the 3D printer being used and the implant being customized.
- FIGS.2A-2D illustrate an exemplary machine learning model for use by the computer-assisted robotic surgery system as described herein.
- an exemplary machine learning model 206 is shown for use by the computer-assisted robotic surgery system.
- the computer-assisted robotic surgery system receives image training data 202 that is generated based on the sensor data and on the patient image data acquired from the imaging system.
- the computer-assisted robotic surgery system generates a pre-operative model 204 of the patient’s region based on the acquired image training data.
- the pre-operative model 204 can be used, for example, in connection with a pre-operative surgical plan of how the medical procedure will proceed intraoperatively.
- the computer-assisted robotic surgery system constructs or subsequently refines the machine learning model based on the image training data and the pre-operative model of the patient region.
- the machine learning model can be determined using a convolutional neural network or recurrent neural network.
- the machine learning model can be a generative model trained via an adversarial learning process (sometimes referred to as GAN learning).
- the computer-assisted robotic surgery system can apply reinforcement learning to refine the machine learning model.
- reinforcement learning refers to using rewards or feedback to learn an underlying machine learning model.
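- A toy illustration of such reward-driven refinement is given below, nudging a single scalar (a preferred insertion angle) toward actions that received positive expert feedback; this is not the disclosed training procedure, only a sketch of the idea.

```python
from typing import List, Tuple


def reinforce(preferred_angle: float,
              episodes: List[Tuple[float, float]],
              learning_rate: float = 0.1) -> float:
    """episodes is a list of (angle_tried, reward) pairs with reward in [-1, 1]."""
    for angle_tried, reward in episodes:
        # Move toward angles that earned positive feedback, away from negative ones.
        preferred_angle += learning_rate * reward * (angle_tried - preferred_angle)
    return preferred_angle


# Positive expert feedback at 20 degrees pulls the preference up from 15 degrees.
print(reinforce(15.0, [(20.0, 1.0), (10.0, -0.5)]))
```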
- Non-limiting exemplary refinements to the machine learning model include receiving annotations or labels of the image training data based on feedback offered by domain experts (e.g., by surgeons or other medical providers with experience in the desired medical procedure activity).
- the feedback can be positive or negative and include annotations by domain experts of the patient images associated with the image training data.
- the image training data can be ranked or prioritized based on a determination or measurement of quality of the domain experts. In this regard, image training data associated with domain experts determined or considered to be higher quality can have an increased effect on the training of the machine learning model.
- in FIG.2B, an exemplary refinement of the machine learning model, described herein as 206, is shown for use by the computer-assisted robotic surgery system.
- the refinement can include applying image processing heuristics to the image training data, described herein as 202, based on domain-specific knowledge of the medical procedure activity to be performed intraoperatively.
- the region 208 indicates a narrow area of the medical tool 113, e.g., the isthmus (or narrowest section) of a pedicle screw for insertion during a spine surgical procedure.
- the axis 210 represents an initial recommended axis for insertion of the medical tool into the vertebra 218.
- the computer-assisted robotic surgery system can refine the recommended axis using domain-specific knowledge of the medical procedure activity.
- non-limiting exemplary domain- specific knowledge includes recommending medializing a pedicle screw being inserted into a vertebral body so as to avoid contact with a facet joint of the vertebral body.
- the system can refine an initial recommendation provided by the machine learning model by applying image processing heuristics to intraoperatively acquired patient image data.
- the image processing heuristics can segment the intraoperatively acquired patient image data to recommend adjusting parameters such as the insertion angle, position, or force (e.g., linear or rotational) of the pedicle screw so as to avoid contact with the facet joint.
- Another non-limiting example of domain-specific knowledge includes recommending staying a predetermined distance away from the anterior cortex, such as 5 mm or more away from the anterior cortex.
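- These heuristics could be expressed as simple clearance checks on a candidate screw-tip target, as in the sketch below; the 5 mm anterior-cortex figure follows the text above, while the 2 mm facet-joint clearance and the point-distance geometry are assumptions made for illustration.

```python
import math


def enforce_heuristics(tip_target, facet_joint, anterior_cortex,
                       min_facet_clearance_mm=2.0, min_cortex_clearance_mm=5.0):
    """Check a candidate screw-tip target (mm coordinates) against the heuristics above."""
    messages = []
    if math.dist(tip_target, facet_joint) < min_facet_clearance_mm:
        messages.append("medialize trajectory to avoid the facet joint")
    if math.dist(tip_target, anterior_cortex) < min_cortex_clearance_mm:
        messages.append("adjust trajectory: less than 5 mm from the anterior cortex")
    return (not messages, messages)


ok, notes = enforce_heuristics((10.0, 2.0, 35.0), (11.0, 3.0, 35.0), (10.0, 2.0, 42.0))
print(ok, notes)   # False, with a recommendation to medialize
```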
- although the image training data 202 has been described in connection with image processing heuristics specific to patient bone, the computer-assisted robotic surgery device is not limited to use on image training data in connection with patient bone.
- the computer-assisted robotic surgery device is also configured for processing image training data containing patient image data for soft tissue, vascular systems, and the like.
- although FIG.2B illustrates non-limiting exemplary image training data 202 using axial and sagittal views for illustrative purposes, the computer-assisted robotic surgery system is not limited to processing axial and sagittal views.
- FIGS.2C and 2D illustrate that the computer-assisted robotic surgery system is configured to use and display other views of the image training data 202 so as to recommend and display trajectories for the medical tool 113. Accordingly, the image training data can be used and displayed in two-dimensional or three-dimensional form, as appropriate.
- FIG.3 illustrates a flowchart 300 of an exemplary method for refining and/or training of the machine learning model for use by the computer-assisted robotic surgery system, as described herein.
- the computer-assisted robotic surgery system obtains one or more patient images pre-operatively, as shown by 310.
- patient images can include patient-specific two-dimensional or three-dimensional images acquired from an imaging system, such as images that can include two-dimensional or three- dimensional images of the patient’s vertebra or other bone.
- the patient images can be acquired based on a computed tomography (CT) scan, a magnetic resonance imaging (MRI) scan, ultrasound imaging, electron microscopy imaging, a positron emission tomography (PET) scan, or x-ray imaging from a corresponding imaging system.
- the computer-assisted robotic surgery system classifies the obtained patient images, as shown by 320.
- the classification can include receiving annotations or other metadata such as labels or tags for the patient images based on feedback and assessments received from domain experts (e.g., by surgeons or other medical providers with experience in the desired medical procedure activity).
- the computer-assisted robotic surgery system is also configured to enhance the classifications based on sensor data received from the navigation system.
- the computer-assisted robotic surgery system generates image training data based on the obtained patient images, the received sensor data, and the received classifications.
- the computer-assisted robotic surgery system constructs and refines a machine learning model based on the classified patient images, as shown by 330.
- the machine learning model can be determined using a convolutional neural network.
- the machine learning model can be a generative model trained via an adversarial learning process (sometimes referred to as GAN learning).
- the adversarial learning process takes advantage of the concept that most machine learning techniques were designed to work on specific problem sets in which the training and test data are generated from the same statistical distribution. When those models are applied to the real world, adversaries may supply data that violates that statistical assumption. This data may be arranged to exploit specific vulnerabilities and compromise the results. The four most common adversarial machine learning strategies are evasion, poisoning, model stealing (extraction), and inference. [0053] Further, the computer-assisted robotic surgery system can apply reinforcement learning to refine the machine learning model.
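- As a generic illustration of the kind of convolutional network contemplated here (the disclosure does not specify an architecture), a small PyTorch model that maps an image slice to a per-pixel segmentation probability might look like the following sketch.

```python
import torch
import torch.nn as nn


class TinySegmenter(nn.Module):
    """Maps a single-channel image to a per-pixel foreground probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=1),            # per-pixel logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))


model = TinySegmenter()
example_slice = torch.rand(1, 1, 64, 64)   # a stand-in for one annotated image slice
mask = model(example_slice)                # values in (0, 1), one per pixel
```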
- Non-limiting exemplary refinements to the machine learning model include receiving annotations or labels of the image training data based on feedback offered by domain experts (e.g., by surgeons or other medical providers with experience in the medical procedure activity).
- FIG.4 illustrates an exemplary computer-assisted robotic surgery system 400 in accordance with an exemplary embodiment of the disclosure.
- the computer-assisted robotic surgery system 400 includes a patient 402, a patient region 404, a navigation system 406, a plurality of receivers 408(a)...(n), a robotic surgical apparatus 410, a robotic arm 412, a surgical tool 413, an actuator assembly 415 operatively engaged with the robotic arm 412 and a computer 414 operatively coupled to the robotic surgical apparatus 410 and imaging system 418.
- a surgeon or other medical personnel 438 utilizes a heads-up display 440.
- the robotic surgical apparatus 410 is configured to operate medical tools 413 so as to perform a medical procedure activity on the patient’s region 404 or on a region of a similarly situated patient.
- the navigation system 406 is configured to track a relative position of the region 404 of the patient 402, along with one or more medical tools 413 used in performance of the medical procedure activity.
- the navigation system 406 can be an active system or a passive system and include multiple receivers 408(a)...408(n), where “n” is any suitable number.
- the receivers 408 are configured to monitor any number of aspects of the medical procedure activity and acquire sensor data relating to the medical procedure activity.
- the receivers 408 are configured to monitor intraoperatively an aspect of the medical procedure activity and acquire sensor data relating to the medical procedure activity.
- the receivers 408 can acquire visual data, audible data, haptic data and motion data of the operating room.
- the receivers 408 can be mounted in various locations throughout the operating room. In this arrangement, even if one receiver 408 is obstructed from acquiring and providing sensor data intraoperatively, one or more of the remaining receivers 408 with an unobstructed view can continue to provide sensor data.
- the sensor data from multiple receivers 408 can be combined, e.g., triangulated, to determine sensor data that is more accurate than sensor data received from a single receiver 408. If the received sensor data is conflicting, then the computer-assisted robotic surgery system can evaluate the sensor data, for example, based on a configuration whereby the receivers are configured to vote on the quality or correctness of the sensor data.
- the receivers 408 can be connected in a mesh network so as to provide and exchange the sensor data, in a similar fashion to sensor data acquired and exchanged via the Internet of Things (IoT).
- the mesh network connects receivers 408 by using proprietary or open communications protocols to self-organize and can pass measurement information back to central units such as computer 114, or other devices, such as shown in FIG.7 as historical database 726, training database 732, CNN 714, RNN 716, machine learning device 710 and/or surgeon database 704.
- the computer-assisted robotic surgery system 400 acquires sensor data intraoperatively via receivers 408 operating in conjunction with the navigation system 406 and an imaging system 418.
- the sensor data includes, but is not limited to, medical tool position, medical tool angle of insertion into the patient region 404, medical tool force, medical tool torque, medical tool size, medical tool audio data, medical tool haptic data, light detection and ranging (LIDAR) data, infrared data, radar data, ultrasound data, chemical data, patient-specific image data, patient-specific video data, and electromyography (EMG) electrophysiologic data.
- the sensor data can track a position, angle, force, torque, or size of a medical tool 413 while an activity of a medical procedure is performed on a patient 402.
- the sensor data can further track audio data, haptic data (e.g., force feedback), LIDAR data, infrared data, radar data, ultrasound data, or chemical data generated while the medical tools 413 are in use on the patient region 404.
- audio or haptic data includes changes in audio pitch and force feedback during a spinal procedure while a medical tool 413 traverses through soft tissue, into a vertebra of the spine, and then drills a pedicle screw into the vertebral body.
- the LIDAR data and patient-specific video data can be acquired from LIDAR sensors or cameras 408 communicatively coupled in the computer-assisted robotic surgery system 400.
- the medical tool 413 position data can be acquired based on fiducial markers 421(a)...(n), where “n” is any suitable number, disposed around the patient’s region 404.
- the medical tool angle data can be acquired based on an angle sensor 408 coupled to an imaging system 418 such as an x-ray.
- the sensor data can also include patient image data.
- the patient image data can be acquired intraoperatively from an imaging system 418.
- the patient image data can include patient-specific image data acquired based on a computed tomography (CT) scan, a magnetic resonance imaging (MRI) scan, ultrasound imaging, electron microscopy imaging, a positron emission tomography (PET) scan, or x-ray imaging from a corresponding imaging system.
- the sensor data can be used to determine a recommended position, angle, and velocity (sometimes referred to herein as a trajectory) for a medical tool 413 relative to the patient region 404 based on the machine learning model, as described in further detail below.
- the heads-up display, or portable display 440, may be worn by medical personnel.
- the portable display 440 can be an arm-mounted display worn by the surgeon 438 that displays the recommended trajectory or other information related to the medical tool 413 or the medical procedure activity.
- the execution of the medical procedure activity based on the recommended trajectory using the medical tool 413 can also include displaying, on portable display 440, the recommended trajectory of the medical tool 413.
- the portable display 440 can include a heads-up display (HUD) worn by the surgeon 438 that uses augmented reality (AR) to display the recommended trajectory or other information related to the medical tool 413 or the medical procedure activity.
- FIG.5 illustrates a flowchart 500 of an exemplary method for performing a medical procedure activity using the computer-assisted robotic surgery system as described herein. Operation of the computer-assisted robotic surgery system proceeds in multiple stages. Exemplary stages include a pre-operative training phase and an intraoperative execution phase.
- the computer-assisted robotic surgery system determines a reference frame for the patient region based on the machine learning model trained based on sensor data and image training data, as shown by 510.
- the reference frame provides a reference point or baseline for determining a recommended trajectory for the medical tool used to perform the medical procedure activity.
- the machine learning model is trained on the image training data.
- the image training data is generated pre-operatively based on the patient image data and the other sensor data processed from the navigation system and imaging system.
- the computer-assisted robotic surgery system tracks, using the surgical navigation system, a relative position of the patient region, as shown by 520. For example, the relative position of the patient region is determined relative to the reference frame.
- the computer-assisted robotic surgery system processes the relative position of the patient region and the reference frame in connection with the machine learning model to determine a recommended position, angle, and velocity (e.g., a recommended trajectory) for the medical tool used in the medical procedure activity.
- the computer-assisted robotic surgery system determines a recommended position, angle, and velocity for the medical tool relative to the patient region, based on the machine learning model, as shown by 530.
- the computer-assisted robotic surgery system determines parameters for the medical tool trajectory based on the machine learning model, such as a position in three-dimensional (3D) space, e.g., recommended (x, y, z) coordinates for a distal end of the medical tool, along with a recommended angle of insertion, force, and/or velocity (e.g., distance per unit time) from the machine learning model.
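- a minimal, non-authoritative sketch of the FIG.5 sequence (510, 520, 530) is given below; the Trajectory container and the trained_model / navigation_system interfaces are hypothetical placeholders introduced only to make the ordering of the steps concrete.

```python
# Sketch of the FIG.5 flow; every interface here is a hypothetical placeholder.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Trajectory:
    position: Tuple[float, float, float]  # recommended (x, y, z) for the distal end of the tool
    angle: float                          # recommended angle of insertion
    velocity: float                       # recommended advance rate (distance per unit time)

def recommend_trajectory(trained_model, navigation_system, patient_region) -> Trajectory:
    # 510: determine a reference frame for the patient region from the trained model
    reference_frame = trained_model.estimate_reference_frame(patient_region)
    # 520: track the relative position of the patient region against that reference frame
    relative_position = navigation_system.track(patient_region, reference_frame)
    # 530: determine a recommended position, angle, and velocity for the medical tool
    x, y, z, angle, velocity = trained_model.predict(reference_frame, relative_position)
    return Trajectory(position=(x, y, z), angle=angle, velocity=velocity)
```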
- FIG.6 illustrates a process 600 according to an embodiment of the disclosure.
- the process 600 can be executed by one or more processors based on code stored on a non-transitory computer readable medium.
- the process begins with acquisition of first sensor data, 602.
- This sensor data can be acquired in a pre-operative stage by receivers disposed in an operating room or other examination venue.
- Machine learning can be applied to the sensor data, which may be visual sensor data, 604.
- a recommended trajectory for a surgical instrument is generated, 606.
- Second visual sensed data is acquired, 608. This second visual sensed data may be acquired during an operative stage of the medical procedure.
- Audio data may also be acquired, 610.
- Haptic data may also be acquired, 612.
- Voting on the acquired data is performed to determine the most preferred course of action for control of the surgical instrument, 616.
- the voting may be based on input from surgeons 614, who may be selected based on their familiarity with the procedure.
- the most preferred course of action for control of the surgical instrument may be modified, 618, based in part on the particular preferences of the surgeon currently performing the surgical procedure, 620.
- the control of the surgical tool may be displayed on a heads-up display (HUD), or other display in the surgical room, 622.
- the movement of the surgical tool is controlled based on the sensed data, voting and surgeon’s preferences, 624.
- a determination is made whether there is any additional sensed data, 626. If so (“yes” 628), additional voting, 616, is performed based on the additional sensed data.
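- the following outline is a hedged, illustrative rendering of process 600; the sensor, voting, robot, and display objects are assumptions made only so the control flow (602 through 628) can be read end to end.

```python
# Illustrative outline of process 600; every object used here is a hypothetical placeholder.
def run_procedure(model, sensors, voters, robot, display, surgeon_prefs):
    first_visual = sensors.acquire_visual(stage="pre-operative")    # 602: first sensor data
    model.fit(first_visual)                                         # 604: apply machine learning
    trajectory = model.recommend_trajectory()                       # 606: recommended trajectory
    while True:
        second_visual = sensors.acquire_visual(stage="operative")   # 608: second visual data
        audio = sensors.acquire_audio()                              # 610: audio data
        haptic = sensors.acquire_haptic()                            # 612: haptic data
        # 614/616: surgeons familiar with the procedure vote on the acquired data
        decision = voters.vote([first_visual, second_visual, audio, haptic])
        # 618/620: modify the voted course of action for the active surgeon's preferences
        trajectory = surgeon_prefs.apply(decision, trajectory)
        display.show(trajectory)                                     # 622: HUD or room display
        robot.move(trajectory)                                       # 624: control tool movement
        if not sensors.has_additional_data():                        # 626: more sensed data?
            break                                                    # otherwise ("yes" 628) vote again
```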
- FIG.7 illustrates a network environment 700 according to an embodiment of the disclosure.
- the network environment 700 includes the computer-assisted robotic surgery system elements described in relation to FIG.1. Additionally, as shown in FIG.7, computer 114 is operatively coupled, via a bi-directional wireless or wired channel 702, to network 750.
- the network 750 is operatively coupled to historical processing device 726, training database 732, convolutional neural networks (CNN) 714, recurrent neural networks (RNN) 716, machine learning device 710, and surgeon processing device 704.
- Network 750 is any suitable arrangement of interconnected computers, servers, and/or processing devices, such as the Internet.
- the network 750 may include an Internet Protocol (IP) network accessed via hypertext transfer protocol (HTTP), secure HTTP (HTTPS), and the like.
- the network 750 may also support an e-mail server configured to operate as an interface between clients and the network components over the IP network via an email protocol (e.g., Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP), Post Office Protocol (POP), etc.).
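- purely as an illustration of the kind of e-mail interface the network 750 could support, the snippet below sends a notification over SMTP using Python's standard library; the host name, port, addresses, and credentials are placeholder assumptions.

```python
# Illustrative only: notifying a client over an SMTP e-mail interface; all endpoints are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Procedure log available"
msg["From"] = "robot@example.org"
msg["To"] = "surgeon@example.org"
msg.set_content("The recommended trajectory and intraoperative log have been archived.")

with smtplib.SMTP("smtp.example.org", 587) as server:
    server.starttls()                                    # upgrade the connection before authenticating
    server.login("robot@example.org", "app-password")    # placeholder credentials
    server.send_message(msg)
```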
- Network 750 may be implemented using any suitable communication techniques such as Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE), Wireless Local Area Network (WLAN), Infrared (IR) communication, Public Switched Telephone Network (PSTN), Plain Old Telephone Service (POTS), radio waves, and/or other suitable communication techniques.
- the network 750 may allow ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet, and that rely on sharing of resources to achieve coherence and economies of scale, like a public utility.
- third-party clouds, which typically enable organizations to focus on their core businesses, may also be used.
- Network 750 is operatively coupled to: historical processing device 726 via wired or wireless bi-directional communication channel 727; training database 732 via wired or wireless bi-directional communication channel 735; CNN 714 via wired or wireless bi-directional communication channel 715; RNN 716 via wired or wireless bi-directional communication channel 717; machine learning device 710 via wired or wireless bi-directional communication channel 712; and surgeon processing device 704 via wired or wireless bi-directional communication channel 705.
- Historical processing device 726 includes processor 728 and memory 730.
- Processor 728 may include a single processor or a plurality of processors (e.g., distributed processors).
- Processor 728 may be any suitable processor capable of executing or otherwise performing instructions and may include an associated central processing unit (CPU), general or special purpose microprocessors, or special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit), that carries out program instructions to perform arithmetical, logical, and input/output operations.
- Processor 728 may execute code (e.g., processor firmware, a protocol stack, a database management system, an operating system, or a combination thereof) that creates an execution environment for program instructions.
- Processor 728 may receive instructions and data from a memory (e.g., a local memory 730, or a remote memory, via network 750).
- Memory 730 may be a tangible program carrier having program instructions stored thereon.
- a tangible program carrier may include a non-transitory computer readable storage medium.
- a non-transitory computer readable storage medium may include a machine readable storage device, a machine readable storage substrate, a memory device, or any combination thereof.
- Non-transitory computer readable storage medium may include non-volatile memory (e.g., flash memory, ROM, PROM, EPROM, EEPROM memory), volatile memory (e.g., random access memory (RAM), static random access memory (SRAM), synchronous dynamic RAM (SDRAM)), bulk storage memory (e.g., CD-ROM and/or DVD-ROM, hard-drives), or the like.
- Training database 732 includes processor 734 and memory 736.
- the processor 734 may be similar to processor 728, as described herein.
- Memory 736 may be similar to memory 730, as described herein.
- the training database may be used to store and process training data. This data may include models, procedures, and/or protocols that surgeons can use to virtually perform a surgical procedure prior to the actual procedure. The training data may include 3D models, computerized representations of a surgery, or other information and guidance for the surgeon.
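- purely as an illustration of a training-data store of the kind described for training database 732, the snippet below persists and retrieves training items with Python's built-in sqlite3 module; the table name and schema are assumptions.

```python
# Minimal, hypothetical training-data store; schema and table name are illustrative only.
import sqlite3

con = sqlite3.connect("training.db")
con.execute("CREATE TABLE IF NOT EXISTS training_items ("
            "id INTEGER PRIMARY KEY, procedure TEXT, kind TEXT, payload BLOB)")
con.execute("INSERT INTO training_items (procedure, kind, payload) VALUES (?, ?, ?)",
            ("pedicle screw placement", "3d_model", b"\x00\x01"))  # placeholder payload
con.commit()

# retrieve stored items, e.g., for virtual rehearsal of a procedure
for item_id, procedure, kind in con.execute("SELECT id, procedure, kind FROM training_items"):
    print(item_id, procedure, kind)
con.close()
```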
- a variety of model architectures can be used, including stateful architectures (for example, recurrent neural networks, or RNNs 716) and stateless architectures (for example, convolutional neural networks, or CNNs 714); in some embodiments, a mix of the two may be used, depending on the nature of the particular surgical procedure being performed.
- Machine learning device 710 may be used to facilitate processing of the RNN 716 and CNN 714.
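- for illustration only, the sketch below contrasts a stateless convolutional model with a stateful recurrent model using PyTorch; the layer sizes, the five-value output (position, angle, velocity), and the class names are assumptions and do not represent the actual networks 714 and 716.

```python
# Hypothetical, minimal PyTorch definitions contrasting a stateless CNN with a stateful RNN.
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Stateless model: each image frame is scored independently."""
    def __init__(self, out_dim: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, out_dim)  # e.g., (x, y, z, angle, velocity)

    def forward(self, frame):               # frame: (batch, 1, H, W)
        return self.head(self.features(frame).flatten(1))

class SequenceRNN(nn.Module):
    """Stateful model: the hidden state carries context across a sequence of measurements."""
    def __init__(self, in_dim: int = 32, hidden: int = 64, out_dim: int = 5):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, seq, hidden_state=None):   # seq: (batch, time, in_dim)
        out, hidden_state = self.rnn(seq, hidden_state)
        return self.head(out[:, -1]), hidden_state
```

A stateless model of this kind suits single-frame recommendations, while the stateful variant can carry intraoperative context from one measurement to the next; mixing the two, as described above, is a design choice driven by the procedure being performed.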
- Surgeon processing device 704 includes processor 706 and memory 708.
- the processor 706 may be similar to processor 728, as described herein.
- Memory 708 may be similar to memory 730, as described herein.
- the surgeon processing device 704 is configured to receive input from a plurality of surgeons.
- the input may be stored in memory 708 and accessed by any suitable processor (706, 728, or 734) or by computer 114.
- the input may also be provided to machine learning device 710, RNN 716 and/or CNN 714.
- the input from the various surgeons may be weighted based on, for example, a surgeon's experience level, the number of similar procedures performed, specializations, expertise, or other factors that lend credibility to that surgeon's opinion. Thus, the most qualified opinion is given more weight than a less-qualified opinion, as determined by the professional medical qualifications of the surgeon providing the input.
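- a minimal sketch of this weighting is shown below; the specific weighting factors, their coefficients, and the vote representation are assumptions chosen only to illustrate that more qualified input dominates the tally.

```python
# Minimal sketch of weighted surgeon voting; weighting factors are illustrative assumptions.
from collections import defaultdict

def surgeon_weight(surgeon: dict) -> float:
    # credibility grows with experience, similar procedures performed, and specialization
    weight = 1.0
    weight += 0.1 * surgeon.get("years_experience", 0)
    weight += 0.05 * surgeon.get("similar_procedures", 0)
    weight += 1.0 if surgeon.get("specialist", False) else 0.0
    return weight

def weighted_vote(votes: list[tuple[dict, str]]) -> str:
    """votes: (surgeon profile, preferred option) pairs; returns the option with most weight."""
    totals: dict[str, float] = defaultdict(float)
    for surgeon, option in votes:
        totals[option] += surgeon_weight(surgeon)
    return max(totals, key=totals.get)

# Example: the more qualified opinion carries the decision.
votes = [({"years_experience": 25, "similar_procedures": 300, "specialist": True}, "trajectory_A"),
         ({"years_experience": 3, "similar_procedures": 10}, "trajectory_B")]
print(weighted_vote(votes))  # -> "trajectory_A"
```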
- Machine learning device 710 is used to refine the recommended trajectory determined based on one or more machine learning models stored in machine learning device 710.
- the refinement can include applying image processing heuristics to segment the patient region (e.g., patient bone) identified in intraoperative patient images based on the initial trajectory received from the machine learning model.
- the computer-assisted robotic surgery system refines the recommended trajectory, for example, based on domain-specific enhancements, as described above.
- the refinement can include incorporating inference or knowledge of an individual surgeon’s preferences into the recommended trajectory or into the training of the machine learning model.
- the refinement can include modifying the recommended trajectory based on preferences inferred or known about Surgeon A so as to medialize by 1-2 mm a pedicle screw being inserted into a vertebral body so as to avoid risk of contact with a facet joint of the vertebral body.
- the computer-assisted robotic surgery system is configured to refine the recommended trajectory upon detection that the actual intraoperative trajectory has deviated or is predicted to deviate from the recommended trajectory.
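- the snippet below is a hedged illustration of two of the refinements described above: applying a surgeon-specific medial offset to a planned entry point and flagging a deviation between the planned and tracked trajectories; the vectors, the 1.5 mm offset, and the tolerance value are illustrative assumptions.

```python
# Hypothetical refinement helpers; coordinates, offsets, and tolerances are placeholders.
import numpy as np

def apply_surgeon_preference(entry_point: np.ndarray, medial_direction: np.ndarray,
                             medialize_mm: float) -> np.ndarray:
    # shift the planned entry point along the (unit) medial direction by the preferred amount
    return entry_point + medialize_mm * medial_direction / np.linalg.norm(medial_direction)

def needs_refinement(planned: np.ndarray, tracked: np.ndarray, tolerance_mm: float = 1.0) -> bool:
    # flag a deviation when the tracked tool tip drifts beyond the tolerance
    return float(np.linalg.norm(planned - tracked)) > tolerance_mm

entry = np.array([12.0, -3.5, 40.0])
medial = np.array([0.0, 1.0, 0.0])
refined_entry = apply_surgeon_preference(entry, medial, medialize_mm=1.5)  # e.g., "Surgeon A" preference
```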
- executable component is the name for a structure that is well understood to one of ordinary skill in the art in the field of computing as being a structure that can be software, hardware, or a combination thereof.
- structure of an executable component may include software objects, routines, methods, and so forth, that may be executed by one or more processors (706, 728 and 734) on the computing system, whether such an executable component exists in the heap of a computing system, or whether the executable component exists on computer-readable storage media.
- executable component is also well understood by one of ordinary skill as including structures that are implemented exclusively or near-exclusively in hardware, such as within a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), Application-Specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), or any other specialized circuit. Accordingly, the term “executable component” is a term for a structure that is well understood by those of ordinary skill in the art of computing, whether implemented in software, hardware, or a combination. In this description, the terms “component,” “service,” “engine,” “module,” “control,” “generator,” or the like may also be used.
- executable component exists on a computer-readable medium in such a form that it is operable, when executed by one or more processors of the computing system, to cause the computing system to perform one or more functions, such as the functions and methods described herein.
- Such a structure may be directly readable by the processors, as is the case when the executable component is binary.
- Computer-readable storage media include RAM, ROM, EEPROM, solid state drives (“SSDs”), flash memory, phase-change memory (“PCM”), CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other physical and tangible storage medium which can be used to store desired program code in the form of computer-executable instructions or data structures and which can be accessed and executed by a general purpose or special purpose computing system to implement the disclosed functionality of the disclosure.
- One embodiment, as described herein, includes a computer-assisted robotic surgery system, the system comprising: a navigation system for tracking a relative position of a patient region and one or more medical tools, the navigation system including two or more receivers configured to monitor an aspect of a medical procedure activity; a robotic surgical apparatus including: a robotic arm, and an actuator assembly operatively engaged with the robotic arm; and a computer operatively coupled to the robotic surgical apparatus having computer instructions that when executed: determine a reference frame for the patient region based on a neural network model trained on sensor data and on image training data sampled from a sensor system positioned to monitor an aspect of the medical procedure activity, track, using the navigation system, a relative position of the patient region, and determine a position, angle, and velocity for one of the medical tools relative to the patient region based on the neural network model trained on the sensor data and on the image training data.
- Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the computer further comprises computer instructions executable to apply, using the robotic arm, control forces to the patient region based on the determined position, angle, and velocity for the one of the medical tools while the robotic surgical apparatus is engaged with the patient region.
- the sensor data includes one or more of position, angle, force, torque, surgical implant size, medical image data, video data, light detection and ranging (LIDAR) data, infrared data, radar data, ultrasound data, audio data, haptic data, chemical data, and electromyography (EMG) electrophysiologic data.
- Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the image training data is determined based on medical images that are annotated and classified by one or more medical providers.
- Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the neural network is trained using reinforcement learning based on the image training data that is annotated and classified by the one or more medical providers.
- Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the image training data is anonymized or pseudonymized so as to remove at least some identifying characteristics of a subject of the image training data.
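- as one possible illustration of anonymizing or pseudonymizing image training data, the sketch below drops direct identifiers from an image record and substitutes a salted one-way pseudonym; the field names and salt handling are assumptions, not a prescribed de-identification scheme.

```python
# Hypothetical pseudonymization of an image-training record; field names are illustrative only.
import hashlib

IDENTIFYING_FIELDS = {"patient_name", "patient_id", "birth_date", "address"}

def pseudonymize(record: dict, salt: str = "site-secret") -> dict:
    """Replace direct identifiers with a one-way pseudonym and drop the originals."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    token = hashlib.sha256((salt + str(record.get("patient_id", ""))).encode()).hexdigest()[:12]
    cleaned["subject_pseudonym"] = token
    return cleaned

record = {"patient_id": "12345", "patient_name": "REDACTED", "modality": "CT", "pixels": "..."}
print(pseudonymize(record))  # identifying fields removed, stable pseudonym retained
```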
- Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the model is determined using a convolutional neural network.
- Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the model is refined by simulating segmentation of vertebral bodies.
- Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the model is a generative model trained via an adversarial learning process (GAN learning).
- Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the one of the medical tools includes a surgical drill for placement of a pedicle screw.
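- the adversarial (GAN) training referenced above can be sketched, under assumptions, as a generator/discriminator pair updated in alternation; the PyTorch snippet below uses arbitrary layer sizes and a flattened 64x64 image patch purely for illustration and is not the disclosed training procedure.

```python
# Minimal adversarial-training sketch; network sizes, image shape, and data are assumptions.
import torch
import torch.nn as nn

latent_dim = 100
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 64 * 64), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())
bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # discriminator step: distinguish real image patches from generated ones
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)
    d_loss = (bce(discriminator(real_images.view(batch, -1)), real_labels)
              + bce(discriminator(fake_images.detach()), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # generator step: try to make the discriminator label generated patches as real
    g_loss = bce(discriminator(fake_images), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```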
- Another embodiment as described herein relates to the computer-assisted surgery system, wherein the navigation system comprises an active system or a passive system.
- Another embodiment is directed to a method for performing a surgical procedure comprising: accessing composite visual sensor data from a plurality of visual sensors; constructing a machine learning model based on the composite visual sensor data; generating a recommended trajectory for a surgical instrument relative to a region of a patient based, at least in part, on the machine learning model; receiving a vote on the sensor data; performing additional machine learning; and modifying the recommended trajectory based, at least in part, on the additional machine learning.
- the functions performed in the processes and methods described above may be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples.
- embodiments of the disclosure may be described as a system, method, apparatus, or computer program product. Accordingly, embodiments of the disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the disclosure may take the form of a computer program product embodied in one or more computer readable storage media, such as a non-transitory computer readable storage medium, having computer readable program code embodied thereon.
- Modules may also be implemented in software for execution by various types of processors.
- An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically, or operationally, together, comprise the module and achieve the stated purpose for the module.
- a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices.
- operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure.
- the operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
- the system or network may include non-transitory computer readable media.
- where a module or portions of a module are implemented in software, the software portions are stored on one or more computer readable storage media, which may be non-transitory media.
- Any combination of one or more computer readable storage media may be utilized.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing, including non-transitory computer readable media.
- More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a Blu-ray Disc, an optical storage device, a magnetic tape, a Bernoulli drive, a magnetic disk, a magnetic storage device, a punch card, integrated circuits, other digital processing apparatus memory devices, or any suitable combination of the foregoing, but would not include propagating signals.
- a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code for carrying out operations for aspects of the present disclosure may be generated by any combination of one or more programming language types, including, but not limited to any of the following: machine languages, scripted languages, interpretive languages, compiled languages, concurrent languages, list-based languages, object oriented languages, procedural languages, reflective languages, visual languages, or other language types.
- the program code may execute partially or entirely on the computer (114), or partially or entirely on the surgeon’s device (704).
- Any remote computer may be connected to the surgical apparatus (110) through any type of network (750), including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- Embodiments, as described herein, can be implemented using a computing system associated with a transaction device, the computing system comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions to cause the computing system to perform operations. Additionally, a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations may also be used.
Abstract
A computer-assisted robotic surgery system that includes receivers to monitor an aspect of a medical procedure activity, a robotic arm, and an actuator assembly operatively engaged with the robotic arm. A computer operatively coupled to the robotic surgical apparatus determines a reference frame for the patient region based on a neural network model trained on sensor data and on image training data sampled from a sensor system positioned to monitor the medical procedure activity, and determines a position, angle, and velocity for one of the medical tools relative to the patient region based on the neural network model trained on the sensor data and on the image training data.
Description
SYSTEM, METHOD AND APPARATUS FOR ROBOTICALLY PERFORMING A MEDICAL PROCEDURE FIELD OF THE DISCLOSURE [0001] The present disclosure is generally related a medical procedure that utilizes a number of sensed parameters to generate recommended positions of surgical instruments. BACKGROUND [0002] Typically, minimally invasive robotic surgical or tele-surgical systems have been developed to increase a dexterity and avoid some of the limitations on traditional minimally invasive techniques. In tele-surgery systems, the surgeon can be provided with an image of the surgical site at the surgical workstation. While viewing a two or three dimensional image of the surgical site on a display, the surgeon performs the surgical procedures on the patient by manipulating master control devices, which in turn control motion of the servo-mechanically operated instruments. It would be an advancement in the art to enable multiple sources of sensor data to improve reliability of tele-surgical procedures. BRIEF SUMMARY OF THE DISCLOSURE [0003] One embodiment of the disclosure is directed to a method (“the method”) comprising: acquiring first visual sensor data from one or more of a plurality of visual sensing devices, during a pre-operative stage; performing machine learning on the first visual sensor data; generating a recommended trajectory for a surgical instrument based on the machine learning; acquiring second visual sensor data from one or more of a plurality of visual sensing devices during an operative stage; performing voting on the first visual data and the second visual data; modifying the recommended trajectory for the surgical instrument based on the voting; and controlling movement of the surgical instrument based on the modified recommended trajectory. [0004] Another embodiment is directed to the method described above, further comprising acquiring audio sensor data from one or more of a plurality of audio sensing devices, during the operative stage; and utilizing the audio sensor data in the voting.
[0005] Another embodiment is directed to the method described above further comprising acquiring haptic data from one or more of a plurality of haptic sensing devices, during the operative stage; and utilizing the haptic data in the voting. [0006] Another embodiment is directed to the method described above, further comprising displaying a representation of the movement of the surgical instrument at a heads-up display. [0007] Another embodiment is directed to the method described above, further comprising performing image training on the first visual sensor data and the second visual sensor data. [0008] Another embodiment is directed to the method described above, further comprising accessing input from one or more surgeons; and utilizing the input in the voting. [0009] Another embodiment is directed to the method described above, further comprising assigning a weight to the input from one or more surgeons; and utilizing the weight in the voting. [0010] Another embodiment is directed to the method described above, wherein the first visual sensor data and the second visual sensor data are acquired from a mesh network. [0011] Another embodiment is directed to the method described above, further comprising accessing input from one or more surgeons; and utilizing the input to generate the recommendation. [0012] Another embodiment is directed to the method described above, further comprising accessing preferences associated with a particular individual; and modifying the recommended trajectory based, at least in part, on the preferences associated with a particular individual. [0013] Another embodiment is directed to a computer-assisted robotic surgery system, the system comprising: a navigation system for tracking a relative position of a patient region and one or more medical tools, the navigation system including two or more receivers configured to monitor an aspect of a medical procedure activity; a robotic arm; an actuator assembly operatively engaged with the robotic arm; and a computer operatively coupled to the actuator, having computer instructions that
when executed: determine a reference frame for the patient region based on a neural network model trained on sensor data and on image training data sampled from a sensor system positioned to monitor an aspect of the medical procedure activity; track, using the navigation system, a relative position of the patient region; and determine a position, angle, and velocity for one of the medical tools relative to the patient region based on the neural network model trained on the sensor data and on the image training data. [0014] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the computer instructions are executed to apply, using the robotic arm, control forces to the patient region based on the determined position, angle, and velocity for the one of the medical tools while the robotic surgical apparatus is engaged with the patient region. [0015] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the sensor data includes one or more of position, angle, force, torque, audio data, haptic data. [0016] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the angle is acquired based on an angle sensor coupled to an x-ray. [0017] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the position is acquired based on fiducial markers disposed around the patient region. [0018] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the image training data is determined based on medical images that are annotated and classified by one or more medical providers. [0019] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the neural network model is trained using reinforcement learning based, at least in part, on the image training data that is annotated and classified by the one or more medical providers. [0020] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the model is determined using a convolutional neural network.
[0021] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the model is a generative model trained via an adversarial learning process (GAN learning). [0022] Another embodiment is directed to the computer-assisted robotic surgery system, wherein the model is refined by simulating segmentation of vertebral bodies. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS [0001] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the present disclosure and, together with the general description given above, and the detailed description given below, serve to explain the principles of the present disclosure. [0002] The following detailed description of the exemplary embodiments of the subject disclosure, will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the subject disclosure, there are shown in the drawings embodiments that are presently preferred. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown. [0003] In the drawings: [0004] FIG.1 illustrates an exemplary computer-assisted robotic surgery system in accordance with an exemplary embodiment. [0005] FIGS.2A-2D illustrate an exemplary machine learning model for use by the computer-assisted robotic surgery system as described herein. [0006] FIG.3 illustrates a flowchart of an exemplary method for refining the training of the machine learning model for use by the computer-assisted robotic surgery system, as described herein. [0007] FIG.4 illustrates an exemplary computer-assisted robotic surgery system in accordance with an exemplary embodiment of the subject disclosure. [0008] FIG.5 illustrates a flowchart of an exemplary method for performing a medical procedure activity using the computer-assisted robotic surgery system as described herein.
[0009] FIG.6 illustrates a process according to an embodiment of the disclosure. [0010] FIG.7 illustrates a network environment according to an embodiment of the disclosure. DETAILED DESCRIPTION [0011] Reference will now be made in detail to the various embodiments of the subject disclosure illustrated in the accompanying drawings. Wherever possible, the same or like reference numbers will be used throughout the drawings to refer to the same or like features. It should be noted that the drawings are in simplified form and are not drawn to precise scale. Certain terminology is used in the following description for convenience only and is not limiting. Directional terms such as top, bottom, left, right, above, below and diagonal, are used with respect to the accompanying drawings. The term “distal” shall mean away from the center of a body. The term “proximal” shall mean closer towards the center of a body and/or away from the “distal” end. The words “inwardly” and “outwardly” refer to directions toward and away from, respectively, the geometric center of the identified element and designated parts thereof. Such directional terms used in conjunction with the following description of the drawings should not be construed to limit the scope of the subject disclosure in any manner not explicitly set forth. Additionally, the term “a,” as used in the specification, means “at least one.” The terminology includes the words above specifically mentioned, derivatives thereof, and words of similar import. [0012] “About” as used herein when referring to a measurable value such as an amount, a temporal duration, and the like, is meant to encompass variations of ±20%, ±10%, ±5%, ±1%, or ±0.1% from the specified value, as such variations are appropriate. [0013] “Substantially” as used herein shall mean considerable in extent, largely but not wholly that which is specified, or an appropriate variation therefrom as is acceptable within the field of art. “Exemplary” as used herein shall mean serving as an example. [0014] Throughout this disclosure, various aspects of the subject disclosure can be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the subject disclosure. Accordingly, the
description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 2.7, 3, 4, 5, 5.3, and 6. This applies regardless of the breadth of the range. [0015] Furthermore, the described features, advantages and characteristics of the exemplary embodiments of the subject disclosure may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the present disclosure can be practiced without one or more of the specific features or advantages of a particular exemplary embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all exemplary embodiments of the subject disclosure. [0016] Embodiments of the disclosure will be described with reference to the accompanying drawings. Like numerals represent like elements throughout the several figures, and in which example embodiments are shown. However, embodiments of the claims may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples and are merely examples, among other possible examples. [0017] FIG.1 illustrates an exemplary computer-assisted robotic surgery system 100 in accordance with an exemplary embodiment. [0018] In general, the computer-assisted robotic surgery system 100 begins in a pre-operative training phase by acquiring sensor data, including, for example, pre- operative patient image data of a patient region. The pre-operative training phase continues with the computer-assisted robotic surgery system using the sensor data to construct and refine a machine learning model based on the acquired sensor data. The computer-assisted robotic surgery system proceeds to an intraoperative execution phase during which the system 100 determines and recommends a
trajectory for a medical tool relative to the patient region, based on the machine learning model. [0019] The computer-assisted robotic surgery system 100 includes a patient 102, a patient body region 104, a navigation system 106, a plurality of sensors, or receivers 108(a)…(n), a robotic surgical apparatus 110, robotic arm 112 surgical instrument 113, and a computer 114. The computer-assisted robotic surgery system 100 begins in a pre-operative training phase by acquiring or obtaining sensor data relating to an activity of a medical procedure to be performed on a region 104 of a patient 102. [0020] The computer-assisted robotic surgery system 100 can enhance any suitable medial procedure. For example, medical procedures including, but not limited to, endoscopy, interventional radiology, or any other medical procedure in which a medical provider uses a medical tool. Further non-limiting example medical procedures include general surgery, thoracic surgery, colon and rectal surgery, obstetrics and gynecology, gynecologic oncology, neurological surgery, ophthalmic surgery, oral and maxillofacial surgery, orthopedic surgery, otolaryngology, pediatric surgery, plastic and maxillofacial surgery, urology, vascular surgery, and the like. Example orthopedic surgeries include hand surgery, sports medicine, pediatric orthopedics, spine surgery, foot and ankle orthopedics, joint replacement, trauma surgery, oncology, and the like. [0021] The region 104 of the patient 102 can be a specific target, such as the L3 vertebra right pedicle on which a surgeon will perform an activity of a medical procedure such as a spine surgery. Alternatively, the patient region can be a broader area or patient body part, such as the anus, rectum, or mouth of the patient into which a medical provider will perform an activity of a medical procedure such as an endoscopy. [0022] The navigation system 106 is configured to track a relative position of the region 104 of the patient 102, along with one or more medical tools. The navigation system 106 can be an active system or a passive system and include multiple sensors, or receivers, 108(a), 108(b)… 108(n), where “n” is any suitable number. The receivers 108(a)…(n) are configured to monitor aspects of the medical procedure activity and acquire sensor data 116 relating to the medical procedure
activity. The sensors 108 can sense visual data, audio data, haptic data, positional data of personnel in the operating room and other sensory input. The receivers, generally 108 can be mounted in various locations throughout the operating room. In this arrangement, even if one receiver is obstructed from acquiring and providing sensor data, one or more of the remaining receivers with an unobstructed view can continue to provide sensor data. Further, the sensor data from multiple receivers 108 can be combined, e.g., triangulated, to determine sensor data that is more accurate than sensor data received from a single receiver. [0023] If the received sensor data is conflicting, then the computer-assisted robotic surgery system 100 can evaluate the sensor data, for example, based on a configuration whereby the receivers 108 are configured to vote on the quality or correctness of the sensor data. The computer 114, which has a processor and memory, can execute a voting routine to determine the desired result from conflicting, inadequate or conflicting receiver data. The receivers 108 can be connected in a mesh network so as to provide and exchange the sensor data, in a similar fashion to sensor data acquired and exchanged via the Internet of Things (IoT). [0024] The computer-assisted robotic surgery system 100 acquires sensor data 116 pre-operatively via the navigation system 106 and an imaging system 118. For example, imaging system 100 can include two-dimensional or three-dimensional surgical imaging platforms such as the O-arm system for use in spine, cranial, orthopedic, ear / nose / throat, trauma-related, or other surgeries, or a medical imaging device such as a C-arm imaging scanner intensifier. The sensor data 116 includes, for example, medical tool position, medical tool orientation, medical tool angle of insertion into the patient region 104, medical tool force, medical tool torque, medical tool size, medical tool audio data, medical tool haptic data, light detection and ranging (LIDAR) data, infrared data, radar data, ultrasound data, chemical data, patient-specific image data, patient-specific video data, and electromyography (EMG) electrophysiologic data. [0025] For example, the sensor data can track a position, angle, force, torque, or size of a medical tool 113 while an activity of a medical procedure is performed on a patient 102. The sensor data can further track audio data, haptic data (e.g., force feedback, force that stimulates the senses of touch and motion, especially to
reproduce in remote operation or computer simulation the sensations that would be felt by a user interacting directly with physical objects), LIDAR data, infrared data, radar data, ultrasound data, or chemical data generated while the medical tools are in use on the patient region. Non-limiting example audio or haptic data includes changes in audio pitch and force feedback during a spinal procedure while a medical tool 113 traverses through soft tissue, into a vertebra of the spine, and then drills a pedicle screw into the vertebra. [0026] The LIDAR data and patient-specific video data can be acquired from LIDAR sensors or cameras communicatively coupled in the computer-assisted robotic surgery system 100. The medical tool 113 position data can be acquired based on fiducial markers, or other image guiding marker, disposed around the patient’s region 104. The medical tool angle data can be acquired based on an angle sensor 119 coupled to an imaging system 118 such as an x-ray. [0027] The sensor data further includes, but is not limited to, object data and environmental data such as a position, orientation, and angle of each medical tool 113, medical kit, table, platform, and personnel occupying the operating room. In this regard, the computer-assisted robotic surgical system 100 is configured to use the acquired sensor data 116 to generate and store dynamic electronic medical record data. As used herein, dynamic electronic medical record data refers to sensor data associated with a time component, such as evolution over time of patient image data, object data, or environmental data including motion data of the patient, medical tools, kits, tables, and personnel from operating rooms. Further, the electronic medical record data, whether static or dynamic, can be stored and persisted using the Internet, a wired or wireless network connection, a cloud computing or edge computing architecture, or a decentralized secure digital ledger to facilitate secure maintenance, access, and verification of the electronic medical records. A non-limiting example of digital ledger technology includes storing electronic medical records via blockchain that can be managed by a peer-to-peer network for use as a publicly distributed ledger. [0028] The sensor data 116 can also include patient image data. The patient image data can be acquired pre-operatively from an imaging system 118. For example, the patient image data can include patient-specific image data acquired based on a computed tomography (CT) scan, a magnetic resonance imaging (MRI)
scan, ultrasound imaging, electron microscopy imaging, a positron emission tomography (PET) scan, or x-ray imaging from a corresponding imaging system. The patient image data is anonymized or pseudonymized so as to remove at least some identifying characteristics of a subject of the image training data. The sensor data including the patient image data is used to generate image training data for use in training a machine learning model, as described in further detail below. [0029] The acquired sensor data 116 can include large amounts of data stored and leveraged for training purposes to construct and refine the machine learning model. As used herein, the terms “large amounts of data” or “big data” refer to unstructured and semi-structured data in such large volumes (for example, petabytes or exabytes of data) as to be immensely cumbersome to load into a database for conventional analysis. [0030] The robotic surgical apparatus 110 includes a robotic arm 112 and an actuator assembly 115 operatively engaged with the robotic arm. The robotic surgical apparatus 110 is configured to operate medical tools so as to perform a medical procedure activity on the patient’s region 104 or on a region of a similarly situated patient. The navigation system 106 is configured to monitor and acquire sensor data relating to the performance of the medical procedure activity while the robotic surgical apparatus performs the medical procedure activity. [0031] The computer 114 is operatively coupled to the robotic surgical apparatus 110. The computer 114 is configured to perform the medical procedure activity intraoperatively, based on the machine learning model trained based on the sensor data and image training data. For example, the computer 114 is configured to cause the robotic surgical apparatus 110 to operate the medical tool 113 intraoperatively to perform the medical procedure activity on the patient’s region 104. Alternatively, the computer 114 is configured to assist the user intraoperatively in performing the medical procedure activity, based on the machine learning model trained based on the sensor data and image training data. The computer 114 may be configured to communicate with the navigation system 106 and the robotic surgical apparatus 110 over one or more networks, as described in relation to FIG.7 herein. [0032] The apparatus shown in FIG.1 may be used in any suitable medical procedure activity that utilizes a recommended trajectory using the medical tool 113.
The computer-assisted robotic surgery system 100 is configured to control a robotic surgical apparatus 110 to perform the medical procedure activity using the medical tool 113. [0033] For example, the computer-assisted robotic surgery system 100 controls the robotic surgical apparatus 110 to operate a surgical drill 113 at the recommended position, angle, and velocity so as to insert a pedicle screw into the patient vertebra. Advantageously, in this regard the computer-assisted robotic surgery system is configured to execute the recommended trajectory in the form of a prescription, e.g., automatically performing the medical procedure activity without requiring prior approval from the surgeon. [0034] As described above, the computer-assisted robotic surgery system 100 is also configured to refine the recommended trajectory determined based on the machine learning model. For example, the refinement can include applying image processing heuristics to one or more segments the patient region (e.g., patient bone) identified in intraoperative patient images based on the initial trajectory received from the machine learning model. Upon applying the image processing heuristics to segment the intraoperative image, the computer-assisted robotic surgery system 100 refines the recommended trajectory, for example, based on domain-specific enhancements, as described above. [0035] By way of further example, the refinement can include incorporating inference or knowledge of an individual surgeon’s preferences into the recommended trajectory or into the training of the machine learning model. For example, the refinement can include modifying the recommended trajectory based on preferences inferred or known about particular surgeon so as to medialize by 1-2 mm a pedicle screw being inserted into a vertebral body so as to avoid risk of contact with a facet joint of the vertebral body. The preferences known or inferred of a particular surgeon can be modified for each particular individual surgeon. This dynamic modification is implemented based on the personnel in the room and the relative location of a surgeon to the patient. Thus, if a surgeon takes over a procedure, an indication is provided to the system and the new surgeon’s particular preferences are used during that surgeon’s portion of the procedure.
[0036] In a still further example, the computer-assisted robotic surgery system 100 is configured to refine the recommended trajectory upon detection that the actual intraoperative trajectory has deviated or is predicted to deviate from the recommended trajectory. [0037] The apparatus 110 further includes receiving input to control the computer- assisted robotic surgery system during the pre-operative training phase or the intraoperative execution phase. The computer 114 is configured to receive and process inputs, such as commands, to control the robotic surgical apparatus 110. Non-limiting example input includes keyboard input, touch screen input, joystick input, pre-programmed console input, voice input, sound or aural input, eye movement input, facial expression input, and physical gesture input. Advantageously, the subject input modalities allow the computer-assisted robotic surgery system to operate independent of the surgeon’s location. For example, if the surgeon is co-located with the computer-assisted robotic surgery system, the computer-assisted robotic surgery system is controllable using any of the listed example inputs. Additionally, the voice input, sound or aural input, eye movement input, facial expression input, and physical gesture input allow the surgeon to control the computer-assisted robotic surgery system even if physically remote from the patient, such as in a different room, building, state, country, or the like. [0038] The device 110 further includes executing the medical procedure activity based on the recommended trajectory using the medical tool 113. The computer- assisted robotic surgery system is configured to control a robotic surgical apparatus to perform the medical procedure activity using the medical tool. For example, the computer-assisted robotic surgery system controls the robotic surgical apparatus to operate a surgical drill at the recommended position, angle, and velocity so as to insert a pedicle screw into the patient vertebra. Advantageously, in this regard the computer-assisted robotic surgery system is configured to execute the recommended trajectory in the form of a prescription, e.g., automatically performing the medical procedure activity without requiring prior approval from the surgeon. [0039] Executing the medical procedure activity based on the recommended trajectory using the medical tool 113 can also include displaying, on a portable display (shown as 440 in FIG.4), the recommended trajectory of the medical tool 113. The portable display can include a heads-up display (HUD) worn by the
surgeon that uses augmented reality (AR) to display the recommended trajectory or other information related to the medical tool or the medical procedure activity. Alternatively, the portable display can be an arm-mounted display worn by the surgeon that displays the recommended trajectory or other information related to the medical tool or the medical procedure activity. [0040] Executing the medical procedure activity can also include customizing a surgical implant to the patient. For example, the computer-assisted robotic surgery system 110 is configured to manufacture, using a 3D printer, an implant that is custom-fit to the patient region. The manufacturing process for customizing the implant can be subtractive or additive, as appropriate for the 3D printer being used and the implant being customized. [0041] FIGS.2A-2D illustrate an exemplary machine learning model for use by the computer-assisted robotic surgery system as described herein. [0042] Referring to FIG.2A, an exemplary machine learning model 206 is shown for use by the computer-assisted robotic surgery system. Specifically, the computer- assisted robotic surgery system receives image training data 202 that is generated based on the sensor data and on the patient image data acquired from the imaging system. The computer-assisted robotic surgery system generates a pre-operative model 204 of the patient’s region based on the acquired image training data. The pre-operative model 204 can be used, for example, in connection with a pre- operative surgical plan of how the medical procedure will proceed intraoperatively. The computer-assisted robotic surgery system constructs or subsequently refines the machine learning model based on the image training data and the pre-operative model of the patient region. The machine learning model can be determined using a convolutional neural network or recurring neural network. Alternatively, the machine learning model can be a generative model trained via an adversarial learning process (sometimes referred to as GAN learning). [0043] For example, the computer-assisted robotic surgery system can apply reinforcement learning to refine the machine learning model. As used herein, reinforcement learning refers to using rewards or feedback to learn an underlying machine learning model. Non-limiting exemplary refinements to the machine learning model include receiving annotations or labels of the image training data
based on feedback offered by domain experts (e.g., by surgeons or other medical providers with experience in the desired medical procedure activity). The feedback can be positive or negative and include annotations by domain experts of the patient images associated with the image training data. Further, the image training data can be ranked or prioritized based on a determination or measurement of quality of the domain experts. In this regard, image training data associated with domain experts determined or considered to be higher quality can have an increased effect on the training of the machine learning model. [0044] Referring to FIG.2B, an exemplary refinement of the machine learning model, described herein as 206, is shown for use by the computer-assisted robotic surgery system. The refinement can include applying image processing heuristics to the image training data, described herein as 202, based on domain-specific knowledge of the medical procedure activity to be performed intraoperatively. For example, the region 208 indicates a narrow area of the medical tool 113, e.g., the isthmus (or narrowest section) of a pedicle screw for insertion during a spine surgical procedure. The axis 210 represents an initial recommended axis for insertion of the medical tool into the vertebra 218. The computer-assisted robotic surgery system can refine the recommended axis using domain-specific knowledge of the medical procedure activity. In the domain of spine surgery, non-limiting exemplary domain- specific knowledge includes recommending medializing a pedicle screw being inserted into a vertebral body so as to avoid contact with a facet joint of the vertebral body. [0045] Accordingly, the system can refine an initial recommendation provided by the machine learning model to apply image processing heuristics on intraoperatively acquired patient image data. As shown by dashed lines 212 and 214, for example, the image processing heuristics can segment the intraoperatively acquired patient image data to recommend adjusting parameters such as the insertion angle, position, or force (e.g., linear or rotational) of the pedicle screw so as to avoid contact with the facet joint. Another non-limiting example of domain-specific knowledge includes recommending staying a predetermined distance away from the anterior cortex, such as 5 mm or more away from the anterior cortex. [0046] Although the image training data 202 has been described in connection with image processing heuristics specific to patient bone, the computer-assisted
robotic surgery device is not limited to use on image training data in connection with patient bone. The computer-assisted robotic surgery device is also configured for processing image training data containing patient image data for soft tissue, vascular systems, and the like. [0047] Although FIG.2B illustrates non-limiting exemplary image training data 202 using exemplary axial and sagittal views for illustrative purposes, the computer- assisted robotic surgery system is not limited to processing axial and sagittal views. The computer-assisted robotic surgery system is operable to leverage and display other views of image training data as appropriate. [0048] FIGS.2C and 2D illustrate the computer-assisted robotic surgery system is configured to use and display other views of the image training data 202 so as to recommend and display trajectories for the medical tool 113. Accordingly, the image training data can be used and displayed in two-dimensional or three-dimensional as appropriate. [0049] FIG.3 illustrates a flowchart 300 of an exemplary method for refining and/or training of the machine learning model for use by the computer-assisted robotic surgery system, as described herein. [0050] The computer-assisted robotic surgery system obtains one or more patient images pre-operatively, as shown by 310. For example, patient images can include patient-specific two-dimensional or three-dimensional images acquired from an imaging system, such as images that can include two-dimensional or three- dimensional images of the patient’s vertebra or other bone. The patient images can be acquired based on a computed tomography (CT) scan, a magnetic resonance imaging (MRI) scan, ultrasound imaging, electron microscopy imaging, a positron emission tomography (PET) scan, or x-ray imaging from a corresponding imaging system. [0051] The computer-assisted robotic surgery system classifies the obtained patient images, as shown by 320. For example, the classification can include receiving annotations or other metadata such as labels or tags for the patient images based on feedback and assessments received from domain experts (e.g., by surgeons or other medical providers with experience in the desired medical procedure activity). The computer-assisted robotic surgery system is also configured
to enhance the classifications based on sensor data received from the navigation system. The computer-assisted robotic surgery system generates image training data based on the obtained patient images, the received sensor data, and the received classifications.

[0052] The computer-assisted robotic surgery system constructs and refines a machine learning model based on the classified patient images, as shown by 330. For example, the machine learning model can be determined using a convolutional neural network. Alternatively, the machine learning model can be a generative model trained via an adversarial learning process (sometimes referred to as GAN learning). The adversarial learning process takes advantage of the concept that most machine learning techniques were designed to work on specific problem sets in which the training and test data are generated from the same statistical distribution. When those models are applied to the real world, adversaries may supply data that violates that statistical assumption. This data may be arranged to exploit specific vulnerabilities and compromise the results. The four most common adversarial machine learning strategies are evasion, poisoning, model stealing (extraction), and inference.

[0053] Further, the computer-assisted robotic surgery system can apply reinforcement learning to refine the machine learning model. Non-limiting exemplary refinements to the machine learning model include receiving annotations or labels of the image training data based on feedback offered by domain experts (e.g., by surgeons or other medical providers with experience in the medical procedure activity).

[0054] FIG.4 illustrates an exemplary computer-assisted robotic surgery system 400 in accordance with an exemplary embodiment of the disclosure.

[0055] Referring to FIG.4, the computer-assisted robotic surgery system 400 includes a patient 402, a patient region 404, a navigation system 406, a plurality of receivers 408(a)…(n), a robotic surgical apparatus 410, a robotic arm 412, a surgical tool 413, an actuator assembly 415 operatively engaged with the robotic arm 412, and a computer 414 operatively coupled to the robotic surgical apparatus 410 and imaging system 418. A surgeon or other medical personnel 438 utilizes a heads-up display 440. Also shown are fiducial markers 421(a)…(n).
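By way of a hedged illustration of the model construction described at 330 in paragraph [0052], the sketch below shows a small convolutional network that could score candidate patches drawn from the image training data. The class structure, input size, two-class output, and the name TrajectoryPatchClassifier are illustrative assumptions and not part of the disclosure; PyTorch is used only as one possible implementation choice.

```python
# Illustrative sketch of a convolutional model such as the one described at 330.
# All names, shapes, and the two-class output are assumptions for illustration.
import torch
import torch.nn as nn

class TrajectoryPatchClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel image patch
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),  # assumes 64x64 input patches
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One training step on placeholder data standing in for annotated image patches.
model = TrajectoryPatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
patches = torch.randn(8, 1, 64, 64)   # placeholder image training data
labels = torch.randint(0, 2, (8,))    # placeholder expert classifications
optimizer.zero_grad()
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()
```

In practice, the placeholder patches and labels would be replaced by the image training data generated at 310 and the expert classifications received at 320.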
[0056] The robotic surgical apparatus 410 is configured to operate medical tools 413 so as to perform a medical procedure activity on the patient's region 404 or on a region of a similarly situated patient. Intraoperatively, a user 438 (e.g., a surgeon or other medical provider) uses the computer-assisted robotic surgery system 400 to perform an activity of a medical procedure on a region 404 of a patient 402.

[0057] The navigation system 406 is configured to track a relative position of the region 404 of the patient 402, along with one or more medical tools 413 used in performance of the medical procedure activity. The navigation system 406 can be an active system or a passive system and include multiple receivers 408(a)…408(n), where "n" is any suitable number.

[0058] The receivers 408 are configured to monitor, intraoperatively, any number of aspects of the medical procedure activity and to acquire sensor data relating to the medical procedure activity. The receivers 408 can acquire visual data, audible data, haptic data, and motion data of the operating room. The receivers 408 can be mounted in various locations throughout the operating room. In this arrangement, even if one receiver 408 is obstructed from acquiring and providing sensor data intraoperatively, one or more of the remaining receivers 408 with an unobstructed view can continue to provide sensor data. Further, the sensor data from multiple receivers 408 can be combined, e.g., triangulated, to determine sensor data that is more accurate than sensor data received from a single receiver 408. If the received sensor data is conflicting, then the computer-assisted robotic surgery system can evaluate the sensor data, for example, based on a configuration whereby the receivers are configured to vote on the quality or correctness of the sensor data.

[0059] The receivers 408 can be connected in a mesh network so as to provide and exchange the sensor data, in a similar fashion to sensor data acquired and exchanged via the Internet of Things (IoT). The mesh network connects receivers 408 by using proprietary or open communications protocols to self-organize and can pass measurement information back to central units such as computer 114, or other devices, such as shown in FIG.7 as historical processing device 726, training database 732, CNN 714, RNN 716, machine learning device 710, and/or surgeon processing device 704.
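To make the combination and voting behavior of paragraph [0058] concrete, the following is a minimal sketch in which each receiver reports a position estimate with a confidence value, and discrete conflicting readings are resolved by majority vote. The tuple layout, the names fuse_positions and resolve_conflict, and the example numbers are illustrative assumptions rather than elements of the disclosure.

```python
# Illustrative fusion and voting over data from several receivers.
from collections import Counter

def fuse_positions(estimates):
    """Confidence-weighted average of (x, y, z, confidence) tuples reported by
    multiple unobstructed receivers; obstructed receivers simply omit data."""
    total = sum(conf for _, _, _, conf in estimates)
    if total == 0:
        raise ValueError("no usable receiver data")
    x = sum(px * conf for px, _, _, conf in estimates) / total
    y = sum(py * conf for _, py, _, conf in estimates) / total
    z = sum(pz * conf for _, _, pz, conf in estimates) / total
    return (x, y, z)

def resolve_conflict(readings):
    """Majority vote among discrete, conflicting receiver readings
    (e.g., 'contact' versus 'no-contact')."""
    winner, _ = Counter(readings).most_common(1)[0]
    return winner

# Usage: three receivers, one partially obstructed (low confidence).
print(fuse_positions([(10.2, 4.1, 7.0, 0.9), (10.4, 4.0, 7.1, 0.8), (9.0, 5.0, 6.0, 0.1)]))
print(resolve_conflict(["contact", "contact", "no-contact"]))
```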
[0060] The computer-assisted robotic surgery system 400 acquires sensor data intraoperatively via receivers 408 operating in conjunction with the navigation system 406 and an imaging system 418. The sensor data includes, but is not limited to, medical tool position, medical tool angle of insertion into the patient region 404, medical tool force, medical tool torque, medical tool size, medical tool audio data, medical tool haptic data, light detection and ranging (LIDAR) data, infrared data, radar data, ultrasound data, chemical data, patient-specific image data, patient-specific video data, and electromyography (EMG) electrophysiologic data. For example, the sensor data can track a position, angle, force, torque, or size of a medical tool 413 while an activity of a medical procedure is performed on a patient 402. The sensor data can further track audio data, haptic data (e.g., force feedback), LIDAR data, infrared data, radar data, ultrasound data, or chemical data generated while the medical tools 413 are in use on the patient region 404.

[0061] Non-limiting example audio or haptic data includes changes in audio pitch and force feedback during a spinal procedure while a medical tool 413 traverses through soft tissue, into a vertebra of the spine, and then drills a pedicle screw into the vertebral body. The LIDAR data and patient-specific video data can be acquired from LIDAR sensors or cameras 408 communicatively coupled in the computer-assisted robotic surgery system 400. The medical tool 413 position data can be acquired based on fiducial markers 421(a)…(n), where "n" is any suitable number, disposed around the patient's region 404. The medical tool angle data can be acquired based on an angle sensor 408 coupled to an imaging system 418 such as an x-ray.

[0062] The sensor data can also include patient image data. The patient image data can be acquired intraoperatively from an imaging system 418. For example, the patient image data can include patient-specific image data acquired based on a computed tomography (CT) scan, a magnetic resonance imaging (MRI) scan, ultrasound imaging, electron microscopy imaging, a positron emission tomography (PET) scan, or x-ray imaging from a corresponding imaging system. The sensor data, including the patient image data, can be used to determine a recommended position, angle, and velocity (sometimes referred to herein as a trajectory) for a medical tool 413 relative to the patient region 404 based on the machine learning model, as described in further detail below.
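As a hedged illustration of the trajectory parameters described in paragraph [0062] and of the domain-specific refinement rules discussed with reference to FIG.2B (medializing to avoid the facet joint and staying at least 5 mm from the anterior cortex), the sketch below encodes a trajectory record and applies those two rules. The field names, the 2 mm facet clearance threshold, the 1.5 mm medialization amount, and the sign conventions are illustrative assumptions; only the two rules themselves come from the description.

```python
# Illustrative trajectory record and domain-specific refinement.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Trajectory:
    x: float            # distal-tip position, mm, in the patient reference frame
    y: float
    z: float
    angle_deg: float    # insertion angle relative to the reference frame
    velocity_mm_s: float

def refine_trajectory(traj: Trajectory,
                      distance_to_facet_mm: float,
                      distance_to_anterior_cortex_mm: float) -> Trajectory:
    refined = traj
    if distance_to_facet_mm < 2.0:                          # assumed clearance threshold
        refined = replace(refined, y=refined.y - 1.5)       # medialize (sign illustrative)
    if distance_to_anterior_cortex_mm < 5.0:                # 5 mm rule from the description
        short_by = 5.0 - distance_to_anterior_cortex_mm
        refined = replace(refined, z=refined.z - short_by)  # back the tip off
    return refined

initial = Trajectory(x=12.0, y=-3.2, z=41.0, angle_deg=18.5, velocity_mm_s=1.0)
print(refine_trajectory(initial, distance_to_facet_mm=1.2,
                        distance_to_anterior_cortex_mm=4.0))
```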
[0063] The heads-up display, or portable display, 440 may be worn by medical personnel. Alternatively, the portable display 440 can be an arm-mounted display worn by the surgeon 438 that displays the recommended trajectory or other information related to the medical tool 413 or the medical procedure activity. The execution of the medical procedure activity based on the recommended trajectory using the medical tool 413 can also include displaying, on portable display 440, the recommended trajectory of the medical tool 413. The portable display 440 can include a heads-up display (HUD) worn by the surgeon 438 that uses augmented reality (AR) to display the recommended trajectory or other information related to the medical tool 413 or the medical procedure activity.

[0064] FIG.5 illustrates a flowchart 500 of an exemplary method for performing a medical procedure activity using the computer-assisted robotic surgery system as described herein. Operation of the computer-assisted robotic surgery system proceeds in multiple stages. Exemplary stages include a pre-operative training phase and an intraoperative execution phase.

[0065] The computer-assisted robotic surgery system determines a reference frame for the patient region based on the machine learning model trained on sensor data and image training data, as shown by 510. The reference frame provides a reference point or baseline for determining a recommended trajectory for the medical tool used to perform the medical procedure activity. As described above, during the pre-operative training stage the machine learning model is trained on the image training data. The image training data is generated pre-operatively based on the patient image data and the other sensor data processed from the navigation system and imaging system.

[0066] The computer-assisted robotic surgery system tracks, using the surgical navigation system, a relative position of the patient region, as shown by 520. For example, the relative position of the patient region is determined relative to the reference frame. The computer-assisted robotic surgery system processes the relative position of the patient region and the reference frame in connection with the machine learning model to determine a recommended position, angle, and velocity (e.g., a recommended trajectory) for the medical tool used in the medical procedure activity.
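A minimal sketch of steps 510 and 520, expressing a tracked tool position in the patient reference frame using a homogeneous transform, follows. The matrix values and the function name to_reference_frame are illustrative assumptions rather than elements of the disclosure.

```python
# Illustrative mapping from navigation (world) coordinates into the patient
# reference frame determined at 510, yielding the relative position at 520.
import numpy as np

def to_reference_frame(point_world: np.ndarray, frame_world: np.ndarray) -> np.ndarray:
    """Map a 3D point from world coordinates into the patient reference frame,
    where frame_world is the 4x4 pose of that frame in world coordinates."""
    world_to_frame = np.linalg.inv(frame_world)
    p = np.append(point_world, 1.0)       # homogeneous coordinates
    return (world_to_frame @ p)[:3]

# Reference frame for the patient region (identity rotation, offset origin).
reference_frame = np.eye(4)
reference_frame[:3, 3] = [100.0, 50.0, 30.0]

tool_tip_world = np.array([112.0, 46.8, 71.0])
print(to_reference_frame(tool_tip_world, reference_frame))  # relative position
```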
[0067] The computer-assisted robotic surgery system determines a recommended position, angle, and velocity for the medical tool relative to the patient region, based on the machine learning model, as shown by 530. For example, the computer-assisted robotic surgery system determines parameters for the medical tool trajectory based on the machine learning model, such as a position in three-dimensional (3D) space, e.g., recommended (x, y, z) coordinates for a distal end of the medical tool, along with a recommended angle of insertion, force, and/or velocity (e.g., distance per unit time) from the machine learning model.

[0068] FIG.6 illustrates a process 600 according to an embodiment of the disclosure. The process 600 can be executed by one or more processors based on code stored on a non-transitory computer readable medium. The process begins with acquisition of first sensor data, 602. This sensor data can be acquired in a pre-operative stage by receivers disposed in an operating room or other examination venue.

[0069] Machine learning can be applied to the sensor data, which may be visual sensor data, 604.

[0070] A recommended trajectory for a surgical instrument is generated, 606.

[0071] Second visual sensed data is acquired, 608. This second visual sensed data may be acquired during an operative stage of the medical procedure.

[0072] Audio data may also be acquired, 610.

[0073] Haptic data may also be acquired, 612.

[0074] Voting on the acquired data is performed to determine the most preferred course of action for control of the surgical instrument, 616.

[0075] The voting may be based on input from surgeons 614, who may be selected based on their familiarity with the procedure. The most preferred course of action for control of the surgical instrument may be modified, 618, due, in part, to particular preferences of the surgeon currently performing the surgical procedure, 620.

[0076] The control of the surgical tool may be displayed on a heads-up display (HUD) or other display in the surgical room, 622. The movement of the surgical tool is controlled based on the sensed data, the voting, and the surgeon's preferences, 624.
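To tie the steps of process 600 together, including the loop-back on additional sensed data at 626-628 and the storage step at 632 described in the next two paragraphs, the following is a minimal, non-authoritative sketch of such a control loop. Every function name is a placeholder standing in for a step of FIG.6; none of them is defined by the disclosure.

```python
# Illustrative control loop for process 600; all callables are placeholders.
def run_procedure(acquire_sensed_data, vote, apply_surgeon_preferences,
                  display_on_hud, move_tool, store_record):
    recommended = None
    record = []
    data = acquire_sensed_data()                          # 608/610/612
    while data is not None:                               # 626: additional sensed data?
        preferred = vote(data, recommended)               # 616: vote on course of action
        preferred = apply_surgeon_preferences(preferred)  # 618/620
        display_on_hud(preferred)                         # 622
        move_tool(preferred)                              # 624
        record.append((data, preferred))
        recommended = preferred
        data = acquire_sensed_data()                      # loop back via 628
    store_record(record)                                  # 632

# Usage with trivial stand-ins:
samples = iter([{"force": 1.2}, {"force": 1.4}])
run_procedure(
    acquire_sensed_data=lambda: next(samples, None),
    vote=lambda data, prev: {"advance_mm": 0.5},
    apply_surgeon_preferences=lambda plan: plan,
    display_on_hud=print,
    move_tool=lambda plan: None,
    store_record=lambda rec: print(f"stored {len(rec)} steps"),
)
```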
[0077] A determination is made whether there is any additional sensed data, 626. If so, "yes" 628 shows that additional voting, 616, is performed based on the additional sensed data.

[0078] If there is no additional sensed data, "no" 630 shows that data related to the surgical procedure is stored, 632.

[0079] FIG.7 illustrates a network environment 700 according to an embodiment of the disclosure.

[0080] The computer-assisted robotic surgery system 700 includes the elements described in relation to FIG.1. Additionally, as shown in FIG.7, computer 114 is operatively coupled, via a bi-directional wireless or wired channel 702, to network 750. The network 750 is operatively coupled to historical processing device 726, training database 732, convolutional neural networks (CNN) 714, recurrent neural networks (RNN) 716, machine learning device 710, and surgeon processing device 704.

[0081] Network 750 is any suitable arrangement of interconnected computers, servers, and/or processing devices, such as the Internet. The network 750 may include an Internet Protocol (IP) network via hypertext transfer protocol (HTTP), secure HTTP (HTTPS), and the like. The network 750 may also support an e-mail server configured to operate as an interface between clients and the network components over the IP network via an email protocol (e.g., Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP), Post Office Protocol (POP), etc.).

[0082] Network 750 may be implemented using any suitable communication techniques such as Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE), Wireless Local Area Network (WLAN), Infrared (IR) communication, Public Switched Telephone Network (PSTN), Plain Old Telephone Service (POTS), radio waves, and/or other suitable communication techniques. The network 750 may allow ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet, and rely on sharing resources to achieve coherence and economies of scale, like a public utility. Alternatively, third-party clouds, which typically enable organizations to focus on their core businesses, may also be used. Network 750 is operatively coupled to: historical processing device 726 via wired or wireless bi-directional communication channel
727; training database 732 via wired or wireless bi-directional communication channel 735; CNN 714 via wired or wireless bi-directional communication channel 715; RNN 716 via wired or wireless bi-directional communication channel 717; machine learning device 710 via wired or wireless bi-directional communication channel 712; and surgeon processing device 704 via wired or wireless bi-directional communication channel 705.

[0083] Historical processing device 726 includes processor 728 and memory 730. Processor 728 may include a single processor or a plurality of processors (e.g., distributed processors). Processor 728 may be any suitable processor capable of executing or otherwise performing instructions and may include an associated central processing unit (CPU), general or special purpose microprocessors, or special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit), that carries out program instructions to perform arithmetical, logical, and input/output operations.

[0084] Processor 728 may execute code (e.g., processor firmware, a protocol stack, a database management system, an operating system, or a combination thereof) that creates an execution environment for program instructions. Processor 728 may receive instructions and data from a memory (e.g., a local memory 730, or a remote memory, via network 750). Multiple processors may be employed to provide for parallel or sequential execution of one or more portions of the embodiments described herein.

[0085] Memory 730 may be a tangible program carrier having program instructions stored thereon. A tangible program carrier may include a non-transitory computer readable storage medium. A non-transitory computer readable storage medium may include a machine readable storage device, a machine readable storage substrate, a memory device, or any combination thereof. A non-transitory computer readable storage medium may include non-volatile memory (e.g., flash memory, ROM, PROM, EPROM, EEPROM memory), volatile memory (e.g., random access memory (RAM), static random access memory (SRAM), synchronous dynamic RAM (SDRAM)), bulk storage memory (e.g., CD-ROM and/or DVD-ROM, hard-drives), or the like.
[0086] The historical processing device 726 is used to store and process previous surgical procedure data, which may be used to generate data for a future procedure. The historical processing device 726 is used as a repository for data acquired from other procedures, and that data may be classified and retrieved based on parameters such as patient data, ailment, diagnosis, and surgeon.

[0087] Training database 732 includes processor 734 and memory 736. The processor 734 may be similar to processor 728, as described herein. Memory 736 may be similar to memory 730, as described herein. The training database may be used to store and process training data. This data may be models, procedures, and/or protocols that surgeons performing medical procedures can use as a way to virtually perform the surgical procedure prior to the actual procedure. This training may include 3D models, computerized representations of a surgery, or other information and guidance for the surgeon.

[0088] A variety of model architectures are used, including stateful, for example, recurrent neural networks, or RNNs 716, and stateless, for example, convolutional neural networks, or CNNs 714; in some embodiments, a mix of the two may be used, depending on the nature of the particular surgical procedure being performed. Machine learning device 710 may be used to facilitate processing of the RNN 716 and CNN 714.

[0089] Surgeon processing device 704 includes processor 706 and memory 708. The processor 706 may be similar to processor 728, as described herein. Memory 708 may be similar to memory 730, as described herein. The surgeon processing device 704 is configured to receive input from a plurality of surgeons. The input may be stored in memory 708 and accessed by any suitable processor 706, 734, or 728, or by computer 114. The input may also be provided to machine learning device 710, RNN 716, and/or CNN 714. The input from the various surgeons may be weighted based on, for example, the experience level of a surgeon, the number of similar procedures performed by a surgeon, the specializations of a surgeon, the expertise of a surgeon, or other factors that give more credibility to an opinion of a surgeon providing input. Thus, the most qualified opinion will be given more weight than a less-qualified opinion, as determined by professional medical factors of the surgeon providing the input.
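A hedged sketch of the credibility weighting described in paragraph [0089] follows. The specific weight formula, field names, and example votes are illustrative assumptions; only the idea of scaling input by experience, similar-procedure count, and specialization is taken from the description.

```python
# Illustrative weighting of surgeon input before comparing options.
from collections import defaultdict

def weighted_vote(votes):
    """votes: list of (option, years_experience, similar_procedures, is_specialist)."""
    scores = defaultdict(float)
    for option, years, similar, specialist in votes:
        weight = 1.0 + 0.05 * years + 0.01 * similar + (0.5 if specialist else 0.0)
        scores[option] += weight
    return max(scores, key=scores.get)

print(weighted_vote([
    ("medialize 1 mm", 20, 300, True),
    ("keep planned axis", 4, 25, False),
    ("keep planned axis", 6, 40, False),
]))
```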
[0090] Machine learning device 710 is used to refine the recommended trajectory determined based on one or more machine learning models stored in machine learning device 710. For example, the refinement can include applying image processing heuristics to segment the patient region (e.g., patient bone) identified in intraoperative patient images based on the initial trajectory received from the machine learning model. Upon applying the image processing heuristics to segment the intraoperative image, the computer-assisted robotic surgery system refines the recommended trajectory, for example, based on domain-specific enhancements, as described above.

[0091] By way of further example, the refinement can include incorporating inference or knowledge of an individual surgeon's preferences into the recommended trajectory or into the training of the machine learning model. For example, the refinement can include modifying the recommended trajectory based on preferences inferred or known about Surgeon A so as to medialize, by 1-2 mm, a pedicle screw being inserted into a vertebral body and thereby avoid risk of contact with a facet joint of the vertebral body. In a still further example, the computer-assisted robotic surgery system is configured to refine the recommended trajectory upon detection that the actual intraoperative trajectory has deviated or is predicted to deviate from the recommended trajectory.

[0092] The system, as described herein, has multiple structures that can be described as an "executable component." For instance, the memories (708, 730 and 736) of the system are illustrated as including an executable component. The term "executable component" is the name for a structure that is well understood to one of ordinary skill in the art in the field of computing as being a structure that can be software, hardware, or a combination thereof. For instance, when implemented in software, one of ordinary skill in the art would understand that the structure of an executable component may include software objects, routines, methods, and so forth, that may be executed by one or more processors (706, 728 and 734) on the computing system, whether such an executable component exists in the heap of a computing system, or whether the executable component exists on computer-readable storage media.

[0093] The term "executable component" is also well understood by one of ordinary skill as including structures that are implemented exclusively or near-exclusively
in hardware, such as within a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), or any other specialized circuit. Accordingly, the term "executable component" is a term for a structure that is well understood by those of ordinary skill in the art of computing, whether implemented in software, hardware, or a combination. In this description, the terms "component," "service," "engine," "module," "control," "generator," or the like may also be used. As used in this description and in this case, these terms, whether expressed with or without a modifying clause, are also intended to be synonymous with the term "executable component," and thus also have a structure that is well understood by those of ordinary skill in the art of computing.

[0094] The structure of the executable component exists on a computer-readable medium in such a form that it is operable, when executed by one or more processors of the computing system, to cause the computing system to perform one or more functions, such as the functions and methods described herein. Such a structure may be computer-readable directly by the processors, as is the case if the executable component were binary. Alternatively, the structure may be structured to be interpretable and/or compiled, whether in a single stage or in multiple stages, so as to generate binary code that is directly interpretable by the processors. Such an understanding of exemplary structures of an executable component is well within the understanding of one of ordinary skill in the art of computing when using the term "executable component."

[0095] Computer-readable storage media include RAM, ROM, EEPROM, solid state drives ("SSDs"), flash memory, phase-change memory ("PCM"), CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other physical and tangible storage medium which can be used to store desired program code in the form of computer-executable instructions or data structures and which can be accessed and executed by a general purpose or special purpose computing system to implement the disclosed functionality of the disclosure.

[0096] One embodiment, as described herein, includes a computer-assisted robotic surgery system, the system comprising: a navigation system for tracking a relative position of a patient region and one or more medical tools, the navigation
system including two or more receivers configured to monitor an aspect of a medical procedure activity; a robotic surgical apparatus including: a robotic arm, and an actuator assembly operatively engaged with the robotic arm; and a computer operatively coupled to the robotic surgical apparatus having computer instructions that when executed: determine a reference frame for the patient region based on a neural network model trained on sensor data and on image training data sampled from a sensor system positioned to monitor an aspect of the medical procedure activity, track, using the navigation system, a relative position of the patient region, and determine a position, angle, and velocity for one of the medical tools relative to the patient region based on the neural network model trained on the sensor data and on the image training data. [0097] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the computer further comprises computer instructions executable to apply, using the robotic arm, control forces to the patient region based on the determined position, angle, and velocity for the one of the medical tools while the robotic surgical apparatus is engaged with the patient region. [0098] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the sensor data includes one or more of position, angle, force, torque, surgical implant size, medical image data, video data, light detection and ranging (LIDAR) data, infrared data, radar data, ultrasound data, audio data, haptic data, chemical data, and electromyography (EMG) electrophysiologic data. [0099] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the angle sensor data is acquired based on an angle sensor coupled to an x-ray. [00100] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the position is acquired based on fiducial markers disposed around the patient region. [00101] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the medical image data is acquired based on a computed tomography (CT) scan, a magnetic resonance imaging (MRI) scan, ultrasound imaging, electron microscopy imaging, a positron emission tomography (PET) scan, or x-ray imaging.
[00102] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the image training data is determined based on medical images that are annotated and classified by one or more medical providers. [00103] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the neural network is trained using reinforcement learning based on the image training data that is annotated and classified by the one or more medical providers. [00104] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the image training data is anonymized or pseudonymized so as to remove at least some identifying characteristics of a subject of the image training data. [00105] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the model is determined using a convolutional neural network. [00106] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the model is refined by simulating segmentation of vertebral bodies. [00107] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the model is a generative model trained via an adversarial learning process (GAN learning). [00108] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the one of the medical tools includes a surgical drill for placement of a pedicle screw. [00109] Another embodiment, as described herein relates to the computer-assisted surgery system, wherein the navigation system comprises an active system or a passive system. [00110] Another embodiment is directed to a method for performing a surgical procedure comprising: accessing composite visual sensor data from a plurality of visual sensors; constructing a machine learning model based on the composite visual sensor data; generating a recommended trajectory for a surgical instrument relative to a region of a patient based, at least in part, on the machine learning
model; receiving a vote on the sensor data; performing additional machine learning; modifying the recommended trajectory based at least in part on the additional machine learning. [00111] The functions performed in the processes and methods described above may be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples. Some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the disclosed embodiments' essence. [00112] Some embodiments of the disclosure may be described as a system, method, apparatus, or computer program product. Accordingly, embodiments of the disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the disclosure may take the form of a computer program product embodied in one or more computer readable storage media, such as a non-transitory computer readable storage medium, having computer readable program code embodied thereon. [00113] Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically, or operationally, together, comprise the module and achieve the stated purpose for the module. [00114] Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may
exist, at least partially, merely as electronic signals on a system or network. The system or network may include non-transitory computer readable media. Where a module or portions of a module are implemented in software, the software portions are stored on one or more computer readable storage media, which may be a non- transitory media. [00115] Any combination of one or more computer readable storage media may be utilized. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing, including non-transitory computer readable media. [00116] More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a Blu-ray Disc, an optical storage device, a magnetic tape, a Bernoulli drive, a magnetic disk, a magnetic storage device, a punch card, integrated circuits, other digital processing apparatus memory devices, or any suitable combination of the foregoing, but would not include propagating signals. [00117] In the context of this disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. [00118] Program code for carrying out operations for aspects of the present disclosure may be generated by any combination of one or more programming language types, including, but not limited to any of the following: machine languages, scripted languages, interpretive languages, compiled languages, concurrent languages, list-based languages, object oriented languages, procedural languages, reflective languages, visual languages, or other language types. [00119] The program code may execute partially or entirely on the computer (114), or partially or entirely on the surgeon’s device (704). Any remote computer may be connected to the surgical apparatus (110) through any type of network (750), including a local area network (LAN) or a wide area network (WAN), or the
connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). [00120] Although the following detailed description contains many specifics for the purposes of illustration, anyone of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the disclosure. Accordingly, the following embodiments are set forth without any loss of generality to, and without imposing limitations upon, the claims. [00121] In this detailed description, a person skilled in the art should note that directional terms, such as “above,” “below,” “upper,” “lower,” and other like terms are used for the convenience of the reader in reference to the drawings. Also, a person skilled in the art should notice this description may contain other terminology to convey position, orientation, and direction without departing from the principles of the present disclosure. [00122] Furthermore, in this detailed description, a person skilled in the art should note that quantitative qualifying terms such as “generally,” “substantially,” “mostly,” “approximately” and other terms are used, in general, to mean that the referred to object, characteristic, or quality constitutes a majority of the subject of the reference. The meaning of any of these terms is dependent upon the context within which it is used, and the meaning may be expressly modified. [00123] Some of the illustrative embodiments of the present disclosure may be advantageous in solving the problems herein described and other problems not discussed which are discoverable by a skilled artisan. While the above description contains much specificity, these should not be construed as limitations on the scope of any embodiment, but as exemplifications of the presented embodiments thereof. Many other ramifications and variations are possible within the teachings of the various embodiments. While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted for elements thereof without departing from the scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings without departing from the essential scope thereof.
[00124] Therefore, it is intended that the disclosure not be limited to the particular embodiment disclosed as the best or only mode contemplated for carrying out this disclosure, but that the disclosure will include all embodiments falling within the scope of the appended claims. Also, in the drawings and the description, there have been disclosed exemplary embodiments and, although specific terms may have been employed, they are unless otherwise stated used in a generic and descriptive sense only and not for purposes of limitation, the scope of the disclosure therefore not being so limited. Moreover, the use of the terms first, second, etc. do not denote any order or importance, but rather the terms first, second, etc. are used to distinguish one element from another. Furthermore, the use of the terms a, an, etc. do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Thus, the scope of the disclosure should be determined by the appended claims and their legal equivalents, and not by the examples given. [00125] Embodiments, as described herein can be implemented using a computing system associated with a transaction device, the computing system comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions to cause the computing system to perform operations. Additionally, a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations may also be used. [00126] It will be appreciated by those skilled in the art that changes could be made to the various aspects described above without departing from the broad inventive concept thereof. It is to be understood, therefore, that the subject application is not limited to the particular aspects disclosed, but it is intended to cover modifications within the spirit and scope of the subject application and appended claims.
CLAIMS
1. A method for controlling movement of a surgical instrument characterized by: acquiring first visual sensor data from one or more of a plurality of visual sensing devices (108), during a pre-operative stage; performing machine learning on the first visual sensor data; generating a recommended trajectory for a surgical instrument (113) based on the machine learning; acquiring second visual sensor data from one or more of a plurality of visual sensing devices (108) during an operative stage; performing voting on the first visual sensor data and the second visual sensor data; modifying the recommended trajectory for the surgical instrument based on the voting; and controlling movement of the surgical instrument (113) based on the modified recommended trajectory.
2. The method of claim 1, further characterized by: acquiring audio sensor data from one or more of a plurality of audio sensing devices (108), during the operative stage; and utilizing the audio sensor data in the voting.
3. The method of claim 1, further characterized by: acquiring haptic data from one or more of a plurality of haptic sensing devices (108), during the operative stage; and utilizing the haptic data in the voting.
4. The method of claim 1, further characterized by: displaying a representation of the movement of the surgical instrument (113) at a heads-up display (440).
5. The method of claim 1, further characterized by: performing image training on the first visual sensor data and the second visual sensor data.
6. The method of claim 1, further characterized by: accessing input from one or more surgeons (438); and utilizing the input in the voting.
7. The method of claim 6, further characterized by: assigning a weight to the input from one or more surgeons (438); and utilizing the weight in the voting.
8. The method of claim 1, wherein the first visual sensor data and the second visual sensor data are acquired from a mesh network.
9. The method of claim 1, further characterized by: accessing input from one or more surgeons (438); and utilizing the input to generate the recommended trajectory.
10. The method of claim 1, further characterized by: accessing preferences associated with a particular individual; and modifying the recommended trajectory based, at least in part, on the preferences associated with a particular individual.
11. A computer-assisted robotic surgery system (100), the system characterized by: a navigation system (106) for tracking a relative position of a patient region (104) and one or more medical tools (113), the navigation system (106) including two or more receivers (108) configured to monitor an aspect of a medical procedure activity; a robotic arm (112); an actuator assembly (115) operatively engaged with the robotic arm; and a computer (114) operatively coupled to the actuator assembly (115), having computer instructions that when executed: determine a reference frame for the patient region (104) based on a neural network model trained on sensor data and on image training data sampled from a sensor system (108) positioned to monitor an aspect of the medical procedure activity; track, using the navigation system, a relative position of the patient region (104); and determine a position, angle, and velocity for one of the medical tools (113) relative to the patient region (104) based on the neural network model trained on the sensor data and on the image training data.
12. The computer-assisted robotic surgery system (100) of claim 11, wherein the computer instructions are executed to apply, using the robotic arm (112), control forces to the patient region (104) based on the determined position, angle, and velocity for the one of the medical tools (113) while the robotic surgical apparatus is engaged with the patient region.
13. The computer-assisted robotic surgery system (100) of claim 11, wherein the sensor data includes one or more of position, angle, force, torque, audio data, and haptic data.
14. The computer-assisted robotic surgery system (100) of claim 11, wherein the angle is acquired based on an angle sensor coupled to an x-ray.
15. The computer-assisted robotic surgery system (100) of claim 11, wherein the position is acquired based on fiducial markers (421) disposed around the patient region (104).
16. The computer-assisted robotic surgery system (100) of claim 11, wherein the image training data is determined based on medical images that are annotated and classified by one or more medical providers (438).
17. The computer-assisted robotic surgery system (100) of claim 11, wherein the neural network model is trained using reinforcement learning based, at least in part, on the image training data that is annotated and classified by the one or more medical providers (438).
18. The computer-assisted robotic surgery system (100) of claim 11, wherein the model is determined using a convolutional neural network.
19. The computer-assisted robotic surgery system (100) of claim 11, wherein the model is a generative model trained via an adversarial learning process (GAN learning).
20. The computer-assisted robotic surgery system (100) of claim 11, wherein the model is refined by simulating segmentation of vertebral bodies.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163189391P | 2021-05-17 | 2021-05-17 | |
US63/189,391 | 2021-05-17 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2022245833A2 (en) | 2022-11-24 |
WO2022245833A3 WO2022245833A3 (en) | 2023-01-12 |