CN116867459A - Bone entry point verification system and method - Google Patents

Bone entry point verification system and method

Info

Publication number
CN116867459A
CN116867459A
Authority
CN
China
Prior art keywords
entry point
bone
imaging device
identified
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280012669.5A
Other languages
Chinese (zh)
Inventor
D. Junio
M. Shoham
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mazor Robotics Ltd
Original Assignee
Mazor Robotics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/575,245 (published as US 2022/0241016 A1)
Application filed by Mazor Robotics Ltd
Priority claimed from PCT/IL2022/050128 (published as WO 2022/162670 A1)
Publication of CN116867459A

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

A method for verifying a bone entry point, the method comprising: receiving a surgical plan defining a target bone entry point and a first portion of a bone surface at least partially surrounding the target bone entry point; positioning an imaging device near the identified bone entry point; receiving an image of a second portion of the bone surface surrounding the identified bone entry point from the imaging device; comparing at least one feature of the first portion with at least one feature of the second portion to quantify the similarity therebetween; and generating a confirmation when the quantified similarity between the first portion and the second portion exceeds a threshold.

Description

Bone entry point verification system and method
Technical Field
The present technology relates generally to surgical procedures and, more particularly, to verifying surgical access points into hard tissue.
Background
The surgical procedure may follow a surgical plan and may be performed autonomously or semi-autonomously. Imaging may be used to help a surgeon or surgical robot ensure that the surgical procedure is successfully performed. Patient anatomy may change over time, including between capturing images of a patient for a surgical plan and starting a surgical procedure.
Disclosure of Invention
Exemplary aspects of the present disclosure include:
A method for verifying a bone entry point according to at least one embodiment of the present disclosure includes: receiving a surgical plan defining a target bone entry point and a first portion of a bone surface at least partially surrounding the target bone entry point; positioning an imaging device near the identified bone entry point; receiving an image of a second portion of the bone surface at least partially surrounding the identified bone entry point from the imaging device; comparing at least one feature of the first portion with at least one feature of the second portion to quantify the similarity therebetween; and generating a confirmation when the quantified similarity between the first portion and the second portion exceeds a threshold.
Any of the aspects herein, wherein the confirmation indicates that the identified bone entry point matches the target bone entry point.
Any of the aspects herein, wherein the confirmation causes the surgical tool to drill into the identified bone entry point along the planned bone entry trajectory.
Any of the aspects herein, wherein each of the at least one feature of the first portion and the at least one feature of the second portion is a surface gradient.
Any of the aspects herein, wherein each of the at least one feature of the first portion and the at least one feature of the second portion is an anatomical landmark.
Any of the aspects herein, wherein the confirmation indicates a level of statistical certainty that the identified bone entry point matches the target bone entry point.
Any of the aspects herein, wherein the threshold is a percentage of similarity between the at least one feature of the first portion and the at least one feature of the second portion.
Any of the aspects herein, wherein the threshold is at least ninety-nine percent.
Any of the aspects herein, wherein the imaging device is an ultrasound probe.
Any of the aspects herein, wherein the ultrasound probe is positioned in a Minimally Invasive Surgical (MIS) port.
Any of the aspects herein, wherein the MIS port is filled with a solution.
Any of the aspects herein, wherein the solution is water or saline.
Any of the aspects herein, wherein the MIS port is located on the superior portion of the inferior vertebral body.
Any of the aspects herein, wherein the imaging device is an optical imaging device.
Any of the aspects herein, wherein the target bone is a vertebra.
In any of the aspects herein, the method further comprises drilling into the identified bone entry point using a surgical tool.
Any of the aspects herein, wherein positioning the imaging device near the identified bone entry point comprises orienting the imaging device substantially parallel to a planned bone entry trajectory.
A system for verifying an entry point into an anatomical tissue in accordance with at least one embodiment of the present disclosure comprises: a processor; and a memory storing instructions for execution by the processor, the instructions, when executed by the processor, causing the processor to: receive a surgical plan defining a target entry point of an anatomical tissue and including a first image of a first portion of the anatomical tissue in the vicinity of the target entry point; cause an imaging device to be positioned near an identified entry point of the anatomical tissue; receive, from the imaging device, a second image of a second portion of the anatomical tissue near the identified entry point; and cause the first image and the second image to be rendered to a user interface.
Any of the aspects herein, wherein the anatomical tissue is bone.
Any of the aspects herein, wherein the imaging device is positioned in a surgical incision.
Any of the aspects herein, wherein the surgical incision comprises a MIS port.
Any of the aspects herein, wherein the image of the second portion is an ultrasound image.
Any of the aspects herein, wherein the instructions further cause the processor to: compare the first portion with the second portion.
Any of the aspects herein, wherein the comparing comprises quantifying a difference between at least one characteristic of the first portion of the anatomical tissue and at least one characteristic of the second portion of the anatomical tissue.
Any of the aspects herein, wherein the anatomical tissue is a vertebra.
Any of the aspects herein, wherein the instructions further cause the processor to: generate a confirmation that the identified entry point matches the target entry point when the quantified difference between the at least one characteristic of the first portion of the anatomical tissue and the at least one characteristic of the second portion of the anatomical tissue is below a threshold.
Any of the aspects herein, wherein positioning the imaging device near the planned entry point comprises orienting the imaging device substantially parallel to the planned entry trajectory.
A system according to at least one embodiment of the present disclosure includes: an imaging device; a processor; and a memory storing instructions for execution by the processor, the instructions, when executed by the processor, causing the processor to: receive a surgical plan defining a target bone entry point and a first bone contour proximate the target bone entry point; position the imaging device near an identified bone entry point; cause the imaging device to capture an image of a second bone contour proximate the identified bone entry point; determine whether the first bone contour matches the second bone contour based on a predetermined threshold; when the first bone contour matches the second bone contour, generate a confirmation that the identified bone entry point matches the target bone entry point; and when the first bone contour does not match the second bone contour, generate an alert that the identified bone entry point does not match the target bone entry point.
Any of the aspects herein, wherein the imaging device is an ultrasound probe.
Any of the aspects herein, wherein the imaging device is oriented substantially parallel to the planned bone entry trajectory.
Any aspect may be combined with any one or more other aspects.
Any one or more of the features disclosed herein.
Any one or more of the features as generally disclosed herein.
Any one or more of the features as generally disclosed herein in combination with any one or more other features as generally disclosed herein.
Any one of the aspects/features/embodiments in combination with any one or more other aspects/features/embodiments.
Use of any one or more of the aspects or features as disclosed herein.
It should be understood that any feature described herein may be claimed in combination with any other feature as described herein, whether or not the feature is from the same described embodiment.
The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the technology described in this disclosure will be apparent from the description and drawings, and from the claims.
The phrases "at least one," "one or more," and/or "are open-ended expressions that have both connectivity and separability in operation. For example, the expressions "at least one of A, B and C", "at least one of A, B or C", "one or more of A, B and C", "one or more of A, B or C" and "one of A, B and/or C" mean a alone, B alone, C, A alone and B together, a alone and C together, B alone and C together, or A, B alone and C together. When each of A, B and C in the above description refers to an element such as X, Y and Z or an element such as X 1 -X n 、Y 1 -Y m And Z 1 -Z o The phrase is intended to refer to a single element selected from X, Y and Z, elements selected from the same class (e.g., X 1 And X 2 ) And elements selected from two or more classes (e.g., Y 1 And Z o ) Is a combination of (a) and (b).
The term "a/an" entity refers to one or more of that entity. Thus, the terms "a/an", "one or more", and "at least one" may be used interchangeably herein. It should also be noted that the terms "comprising" and "having" may be used interchangeably.
The foregoing is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor an exhaustive overview of the disclosure and its various aspects, embodiments, and configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure, but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As should be appreciated, other aspects, embodiments, and configurations of the present disclosure are possible, utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
Many additional features and advantages of the invention will become apparent to those skilled in the art upon consideration of the description of embodiments presented below.
Drawings
The accompanying drawings are incorporated in and form a part of this specification to illustrate several examples of the present disclosure. Together with the description, these drawings serve to explain the principles of the disclosure. The drawings only show preferred and alternative examples of how the disclosure may be made and used, and these examples should not be construed as limiting the disclosure to only the examples shown and described. Additional features and advantages will be made apparent from the following more detailed description of various aspects, embodiments and configurations of the present disclosure, as illustrated by the accompanying drawings referenced below.
FIG. 1 is a block diagram of a system according to at least one embodiment of the present disclosure;
FIG. 2 is a flow chart according to at least one embodiment of the present disclosure; and
FIG. 3 is a flow chart according to at least one embodiment of the present disclosure.
Detailed Description
It should be understood that the various aspects disclosed herein may be combined in different combinations than specifically presented in the specification and drawings. It should also be appreciated that certain acts or events of any of the processes or methods described herein can be performed in a different order, and/or can be added, combined, or omitted entirely, depending on the example or implementation (e.g., not all of the described acts or events may be required to implement the disclosed techniques in accordance with different implementations of the disclosure). Moreover, although certain aspects of the disclosure are described as being performed by a single module or unit for clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a computing device and/or a medical device.
In one or more examples, the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include non-transitory computer-readable media corresponding to tangible media, such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors, Intel Celeron processors, Intel Xeon processors, Intel Pentium processors, AMD Ryzen processors, AMD Athlon processors, AMD Phenom processors, Apple A10 or A10X Fusion processors, Apple A11, A12, A12X, A12Z, or A13 Bionic processors, or any other general purpose microprocessor), graphics processing units (e.g., Nvidia GeForce RTX series processors, AMD Radeon RX 5000 series processors, AMD Radeon RX 6000 series processors, or any other graphics processing unit), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), or other equivalent integrated or discrete logic circuits. Thus, the term "processor" as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. In addition, the present techniques may be fully implemented in one or more circuits or logic elements.
Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including" or "having" and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the present disclosure may use examples to illustrate one or more aspects thereof. The use or listing of one or more examples (which may be indicated by "for example," "by way of example," "e.g.," "such as," or similar language) is not intended to and does not limit the scope of the present disclosure unless expressly stated otherwise.
In removing bone near important patient anatomy (e.g., near blood vessels, near nerve roots, etc.) during Minimally Invasive Surgery (MIS), the entry point and/or the trajectory of the surgical tool through the entry point should be constrained and accurate. Accurate entry points and/or trajectories may help the surgeon safely access the relevant patient anatomy while also reducing the risk of patient injury and/or inadequate bone removal. Freehand surgical techniques and technology-assisted techniques generally fail to provide real-time feedback on the entry point and/or trajectory of the surgical tool. Instead, the MIS port construct may be approved and the procedure may proceed on the assumption that, because the robot and/or navigation system is accurate (e.g., registration is correct) and the MIS construct is known to be rigidly positioned, the entry point and/or trajectory of the surgical tool is also correct. Additionally or alternatively, the position of the MIS construct may be tracked, with the entry point and/or trajectory of the surgical tool assumed to be correct as long as the MIS construct does not change position. In accordance with embodiments of the present disclosure, a real-time feedback mechanism for indicating whether the entry point and/or trajectory of a surgical tool is accurate may be used to reduce trajectory problems caused by, for example, tool scraping, patient movement, registration misalignment, combinations thereof, and the like.
Feedback mechanisms according to embodiments of the present disclosure may include techniques that enable intraoperative imaging of the vicinity of an entry point and comparison (e.g., mapping) of the image to a model (e.g., a computerized model) of an identified entry point (whether determined pre- or intraoperatively), such that the surface around the entry point can be compared to the modeled surface and the entry point can be confirmed to be correct before such entry is actually made. A binary scoring system may be used to approve or disapprove advancement of the surgical tool into the entry point based on how well the imaged entry point matches the model. Imaging of the vicinity of the entry point may be obtained within the MIS port or incision, and may be obtained using an imaging device configured to obtain images in a low-light environment (e.g., within a lumen of a human body). The imaging device may utilize, for example, RGB/CCD visible light in combination with associated illumination, coded light, laser patterns, lidar, ultrasound, combinations thereof, and the like. Imaging may occur at various angles near the entry point, including, for example, parallel to the trajectory of the surgical tool into the bone.
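As a non-limiting illustration of such a binary scoring system, the sketch below compares a modeled surface patch around the planned entry point with an intraoperatively imaged patch and approves entry only when a similarity score clears a threshold. The function name, the use of normalized cross-correlation as the similarity measure, the requirement that the two patches have the same size, and the example threshold are assumptions introduced here for illustration only, not the claimed implementation.

```python
import numpy as np

def approve_entry(modeled_patch: np.ndarray, imaged_patch: np.ndarray,
                  threshold: float = 0.98) -> bool:
    """Binary scoring: approve advancing the tool only if the intraoperative
    patch around the entry point matches the modeled patch closely enough."""
    a = modeled_patch.astype(float).ravel()
    b = imaged_patch.astype(float).ravel()
    # Zero-mean, unit-variance normalization makes the score insensitive to
    # overall brightness or scale differences between the two images.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    score = float(np.dot(a, b) / a.size)  # normalized cross-correlation in [-1, 1]
    return score >= threshold
```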
Embodiments of the present disclosure provide technical solutions to one or more of the following problems: verifying an entry point of a surgical tool into an anatomical tissue (e.g., bone); avoiding trajectory misalignment problems caused by tool scraping, patient movement, registration misalignment or inaccuracy, combinations thereof, and the like; and/or a lack of real-time feedback regarding the entry point and/or trajectory of the surgical tool into the anatomy.
Turning first to fig. 1, a block diagram of a system 100 in accordance with at least one embodiment of the present disclosure is shown. The system 100 may be used to verify entry points and/or trajectories of surgical tools into anatomical tissue (e.g., soft tissue, bone, etc.), and/or to perform one or more other aspects of one or more methods disclosed herein. The system 100 includes a computing device 102, one or more imaging devices 112, a robot 114, a navigation system 118, a database 130, and/or a cloud or other network 134. Systems according to other embodiments of the present disclosure may include more or fewer components than system 100. For example, the system 100 may not include one or more components of the imaging device 112, the robot 114, the navigation system 118, the computing device 102, the database 130, and/or the cloud 134.
The computing device 102 includes a processor 104, a memory 106, a communication interface 108, and a user interface 110. Computing devices according to other embodiments of the present disclosure may include more or fewer components than computing device 102.
The processor 104 of the computing device 102 may be any processor described herein or any similar processor. The processor 104 may be configured to execute instructions stored in the memory 106 that may cause the processor 104 to perform one or more computing steps with or based on data received from the imaging device 112, the robot 114, the navigation system 118, the database 130, and/or the cloud 134.
Memory 106 may be or include RAM, DRAM, SDRAM, other solid state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions. Memory 106 may store information or data for performing any of the steps of methods 200 and/or 300, or any other method, such as those described herein. The memory 106 may store, for example, one or more image processing algorithms 120, one or more quantization algorithms 122, one or more transformation algorithms 124, one or more comparison algorithms 126, and/or one or more registration algorithms 128. In some implementations, such instructions or algorithms may be organized into one or more applications, modules, packages, layers, or engines. The algorithms and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging device 112, the robot 114, the database 130, and/or the cloud 134.
Computing device 102 may also include a communication interface 108. The communication interface 108 may be used to receive image data or other information from external sources (e.g., the imaging device 112, the robot 114, the navigation system 118, the database 130, the cloud 134, and/or any other system or component that is not part of the system 100) and/or to transmit instructions, images, or other information to external systems or devices (e.g., another computing device 102, the imaging device 112, the robot 114, the navigation system 118, the database 130, the cloud 134, and/or any other system or component that is not part of the system 100). The communication interface 108 may include one or more wired interfaces (e.g., USB ports, Ethernet ports, FireWire ports) and/or one or more wireless transceivers or interfaces (configured to transmit and/or receive information, e.g., via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, etc.). In some implementations, the communication interface 108 may be used to enable the computing device 102 to communicate with one or more other processors 104 or computing devices 102, whether to reduce the time required to complete computationally intensive tasks or for any other reason.
The computing device 102 may also include one or more user interfaces 110. The user interface 110 may be or include a keyboard, mouse, trackball, monitor, television, screen, touch screen, and/or any other device for receiving information from a user and/or for providing information to a user. The user interface 110 may be used, for example, to receive user selections or other user inputs regarding any of the steps of any of the methods described herein. Nonetheless, any desired input for any step of any method described herein may be automatically generated by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100. In some embodiments, the user interface 110 may be used to allow a surgeon or other user to modify instructions to be executed by the processor 104 and/or to modify or adjust settings of other information displayed on or corresponding to the user interface 110 in accordance with one or more embodiments of the present disclosure.
Although the user interface 110 is shown as part of the computing device 102, in some embodiments, the computing device 102 may utilize the user interface 110 housed separately from one or more remaining components of the computing device 102. In some embodiments, the user interface 110 may be located proximate to one or more other components of the computing device 102, while in other embodiments, the user interface 110 may be located remotely from one or more other components of the computing device 102.
The imaging device 112 may be used to image anatomical features (e.g., bones, veins, tissue, etc.) and/or other aspects of the patient anatomy to produce image data (e.g., image data depicting or corresponding to bones, veins, tissue, etc.). As used herein, "image data" refers to data generated or captured by the imaging device 112, including data in machine-readable form, graphical/visual form, and in any other form. In different examples, the image data may include data corresponding to anatomical features of the patient or a portion thereof. The image data may be or include pre-operative images, intra-operative images, post-operative images, or images taken independently of any operative procedure. In some implementations, the first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and the second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time that is subsequent to the first time. The imaging device 112 may be capable of capturing 2D images or 3D images to generate image data. The imaging device 112 may be or include, for example, an ultrasound scanner or probe (which may include, for example, physically separate transducers and receivers, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray based imaging (e.g., fluoroscope, CT scanner, or other X-ray machine), a Magnetic Resonance Imaging (MRI) scanner, an Optical Coherence Tomography (OCT) scanner, an endoscope, a microscope, a thermal imaging camera (e.g., an infrared camera), a radar system (which may include, for example, a transmitter, receiver, processor, and one or more antennas), or any other imaging device 112 suitable for obtaining images of anatomical features of a patient. The imaging device 112 may be contained entirely within a single housing, or may include a transmitter/emitter and receiver/detector located in separate housings or otherwise physically separated.
In some embodiments, the imaging device 112 may include more than one imaging device 112. For example, the first imaging device may provide first image data and/or a first image, and the second imaging device may provide second image data and/or a second image. In yet other implementations, the same imaging device may be used to provide both the first image data and the second image data and/or any other image data described herein. The imaging device 112 may be used to generate an image data stream. For example, the imaging device 112 may be configured to operate with a shutter that is open, or with a shutter that continuously alternates between open and closed, in order to capture successive images. For the purposes of this disclosure, image data may be considered continuous and/or provided as a stream of image data if the image data represents two or more frames per second, unless otherwise specified.
Robot 114 may be any surgical robot or surgical robotic system. The robot 114 may be or include, for example, a Mazor X™ Stealth Edition robotic guidance system. The robot 114 may be configured to position the imaging device 112 at one or more precise locations and orientations and/or to return the imaging device 112 to the same location and orientation at a later point in time. The robot 114 may additionally or alternatively be configured to manipulate surgical tools (whether based on guidance from the navigation system 118 or not) to accomplish or assist with a surgical task. In some embodiments, the robot 114 may be configured to hold and/or manipulate anatomical elements during or in conjunction with a surgical procedure. The robot 114 may include one or more robotic arms 116 and one or more surgical tools 140. In some embodiments, robotic arm 116 may include a first robotic arm and a second robotic arm, but robot 114 may include more than two robotic arms. In some embodiments, one or more of the robotic arms 116 may be used to hold and/or manipulate the imaging device 112. In embodiments where the imaging device 112 includes two or more physically separate components (e.g., a transmitter and a receiver), one robotic arm 116 may hold one such component and another robotic arm 116 may hold another such component. Each robotic arm 116 may be positioned independently of the other robotic arms. The robotic arms may be controlled in a single shared coordinate space or in separate coordinate spaces.
The robot 114 along with the robot arm 116 may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, each robotic arm 116 may be positioned or positionable in any pose, plane, and/or focus. The pose includes a position and an orientation. Thus, the imaging device 112, surgical tool 140, or other object held by the robot 114 (or more specifically, by the robotic arm 116) may be precisely positioned at one or more desired and specific locations and orientations.
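A minimal data-structure sketch of a pose as described above, pairing a position with an orientation, is shown below; the 3x3 rotation-matrix representation and the convention that the tool/imaging axis is the local z-axis are assumptions made here purely for illustration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    """A pose pairs a 3-D position with an orientation (here a 3x3 rotation
    matrix mapping the device frame into the world/patient frame)."""
    position: np.ndarray  # shape (3,)
    rotation: np.ndarray  # shape (3, 3), orthonormal

    def axis(self) -> np.ndarray:
        # Assumed convention: the imaging/drilling axis is the local z-axis
        # expressed in world coordinates.
        return self.rotation[:, 2]
```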
The robotic arm 116 may include one or more sensors that enable the processor 104 (or the processor of the robot 114) to determine the precise pose of the robotic arm (and any objects or elements held by or secured to the robotic arm) in space.
In some embodiments, reference markers (i.e., navigation markers) may be placed on the robot 114 (including, for example, on the robotic arm 116 and/or a surgical tool 140 held by or otherwise attached to the robotic arm 116), on the imaging device 112, or on any other object in the surgical space. The reference markers may be tracked by the navigation system 118, and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof. In some embodiments, the navigation system 118 may be used to track other components of the system (e.g., the imaging device 112), and the system may operate without the use of the robot 114 (e.g., with the surgeon manually manipulating the imaging device 112 and/or one or more surgical tools, for example, based on information and/or instructions generated by the navigation system 118).
During operation, the navigation system 118 may provide navigation to the surgeon and/or surgical robot. The navigation system 118 may be any known or future-developed navigation system, including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof. The navigation system 118 may include one or more cameras or other sensors for tracking one or more reference markers, navigation trackers, or other objects within the operating room or other room in which part or all of the system 100 is located. The one or more cameras may be optical cameras, infrared cameras, or other cameras. In some embodiments, the navigation system may include one or more electromagnetic sensors. In various embodiments, the navigation system 118 may be used to track the position and orientation (i.e., pose) of the imaging device 112, the robot 114 and/or robotic arm 116, and/or the one or more surgical tools 140 (or, more specifically, to track the pose of a navigation tracker attached, directly or indirectly, in a fixed relationship to one or more of the foregoing).
The navigation system 118 can include a display for displaying one or more images from an external source (e.g., the computing device 102, the imaging device 112, or other sources) or for displaying images and/or video streams from one or more cameras or other sensors of the navigation system 118. In some embodiments, the system 100 may operate without the use of the navigation system 118. The navigation system 118 may be configured to provide guidance to a surgeon or other user of the system 100 or component thereof, to the robot 114 or any other element of the system 100 regarding, for example, the pose of one or more anatomical elements, whether the tool is in an appropriate trajectory, and/or how to perform surgical tasks to move the tool into an appropriate trajectory according to a pre-operative or other surgical plan.
The one or more surgical tools 140 may be configured to perform and/or assist a surgeon in performing a surgical procedure or surgical task. The surgical tool 140 may have a proximal end (e.g., an end that is distal from the patient or that does not contact the patient) that may be attached to the robotic arm 116. The distal end of the surgical tool (e.g., the end positioned on, in, or closest to the patient) may be configured to facilitate a surgical procedure or surgical task. For example, the distal end may be equipped with a tool head (e.g., saw, drill, rasp, razor, scalpel, reamer, tap, etc.) to drill into anatomical tissue, for example. In some embodiments, the surgical tool may include more than one tool bit, and/or may be configured to receive more than one tool bit (e.g., so that the tool bit may be replaced, either preoperatively, intraoperatively, or both). In still other embodiments, the robotic arm 116 may be configured to select different surgical tools 140 for different surgical tasks, or an operator of the robot 114 may secure different surgical tools 140 to the robotic arm 116 in preparation for different surgical tasks.
In some embodiments, activation of the surgical tool (e.g., application of power to the tool, or switching of the surgical tool from a non-operational state to an operational state) may be controlled and/or monitored by the system 100 and/or components thereof (e.g., the computing device 102). In such embodiments, the system 100 may automatically disable the surgical tool 140 until the system 100 receives a confirmation (e.g., an electronic signal) indicating that the surgical procedure or task is allowed to proceed.
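A hypothetical sketch of such gating logic is shown below: the tool remains disabled until the verification step delivers a confirmation signal. The class and method names are illustrative assumptions, not part of the disclosed system.

```python
class ToolInterlock:
    """Software interlock: the surgical tool stays disabled until a confirmation
    that the identified entry point matches the target entry point is received."""

    def __init__(self) -> None:
        self._confirmed = False

    def receive_confirmation(self, confirmed: bool) -> None:
        # E.g., set in response to the electronic confirmation signal.
        self._confirmed = confirmed

    def may_activate_tool(self) -> bool:
        return self._confirmed


interlock = ToolInterlock()
assert not interlock.may_activate_tool()  # disabled by default
interlock.receive_confirmation(True)
assert interlock.may_activate_tool()      # enabled only after confirmation
```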
The system 100 or similar system may be used, for example, to perform one or more aspects of any of the methods 200 and 300 described herein. The system 100 or similar system may also be used for other purposes.
Fig. 2 depicts a method 200 that may be used, for example, to verify an entry point into an anatomy and/or to generate a confirmation that an entry point into an anatomy is accurate relative to a preoperative plan. Generally, the method 200 provides for comparing a portion of anatomy surrounding or otherwise proximate to a target entry point (as identified in a preoperative image or model) with a portion of anatomy surrounding or otherwise proximate to an identified entry point (as identified in an intraoperative image) at which a surgical tool is to be advanced into an anatomical element. If the comparison results in a determination of a match between one or more of the contours, geometries, outlines, surfaces, edges, and/or gradients of the two portions of anatomical tissue (as depicted in the two images), the identified entry point is considered to be the same as the target entry point. If the comparison results in a determination of no match, the identified entry point is considered to be different from the target entry point (which in turn means that the target entry point has not been correctly located).
The method 200 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor 104 of the computing device 102 described above. The at least one processor may be part of a robot, such as robot 114, or part of a navigation system, such as navigation system 118. Processors other than any of the processors described herein may also be used to perform the method 200. The at least one processor may perform the method 200 by executing instructions stored in a memory, such as the memory 106. The instructions may correspond to one or more steps of the method 200 described below. The instructions may cause the processor to perform one or more algorithms, such as image processing algorithm 120, quantization algorithm 122, transformation algorithm 124, comparison algorithm 126, and/or registration algorithm 128.
The method 200 includes receiving a first image depicting or otherwise identifying a target entry point and a first portion of anatomical tissue surrounding the target entry point (step 204). The target entry point may be a designated location at which the surgical tool is to enter the anatomy. The target entry point may be determined by a surgical or preoperative plan, by a surgeon, and/or by a combination thereof, and may be a preferred, ideal, or otherwise modeled/simulated entry point for the distal end (e.g., cutting end) of the surgical tool. The target entry point may vary depending on the type of surgical procedure, the availability of surgical tools, the level of autonomy associated with the surgical procedure (e.g., manual, semi-autonomous, etc.), and so forth. For example, the target entry point may be located on a surface of a bone, such as a vertebra.
The target entry point may be associated with a particular trajectory, which may depend on information about the bone and/or information about the surgical tool being used. The particular trajectory associated with a given target entry point for a drill bit may be based, for example, on the type, size, and/or location of the surgical drill bit used. Embodiments of the present disclosure may be used to verify not only that an identified entry point matches a predetermined target entry point, but also that an expected trajectory matches a predetermined trajectory. This may be accomplished, for example, by comparing a first portion of the anatomy surrounding or otherwise proximate to the predetermined target entry point, as seen from the predetermined trajectory, with a second portion of the anatomy surrounding or otherwise proximate to the identified entry point, as seen from the expected trajectory. If the first portion of the anatomy as seen from the predetermined trajectory matches the second portion of the anatomy as seen from the expected trajectory, then it may be confirmed that the expected trajectory matches the predetermined trajectory.
As described above, the received image may depict or otherwise show a first portion of anatomical tissue (e.g., a portion of a vertebra or other bone) surrounding the target entry point. The received image of the first portion of anatomy may include or show information about one or more of the contours, geometric forms, surfaces, edges, gradients, outlines, etc. of the anatomical tissue surrounding the target entry point. The image may depict varying amounts of anatomy around the target entry point. For example, the image may depict a percentage (e.g., 1%, 2%, 5%, 10%, 25%, 50%, 75%, 90%, 100%) of the anatomical tissue surrounding the target entry point, depending on, for example, the surgical procedure, the anatomy type, etc. In some embodiments, the anatomical tissue may be an organ, and the image may depict only a portion thereof that is relevant to the surgical procedure. In some embodiments, the anatomical tissue may be located near the target entry point. For example, the anatomical tissue may be a known distance from the target entry point. In still further embodiments, the anatomical tissue may appear as or be a prominent, distinct, or otherwise identifiable landmark, feature, shape, and/or shadow at or near the target entry point in the received image.
In some embodiments, an image depicting the first portion of anatomical tissue may be captured preoperatively (i.e., prior to the surgical procedure) and may be stored in a system (e.g., system 100) and/or one or more components thereof (e.g., database 130, memory 106, etc.). In some embodiments, the image may show simulated anatomy based on data obtained from preoperative imaging (e.g., a CT scan, X-ray, etc.) and/or patient data (e.g., patient history). In at least one embodiment, the received image may depict one or more views of a vertebra (e.g., a lumbar vertebra). In this embodiment, a surgical procedure or task may be performed on the vertebra using a surgical tool (e.g., surgical tool 140) that operates autonomously or semi-autonomously to drill into and/or insert screws (e.g., pedicle screws) into the vertebrae of the patient.
In some embodiments, the image may depict one or more poses (e.g., positions and orientations) of the first portion of anatomical tissue. For example, the anatomical tissue may be a vertebra and the target entry point may be a location on the vertebra. The first image may depict a plurality of surfaces of the vertebra and/or of a region proximate the vertebra, or a profile of the vertebra near the target entry point. As previously described, the first image may be a rendered or computerized model of a portion of the anatomy surrounding the target entry point. For example, one or more algorithms may be used to generate the image of the first portion of anatomical tissue. The one or more algorithms may be Artificial Intelligence (AI) or machine learning (e.g., deep learning) algorithms that generate a model of the anatomy after receiving as input, for example, a patient medical record, a type of planned procedure, a surgical instrument required for the procedure, a previously captured image or scan of the anatomy (e.g., CT scan, X-ray, MRI scan, ultrasound image, etc.), a combination thereof, and the like.
The method 200 also includes causing the imaging device to move to a position near the identified entry point of the anatomical tissue (step 208). The causing may include controlling a robotic arm (e.g., robotic arm 116) to which the imaging device is attached. The imaging device may be any of the imaging devices discussed herein (e.g., imaging device 112), but the imaging device may also be an imaging device not specifically mentioned herein. The imaging device may be controlled, navigated, or otherwise manipulated using a computing device such as computing device 102, a processor such as processor 104, a navigation system such as navigation system 118, and/or a combination thereof. The imaging device may be oriented with respect to the identified entry point of the anatomical tissue. For example, the imaging device may be aligned in a parallel orientation relative to a planned trajectory of the distal end of the surgical tool. In some embodiments, the imaging device may be oriented in the same pose that the surgical tool will be in when performing the procedure. In other words, the imaging device may be positioned such that it can capture an image depicting a similar or identical trajectory to that of the distal end of the surgical tool when performing the surgical procedure or task with respect to the identified entry point.
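One way to check the "substantially parallel" condition described above is to compare direction vectors, as in the sketch below; the function name and the 2-degree tolerance are assumptions chosen for illustration.

```python
import numpy as np

def is_substantially_parallel(device_axis, planned_trajectory,
                              max_angle_deg: float = 2.0) -> bool:
    """Return True if the imaging axis is within a small angle of the planned
    bone-entry trajectory; both arguments are 3-D direction vectors."""
    u = np.asarray(device_axis, dtype=float)
    v = np.asarray(planned_trajectory, dtype=float)
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    cos_angle = abs(float(np.dot(u, v)))  # sign-insensitive (parallel or anti-parallel)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg
```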
In some embodiments, the imaging device may be positioned partially or completely within the patient, an MIS port, an incision, combinations thereof, and the like. In one embodiment, the imaging device may be an ultrasound probe positioned within an MIS port (which in turn is positioned within an incision). In this embodiment, the MIS port may be filled with a fluid (e.g., water, saline, combinations thereof, etc.) such that the fluid fills any empty space between the ultrasound probe and the imaged anatomical element (e.g., the vertebra and surrounding tissue), which may be necessary to obtain an ultrasound image of the target entry point. The particular location of the MIS port is in no way limiting, and the MIS port may be constructed or otherwise disposed on different areas of the patient depending on, for example, the type of surgical procedure or task being performed, the availability of surgical tools, combinations thereof, and the like. For example, the MIS port may be located on an upper portion of a lower vertebral body (e.g., a lumbar vertebra) (e.g., a portion closer to the patient's head). The positioning of the MIS port then allows a surgical tool operated by a surgeon or robot to access an entry point located in the superior portion of the inferior vertebral body to perform a surgical procedure or task on the inferior vertebral body. The MIS port may be used preoperatively (e.g., by an imaging device) and/or intraoperatively (e.g., by a surgical tool) to allow visual and/or physical access to the inferior vertebral body.
The identified entry point may include or correspond to pose (e.g., position and orientation) information related to a surgical tool used to perform a surgical procedure or task. For example, the identified entry point may specify or correspond to a specified pose of the surgical tool and/or components thereof (e.g., including the distal end of a drill bit) during performance of the surgical procedure or task. The identified entry point may be determined by a system (e.g., system 100) and/or components thereof (e.g., processor 104) using one or more of a surgical plan, patient medical data, simulated images, and the like. The system (e.g., system 100) and/or components thereof (e.g., computing device 102) may use the pose information to align the imaging device relative to the identified entry point, allowing the imaging device to capture images along the same trajectory that the surgical tool will use. The identified entry point may be determined by the surgeon based on a preoperative plan or a surgical plan. In some implementations, the identified entry point can be based on a location of the MIS port.
The method 200 also includes receiving a second image depicting a second portion of the anatomical tissue surrounding the identified entry point (step 212). The second image depicting the second portion of the anatomical tissue surrounding the identified entry point may be captured by an imaging device (e.g., imaging device 112), for example, after the imaging device has been positioned near the identified entry point in step 208. In some embodiments, the imaging device may be positioned and instructed (e.g., by the processor 104) to capture one or more images of the surface of the anatomical tissue based on the planned or actual pose and/or trajectory of the surgical tool. The second image may be, for example, a CT scan, an MRI scan, or an ultrasound image. In some embodiments, the captured image may be processed by an algorithm (e.g., image processing algorithm 120) to filter the captured image (e.g., to remove artifacts) and/or to enhance the captured image (e.g., by enhancing contrast between the anatomical tissue and the background). The captured image may depict a portion of a surface of the second portion of the anatomical tissue. For example, the anatomical tissue may be a vertebra, and the captured image may show a portion of the vertebral surface around the identified entry point. The captured image may be used by a system (e.g., system 100) and/or components thereof (e.g., computing device 102), stored in a database (e.g., database 130), and/or shared with components external to the system via a cloud (e.g., cloud 134). In some embodiments, the second portion of the anatomy may be located near the planned entry point. For example, the second portion may be a known distance from the planned entry point. In yet other embodiments, the second portion may appear as or be a prominent, distinct, or otherwise identifiable landmark, feature, shape, and/or shadow at or near the planned entry point in the captured image.
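The filtering and contrast enhancement mentioned above could, for instance, look like the following sketch (a robust percentile clip followed by a contrast stretch); the specific percentiles and the function name are illustrative assumptions, not the image processing algorithm 120 itself.

```python
import numpy as np

def preprocess_image(image: np.ndarray) -> np.ndarray:
    """Clip intensity outliers (e.g., ultrasound speckle) and stretch contrast so
    the bone surface stands out from the background before comparison."""
    lo, hi = np.percentile(image, (2.0, 98.0))   # robust intensity range
    clipped = np.clip(image.astype(float), lo, hi)
    return (clipped - lo) / max(hi - lo, 1e-9)   # rescale to [0, 1]
```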
The method 200 further includes comparing at least one feature of the second portion of the anatomical tissue surrounding the identified entry point, as depicted in the second image, with at least one feature of the first portion of the anatomical tissue surrounding the target entry point, as depicted in the first image (step 216). The comparison of the features may include using one or more algorithms (e.g., comparison algorithm 126) and/or generating one or more overlays of the first image on the second image, or vice versa. In some embodiments, the algorithm may identify differences between the first portion and the second portion of the anatomical tissue (e.g., by comparing one or more features present in the images). For example, the comparison algorithm may process information associated with one or more features, such as one or more contours, geometric forms, outlines, surfaces, edges, gradients, combinations thereof, and the like, associated with the first and second portions of the anatomical tissue as represented in the first and second images. In some embodiments, the algorithm may then provide a quantified similarity (or difference) between the compared features (e.g., a percentage of overlap of one or more features in the images, a statistical likelihood that the compared features are the same feature or different features, etc.). The quantified similarity may be used in conjunction with a threshold to determine whether the first portion and the second portion match. For example, in some embodiments, the similarity may be compared to a threshold, and when the similarity exceeds the threshold (or, in some cases, falls below the threshold), the system (e.g., system 100) may define the first portion and the second portion as matching each other (e.g., the first portion and the second portion share sufficient similarity, or are sufficiently close in appearance to each other, to be defined as the same).
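As one hypothetical way to produce such a quantified similarity, the sketch below measures the percentage overlap (Dice coefficient) of binary feature masks extracted from the two images and compares it to a threshold; the mask representation, the Dice measure, and the 98% default are assumptions introduced for illustration only.

```python
import numpy as np

def percent_overlap(planned_mask: np.ndarray, imaged_mask: np.ndarray) -> float:
    """Quantified similarity: percentage overlap (Dice coefficient) of two binary
    feature masks, e.g., bone-surface pixels detected in each image."""
    a = planned_mask.astype(bool)
    b = imaged_mask.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 0.0
    return 100.0 * 2.0 * np.logical_and(a, b).sum() / denom

def portions_match(planned_mask, imaged_mask, threshold_pct: float = 98.0) -> bool:
    # The portions are defined as matching when the similarity exceeds the threshold.
    return percent_overlap(planned_mask, imaged_mask) >= threshold_pct
```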
In some embodiments, one or more features associated with the surface of the anatomical tissue may be compared. For example, in embodiments in which the anatomy is a vertebra, the comparison may include comparing a surface of the vertebra corresponding to the first portion depicted in the first image (e.g., the surface of the vertebra surrounding the target entry point) with a surface of the vertebra corresponding to the second portion depicted in the second image (e.g., the surface of the vertebra surrounding the identified entry point). In such embodiments, where the vertebral surfaces in the imaged first and second portions match or are otherwise determined (e.g., based on the quantified similarity) to be the same, a match between the identified entry point and the target entry point may be confirmed. Moreover, in some embodiments, the comparison of the surfaces may help verify that the surgical tool (e.g., surgical tool 140) is properly positioned to have a trajectory that matches a predetermined trajectory through the vertebra, which may have been selected to minimize the risk of, for example, scraping along the bone surface (e.g., the surgical tool entering an incorrect portion of the vertebra).
In some implementations, the comparison of one or more features may be based on, for example, gradient matching, surface detection, one or more surface angle comparisons, feature matching, combinations thereof, and the like. Gradient matching may be or include matching (e.g., aligning, overlapping, comparing, etc.) one or more gradients of the anatomical tissue as represented in the first image with one or more gradients of the anatomical tissue as depicted in the second image, and determining differences between the respective gradients. Surface detection may be or include detecting one or more surfaces associated with the anatomy in the first image and one or more surfaces associated with the anatomy in the second image and, in some embodiments, comparing the respective surfaces to quantify the difference between them. Similarly, a surface angle comparison may be or include a comparison between one or more angles of one or more identified or detected surfaces of the anatomical tissue depicted in the two images (e.g., to determine similarity or variance, etc.). Feature matching may be or include a comparison of certain anatomical features (e.g., distinct landmarks, shapes, contours, etc.) present in the anatomical tissue depicted in the two images. For example, feature matching may be accomplished by one or more algorithms (e.g., comparison algorithm 126) that take the two images as input, identify a particular anatomical landmark in the anatomical tissue in the first image, and attempt to locate the corresponding anatomical landmark in the second image. The algorithm may determine a relative difference between the state, shape, structure, outline, and/or contour of the anatomical landmark between the first image and the second image, and output a value based on, for example, the determined difference.
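Gradient matching, for example, could be approximated as in the sketch below: compute local gradient directions in both patches and report their mean angular difference, where a small value indicates closely matching gradients. The function name and the use of image-intensity gradients as a proxy for surface gradients are assumptions made for illustration.

```python
import numpy as np

def mean_gradient_angle_difference(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Mean angular difference (degrees) between the local gradient directions of
    two equally sized image patches; smaller values indicate a better gradient match."""
    gy_a, gx_a = np.gradient(patch_a.astype(float))
    gy_b, gx_b = np.gradient(patch_b.astype(float))
    ang_a = np.arctan2(gy_a, gx_a)
    ang_b = np.arctan2(gy_b, gx_b)
    # Wrap differences to [0, pi] so that, e.g., 350 deg vs 10 deg counts as 20 deg.
    diff = np.abs(np.angle(np.exp(1j * (ang_a - ang_b))))
    return float(np.degrees(diff).mean())
```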
The method 200 further includes generating a confirmation that the identified entry point matches the target entry point when the first portion matches the second portion (step 220). The confirmation may be based on the comparison between the first portion and the second portion of the anatomical tissue. For example, if the comparison indicates that one or more contours, geometries, outlines, surfaces, edges, gradients, combinations thereof, etc. of the first and second portions of the anatomy match (e.g., are located at the same location in a common coordinate system), then the respective target entry point and identified entry point may be considered to also match (e.g., to represent the same point on the anatomy). The matching of the target entry point to the identified entry point may indicate, to a system (e.g., system 100), one or more components of the system, and/or a surgeon, that the surgical tool is positioned to enter the anatomical tissue at the entry point identified in the surgical plan.
In some implementations, the method 200 may utilize a threshold to generate the confirmation. The threshold may be defined by the system, components thereof, and/or the surgeon, and may indicate a degree of match between the target entry point and the identified entry point sufficient to continue the surgical procedure or task (based on analysis of the first and second portions of surrounding anatomy). The confirmation may be sent to one or more components of the system (e.g., system 100), which may determine whether the surgical procedure or task is permitted to proceed and may adjust operation upon receipt of the confirmation (e.g., the surgeon may wait for confirmation before proceeding with the surgical procedure, the processor may prevent activation of the surgical tool before receiving the confirmation, etc.).
In some embodiments, the system may define a percentage threshold (e.g., 99.9%, 99%, 98%, 97%, etc.) above which the system defines the identified entry point as matching the target entry point. For example, a threshold of 98% may indicate that when 98% or more of the features of the first and second portions of the anatomical tissue are identical to one another, the target entry point and the identified entry point will be considered identical and a confirmation (e.g., an electronic signal) will be generated. In this example, a value below the 98% threshold does not return a confirmation, which may indicate that the target entry point and the identified entry point are different, at least to a desired degree of certainty (e.g., if the surgical procedure were continued by entering the anatomy at the identified entry point using the surgical tool, there would be insufficient certainty that the surgical tool is entering the anatomy at the target entry point identified in the preoperative or surgical plan).
The method 200 also includes causing a first image depicting the first portion of the anatomical tissue (e.g., the image received in step 204) and a second image depicting the second portion of the anatomical tissue (e.g., the image received in step 212) to be rendered to a user interface (step 224). A user interface (e.g., user interface 110) may provide a visualization of the first and second portions of the anatomical tissue for visual comparison therebetween. In some implementations, the method 200 may utilize an algorithm (e.g., transformation algorithm 124) to transform one or both of the images such that, when rendered to the user interface, the two images display the same features relative to a single coordinate system. For example, if the imaging device is used to capture the second image of the second portion of the anatomical tissue at an angle relative to the planned trajectory of the surgical tool, the algorithm may map, adjust, or otherwise alter the depicted second portion of the anatomical tissue surrounding the identified entry point so that it is shown from the same perspective as the first portion of the anatomical tissue, for visual comparison. In addition, in some embodiments, step 224 may include automatically annotating one or more features of the first portion of the anatomy and one or more features of the second portion of the anatomy, whether to identify matching features and/or to identify distinguishing features therebetween.
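A minimal sketch of such a transformation step, assuming a known 2-D rigid registration (a 2x2 rotation and a translation) between the two images' pixel frames, is shown below; the use of scipy's affine_transform and the parameter names are illustrative choices, not the transformation algorithm 124 itself.

```python
import numpy as np
from scipy.ndimage import affine_transform

def to_common_frame(second_image: np.ndarray,
                    rotation: np.ndarray,
                    offset: np.ndarray) -> np.ndarray:
    """Resample the intraoperative (second) image into the planned (first) image's
    pixel frame. `rotation` (2x2) and `offset` (2,) map output pixel coordinates
    to input pixel coordinates, as affine_transform expects."""
    return affine_transform(second_image, rotation, offset=offset, order=1)
```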
The user interface may allow the surgeon to view both images in order to confirm that the identified entry point matches the target entry point. In some embodiments, one of the two images may be overlaid on the other image to better assist the surgeon in viewing the differences between the two images. In some embodiments, one or both of the two images may be rendered with associated metadata (e.g., information about the quantified difference between the two images, information about the amount or degree by which the two images differ, etc.). The rendering of the two images may display each image with different visual markers based on the type of tissue (e.g., soft or hard tissue), based on the relative difference between the two images (e.g., a color gradient in which a darker color represents a greater relative difference), based on the relative distance of the anatomical tissue from the entry point (e.g., greater contrast near the entry point), combinations thereof, and the like.
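A minimal sketch of one possible side-by-side and overlay rendering, assuming both images are already in a common frame and using matplotlib purely for illustration (the difference heat map stands in for the color-gradient markers described above):

```python
import numpy as np
import matplotlib.pyplot as plt

def render_overlay(first_image: np.ndarray, second_image: np.ndarray) -> None:
    """Show the planned and intraoperative views plus an overlay in which
    larger per-pixel differences appear more strongly colored."""
    diff = np.abs(first_image.astype(float) - second_image.astype(float))

    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    axes[0].imshow(first_image, cmap="gray")
    axes[0].set_title("Planned (first portion)")
    axes[1].imshow(second_image, cmap="gray")
    axes[1].set_title("Intraoperative (second portion)")
    axes[2].imshow(first_image, cmap="gray")
    axes[2].imshow(diff, cmap="magma", alpha=0.5)  # difference heat map
    axes[2].set_title("Overlay with difference map")
    for ax in axes:
        ax.axis("off")
    plt.show()
```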
The present disclosure encompasses embodiments of the method 200 that include more or fewer steps than those described above, and/or one or more steps that differ from the steps described above.
Method 300 includes receiving a surgical plan defining a target entry point and one or more first features of anatomical tissue surrounding the target entry point (step 304). The target entry point may be a designated location at which the surgical tool is to access the anatomy. For example, where spinal surgery is to be performed to implant pedicle screws in multiple vertebrae, a target entry point may be identified on each vertebra, and contours (and/or one or more other features) of the vertebra in the area surrounding the target entry point may be delineated, described, or otherwise identified in the surgical plan. The target entry point may be determined autonomously, by a surgeon or other user, and/or partially autonomously and partially with input from a surgeon or other user, and may be a preferred, ideal, or otherwise modeled/simulated entry point for the distal end (e.g., cutting end) of the surgical tool. The target entry point may vary depending on the type of surgery, the available surgical tools, the level of autonomy associated with the surgery (e.g., manual, semi-autonomous, etc.), and so forth. For example, the target entry point may be located on a surface of a bone, such as a vertebra. The target entry point may be associated with a particular trajectory, which may depend on information about the bone and/or information about the surgical tool being used. The particular trajectory associated with a given target entry point for a drill bit may be based, for example, on the type, size, and/or location of the surgical drill bit used.
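Purely as an illustrative sketch (the field names and types are assumptions, not part of the surgical plan format disclosed herein), a planned entry point and its associated trajectory might be represented as:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PlannedEntryPoint:
    """Target entry point as it might be carried in a surgical plan."""
    position: np.ndarray       # 3D point on the bone surface, in plan coordinates
    trajectory: np.ndarray     # unit vector of the planned tool trajectory
    surface_patch: np.ndarray  # image/height map of the bone surface around the point
    tool_type: str             # e.g., the drill bit type/size the trajectory was planned for
```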
As described above, the received surgical plan may show or otherwise contain information regarding one or more first features of the anatomical tissue (e.g., a portion of a vertebra or other bone) surrounding the target entry point. The one or more first features of the anatomical tissue may include one or more of contours, geometric forms, surfaces, edges, gradients, etc. of the anatomical tissue surrounding the target entry point. The surgical plan may include, for example, images depicting varying amounts of the anatomy around the target entry point. For example, an image may depict a percentage (e.g., 1%, 2%, 5%, 10%, 25%, 50%, 75%, 90%, 100%) of the anatomical tissue surrounding the target entry point, depending on, for example, the surgical procedure, the anatomy type, etc. In some embodiments, the anatomical tissue may be an organ, and the image may depict only a portion thereof that is relevant to the surgical procedure. In some embodiments, the anatomical tissue may be located near the target entry point. For example, the anatomical tissue may be a known distance from the target entry point. In still further embodiments, the anatomical tissue may appear as, or be, a prominent, distinct, or otherwise identifiable landmark, marking, shape, and/or shadow at or near the target entry point in the received image.
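As one hedged example of how a gradient feature could be derived from such an image patch (a sketch only, not the disclosed feature extraction), the patch can be treated as a 2D intensity or height map and its gradient magnitude computed:

```python
import numpy as np

def surface_gradient_features(surface_patch: np.ndarray) -> np.ndarray:
    """Compute a gradient-magnitude map of a bone-surface patch; high values
    highlight edges and contour changes that can later be compared against
    the intraoperative view."""
    gy, gx = np.gradient(surface_patch.astype(float))  # per-axis gradients
    return np.hypot(gx, gy)                            # gradient magnitude
```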
In some embodiments, the surgical plan may include one or more images of the anatomical tissue, which may be captured before the surgical procedure (e.g., preoperatively) and stored in the system (e.g., system 100) and/or one or more components thereof (e.g., database 130, memory 106, etc.). In some embodiments, the surgical plan may include information related to simulated anatomy based on data obtained from preoperative imaging (e.g., CT scan, X-ray, etc.) and/or patient data (e.g., patient history).
In some embodiments, the anatomical tissue may be a vertebra and the target entry point may be a location on the vertebra. The surgical plan may include images of the vertebra and/or of the region immediately adjacent to the vertebra, or images of the contours of multiple surfaces of the vertebra near the target entry point. As previously described, one or more images in the surgical plan may include a rendered or computerized model of the anatomy surrounding the target entry point. In some embodiments, one or more algorithms may be used to generate an image of the anatomical tissue. The one or more algorithms may be artificial intelligence (AI) or machine learning (e.g., deep learning) algorithms that generate a model of the anatomy after receiving as input, for example, a patient medical record, a type of planned procedure, the surgical instruments required for the procedure, a previously captured image or scan of the anatomy (e.g., CT scan, X-ray, MRI scan, ultrasound image, etc.), a combination thereof, and the like.
The method 300 further includes positioning the imaging device near the identified entry point of the anatomical tissue (step 308). Positioning may include controlling a robotic arm (e.g., robotic arm 116) to which the imaging device is attached. The imaging device may be any of the imaging devices discussed herein (e.g., imaging device 112), but may also be an imaging device not specifically mentioned herein. The imaging device may be controlled, navigated, or otherwise manipulated using a computing device such as computing device 102, a processor such as processor 104, a navigation system such as navigation system 118, and/or a combination thereof. The imaging device may be oriented with respect to the identified entry point of the anatomical tissue. For example, the imaging device may be aligned in a parallel orientation relative to the planned trajectory of the distal end of the surgical tool. In some embodiments, the imaging device may be oriented in the same pose as the surgical tool will be in when performing the procedure. In other words, the imaging device may be positioned such that it can capture an image along a trajectory similar or identical to that of the distal end of the surgical tool when the surgical procedure is performed with respect to the identified entry point.
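A minimal sketch of how a pose parallel to the planned trajectory could be derived for the imaging device (the pose convention, units, and standoff value are assumptions for illustration only):

```python
import numpy as np

def pose_parallel_to_trajectory(entry_point: np.ndarray,
                                trajectory: np.ndarray,
                                standoff: float = 0.05) -> tuple[np.ndarray, np.ndarray]:
    """Return a (position, viewing-direction) pair for the imaging device,
    backed off `standoff` units from the identified entry point and looking
    along the planned tool trajectory."""
    direction = trajectory / np.linalg.norm(trajectory)  # normalize the trajectory
    position = entry_point - standoff * direction        # step back along the trajectory
    return position, direction
```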
In some embodiments, the imaging device may be positioned partially or completely within the patient, an MIS port, an incision, combinations thereof, and the like. In one embodiment, the imaging device may be an ultrasound probe positioned within an MIS port, which in turn may be positioned within the incision. In this embodiment, the MIS port may be filled with a fluid (e.g., water, saline, combinations thereof, etc.) such that the fluid fills any otherwise empty space between the ultrasound probe and the anatomical element being imaged, which may be necessary to obtain an ultrasound image of the target entry point. The particular location of the MIS port is in no way limiting, and the MIS port may be positioned or otherwise disposed on different areas of the patient depending on, for example, the type of surgical procedure being performed, the available surgical tools, combinations thereof, and the like. For example, the MIS port may be located over a superior portion of an inferior vertebral body (e.g., a lumbar vertebra), i.e., the portion closer to the patient's head. The positioning of the MIS port then allows a surgical tool operated by a surgeon or robot to access an entry point located in the superior portion of the inferior vertebral body to perform a surgical procedure on the inferior vertebral body. The MIS port may be used preoperatively (e.g., by an imaging device) and/or intraoperatively (e.g., by a surgical tool) to allow visual and/or physical access to the inferior vertebral body.
The identified entry point may include or correspond to pose (e.g., position and orientation) information related to the surgical tool used to perform the surgical procedure. For example, the identified entry point may specify or correspond to a specified pose of the surgical tool and/or components thereof (e.g., including the distal end of a drill bit) during performance of the surgical procedure. The system (e.g., system 100) and/or components thereof (e.g., computing device 102) may use the pose information to align the imaging device with respect to the identified entry point, allowing the imaging device to capture images along the same trajectory (or a trajectory parallel to the trajectory) that the surgical tool will use. The identified entry point may be determined by the surgeon based on a preoperative plan or a surgical plan. In some embodiments, the identified entry point may be based on a location of the MIS port.
The method 300 further includes receiving a second image of one or more second features of the anatomical tissue surrounding the identified entry point (step 312). The one or more second features may be or include one or more of contours, geometric forms, surfaces, edges, gradients, etc. of the anatomical tissue surrounding the identified entry point. The second image depicting the one or more second features of the anatomical tissue surrounding the identified entry point may be captured by an imaging device (e.g., imaging device 112) after the imaging device is positioned near the identified entry point in step 308. In some embodiments, the imaging device may be positioned and instructed (e.g., by the processor 104) to capture one or more images of the surface of the anatomical tissue based on the planned or actual pose and/or trajectory of the surgical tool. The captured image may be, for example, a CT scan, an MRI scan, or an ultrasound image. In some embodiments, the second image may be processed by an algorithm (e.g., image processing algorithm 120) to filter the captured image (e.g., to remove artifacts) and/or enhance the captured image (e.g., by enhancing contrast between the anatomical tissue and the background). The captured image may depict the surface of the anatomical tissue. For example, the anatomical tissue may be a vertebra, and the captured image may show the surface of the vertebra around the identified entry point. The captured image may be used by a system (e.g., system 100) and/or components thereof (e.g., computing device 102), stored in a database (e.g., database 130), and/or shared with components external to the system via a cloud (e.g., cloud 134).
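One way the captured image might be filtered and contrast-enhanced before comparison is sketched below; the specific median filter, percentile stretch, and parameter values are assumptions chosen for illustration, not the disclosed image processing algorithm 120:

```python
import numpy as np
from scipy import ndimage

def preprocess_captured_image(image: np.ndarray) -> np.ndarray:
    """Lightly denoise the intraoperative image and stretch its contrast so
    the bone surface stands out from the background."""
    smoothed = ndimage.median_filter(image.astype(float), size=3)  # suppress speckle/artifacts
    lo, hi = np.percentile(smoothed, (2, 98))                      # robust intensity range
    return np.clip((smoothed - lo) / (hi - lo + 1e-9), 0.0, 1.0)   # normalize to [0, 1]
```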
The method 300 further includes determining whether the one or more first features match the one or more second features and quantifying any one or more differences between the one or more first features and the one or more second features (step 316). The method 300 may utilize one or more algorithms (e.g., the comparison algorithm 126) to compare the one or more first features to the one or more second features and determine whether they are identical. In the event that the one or more first features are not identical to the one or more second features, the method 300 may utilize one or more algorithms (e.g., quantization algorithm 122) to generate a quantity reflecting the difference between the one or more first features and the one or more second features. For example, the algorithm may receive a comparison between the relative positions of the one or more first features and the one or more second features, and may quantify the difference based on the compared positions. The quantified difference may be percentage-based (e.g., 0.01% difference, 0.05% difference, 0.1% difference, 0.5% difference, 1% difference, etc.). In some embodiments, the percentage-based difference may be based on the percentage of each of the one or more first features that matches (e.g., aligns with, overlaps with, or shares the same coordinates as) the corresponding one of the one or more second features. Where percentage-based differences are determined for multiple features, a weighted average of the percentage-based differences may be used, and/or the percentage-based difference found for each feature may additionally or alternatively be reported when quantifying the difference between the two sets of features. In some embodiments, a 100% difference may indicate that the one or more first features of the anatomical tissue do not share a common feature, do not share a reference point, and/or do not overlap with the one or more second features. In some embodiments, the quantified difference may be a positional or angular difference (e.g., 0.1 mm difference, 0.2 mm difference, 0.5 mm difference, 1 mm difference, 0.1 degree difference, 0.2 degree difference, 1 degree difference, etc.). The positional difference may be based on relative coordinate differences between the one or more first features and the corresponding one or more second features in one or more directions and/or at one or more angles (e.g., positional differences and/or angular differences along the length, width, and/or height of the surface of the anatomical tissue). The type and number of features compared is not limited in any way, and various features may be compared by the method 300. Examples of features include, but are not limited to, one or more gradients (e.g., gradients of one or more surfaces) and/or one or more anatomical landmarks (e.g., shapes, contours, and/or forms depicted in the first image and/or the second image).
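A hedged sketch of one possible quantification, assuming corresponding feature points have already been paired (the tolerance and the returned fields are illustrative, not the disclosed quantization algorithm 122):

```python
import numpy as np

def quantify_difference(first_features: np.ndarray,
                        second_features: np.ndarray,
                        tolerance_mm: float = 0.5) -> dict:
    """Compare corresponding feature points (N x 3 arrays, same ordering) and
    quantify how far apart they are: the percentage of points that fall within
    `tolerance_mm` of their counterpart, and the mean positional difference."""
    distances = np.linalg.norm(first_features - second_features, axis=1)
    return {
        "percent_match": 100.0 * float(np.mean(distances <= tolerance_mm)),
        "mean_difference_mm": float(np.mean(distances)),
    }
```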
The method 300 further includes generating a confirmation that the identified entry point matches the target entry point when the one or more first features match the one or more second features (step 320). Even when step 316 results in quantification of one or more differences between the one or more first features and the one or more second features, a confirmation may be generated if, for example, the quantified differences do not exceed (or fall below, as the case may be) a predetermined threshold. In other words, if 95% similarity between the first features and the second features is required to produce a determination that the one or more first features match the one or more second features, a quantification showing 96% similarity between the one or more first features and the one or more second features will be sufficient to produce a match determination, which in turn will result in a confirmation that the identified entry point matches the target entry point.
In some embodiments, step 320 may be the same as or similar to step 220 of method 200. The generating may also include operating the system (e.g., system 100) and/or one or more components thereof (e.g., surgical tool 140) based on the received confirmation. For example, the system and/or one or more components thereof may be configured not to proceed with the surgical procedure, or at least not to proceed with the surgical task involving entry into the anatomical tissue, until a confirmation is received. In some embodiments, the method 300 may perform step 320 simultaneously or nearly simultaneously with a step involving generating an alert (see, e.g., step 324 below). In such embodiments, the system and/or one or more components thereof may be configured to receive either a confirmation or an alert, and may operate in different manners based on the received signal (e.g., based on whether the system or component received a confirmation or an alert). For example, the surgical tool 140 may be instructed (e.g., controlled by a processor) to wait to receive a confirmation or an alert before drilling into the anatomy. The surgical tool 140 may be locked, idle, or otherwise unable to drill or apply power to the drill bit until a confirmation is received.
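Because steps 320 and 324 can be viewed as the two outcomes of a single decision, that decision might be sketched as follows (the signal names and the 95% default are assumptions for illustration only):

```python
from enum import Enum

class EntryPointSignal(Enum):
    CONFIRMATION = "confirmation"  # identified entry point matches the target entry point
    ALERT = "alert"                # identified entry point does not match the target entry point

def decide_entry_point(similarity_percent: float,
                       threshold_percent: float = 95.0) -> EntryPointSignal:
    """Return a confirmation when the quantified similarity meets or exceeds
    the threshold; otherwise return an alert."""
    if similarity_percent >= threshold_percent:
        return EntryPointSignal.CONFIRMATION
    return EntryPointSignal.ALERT
```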
The method 300 further includes generating an alert that the identified entry point does not match the target entry point when the one or more first features do not match the one or more second features (step 324). The alert may be based on the quantified comparison between the anatomy surrounding the target entry point and the anatomy surrounding the identified entry point. For example, if the quantification indicates that one or more features of the anatomy surrounding the identified entry point do not match the corresponding one or more features of the anatomy surrounding the target entry point (e.g., a predetermined threshold for identifying a match has not been met), the identified entry point and the target entry point may be deemed not to match. The alert may be sent to a system (e.g., system 100) and/or one or more components thereof (e.g., computing device 102) to indicate that the identified entry point may not match the target entry point. The alert may also indicate to the system, components of the system, the surgeon, etc. that the surgical tool is not properly aligned with the target entry point. The alert may be an audible alert, a visual alert, or a combination thereof.
As discussed above, in some embodiments, the method 300 may implement a threshold below which an alert is generated. The threshold may be a percentage threshold (e.g., 99%, 95%, 90%, etc.) below which the system determines that the identified entry point and the target entry point do not match and generates an alert. For example, where the threshold is 95%, any quantified similarity below 95% causes the system to generate an alert. In this example, a quantified similarity below 95% may indicate that the identified entry point and the target entry point do not match to a confidence level (e.g., 95% similarity or match) sufficient to allow the surgical procedure to continue. In some embodiments, the generation of an alert may temporarily limit certain functions of the system (e.g., system 100). For example, the generation of an alert may limit or lock the use of a surgical tool (e.g., surgical tool 140) until the alert is overridden. In this example, the surgeon can manually override the lock on the surgical tool.
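A sketch of how an alert could temporarily lock the surgical tool until a confirmation arrives or the surgeon manually overrides the lock; the class and its interface are assumptions for illustration, not the disclosed control logic of surgical tool 140:

```python
class ToolInterlock:
    """Keeps a surgical tool disabled until a confirmation is received; an
    alert leaves the tool locked until the surgeon manually overrides."""

    def __init__(self) -> None:
        self._enabled = False

    def on_confirmation(self) -> None:
        self._enabled = True   # matching entry point: drilling may proceed

    def on_alert(self) -> None:
        self._enabled = False  # mismatch: keep the tool locked

    def manual_override(self) -> None:
        self._enabled = True   # surgeon explicitly overrides the lock

    def may_drill(self) -> bool:
        return self._enabled
```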
The method 300 also includes performing a surgical procedure on the anatomical tissue using the surgical tool (step 328). A surgical tool (e.g., surgical tool 140) may be aligned with the identified entry point along a planned trajectory (which may be parallel or identical to the trajectory along which the image of step 312 was taken) and, upon receipt of a confirmation (e.g., an indication that the identified entry point matches the target entry point), activated to drill, ream, tap, shave, or otherwise cut into the anatomy. In some embodiments, the surgical tool may be positioned near or in the MIS port. The surgical tool can be positioned in the MIS port such that the distal end of the surgical tool can contact the anatomical tissue without the proximal end entering the patient. In some embodiments, the surgical tool may be attached to a robot (e.g., robot 114) and/or a robotic arm (such as robotic arm 116), which may perform or assist in the surgical procedure according to the surgical plan. In some embodiments, the surgical tool may not begin the surgical procedure (or at least not begin a surgical task involving entry into the anatomical tissue) until the system and/or components thereof receive a confirmation (e.g., a signal indicating that the target entry point matches the identified entry point, as generated in step 320). In one embodiment, the surgical tool may drill into a vertebra. During steps 308 and/or 312 of method 300, the trajectory of the surgical tool may be parallel to the trajectory of the imaging device. In other words, the trajectory of the surgical tool may be a trajectory that has been confirmed to match the predetermined trajectory identified in the surgical plan of step 304, for example. The surgical tool may drill into the vertebra to: insert a screw (e.g., a pedicle screw); insert a fiducial (e.g., a device that can be used with imaging techniques such as fluoroscopy to capture additional patient information); relieve pressure; prepare the patient for an additional surgical procedure or surgical task; combinations of the above; and/or for any other purpose. The confirmation may indicate to the surgical tool (e.g., to a processor within or controlling the surgical tool) that the surgical tool is properly aligned with the target entry point, reducing the probability of negative consequences associated with improper tool alignment, such as tool skiving, drilling at an improper angle, over-cutting or drilling through anatomical material, and so forth.
The present disclosure encompasses embodiments of the method 300 that include more or fewer steps than those described above, and/or one or more steps that differ from the steps described above.
As described above, the present disclosure encompasses methods having fewer than all of the steps identified in Figs. 2 and 3 (and corresponding descriptions of methods 200 and 300), as well as methods including additional steps beyond those identified in Figs. 2 and 3 (and corresponding descriptions of methods 200 and 300). The present disclosure also encompasses methods comprising one or more steps from one method described herein and one or more steps from another method described herein. Any of the correlations described herein may be or include registration or any other correlation.
The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing detailed description, for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. Features of the aspects, embodiments, and/or configurations of the disclosure may be combined in alternative aspects, embodiments, and/or configurations other than those discussed above. This manner of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
Furthermore, while the foregoing has included descriptions of one or more aspects, embodiments and/or configurations, and certain variations and modifications, other variations, combinations, and modifications are within the scope of this disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.

Claims (30)

1. A method for verifying a bone entry point, the method comprising:
receiving a surgical plan defining a target bone entry point and a first portion of a bone surface at least partially surrounding the target bone entry point;
positioning an imaging device near the identified bone entry point;
receiving an image of a second portion of the bone surface at least partially surrounding the identified bone entry point from the imaging device;
comparing at least one feature of the first portion with at least one feature of the second portion to quantify the similarity therebetween; and
generating a confirmation when the quantified similarity between the first portion and the second portion exceeds a threshold.
2. The method of claim 1, wherein the confirmation indicates that the identified bone entry point matches the target bone entry point.
3. The method of claim 1, wherein the confirmation causes a surgical tool to drill into the identified bone entry point along a planned bone entry trajectory.
4. The method of claim 1, wherein each of the at least one feature of the first portion and the at least one feature of the second portion is a surface gradient.
5. The method of claim 1, wherein each of the at least one feature of the first portion and the at least one feature of the second portion is an anatomical landmark.
6. The method of claim 1, wherein the confirmation indicates a level of statistical certainty that the identified bone entry point matches the target bone entry point.
7. The method of claim 1, wherein the threshold is a percent similarity between the at least one feature of the first portion and the at least one feature of the second portion.
8. The method of claim 7, wherein the threshold is at least ninety-nine percent.
9. The method of claim 1, wherein the imaging device is an ultrasound probe.
10. The method of claim 9, wherein the ultrasound probe is positioned in a Minimally Invasive Surgical (MIS) port.
11. The method of claim 10, wherein the MIS port is filled with a solution.
12. The method of claim 11, wherein the solution is water or saline.
13. The method of claim 9, wherein the MIS port is located on an upper portion of an inferior vertebral body.
14. The method of claim 1, wherein the imaging device is an optical imaging device.
15. The method of claim 1, wherein the target bone is a vertebra.
16. The method of claim 1, the method further comprising:
a surgical tool is used to drill into the identified bone entry point.
17. The method of claim 1, wherein positioning the imaging device near the identified bone entry point comprises orienting the imaging device substantially parallel to a planned bone entry trajectory.
18. A system for verifying entry into an anatomical tissue, the system comprising:
a processor; and
a memory storing instructions for execution by the processor, the instructions, when executed by the processor, causing the processor to:
receive a surgical plan defining a target entry point of an anatomical tissue and comprising a first image of a first portion of the anatomical tissue in the vicinity of the target entry point;
cause an imaging device to be positioned near the identified entry point of the anatomical tissue;
receive, from the imaging device, a second image of a second portion of the anatomical tissue near the identified entry point; and
cause the first image and the second image to be rendered to a user interface.
19. The system of claim 18, wherein the anatomical tissue is bone.
20. The system of claim 18, wherein the imaging device is positioned in a surgical incision.
21. The system of claim 20, wherein the surgical incision comprises a MIS port.
22. The system of claim 18, wherein the image of the second portion is an ultrasound image.
23. The system of claim 18, wherein the instructions further cause the processor to:
compare the first portion to the second portion.
24. The system of claim 23, wherein the comparing comprises quantifying a difference between at least one characteristic of the first portion of the anatomical tissue and at least one characteristic of the second portion of the anatomical tissue.
25. The system of claim 18, wherein the anatomical tissue is a vertebra.
26. The system of claim 18, wherein the instructions further cause the processor to:
generate a confirmation that the identified entry point matches the target entry point when the quantified differences between the at least one characteristic of the first portion of the anatomical tissue and the at least one characteristic of the second portion of the anatomical tissue are below a threshold.
27. The system of claim 26, wherein positioning the imaging device near the identified entry point comprises orienting the imaging device substantially parallel to a planned entry trajectory.
28. A system, the system comprising:
an imaging device;
A processor; and
a memory storing instructions for execution by the processor, the instructions, when executed by the processor, causing the processor to:
receive a surgical plan defining a target bone entry point and a first bone contour proximate the target bone entry point;
position the imaging device near the identified bone entry point;
cause the imaging device to capture an image of a second bone contour proximate the identified bone entry point;
determine whether the first bone contour matches the second bone contour based on a predetermined threshold;
generate a confirmation that the identified bone entry point matches the target bone entry point when the first bone contour matches the second bone contour; and
generate an alert that the identified bone entry point does not match the target bone entry point when the first bone contour does not match the second bone contour.
29. The system of claim 28, wherein the imaging device is an ultrasound probe.
30. The system of claim 29, wherein the imaging device is oriented substantially parallel to a planned bone entry trajectory.

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/144,118 2021-02-01
US17/575,245 2022-01-13
US17/575,245 US20220241016A1 (en) 2021-02-01 2022-01-13 Bone entry point verification systems and methods
PCT/IL2022/050128 WO2022162670A1 (en) 2021-02-01 2022-01-30 Bone entry point verification systems and methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination