CN116762095A - Registration of time-spaced X-ray images

Info

Publication number: CN116762095A
Application number: CN202180084396.0A
Authority: CN (China)
Prior art keywords: image, rigid, time, homographies, transformation
Legal status: Pending
Other languages: Chinese (zh)
Inventors: A·列夫-托夫, S·佩列兹, Y·本兹里汉姆, M·肖汉姆
Original and current assignee: Mazor Robotics Ltd
Priority claimed from US 17/526,935 (published as US 2022/0189047 A1)
Application filed by Mazor Robotics Ltd

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method according to one embodiment of the present disclosure includes: receiving a first image of a patient anatomy, the first image generated at a first time and depicting a plurality of rigid units; receiving a second image of the patient anatomy, the second image generated at a second time subsequent to the first time and depicting the plurality of rigid units; determining, for each rigid unit of the plurality of rigid units, a transformation from the first image to the second image to produce a set of transformations; computing a homography for each transformation in the set of transformations to produce a set of homographies; and using the set of homographies to identify a common portion of each transformation attributable to a camera pose change and a separate portion of each transformation attributable to a rigid unit pose change.

Description

Registration of time-spaced X-ray images
Technical Field
The present technology relates generally to surgical imaging and navigation, and more particularly to tracking anatomical elements before, during, and after surgery.
Background
Imaging may be used by medical providers for diagnostic and/or therapeutic purposes. The patient anatomy may change over time, particularly after placement of a medical implant in the patient anatomy. Registration of one image with another image enables changes in anatomical location to be identified and quantified.
Disclosure of Invention
Exemplary aspects of the present disclosure include:
a method, the method comprising: receiving a first image of a patient anatomy, the first image generated at a first time and depicting a plurality of rigid units, each rigid unit of the plurality of rigid units being movable relative to at least one other rigid unit of the plurality of rigid units; receiving a second image of the patient anatomy, the second image generated at a second time subsequent to the first time and depicting the plurality of rigid units; determining, for each rigid unit of the plurality of rigid units, a transformation from the first image to the second image to produce a set of transformations; and using the set of transformations to identify a common portion of each transformation attributable to a camera pose change and a separate portion of each transformation attributable to a rigid unit pose change.
Any of the aspects herein further comprising: registering the second image with the first image based on the identified common portion of each transformation.
Any of the aspects herein further comprising: updating a preoperative model based on the separate portion of each transformation.
Any of the aspects herein further comprising: updating a registration of one of a robot space or a navigation space with an image space based on one of the common portion of each transformation or the separate portion of each transformation.
Any of the aspects herein, wherein each transformation is a homography, and the set of transformations is a set of homographies.
Any of the aspects herein, wherein the identifying step utilizes clustering to separate transformations of the set of transformations that result from camera pose changes.
Any of the aspects herein, wherein the registering step comprises spatially correlating both the first image and the second image with a common vector.
Any of the aspects herein, wherein the first image is a preoperative image.
Any of the aspects herein, wherein at least one of the first image and the second image is an intra-operative image.
Any of the aspects herein, wherein computing the transformation includes identifying at least four points on each rigid unit of the plurality of rigid units as depicted in the first image and at least four corresponding points on each rigid unit of the plurality of rigid units as depicted in the second image.
Any of the aspects herein, wherein the first image and the second image are two-dimensional.
Any of the aspects herein, wherein the first image and the second image are three-dimensional.
Any of the aspects herein, wherein the plurality of rigid units comprises a plurality of vertebrae of a patient's spine.
Any of the aspects herein, wherein the plurality of rigid units comprises at least one implant.
Any of the aspects herein further comprising: quantifying a change in pose of at least one rigid unit of the plurality of rigid units from the first time to the second time.
A method of correlating images taken at different times, the method comprising: segmenting each rigid unit of a plurality of rigid units in a first image of the plurality of rigid units taken at a first time and in a second image of the plurality of rigid units taken at a second time subsequent to the first time; computing a homography for each rigid unit of the plurality of rigid units to generate a set of homographies, each homography relating the rigid unit as depicted in the first image to the rigid unit as depicted in the second image; sorting the set of homographies into homography clusters based on at least one characteristic; selecting a homography cluster based on at least one parameter; and projecting each rigid unit of the plurality of rigid units as depicted in the second image onto the first image using an average of the selected homography cluster to produce a projected image.
Any of the aspects herein, wherein the second time is at least one month after the first time.
Any of the aspects herein, wherein the second time is at least one year after the first time.
Any of the aspects herein, wherein the at least one parameter is a silhouette measure.
Any of the aspects herein, wherein at least one rigid unit of the plurality of rigid units is an implant.
Any of the aspects herein, wherein the plurality of rigid units comprises a plurality of vertebrae of a patient's spine.
Any of the aspects herein further comprising: measuring at least one of an angle or a distance corresponding to a change in pose of one of the plurality of rigid units as reflected in the projected image.
Any of the aspects herein further comprising: removing from the set of homographies any homography affected by one or more of a compression fracture or an osteophyte depicted in the second image but not the first image.
Any of the aspects herein, wherein calculating the homography for each rigid unit of the plurality of rigid units includes identifying edge points of the vertebral endplates.
A system for comparing images, the system comprising: at least one processor; and a memory. The memory stores instructions for execution by the processor that, when executed, cause the processor to: identify a plurality of rigid units in a first image generated at a first time; identify the plurality of rigid units in a second image generated at a second time subsequent to the first time; calculate a homography for each of the plurality of rigid units using the first image and the second image to generate a set of homographies; and determine, based on the set of homographies: a first pose change of one or more rigid units of the plurality of rigid units in the second image relative to the first image, the first pose change attributable to a change in imaging device position relative to the plurality of rigid units from the first image to the second image; and a second pose change of at least one rigid unit of the plurality of rigid units in the second image relative to the first image, the second pose change not attributable to the imaging device position change.
Any of the aspects herein, wherein the memory stores additional instructions for execution by the processor, the additional instructions when executed further causing the processor to: register the second image with the first image based on the first pose change.
Any of the aspects herein, wherein the memory stores additional instructions for execution by the processor, the additional instructions when executed further causing the processor to: update a preoperative model based on the second pose change.
Any of the aspects herein, wherein the memory stores additional instructions for execution by the processor, the additional instructions when executed further causing the processor to: update a registration of one of a robot space or a navigation space with an image space based on one of the first pose change or the second pose change.
The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the technology described in this disclosure will be apparent from the description and drawings, and from the claims.
The phrases "at least one," "one or more," and/or "are open-ended expressions that have both connectivity and separability in operation. For example, the expressions "at least one of A, B and C", "at least one of A, B or C", "one or more of A, B and C", "one or more of A, B or C" and "one of A, B and/or C" mean a alone, B alone, C, A alone and B together, a alone and C together, B alone and C together, or A, B alone and C together. When each of A, B and C in the above description refers to an element such as X, Y and Z or an element such as X 1 -X n 、Y 1 -Y m And Z 1 -Z o The phrase is intended to refer to a single element selected from X, Y and Z, elements selected from the same class (e.g., X 1 And X 2 ) And elements selected from two or more classes (e.g., Y 1 And Z o ) Is a combination of (a) and (b).
The term "a (a/an)" entity refers to one or more of that entity. Thus, the terms "a/an", "one or more", and "at least one" may be used interchangeably herein. It should also be noted that the terms "comprising" and "having" may be used interchangeably.
The foregoing is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is not an extensive nor exhaustive overview of the disclosure and its various aspects, embodiments, and configurations. It is intended to neither identify key or critical elements of the disclosure nor delineate the scope of the disclosure, but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As should be appreciated, other aspects, embodiments, and configurations of the present disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
Many additional features and advantages of the invention will become apparent to those skilled in the art upon consideration of the description of embodiments presented below.
Drawings
The accompanying drawings are incorporated in and form a part of this specification to illustrate several examples of the present disclosure. Together with the description, these drawings serve to explain the principles of the disclosure. The drawings only show preferred and alternative examples of how the disclosure may be made and used, and these examples should not be construed as limiting the disclosure to only the examples shown and described. Additional features and advantages will be made apparent from the following more detailed description of various aspects, embodiments and configurations of the present disclosure, as illustrated by the accompanying drawings referenced below.
FIG. 1 is a block diagram of a system according to at least one embodiment of the present disclosure;
FIG. 2 is a series of X-ray images of a patient's anatomy taken at different times;
FIG. 3 is a flow chart of a method according to at least one embodiment of the present disclosure;
FIG. 4 is a flow chart of another method in accordance with at least one embodiment of the present disclosure; and
Fig. 5 is a flow chart of another method in accordance with at least one embodiment of the present disclosure.
Detailed Description
It should be understood that the various aspects disclosed herein may be combined in different combinations than specifically presented in the specification and drawings. It should also be appreciated that certain acts or events of any of the processes or methods described herein can be performed in a different order, and/or can be added, combined, or omitted entirely, depending on the example or implementation (e.g., not all of the described acts or events may be required to implement the disclosed techniques in accordance with different implementations of the disclosure). Moreover, although certain aspects of the disclosure are described as being performed by a single module or unit for clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a computing device and/or a medical device.
In one or more examples, the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include non-transitory computer-readable media corresponding to tangible media, such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
The instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessor), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor" as used herein may refer to any of the foregoing structures or any other physical structure suitable for implementation of the described techniques. In addition, the present techniques may be fully implemented in one or more circuits or logic elements.
Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including" or "having" and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the present disclosure may use examples to illustrate one or more aspects thereof. The use or listing of one or more examples (which may be indicated by "for example)", "by way of example", "e.g. (e.g.)," such as "or similar language, is not intended and does not limit the scope of the present disclosure unless expressly stated otherwise.
Images of a portion of a patient's anatomy taken at different points in time may reflect considerable structural variability of the patient's anatomy. This is especially true of pre- and post-operative images and/or of images spaced apart by longer periods of time (including months or years). For example, the spinal structure of a patient after insertion of a spinal rod may differ significantly from the spinal structure of the patient prior to insertion of the rod. In addition, the patient's spine may experience significant deformation in the weeks, months, and years after insertion of the rod. It is desirable to identify and quantify the change in pose of one or more anatomical elements from a first time when a first image is taken to a second time when a second image is taken.
Taking structural changes of the patient's spine as an example, several factors make it difficult to directly compare periodic measurements of such changes. Such factors may include: a change in pose of the camera or other imaging device that generates the first image and the second image; a change in the pose of the patient's anatomy between capture of the first image and the second image (e.g., the first image may be taken while the patient is in a prone or supine position, and the second image may be taken while the patient is standing); noise in the source marker points due to noisy images and/or segmentation errors; and non-rigid transformation of the spine over time or before and after surgery.
Embodiments of the present disclosure utilize corresponding points along the perimeter of each vertebra depicted in the first and second images taken at times t1 and t2, respectively. For example, the edge points of the vertebral endplates in the AP or LT projections taken at any two times t1 and t2 may be used. These points may be identified manually or automatically.
Because of the non-rigid transformation of the spine over time or before and after surgery, the transformation between times t1 and t2 cannot be calculated directly. In other words, simply comparing the change in the overall spinal structure from time t1 to time t2 does not provide accurate results, because the individual vertebrae of the spinal column can move and rotate in different ways. Instead, the segmental rigidity of the spine allows the problem to be addressed at the level of individual vertebrae. Because the motion of each vertebra itself can be assumed to be rigid, a transformation can be calculated for each vertebra. Furthermore, because the vertebral periphery may be represented as a plane (e.g., an endplate or side, in a lateral or anterior projection), a homography transform H is a sufficiently useful representation.
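By way of illustration only (using standard projective-geometry notation rather than notation taken from the present disclosure), the planar assumption means that the homogeneous image coordinates of corresponding points on a single vertebra at times t1 and t2 are related, up to scale, by a 3x3 homography:

```latex
\begin{pmatrix} x_{2} \\ y_{2} \\ 1 \end{pmatrix}
\;\simeq\;
H \begin{pmatrix} x_{1} \\ y_{1} \\ 1 \end{pmatrix},
\qquad
H = \begin{pmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{pmatrix}
```

Because H is defined only up to scale, it has eight degrees of freedom, and each point correspondence supplies two constraints, which is why at least four corresponding points per vertebra are used below.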
Then, according to embodiments of the present disclosure, at least four corresponding points in each image are used to calculate the homography H parameters for each vertebra. To reduce noise in the computation, more points can be used if they are available; standard computer vision methods can be used to automatically refine the marked corner points; and points interpolated along the marker lines may be used.
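As a concrete sketch of the per-vertebra computation just described, the following example uses OpenCV's findHomography routine; the point coordinates are hypothetical hand-marked endplate corners, and neither the specific library call nor the RANSAC option is mandated by the present disclosure.

```python
import numpy as np
import cv2

# Hypothetical corresponding points (in pixels) marked on one vertebra,
# e.g., the edge points of its endplates, in the image at t1 and the image at t2.
pts_t1 = np.array([[112, 340], [198, 338], [115, 395], [201, 392]], dtype=np.float32)
pts_t2 = np.array([[120, 351], [205, 347], [124, 406], [209, 401]], dtype=np.float32)

# Four points determine the homography exactly; if more points are marked,
# RANSAC (or a least-squares fit) suppresses noisy or mislabeled points.
H, inliers = cv2.findHomography(pts_t1, pts_t2, method=cv2.RANSAC,
                                ransacReprojThreshold=3.0)
print(H)  # 3x3 matrix relating this vertebra at t1 to the same vertebra at t2
```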
If the motion of the spine as a whole were rigid, all of the calculated homographies {H} would be more or less identical. However, because the individual vertebrae move somewhat relative to one another, the calculated homographies are expected to differ. In addition, noise in the marked points will yield noisier homographies.
In accordance with the foregoing, the set of homographies {H} may be clustered in a transform space (whether 9-dimensional or reduced) according to predetermined characteristics. The most coherent cluster may be selected, and/or the clusters/homographies may be filtered according to other criteria. The average of the resulting cluster can then be regarded as the homography H' between times t1 and t2. All vertebrae can then be projected from t2 onto t1 using H', and the measurements/features can be calculated in a more comparable manner.
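One possible realization of the clustering just described is sketched below. It assumes the per-vertebra homographies have already been computed, flattens each into a 9-dimensional vector, and uses scikit-learn's agglomerative clustering together with silhouette scores to pick the most coherent cluster; none of these particular choices are prescribed by the present disclosure.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_samples

def estimate_common_homography(homographies, n_clusters=2):
    """Cluster the per-vertebra homographies {H} and average the most coherent cluster.

    homographies: list of 3x3 arrays, one per rigid unit.
    Returns H', an estimate of the camera-pose (common) homography between t1 and t2.
    """
    # Homographies are defined only up to scale, so normalize before comparing them.
    X = np.stack([(H / H[2, 2]).ravel() for H in homographies])  # shape (n, 9)

    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)

    # Choose the cluster whose members agree best (highest mean silhouette value).
    sil = silhouette_samples(X, labels)
    best = max(set(labels), key=lambda c: sil[labels == c].mean())

    H_common = X[labels == best].mean(axis=0).reshape(3, 3)
    return H_common / H_common[2, 2]
```

Homographies known to be affected by, e.g., a compression fracture or osteophyte could be removed from the input list before this step, as discussed below.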
Embodiments of the present disclosure are based on the following assumptions: the change in bone structure over time is less pronounced than the change in soft tissue over time. Even so, compression fractures and osteophyte (bony spur) changes can interfere with the successful utilization of embodiments of the present disclosure. In the event that one or more homographies are affected by a compression fracture and/or osteophyte (and/or other changes in shape of the rigid anatomical element), it may be desirable to filter out such homographies before the other homographies are averaged or otherwise utilized.
For registration of two-dimensional images, at least four corresponding points are required, while for registration of three-dimensional images, at least eight corresponding points are required. In some embodiments, the implant itself may be used as a source of corresponding points in place of or in addition to the vertebral endplates or other anatomical features. For example, the rod may provide two corresponding points (e.g., one point at each end of the rod) such that four corresponding points between two images may be obtained with two rods, one rod and one screw, or even two screws. Of course, where one or more implants are to be used to define one or more corresponding points, only pairs of images depicting the one or more implants may be registered with each other. Therefore, images taken prior to insertion of such implants cannot be used in these embodiments. Even so, the use of implants to define corresponding points advantageously exploits the fact that: unlike some anatomical units, the implant structure generally does not change over time.
Embodiments of the present disclosure advantageously enable long-term registration, i.e., registration of two images generated weeks, months, or even years apart. Embodiments of the present disclosure also advantageously utilize the segmental rigidity of the spine (and/or of other anatomical structures composed of a plurality of individual rigid units) to overcome the computational difficulty of directly determining the transformation of the spine or other anatomical structure. By combining data science methods with typical computer vision methods, a set of potential transformations can be generated and then analyzed using clustering methods.
Embodiments of the present disclosure provide technical solutions to one or more of the following problems: (1) generating accurate geometric measurements from X-ray images of the same patient taken at points in time spaced weeks, months, or even years apart, and comparing those geometric measurements; (2) during registration of two images taken at two different times, and possibly by two different imaging devices, (i) accounting for the effect of a change in camera pose relative to the patient anatomy from one image to the other, (ii) accounting for the effect of changes in body pose and position from one image to the other, and (iii) accounting for noise in the source marker points due to noisy images or segmentation errors; (3) registering two spine images generated at different times with each other notwithstanding a non-rigid transformation of the spine during the period between generation of the first image and generation of the second image; and (4) distinguishing changes in the pose of one or more rigid units depicted in the two images that are due to a camera pose change from changes in the pose of the one or more rigid units themselves.
Turning first to fig. 1, a block diagram of a system 100 in accordance with at least one embodiment of the present disclosure is shown. The system 100 may be used to register two time-spaced images with each other and/or to perform one or more other aspects of one or more methods disclosed herein. The system 100 includes a computing device 102, one or more imaging devices 112, a navigation system 114, a robot 130, a database 136, and a cloud 138. Systems according to other embodiments of the present disclosure may include more or fewer components than system 100. For example, the system 100 may not include the navigation system 114, the robot 130, one or more components of the computing device 102, the database 136, and/or the cloud 138.
The computing device 102 includes a processor 104, a memory 106, a communication interface 108, and a user interface 110. Computing devices according to other embodiments of the present disclosure may include more or fewer components than computing device 102.
The processor 104 of the computing device 102 may be any processor described herein or any similar processor. The processor 104 may be configured to execute instructions stored in the memory 106 that may cause the processor 104 to perform one or more computing steps using or based on data received from the imaging device 112, the robot 130, the navigation system 114, the database 136, and/or the cloud 138.
Memory 106 may be or include RAM, DRAM, SDRAM, other solid state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions. Memory 106 may store information or data for performing any steps of methods 300, 400, and/or 500, or any other method, such as described herein. The memory 106 may store, for example, one or more image processing algorithms 120, one or more segmentation algorithms 122, one or more transformation algorithms 124, one or more homography algorithms 126, and/or one or more registration algorithms 128. In some implementations, such instructions or algorithms may be organized into one or more applications, modules, packages, layers, or engines. The algorithms and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging device 112, the robot 130, the database 136, and/or the cloud 138.
Computing device 102 may also include a communication interface 108. The communication interface 108 may be used to receive data or other information from external sources (such as the imaging device 112, the navigation system 114, the robot 130, the database 136, the cloud 138, and/or any other system or component that is not part of the system 100) and/or to transmit instructions, images, or other information to external systems or devices (e.g., another computing device 102, the navigation system 114, the imaging device 112, the robot 130, the database 136, the cloud 138, and/or any other system or component that is not part of the system 100). The communication interface 108 may include one or more wired interfaces (e.g., USB ports, Ethernet ports, FireWire ports) and/or one or more wireless transceivers or interfaces (configured to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, etc.). In some implementations, the communication interface 108 may be used to enable the device 102 to communicate with one or more other processors 104 or computing devices 102, whether to reduce the time required to complete computationally intensive tasks or for any other reason.
The computing device 102 may also include one or more user interfaces 110. The user interface 110 may be or include a keyboard, mouse, trackball, monitor, television, screen, touch screen, and/or any other device for receiving information from a user and/or for providing information to a user. The user interface 110 may be used, for example, to receive user selections or other user inputs regarding any of the steps of any of the methods described herein. Nonetheless, any desired input for any step of any method described herein may be automatically generated by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100. In some embodiments, the user interface 110 may be used to allow a surgeon or other user to modify instructions to be executed by the processor 104 and/or to modify or adjust settings of other information displayed on or corresponding to the user interface 110 in accordance with one or more embodiments of the present disclosure.
Although the user interface 110 is shown as part of the computing device 102, in some embodiments, the computing device 102 may utilize the user interface 110 housed separately from one or more remaining components of the computing device 102. In some embodiments, the user interface 110 may be located proximate to one or more other components of the computing device 102, while in other embodiments, the user interface 110 may be located remotely from one or more other components of the computing device 102.
The imaging device 112 may be used to image anatomical features (e.g., bones, veins, tissue, etc.) and/or other aspects of the patient anatomy to produce image data (e.g., image data depicting or corresponding to bones, veins, tissue, etc.). The image data may be or include preoperative images, post-operative images or images taken independently of any surgical procedure. In some embodiments, the first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and the second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time that is subsequent to the first time. The first time and the second time may be separated by a surgical time (e.g., one may be preoperative and the other may be post-operative) or a period of time (e.g., days, weeks, months, or years). The imaging device 112 may be capable of capturing 2D images or 3D images to generate image data. As used herein, "image data" refers to data generated or captured by the imaging device 112, including data in machine-readable form, graphical/visual form, and in any other form. In different examples, the image data may include data corresponding to anatomical features of the patient or a portion thereof. The imaging device 112 may be or include, for example, an ultrasound scanner (which may include, for example, physically separate transducers and receivers, or a single ultrasound transceiver), a radar system (which may include, for example, a transmitter, receiver, processor, and one or more antennas), an O-arm, a C-arm, a G-arm, or any other device that utilizes X-ray based imaging (e.g., fluoroscope, CT scanner, or other X-ray machine), a Magnetic Resonance Imaging (MRI) scanner, an optical coherence tomography scanner, an endoscope, a telescope, a thermal imaging camera (e.g., an infrared camera), or any other imaging device 112 suitable for obtaining images of anatomical features of a patient.
In some embodiments, the imaging device 112 may include more than one imaging device 112. For example, the first imaging device may provide first image data and/or a first image, and the second imaging device may provide second image data and/or a second image. In other embodiments, the same imaging device may be used to provide both the first image data and the second image data and/or any other image data described herein. The imaging device 112 may be used to generate an image data stream. For example, the imaging device 112 may be configured to operate with a shutter that is open, or with a shutter that continuously alternates between open and closed, in order to capture successive images. For the purposes of this disclosure, image data may be considered continuous and/or provided as a stream of image data if the image data represents two or more frames per second, unless otherwise specified.
During operation, the navigation system 114 may provide navigation for a surgeon and/or a surgical robot. The navigation system 114 may be any now-known or future-developed navigation system including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof. The navigation system 114 may include one or more cameras or other sensors for tracking one or more reference markers, navigation trackers, or other objects within the operating room or other room in which some or all of the system 100 is located. In various embodiments, the navigation system 114 may be used to track a position and orientation (i.e., pose) of the imaging device 112, the robot 130 and/or the robotic arm 132, and/or one or more surgical operators (or, more specifically, to track a pose of a navigation tracker attached, directly or indirectly, in fixed relationship to one or more of the foregoing). The navigation system 114 may include a display for displaying one or more images from an external source (e.g., the computing device 102, the imaging device 112, or another source) or for displaying images and/or video streams from the cameras or other sensors of the navigation system 114. In some implementations, the system 100 may operate without the use of the navigation system 114. The navigation system 114 may be configured to provide guidance to a surgeon or other user of the system 100 or components thereof, to the robot 130, or to any other element of the system 100 regarding, for example, the pose of one or more anatomical units and/or whether (and/or how) a tool is on an appropriate trajectory to perform a surgical task according to a pre-operative plan.
The robot 130 may be any surgical robot or surgical robotic system. The robot 130 may be or include, for example, a Mazor X™ Stealth Edition robotic guidance system. The robot 130 may be configured to position the imaging device 112 at one or more precise positions and orientations and/or to return the imaging device 112 to the same position and orientation at a later point in time. The robot 130 may additionally or alternatively be configured to manipulate a surgical tool (whether based on guidance from the navigation system 114 or not) to complete or assist in a surgical task. The robot 130 may include one or more robotic arms 132. In some embodiments, the robotic arm 132 may include a first robotic arm and a second robotic arm, although the robot 130 may include more than two robotic arms. In some embodiments, one or more of the robotic arms 132 may be used to hold and/or manipulate the imaging device 112. In embodiments where the imaging device 112 includes two or more physically separate components (e.g., a transmitter and a receiver), one robotic arm 132 may hold one such component and another robotic arm 132 may hold another such component. Each robotic arm 132 may be positionable independently of the other robotic arms.
The robot 130 along with the robot arm 132 may have, for example, at least five degrees of freedom. In some embodiments, robotic arm 132 has at least six degrees of freedom. In still other embodiments, the robotic arm 132 may have less than five degrees of freedom. Additionally, the robotic arm 132 may be positioned or positionable in any pose, plane, and/or focus. The pose includes a position and an orientation. Thus, the imaging device 112, surgical tool, or other object held by the robot 130 (or more specifically, held by the robotic arm 132) may be precisely positioned at one or more desired and specific positions and orientations.
In some embodiments, the reference markers (i.e., navigation markers) may be placed on the robot 130 (including, for example, on the robotic arm 132), the imaging device 112, or any other object in the surgical space. The reference markers may be tracked by the navigation system 114 and the results of the tracking may be used by the robot 130 and/or by an operator of the system 100 or any component thereof. In some embodiments, the navigation system 114 may be used to track other components of the system (e.g., the imaging device 112), and the system may operate without the use of the robot 130 (e.g., the surgeon manually manipulates the imaging device 112 and/or one or more surgical tools, e.g., based on information and/or instructions generated by the navigation system 114).
The system 100 or similar system may be used, for example, to perform one or more aspects of any of the methods 300, 400, and/or 500 described herein. The system 100 or similar system may also be used for other purposes. In some embodiments, the system 100 may be used to generate and/or display a 3D model of an anatomical feature or anatomical volume of a patient. For example, a robotic arm 132 (controlled by the processor of the robot 130, the processor 104 of the computing device 102, or some other processor, with or without any manual input) may be used to position the imaging device 112 at a plurality of predetermined known poses such that the imaging device 112 may obtain one or more images at each of the predetermined known poses. Because the pose of taking each image is known, the resulting images can be assembled together to form or reconstruct a 3D model. As described elsewhere herein, the system 100 may update the model based on information received from the imaging device 112 (e.g., fragment tracking information).
Turning now to fig. 2, embodiments of the present disclosure may be used, for example, to register two time-spaced images 200 with each other. For example, embodiments of the present disclosure may be used to: register the pre-operative image 200A with the post-operative image 200B, the image 200C taken six months post-operatively, and/or the image 200D taken one year post-operatively; register the post-operative image 200B with the image 200C taken six months post-operatively and/or the image 200D taken one year post-operatively; and/or register the image 200C taken six months post-operatively with the image 200D taken one year post-operatively. Additionally, embodiments of the present disclosure may be used to obtain accurate geometric measurements of changes in the pose of one or more anatomical units or medical implants depicted in the registered images, even though the images cannot be directly registered to each other (e.g., by simply overlaying one image on another and aligning corresponding points) due to: a change in pose of the camera used to capture the images relative to the anatomy of the patient being imaged; changes in the posture of the patient when the images were generated; noise in the images; and/or a non-rigid transformation of the anatomical structure depicted in the images.
Although fig. 2 shows a pre-operative image 200A, a post-operative image 200B taken immediately after a surgical procedure for implanting rods and screws depicted in image 200B, an image 200C taken six months after the same surgical procedure, and an image 200D taken one year after the same surgical procedure, embodiments of the present disclosure may be used to register two images spaced apart for a longer or shorter period of time than from pre-operative to post-operative, six months, and/or one year. In some embodiments, the present disclosure may be used to register two images taken two, five, ten or more years apart. In other embodiments, the present disclosure may be used to register two images taken one, two, three, four, five, seven, eight, nine, ten, or eleven months apart. In other embodiments, the present disclosure may be used to register two images taken a few weeks apart or a few days apart. While the benefits of embodiments of the present disclosure may be most pronounced when there is a significant transformation of the imaged anatomical element from one image to another, those same embodiments may be used regardless of the degree of transformation of the anatomical element between the times the two images of the anatomical element are taken.
Fig. 3 depicts a method 300 that may be used, for example, for long-term registration, short-term registration, updating a pre-operative model of a patient anatomy, and/or updating a registration between any two or more of a robot space, a navigation space, and/or a patient space. The term "long-term registration" is intended to convey that the method 300 may be used to register time-spaced images, including images taken days, weeks, months, or even years apart. Even so, the method 300 may also be used to register images taken at relatively close times (e.g., pre-operatively and intra-operatively).
The method 300 (and/or one or more steps thereof) may be performed or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor 104 of the computing device 102 described above. The at least one processor may be part of a robot, such as robot 130, or part of a navigation system, such as navigation system 114. Processors other than any of the processors described herein may also be used to perform the method 300. The at least one processor may perform the method 300 by executing instructions stored in a memory, such as the memory 106. The instructions may correspond to one or more steps of the method 300 described below. The instructions may cause the processor to perform one or more algorithms, such as image processing algorithm 120, segmentation algorithm 122, transformation algorithm 124, homography algorithm 126, and/or registration algorithm 128.
The method 300 includes receiving a first image of a patient anatomy (step 304). The first image is generated by an imaging device, such as imaging device 112, and is generated at a first time. The first time may be one or more days, weeks, or months prior to the surgical procedure affecting the anatomy being imaged, or the first time may be immediately prior to the surgical procedure (e.g., while the patient is on an operating table and/or in an operating room), or the first time may be after the surgical procedure. In some embodiments, the first image is taken independently of any surgical procedure.
The anatomical structure being imaged may be, for example, the spine or a portion thereof of a patient including a plurality of vertebrae. In other embodiments, the imaged anatomical structure may be any other anatomical object that is composed of a plurality of rigid or substantially rigid subunits, or any other anatomical object that undergoes non-rigid deformation and that can be analyzed at the subunit level.
The first image may be received directly or indirectly from an imaging device, such as imaging device 112. The first image may be a two-dimensional image or a three-dimensional image. In some embodiments, the first image is an X-ray image or an image generated using X-rays, such as a CT image or a fluoroscopic image. However, the image may be an image generated using any other imaging modality, such as ultrasound, magnetic resonance imaging, optical coherence tomography, or another imaging modality. Thus, the imaging device may be a CT scanner, a Magnetic Resonance Imaging (MRI) scanner, an Optical Coherence Tomography (OCT) scanner, an O-arm (including, for example, an O-arm 2D long film scanner), a C-arm, a G-arm, another device that utilizes X-ray based imaging (e.g., a fluoroscope or other X-ray machine), or any other imaging device.
The method 300 further includes receiving a second image of the patient anatomy (step 308). The second image is also generated by an imaging device, such as the imaging device 112, but the imaging device used to generate the second image may be different than the imaging device used to generate the first image. Further, the second image is generated at a second time subsequent to the first time. The second time may be separated from the first time by a surgical procedure (e.g., the first image may be a pre-operative image and the second image may be a post-operative image). The second time may be one or more days, weeks, months, or years after the first time.
The second image generally corresponds to an anatomical region or portion of the same patient anatomy as the first image or portion thereof. Thus, for example, if the first image depicts a spine or a fragment thereof of a patient, the second image data also depicts the spine or a fragment thereof. As another example, if the first image depicts a knee or portion thereof of the patient, the second image data also depicts the knee or portion thereof.
The second image may be received directly or indirectly from an imaging device, such as imaging device 112. The second image may have the same dimensions (e.g., two or three dimensions) as the first image. The second image may be an image generated using the same imaging device as the first image or a different imaging device. The imaging device generating the second image may have the same imaging modality as the imaging device generating the first image, or may be a related imaging modality. In some embodiments, the first image and the second image may be generated using different imaging modalities.
The method 300 further includes determining a transformation from the first image to the second image for each rigid unit of the plurality of rigid units in the first image and the second image to produce a set of transformations (step 312). In some embodiments, step 312 may include preprocessing the first image and the second image using one or more image processing algorithms 120 to remove noise and/or artifacts therefrom, to ensure that the two images have the same scale, and to otherwise prepare the images for other aspects of step 312. One or more image processing algorithms 120 may also be used to identify the plurality of rigid units in each image, whether using feature recognition, edge detection, or another object detection method.
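A minimal sketch of the kind of preprocessing contemplated here is shown below; it assumes plain two-dimensional X-ray images with a known pixel spacing, and the specific denoising and resampling calls (and the kernel and spacing values) are illustrative choices rather than requirements of the present disclosure.

```python
import cv2

def preprocess(img, pixel_spacing_mm, target_spacing_mm=0.5):
    """Denoise an X-ray image and resample it so both images share a common scale."""
    # Light Gaussian denoising; the kernel size and sigma are arbitrary illustrative values.
    denoised = cv2.GaussianBlur(img, (5, 5), 1.0)

    # Resample so that the first and second images end up at the same mm-per-pixel scale.
    scale = pixel_spacing_mm / target_spacing_mm
    return cv2.resize(denoised, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
```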
In some embodiments, step 312 includes segmenting the first image and the second image to identify and/or delineate individual rigid units within each image. Such segmentation may be accomplished using one or more segmentation algorithms 122 and/or any other segmentation algorithm or process. Step 312 may also include identifying anatomical objects within the first image and the second image using an anatomical atlas, a biomechanical model, or another reference; determining which of those anatomical objects are rigid units; and/or determining a relationship (if any) between two or more identified rigid units. Thus, for example, an anatomical atlas may be referenced to determine that two adjacent vertebrae are connected by an intervertebral disc, or a patient-specific biomechanical model may be referenced to determine that two adjacent vertebrae have fused and should move as a whole within the patient's anatomy.
The plurality of rigid units may include individual bones or other hard tissue anatomical objects. The plurality of rigid units may also include one or more medical implants, such as pedicle screws, vertebral rods, surgical pins, and/or intervertebral bodies. In the case where a particular rigid element appears in the first image but does not appear in the second image, or vice versa, the particular rigid element may be excluded from the plurality of rigid elements. Similarly, in some embodiments, the plurality of rigid units may not include each rigid unit depicted in one image or both images. For the purposes of this disclosure, a unit of bone anatomy or other hard tissue may be considered rigid, even if the unit has a degree of flexibility. At least one rigid unit of the plurality of rigid units is movable relative to at least another rigid unit of the plurality of rigid units.
To determine a transformation from the first image to the second image for each rigid element of the plurality of rigid elements, one or more transformation algorithms 124 may be used. The determining may include superimposing the second image on the first image or defining any other relationship between the first image and the second image. In some embodiments, a "best guess" alignment between the first image and the second image may be performed automatically or manually, such as by aligning a protruding edge or surface in the two images (e.g., a visible edge of the patient, such as the back or side of the patient; one or more surfaces of the hip or pelvis of the patient, or one or more surfaces of another hard tissue element that is less likely to move over time than the rigid element in question). To determine the transformation, a fixed relationship between the two images must be established; however, the fixed relationship need not be exact, as the remaining steps of method 300 will differentiate between: aspects of each transformation attributable to camera pose, patient position, or other parameters that affect the depiction of each rigid unit in the same manner, and aspects of each transformation attributable to movement of the rigid unit.
The determined transformation for each rigid unit of the plurality of rigid units may be a homography. A homography relates a given rigid unit as depicted in the first image to the same rigid unit as depicted in the second image. For the purpose of calculating the homography, a plurality of points on the rigid unit (visible in both the first image and the second image) may be selected. For example, these points may be points along the perimeter of the rigid unit in an anterior-posterior (AP) or lateral (LT) projection. Where the rigid unit is a vertebra, these points may be edge points of the vertebral endplates. Where the rigid unit is a screw, these points may be at the two ends of the screw (e.g., at the top of the screw head and at the screw tip). Where the rigid unit is a rod, these points may be at opposite ends of the rod. For purposes of this disclosure, multiple screws in a single anatomical unit may be considered a single rigid unit. These points may be specified manually (e.g., via a user interface such as the user interface 110) or automatically (e.g., using the image processing algorithm 120, the segmentation algorithm 122, or any other algorithm). The homography may be calculated using a homography algorithm, such as the homography algorithm 126. Any known method for calculating a homography may be used.
In some implementations, homographies of adjacent rigid units can be calculated. Thus, for example, the homographies of each pair of adjacent vertebrae may be calculated using the determined transforms corresponding to each pair of adjacent vertebrae.
Where the first image and the second image are two-dimensional images, the plurality of points includes at least four points. Where the first image and the second image are three-dimensional images, the plurality of points includes at least eight points. Whether 2D or 3D images are used, more than the minimum number of points may be utilized. Additionally, the desired points may include points defined with reference to the patient anatomy, points defined with reference to one or more implants (e.g., screws, rods), or any combination thereof. The selected points may be connected by a marker line, and one or more points may be interpolated along the marker line. Notably, noise in the points used to calculate the homography (e.g., any differences between the locations of the points in each image and the corresponding rigid units in that image) will result in the calculation of a noisy homography.
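As a small illustration of the interpolation mentioned above, additional correspondences can be generated along the marker line joining two marked corners, provided the same fractions are used in both images so that the interpolated points still correspond; the fractions and coordinates below are arbitrary, hypothetical values.

```python
import numpy as np

def interpolate_along_marker_line(p0, p1, fractions=(0.25, 0.5, 0.75)):
    """Return extra points interpolated along the marker line from p0 to p1."""
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    return [tuple(p0 + t * (p1 - p0)) for t in fractions]

# Example: extra correspondences along an endplate edge marked only by its two corners.
extra_t1 = interpolate_along_marker_line((112, 340), (198, 338))
extra_t2 = interpolate_along_marker_line((120, 351), (205, 347))
```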
The use of screws, rods and/or other implants as a rigid unit for the purposes of this disclosure is beneficial in view of the fact that implants may be less likely to change over time than anatomical units.
The determination of the transformation (whether homography or otherwise) for each rigid unit of the plurality of rigid units produces a set of transformations. Each transformation may include one or more distances, angles, and/or other measurements sufficient to describe the change in pose of the rigid unit for which the transformation was determined. In some implementations, each determined transformation may simply include a segmented image of the rigid unit in a first position (e.g., as depicted in the first image) and in a second position (e.g., as depicted in the second image). In other implementations, the determined transformation may include an equation or a set of equations describing the movement of the rigid unit from the pose depicted in the first image to the pose depicted in the second image.
The method 300 also includes computing a homography for each transformation to produce a set of homographies.
The method 300 also includes identifying a common portion of each transformation attributable to the camera pose change (step 316). The identification may be based, for example, on the calculated transformations. The identification relies on the assumption that most of the transformations will be determined solely by the camera pose change (e.g., because the corresponding vertebra or other rigid unit has not moved) or, put differently, that the camera pose change causes the same or nearly the same transformation for every rigid unit (because it affects each rigid unit more or less equally), whereas the motion of an individual rigid unit does not necessarily have any correlation with the motion of the other rigid units. On this assumption, data science methods such as clustering can be used to separate transformations that result only from the camera pose change from those caused by a combination of a camera pose change and rigid unit motion.
Where clustering is used, the clustering may be done in the transform space (e.g., a 9-dimensional space) or in a reduced space. The resulting clusters may be analyzed using silhouette measures, variances, sizes, or another parameter that may be used to separate transformations attributable only to the camera pose change from transformations attributable to both the camera pose change and motion of a rigid unit. The most coherent cluster (or a cluster selected by applying other parameters) may be averaged, with the average of the cluster treated as the transformation corresponding to the camera pose change. The use of clustering advantageously accounts for noise in the transformations caused by noise in the marked points used to calculate the transformations.
The transformation corresponding to the camera pose change may account for the apparent movement of the majority of the individual rigid units in the first image and the second image. Regardless of the number of rigid units whose transformations are explained by the camera pose change alone, the portion of each transformation that is explained by the camera pose change alone constitutes the common portion of each transformation (e.g., because that portion affects each transformation equally).
Regardless of how step 316 is completed, the result is a determination of how the change in pose of the camera used to capture the first and second images results in a determined transformation for each rigid unit.
The method 300 also includes identifying a separate portion of each transformation attributable to the rigid unit pose change (step 320). Identifying the separate portion of each transformation attributable to the rigid unit pose change may include, for example, projecting each rigid unit from the second image onto the first image using the common portion of each transformation determined in step 316. For a rigid unit that did not move between the first time and the second time, the result of this projection will be to align the rigid unit as depicted in the second image with the corresponding rigid unit as depicted in the first image. For a rigid unit that did move between the first time and the second time, the result of this projection will be to remove the effect of the camera pose change from the depiction of the rigid unit. Thus, any misalignment between a rigid unit projected from the second image onto the first image and the corresponding rigid unit as depicted in the first image is attributable to movement of the rigid unit itself. In other words, any difference between the pose of a rigid unit projected from the second image onto the first image and the pose of the corresponding rigid unit as depicted in the first image constitutes the separate portion of that transformation attributable to the rigid unit pose change.
In some embodiments, the identifying may not include projecting the rigid unit from the second image onto the first image using the common portion of each transformation determined in step 316. Instead, the identification may include calculating the difference between the common portion of each transformation determined in step 316 and the transformation calculated for the individual rigid units. In some implementations, any such calculated differences below a predetermined threshold may be discarded due to noise or otherwise constituting insubstantial motion. Other methods of identifying individual portions of each transformation attributable to rigid unit pose changes may also be utilized in accordance with embodiments of the present disclosure.
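One way to make the separate portion concrete is sketched below, under the assumption that the common homography H' and the per-unit homographies are already available: the residual transform of a unit is what remains after the camera-pose effect is removed, and reprojecting the marked points quantifies any remaining misalignment. The function names are illustrative only and are not taken from the present disclosure.

```python
import numpy as np
import cv2

def residual_transform(H_unit, H_common):
    """Separate (per-unit) portion of a transformation after removing the common portion.

    H_unit maps the unit from the t1 image to the t2 image; H_common is H'.
    If the unit itself did not move, the residual is approximately the identity.
    """
    R = np.linalg.inv(H_common) @ H_unit
    return R / R[2, 2]

def reprojection_misalignment(pts_t1, pts_t2, H_common):
    """Mean pixel distance between t2 points projected back onto t1 and the marked t1 points."""
    pts = np.asarray(pts_t2, np.float32).reshape(-1, 1, 2)
    back = cv2.perspectiveTransform(pts, np.linalg.inv(H_common)).reshape(-1, 2)
    return float(np.linalg.norm(back - np.asarray(pts_t1, np.float32), axis=1).mean())
```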
The method 300 further includes registering the second image with the first image based on the identified common portion (step 324). Step 324 may occur prior to step 320 (and other steps) and may include projecting the rigid units from the second image onto the first image using the common portion of each transformation identified in step 316. The registration may also include otherwise aligning the second image with the first image based on units known not to have moved from the first time to the second time (e.g., units that only appear to have moved due to the camera pose change but that have not actually moved, or that have moved only within a certain tolerance). The registration may utilize one or more registration algorithms, such as registration algorithm 128.
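As an illustrative sketch only, the registration of step 324 could resemble the following, assuming the common transformation is a 3x3 homography mapping second-image coordinates to first-image coordinates, and assuming OpenCV as one possible (not mandated) implementation choice.

```python
# Warp the second image into the frame of the first using the common
# (camera-pose) homography so that unmoved rigid units line up.
import cv2

def register_second_to_first(second_img, H_common_2to1, first_shape):
    h, w = first_shape[:2]
    return cv2.warpPerspective(second_img, H_common_2to1, (w, h))
```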
In some embodiments, step 324 may alternatively include updating the pre-operative model based on separate portions of each transformation. The pre-operative model may have been generated, for example, based on the pre-operative image, and the updating may include updating each rigid unit depicted in the pre-operative model to reflect any change in the position of the rigid unit from the time the pre-operative image was taken to the time the second image was taken. In such embodiments, the second image may be an intra-operative image or a post-operative image.
Also in some embodiments, step 324 may alternatively include updating the registration of one of the robot space or the navigation space with the image space based on one of the common portion of each transformation or the separate portion of each transformation. This update may be advantageous to maintain accurate registration, which in turn may increase the accuracy of the surgery.
The method 300 further includes quantifying a change in pose of at least one rigid unit (step 328). The quantification utilizes only the portion of each transformation attributable to the rigid unit pose change. In other words, the quantifying includes quantifying one or more aspects of the change in pose of a particular rigid unit caused by movement of that rigid unit from the first time to the second time (rather than the apparent change in pose of the rigid unit attributable to the change in pose of the camera used to image the particular rigid unit at the first time and the second time).
The quantifying may include, for example, determining a rotation angle of the rigid unit from the first time to the second time, and/or determining a translation distance of the rigid unit. The quantifying may include comparing the pose of the rigid unit at the second time to a desired pose change, and the result may be expressed as a percentage of the desired pose change (e.g., based on a comparison to a physical or virtual model of the ideal pose of the rigid unit, whether in a surgical plan, a treatment plan, or otherwise). The quantifying may further comprise quantifying a change in pose of each of the rigid units.
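By way of a hedged example, such quantification could be sketched as follows, assuming the residual (rigid-unit-only) transformation is approximately a two-dimensional rigid motion; the function name and the optional comparison to a planned rotation are illustrative assumptions only.

```python
# Quantify a unit's own pose change from its residual transformation
# (camera-pose effect already removed); rotation is read from the upper-left
# 2x2 block and translation from the last column.
import numpy as np

def quantify_pose_change(H_res, planned_angle_deg=None):
    angle_deg = float(np.degrees(np.arctan2(H_res[1, 0], H_res[0, 0])))
    translation = float(np.hypot(H_res[0, 2], H_res[1, 2]))   # pixels, or mm if calibrated
    pct_of_plan = 100.0 * angle_deg / planned_angle_deg if planned_angle_deg else None
    return angle_deg, translation, pct_of_plan
```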
The present disclosure encompasses embodiments of the method 300 that include more or fewer steps than those described above, and/or one or more steps that differ from the steps described above.
Fig. 4 depicts a method 400 for correlating images taken at different times. The method 400 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor 104 of the computing device 102 described above. The at least one processor may be part of a robot, such as robot 130, or part of a navigation system, such as navigation system 114. Processors other than any of the processors described herein may also be used to perform the method 400. The at least one processor may perform the method 400 by executing instructions stored in a memory, such as the memory 106. The instructions may correspond to one or more steps of the method 400 described below. The instructions may cause the processor to perform one or more algorithms, such as image processing algorithm 120, segmentation algorithm 122, transformation algorithm 124, homography algorithm 126, and/or registration algorithm 128.
The method 400 includes segmenting each rigid element of a plurality of rigid elements in the first image and the second image (step 404). The first image is taken at a first time and the second image is taken at a second time after the first time. The first image may be the same as or similar to any other first image described herein, and the second image may be the same as or similar to any other second image described herein. The first image and the second image each depict a common portion of the patient anatomy, but the first image and the second image may not be perfectly aligned (e.g., the first image may depict one or more portions of the patient anatomy that are not depicted in the second image, in addition to the common portion of the patient anatomy depicted in both the first image and the second image, or vice versa). The plurality of rigid units may be or include, for example, one or more vertebrae and/or other bony anatomy or hard tissue units, and/or one or more implants (e.g., pedicle screws, cortical screws, rods, pins, and/or other implants).
The segmentation may be accomplished using one or more segmentation algorithms 122 and/or any other segmentation algorithm or process. Step 404 may also include identifying anatomical objects within the first and second images using an anatomical map, a biomechanical model, or another reference; determining which of those anatomical objects are rigid units; and/or determining a relationship (if any) between two or more identified rigid units. Thus, for example, an anatomical map may be referenced to determine that two adjacent vertebrae are connected by an intervertebral disc, or a patient-specific biomechanical model may be referenced to determine that two adjacent vertebrae have fused and should move as a single unit within the patient's anatomy. The segmentation enables the perimeter of each rigid unit in the first and second images to be determined, so that each rigid unit can be analyzed separately.
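The following is not segmentation algorithm 122, which is not detailed here; it is only a generic stand-in showing how per-unit perimeters might be extracted once rigid units are roughly localized, with the threshold and minimum area being arbitrary assumptions for illustration.

```python
# Generic stand-in for perimeter extraction: threshold the radiograph and keep
# sufficiently large contours as candidate rigid-unit outlines.
import cv2

def rough_unit_contours(xray_gray, thresh=160, min_area=500.0):
    _, mask = cv2.threshold(xray_gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```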
The method 400 also includes computing a set of homographies that relate the depiction of each rigid unit in the first image to the corresponding rigid unit in the second image (step 408). The homographies may be calculated using any known method, and the calculation may utilize one or more homography algorithms 126. Each calculated homography describes a relationship between a rigid unit in the first image and the corresponding rigid unit in the second image; in other words, each homography relates a rigid unit in the first image to the corresponding rigid unit in the second image. As a result, given the computed homography and the depiction of a rigid unit in either the first image or the second image, a depiction of that rigid unit can be generated in the other of the first image or the second image.
For the purpose of calculating the homographies, a plurality of points on each rigid unit (visible in both the first image and the second image) may be selected. These points may be, for example, points along the perimeter of the rigid unit in an anteroposterior (AP) or lateral (LT) projection. Where the rigid unit is a vertebra, the points may be edge points of the vertebral endplates. Where the rigid unit is a screw, the points may be at both ends of the screw (e.g., at the top of the screw head and at the screw tip). Where the rigid unit is a rod, the points may be at opposite ends of the rod. For purposes of this disclosure, multiple screws in a single anatomical unit may be treated as a single rigid unit. The points may be specified manually (e.g., via a user interface such as user interface 110) or automatically (e.g., using image processing algorithm 120, segmentation algorithm 122, or any other algorithm). The homography may then be calculated using a homography algorithm, such as homography algorithm 126, and any known method for calculating homographies may be used.
Where the first image and the second image are two-dimensional images, the plurality of points comprises at least four points. Where the first image and the second image are three-dimensional images, the plurality of points comprises at least eight points. Whether 2D or 3D images are used, more than the minimum number of points may be utilized. Additionally, the selected points may include points defined with reference to the patient anatomy, points defined with reference to one or more implants (e.g., screws, rods), or any combination thereof. The selected points may be connected by a marker line, and one or more additional points may be interpolated along the marker line. Notably, noise in the points used to calculate a homography (e.g., any difference between the location of a point in each image and the corresponding rigid unit in that image) will result in a noisy homography.
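By way of a non-limiting sketch, a per-unit homography could be estimated from such matched points as follows, assuming at least four two-dimensional correspondences per rigid unit and assuming OpenCV's robust estimator as one possible choice.

```python
# Estimate one rigid unit's homography from matched landmark points
# (e.g., endplate corners of a vertebra, or the head and tip of a screw).
import numpy as np
import cv2

def unit_homography(points_first, points_second):
    src = np.asarray(points_first, dtype=np.float32)    # (N, 2), N >= 4
    dst = np.asarray(points_second, dtype=np.float32)   # corresponding points
    H, _ = cv2.findHomography(src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return H   # 3x3 homography mapping first-image points to second-image points
```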
The use of screws, rods, and/or other implants as rigid units for the purposes of this disclosure is beneficial because implants may be less likely to change over time than anatomical units.
The method 400 also includes removing from the set of homographies any homography that is affected by a physical change in the shape of a rigid unit (step 412). The method 400 is based on the assumption that changes in bone structure (and, more generally, changes in the shape of rigid units) are less pronounced than soft tissue changes, and that any rigid unit that has changed shape will add undesirable noise. The shape of a rigid unit does, however, sometimes change. Such shape changes may be caused by, for example, compression fractures, osteophytes (bone spurs), and/or other causes.
The removal of homographies affected by physical changes in shape may be accomplished manually or automatically. In some embodiments, a shape change may be identified by a treating physician or other user (e.g., from the first image and the second image) before any homographies are calculated. In other embodiments, the treating physician or other user may review the first image and the second image after the homographies have been calculated and may identify one or more rigid units that have changed shape, based on which the corresponding homographies may be discarded or ignored. In still other embodiments, a processor may use one or more image processing algorithms 120 or other algorithms to identify shape changes, either before or after the segmentation of step 404. In such implementations, a shape change may be identified based on a rough comparison of the edges of each rigid unit in the first and second images (e.g., as detected using an edge detection algorithm, a segmentation algorithm, or another algorithm).
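One hedged way such an automatic screen might look is sketched below, assuming a binary mask is available for each rigid unit and using an arbitrary overlap cutoff: if a unit warped by its own homography still overlaps its first-image mask poorly, the unit itself may have changed shape, and its homography may be discarded.

```python
# Rough shape-change screen for step 412: align a unit's second-image mask to
# the first image with that unit's homography and compare the overlap (IoU).
import cv2
import numpy as np

def shape_changed(mask_first, mask_second, H_second_to_first, iou_cutoff=0.9):
    h, w = mask_first.shape[:2]
    warped = cv2.warpPerspective(mask_second, H_second_to_first, (w, h))
    a, b = mask_first > 0, warped > 0
    iou = np.logical_and(a, b).sum() / max(int(np.logical_or(a, b).sum()), 1)
    return iou < iou_cutoff
```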
The method 400 also includes arranging the set of homographies into homography clusters (step 416). The homographies may be clustered using any data science clustering method. The purpose of the clustering is to identify the homographies that are most similar to each other; these can be assumed to correspond to rigid units that did not move from the first time to the second time, and whose apparent pose change in the second image relative to the first image is therefore entirely or almost entirely attributable to the camera pose change. Thus, any clustering method that groups similar homographies together may be used. The clustering may be done in the transformation space (e.g., a 9-dimensional space) or in a reduced-dimensionality space.
The method 400 further includes selecting a homography cluster based on a parameter (step 420). The parameter may be a silhouette score, a variance, a size, or another parameter that can separate homographies attributable only to the camera pose change from homographies attributable to both the camera pose change and motion of a rigid unit. The selected cluster may include most of the homographies used in the cluster analysis, or only a few of them. Because the camera pose change (from the first time to the second time) affects each rigid unit equally, while the motion of each rigid unit is not necessarily related to the motion of any other rigid unit, the most coherent cluster most likely includes homographies that reflect only the apparent motion resulting from the camera pose change. However, even the most coherent cluster is unlikely to contain perfectly matching homographies, due to noise in the homographies caused by noise in the marker points used to calculate them, in the segmentation of each rigid unit, and in any other aspect of the method 400 that may lack 100% accuracy.
The method 400 further includes projecting each rigid unit as depicted in the second image onto the first image using an average of the selected homography cluster to produce a projected image (step 424). The average of the selected homographies is used to reduce the effect of any noise affecting the homographies in the most coherent (or otherwise selected) cluster. The averaged homography is then used to project the rigid units from the second image onto the first image. Because the averaged homography corresponds to the effect of the change in camera pose from the first time (when the first image is captured) to the second time (when the second image is captured), the projection causes any projected rigid unit that did not move from the first time to the second time to align with, and overlap, the corresponding rigid unit from the first image. For any rigid unit that did move from the first time to the second time, the projection removes the effect of the camera pose change on the pose of that projected rigid unit, so that the projected image depicts only the actual pose change of that rigid unit from the first time to the second time.
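A minimal sketch of this projection follows, assuming the averaged cluster homography maps first-image points to second-image points, so that its inverse projects each unit's landmark points from the second image back onto the first; all names are illustrative assumptions.

```python
# Project one unit's landmark points from the second image onto the first
# using the inverse of the averaged (camera-pose) homography.
import numpy as np
import cv2

def project_unit_onto_first(points_second, H_avg_1to2):
    pts = np.asarray(points_second, dtype=np.float32).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(pts, np.linalg.inv(H_avg_1to2))
    return projected.reshape(-1, 2)
```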
The method 400 further includes measuring a change between a first pose and a second pose of a rigid unit as depicted in the projected image (step 428). As described above, the projected image includes a depiction of each rigid unit from the second image that has been projected onto the first image using the average of the selected homography cluster. Thus, any pose difference between two corresponding rigid units in the projected image may be assumed to reflect the actual pose change of that rigid unit. The change in pose may be measured to produce one or more rotation angles, translation distances, and/or other parameters describing movement of the rigid unit from the first time to the second time. In some embodiments, the measured quantity may be compared to an expected quantity (as reflected, for example, in a treatment plan) to produce a percentage of realization or a similar parameter. In other embodiments, the measured quantity may be divided by the amount of time between the first time and the second time to produce a rate of change, which may be used to predict future changes in the pose of one or more rigid units, to predict whether and when additional surgery or other treatment will be needed, or for any other useful purpose.
In addition, the measured quantity (and/or the results of any calculations done using the measured quantity) may be displayed to a treating physician or other user on a user interface, such as user interface 110. The measured quantity may be displayed as a number or may be converted into an indicator (e.g., a red indicator if the quantity is within a predetermined range of unacceptable values, a yellow indicator if the quantity is within a predetermined range of unsatisfactory values, and a green indicator if the quantity is within a predetermined range of acceptable values).
The present disclosure encompasses embodiments of the method 400 that include more or fewer steps than those described above, and/or one or more steps that differ from the steps described above.
Fig. 5 depicts a method 500 for comparing images. The method 500 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor, which may be part of a system. The at least one processor may be the same as or similar to the processor 104 of the computing device 102 described above. The at least one processor may be part of a robot, such as robot 130, or part of a navigation system, such as navigation system 114. Processors other than any of the processors described herein may also be used to perform the method 500. The at least one processor may perform the method 500 by executing instructions stored in a memory, such as the memory 106. The instructions may correspond to one or more steps of the method 500 described below. The instructions may cause the processor to perform one or more algorithms, such as image processing algorithm 120, segmentation algorithm 122, transformation algorithm 124, homography algorithm 126, and/or registration algorithm 128.
The method 500 includes identifying a plurality of cells in a first image (step 504). The first image may be taken using any imaging device (e.g., imaging device 112) and taken at a first time. The first image depicts a portion of a patient anatomy. One or more image processing algorithms, such as image processing algorithm 120, may be utilized for identification. Each of these units is a rigid unit and may be an anatomically rigid unit (e.g., a bone anatomy or hard tissue unit) or a rigid implant (e.g., screw, rod, pin). The plurality of cells may include both one or more anatomically rigid cells and one or more rigid implants. In some embodiments, identifying the plurality of cells in the first image further comprises segmenting the plurality of cells in the first image, which may be accomplished in any of the ways described herein or in any other known way of segmenting cells in an image.
The method 500 further includes identifying a plurality of cells in a second image captured after the first image (step 508). Similar to the first image, the second image may be captured using any imaging device (e.g., imaging device 112) and depicts the same portion of patient anatomy (or at least substantially overlapping portions of patient anatomy) as the first image. The second image is taken at a second time subsequent to the first time. As with other embodiments of the present disclosure, the second time may be days, weeks, months, or even years after the first time. One or more image processing algorithms, such as image processing algorithm 120, may be utilized for identification. The plurality of cells identified in the second image are the same as the plurality of cells identified in the first image. In some embodiments, identifying the plurality of cells in the second image further comprises segmenting the plurality of cells in the second image, which may be accomplished in any of the ways described herein or in any other known way of segmenting cells in an image.
The method 500 further includes calculating homographies for each of the plurality of cells (step 512). Step 512 is the same as or similar to step 312 of method 300 and/or step 408 of method 400.
The method 500 also includes determining a first pose change attributable to the imaging device position change and a second pose change not attributable to the imaging device position change based on the homographies (step 516). Step 516 is the same as or similar to the combination of steps 316 and 320 of method 300 and/or the combination of steps 416, 420 and 424 of method 400.
The method 500 further includes registering the second image with the first image based on the first pose change (step 520). Step 520 is the same as or similar to step 324 of method 300.
In some embodiments, step 520 may alternatively include updating the pre-operative model based on separate portions of each transformation. The pre-operative model may have been generated, for example, based on the pre-operative image, and the updating may include updating each rigid unit depicted in the pre-operative model to reflect any change in the position of the rigid unit from the time the pre-operative image was taken to the time the second image was taken. In such embodiments, the second image may be an intra-operative image or a post-operative image.
Also in some embodiments, step 520 may alternatively include updating the registration of one of the robot space or the navigation space with the image space based on one of the common portion of each transformation or the separate portion of each transformation. This update may be advantageous to maintain accurate registration, which in turn may increase the accuracy of the surgery.
The present disclosure encompasses embodiments of the method 500 that include more or fewer steps than those described above, and/or one or more steps that differ from the steps described above.
As described above, the present disclosure encompasses methods having fewer than all of the steps identified in fig. 3, 4, and 5 (and corresponding descriptions of methods 300, 400, and 500), as well as methods including additional steps beyond those identified in fig. 3, 4, and 5 (and corresponding descriptions of methods 300, 400, and 500). The present disclosure also encompasses methods comprising one or more steps from one method described herein and one or more steps from another method described herein. Any of the correlations described herein may be or include registration or any other correlation.
The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing detailed description, for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. Features of the aspects, embodiments, and/or configurations of the present disclosure may be combined in alternative aspects, embodiments, and/or configurations other than those discussed above. This manner of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
Furthermore, while the foregoing has included descriptions of one or more aspects, embodiments, and/or configurations, and certain variations and modifications, other variations, combinations, and modifications are within the scope of this disclosure, e.g., as may be within the skill and knowledge of those in the art after understanding the present disclosure. It is intended to obtain rights, to the extent permitted, that include alternative aspects, embodiments, and/or configurations, including alternate, interchangeable, and/or equivalent structures, functions, ranges, or steps to those claimed, whether or not such alternate, interchangeable, and/or equivalent structures, functions, ranges, or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.

Claims (28)

1. A method, comprising:
receiving a first image of a patient anatomy, the first image generated at a first time and depicting a plurality of rigid units, each rigid unit of the plurality of rigid units being movable relative to at least another rigid unit of the plurality of rigid units;
receiving a second image of the patient anatomy, the second image generated at a second time subsequent to the first time and depicting the plurality of rigid units;
determining, for each rigid unit of the plurality of rigid units, a transformation from the first image to the second image to produce a set of transformations; and
using the set of transformations to identify a common portion of each transformation attributable to a camera pose change and a separate portion of each transformation attributable to a rigid unit pose change.
2. The method of claim 1, further comprising:
registering the second image with the first image based on the identified common portion of each transformation.
3. The method of claim 1, further comprising:
updating a pre-operative model based on the separate portion of each transformation.
4. The method of claim 1, further comprising:
updating a registration of one of a robot space or a navigation space with an image space based on one of the common portion of each transformation or the separate portion of each transformation.
5. The method of claim 1, wherein each transformation is a homography and the set of transformations is a set of homographies.
6. The method of claim 1, wherein the identifying step utilizes clustering to separate transformations of the set of transformations resulting from the camera pose changes.
7. The method of claim 1, wherein the registering step comprises spatially correlating both the first image and the second image with a common vector.
8. The method of claim 1, wherein the first image is a preoperative image.
9. The method of claim 1, wherein at least one of the first image and the second image is an intra-operative image.
10. The method of claim 1, wherein computing the transformation includes identifying at least four points on each rigid unit of the plurality of rigid units as depicted in the first image and corresponding at least four points on each rigid unit of the plurality of rigid units as depicted in the second image.
11. The method of claim 1, wherein the first image and the second image are two-dimensional.
12. The method of claim 1, wherein the first image and the second image are three-dimensional.
13. The method of claim 1, wherein the plurality of rigid units comprises a plurality of vertebrae of the patient's spine.
14. The method of claim 1, wherein the plurality of rigid units comprises at least one implant.
15. The method of claim 1, further comprising quantifying a change in pose of at least one rigid element of the plurality of rigid elements from the first time to the second time.
16. A method of correlating images taken at different times, comprising:
segmenting each rigid unit of a plurality of rigid units in a first image of the plurality of rigid units taken at a first time and in a second image of the plurality of rigid units taken at a second time subsequent to the first time;
computing a homography for each rigid unit of the plurality of rigid units to generate a set of homographies, each homography relating the rigid unit as depicted in the first image to the rigid unit as depicted in the second image;
arranging the set of homographies into homography clusters based on at least one characteristic;
selecting a homography cluster based on at least one parameter; and
projecting each rigid unit of the plurality of rigid units as depicted in the second image onto the first image using an average of the selected homography cluster to produce a projected image.
17. The method of claim 16, wherein the second time is at least one month after the first time.
18. The method of claim 16, wherein the second time is at least one year after the first time.
19. The method of claim 16, wherein the at least one parameter is a silhouette score.
20. The method of claim 16, wherein at least one rigid unit of the plurality of rigid units is an implant.
21. The method of claim 16, wherein the plurality of rigid units comprises a plurality of vertebrae of a patient's spine.
22. The method of claim 16, further comprising measuring at least one of an angle or a distance corresponding to a change in pose of one of the plurality of rigid units as reflected in the projected image.
23. The method of claim 16, further comprising removing from the set of homographies any homographies affected by one or more of a compression fracture or osteophyte depicted in the second image but not the first image.
24. The method of claim 16, wherein computing the homography for each rigid unit of the plurality of rigid units includes identifying edge points of a vertebral endplate.
25. A system for comparing images, comprising:
at least one processor; and
a memory storing instructions for execution by the processor, the instructions when executed causing the processor to:
identifying a plurality of cells in a first image generated at a first time;
identifying the plurality of cells in a second image generated at a second time subsequent to the first time;
calculating homographies for each of the plurality of cells using the first image and the second image to generate a set of homographies; and
determining based on the set of homographies: a first pose change of one or more of the plurality of cells in the second image relative to the first image, the first pose change attributable to an imaging device position change from the first image to the second image relative to the plurality of cells; and a second pose change of at least one of the plurality of cells in the second image relative to the first image, the second pose change not attributable to the imaging device position change.
26. The system of claim 25, wherein the memory stores additional instructions for execution by the processor, the additional instructions when executed further causing the processor to:
registering the second image with the first image based on the first pose change.
27. The system of claim 25, wherein the memory stores additional instructions for execution by the processor, the additional instructions when executed further causing the processor to:
updating the preoperative model based on the second pose change.
28. The system of claim 25, wherein the memory stores additional instructions for execution by the processor, the additional instructions when executed further causing the processor to:
updating the registration of one of the robot space or the navigation space with the image space based on one of the first pose change or the second pose change.

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/125,822 2020-12-15
US17/526,935 US20220189047A1 (en) 2020-12-15 2021-11-15 Registration of time-separated x-ray images
US17/526,935 2021-11-15
PCT/IL2021/051452 WO2022130371A1 (en) 2020-12-15 2021-12-07 Registration of time-separated x-ray images



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination