CN117320656A - Methods and systems for using augmented reality to propose spinal rods for orthopedic surgery - Google Patents

Methods and systems for using augmented reality to propose spinal rods for orthopedic surgery

Info

Publication number
CN117320656A
CN117320656A (application number CN202280032309.1A)
Authority
CN
China
Prior art keywords
screw
data
pedicle
calculating
spinal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280032309.1A
Other languages
Chinese (zh)
Inventor
V·勒福科尼耶
Current Assignee
Neo Medical SA
Original Assignee
Neo Medical SA
Priority date
Filing date
Publication date
Application filed by Neo Medical SA filed Critical Neo Medical SA
Priority claimed from PCT/IB2022/051805 (published as WO2022185210A1)
Publication of CN117320656A

Landscapes

  • Surgical Instruments (AREA)

Abstract

A method for assisting an orthopedic procedure is disclosed, the method comprising the steps of: capturing a sequence of images such that the field of view captures images of a plurality of screw extenders, each screw extender holding a pedicle screw, the plurality of screw extenders being disposed at a surgical incision of the orthopedic procedure; displaying at least some of the captured images to provide a live video feed; detecting the plurality of screw extenders based on the captured image sequence; first calculating an orientation and position of the detected plurality of screw extenders; second calculating a 3D position of the screw head of each pedicle screw based on the orientation and position; and projecting and displaying, on the display device, the calculated 3D position of each of the plurality of screw heads with a graphical element, at a position corresponding to the screw head position projected onto the currently displayed image of the live video feed.

Description

Methods and systems for using augmented reality to propose spinal rods for orthopedic surgery
Cross Reference to Related Applications
The present application claims priority from International Patent Application Serial No. PCT/IB2021/051694, filed March 1, 2021, and International Patent Application Serial No. PCT/IB2021/056242, filed July 12, 2021, both of which are incorporated herein by reference in their entirety.
Technical Field
The present invention relates to the field of orthopedic surgery using augmented reality or mixed reality, and more particularly to methods, systems and devices for using augmented reality or mixed reality to provide assistance or convenience to a surgeon performing an orthopedic surgery, in particular for proposing different types and shapes of stabilizer bars for spinal fusion surgery and other types of orthopedic surgery.
Background
In the field of orthopedic and implantation tools and systems for orthopedic surgery, more particularly spinal fusion surgery for the spine, pedicle screws are used to attach to vertebrae with bone anchors through incision locations on the back of a patient. After several pedicle screws are attached to different vertebrae, the heads of these pedicle screws are connected together via rod or bar-type devices, and the rod or bar-type devices (also referred to as spinal rods) are attached to the heads of the pedicle screws with set screws. As an example, for several adjacent vertebrae for vertebral fusion, for each vertebra, pedicle screws are threadably attached to the vertebrae with their bone anchors, after which they are mechanically fastened towards each other by using spinal rods placed in grooves or U-shaped openings formed by the heads of the pedicle screws, thereby forming a row of pedicle screws along the spinal column. This allows for providing the mechanical support required for spinal stabilization for spinal fusion in a patient or organism.
To better reach the incision site and to threadably attach the pedicle screw to the vertebra, the pedicle screw, and in particular the head of the pedicle screw, is typically removably attached to a screw extender or a similar device, such as an elongated extension of the screw head or a blade-type screw head extension. The purpose of the screw extender and similar devices is to provide additional length to the head of the pedicle screw, allowing the operator or surgeon to act from outside the surgical incision while keeping the surgical incision open, and to help guide different tools and spinal rods to the head of the pedicle screw. The screw extender, which is configured to hold the pedicle screw, is typically a tubular longitudinal device that is much longer than the head of the pedicle screw and itself has a slot along its lateral side running in the longitudinal direction. When the pedicle screw head is connected to the screw extender, the longitudinal slot mates with the U-shaped opening in the screw head of the pedicle screw, thus allowing the spinal rod to be guided through the longitudinal slot into the U-shaped opening. The process of pushing the spinal rod down the longitudinal slot of the screw extender toward and into the pedicle screw heads is also referred to as rod reduction.
For example, U.S. patent No. 10,058,355 describes an orthopedic implant kit that provides pedicle screws, corresponding set screws, rods, and tools for manipulating these components, including a screw extender for holding the pedicle screws and a set screw driver for tightening the set screws relative to the screw head threads of the pedicle screws, the entire disclosure of which is incorporated herein by reference. U.S. patent No. 7,160,300 describes a rod reduction method in which an intermediate guide tool is attached to a bone screw, the intermediate guide tool having a tubular shape with a longitudinal shaped channel capable of guiding a rod from the guide tool to the bone screw attached thereto. As another example, U.S. patent No. 8,795,283 describes another set of orthopedic surgical systems for surgical intervention for spinal stabilization, the entire disclosure of which is incorporated herein by reference, including pedicle screws having heads for receiving rods and the tools required for the surgical intervention. The screw extender is made from a tube having two separable half-shells which are held together by a retaining ring so that a tubular shape can be formed. In yet another example, U.S. patent No. 8,262,662, which is incorporated herein by reference in its entirety, provides a system and method for delivering spinal connectors to spinal anchor sites in the spinal column. In one embodiment, a spinal implant and access device is provided that includes a U-shaped receiver member, a bone engaging member, an extension member, a spinal rod, and a set screw. The elongate member has a tubular shape.
As noted above, similar orthopedic spinal surgical concepts, tools and devices have been proposed for attaching rods to pedicle screws via set screws, such as U.S. patent No. 5,129,388, U.S. patent No. 5,520,689, U.S. patent No. 5,536,268, U.S. patent No. 5,720,751, U.S. patent No. 5,984,923, U.S. patent No. 6,056,753, U.S. patent No. 6,183,472, U.S. patent No. 6,258,090, U.S. patent No. 6,454,768, U.S. patent No. 6,648,888, U.S. patent No. 6,740,086, U.S. patent No. 7,618,442, U.S. patent No. 8,308,782, U.S. patent No. 8,876,868, U.S. patent publication No. 2006/0025771, and U.S. patent publication No. 2018/0289397, all of which are incorporated herein by reference in their entirety.
However, once the pedicle screws are attached to the vertebrae of the spine, the surgeon or operator can only see the screw extenders removably attached to the screw heads of the respective pedicle screws, which typically point out of the surgical incision required to attach the pedicle screws to the vertebrae. Typically, the screw heads are embedded in the tissue surrounding the incision unless the surgeon opens the incision. In this regard, prior to the rod reduction and rod fixation procedures, the surgeon or operator typically needs to select a spinal rod of the appropriate length, bend a spinal rod, or select a pre-bent spinal rod for placement into the U-shaped recesses of the pedicle screw heads. However, without being able to see the exact placement of the pedicle screws and their screw heads with the grooves for receiving the spinal rod, this is a difficult task that may lead to a trial-and-error procedure for determining the proper length, shape, and bend of a spinal rod that can be percutaneously inserted into the individual screw heads of the pedicle screws. This may result in a significant loss of time during surgery, increased risk of screw loosening or implant failure, and additional costs.
Solutions based on machine learning using Convolutional Neural Networks (CNNs) to detect the visible pedicle screw heads have been proposed in orthopedic surgery. See, for example, von Atzigen et al., "HoloYolo: A proof-of-concept study for marker-less surgical navigation of spinal rod implants with augmented reality and on-device machine learning," The International Journal of Medical Robotics and Computer Assisted Surgery, 2020, e2184. However, this method has a number of drawbacks: it relies on direct visual observation of the different screw heads of the pedicle screws attached to the vertebrae, and thus requires a fully open surgical site with the incision of the wound opened to its maximum; it requires relatively long data processing times for detection, with slow tracking refresh rates; and it has considerable detection uncertainty.
To avoid the drawbacks of camera-view-based imaging solutions, some methods have used C-arm fluoroscopy with X-ray projections to evaluate pedicle screw placement, allowing calculation of screw pose estimates based on biplane X-rays and fluoroscopic images using reflective markers. See Esfandiari et al., "A deep learning framework for segmentation and pose estimation of pedicle screw implants based on C-arm fluoroscopy," International Journal of Computer Assisted Radiology and Surgery, volume 13, issue 8, 2018, pages 1269-1282, and Fu et al., "Computer-Assisted Fluoroscopic Navigation of Pedicle Screw Insertion: An In Vivo Feasibility Study," Acta Orthopaedica Scandinavica, volume 75, issue 6, 2004, pages 730-735. However, these methods require complex and expensive computed tomography equipment and are also not suitable for direct use by orthopedic surgeons due to the additional operating steps that need to be performed.
Accordingly, there is a need for systems, methods, and devices to improve the use of spinal rods during surgery, particularly placement, implantation, preselection, and matching of spinal rods for specific surgical conditions, which have simplified use for the user and require significantly less cost to assist the user.
Disclosure of Invention
According to one aspect of the invention, a method for assisting an orthopedic procedure is provided. The method may be performed using a data processing device comprising a display device and an image capturing device. Preferably, the method comprises the steps of: capturing a sequence of images with the image capturing device such that a field of view of the image capturing device captures images of a plurality of screw extenders, each screw extender holding a pedicle screw, the plurality of screw extenders being disposed at a surgical incision of a body of a living being undergoing the orthopedic procedure; displaying at least some of the captured images on the display device to provide a live video feed; detecting the plurality of screw extenders using the data processing device based on the captured image sequence; first calculating an orientation and position of the detected plurality of screw extenders; second calculating a three-dimensional (3D) position of the screw head of each pedicle screw based on the first calculated orientation and position; and projecting and displaying, on the display device, each calculated 3D position of the plurality of screw heads with a graphical element of a graphical user interface, at a position corresponding to the screw head position projected onto the currently displayed image of the live video feed.
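The chain of calculating steps in this aspect, from tracked extender pose to screw-head 3D position to overlaid pixel position, can be sketched as follows. This is a minimal illustration in Python/NumPy, not the patented implementation: the extender length, camera intrinsics, and coordinate conventions are illustrative assumptions.

```python
import numpy as np

def screw_head_position(marker_pos, extender_dir, extender_len):
    """Estimate a screw head's 3D position from the tracked extender pose.

    The head lies at a fixed, known distance (the extender length) from the
    tracked marker, along the extender's longitudinal axis.
    """
    d = np.asarray(extender_dir, dtype=float)
    d = d / np.linalg.norm(d)  # normalise the direction vector
    return np.asarray(marker_pos, dtype=float) + extender_len * d

def project_to_pixel(point_3d, fx, fy, cx, cy):
    """Pinhole-camera projection of a camera-frame 3D point (metres) to pixels."""
    x, y, z = point_3d
    return (fx * x / z + cx, fy * y / z + cy)

# Example: marker 0.40 m from the camera, extender pointing away from it,
# screw head 0.30 m further along the extender axis (made-up values).
head = screw_head_position([0.05, 0.0, 0.40], [0.0, 0.0, 1.0], 0.30)
u, v = project_to_pixel(head, fx=800, fy=800, cx=320, cy=240)
```

The projected (u, v) pair is where a graphical element would be drawn on the current frame of the live video feed.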
According to another aspect of the present invention, a non-transitory computer readable medium having computer instructions recorded thereon is provided. The computer instructions are configured to perform a method for assisting in an orthopedic procedure when executed on a computer device operatively connected to a display device and an image capture device.
According to yet another aspect of the present invention, there is provided a computer system comprising an image capturing device, a display device and a data processing device operatively connected to the image capturing device and the display device. Preferably, the data processing apparatus is configured to perform a method for assisting an orthopedic surgery using augmented reality.
According to another aspect of the present invention, a method for assisting an orthopedic procedure is provided, in which a correction of a spinal column is determined based on the bend of a fixation rod. Preferably, the method is performed using a data processing device. Furthermore, preferably, the method comprises the steps of: scanning the fixation rod with an image capture device to obtain scan data of the fixation rod, the fixation rod having been bent for spinal correction; first calculating curvature data of the fixation rod based on the scan data; receiving data on the positions of attachment points of the fixation rod to the spine, the positions of the attachment points having been determined based on position data of the screw heads of pedicle screws attached to vertebrae of the spine; second calculating corrected positions of the attachment points by taking into account the curvature data of the fixation rod from the first calculating step, the corrected positions being based on the correction given to the positions of the attachment points when the fixation rod is attached to them; third calculating spinal parameters of the corrected spine based on the data of the corrected positions of the attachment points; and displaying the corrected spinal parameters of the spinal column on a display device.
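The first and second calculating steps, fitting curvature data to the scanned rod and pulling the attachment points onto the bent rod, might be sketched as follows. This is a hypothetical Python/NumPy sketch: the 2D parameterisation (x along the rod axis, y the sagittal offset) and the low-degree polynomial fit are illustrative assumptions, not the patent's method.

```python
import numpy as np
from numpy.polynomial import Polynomial

def rod_curvature(scan_points, degree=2):
    """Fit a low-degree polynomial to scanned rod points.

    scan_points: (x, y) pairs taken from the rod scan data, with x measured
    along the rod axis and y the sagittal offset.
    """
    pts = np.asarray(scan_points, dtype=float)
    return Polynomial.fit(pts[:, 0], pts[:, 1], degree)

def corrected_attachment_points(attachment_xs, rod_poly):
    """Corrected attachment positions: each point is pulled onto the rod curve."""
    xs = np.asarray(attachment_xs, dtype=float)
    return np.column_stack([xs, rod_poly(xs)])

# Example: a rod scanned along a gentle parabolic bend y = 0.1 * x^2.
poly = rod_curvature([(x, 0.1 * x * x) for x in range(6)])
corrected = corrected_attachment_points([1.0, 2.0, 3.0], poly)
```

The corrected attachment positions would then feed the third calculating step, the computation of corrected spinal parameters.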
According to yet another aspect of the present invention, there is provided a non-transitory computer readable medium having computer instructions recorded thereon. The computer instructions are configured to perform a method for determining a spinal correction based on the bend of a fixation rod, for assisting an orthopedic procedure, when executed by a computer device operatively connected to a display device and an image capture device.
According to yet another aspect of the present invention, there is provided a computer system comprising an image capturing device, a display device and a data processing device operatively connected to the image capturing device and the display device. Preferably, the data processing device is configured to perform a method for determining a spinal correction based on the bend of a fixation rod, for assisting an orthopedic procedure.
According to another aspect of the present invention, a method for assisting spinal orthopedic surgery is provided. Preferably, the method is performed with a data processing device comprising a display device and an image capturing device. Furthermore, preferably, the method comprises the steps of: capturing a sequence of images with the image capturing device such that a field of view of the image capturing device captures images of at least one of a plurality of pedicle markers respectively placed on a plurality of guide wires, or the plurality of guide wires themselves, the plurality of pedicle markers or the plurality of guide wires being arranged at a surgical incision of a body of a living being undergoing the orthopedic operation; providing a live video feed on a display device by displaying at least some of the captured images or by utilizing a direct view through a transparent display device; detecting the plurality of pedicle markers or the plurality of guide wires using the data processing device based on the captured image sequence; first calculating an orientation and position of the detected plurality of pedicle markers or the detected plurality of guide wires; and second calculating pose data information for at least two vertebrae based on the orientation and position, from the first calculating step, of at least one of the detected plurality of pedicle markers or the detected plurality of guide wires attached to the vertebrae.
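The second calculating step, deriving pose data for a vertebra from its two detected pedicle markers or guide wires, could look roughly like this. It is an illustrative sketch only: representing the pose as the midpoint, a lateral unit axis, and an axial rotation angle is an assumption, not something the patent specifies.

```python
import numpy as np

def vertebra_pose(left_marker, right_marker):
    """Rough vertebra pose from its two tracked pedicle entry points.

    centre   = midpoint of the two markers,
    lateral  = unit vector from the left to the right marker,
    rotation = angle of that axis in the transverse (x-y) plane, in degrees.
    """
    l = np.asarray(left_marker, dtype=float)
    r = np.asarray(right_marker, dtype=float)
    centre = (l + r) / 2.0
    lateral = (r - l) / np.linalg.norm(r - l)
    rotation_deg = float(np.degrees(np.arctan2(lateral[1], lateral[0])))
    return centre, lateral, rotation_deg

# Example: two pedicle markers 30 mm apart on an unrotated vertebra.
centre, lateral, rot = vertebra_pose([0.0, 0.0, 0.0], [0.03, 0.0, 0.0])
```

Repeating this for each instrumented vertebra yields the per-vertebra pose data from which spinal curvature can be assessed.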
The above and other objects, features and advantages of the present invention, and the manner of attaining them, will become more apparent and the invention itself will be best understood by reference to the following description and appended claims, taken in conjunction with the accompanying drawings, which illustrate some preferred embodiments of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate presently preferred embodiments of the invention and, together with the general description given above and the detailed description given below, serve to explain features of the invention.
Fig. 1A shows a perspective view of an exemplary and simplified location or site for performing an orthopedic procedure, showing a living being or patient L on an operating table with a surgical incision SI and a user, operator, surgeon, medical assistant O holding an exemplary data processing apparatus 100 for performing a method for presenting different types of spinal rods for the orthopedic procedure;
FIG. 1B shows an exemplary simplified flowchart depicting different steps of a method for presenting different types of spinal rods for orthopedic surgery using augmented reality in accordance with an aspect of the present invention;
FIG. 1C shows a side view of an exemplary screw extender SE and pedicle screw assembly having a bone anchor BA and a screw head SH to visualize the various elements of the exemplary assembly, as a non-limiting example of a screw extender and pedicle screw for use in the methods and systems presented herein;
FIG. 1D shows a simplified schematic perspective view of a surgical incision SI having an exemplary number of six (6) screw extenders SE1-SE6 protruding therefrom, wherein each screw extender SE is equipped with optical markers OM1-OM6 for detecting and tracking the screw extender;
FIG. 1E shows a simplified perspective view of an exemplary marking device 50 removably positionable onto a distal end 60 of a screw extender SE, the marking device 50 having an optical marking code OM and an attachment device 55 for removable attachment to the screw extender SE;
FIGS. 2A-2N illustrate exemplary screenshots of different stages of the method, preferably displayed on a graphical user interface of a data processing apparatus, showing different aspects of augmented reality used on a display screen for orthopedic surgery assistance; and
FIG. 3A shows a schematic simplified representation of a spinal column SC having seven (7) exemplary vertebrae V1-V7, illustrated from a rear view, for each vertebra V, two attachment points AP have been determined using the methods described herein, and different parameters that may be calculated and displayed by some steps of the methods (e.g., steps C70, C75, D70, D80) are visualized, including position and posture information PDI_V for each vertebra V1-V7, curvature data SCD for the currently uncorrected spinal column, and data about corrected spinal curve CSC for the corrected spinal column;
Fig. 3B shows an exemplary perspective view of a screw extender SE placed within a surgical incision SI, the screw extender having a tool attached thereto, such as a screwdriver SD for attaching pedicle screws to vertebrae V, the tool having two (2) exemplary optical detection markers OM disposed thereon;
Fig. 3C shows a side view of three (3) screw extenders SE1 to SE3 that have been moved to one predetermined side, positioned at their outermost angular position relative to the screw heads SH1 to SH3 and bone anchors BA1 to BA3, exploiting the fixed geometric relationship between the screw extenders and the bone anchors and screw heads, in order to improve the accuracy of the calculation of the positions of the vertebrae and the spinal column SC;
FIG. 4 illustrates an exemplary flow chart of a method 500 for scanning, calculating and displaying rod data RD of a real spinal stabilization rod R and calculating a virtual spinal correction based on the rod data RD to visualize the virtual correction of the spinal column SC in accordance with another aspect of the invention;
fig. 5 shows an exemplary flowchart of a method 600 for determining different types of information characterizing a spinal column SC prior to placement of a pedicle screw PS by detecting a guidewire GW or pedicle marker PM visible from a surgical incision SI; and
Fig. 6 shows an exemplary simplified cross-sectional representation of a vertebra having two bores DH1, DH2, two guide wires GW1, GW2 placed into the bores DH1, DH2, respectively, and two pedicle markers PM1, PM2 attached to the guide wires GW1, GW2, respectively, wherein an optical marker OM is provided by a removably or fixedly attached optical marker portion 50, according to an aspect of the invention.
Identical reference numerals have been used, where possible, to designate identical elements that are common to the figures herein. Moreover, the images in the drawings are simplified for illustrative purposes and may not be depicted to scale.
Detailed Description
Fig. 1A shows a perspective view of a site where an orthopedic operation is performed, showing a living being or patient L on an operating table with a surgical incision SI and a user, operator, surgeon, or medical assistant O holding an exemplary data processing apparatus 100 for performing a method for presenting different types of spinal rods for the orthopedic operation. Fig. 1B shows an exemplary simplified flowchart depicting the different steps of a method, according to one aspect of the present invention, for presenting different types of spinal stabilization rods for stabilizing and fusing vertebrae of a spinal column SC of the living being or patient L, the method being performed using augmented reality by a computing device with an image capture device and a display screen during a spinal orthopedic operation at the surgical incision SI. As an exemplary embodiment, an orthopedic spinal procedure is shown and described, wherein the method is performed and used to assist the user, operator, surgeon, or medical assistant O in selecting an appropriate stabilization rod for attachment to two or more pedicle screws. For example, the method may propose a particular rod having a particular pre-curved shape among a plurality of rods having different curved shapes, or may propose a particular curvature or curve of the rod, which may then be bent by the surgeon O for the surgery.
Note that spinal orthopedic surgery is merely exemplary in nature and that the same approach using augmented reality may be used for other types of orthopedic surgery (e.g., without limitation, fracture repair surgery requiring stabilization by a rod or other types of fracture or reconstructive surgery using external fixation), where a stabilizer rod or another type of stabilization device is required for attachment to different types of pre-placed bone screws having detectable screw extenders attached thereto.
Prior to performing method 200, an orthopedic procedure is underway, wherein surgeon O begins and performs the orthopedic procedure, for example, based on a conventional surgical workflow. Thus, a surgical incision SI is made in the living being or patient L, and for purposes of illustration and description, it is assumed that at least two pedicle screws PS1, PS2 are placed into respective vertebrae of the spinal column SC of the living being L; in the illustrated variation, three (3) pedicle screws PS1, PS2, PS3 are placed. This number is merely exemplary and is selected for illustration purposes, and the method 200 may be performed with a different number of screw extenders SE and corresponding pedicle screws. Typically, two pedicle screws are used per vertebra, one on each side. Each pedicle screw PS1, PS2, PS3 is attached with its screw head SH1, SH2, SH3 to a respective screw extender SE1, SE2, SE3. An example of at least a portion of such a procedure is shown in U.S. patent No. 10,058,355, see FIGS. 18-38, which is incorporated herein by reference in its entirety.
Once the plurality of pedicle screws PS1, PS2, PS3 are placed in their final positions relative to the respective vertebrae V1, V2, V3, for example by screwing them into the respective vertebrae via the bone anchoring elements of the pedicle screws PS1, PS2, PS3, using the screw extenders SE1, SE2, SE3 and screwdrivers, as shown in U.S. patent No. 10,058,355, the surgeon or operator O needs to select or provide a spinal stabilization rod R that is bent or shaped such that it can be placed into each of the receiving openings of the screw heads SH1, SH2, SH3 of the plurality of pedicle screws PS1, PS2, PS3. Preferably, each screw head of the pedicle screws PS1, PS2, PS3 has a U-shaped recess for receiving the rod R, and is threaded such that the rod R can be attached to the screw head by a set screw. In order to determine the shape or curvature of the rod R to be placed into the screw heads SH1, SH2, SH3 of the pedicle screws PS1, PS2, PS3, it is desirable to have information on the position and orientation of the screw heads SH1, SH2, SH3 relative to each other, so that at least one of the shape, curvature or length of the rod R can be determined prior to placement and connection to the screw heads SH1, SH2, SH3.
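As a minimal illustration of the last point: once the relative screw-head positions are known, a lower bound on the required rod length, and a pick from a set of stocked lengths, can be computed. The end margin and the stocked lengths below are made-up values, and a real system would also account for the rod's curvature.

```python
import numpy as np

def rod_length_estimate(screw_head_positions, end_margin=0.01):
    """Polyline length (metres) through the ordered screw-head positions,
    plus a margin at each end so the rod overhangs the outer screws."""
    pts = np.asarray(screw_head_positions, dtype=float)
    segments = np.diff(pts, axis=0)  # vectors between consecutive heads
    return float(np.sum(np.linalg.norm(segments, axis=1))) + 2.0 * end_margin

def pick_stock_rod(required_length, stock_lengths):
    """Shortest stocked rod that is at least the required length, or None."""
    candidates = [s for s in stock_lengths if s >= required_length]
    return min(candidates) if candidates else None

# Example: three screw heads roughly 30 mm apart with a slight sagittal bow.
heads = [(0.0, 0.0, 0.0), (0.03, 0.0, 0.004), (0.06, 0.0, 0.0)]
needed = rod_length_estimate(heads)
chosen = pick_stock_rod(needed, [0.07, 0.09, 0.11])
```

The same relative-position data would drive the proposal of a pre-bent rod shape among the available candidates.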
At this stage of the procedure, surgeon O may use data processing device 100 to begin method 200 for presenting different types of spinal stabilization rods for stabilizing and fusing vertebrae of spinal column SC of organism or patient L. The steps of method 200 may be performed by dedicated application software comprising computer instructions executable on a data processor of data processing apparatus 100 to perform aspects of the methods described herein, method 200 being configured to operate and display a graphical user interface GUI with user commands, for example as a graphical overlay on a live video feed that may be shown on display device 120 of data processing apparatus 100. The display device 120 may be a display screen, such as a touch screen, that also includes touch sensitive features for information input. Preferably, the data processing device 100 may be a portable device such as, but not limited to, a smart phone, a cellular phone or a tablet computer, or another type of handheld data processing device. Moreover, the data processing apparatus 100 may also include a graphics processor supporting image data processing and generation of live video feeds and GUIs, as well as other graphical elements displayed on the GUIs.
Once all pedicle screws PS1, PS2, PS3 and their respective screw extenders SE1, SE2, SE3 are placed, the method 200 may begin, as exemplarily illustrated in FIGS. 1A and 1B. A first step U10 is performed, wherein the method 200 is started, for example by the surgeon or operator O launching an application. Next, the method 200 proceeds to step D10, wherein a live video feed is generated and displayed on a GUI of the display device 120, such as a touch screen of a smartphone. This may be done by touching a button or active graphical element of the GUI for starting the live video feed displayed by the application, or the live video feed may start automatically when the application is launched in step U10. Thus, the data processing device 100 starts capturing a sequence of images with the image capturing device 110 (e.g., a camera unit built into a smartphone) and simultaneously displays such images in real time on the display screen 120, for example in a window of the graphical user interface GUI or on the full screen of the display device or screen 120. The live video feed is based on the captured images of the image capture device 110 and is displayed as a real-time video sequence to allow additional graphical elements, animations and other objects to be overlaid for the augmented reality representation. Note that step D10 is an optional step, as the method may also be performed with a transparent or semi-transparent display screen or device, for example with wearable Augmented Reality (AR) glasses, a head-mounted display with a transparent or semi-transparent display screen, or a head-up display (HUD), wherein the surgical incision SI may be observed directly.
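At the frame level, the augmented reality overlay of step D10 amounts to drawing graphical elements at projected pixel positions on each captured image before it is displayed. A minimal sketch in pure NumPy (standing in for a real camera/GUI pipeline, which is assumed, not described here) might be:

```python
import numpy as np

def overlay_markers(frame, pixel_points, radius=3, colour=(0, 255, 0)):
    """Return a copy of a video frame with a filled square marker drawn at
    each projected screw-head pixel position (u, v)."""
    out = frame.copy()  # never draw on the original captured frame
    h, w = out.shape[:2]
    for u, v in pixel_points:
        u, v = int(round(u)), int(round(v))
        y0, y1 = max(v - radius, 0), min(v + radius + 1, h)
        x0, x1 = max(u - radius, 0), min(u + radius + 1, w)
        out[y0:y1, x0:x1] = colour  # clipped to the frame borders
    return out

# Example: one marker overlaid on a blank 480x640 RGB frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
shown = overlay_markers(frame, [(320.0, 240.0)])
```

In a live feed, this overlay step would run once per captured frame, with the pixel points recomputed from the latest tracked extender poses.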
Then, optionally, the method 200 proceeds to step D20, wherein an instruction or command CMD may be displayed or otherwise provided to the surgeon or operator O, for example to request basic information for starting the method, such as calibration information, orientation information, or other types of information needed for the subsequent scanning step U30, to be manually input by the surgeon or operator O. In addition, this step D20 may also provide graphical elements that allow the user to input data for the method 200, in particular data related to the instruction or command CMD. This may be done with graphical elements overlaid on the live video feed, for example with text cues, graphical cues, or one or more selection buttons. It is also possible that step D20 provides audio information in the form of voice prompts to assist the surgeon or operator O in responding to the instruction or command CMD. Also, step U20 may be performed, wherein the surgeon or operator O may input data in response to the instruction or command CMD, as exemplarily shown in the screenshot of FIG. 2A. These steps may be performed at least partially simultaneously, while the surgeon or operator O continuously films the surgical incision SI to generate the live video feed of step D10, allowing the display of the command CMD and buttons. For example, in the illustrated variant, step D20 displays a text box in the GUI requesting information from the surgeon or operator O about the orientation of the living being L relative to the live video feed view, the text box providing additional information related to the information request.
Also, step D20 may display two symbolized heads on the left and right sides of the GUI and overlay the two symbolized heads with a graphical icon such that the surgeon or operator O may select one of the two graphical icons to indicate on which side the head of the living being L is located relative to the position of the data processing apparatus 100. Once one of the graphical icons representing the header is selected, the header may be highlighted and the requested information may be confirmed using a confirm button overlaid on the live video feed of the GUI, as shown in fig. 2A.
Generally, in the context of this specification, when a surgeon or operator O enters data in steps of method 200 or other methods described herein, such as, but not limited to, step U20, the requested data may be entered using voice or speech recognition software operating on data processing device 100, using microphone 130 of data processing device 100, rather than entering the data manually by touching graphical elements such as buttons on a GUI with a touch-screen operation. This allows the surgeon or operator O to provide data or information in response to the instruction or command CMD with a voice command, and such an implementation of voice and speech recognition allows the surgeon or operator O to avoid, at least in part, touching the display screen 120 during the method 200. Thereafter, the input of data may be confirmed by audio, e.g., with a voice prompt, or with a different graphical element displayed, using one or more speakers that are part of the data processing apparatus 100 or operatively connected thereto.
However, it is also possible that this step U20 is automated by a computer-based process (e.g. by an image data processing algorithm), executed by the data processor and memory of the data processing device 100, to detect the orientation of the living being L from the captured image sequence providing the image data for the live video feed. As an example, this may be done by using a trained neural network capable of detecting the orientation of the living being L based on training data, by using optical markers attached to the living being L or to a medical or surgical bed or table (as explained further below with respect to the optical markers OM attached to the screw extender SE), or by detecting the orientation of the medical or surgical bed or table with a pattern matching algorithm, which provides information related to the orientation of the living being L.
As explained above with respect to steps U20 and D20, while displaying the live video feed of step D10 with the GUI, different user commands and information may be displayed on the GUI at different times during execution of the method 200, in order to provide user indications and to receive user information and instructions. For example, different text prompts or text boxes with text information may be displayed as an overlay on the live video feed, giving the surgeon or operator O feedback on the type of processing performed or on the status of the method 200, or requesting user input via icons or buttons. It is also possible to request different information and instructions by audio (e.g. voice prompts). Also, a graphical element or icon may be displayed that allows a menu for configuring the method 200 to be opened or pulled down, for example for locking image quality and image capture parameters and features such as, but not limited to, zoom or image cropping, automatic image correction settings, automatic color and white balance adjustment, or wide angle settings. Moreover, a graphical icon may be provided that may be touched or otherwise selected by the surgeon or operator O to return to the previous step of method 200.
Next, in step U30 of scanning the screw extenders SE, the surgeon or operator O is notified or encouraged, for example by a text prompt, to capture a sequence of images of the screw extenders SE protruding from the surgical incision SI, while the live video feed is displayed on the GUI and step D10 is continuously performed. An exemplary screen shot of this step is shown in fig. 2B, in which a text box requests the user or operator O to move the data processing device 100 to scan all screw extenders SE. Next, for step D10, the surgeon or operator O directs the view angle and field of view of the image capture device 110 to the living being L and the surgical incision SI such that all screw extenders SE (e.g., three exemplary screw extenders SE1, SE2, SE3) are viewable in the live video feed of the GUI, to capture the sequence of images at and around the surgical incision SI. This is illustrated schematically in fig. 2C with a screen shot in which a text box indicating that a scan of the screw extenders is in progress may be displayed. With respect to fig. 2C, it can also be seen that the field of view of the image capture device 110 displayed with the GUI shows five (5) screw extenders SE, three (3) of which are in front, attached on the right side to the pedicle bone of the posterior spine and entering the vertebral body, and two (2) of which are on the left side.
Simultaneously with or after the beginning of the scanning step U30, while the surgeon or operator O is still photographing the surgical incision SI and the screw extenders SE, an image data processing step C10 is performed, i.e. a step of detecting the different screw extenders SE. This may be accomplished by different types of image processing algorithms performed on the captured images. For example, a step-by-step method may be used, in which first one screw extender SE is searched for, detected, and its data saved, for example by means of a three-dimensional coordinate data model. Thereafter, the next screw extender SE is searched for, detected, and its data saved, and these sub-steps are repeated until all screw extenders SE in the field of view of the image capture device 110 are detected and saved. Preferably, the detecting step C10 is performed while the data processing device 100 is moved, which means that the viewing angle and viewing window onto the screw extenders SE and the surgical incision SI are variable and changing.
As an example, the detecting step C10 may be performed by using a three-dimensional (3D) pose and position estimation algorithm based on a rigid body model, together with a tracking algorithm, to detect and track the shape of the screw extender SE, and thereafter extracting the pose of the screw extender SE to provide a dataset of pose data information PDI. Since the shapes of all screw extenders SE are known and their shapes and dimensions are the same, a three-dimensional model may be used for this detection step, such as a Computer Aided Design (CAD) data model. After detecting one of the plurality of screw extenders SE, this step generates pose data information PDI, which may be stored and updated, and which may include the coordinate reference position, angles, and rotational orientation of the screw extender SE in a reference coordinate system, e.g. as different vectors in a real-world coordinate system such as Euclidean space. It is also possible that the pose data information PDI includes only the coordinate positions of the different screw extenders SE, to simplify the calculation. The pose data information PDI may be calculated in different forms and coordinate spaces, but in a preferred embodiment the coordinate data is three-dimensional data referencing the Euclidean coordinate space. Once the pose data information PDI for all of the screw extenders SE has been generated, a dataset or table may be generated from all of the collected pose data information PDI. Although the body shape of the screw extender SE is known, the screw extender SE is only partially visible in the image sequence, because the leading end of the screw extender SE is interconnected with the head of the pedicle screw PS within the surgical incision, as can be seen in the exemplary screen shot of fig. 2C.
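The dataset or table of pose data information PDI described above might be organized as follows. This is a minimal sketch, not the patented implementation; the record fields and class names (`PoseDataInformation`, `PdiDataset`) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PoseDataInformation:
    """Hypothetical PDI record: 3D position and orientation of one
    screw extender in a Euclidean reference coordinate system."""
    extender_id: str
    position: tuple   # (x, y, z), e.g. in millimetres
    rotation: tuple   # (rx, ry, rz) Euler angles in degrees

class PdiDataset:
    """Table of PDI records, one per detected screw extender,
    updated as the coarse estimate is refined during scanning."""
    def __init__(self):
        self._records = {}

    def update(self, pdi: PoseDataInformation):
        # Overwrites any earlier, coarser estimate for the same extender.
        self._records[pdi.extender_id] = pdi

    def all_detected(self, expected_ids):
        # True once every expected screw extender has a PDI entry.
        return set(expected_ids) <= set(self._records)

dataset = PdiDataset()
dataset.update(PoseDataInformation("SE1", (12.0, -3.5, 40.2), (5.0, 2.0, 90.0)))
dataset.update(PoseDataInformation("SE2", (25.1, -2.9, 41.0), (4.0, 1.5, 88.0)))
```

A simplified variant storing only coordinate positions, as the text mentions, would drop the `rotation` field.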
To this end, a robust computer vision tracking algorithm needs to be executed, with which it is also possible to detect a partial shape of the object and generate the pose data information PDI. Since each surgical incision is a priori unknown and can vary widely between different incision locations, living beings L, and placements of surgical tools, it is preferable to use a visual tracking algorithm that does not require prior knowledge of the scene to be tracked.
U.S. patent publication 2019/0355150, which is incorporated by reference herein in its entirety, describes examples of model-based tracking algorithms that may be used, wherein a trained neural network is used for object detection. With trained neural networks, such as, but not limited to, CNNs and deep learning based on images of screw extenders with known pose data information PDI, training data may be established that allows a partial view of a screw extender SE to be directly linked to its pose data information PDI. As another example, VisionLib™ from the company Visometry GmbH, a robust and model-based augmented reality tracking algorithm, may be used.
As another example, the detecting step C10 may be performed by using a contour detection algorithm that first detects the contour of each screw extender SE, and thereafter maps each detected contour to a two-dimensional (2D) projection of a three-dimensional (3D) model of the screw extender, in order to determine the dataset of pose data information PDI.
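The contour-based variant can be illustrated with a toy boundary extractor operating on a binary silhouette mask; real implementations would use a full image-processing pipeline, so this sketch and its names are assumptions, not the patented method.

```python
def extract_contour(mask):
    """Return the boundary pixels of a binary mask (1 = screw extender
    silhouette): foreground pixels with at least one 4-connected
    background neighbour. A toy stand-in for a contour detector."""
    h, w = len(mask), len(mask[0])
    contour = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 1:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                # Pixels on the image border or next to background belong
                # to the contour.
                if not (0 <= ny < h and 0 <= nx < w) or mask[ny][nx] == 0:
                    contour.add((x, y))
                    break
    return contour

# 5x5 mask with a solid 3x3 square: all square pixels except the centre
# touch background, so the contour has 8 pixels.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
contour = extract_contour(mask)
```

The extracted contour would then be matched against 2D projections of the 3D screw extender model to estimate the pose.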
With the detection step C10, the detected screw extenders SE may also be tracked and updated during capturing and displaying of images on the live video feed. This may be necessary because the surgeon or operator O will move the shooting position and orientation relative to the screw extenders SE during the scanning step U30, so that more information will be collected to further refine the dataset of pose data information PDI. However, it is also possible that the screw extenders SE themselves may move slightly relative to each other, which may change their coordinate and orientation data. In this regard, the dataset of pose data information PDI may change as a function of time, and the data structure or table containing the dataset of pose data information PDI may be periodically updated during the detection step C10.
In a variant, step C10 does not require the placement of any screw extender SE onto the pedicle screws PS, and is performed with only the pedicle screws PS attached to the vertebrae V. For example, the screw heads SH of the respective pedicle screws PS may have optical markers OM printed, etched, engraved, patterned or otherwise disposed thereon, to more robustly detect the screw heads SH of the respective pedicle screws PS by a tracking algorithm. For example, the optical marker OM may be such that it carries some redundant information, so that it can still be detected even if the marker OM is partially covered by flesh, muscle, fat, or other body parts at the surgical incision SI. See, for example, Köhler et al., "Robust Detection and Identification of Partially Occluded Circular Markers," In International Conference on Computer Vision Theory and Applications (VISAPP), volume 1, pages 387-392, 2010; see also prior art references on optical markers. In a variant, prior to step C10 of scanning the screw heads SH instead of the screw extenders SE, each screw head SH is provided with a removable optical marking portion 50 having an optical marker OM arranged thereon, as shown in fig. 1E, but this time not placed onto the screw extender SE, but directly onto each screw head SH of the pedicle screw PS. The interconnection between the optical marking portion 50 and the screw head SH may be accomplished by configuring the end of the optical marking portion 50 with an interconnection element complementary or corresponding to the screw head SH of the pedicle screw PS, such as a press-fit engagement or snap lock or other type of geometrically defined lock between the optical marking portion 50 and the screw head SH, as discussed in U.S. patent No. 10,058,355, so that the optical marking portion 50 may be connected to the screw head SH in a precisely defined location and still be easily removed, as it is used only for detection.
Thus, in the interconnected state or position, the geometric relationship between the optical marker OM, the optical marking portion 50, and the screw head SH of the pedicle screw PS is defined and not variable. This will reduce or even completely eliminate reliability problems in the detection of the pedicle screws PS. However, it is also possible that at least part of the method relies on the detection of the pedicle screws PS using an image processing algorithm without any additional optical detection assistance.
As another variation of step C10, each pedicle screw PS may be equipped with one or more radio frequency identification (RFID) tags, preferably passive RFID tags, that allow the detection of its three-dimensional position in space based on different detection techniques and RFID detection antennas. This may be accomplished, for example, by using an array of RFID tags attached to the pedicle screw PS, such as to the screw head SH, where the different RFID tags have different orientations from each other, e.g. multiple RFID tags oriented along different axes in three-dimensional coordinate space, and by using an RFID detection antenna that is movable relative to the RFID tags of the pedicle screw PS, in order to improve positional accuracy. See, e.g., Zhang et al., "3-Dimensional Localization via RFID Tag Array," In 2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), pages 353-361, IEEE, 2017. It is also contemplated to use a plurality of reference RFID tags that are not attached to the pedicle screws PS to provide different known reference locations, e.g., in a matrix arrangement, with one or more RFID tags attached to the pedicle screw PS, e.g., to the screw head SH; see, e.g., Liu et al., "A Three-Dimensional Localization Algorithm for Passive Radio-Frequency Identification Device Tags," International Journal of Distributed Sensor Networks, volume 13, issue 10, 2017, article 1550147717736176.
As another variation, ultra-wideband (UWB) RFID tags can be used and detected by different types of detection algorithms, such as by backscatter modulation or UHF and UWB modulation, using multiple reader antennas; see, for example, Dardari et al., "Ultrawide Bandwidth RFID: The Next Generation?", Proceedings of the IEEE, volume 98, issue 9, 2010, pages 1570-1582. In this case, the different elements such as the reference RFID tags, the one or more reader antennas, and the data processing means for performing the data processing algorithm on the signals read from the RFID tags may be part of the system shown in fig. 1A and interconnected with the data processing device 100, to deliver coordinate data of the screw heads SH and the attachment points AP, and to deliver reference frame information or other data allowing the screw heads SH and the attachment points AP to be located in a particular reference frame for further processing by the method 200. However, it is also possible that the raw data of the reader antennas is provided to the data processing device 100 via a network, to determine the coordinate positions of the screw heads SH and the attachment points AP at the device 100.
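The idea of locating a tag from range measurements to several reference antennas can be sketched with classical trilateration: with ranges to four non-coplanar anchors, the sphere equations linearize against the first anchor and yield a 3x3 linear system. This is a generic geometric sketch under idealized noise-free assumptions, not the algorithm of the cited RFID papers.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def trilaterate(anchors, ranges):
    """Locate a tag from exact ranges to four non-coplanar reference
    antennas, by subtracting the first sphere equation from the others
    and solving the resulting linear system with Cramer's rule."""
    (x0, y0, z0), r0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi, zi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0), 2 * (zi - z0)])
        b.append(r0 * r0 - ri * ri
                 + xi * xi + yi * yi + zi * zi
                 - (x0 * x0 + y0 * y0 + z0 * z0))
    d = det3(A)
    est = []
    for col in range(3):
        m = [row[:] for row in A]
        for row in range(3):
            m[row][col] = b[row]
        est.append(det3(m) / d)
    return tuple(est)

# Synthetic check: a tag at a known position, ranges computed exactly.
tag = (0.2, 0.3, 0.4)
anchors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
ranges = [math.dist(a, tag) for a in anchors]
est = trilaterate(anchors, ranges)
```

With noisy real-world ranges, a least-squares solution over more than four anchors would be used instead.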
As a further variant, the screw heads SH of the pedicle screws PS can be detected by thermal imaging, in particular due to the different thermal radiation emitted from the metal screw head and from the surrounding tissue of the surgical incision SI, on the premise that the screw head SH will be colder than its environment in the surgical incision SI. For example, infrared thermal imaging may be used to measure the infrared energy emitted from the surgical incision SI, the exposed tissue and bone, and an implant such as the pedicle screw PS and its screw head SH, and the infrared energy may be converted into a bolometric image indicative of the surface temperature profile. Such an image may be subjected to an image data processing algorithm for detecting the screw heads SH, or even for detecting the screw extenders SE. An exemplary thermal imaging camera that may be used for this purpose is the infrared (IR) thermal imager FLIR T335 from FLIR Systems, Inc. This would also require the use of reference marks or a frame of reference viewable both by the thermal imaging camera (not shown) and by the image capture device 110 operating in the visible range, to provide a reference location for coordinates, such as a scale, mark, or the like. The screw heads SH or screw extenders SE may thus be positioned and detected from the thermal image based on an image processing algorithm, such as a model-based pattern matching algorithm, or other types of artificial-intelligence-based detection algorithms. To this end, the system as shown in fig. 1A would also include a thermal imaging camera that may provide thermal imaging data to the data processing device 100 over a network.
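The premise that metal screw heads appear colder than surrounding tissue can be illustrated with a toy thermogram detector: threshold the temperature map and return the centroid of each connected cold region as a screw-head candidate. The grid, temperatures, and threshold below are illustrative assumptions, not calibrated values.

```python
def find_cold_spots(temps, threshold_c):
    """Candidate screw-head locations in a thermogram: centroids of
    4-connected regions of pixels colder than the threshold. A toy
    stand-in for a full thermal-image detection algorithm."""
    h, w = len(temps), len(temps[0])
    seen, centroids = set(), []
    for y in range(h):
        for x in range(w):
            if temps[y][x] >= threshold_c or (x, y) in seen:
                continue
            # Flood-fill one cold region starting from this pixel.
            stack, region = [(x, y)], []
            seen.add((x, y))
            while stack:
                cx, cy = stack.pop()
                region.append((cx, cy))
                for nx, ny in ((cx - 1, cy), (cx + 1, cy),
                               (cx, cy - 1), (cx, cy + 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and (nx, ny) not in seen
                            and temps[ny][nx] < threshold_c):
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            centroids.append((sum(p[0] for p in region) / len(region),
                              sum(p[1] for p in region) / len(region)))
    return centroids

# Synthetic scene: tissue around 34 °C, two cooler metal screw heads.
temps = [[34.0] * 6 for _ in range(4)]
temps[1][1] = temps[1][2] = 25.0   # first screw head (two pixels)
temps[2][4] = 25.0                 # second screw head
spots = find_cold_spots(temps, threshold_c=30.0)
```

The resulting image coordinates would still need mapping into the common reference frame via the reference marks mentioned above.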
This information about the positions of the screw extenders SE or the positions of the screw heads SH can be used by steps D25, D30 and U40 to provide graphical primitives GP that can be overlaid on the live video feed to highlight the different screw extenders SE or, in a variant, to highlight the different detected screw heads SH of the pedicle screws PS (if no screw extenders SE are placed on them), in order to select and deselect the different pedicle screws PS that need to be considered for the geometry and rod template calculations of steps C20, C30.
Also, in a variant, the step U30 of scanning, the step C10 of detecting, and the step D25 of displaying primitives may be performed iteratively and thus repeated, for example for each detected screw extender SE. This variation is illustrated with the representations of fig. 2L and 2M, wherein an exemplary number of four (4) screw extenders SE1 to SE4 are tracked and detected. For example, in the scanning step U30, the search and scanning of the screw extenders SE may be further assisted by using a graphical locator element GLE displayed and overlaid on the live video feed, see for example fig. 2M. For example, the displayed graphical locator element GLE may be a graphical representation of the screw extender (e.g., a rendering or projection of the screw extender SE on the screen), a contour of the screw extender SE (e.g., a translucent graphical representation of the screw extender SE), or another type of graphical locator element GLE (e.g., a crosshair, graticule, cursor, arrow, or indicator) that may be used as a locator to scan and detect the screw extenders SE. For example, the graphical locator element GLE may be presented at a fixed position on the screen relative to the screen, e.g., approximately in the center of the displayed field of view. This allows the surgeon or operator O to move the device 100 for step U30, thereby also moving the photographed or captured scene with the surgical incision SI relative to the graphical locator element, as photographed by the image capture device 110 of the data processing device 100.
In the variant shown in fig. 2M, the graphical locator element GLE is fixedly represented in an upright position in the centre of the GUI screen, presented as a translucent element in the form of an outline of a screw extender SE on the live video feed, with a position and orientation allowing the operator O to move the device 100 so that the graphical locator element GLE can be matched with one of the screw extenders SE1 to SE4 protruding from the surgical incision SI. In the variant shown, the longitudinal grooves of the screw extenders are also represented by the graphical locator element GLE, acting as an orientation aid for the operator O with respect to the orientation angle, for holding and moving the device 100 to detect the screw extenders SE1 to SE4.
Upon partial or complete visual contact or touching of the graphical locator element GLE with one of the screw extenders SE1 to SE4 captured by the video feed, the contacted screw extender SE may then be detected with step C10 and thereafter highlighted, for example by displaying the primitive for the detected screw extender SE with step D25. For example, the detection step C10 may be divided into a coarse detection step C12 performed simultaneously with the step U30 of scanning, wherein touching or contact of the graphical locator element GLE with the screw extender SE may be detected. This coarse detection step C12 may be based on a pattern matching algorithm or other type of detection algorithm that allows detection of the surface or area in the current image where the screw extender SE is located, and thereafter a fine detection step C14 may be performed when the coordinates or area of the graphical locator element GLE contacts, approaches or touches the image area representing the screw extender SE, wherein the exact position and coordinates of the screw extender SE, such as the pose data information PDI, are detected. Upon complete detection of the screw extender SE with step C14, the augmented reality graphic primitives GP1 to GP4 may be displayed on the detected screw extender SE and the surgeon or operator may be prompted to accept detection of the screw extender SE and thus also the detected pose data information PDI, for example with a prompt, text box, confirmation button, as shown in fig. 2L. For the exemplary view of fig. 2M, two screw extenders SE1 and SE2 arranged on the left have been detected and are shown overlaid with graphics primitives GP1 and GP2, and a graphics locator element GLE is shown in the center of the image, the graphics locator element GLE showing the rendering of the translucent screw extender SE.
In a variation, for detection as visualized in fig. 2L and 2M, when the graphical locator element GLE, with specific pose information regarding position and orientation and with coordinates fixed relative to the screen 120 or the device 100, approximately matches the pose data information of one of the tracked screw extenders SE1 to SE4 of the surgical incision SI, the matching screw extender may be highlighted for selection. In this regard, upon exact or approximate matching of the pose data information PDI of one of the screw extenders SE1 to SE4, the detected screw extender may be overlaid with a graphical primitive GP and a confirmation prompt may be presented to the operator O, as shown in fig. 2L. For the operator O to achieve a match between the PDI of one of the screw extenders SE1 to SE4 and the PDI of the graphical locator element GLE, he or she must move the device 100, for example by turning, tilting, or translating it, until the displayed graphical locator element GLE approximately matches one screw extender SE (SE3 in fig. 2M). In this step, the calculation of coarse PDI information for the different screw extender candidates may be repeated, for example by means of a pattern matching algorithm, until a PDI match with the graphical locator element GLE is found.
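The "approximate match" between the fixed locator pose and a tracked candidate pose can be sketched as a simple distance test. The pose format, distance metric, weighting, and tolerance below are all illustrative assumptions; a real implementation would use a proper rotation metric.

```python
import math

def pose_distance(pose_a, pose_b):
    """Scalar mismatch between two simplified poses given as
    ((x, y, z), tilt_deg): Euclidean position gap plus a weighted
    orientation gap. The 0.1 weighting is an arbitrary choice."""
    (pa, ta), (pb, tb) = pose_a, pose_b
    return math.dist(pa, pb) + 0.1 * abs(ta - tb)

def match_locator(gle_pose, candidates, tol=5.0):
    """Return the id of the tracked screw extender whose coarse pose
    best matches the graphical locator element GLE, or None if no
    candidate is within tolerance."""
    best_id, best_d = None, tol
    for ext_id, pose in candidates.items():
        d = pose_distance(gle_pose, pose)
        if d < best_d:
            best_id, best_d = ext_id, d
    return best_id

# The GLE pose is fixed relative to the screen; candidate poses come
# from the coarse detection step (values here are made up).
candidates = {
    "SE1": ((50.0, 0.0, 0.0), 0.0),   # far away, no match
    "SE3": ((1.0, 0.5, 0.0), 10.0),   # close, matches
}
matched = match_locator(((0.0, 0.0, 0.0), 0.0), candidates)
```

Once a match is returned, the fine detection step C14 would refine the full PDI of the matched extender.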
Next, the scanning step U30, the detecting step C10, and the primitive displaying step D25 may be repeated for the next screw extender SE, and the screw extenders SE1 to SE4 are detected and highlighted one by one with primitives until all the desired screw extenders SE are detected, as illustrated in fig. 2D. This variant is a step-by-step scan which provides direct visual and intuitive feedback to the surgeon or operator for the detection of each screw extender SE. The detection moment of step C14 for each screw extender, at which the PDI of the screw extender SE is captured and tracked, may also be further emphasized by a signal (e.g. an audible or vibratory signal, or both).
In a variant, the screw extenders SE are not detected by computer vision algorithms that detect shapes, contours or patterns as described above; instead, each screw extender SE is provided with an optical marker OM that can be detected and tracked with the detection step and that can also serve as a fiducial marker for the observed scene. An example of such an observed scene with a surgical incision SI is shown in fig. 1D, showing two rows of screw extenders SE1-SE6 attached to pedicle screws PS (not shown). In the example shown, each screw extender SE may be provided with two optical markers OM at different locations to provide a more robust detection of the screw extender, such that the first marker OM is located at the distal end of the body of the screw extender SE and the second marker is located in a middle portion of the body of the screw extender SE. In case one of the two markers is occluded from the camera view, for example as shown for the screw extenders SE4, SE6, detection and tracking redundancy is provided by the other, visible optical marker OM.
For redundancy purposes, each screw extender SE may be equipped with a plurality of optical markers OM, as some markers OM may be placed such that they are hidden within the surgical incision, covered by other screw extenders SE, or otherwise outside the field of view of the image capture device 110. It is also possible that the surgeon or operator O visually checks whether the screw extender SE is detected or not and can move the shooting and viewing position of his camera or imaging device 110 so that at least one marker OM is detected and tracked. In the example of fig. 1D, to detect and track the screw extender with step C10, a different camera view may be required to detect the at least one optical marker OM5.
The optical marker OM may be formed by a graphic pattern or design having a fixed geometric relationship with the screw extender SE, for example by being placed at a specific position in a specific orientation. Examples of patterns that may be used for the optical marker are a checkerboard pattern, a matrix code, a QR code, or similar designs, for example designs used for robot tracking. Different tracking markers (such as, but not limited to, ARToolKit, ARTag, AprilTag, and ArUco fiducial tracking markers) are examples of optical markers OM that may be used to mark the screw extender SE, and they may be used both for identification and for pose estimation purposes. For example, each screw extender SE may be fixedly provided with one or more optical markers OM, for example by printing, adhering, etching, embossing, rasterizing, or depositing a layer with such optical markers OM. These optical markers can also be made invisible to the human eye, for example by using UV-visible or NIR-visible inks. For example, the optical markers OM may be made removable or fixedly attached, e.g. as a layer or an adhered element.
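The identification side of such matrix-code markers can be illustrated with a toy decoder that reads a binary grid row by row into an integer id. This is a deliberately simplified sketch; real fiducial systems such as ArUco or AprilTag additionally use error-correcting bit layouts and test all four marker rotations.

```python
def decode_marker_id(grid):
    """Decode a toy 4x4 binary matrix marker into an integer id by
    reading its bits row by row, most significant bit first."""
    ident = 0
    for row in grid:
        for bit in row:
            ident = (ident << 1) | bit
    return ident

# A hypothetical marker whose 16 bits encode the id 5.
grid = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 1, 0, 1],
]
marker_id = decode_marker_id(grid)
```

In the method described here, the decoded id would link the observed marker to a specific screw extender SE for identification and pose estimation.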
Alternatively, as illustrated in fig. 1E, the optical markers may be on portions 50 separate from each screw extender SE, and may be placed in a predetermined geometric relationship with the screw extender SE, for example by placing the optical marking portion 50 onto the handle attachment portion 60 at the distal end of the screw extender SE. The optical marking portion 50 includes the optical marker OM and can be easily removed from the screw extender SE once the method 200 has been performed and ended, for example by a simple manual operation. To this end, the optical marking portion 50 may have an attachment means 55 complementary to the handle attachment portion 60 of the screw extender SE. Preferably, the attachment means 55 and the handle attachment portion 60 are formed such that the attachment means 55 can occupy only one position with respect to the screw extender SE, so that the correct pose data information PDI of the screw extender SE can be calculated taking into account the correct orientation of the screw extender SE. Prior to performing the step U30 of scanning the screw extenders SE, the optical marking portions 50 may be placed onto all screw extenders SE for detection and tracking by step C10. As described above, it is also possible to use optical markers OM with redundant information, so that partially occluded markers OM can still be detected, e.g. ARTag, TriCode, ARToolKit+, or Köhler circular markers.
It is also possible that the pattern or other graphic element included in the optical marker OM is different for each screw extender SE and includes information that can be read and identified in an optional identification step C15. This information can be used for verification purposes, to check whether the correct screw extender SE is being used for the correct surgical procedure. For example, using a database, the identity of each screw extender SE can be read and different aspects of the screw extender SE can be checked, e.g. whether the screw extender SE has exceeded its lifetime or life cycle, or whether the correct type of screw extender SE is used for the particular procedure. In addition, the identification information included in each optical marker OM may be used to identify the corresponding screw extender SE in several images captured from the surgical scene, allowing a quick calculation of the correspondence of the detected screw extenders SE within the captured image sequence. This allows a more robust and faster identification of the individual screw extenders SE across several captured images.
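The database check of step C15 might look as follows. The database contents, field names, and return codes are purely illustrative assumptions.

```python
# Hypothetical instrument database keyed by the id read from each
# optical marker OM; field names are illustrative, not from the source.
INSTRUMENT_DB = {
    17: {"type": "screw_extender", "uses": 12, "max_uses": 50},
    23: {"type": "screw_extender", "uses": 50, "max_uses": 50},
}

def verify_extender(marker_id, required_type="screw_extender"):
    """Check an identified instrument against the database: known id,
    correct instrument type, and remaining life cycle."""
    rec = INSTRUMENT_DB.get(marker_id)
    if rec is None:
        return (False, "unknown instrument")
    if rec["type"] != required_type:
        return (False, "wrong instrument type")
    if rec["uses"] >= rec["max_uses"]:
        return (False, "life cycle exceeded")
    return (True, "ok")

result = verify_extender(17)
```

A failed check could then be surfaced to the surgeon or operator O as a warning prompt on the GUI.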
Once at least one screw extender SE is detected, a graphical primitive GP may be generated and overlaid on the actually displayed screw extender in the live video feed of the GUI, as exemplarily shown in the screenshot of fig. 2D, using the step D25 of calculating and displaying screw extender primitives. In this figure, a graphical primitive is shown that traces the outline of the visible portion of the screw extender SE, wherein the graphical primitive highlights or indicates the corner points. The graphical primitive GP may be displayed such that the position of the corresponding screw extender SE is overlaid or otherwise graphically indicated during the live video feed, to provide augmented reality by combining the real-world view with computer-generated graphical elements for the screw extender SE. This may be done by calculating data representing the camera position and orientation of the currently captured and displayed image relative to the screw extender SE, and by calculating and displaying the projection of the screw extender SE as a graphical element of the graphical primitive GP based on the dataset of pose data information PDI. However, the step D25 of calculating and displaying the screw extender graphical primitive GP may also be performed entirely independently of the pose data information PDI collected by the detecting step C10, and may be based on a contour detection algorithm that detects the contour of the screw extender SE and thereafter graphically displays elements such as, but not limited to, lines, shadows, dots, or boxes.
In general, step D25 allows computer-generated information about the screws to be provided. This step preferably comprises two sub-steps, wherein the first sub-step detects fixed points of interest, fiducial markers, or optical flow on the captured images of the live video feed. Thus, the first sub-step may create an orientation data model of the current camera view. This step may use feature detection methods such as corner detection, blob detection, edge detection, or thresholding, as well as other types of image processing methods. The second sub-step recovers the real-world coordinate system of the current shooting environment (i.e., the surgical incision SI and the screw extenders SE). Since at least a portion of the observed scene with the surgical incision SI and the body is unknown, simultaneous localization and mapping (SLAM) can be used to map the relative positions of the pose data information PDI to screen location coordinate data SLCD, which can be calculated in order to display the graphical primitive GP at the correct position on the live video feed. In this regard, the graphical representation of the screw extender SE may be a projection view of a geometric model of the screw extender projected onto the screen location coordinate data SLCD. Additionally or alternatively, the structure of the observed scene may be derived from a structure-from-motion method, such as using bundle adjustment, and the mathematical methods used may include the use of projective (epipolar) geometry, geometric algebra, rotation representation with exponential mapping, Kalman and particle filters, nonlinear optimization, and robust statistics. With this step D25, the graphical representation of the real-world object (in this case, the graphical primitive GP of the screw extender SE) is related to the real-world view or scene of the screw extender SE. It is also possible to further analyze the observed scene for mapping based on three-dimensional information, e.g. based on data from distance measuring sensors, including direct time-of-flight (dToF) or LiDAR sensors, structured light sensors, or stereo imaging with two image sensors. The graphical primitive GP may be regarded as a virtual reconstruction and projection model of the screw extender SE. Examples of such embodiments of step D25 can be found in U.S. patent Nos. 10,824,310 and 9,824,495, which references are incorporated herein by reference in their entirety.
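The mapping from a 3D pose position to screen location coordinate data can be illustrated with the standard pinhole-camera projection; the focal length and principal point used below are made-up example values, and a full implementation would also apply the camera's rotation, translation, and lens distortion.

```python
def project_to_screen(point_cam, focal_px, cx, cy):
    """Pinhole-camera projection of a 3D point given in camera
    coordinates (x right, y down, z forward) to pixel coordinates:
    a minimal stand-in for mapping pose data information PDI to
    screen location coordinate data SLCD."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera, nothing to draw
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

# A hypothetical screw extender tip 500 mm in front of the camera,
# 100 mm to the right, with an assumed 1000 px focal length and a
# 1280x720 screen centre.
uv = project_to_screen((100.0, 0.0, 500.0), focal_px=1000.0,
                       cx=640.0, cy=360.0)
```

The graphical primitive GP would then be drawn at the returned pixel coordinates on the live video feed.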
With this step D25, visual feedback may be given to the surgeon or operator O to see whether all screw extenders SE have been detected, and this provides an aspect of the augmented reality concept, as the live video feed of the surgical incision SI is further enhanced by dynamically moving graphical elements that highlight the screw extenders SE. In a variant, all screw extenders SE are first detected and the pose data information PDI is extracted and stored in a table or data structure, after which the graphical primitives GP are overlaid on the screw extenders. This step may be performed simultaneously with the scanning step U30 and the detecting step C10. For example, the graphical primitive GP may be an opaque, transparent, or translucent shade covering the corresponding detected screw extender SE.
Next, the method 200 may perform step D30, wherein a selector element SF is generated and displayed on the live video feed for each detected screw extender SE, and step U40, wherein a screw extender SE is selected or deselected via its selector element SF, allowing the surgeon or operator O to manually select individual screw extenders, preferably by touch-screen operation. Step D30 thus provides another aspect of the augmented reality concept, allowing easy interaction for the surgeon or operator O to select or deselect the screw extenders SE to be considered; with step U40, the graphical elements SF placed on the live video feed of the GUI are used to toggle the selection, for example by touching a selector element SF with a finger. Moreover, even when the viewing angle changes, the graphical representation of the selector element SF may move so as to remain positioned at, or pointing to, the corresponding screw extender SE. Step D30 of displaying the selector elements SF may display graphical elements such as, but not limited to, fields, boxes, arrows, icons, labels or other types of selectable graphical markers for each screw extender SE, dynamically overlaid on the live video feed, as exemplarily shown in the screenshot of fig. 2E, for example by associating the display coordinates of each selector element SF with the display coordinates of the corresponding graphics primitive GP generated by step D25, or by calculating a projection from the pose data information PDI of the screw extender SE. Such graphical overlay of the selector elements SF on the live video feed allows the surgeon or operator O to select the active screw extenders SE, either to have a rod template RT proposed or to select a rod template among a plurality of rod templates, as explained further below.
For example, a surgeon or operator O may want to select the front row with three (3) screw extenders SE to determine the rod.
The selection made by the surgeon or operator O in step U40 can be confirmed via a graphically displayed confirmation button accessible by touch-screen operation, and the selection can also be guided by a text box stating how many screw extenders SE have been detected and how many have been selected, as exemplarily shown in fig. 2F, wherein the front three (3) screw extenders have been selected. The selection or deselection of a screw extender with its selector element SF may also be highlighted or de-highlighted by a graphical element, so that visual feedback is provided to the surgeon or operator O regarding the status of the selected screw extenders SE.
Upon confirmation of the screw extender selection of step U40, the method 200 proceeds to step C20: calculating the geometry of the rod attachment locations of the pedicle screws PS. For example, in this step the geometry may include coordinate data of all attachment center points AP for the spinal stabilization or fixation rod R, which may be calculated based on the data of the screw extenders SE detected and selected in steps C10 and U40. In the variant described herein, this step determines, for each selected screw extender SE, the attachment center point AP at each pedicle screw PS of a hypothetically placed rod R, each pedicle screw PS being attached to its corresponding screw extender SE, assuming that the spinal stabilization or fixation rod R will later be placed into the screw heads SH at its final position for spinal stabilization, as illustrated in the exemplary embodiment of fig. 1C. The rod is considered hypothetical or fictional in that it has not yet been placed into the pedicle screws PS at the surgical incision. In the illustrated variant, the attachment center point AP of the rod R is defined as the intersection between the central axis CA of the assembly of the screw head SH of the pedicle screw PS and the screw extender SE, and the rotational central axis of the rod R when the rod R is fully placed into the U-shaped groove UG of the screw head SH. However, the attachment center point AP may be defined differently depending on the screw head type and other considerations. Once all attachment center points AP in geometric space (e.g., three-dimensional Euclidean space) have been determined following step U40, a rod shape or template RT for placement and attachment to the screw heads SH of the pedicle screws PS may be proposed.
Step C20 may perform a geometric calculation based on the dataset of pose data information PDI calculated by the screw extender detection step C10. It may be assumed that the screw head SH of each pedicle screw PS has a fixed position relative to the corresponding screw extender SE to which it is removably attached, as the screw head SH is typically fully inserted into, or has a fixed attachment position relative to, the screw extender SE. Thus, the three-dimensional coordinate position of the attachment center point AP can be calculated using the coordinate and orientation data of the pose data information PDI for each selected screw extender. Note that since the pedicle screws PS are placed into the surgical incision SI, they are not visible, or only partially visible, from outside the living being L; based on the detection of the placed screw extenders SE, however, the attachment center points AP can still be calculated. This may be done, for example, by determining the coordinates of the different attachment center points AP using Cartesian coordinates, line equations, distance calculations and surface equations. For example, the location of an attachment center point AP may be computed by first determining the line equation corresponding to the central axis CA of the screw extender SE using its pose information, and then calculating the attachment center point AP at a fixed distance from a fixed location that is the same for all screw extenders SE.
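The line-equation construction just described can be sketched in a few lines. This is an illustrative example under the stated assumption that the attachment center point AP lies at a fixed, identical distance along each extender's central axis CA; the function name and the distance value are hypothetical:

```python
# Illustrative sketch of step C20: each screw extender pose is modeled as an
# origin point plus a direction vector for its central axis CA, and the
# attachment center point AP is assumed to lie at a fixed distance along
# that axis (the same distance for all extenders, per the text above).
import math

def attachment_center_point(origin, axis_dir, fixed_distance):
    """Return AP = origin + fixed_distance * normalized(axis_dir)."""
    norm = math.sqrt(sum(c * c for c in axis_dir))
    unit = tuple(c / norm for c in axis_dir)
    return tuple(o + fixed_distance * u for o, u in zip(origin, unit))
```

For example, an extender whose tracked reference point sits at the origin with its axis pointing straight down the z direction yields an AP at the chosen fixed depth along z, even though the screw head itself is hidden inside the incision.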
In addition to the attachment center points AP, further information relevant to determining the proposed rod template RT may be calculated. For example, in the variant of figs. 1A and 2F, in which the rod template RT for three (3) screw extenders SE1, SE2, SE3 is determined, not only the three AP coordinate points may be used for the geometry; the geometry of the rod attachment locations may also include the orientation of the screw heads SH in coordinate space, for example expressed by the direction or axis DCA of the central axis of the hypothetical rod R placed into the U-shaped grooves. Based on the pose data information PDI of the screw extenders SE, data indicating the direction of the central axis DCA of the hypothetical rod R can be calculated for each attachment center point AP. This direction corresponds to the groove extension direction of the U-shaped groove of the screw head SH, since the screw head SH is usually rigidly attached to the corresponding screw extender SE such that the central axis of the screw head SH coincides with the central axis CA of the screw extender SE, whereas the bone anchor portion of the pedicle screw PS may have a different orientation due to its polyaxiality.
In a variant, the calculation of the geometry may also be part of another step and may have been performed previously; for example, once data about the positioning or pose of the screw extenders is available, it may be part of the screw extender detection step C10. The sequence of steps of the method 200 presented herein is merely exemplary.
Next, in step C30, a dataset representing the geometry of one or more rod templates RT, referred to herein as rod template data RTD, may be calculated based on the geometry of the rod attachment locations determined in step C20, including, for example, the determined attachment center points AP and/or the direction of the central axis DCA. For example, taking into account the coordinate data of the attachment center points AP and the direction of the central axis DCA, an appropriate geometry for the rod template RT may be determined, e.g. a geometry that can be considered a best fit for the current positions of the attachment center points AP. This may be done, for example, by using a curve-fitting algorithm that provides a geometric fit to the attachment center points AP, optionally also taking into account the directions of the central axes DCA from step C20, or a fitting algorithm that takes into account the bending limits and physical constraints of the real physical spinal stabilization rod R, such as, but not limited to, the minimum possible or allowable bending radius, the maximum curvature, or the maximum lateral dimension of the bent rod R. It is also possible for the rod template data RTD to be determined in three-dimensional coordinate space as a series of interpolated, discrete three-dimensional points located between adjacent attachment center points AP. In this step C30, the total length of the rod template RT may also be calculated, and the calculated length may be stored in the rod template data RTD.
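The "series of interpolated, discrete three-dimensional points" representation of the RTD and the length calculation can be sketched as follows. This is a minimal illustration using straight-line interpolation between adjacent APs; a real implementation would use a smooth curve fit respecting the rod's bending limits, and the function names are hypothetical:

```python
# Sketch of one possible RTD representation from step C30: discrete 3D
# points interpolated between adjacent attachment center points AP, plus
# the total template length as the sum of segment lengths. Linear
# interpolation is an illustrative simplification of the curve fit.
import math

def rod_template_points(attachment_points, samples_per_segment=10):
    """Linearly interpolate discrete 3D points between adjacent APs."""
    pts = []
    for a, b in zip(attachment_points, attachment_points[1:]):
        for i in range(samples_per_segment):
            t = i / samples_per_segment
            pts.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    pts.append(attachment_points[-1])
    return pts

def rod_length(points):
    """Total polyline length of the discretized rod template."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))
```

A curve-fitting variant would replace the linear interpolation with, e.g., a spline through the APs, but the length bookkeeping stays the same.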
According to another aspect, in step D42, it is also possible to display a window or other graphical element on the graphical user interface of the display in order to show the selected rod template RT at a one-to-one scale of the actual physical implementation of the rod. This may be done by double-tapping or otherwise selecting the rod template RT, or one of those calculated for best mechanical fit, for example by selecting the rod template RT from a list via a graphical button, a context menu item or another selection operation of the graphical user interface. This allows a surgeon or operator to directly compare an actual physical rod R with the proportionally displayed rod template RT simply by holding the actual physical rod R against the display screen, and the operator or user can switch between different pre-calculated or determined rod templates RT to graphically verify their fit and suitability. Also, in a variant, the shape of the rod template RT may be bent, stretched or otherwise deformed or changed by touch-screen operation, for example by laterally moving a portion of the graphical element showing the rod template RT with a finger on the touch screen. The modified virtual rod template RT may again be displayed with respect to a selected reference or zero point (e.g., one of the attachment points AP1, AP2, AP3), and the offset distances from the various attachment points may be recalculated. This step D42 of displaying and recalculating parameters associated with the rod template RT may be repeated until the operator or surgeon O is satisfied with the rod template RT to be used.
It is also possible to pre-store a list of coordinates or other descriptive data for a plurality of different pre-bent rod templates RT in a dataset or structure, for example in a memory of the data processing device 100 or at a server accessible by the data processing device 100, and then compare this dataset with the geometry of the rod attachment locations (e.g., including the determined attachment center points AP and/or the direction of the central axis DCA) for the best fit. Thus, one or more rod templates may be identified for presentation to the surgeon or operator O. The execution of step C30 may also be indicated to the surgeon or operator O on the data processing device 100, for example by a progress bar or circle or an animated waiting symbol, as exemplarily illustrated in fig. 2G.
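One simple way to realize the pre-bent-template comparison is to score each stored template by how close its sampled points pass to every attachment center point and pick the minimum. This sketch assumes a library of (name, sampled-points) pairs; both the scoring rule and the data layout are illustrative, not from the patent:

```python
# Sketch of best-fit selection among pre-stored, pre-bent rod templates:
# score = sum over attachment points of the distance to the nearest sampled
# template point; the template with the lowest score wins. The library
# format (name, list of 3D points) is an illustrative assumption.
import math

def best_fit_template(attachment_points, template_library):
    """Return the (name, points) entry whose geometry best fits the APs."""
    def score(template_points):
        return sum(min(math.dist(ap, tp) for tp in template_points)
                   for ap in attachment_points)
    return min(template_library, key=lambda entry: score(entry[1]))
```

In practice the score could be extended to penalize mismatched screw-head axis directions DCA as well, or to reject templates whose curvature exceeds the rod's physical bending limits.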
The method 200 may then continue with step D40, wherein different information about the orientation of the rod template RT, the attachment center points AP and the central axis DCA may be displayed, together with a user interface on the GUI for changing and visualizing the different parameters. Figs. 2H and 2I provide exemplary screenshots. This may be done while the live video feed is still displayed on the display device 120 of the data processing device 100, providing the augmented reality features of the application and thereby visual feedback on the correctness and fit of the rod template to the pedicle screws PS. As exemplarily shown in fig. 2I, the different information may be displayed as a graphical overlay on the live video feed, including a line for each selected screw extender SE as a projection of its central axis CA, and a graphical element for each calculated attachment center point AP, visually representing the geometry of the rod attachment locations, placed at the projected position of the attachment center point AP. For this augmented reality aspect, the coordinate data of the attachment center points AP may be mapped or projected into the coordinate space of the display. Moreover, a graphical representation of the rod template RT selected or determined in step C30 may be displayed, including characterization data such as, but not limited to, thickness, length, bending radius and bending pattern. In the variant shown, the curved rod template RT is shown in a box, with its length indicated in millimeters.
In addition, as shown in fig. 2I, step D40 may also display the same curved rod template RT as shown in the box, but placed in coincidence with at least one attachment center point AP, to provide a graphical representation of the rod template RT together with the three (3) exemplary installed pedicle screws PS. In the illustrated variant, the rod template RT is shown with its central axis coinciding with the middle attachment center point AP2, such that AP2 serves as a zero or reference point representing the screw head SH2 attached to pedicle screw PS2. Then, for the other attachment center points AP and pedicle screws PS1, PS3, the distance from the rod template RT to the attachment points of PS1, PS3 may be displayed so that the surgeon or operator O can verify how well the currently selected rod template RT fits, or does not fit, the pedicle screws PS1, PS3 adjacent to the reference point.
For example, assuming that the rod template RT is straight, geometric calculations in three-dimensional (3D) space can determine the distance from the rod template RT, represented as a straight line, to the attachment center points AP1, AP3, by placing two geometric surfaces GS1, GS2 perpendicular to the straight line, with attachment center point AP1 in one surface and attachment center point AP3 in the other. The distance between each attachment center point AP1, AP3 and the point defined by the intersection of its respective surface with the straight line then provides the two distances that can be displayed. In case the rod template RT is curved, the same method can be used by determining two surfaces GS1, GS2, each perpendicular to the tangent at its intersection with the curve, wherein the attachment center point AP1 or AP3 also lies within the respective surface GS1, GS2. Thus, the distances from the attachment center points AP1, AP3 to the rod template RT can be determined. As exemplarily illustrated in figs. 2I, 2J, 2K, these distances may be displayed in millimeters within boxes for highlighting and easy reading, associated with the centerline CA indicating the longitudinal extension of the screw extender SE, and arrows, pointers or other direction-indicating graphical elements may be associated with the distance values to indicate the direction of the offset from the rod template RT. In the case of very small distances, such an indicator helps identify the direction of the offset. These distance values may be displayed by the graphical user interface, or may also be displayed as movable text fields graphically associated with each screw extender SE and pedicle screw PS assembly (e.g., with each centerline CA displayed for a screw extender SE).
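For the straight-rod case, the perpendicular-surface construction above reduces to the classic point-to-line distance: projecting the AP onto the line gives exactly the intersection of the perpendicular plane with the line. A minimal sketch (function name hypothetical):

```python
# Sketch of the straight-rod offset calculation: the perpendicular plane
# through AP intersects the rod line at the orthogonal projection (foot) of
# AP onto the line, so the displayed offset is simply the distance from AP
# to that foot point.
import math

def offset_from_straight_rod(ap, line_point, line_dir):
    """Distance from attachment center point AP to a straight rod template."""
    d2 = sum(c * c for c in line_dir)
    t = sum((ap[k] - line_point[k]) * line_dir[k] for k in range(3)) / d2
    foot = tuple(line_point[k] + t * line_dir[k] for k in range(3))
    return math.dist(ap, foot)
```

For a curved template the same idea applies segment-wise: at each sampled point the tangent direction plays the role of `line_dir`, matching the tangent-perpendicular surfaces GS1, GS2 described in the text.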
In the variant shown in fig. 2I, the reference point or position for the offset measurement may be changed, for example with step U55, to determine a different zero or reference point. For example, with step U55 the operator or surgeon O may reset the zero or reference point by simply touching, pressing or otherwise selecting, via the GUI, a graphical element representing one of the attachment center points AP1, AP2, AP3, and the offset values may be recalculated for the new reference point. As another example, the user may select one of the screw extenders SE1, SE2, SE3 as the zero or reference point, as exemplarily illustrated in fig. 2H. Moreover, the recalculation of all offset values may be automatic, or may be triggered upon confirmation or request by the operator or surgeon O by pressing or touching a button, such as the virtual "re-measure" button exemplarily shown in figs. 2H, 2I and 2J.
Fig. 2N shows a variant of the screen available from step D40, wherein three (3) exemplary attachment points AP1, AP2, AP3 are shown and visual feedback is provided to the operator or surgeon O regarding the offset of attachment points AP1 to AP3 relative to the placed rod template RT, depending on their distance from the selected and placed rod template RT. For example, attachment point AP3 is shown farthest from the rod template RT, with a calculated offset distance of about 8 mm, so that attachment point AP3 is highlighted in red, e.g. by a red dot or another type of highlighting, indicating that the selected rod template RT is not suitable for placement and attachment to the corresponding pedicle screw PS3. In contrast, attachment point AP2 is shown located at, or within an acceptably close range of, the rod template RT, with a measured offset of 0 mm, and may thus be highlighted in green, e.g. by a green dot or another type of highlighting, indicating that the selected rod template RT is suitable for placement at that particular attachment point AP2. Similarly, attachment point AP1 may be highlighted in orange, indicating an undesirable but somewhat suitable location, with an offset distance of 4 mm, as shown in fig. 2N. In this regard, the increasing distance of an attachment point AP from the rod template RT that has been placed to connect to one of the attachment points AP1, AP2, AP3 may be indicated with coloring or other types of visual feedback. In the variant illustrated in fig. 2N, a heat-map coloring scheme is used, where green indicates a good match of an AP to the rod template RT, changing from green to orange and then to red for a poor match, e.g. offset values outside the range to which the rod can bend.
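The heat-map feedback of fig. 2N amounts to thresholding the computed offset into color bands. A minimal sketch; the 1 mm and 5 mm thresholds are illustrative assumptions chosen only so that the fig. 2N example offsets (0, 4 and 8 mm) fall into green, orange and red respectively:

```python
# Sketch of the fig. 2N heat-map scheme: map an AP-to-rod offset (in mm) to
# a feedback color. The threshold values are illustrative assumptions, not
# values specified by the patent.
def offset_color(offset_mm, good=1.0, marginal=5.0):
    if offset_mm <= good:
        return "green"   # rod template fits at this attachment point
    if offset_mm <= marginal:
        return "orange"  # undesirable but somewhat suitable
    return "red"         # outside the range to which the rod can bend
```

A continuous color gradient between the thresholds would give the smoother green-to-orange-to-red transition the text describes.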
Moreover, the method 200 may perform step D50 for displaying a list LL of rod templates RT selectable by the operator or surgeon O for visualization at the surgical incision SI using the live video feed, allowing the operator or surgeon O to visually inspect the rod placement through augmented reality. For example, this step may display a list of rod templates RT found based on step C30, in which rod templates RT were calculated, e.g. those with the best match to the geometry of the rod attachment locations, or selected from a pre-stored list of rod templates RT. With the list LL displayed, the data processing device 100 is configured to allow the operator or surgeon O to graphically select one of the rod templates RT in step U50, after which the selected rod template RT may be displayed as virtually connected or attached to at least one attachment center point AP in step D40, as shown in fig. 2J. Also, the offset values may be calculated and displayed when a rod template RT is selected and virtually placed onto the pedicle screws PS.
In this respect, it is also possible to perform step D55, wherein the selected rod template RT is displayed as a graphical element at a one-to-one (1:1) scale on the display screen or graphical user interface GUI. This can be done with two 1:1 views, for example a sagittal or longitudinal plane view and a coronal or frontal plane view, as an aid to the user or operator O in manufacturing the corresponding rod. If the rod template RT does not fit on the screen because it is too long (e.g., longer than the screen of a typical tablet computer), the 1:1 view scale may be maintained while providing a scroll option on the GUI.
In an optional step of method 200, referring to the images of the surgical scene shown in figs. 2D-2K, a prompt may be provided to the surgeon or operator O as to how to place or adjust a further pedicle screw PS (e.g., a fourth pedicle screw PS4) to match the coordinates of the selected and placed rod template RT. For example, a graphical element extending from the selected rod template RT may be displayed, illustrating the potential location of the attachment point of the next pedicle screw PS4. For example, referring to fig. 2J, in which the curved rod template RT placed at the attachment center points AP1 to AP3 of three different pedicle screws PS1 to PS3 is shown, a linear or triangular graphical element may be displayed showing a point, cross or other marker indicating the next attachment location of a potential pedicle screw PS4. A triangular graphical element may have its corner at the end of the rod template RT to indicate the different possibilities for attaching pedicle screw PS4.
In another optional step of method 200, it may be possible to select specific screw extender SE and pedicle screw PS assemblies to compare their locations before, during, and after correction to collect data regarding changes in geometric location relative to each other, such as by calculating, displaying, and processing different attachment center points AP1-AP3 before, during, and after correction, as described further below.
Next, in step C60, based on the rod template RT that has been selected by the operator or surgeon O in step U50, the rod template data RTD of the selected rod template RT may be processed to generate CAD data or other data characterizing the rod R to be produced from the rod template RT, which can be used to manufacture the physical fixation rod R; the CAD data may be sent to a rod bender or another type of rod machining device used to manufacture the actual physical rod in step F10. The rod template data RTD may be provided after or simultaneously with step U20, or indirectly while being displayed in one-to-one representation in step D55. In this step, the data for manufacturing rod R is based on the rod template data RTD from step C30 as selected by the operator or surgeon O in step U50. The geometric data of the selected rod may then be extracted from the RTD and converted into a different data format, for example a CAD data format standard such as, but not limited to, STEP, IGES, Parasolid, STL, VRML, X3D, DXF or COLLADA. For example, at least one dataset for one rod of the RTD may be sent to a rod bending or machining machine, such as the rod bending devices described in U.S. Pat. Nos. 6,755,064 and 10,405,908, or the rod bending device described in U.S. patent publication No. 2005/0262911, which are incorporated herein by reference in their entirety.
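Before conversion to a CAD standard such as STEP or IGES, the RTD can be serialized into a neutral intermediate payload for the downstream converter or bender. The following is only a sketch: the JSON field names, the format tag, and the 5.5 mm rod diameter are all hypothetical, and a real system would emit one of the standards listed above via a CAD library:

```python
# Sketch of serializing rod template data (RTD) into a neutral JSON payload
# for a downstream CAD converter or rod bender. Field names, the format tag
# and the default diameter are illustrative assumptions.
import json

def export_rod_template(rtd_points, rod_diameter_mm=5.5):
    """Serialize discretized RTD points (mm) into a JSON string."""
    payload = {
        "format": "rod-template/1.0",       # hypothetical schema tag
        "diameter_mm": rod_diameter_mm,
        "points_mm": [list(p) for p in rtd_points],
    }
    return json.dumps(payload)
```

The converter on the other end would then resample these points into the bender's bend-angle/feed instructions or into a STEP wireframe.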
Another optional step of method 200 is a step C70 of calculating an estimate of pose data information PDI_V for each vertebra V attached to pedicle screws PS; an optional display step D70 of displaying graphics primitives for each vertebra V on the live video feed or displayed image, providing live video feedback in augmented reality to show the estimated or calculated positioning of the actual vertebrae V of the spine SC; and an optional step C75 of calculating spinal curvature data SCD or other spinal characterization parameters or parametric values PAR of the spinal column SC, such as, but not limited to, the Cobb angle, the sagittal angle and other parameters of the spinal column, to obtain an estimate of the spinal curvature of the living being L during surgery without the need for invasive medical imaging such as X-ray imaging. With step C10 detecting the different screw extenders SE and providing pose data information PDI for each detected screw extender SE, at least the pose data information PDI_V of the vertebrae V can be estimated even if the spine SC is not visible in the images of the live video feed. Since typically a pair of pedicle screws PS is attached to each vertebra V, this step allows an estimate of position to be calculated based on the two different sets of pose data information PDI of two different screw extenders SE (e.g. two adjacently arranged screw extenders SE1, SE4, both attached to the same vertebra V, as shown in fig. 1D).
While the exact geometric relationship between a screw extender SE and a vertebra V may be unknown, there is a range of probabilities that can be used for an approximate estimate, wherein the two (2) sets of pose data information PDI of the two (2) screw extenders SE attached to one vertebra V may be used to provide an estimated pose PDI_V for each vertebra V, for example by using an average of the two screw extender poses PDI. Moreover, based on historical data on the geometric relationship between the position or pose of a screw extender SE, which has a fixed position relative to its pedicle screw PS, and the position or pose of the vertebra V, a knowledge database can be generated giving the most likely position that a vertebra of the spine SC will take for the detected PDI of the two screw extenders attached to it. For example, for the purposes of calculation and estimation in step C70, it may be assumed that for a given vertebra V each pair of pedicle screws PS has an ideal, predetermined placement into the vertebra V, and that the attachment position of each pedicle screw PS, in terms of position and orientation of the borehole central axis, has been chosen at such an ideal predetermined placement based on the standard dimensions of the vertebra V. Upon detection of the pose data information PDI of a screw extender SE pair in step C10, and assuming such an approximately ideal placement of the vertebra V attached to the screw extender SE pair via the pedicle screws PS, the position and orientation of the corresponding vertebra V can be approximated, and its pose data information PDI_V can be calculated by geometric transformation of coordinates.
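The averaging approach mentioned above can be sketched as follows. This is a first-order illustration under the stated assumptions; the dictionary layout of a PDI record (a position and an axis direction) is hypothetical, and a knowledge-database correction would refine the plain midpoint estimate:

```python
# Sketch of step C70's averaging variant: estimate a vertebra pose PDI_V as
# the midpoint of the two screw extender positions and the normalized mean
# of their axis directions. The PDI record layout is an illustrative
# assumption; historical/statistical corrections are omitted.
import math

def estimate_vertebra_pose(pdi_a, pdi_b):
    """Return a PDI_V estimate from the PDIs of two extenders on one vertebra."""
    pos = tuple((a + b) / 2 for a, b in zip(pdi_a["position"], pdi_b["position"]))
    axis = tuple((a + b) / 2 for a, b in zip(pdi_a["axis"], pdi_b["axis"]))
    n = math.sqrt(sum(c * c for c in axis))
    axis = tuple(c / n for c in axis)
    return {"position": pos, "axis": axis}
```

Averaging orientations component-wise is only reasonable for the small angular differences expected between two screws in the same vertebra; a robust implementation would average rotations properly (e.g., via quaternions).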
Steps C70 and C75 may be based on an estimation, calculation or determination using a knowledge database with historical information about the correspondence or mapping between the attachment points AP of different pedicle screws PS, or the pose data information PDI of different screw extenders SE, and the position and orientation information of the corresponding vertebrae, such as PDI_V, spinal curvature data SCD, spinal characterization parameters PAR, or a combination thereof. Thus, it is possible to create or build an artificial intelligence network, such as a convolutional neural network (CNN), decision forest or other type of network, trained with the knowledge database to determine the vertebral pose data information PDI_V, the spinal curvature data SCD, the spinal characterization parameters PAR including Cobb and sagittal angles, or a combination thereof, from the detected attachment points AP, the pose data information PDI, or a combination thereof. Since the pose data information PDI and the attachment points AP have a defined and computable geometric relationship, PDI_V, SCD or PAR can be determined directly from the PDI of the detected screw extenders SE. However, in a variant, it is also possible to omit the calculation of the vertebral pose data information PDI_V or the spinal curvature data SCD and to directly calculate or estimate the spinal characterization parameters PAR based on the attachment points AP of the different pedicle screws PS, the pose data information PDI of the different screw extenders SE, or both, without calculating any pose of the spine itself or other positioning or curvature data PDI_V, SCD, since the end user or operator O is particularly interested in these parameters PAR for spinal correction surgery in order to determine the spinal correction.
Also, based on the estimated pose data information PDI_V obtained with step C70, step C75 may be performed, which calculates the spinal curvature data SCD and thereby an estimate of the spinal curvature of the living being L during the operation. This may again be based on a knowledge database, and patient-specific parameters and values may be taken into account; for example, based on the age, weight and height of the patient, the possible spinal curve may be calculated as spinal curvature data SCD. Moreover, step C75 may also calculate the spinal characterization parameters PAR based on the pose data information PDI_V of the vertebrae V from step C70, or on the pose data information PDI of the screw extenders SE from step C10, or on both PDI and PDI_V from steps C10 and C70, using typical algorithms for such determinations (e.g., but not limited to, geometric transformations of vector representations of PDI and PDI_V in Euclidean coordinate space). The data may be used to automate a spinal correction device or system to at least partially correct the curvature of the biological spinal column via an automated process. The spinal correction system may be in the form of an operating table with motorized actuators, a robotic device, or a pillow with inflatable chambers; for automated spinal correction, a system as described in Chinese patent application CN 108 143 582 or CN 110 279 554 or a similar machine may be used.
With these aspects, the method 200 can calculate different pose and position information of the spine of the living being L. For example, different types of spinal characterization parameters or parametric values PAR may be calculated, such as, but not limited to, the sagittal alignment of the lumbar spine or lumbar lordosis, including but not limited to the lordotic inclination angle, global lordosis, sacral slope, lordosis distribution index, lumbar apex position and upper arc angle, the relative spino-pelvic alignment, the sagittal alignment of the thoracic or cervical spine, and kyphosis, including parameters such as the Cobb angle, sagittal balance and others. Moreover, different geometric parameters relating to a spine subject to kyphosis can be calculated.
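As a simplified illustration of one such parameter, an angle between the axis directions of two end vertebrae can serve as a stand-in for the Cobb angle (clinically, the Cobb angle is measured between the endplates of the most tilted vertebrae; using PDI_V axis vectors is an approximation, and the function name is hypothetical):

```python
# Sketch of a Cobb-angle-style parameter from PDI_V data: the angle in
# degrees between the axis directions of the two end vertebrae of a curve.
# This is a simplified stand-in for the clinical endplate measurement.
import math

def cobb_angle_deg(axis_upper, axis_lower):
    dot = sum(a * b for a, b in zip(axis_upper, axis_lower))
    na = math.sqrt(sum(a * a for a in axis_upper))
    nb = math.sqrt(sum(b * b for b in axis_lower))
    c = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for numerical safety
    return math.degrees(math.acos(c))
```

Two parallel vertebral axes yield 0°, i.e. no measured curvature between the end vertebrae, while strongly tilted end vertebrae yield correspondingly larger angles.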
For example, fig. 3A shows a schematic simplified representation of a spinal column having seven (7) exemplary vertebrae V1-V7; for each vertebra V, a pair of attachment points APn.1 and APn.2 has been determined using calculation step C20. Based on this set of calculated attachment points AP, seven (7) pairs of attachment points APn.1 and APn.2 in the illustrated variant (with n from 1 to 7 in this example), different parameters relating to the pose and orientation of the spine can be calculated in additional steps of the method.
For example, with step C70, pose data information PDI_V may be calculated for each vertebra V1-V7 based on the geometric position data of the attachment point pair APn.1 and APn.2 associated with each vertebra V, which may include three-dimensional (3D) position and orientation information VP1 to VP7. Because two different geometric points AP are available for each vertebra V when two pedicle screws PS with screw extenders SE are attached to it, and the exact position of each AP relative to the vertebra is not 100% defined, the average or geometric midpoint of the two attachment points APn.1 and APn.2 can be used to calculate the associated VPn of each vertebra with improved precision. The calculation may further take statistics into account, for example based on historical data and statistical variation of the attachment point locations AP, and, as described above, a trained artificial network may be used. As explained above with respect to display step D70, the three-dimensional (3D) position and orientation information VP1 to VP7 of each vertebra may also be used to display graphics primitives for each vertebra on the live video feed.
Next, using a further step C75 of calculating spinal curvature data SCD based on the pose data information PDI_V for each vertebra V (e.g., the calculated 3D position and orientation information VP1 to VP7 for each vertebra V1-V7), a geometric model or coordinate data of the spinal curve may be calculated. For example, the spinal curvature data SCD may be a curve determined by curve fitting through the geometric points VP1 to VP7, or characterized by a series of geometric positions in 3D space. However, other data or parameters related to the spinal column and vertebrae V1-V7 may also be calculated in this step C75. For example, the distance between each pair of adjacent vertebrae, e.g. D12, D23, D34, D45, D56, D67, may be calculated, for example based on the distance between the geometric points of adjacent vertebrae, and the orientation angle β between adjacent vertebrae V may also be calculated, for example when viewed from different directions (e.g. from the rear, from the front, or from either side).
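The inter-vertebral distances D12, D23, ... and the orientation angle β of step C75 can be sketched as below; the function names and the representation of vertebra orientations as direction vectors are illustrative assumptions:

```python
import math

def inter_vertebral_distances(vps):
    """Distances D12, D23, ... between consecutive vertebra positions
    VP1..VPn, each given as a 3D point."""
    return [math.dist(a, b) for a, b in zip(vps, vps[1:])]

def orientation_angle_deg(u, v):
    """Angle beta between two vertebra direction vectors, in degrees;
    clamped to guard against floating-point rounding."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.hypot(*u) * math.hypot(*v)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

Curve fitting through the VP points (e.g. a spline) would then yield the SCD curve itself.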
Also, with step C75, as described above, the spinal characterization parameters or parametric values PAR may be calculated. In general, based on the pose data information PDI_V for each vertebra V, including, for example, the calculated 3D position and orientation information VP1 to VP7, different geometric and orientation parameters PAR of the spine may be calculated so that a surgeon or operator O can store, display, archive and review them. As an example, for different types of spinal procedures, spinal characterization parameters PAR may be calculated such as, but not limited to, lordotic inclination angle, global lordosis, sacral slope, lordosis distribution index, apex position of the lumbar spine, upper arc angle, relative spinal-pelvic alignment, sagittal alignment of the thoracic or cervical spine, kyphosis including parameters such as Cobb angle, sagittal balance, and other parameters. As another example, based on the pose data information PDI_V for each vertebra V (which may include data regarding the orientation of each vertebra V), the rotational orientation of adjacently positioned vertebrae relative to each other may be calculated.
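As one concrete example of a PAR value, the Cobb angle is conventionally the angle between the endplate lines of the two end vertebrae of a curve. A minimal sketch, assuming the endplate tilts have already been extracted from PDI_V as in-plane angles in degrees (the inputs and the folding convention are assumptions, not from the source):

```python
def cobb_angle_deg(upper_endplate_tilt_deg, lower_endplate_tilt_deg):
    """Cobb angle as the angle between the endplate lines of the two
    end vertebrae, given their coronal-plane tilts in degrees. Lines
    have 180-degree symmetry, hence the folding to [0, 90]."""
    d = abs(upper_endplate_tilt_deg - lower_endplate_tilt_deg) % 180.0
    return min(d, 180.0 - d)
```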
It is also possible that the graphical user interface GUI is configured such that the surgeon or operator O can select two vertebrae, for example by clicking or otherwise selecting a graphical primitive on the display, and thereafter can display different parameters about the two selected vertebrae, for example their distance, their rotational orientation relative to each other, and their pose information, for example to compare their angular orientations.
A display step D70 may be performed in which a graphical primitive representing the curvature of the spine across the different vertebrae V may be displayed, as a line or curved graphical element, overlaid on the live video feed or on a direct view of the procedure with a head-up display, for example based on the spinal curvature data SCD that has been calculated in step C75. Moreover, in display step D70, the corrected spinal curve CSC may be displayed, as well as all the different calculated parameters characterizing the spinal column. In the variant of FIG. 3A, this may be a straight line, since the ideal spinal curve is straight when seen from the rear view.
According to another aspect, with the method 200, measurements of the spine may be made via the screw extenders SE at different times during the procedure. For example, the surgeon or operator O may first capture and detect the screw extenders SE with steps U30, C10, and thereafter calculate different parameters based on steps C20, C40, C70, C75. After selecting and placing the rod R, the user or operator O may insert the rod into the open slots of the screw extenders SE and then perform reduction of the rod R, such that the rod R moves down within the screw extenders SE to sit in the U-shaped recess of each screw head, where it may be held by the set screw of each pedicle screw PS. During the reduction procedure, the rod R forces the vertebrae into a new position. Once the reduction is complete, this results in a correction or change of the spinal curve to a new alignment, and thus in new spinal curvature data SCD, for example coronal, sagittal, and axial corrections. At this stage, or at any other point during the reduction process before the screw extenders SE are removed from the pedicle screws PS, the operator or surgeon O may again perform steps U30, C10 to re-detect all screw extenders SE and re-determine the attachment points AP and the pose data information PDI_V for each vertebra V, including, for example, the calculated 3D position and orientation information VP1 to VP7.
Thereafter, different parameters and data may be displayed in a display step D80, for example by using the graphical user interface GUI to show the correction or change before and after attachment of the rod, using the spinal curvature data SCD determined before and after the correction based on the repetition of steps C10, C70 and C75. Step D80 may display the different spinal parameters PAR or spinal curvature data SCD before correction, after correction, or both, for example as a comparative representation such as two table rows or columns with the pre- and post-correction SCD or PAR values. This allows operator O to visually compare the data on the GUI or in other representations on the screen. If the correction is insufficient or outside the preferred range, the rod R connected to the screw heads SH may be removed or unlocked, and a rod R having a different curvature or shape may be placed into the screw heads SH. The curvature or shape of the rod R may be changed by an instrument placed on the set screw or screw extender SE before the rod R is again tightened to the screw heads SH by the set screws. For example, the different spinal characterization parameters PAR may be displayed and visualized with the graphical user interface GUI to compare pre-rod-placement and post-rod-placement data, such as, but not limited to, the most relevant spinal characterization parameters PAR including Cobb angle, sagittal angle, and lordotic angle.
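The pre/post comparative representation of step D80 could be rendered, in its simplest textual form, as below; the table layout and parameter names are illustrative assumptions only, as the source only specifies that a comparative representation is shown:

```python
def comparison_table(pre, post):
    """Render pre-/post-correction parameter values (e.g. Cobb angle,
    sagittal angle) as a simple two-column table, as one possible
    realization of the comparative display of step D80."""
    lines = ["{:<24}{:>12}{:>12}".format("Parameter", "Pre", "Post")]
    for name in pre:
        lines.append("{:<24}{:>12.1f}{:>12.1f}".format(name, pre[name], post[name]))
    return "\n".join(lines)
```

In practice the same data would be drawn as GUI widgets rather than plain text.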
It is also possible that not only the screw extender SE is used for detection and tracking in steps U30, C10, but also a tool SD operatively attached to the screw extender SE is used for this detection and tracking, as illustrated in FIG. 3B. For example, tool SD may be a set screw driver or rod reduction tool for rod reduction, or a screw driver for threadably engaging the bone anchor of the pedicle screw PS with the vertebra. Thus, the tool SD may fix the axis between the screw extender SE and the bone anchor BA of the pedicle screw PS in a defined orientation. To this end, the shape of the tool SD may be tracked and detected, or the tool SD may be equipped with optical markers OM; in the variant shown in FIG. 3B, two optical markers OM are placed at the top and bottom of the handle of the tool SD.
By visualizing the changes to the spine before, during and after the rod correction, the operator or surgeon O can directly see, through the above-described measurements and displays, the extent of the spinal correction that will be achieved. The rod template RT that has been determined to be suitable can be made into a real physical rod, which is then placed within the living being L. Based on the data of the rod template RT, the spinal pose information and the positions of the attachment points AP, the spinal pose most likely to result from the selected rod template RT may be calculated prior to the correction, in other words, prior to attachment of the rod to the pedicle screws PS.
Moreover, databases accessible from the different devices 100 that record procedures may be used to create training data for future procedures and for deep learning with different types of Artificial Intelligence (AI), for example for training convolutional neural networks. For example, for each procedure, the video data and the calculated and detected metadata (including screw extenders, attachment points AP, rod templates, and pose information of the vertebral positions) may be stored in a database, indexed, and archived for use as training data.
In some cases, the angular orientation between the bone anchor BA and the screw head SH of the pedicle screw PS is not fixed but is limited to a range of angles, for example when using a polyaxial pedicle screw PS having an angular orientation range (e.g., ±27° or another angular range). In this case, because the screw extender SE is attached to the screw head SH of the pedicle screw PS, the orientation of the screw extender SE relative to the bone anchor BA may be unknown or not visible. Since the method 200 may rely on the position and orientation of the screw extender SE to calculate the attachment point AP of the screw head SH and thereafter the pose data information PDI_V for each vertebra V, e.g., VP1 to VP7, the calculation of the pose data information PDI_V for the spine may have a relatively high margin of error due to this uncertainty, given that the orientation between the screw extender SE or screw head SH and the bone anchor BA is unknown and may not be visible from outside the surgical incision SI. In this case, the operator or surgeon O may be instructed to move all of the screw extenders SE to the end of the angular range, such that the articulation joint formed between the screw head SH and the bone anchor BA is at its maximum angular point, whereby the orientation relationship between the screw extender SE and screw head SH on the one hand and the bone anchor BA on the other is fixed and known to some extent.
For example, as shown in FIG. 3C, all three (3) exemplary visualized screw extenders SE have been moved in the same direction so as to be tilted to the maximum orientation angle of 27° of the polyaxial pedicle screws PS, in the illustrated variant along the direction of spinal extension. This step may be indicated to the operator or surgeon O in a step D25 of the method 200, by the graphical user interface or another type of instruction (e.g., voice instructions, animation, etc.), before the operator or surgeon O performs step U30 of scanning the surgical incision SI and the screw extenders SE. The instructions may include an arrow or pointer displayed on the live video stream to show the direction of movement that places each screw extender SE into an outer angular position relative to the screw head SH. In a variant, a tool inserted into the screw extender SE that engages the bone anchor BA may be used to provide a fixed angular relationship between the screw extender SE and the bone anchor BA, thereby providing temporary monoaxiality of the pedicle screw PS for the measurements and calculations of steps C10, C20, C70. The tool may be the screw driver SD itself, which may be engaged through the screw extender SE into a portion of the bone anchor BA, as exemplarily shown in FIG. 3B, for example with the torque driving mechanism of the bone anchor BA or other elements of the bone anchor BA, to reorient the screw head SH onto the extension axis of the bone anchor BA and thereby into the orientation of a monoaxial screw configuration. It is also possible to attach such a tool to the bone anchor BA for the purpose of orienting the screw head SH without the use of the screw extender SE, or when the screw extender SE is removed. Furthermore, such a tool may be provided with an optical marker OM for detection efficiency, as shown in FIG. 3B.
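Once an extender is tilted to the end of the polyaxial range, the bone-anchor axis becomes computable from the extender axis because the offset angle is known. A 2D sketch within the tilt plane, assuming the ±27° range from the example above (the function, sign convention, and planar simplification are all assumptions):

```python
import math

MAX_TILT_DEG = 27.0  # example polyaxial range taken from the text above

def anchor_axis_from_extender(extender_axis, tilt_sign=1.0):
    """With the screw extender SE tilted to the end of the polyaxial
    range, estimate the bone anchor BA axis by rotating the extender
    axis back by the known maximum tilt angle (2D, in the tilt plane)."""
    a = math.radians(tilt_sign * MAX_TILT_DEG)
    x, y = extender_axis
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

A full 3D version would use a rotation about the known hinge axis of the articulation joint.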
The method 200 is not limited to being performed with the portable data processing device 100, but may also be performed with a non-portable system, such as a multi-camera system having fixedly mounted cameras, a data processing device or server, and an interactive screen. In this variant, multiple cameras may be used that provide different perspectives of the surgical incision SI, supplying image data for three-dimensional determination, and the live video feed and GUI are displayed on a display screen placed in the operating room. An algorithm operating on the data processing device may switch between camera views depending on whether a camera's view is occluded. Moreover, instead of a touch screen, another type of input device may be used, such as a mouse, a laser pointer with a corresponding screen, or another input device that can read movements or indications of the operator's or surgeon's O hand.
As another embodiment, it is also possible that the data processing device 100 comprises wearable Augmented Reality (AR) glasses, a head-mounted display with a transparent or translucent display screen, or a Head-Up Display (HUD), the glasses or display further comprising a camera for capturing a sequence of images for tracking and detecting the screw extenders. For example, a system such as that described in U.S. patent No. 10,854,098, which is incorporated herein by reference in its entirety, may be used. This provides a see-through augmented reality system, and displaying the live video feed of step D10 may not be required, as the live view is a direct view through the transparent display screen. Graphical elements such as the graphical primitives GP for the screw extenders SE, selector elements SF, text boxes, rod templates RT, and other elements of the graphical user interface may still be displayed on the transparent display screen.
According to another aspect, it is possible that different radio-opaque markers ROM are placed intra-operatively on the skin of the living being L, or that other types of markers are used that can be detected by X-ray, CT scan, or other types of medical imaging. For example, the ROM marker may represent a QR code or another type of optical code. Intra-operative imaging with the markers ROM in place, for example by X-rays of a C-arm or by CT scanning of an O-arm or 3D C-arm, then allows a link to be formed with the intra-operative patient images to determine the position and orientation of the bone anchors BA and screw heads SH, and these positions may then be matched to the pose information of the screw extenders SE, by 3D shape matching with image data from the image sensor or by QR code matching.
As another aspect of the methods presented herein, an operator or surgeon O may be provided with guidance for positioning the screw extenders SE to facilitate rod insertion. It is sometimes difficult to insert the rod percutaneously into a long construct with a relatively large number of screw extenders SE and pedicle screws PS, since the pedicle screws PS may not all be aligned; for example, one may be placed more laterally and another more centrally. However, based on the known spatial position of each pedicle screw PS, e.g., from the attachment point AP, the surgeon or operator O may tilt the different screw extenders SE toward the opposite sides of screws that are offset or misaligned relative to each other. For a laterally positioned pedicle screw PS, the surgeon or operator O may tilt the screw extender SE centrally, and for a centrally positioned pedicle screw PS, tilt the screw extender SE laterally. By such repositioning of the screw extenders SE and the resultant reorientation of the screw heads SH, better alignment of all the slots or openings of the screw extenders SE may be achieved, thereby facilitating insertion of the rod R.
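The tilt guidance above can be sketched as a one-dimensional rule along the medial-lateral axis; the function name, the use of the mean line as the reference, and the sign convention are assumptions for illustration:

```python
def tilt_suggestions(lateral_offsets_mm):
    """For each screw, suggest tilting its extender toward the side
    opposite its lateral offset from the construct's mean line:
    screws lateral of the mean tilt medially, and vice versa."""
    mean = sum(lateral_offsets_mm) / len(lateral_offsets_mm)
    return ["tilt medially" if x > mean else "tilt laterally"
            for x in lateral_offsets_mm]
```

A real implementation would fit a line or the rod curve through the AP points rather than using a simple mean.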
Another aspect of the invention includes a method 500 for scanning, displaying and verifying a bent spinal rod R for attachment to the attachment points AP. An exemplary flowchart of the method 500 is shown in FIG. 4, wherein a real spinal rod R may be scanned and visualized, for example by using an augmented-reality live video feed of the surgical incision, the spinal rod R serving as a template RT in the context of the different attachment points AP defined by the pedicle screws PS attached to the spinal column SC.
With the method 500, a spinal rod R that has been bent by a surgeon, operator, or user O may be scanned, captured, or otherwise imaged, for example with a scanning step U100, and then subjected to computing steps of the data processing apparatus 100 or 320. It is also possible to manufacture the actual rod R by means of a step F10 as described above, for example based on the selected rod template RT. The scanning step may also be aided or supplemented by three-dimensional data or depth data from a time-of-flight type sensor (e.g., a LIDAR sensor). Next, step C110 may be performed, wherein geometric data representing the rod R may be calculated as a rod dataset RD based on the captured image data (e.g., video sequences or image sequences with views of the rod R from different angles) or based on the three-dimensional or depth data. Then, with step D40, projection or rendering of the real rod R as a rod template may be performed, and the real rod template RRT may be displayed and selected, for example to be attached to one attachment point AP as shown in FIGS. 2H, 2I, 2J, 2K and 2N. With step U55 as described above, the reference attachment point AP serving as the zero-offset point can be changed so that the operator O can visually verify different placements of the real rod template RRT. Next, step C120 may be performed, wherein the remaining attachment points AP are moved or corrected to coincide with the real rod template RRT, based on the real rod template RRT and the initially suggested or selected reference attachment point AP.
With step C120, an approximation of the corrected spinal curve SC may be calculated, based on the raw data of the spinal curvature data SCD in the pre-correction state from steps C10 and C75, on the proposed rod R and the rod dataset RD of the real rod template RRT from the scan and calculation of step C110, and on the initially proposed or selected reference position (e.g., placement of the RRT) of the real rod template RRT, such that the corrected curve coincides with the position of a selected one of the pre-correction attachment points AP while the corrected positions of the remaining attachment points AP are calculated by step C120. Thus, with step C120, a new dataset is calculated for the attachment points AP that gives a virtual movement to the attachment points AP, as if the real spinal rod R had been attached to the pedicle screws PS. Next, the method 500 may further include a step of calculating the pose data information PDI_V for all involved vertebrae V based on the newly calculated virtual attachment points (using step C70 as described above), and a step of displaying the graphical primitives of the vertebrae V or a rendering of the spine SC (using step D70 as described above) to visualize the virtually corrected spine SC.
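The core geometric idea of step C120, anchoring the rod template at the zero-offset reference point and moving the remaining attachment points onto it, can be sketched as below. This is a simplified nearest-sample snap under the assumption of a densely sampled rod polyline; the function and data layout are hypothetical, and a real implementation would interpolate along the rod and preserve inter-screw spacing:

```python
import math

def corrected_attachment_points(aps, rod_pts, ref_index=0):
    """Anchor the rod polyline at one reference attachment point (the
    zero-offset point), then snap every other AP to the nearest sample
    point of the shifted rod, yielding the virtually moved APs."""
    shift = [r - p for r, p in zip(rod_pts[0], aps[ref_index])]
    shifted = [tuple(c - s for c, s in zip(pt, shift)) for pt in rod_pts]
    return [ap if i == ref_index else min(shifted, key=lambda q: math.dist(q, ap))
            for i, ap in enumerate(aps)]
```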
The display of step D70 shows the spine SC as a virtual or augmented-reality graphical primitive as it would result from the bent rod R, so that the surgeon, operator or user O can virtually verify the effect that the bent rod R would have on the spine SC. This allows verifying whether the bent rod R has the desired effect before the rod R needs to be attached to the attachment points AP of the pedicle screws PS. Also, steps C75 and D80 may be performed, wherein spinal curvature data SCD and spinal parameters are calculated and thereafter displayed with step D80. Step D80 may also include displaying pre-correction spinal curvature data SCD and spinal parameters based on the previously performed steps C75 and D80 for the pre-correction locations of the attachment points AP. Displaying the real pre-correction spinal curvature data SCD together with the virtual spinal curvature data SCD allows the operator O to verify whether the bent rod R will achieve, or at least approximate, the desired correction effect on the spinal column SC.
According to another aspect of the present invention, a method 600 is provided for determining different types of information characterizing the spinal column SC prior to placement and anchoring of the pedicle screws PS in the respective vertebrae, as schematically illustrated in the flowchart of FIG. 5. Thus, the method 600 allows different spinal data and parameters to be calculated before any spinal correction is performed with a fixation rod R attached to the pedicle screws PS, for example by determining the spinal parameters PAR or spinal curvature data SCD and the pose data information PDI_V for the different vertebrae, by first detecting different pedicle markers PM which can be inserted into or otherwise attached to the different vertebrae V via a guidewire or another equivalent device. This allows the step of defining or suggesting a fixation rod R for the spinal correction surgery to be performed even before the pedicle screws PS, which define the attachment points AP for the fixation rod R, are attached to the vertebrae V.
For example, U.S. patent publication No. 2021/0169506, which is incorporated herein by reference in its entirety, describes pedicle markers PM that may be attached to a guidewire GW, such as, but not limited to, a Kirschner wire or K-wire, a guide needle, a Schanz needle, a Denham needle, a Steinmann needle, a guide rod, or a guide shaft, which is inserted and placed via the surgical incision SI into an initial borehole DH that has been drilled into a vertebra V of the spinal column SC. A guidewire GW may be placed in each borehole DH to guide a pedicle screw PS into the borehole for insertion into the pedicle or vertebra V. The bone anchors BA of the pedicle screws PS typically include a through hole through which the guidewire passes, allowing the pedicle screw PS to be guided into the bore formed in the pedicle. Pedicle markers PM, such as but not limited to those described in U.S. patent publication No. 2021/0169506, may be attached to a guidewire or its equivalent; they allow easy insertion and placement of the pedicle screws PS over the guidewire, and also facilitate the surgery by helping the surgeon or operator O to place the guidewire GW and to remove it from the borehole.
FIG. 6 shows an exemplary simplified cross-sectional representation of a vertebra V in which two boreholes DH1, DH2 have been drilled or otherwise made into the vertebra V, two guidewires GW1, GW2 are placed in the boreholes DH1, DH2, respectively, and two pedicle markers PM1, PM2 are attached to the guidewires GW1, GW2, respectively. An optical marker OM is provided on a removable or fixedly attached optical marker portion 50, which can be used for robust detection of the pedicle markers PM1, PM2, the guidewires GW1, GW2, or both, using computer image data processing with a tracking algorithm. In the illustrated variant, the optical marker portion 50 is exemplarily made as a removable cap, clip, tube, clamp, flag, tab, or other device having two planar surfaces, each redundantly carrying an optical marker OM, similar to the device 50 shown in FIG. 1E. However, the optical marker OM may also be disposed directly on the pedicle marker PM, for example as an etched pattern, a printed pattern, a stamped or embossed pattern, a machined three-dimensional surface or structure, or another marking of the pedicle marker PM. Furthermore, the optical marker OM may also be placed directly onto the guidewire GW, with or without an optical marker OM on the pedicle marker PM, such as, but not limited to, tabs, marks, a longitudinal code formed along the axis of the guidewire GW, or a three-dimensional structure directly representing a code.
Method 600 has some aspects similar to method 200 described above, but instead of detecting the screw extenders SE in step C10, with or without optical markers OM, a step of detecting the pedicle markers PM is performed to determine the information characterizing the spinal column SC. Steps U10, D10, U20 and D20 may be substantially the same as in method 200, for providing a live video feed on display 120, providing a GUI for user operation, and entering calibration information. Step U230 may be performed, wherein the surgeon or operator O scans the surgical incision SI with the image capturing device 110 of the data processing device 100 with the objective of capturing images of the different pedicle markers PM. Next, the data processing apparatus 100 executes step C210: the pedicle markers PM are detected by image data processing, for example via pedicle markers PM provided with optical markers OM, by detecting the shape of the pedicle markers PM with image shape or pattern recognition without using optical markers, or by detecting optical markers OM attached directly to the guidewire GW itself or forming an integral part of the guidewire GW. The guidewire GW itself may also be detected in this step. The resulting information of step C210 may be pose data information PDI_PM of the pedicle markers PM or pose data information of the guidewires GW, or another type of coordinate data that can characterize the position and orientation of the respective guidewires GW. Assuming that two (2) guidewires GW or pedicle markers PM are attached to each vertebra V, this information can be used to determine the position and orientation of the individual vertebrae V of the spine SC.
Thereafter, optional step D225 may be performed to overlay graphical primitives on the live video feed to highlight the pedicle markers PM, the guidewires GW, or both, thereby assisting the surgeon or operator O in selecting or deselecting the pedicle markers PM or guidewires GW of interest for further calculation, wherein step D230 displays the graphical elements used for making the selection, and step U240 receives input data from the surgeon or operator O that actually selects the different pedicle markers PM or guidewires GW that have been detected, similar to steps D25, D30, U40. An optional step C220 of calculating geometry may then be performed by the data processing device 100, wherein a virtual attachment point AP_V may be calculated, which is the specific geometric position where the fixation rod R will most likely be positioned relative to the corresponding pedicle screw PS that has not yet been attached or anchored to the vertebra V. Herein, the attachment point AP_V is considered virtual because no such attachment point AP exists yet. Thus, with step C220, an estimate of the geometric positions of the attachment points AP may be provided as virtual attachment points AP_V, which may be used to estimate the different curvature or spinal parameterization values of the spinal column SC currently being operated on, and the curvature or spinal parameterization values that the spinal column SC would assume if a particular fixation rod R were placed and attached at these virtual attachment points AP_V, without having any direct information about the real attachment points AP.
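One simple way to estimate such a virtual attachment point is an offset along the guidewire axis from the borehole entry point, standing in for the statistical or learned relationship the text describes; the function and the default offset value are hypothetical and not taken from the source:

```python
def virtual_attachment_point(entry_point, wire_direction_unit, head_offset_mm=10.0):
    """Estimate the virtual attachment point AP_V for a not-yet-placed
    pedicle screw PS by offsetting from the borehole entry point along
    the guidewire axis. head_offset_mm stands in for a statistical or
    learned screw-head height and is an assumed value."""
    return tuple(p + head_offset_mm * d
                 for p, d in zip(entry_point, wire_direction_unit))
```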
This calculation may be accomplished using artificial intelligence with a trained network that uses historical data from image data regarding the location of the attachment point AP for a given borehole and a given guidewire GW placed into the borehole, for example based on historical medical imaging data such as, but not limited to, X-ray images, or by using tables or other pre-stored statistical information on the geometric relationships between the position and orientation of the guidewire GW, the position and orientation of the pedicle marker PM, and the position and orientation of the pedicle screw PS attached to the vertebra V.
Next, a step C270 similar to step C70 of method 200 may be performed, which is configured to calculate the pose data information PDI_V for each vertebra V connected to a guidewire GW, or to a guidewire GW and pedicle marker PM, and an optional display step D70 similar to the corresponding step of method 200 may be performed to display graphical primitives for each vertebra V on the live video feed or display images, so as to provide live video feedback in augmented reality showing the estimated or calculated positions of the actual vertebrae V projected onto the spine SC of the live video feed. Step C270 may use the pose data information PDI_PM of the pedicle markers PM for two or more vertebrae V or the pose data information of the guidewires GW, or may also use the data of the virtual attachment point pairs AP_V for two or more vertebrae V from step C220, or use both data sets AP_V and PDI_PM.
Moreover, another optional step C75 may be performed, similar to method 200, wherein spinal curvature data SCD or other spinal characterization parameters or parameterization values PAR of the spinal column SC may be calculated, such as approximated curvature data SCD geometrically characterizing the current spinal curve, and, for example, spinal parameterization data such as Cobb angle, sagittal angle, axial angle, distance between adjacent vertebrae, and other parameters of the spinal column SC, so as to calculate an estimate of the spinal curvature of the living being L intra-operatively without invasive medical imaging, such as X-ray imaging, and even before any pedicle screws PS have been placed or anchored. The data SCD and PAR may then be displayed on the display 120 of the data processing apparatus 100 to provide feedback to the surgeon or operator O.
As indicated above, with the method 600, the correction to be given to the spinal column SC can be verified by estimation prior to any attachment of pedicle screws PS. For example, after performing the method 600 once, the surgeon or operator O will have first estimation information about the spine SC, with data on SCD and PAR that can be displayed with step D80, and even with visual feedback on the curvature and position of the spine with step D270, which displays overlay primitives projected onto the live video feed. The surgeon or operator O may then select and place a spinal cage, fusion device, or other type of intervertebral implant between the two exemplary adjacent vertebrae V1, V2 of the spinal column SC, and may also select the type and configuration of the intervertebral implant, for example by selecting its thickness, or by selecting and adjusting a particular angle, such as the sagittal angle, for use in spinal fusion procedures. The placement of the intervertebral implant may impart some reorientation and displacement between the positions and orientations of the two adjacent vertebrae V1, V2, so that the operator or surgeon O may again perform the method 600 to determine the new values of SCD and PAR that partially correct the spinal column SC based on the placement of the intervertebral implant, but without any placement of pedicle screws PS1, PS2 and without any attachment of a fixation rod R.
Based on the newly calculated values of the parameters PAR, SCD, or both, step D80 of method 600 may provide a table, curve, or other type of visualization of PAR and SCD before and after insertion of the intervertebral implant, to provide comparative data for a first correction of the spinal column prior to placement of any rod R. If the operator or surgeon O is not satisfied with the new PAR and/or SCD values calculated by method 600 as given by the first intervertebral implant, for example with the resulting thickness or angle, the intervertebral implant may be replaced with a different intervertebral implant having a different configuration. Also, in variants in which the intervertebral implant is of a configurable type, the operator or surgeon O may vary a parameterized value of the intervertebral implant to change the thickness or distance between the superior and inferior bone-engaging surfaces, or the angle between them. Thereafter, method 600 may be performed again to verify the results of the dimensional and configuration changes of the intervertebral implant.
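The geometric effect of an implant's wedge angle on the superior vertebra can be sketched as a planar rotation about the disc centre; this is a deliberately simplified 2D model with assumed names and inputs, whereas the real update to PDI_V would also change the vertebra's 3D orientation:

```python
import math

def apply_wedge_angle(vp_superior, disc_center, wedge_deg):
    """Rotate the superior vertebra's sagittal-plane position about the
    disc centre by an implant's wedge angle (2D sketch of the
    reorientation/displacement described above)."""
    a = math.radians(wedge_deg)
    x = vp_superior[0] - disc_center[0]
    y = vp_superior[1] - disc_center[1]
    return (disc_center[0] + x * math.cos(a) - y * math.sin(a),
            disc_center[1] + x * math.sin(a) + y * math.cos(a))
```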
Moreover, because the method 600 may optionally utilize step C220 to calculate the virtual attachment points AP_V for the pedicle screw pairs PS1, PS2 that may be placed into the boreholes DH1, DH2, a potential rod R may be determined by calculating one or more rod templates RT that may be proposed to the operator or surgeon O, for example utilizing steps C30, D40, D50, U55, D55, C60 of the method 200, such that different proposed rod templates RT are presented and virtually tested for the spinal correction, using the one or more rod templates RT of step D50 and the creation of manufacturing data or information in steps D55, C60.
Moreover, aspects of methods 600 and 200 may also be combined with the steps of method 500, wherein the effect of a rod template RT or rod data RD on the spine SC can be virtually tested by calculation before the physical fixation rod R is actually attached to any pedicle screw PS. For example, after the step of selecting a rod template RT is performed, for example with step U50 or with step C30 of method 200 for calculating and presenting rod templates RT, this data may be processed by step C120, in which new attachment points AP are calculated based on the virtual attachment points AP_V originating from step C220 of method 600, thereby calculating information PDI_V on the position and orientation of the vertebrae of the virtually corrected spine SC based on the presented virtual rod template RT or rod data RD. This aspect differs from method 500 in that the data on the attachment points AP is merely virtual, referred to herein as AP_V, because the pedicle screws PS have not yet been placed. Steps C70, D70, C75, and D80 may also be performed to calculate data on the spinal curvature SCD and the spinal parameters PAR (step C75), to display different data on SCD and PAR (step D80), such as post-correction and pre-correction data, and to display the vertebrae V as primitives in their new virtual positions and orientations with step D70, for example using augmented reality projection onto the live video feed.
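A deliberately simplified illustration of the idea behind step C120, moving the virtual attachment points AP_V onto the selected rod shape to obtain corrected positions, might look like this (planar model, all names hypothetical):

```python
def corrected_attachment_points(ap_virtual, coeffs):
    """Move each virtual attachment point AP_V onto a planar rod curve
    z = a*y**2 + b*y + c, keeping its y coordinate (a deliberately
    simplified correction model)."""
    a, b, c = coeffs
    return [(y, a * y * y + b * y + c) for y, _z in ap_virtual]

# Hypothetical AP_V points and a straight-rod template z = 2.0.
ap_v = [(0.0, 5.0), (20.0, 6.5), (40.0, 5.8)]
ap_corrected = corrected_attachment_points(ap_v, (0.0, 0.0, 2.0))
```

From such corrected attachment points, the corrected vertebra poses PDI_V and hence new SCD/PAR values could be derived as in steps C70 to D80.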
Once the virtually determined rod template RT is satisfactory, a physical fixation rod R may be manufactured, for example by means of steps D55, C60 or by means of method 500. The surgeon or operator O may then attach the pedicle screws PS to the boreholes DH of the vertebrae V, after which he or she may attach the fixation rod R to the pedicle screws PS. With the aid of method 200, the spinal correction can be verified after placement of the pedicle screws PS and the fixation rod R. Alternatively, the surgeon or operator O may first attach the pedicle screws PS to the boreholes DH of the vertebrae V before actual manufacture of the rod R, and method 200 may then be performed to verify the attachment points AP, now precisely defined by the attached pedicle screws PS, in order to determine another or corrected rod template RT or rod data RD for the physical fixation rod R. At this stage, method 200 may be performed to verify the spinal correction imparted by the physical fixation rod R during surgery.
As indicated above, the methods 200, 500, and 600 described herein, as well as combinations of their steps and portions thereof, may be implemented on different types of data processing apparatus 100, and may also be programmed as computer-readable code that can be stored on a non-transitory computer-readable medium, such as any kind of data storage or memory device, and configured to perform methods 200, 500, 600, or steps thereof, when executed on the data processing apparatus 100 or on a data processor of another type of computer system, such as a distributed computer system with network and/or cloud access. For example, it is possible to use a tablet-type device for image visualization and image capture while, as a distributed-computing variant, the actual computing steps are performed remotely on a server or personal computer operatively connected to the tablet computer via a network.
Although the present invention has been disclosed with reference to certain preferred embodiments, many modifications, alterations, and changes to the described embodiments may be made without departing from the spirit and scope of the invention, as defined in the appended claims and their equivalents. Accordingly, it is intended that the invention not be limited to the described embodiments, but that it have the full scope defined by the language of the following claims.

Claims (17)

1. A method for assisting in orthopedic surgery of a spine, the method performed with a data processing device comprising a display device and an image capturing device, the method comprising the steps of:
capturing a sequence of images with the image capture device such that a field of view of the image capture device captures images of a plurality of screw extenders, each screw extender holding a pedicle screw, the plurality of screw extenders being disposed at a surgical incision of a body of a living being undergoing an orthopedic procedure;
providing a live video feed on the display device by displaying at least some of the captured images or by utilizing a direct view through a transparent display device;
detecting the plurality of screw extenders with the data processing device based on the captured sequence of images;
first calculating an orientation and position of the detected plurality of screw extenders;
second calculating a three-dimensional (3D) position of a screw head of each pedicle screw based on the first calculated orientation and the position; and
projecting and displaying, on the display device having a graphical user interface, each calculated 3D position of the plurality of screw heads with a graphical element at a position corresponding to the position of the screw head projected onto the currently provided image of the live video feed.
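The projection step recited above corresponds, in the simplest case, to a pinhole camera model. A minimal sketch follows; the intrinsic parameters fx, fy, cx, cy and the point coordinates are illustrative, and a real AR system would also model lens distortion and the display-to-camera transform:

```python
def project_to_image(p_cam, fx, fy, cx, cy):
    """Project a 3D point in the camera frame to pixel coordinates
    using an undistorted pinhole model."""
    x, y, z = p_cam
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (fx * x / z + cx, fy * y / z + cy)

# Screw-head position estimated 0.4 m in front of the camera
# (hypothetical values), mapped to a 1920x1080 live video frame.
u, v = project_to_image((0.02, -0.01, 0.40), fx=1400, fy=1400, cx=960, cy=540)
```

The graphical element for each screw head would then be drawn at pixel (u, v) on the live video feed.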
2. The method of claim 1, further comprising the step of:
displaying a plurality of fixation rod templates as graphical elements on the graphical user interface of the display device, the plurality of fixation rod templates having different shapes.
3. The method of claim 1, further comprising the step of:
fitting a curve to points represented by the plurality of 3D positions of the screw heads; and
displaying a template of a fixation rod as a graphical element on the display device, the fixation rod being shaped to at least partially match the fitted curve.
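Once a curve has been fitted to the screw-head positions, candidate rod templates could be ranked by how closely they match it. The following scoring sketch is purely illustrative (the disclosure does not prescribe a matching metric; names and values are hypothetical):

```python
import math

def rod_match_error(screw_heads, rod_z):
    """RMS mismatch between screw-head points (y, z) and a candidate
    rod shape given as a callable z = rod_z(y)."""
    sq = [(z - rod_z(y)) ** 2 for y, z in screw_heads]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical screw-head positions in a sagittal plane (mm).
heads = [(0.0, 1.0), (20.0, 1.2), (40.0, 1.1)]
straight_rod = lambda y: 1.1
error = rod_match_error(heads, straight_rod)  # small error = close match
```

A template with the smallest error could be proposed first to the operator.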
4. The method of claim 2, further comprising the step of:
graphically selecting one of the plurality of fixation rod templates; and
placing a graphical element representing the selected one of the plurality of fixation rod templates at one of the 3D positions of the screw heads.
5. The method of claim 1, further comprising the step of:
visually highlighting the detected plurality of screw extenders; and
allowing at least one of the plurality of screw extenders to be selected or deselected using a graphical user interface of the data processing apparatus.
6. The method of claim 1, wherein the data processing apparatus further comprises a distance measurement sensor, the method further comprising the steps of:
capturing distance information with the distance measurement sensor,
wherein the step of first calculating is further based on the distance information.
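Distance information as recited in claim 6 allows the scale ambiguity of a single camera to be resolved: a pixel plus a depth reading determines a full 3D point. A minimal back-projection sketch (pinhole model, hypothetical parameter values):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Recover a 3D camera-frame point from pixel (u, v) and a depth
    reading, e.g. from a dToF/LiDAR sensor (undistorted pinhole model)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A detected screw-extender feature at pixel (1030, 505), with the
# depth sensor reporting 0.40 m (illustrative values).
p_cam = backproject(1030.0, 505.0, 0.40, fx=1400, fy=1400, cx=960, cy=540)
```

This is the inverse of the pinhole projection, so image detection and depth sensing together yield metric 3D positions for the first calculating step.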
7. The method of claim 6, wherein,
the distance measurement sensor comprises a direct time-of-flight (dToF) LiDAR sensor or a structured light sensor such as FaceID.
8. The method of claim 1, further comprising the step of:
third calculating pose data information for at least two vertebrae based on the orientation and the position, from the step of first calculating, of at least one of the detected plurality of screw extenders attached to a vertebra; and
projecting and displaying, on the display device using the graphical user interface, a graphical primitive representing a vertebra, the graphical primitive being displayed at a location corresponding to the location of the vertebra projected onto the currently provided image of the live video feed.
9. The method of claim 8, further comprising the step of:
fourth calculating spinal curvature data of a spinal column based on the orientation and the position, from the step of first calculating, of the detected plurality of screw extenders attached to the vertebrae;
fifth calculating a spinal parameter of the spinal column based on the orientation and the position, from the step of first calculating, of the detected plurality of screw extenders attached to the vertebrae; and
projecting and displaying, on the display device using the graphical user interface, a graphical primitive representing a curvature of the spine, the graphical primitive being displayed at a location corresponding to the location of the spine projected onto the currently provided image of the live video feed.
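The spinal curvature data of claim 9 could, as one illustrative possibility among others, be summarized by a Cobb-style angle between the most and least tilted tracked vertebrae. The sketch below uses a simplified 2D sagittal model with hypothetical names and values:

```python
import math

def tilt_from_axis(axis_yz):
    """Sagittal tilt in degrees of a vertebra axis given as a (y, z)
    direction in a common reference frame."""
    y, z = axis_yz
    return math.degrees(math.atan2(y, z))

def cobb_like_angle(tilts_deg):
    """Cobb-style curvature estimate: difference between the most and
    least tilted vertebrae (tilt angles in degrees)."""
    return max(tilts_deg) - min(tilts_deg)

# Hypothetical axes of three instrumented vertebrae.
axes = [(0.2, 0.98), (0.0, 1.0), (-0.25, 0.97)]
curvature = cobb_like_angle([tilt_from_axis(a) for a in axes])
```

Clinically, Cobb angles are measured between endplates on radiographs; this tracked-axis variant is only an intraoperative approximation.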
10. A data processing device configured to assist in orthopedic surgery of a spine, the data processing device comprising a display device, a data processor, and an image capture device, the data processor configured to:
instruct capture of a sequence of images with the image capture device such that a field of view of the image capture device captures images of a plurality of screw extenders, each screw extender holding a pedicle screw, the plurality of screw extenders being disposed at a surgical incision of a body of a living being undergoing an orthopedic procedure;
instruct display of at least some of the captured images to provide a live video feed on the display device;
executing a detection algorithm to detect the plurality of screw extenders with the data processing device based on the captured sequence of images;
first calculating the detected orientation and position of the plurality of screw extenders;
second calculating a three-dimensional (3D) position of a screw head of each pedicle screw based on the first calculated orientation and the position; and
each calculated 3D position of the plurality of screw heads is projected and displayed on the display device with a graphical element having a graphical user interface at a position corresponding to the position of the screw head of the currently displayed image projected onto the live video feed.
11. A non-transitory computer readable medium having computer instruction code recorded thereon, the computer instruction code configured to perform a method for computer assisted spinal orthopaedic surgery when the computer instructions are executed on a data processing device operatively connected to a display device and an image capture device, the method comprising the steps of:
capturing a sequence of images with the image capture device such that a field of view of the image capture device captures images of a plurality of screw extenders, each screw extender holding a pedicle screw, the plurality of screw extenders being disposed at a surgical incision of a body of a living being undergoing an orthopedic procedure;
providing a live video feed on the display device by displaying at least some of the captured images or by utilizing a direct view through a transparent display device;
detecting the plurality of screw extenders with the data processing device based on the captured sequence of images;
first calculating the detected orientation and position of the plurality of screw extenders;
second calculating a three-dimensional (3D) position of a screw head of each pedicle screw based on the first calculated orientation and the position; and
projecting and displaying, on the display device having a graphical user interface, each calculated 3D position of the plurality of screw heads with a graphical element at a position corresponding to the position of the screw head projected onto the currently provided image of the live video feed.
12. A method for assisting an orthopedic procedure to determine a correction of a spine by a rod, the method performed with a data processing device, the method comprising the steps of:
scanning a fixation rod with an image capturing device to obtain scan data of the fixation rod, the fixation rod having been bent for spinal correction;
first calculating curvature data of the fixed rod based on the scan data;
receiving data of a position of an attachment point of the fixation rod to the spine, the position of the attachment point having been determined based on position data of screw heads of pedicle screws attached to at least two vertebrae of the spine;
second calculating data of a corrected position of the attachment point by taking into account the curvature data of the fixation rod from the first calculating step, the corrected position of the attachment point being based on a correction given to the position of the attachment point when the fixation rod is attached to the attachment point of a corrected spine;
third calculating a spinal parameter of the corrected spinal column based on the data of the corrected location of the attachment point of the corrected spinal column; and
displaying the spinal parameters of the corrected spinal column on a display device.
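The second calculating step of claim 12 implies redistributing the attachment points along the bent rod while respecting their spacing. A simplified arc-length sketch (polyline rod model, all names hypothetical, no bending mechanics):

```python
import math

def redistribute_along_rod(spacings, rod_points):
    """Walk along a polyline rod shape and place corrected attachment
    points at the given cumulative spacings (simplified arc-length
    model; a real system would model tissue and screw constraints)."""
    targets = [0.0]
    for s in spacings:
        targets.append(targets[-1] + s)
    out, walked, seg = [], 0.0, 0
    for t in targets:
        while seg < len(rod_points) - 1:
            p, q = rod_points[seg], rod_points[seg + 1]
            seg_len = math.dist(p, q)
            if walked + seg_len >= t:
                f = (t - walked) / seg_len
                out.append(tuple(a + f * (b - a) for a, b in zip(p, q)))
                break
            walked += seg_len
            seg += 1
        else:
            out.append(rod_points[-1])  # past the rod end: clamp
    return out

# Hypothetical original inter-screw spacings (mm) and a bent rod
# approximated by a coarse polyline.
corrected_ap = redistribute_along_rod([30.0, 32.0],
                                      [(0.0, 0.0), (30.0, 4.0), (65.0, 5.0)])
```

From such corrected attachment points, the corrected vertebra poses and spinal parameters of the subsequent claim steps could be derived.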
13. The method for assisting an orthopedic procedure to determine correction of the spine according to claim 12, further comprising the steps of:
fourth calculating pose data information for vertebrae of the corrected spine based on the data of the corrected location of the attachment point of the corrected spine.
14. The method for assisting an orthopedic procedure to determine correction of the spine according to claim 13, further comprising the steps of:
displaying, on the display device, a graphical primitive representing the vertebra using the graphical user interface, the graphical primitive being displayed at a location corresponding to the location of the vertebra.
15. A method for assisting in orthopedic surgery of a spine, the method performed with a data processing device comprising a display device and an image capturing device, the method comprising the steps of:
capturing a sequence of images with the image capture device such that a field of view of the image capture device captures images of at least one of a plurality of pedicle markers respectively placed on a plurality of guide wires, or the plurality of guide wires, the plurality of pedicle markers or the plurality of guide wires being arranged at a surgical incision of a body of a living being undergoing an orthopedic procedure;
providing a live video feed on the display device by displaying at least some of the captured images or by utilizing a direct view through a transparent display device;
detecting the plurality of pedicle markers or the plurality of guide wires with the data processing device based on the captured sequence of images;
first calculating the orientation and position of the detected plurality of pedicle markers or the detected plurality of guide wires; and
second calculating pose data information of at least two vertebrae based on the orientation and the position, from the step of first calculating, of at least one of the detected plurality of pedicle markers or the detected plurality of guide wires attached to a vertebra.
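Deriving vertebra pose data from a tracked pedicle marker, as in claim 15, requires a known geometric offset between the marker and the anatomy it references. A minimal sketch (the offset value, names, and straight-axis assumption are hypothetical):

```python
import math

def vertebra_point_from_marker(marker_pos, marker_axis, offset_mm):
    """Estimate a vertebra reference point by stepping from the tracked
    pedicle marker along its (guide-wire) axis by a known offset."""
    n = math.sqrt(sum(a * a for a in marker_axis))
    return tuple(p + offset_mm * a / n for p, a in zip(marker_pos, marker_axis))

# Hypothetical tracked marker pose: position (mm) and axis direction.
vertebra_ref = vertebra_point_from_marker((12.0, -3.0, 410.0),
                                          (0.0, 0.0, 2.0), 45.0)
```

Combining such reference points and axes for at least two vertebrae yields the pose data information recited in the second calculating step.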
16. The method of claim 15, further comprising the step of:
third calculating parameters characterizing the spine from the pose data information from the step of second calculating; and
displaying the parameters characterizing the spine on the display device.
17. The method of claim 15, further comprising the step of:
fourth calculating a virtual three-dimensional (3D) position of a screw head of each pedicle screw based on the orientation and the position from the step of first calculating; and
projecting and displaying, on the display device having a graphical user interface, each calculated virtual 3D position of the plurality of screw heads with a graphical element at a position corresponding to the position of the screw head projected onto the currently provided image of the live video feed.
CN202280032309.1A 2021-03-01 2022-03-01 Methods and systems for using augmented reality to propose spinal rods for orthopedic surgery Pending CN117320656A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IBPCT/IB2021/051694 2021-03-01
IBPCT/IB2021/056242 2021-07-12
IB2021056242 2021-07-12
PCT/IB2022/051805 WO2022185210A1 (en) 2021-03-01 2022-03-01 A method and system for proposing spinal rods for orthopedic surgery using augmented reality

Publications (1)

Publication Number Publication Date
CN117320656A 2023-12-29

Family

ID=89274190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280032309.1A Pending CN117320656A (en) 2021-03-01 2022-03-01 Methods and systems for using augmented reality to propose spinal rods for orthopedic surgery

Country Status (1)

Country Link
CN (1) CN117320656A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination