WO2024054737A1 - Computer vision systems and methods for classifying implanted thoracolumbar pedicle screws - Google Patents

Computer vision systems and methods for classifying implanted thoracolumbar pedicle screws

Info

Publication number
WO2024054737A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
anterior
view image
lateral view
machine learning
Application number
PCT/US2023/071817
Other languages
French (fr)
Inventor
Adrish ANAND
Alex FLORES
Ron GADOT
Alexander E. ROPPER
David Shuo XU
Original Assignee
Baylor College Of Medicine
Application filed by Baylor College of Medicine
Publication of WO2024054737A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/12 Devices for detecting or locating foreign bodies
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the present application relates generally to digital medical systems. More specifically, the present application provides an automated computer vision approach to classifying implanted screw and rod systems from digital imaging of a patient.
  • the spinal column is one of the most important parts of the body that helps with movement and balance, upright posture, protection of the spinal cord, and shock absorption.
  • Adults typically have 24 vertebrae in their spinal column.
  • Each vertebra has two cylinder-shaped projections called pedicles, which are hard bones that stick out from the back part of the vertebral body.
  • the pedicles provide protection for the spinal cord and nerves, and serve as a bridge to join the front and back parts of the vertebra.
  • pedicle screw fixation is the most widespread technique to achieve spinal fusion and stabilization.
  • Other options for spinal fusion and stabilization use wires, bands, and hooks; however, pedicle screw fixation has many biomechanical advantages.
  • despite pedicle screws being used as clinical best practice, screw loosening and breakage are recurring mechanical issues of spinal fixation that lead to revision surgery in about 6% of cases. Additional reasons for revision surgery are disc herniation, scar tissue, hardware issues, and bone fragments.
  • revision surgical fusion requires decortication of the involved vertebrae with implantation of native bone or allograft into the target area, and may be augmented with bone morphogenic protein and/or external bone growth stimulator to promote bone formation. Previous instrumentation often must be removed and replaced during this procedure to allow the new bone graft to heal properly. Accordingly, revision surgery can be made faster and safer if information is known about which fixation components (e.g., which manufacturer) were used for the initial surgery. This information is often unavailable, however, due to patients being referred by other centers, or due to missing information in the patients’ records.
  • a surgeon or other expert medical professional can sometimes decipher which fixation component was used from a radiograph of a patient but doing so can take significant time away from the surgeon or other expert medical professional and is still subject to error. It was against these drawbacks that the present disclosure was conceived.
  • the present disclosure involves new and innovative machine learning-based systems and methods for classifying implanted thoracolumbar pedicle screws from captured images of a patient in which the thoracolumbar pedicle screws are implanted.
  • the classification system therefore provides information regarding the pedicle screw that is implanted in a patient, such as a manufacturer of the pedicle screw, and provides that information for a surgeon to use prior to a revision surgery, which allows for adequate pre-procedure preparation in planning and materials. This process may therefore reduce the time of a revision surgery as compared to a surgeon or other expert medical professional manually reviewing radiographs.
  • the classification system also improves upon identification accuracy and speed compared to a surgeon or other medical professional manually reviewing radiographs to classify a pedicle screw.
  • a method includes receiving, by a processor, image data associated with a patient.
  • the image data includes a lateral view image of a lumbar spine of the patient and an anterior-posterior view image of the lumbar spine of the patient.
  • a fused image is generated by the processor that combines the lateral view image and the anterior-posterior view image.
  • the processor determines a classification of a thoracolumbar pedicle screw in the fused image.
  • a method includes receiving image data associated with a patient.
  • the image data includes a lateral view image of a lumbar spine of the patient and an anterior-posterior view image of the lumbar spine of the patient.
  • a fused image is generated that combines the lateral view image and the anterior-posterior view image.
  • a classification of a thoracolumbar pedicle screw in the fused image is determined.
  • determining the classification of the thoracolumbar pedicle screw includes determining a manufacturer of the thoracolumbar pedicle screw.
  • determining the classification of the thoracolumbar pedicle screw further includes providing the lateral view image and the anterior-posterior view image as the input to the at least one machine learning model.
  • the method further includes processing, prior to generating the fused image, at least one of the lateral view image and the anterior-posterior view image to crop out an interbody cage in the at least one of the lateral view image and the anterior-posterior view image.
  • generating the fused image includes processing the image data to resize at least one of the lateral view image and the anterior-posterior view image such that the lateral view image and the anterior-posterior view image are at the same scale.
  • generating the fused image further includes processing the image data to adjust a contrast of at least one of the lateral view image and the anterior-posterior view image such that the respective contrasts of the lateral view and anterior-posterior view images are within the same range.
  • in a seventh aspect, which may be combined with other aspects described herein (e.g., the 6th aspect), the lateral view and anterior-posterior view images are combined such that the fused image inputs each of the lateral view image and the anterior-posterior view image individually to the machine learning model at the same time.
  • each of the lateral view and anterior-posterior view images is a radiograph.
  • the at least one machine learning model includes a first machine learning model trained on lateral view images, a second machine learning model trained on anterior-posterior view images, and a third machine learning model trained on fused images.
  • the at least one machine learning model is implemented as one or more of a support vector machine and a neural network.
  • in an eleventh aspect, which may be combined with other aspects described herein (e.g., the 2nd aspect, 3rd aspect, and 5th aspect through the 10th aspect), a system includes a memory and a processor in communication with the memory.
  • the processor is configured to receive image data associated with a patient that includes a lateral view image of a lumbar spine of the patient and an anterior-posterior view image of the lumbar spine of the patient; generate a fused image that combines the lateral view image and the anterior-posterior view image; and determine, using at least one machine learning model with the fused image as an input, a classification of a thoracolumbar pedicle screw in the fused image.
  • the processor is further configured to process, prior to generating the fused image, at least one of the lateral view image and the anterior-posterior view image to crop out an interbody cage in the at least one of the lateral view image and the anterior-posterior view image.
  • to generate the fused image, the processor is further configured to process the image data to adjust a contrast of at least one of the lateral view image and the anterior-posterior view image such that the respective contrasts of the lateral view and anterior-posterior view images are within a same range.
  • the processor is configured to process the image data to resize at least one of the lateral view image and the anterior-posterior view image such that the lateral view image and the anterior-posterior view image are at the same scale.
  • FIG. 1 illustrates a block diagram of an example computer vision-based system for classifying an implanted thoracolumbar pedicle screw, according to an aspect of the present disclosure.
  • FIG. 2 illustrates a flow chart of a method for classifying an implanted thoracolumbar pedicle screw, according to an aspect of the present disclosure.
  • FIG. 3 illustrates a fused image, according to an aspect of the present disclosure.
  • FIG. 4 illustrates a flow chart of a pre-processing method for generating a fused image, according to an aspect of the present disclosure.
  • FIG. 5 illustrates a block diagram of a computing and networking environment, according to an aspect of the present disclosure.
  • the present disclosure involves an automated computer vision approach to classifying implanted thoracolumbar pedicle screw and rod systems from digital imaging and communications in medicine (DICOM) images, such as radiographs.
  • Previously implanted thoracolumbar pedicle screw and rod systems may be removed during revision surgical fusion surgery to allow the new bone graft to heal properly. Revision surgery can therefore be made faster and safer if information is known about which thoracolumbar pedicle screws were used for the initial spinal fusion and stabilization surgery. This information is often unavailable, however, due to patients being referred by other centers, or due to missing information in the patients’ records.
  • the provided classification system uses machine learning to process a DICOM image, such as a radiograph, containing a pedicle screw in order to classify the pedicle screw. For example, the classification system may determine a manufacturer that produced the pedicle screw contained in the image.
  • the classification system may receive a lateral view image and an anterior-posterior view image of a patient’s lumbar spine having an implanted pedicle screw.
  • Each of the images may be a DICOM image, such as a radiograph.
  • from the received images, the classification system generates a fused image that combines the lateral view image and the anterior-posterior view image.
  • the fused image alone is provided to a machine learning model that is trained to output a classification of a pedicle screw contained in the lateral view and anterior-posterior view images of the fused image.
  • the fused image along with the lateral view image and/or the anterior-posterior view image may be provided to a machine learning model that is trained to output a classification of a pedicle screw contained in the images.
  • the classification system provides information regarding the pedicle screw that is implanted in a patient and provides that information for a surgeon to use prior to a revision surgery, which allows for adequate pre-procedure preparation in planning and materials. This process may therefore reduce the time of a revision surgery as compared to a surgeon or other expert medical professional manually reviewing radiographs.
  • the classification system also improves upon identification accuracy and speed compared to manual expert review.
  • FIG. 1 illustrates an example computer network 10 (e.g., a telecommunications network) that may be used to implement various aspects of the present application.
  • the computer network 10 includes various devices communicating and functioning together in the gathering, transmitting, and/or requesting of data related to classifying thoracolumbar pedicle screws contained in an image, such as a radiograph.
  • the components of the computer network 10, and the subcomponents of each component may be combined, rearranged, removed, or provided on a separate device or server.
  • a medical professional may capture images of a patient with an image capture device 100.
  • the image capture device 100 may be, for example, an x-ray machine or another suitable device for imaging a screw implanted in a patient.
  • the captured images (e.g., a radiograph) may be communicated to a classification system 110, through the network 130 or a computing device 140.
  • the computing device 140 may be a personal computer, workstation, tablet device, or other suitable processing device that is physically in the medical environment in which the image capture device 100 is located.
  • the computing device 140 may be in the same room of a hospital as the image capture device 100, or alternatively may be located in a different room of the hospital.
  • a communications network 130 allows for communication in the computer network 10.
  • the communications network 130 may include one or more wireless networks such as, but not limited to, one or more of a Local Area Network (LAN), Wireless Local Area Network (WLAN), a Personal Area Network (PAN), Campus Area Network (CAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), Global System for Mobile Communications (GSM), Personal Communications Service (PCS), Digital Advanced Mobile Phone Service (D-AMPS), Bluetooth, Wi-Fi, Fixed Wireless Data, 2G, 2.5G, 3G, 4G, LTE networks, enhanced data rates for GSM evolution (EDGE), General packet radio service (GPRS), enhanced GPRS, messaging protocols such as TCP/IP, SMS, MMS, extensible messaging and presence protocol (XMPP), real time messaging protocol (RTMP), instant messaging and presence protocol (IMPP), instant messaging, USSD, IRC, or any other wireless data networks or messaging protocols.
  • the classification system 110 may receive the captured images and classify a thoracolumbar pedicle screw contained in the captured images using a machine learning model (e.g., the model 116), such as by identifying a manufacturer of the thoracolumbar pedicle screw.
  • Each manufacturer may design its thoracolumbar pedicle screws in its own distinct way such that the manufacturer is identifiable by an image of its thoracolumbar pedicle screws. For example, the spacing between each turn of the screw (the pitch), the tapering of the screw from full diameter to its tip, and the screw thread to diameter ratio vary across manufacturers. These features are identifiable in lateral radiographs. Additionally, the junction between the screw and the rod may differ across manufacturers, which may be seen in the anterior-posterior view.
  • the classification system 110 may include a processor in communication with a memory 114.
  • the processor may be a CPU 112, an ASIC, or any other similar device.
  • the memory 114 may store at least the model 116.
  • One or more images may be input to the model 116 which thereafter outputs a classification for a thoracolumbar pedicle screw contained in the input image(s).
  • the input images may include one or more of a lateral view of a patient’s lumbar spine, an anterior-posterior view of a patient’s lumbar spine, and a fused image 300 (e.g., FIG. 3) combining the lateral and anterior-posterior views.
  • the model 116 may be trained using the Bag of Visual Words (BoVW) technique.
  • BoVW is a technique to compactly describe images and compute similarities between images, and is used for image classification.
  • an input image is broken down into a set of independent features, such as by a feature extractor (e.g., KAZE, Scale Invariant Feature Transform (SIFT), Binary Robust Invariant Scalable Keypoints (BRISK), etc.).
  • the KAZE feature extractor is used to train the model 116.
  • a feature extractor may return an array containing computed values (e.g., descriptors) of keypoints for the input image.
  • in BoVW, all the input images are converted to histograms, and the model 116 may be trained with the histograms for classifying the images.
  • the model 116 may be implemented as one or more of a neural network and a support vector machine (e.g., linear, polynomial, or Gaussian).
  • the classification system 110 may employ a mixture of experts model.
  • the memory 114 may store a machine learning model trained as an expert for each of the lateral view images (e.g., the model 118), the anterior-posterior view images (e.g., the model 120), and the fused images (e.g., the model 122).
  • the model 116 serves as a gating model that learns which expert model 118-122 to trust based on the input to be predicted, and combines the predictions to classify a thoracolumbar pedicle screw in the images.
  • Machine learning models may include support vector machines, logistic regression techniques, linear discriminant analysis, linear regression analysis, artificial neural networks, machine learning classifier algorithms, or classification/regression trees in some embodiments.
  • machine learning systems may employ Naive Bayes predictive modeling analysis of several varieties, learning vector quantization artificial neural network algorithms, or implementation of boosting algorithms such as AdaBoost or stochastic gradient boosting systems for iteratively updating weighting to train a machine learning classifier to determine a relationship between an influencing attribute, such as received image data, and a thoracolumbar pedicle screw classification and/or a degree to which such an influencing attribute affects the outcome of such thoracolumbar pedicle screw classification.
  • the classification system 110 may communicate the classification to the computing device 140 so that a medical professional (e.g., surgeon) may view the classification.
  • the surgeon may use the information to prepare for revision surgery.
  • the components of the computer network 10 may be combined, rearranged, or removed.
  • two or more of the image capture device 100, the classification system 110, and the computing device 140 may be combined into a single system.
  • subcomponents of one or more of the image capture device 100, the classification system 110, and the computing device 140 may be combined, rearranged, or removed.
  • FIG. 2 illustrates a flow chart of an example method 200 for classifying a thoracolumbar pedicle screw implanted in a patient.
  • image data associated with a patient is received by a processor (e.g., the processor 112 of the classification system 110).
  • the image data includes a lateral view image of the patient’s lumbar spine and an anterior-posterior view image of the patient’s lumbar spine.
  • Each of the lateral view and anterior-posterior view images may be a digital imaging and communications in medicine (DICOM) image, such as a radiograph.
  • DICOM digital imaging and communications in medicine
  • the lateral view and anterior-posterior view images of the patient’s lumbar spine capture a thoracolumbar pedicle screw implanted in a patient.
  • the images capture both the implanted thoracolumbar pedicle screw and an implanted rod, which are both components of a posterior thoracolumbar instrumentation system. In other aspects, the images capture the implanted thoracolumbar pedicle screw but not the implanted rod.
  • the method 200 may include pre-processing the image data.
  • the lateral view image and/or the anterior-posterior view image may be cropped, such as to remove one or more interbody cages from the image(s).
  • Interbody cages are titanium cylinders that are placed in the disc space. The cages are porous and allow the bone graft to grow from the vertebral body through the cage and into the next vertebral body. Cropping out the interbody cages from the received image(s) can help eliminate noise from the image(s) by removing extraneous hardware, which can improve the learning and classifications of the classification system 110.
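One way this cropping step might be implemented is sketched below; it assumes the cage's bounding box is already known (for example, from a manual annotation or a separate detector, neither of which is specified here), and it treats "cropping out" as blanking the cage region so the extraneous hardware contributes no keypoints. The function and parameter names are illustrative.

```python
import numpy as np

def remove_interbody_cage(image, cage_box):
    """Suppress an interbody cage region before fusion and feature extraction.

    `cage_box` is an illustrative (row_min, row_max, col_min, col_max) bounding
    box for the cage; how it is obtained is not specified by the disclosure.
    """
    row_min, row_max, col_min, col_max = cage_box
    cleaned = image.copy()
    # Blank the cage region so the extraneous hardware does not contribute
    # keypoints during feature extraction (one reading of "cropping out").
    cleaned[row_min:row_max, col_min:col_max] = 0
    return cleaned
```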
  • a fused image is generated by the processor that combines the lateral view image and the anterior-posterior view image.
  • FIG. 3 shows a representative example in which the combination is such that the fused image 300 is the lateral view image adjacent the anterior-posterior view image. While the lateral view image is shown to the left of the anterior-posterior view image in FIG. 3, other orientations of the lateral view image being adjacent to the anterior-posterior view image are possible.
  • An example method 400 for pre-processing the lateral view image and/or the anterior-posterior view image to generate the fused image is shown in FIG. 4, which will be described below.
  • a classification of a thoracolumbar pedicle screw in the fused image 300 is determined by the processor 112 using at least one machine learning model (e.g., the model 116) with the fused image 300 as an input.
  • the fused image 300 is provided to the model 116, which outputs a classification for the thoracolumbar pedicle screw contained in the fused image 300.
  • the classification may be the manufacturer that produces the thoracolumbar pedicle screw contained in the fused image 300 and/or the model number of the thoracolumbar pedicle screw, or other identifying information such as length, diameter, material, or unique features.
  • the fused image 300 and the lateral view image and/or the anterior-posterior view image are individually provided to the model 116 for classifying the thoracolumbar pedicle screw.
  • the lateral view image and the anterior-posterior view image are input to the model 116 at the same time rather than individually.
  • each of the lateral view image and the anterior-posterior view image is input individually, at the same time, to the model 116 such that the model 116 can use information from the lateral view image, the anterior-posterior view image, or both to determine an output.
  • the fused image 300 can be beneficial for the classification process because each of the lateral view image and the anterior-posterior view image carry salient information, and the fused image 300 therefore allows the classification system 110 to take information from either the lateral view image, the anterior-posterior view image, or both.
  • the inventors have found during testing that, in 6-way classification, providing the fused image 300 to the model 116 performed better than providing only the lateral view image or only the anterior-posterior view image, which indicates that as classification gets more complex, having more available data to pull from in the form of the fused image 300 allows for better training of the model 116 compared to when trained on only lateral view images or only anterior-posterior view images. Accordingly, classification accuracy can be improved over typical systems that do not employ a fused image.
  • the inventors have also found that the classification system 110 performs with greater accuracy at classifying implanted pedicle screws than manual expert review by a surgeon or other expert medical professional. When testing 412 images across eight different manufacturers and classifying between the two most common manufacturers, the inventors found that the classification accuracy of the model 116 was 93.2%. Classification accuracy was 82.4% for three-way classification.
  • the fused image 300 also enables, in some aspects, using a mixture of experts model.
  • the received lateral view image may be provided to a machine learning model (e.g., the model 118) trained to be an expert on classifying thoracolumbar pedicle screws in lateral view images
  • the received anterior-posterior view image may be provided to a machine learning model (e.g., the model 120) trained to be an expert on classifying thoracolumbar pedicle screws in anterior-posterior view images
  • the received fused image 300 may be provided to a machine learning model (e.g., the model 122) trained to be an expert on classifying thoracolumbar pedicle screws in fused images.
  • the classifications from each of the models 118, 120, and 122 may be provided to a machine learning model (e.g., the model 116) trained to combine the classifications in order to generate a final classification of the thoracolumbar pedicle screw in the images.
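A schematic sketch of this mixture of experts arrangement is given below. It assumes each expert exposes a scikit-learn-style predict_proba method and that the gating model produces one weight per expert from the fused-image features; that weighting scheme is an assumption, since the disclosure does not spell out how the gating model combines the expert outputs.

```python
import numpy as np

def moe_classify(lateral_feats, ap_feats, fused_feats,
                 lateral_expert, ap_expert, fused_expert, gating_model):
    """Combine per-view expert predictions into a final screw classification.

    The *_feats arguments are assumed to be 1-D feature vectors (for example,
    BoVW histograms) for the lateral, anterior-posterior, and fused images.
    """
    # Class-probability estimates from each expert (one row per input).
    p_lateral = lateral_expert.predict_proba([lateral_feats])[0]
    p_ap = ap_expert.predict_proba([ap_feats])[0]
    p_fused = fused_expert.predict_proba([fused_feats])[0]

    # Gating weights over the three experts, here assumed to be predicted from
    # the fused-image features (the disclosure only says the gating model
    # learns which expert to trust based on the input to be predicted).
    weights = gating_model.predict_proba([fused_feats])[0]  # shape: (3,)

    combined = weights[0] * p_lateral + weights[1] * p_ap + weights[2] * p_fused
    return int(np.argmax(combined))
```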
  • FIG. 4 illustrates a flow chart of an example pre-processing method 400 that may be performed at block 204 of the method 200 for generating a fused image 300.
  • the image data is processed, by a processor (e.g., the processor 112 of the classification system 110), to resize at least one of the lateral view image and the anterior-posterior view image such that the lateral view image and the anterior-posterior view image are at substantially, or exactly, the same scale. Resizing one or both of the lateral view and anterior-posterior view images in this way helps eliminate the model 116 favoring the lateral view image or the anterior-posterior view image solely because one or more features are larger in one of the images as compared to the other.
  • the image data is processed, by the processor 112, to adjust a contrast of at least one of the lateral view image and the anterior-posterior view image such that the respective contrasts of the lateral view and anterior-posterior view images are within the same range.
  • for example, if the lateral image had a minimum brightness of 0 at some pixels and a maximum brightness of 3000, and the anterior-posterior image had a minimum brightness of 0 and a maximum brightness of 2000, the brightness values of the anterior-posterior image would be multiplied by 1.5 (3000/2000) so that both the lateral and anterior-posterior image brightness would be on the scale of 0-3000.
  • a typical maximum brightness of radiographs ranges from 1500-3500.
  • the processor 112 combines the lateral view image and the anterior-posterior view image, one or both of which may be altered from blocks 402 and 404, to generate a fused image. For example, the processor 112 positions the lateral view image adjacent the anterior-posterior view image and creates a new image file of the combination.
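A minimal sketch of blocks 402, 404, and 406 follows, using NumPy and OpenCV and assuming both views are available as 2-D pixel arrays; resizing to a common height and scaling the dimmer view's brightness (as in the 3000/2000 = 1.5 example above) are one plausible implementation of the described pre-processing.

```python
import cv2
import numpy as np

def make_fused_image(lateral, ap):
    """Resize, match brightness ranges, and place the two views side by side."""
    # Block 402: bring both views to the same scale (here, a common height).
    target_height = max(lateral.shape[0], ap.shape[0])

    def resize_to_height(img):
        scale = target_height / img.shape[0]
        new_width = int(round(img.shape[1] * scale))
        return cv2.resize(img, (new_width, target_height))

    lateral = resize_to_height(lateral).astype(np.float32)
    ap = resize_to_height(ap).astype(np.float32)

    # Block 404: scale the dimmer view so both brightness ranges match, e.g.,
    # multiplying an anterior-posterior maximum of 2000 by 1.5 to reach a
    # lateral maximum of 3000.
    lateral_max, ap_max = float(lateral.max()), float(ap.max())
    if ap_max > 0 and lateral_max > ap_max:
        ap *= lateral_max / ap_max
    elif lateral_max > 0 and ap_max > lateral_max:
        lateral *= ap_max / lateral_max

    # Block 406: combine the two views into a single fused image.
    return np.hstack([lateral, ap])
```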
  • FIGS. 2 and 4 are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of aspects of the disclosed method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagram, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
  • FIG. 5 illustrates an example computer system 500 that may be utilized to implement one or more of the devices and/or components of the disclosed system.
  • one or more computer systems 500 perform one or more steps of one or more methods described or illustrated herein.
  • one or more computer systems 500 provide the functionalities described or illustrated herein.
  • software running on one or more computer systems 500 performs one or more steps of one or more methods described or illustrated herein or provides the functionalities described or illustrated herein.
  • Particular embodiments include one or more portions of one or more computer systems 500.
  • a reference to a computer system may encompass a computing device, and vice versa, where appropriate.
  • a reference to a computer system may encompass one or more computer systems, where appropriate.
  • the computer system 500 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these.
  • the computer system 500 may include one or more computer systems 500; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks.
  • one or more computer systems 500 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein.
  • one or more computer systems 500 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein.
  • One or more computer systems 500 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
  • computer system 500 includes a processor 504, memory 502, storage 506, an input/output (I/O) interface 508, and a communication interface 510.
  • the processor 504 includes hardware for executing instructions, such as those making up a computer program.
  • the processor 504 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 502, or storage 506; decode and execute the instructions; and then write one or more results to an internal register, internal cache, memory 502, or storage 506.
  • the processor 504 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates the processor 504 including any suitable number of any suitable internal caches, where appropriate.
  • the processor 504 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs).
  • Instructions in the instruction caches may be copies of instructions in memory 502 or storage 506, and the instruction caches may speed up retrieval of those instructions by the processor 504.
  • Data in the data caches may be copies of data in memory 502 or storage 506 that are to be operated on by computer instructions; the results of previous instructions executed by the processor 504 that are accessible to subsequent instructions or for writing to memory 502 or storage 506; or any other suitable data.
  • the data caches may speed up read or write operations by the processor 504.
  • the TLBs may speed up virtual-address translation for the processor 504.
  • processor 504 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates the processor 504 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, the processor 504 may include one or more arithmetic logic units (ALUs), be a multi-core processor, or include one or more processors 504. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
  • the memory 502 includes main memory for storing instructions for the processor 504 to execute or data for processor 504 to operate on.
  • computer system 500 may load instructions from storage 506 or another source (such as another computer system 500) to the memory 502.
  • the processor 504 may then load the instructions from the memory 502 to an internal register or internal cache.
  • the processor 504 may retrieve the instructions from the internal register or internal cache and decode them.
  • the processor 504 may write one or more results (which may be intermediate or final results) to the internal register or internal cache.
  • the processor 504 may then write one or more of those results to the memory 502.
  • the processor 504 executes only instructions in one or more internal registers or internal caches or in memory 502 (as opposed to storage 506 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 502 (as opposed to storage 506 or elsewhere).
  • One or more memory buses (which may each include an address bus and a data bus) may couple the processor 504 to the memory 502.
  • the bus may include one or more memory buses, as described in further detail below.
  • one or more memory management units (MMUs) reside between the processor 504 and memory 502 and facilitate accesses to the memory 502 requested by the processor 504.
  • the memory 502 includes random access memory (RAM). This RAM may be volatile memory, where appropriate.
  • this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM.
  • the memory 502 may include one or more memories 502, where appropriate. Although this disclosure describes and illustrates particular memory implementations, this disclosure contemplates any suitable memory implementation.
  • the storage 506 includes mass storage for data or instructions.
  • the storage 506 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these.
  • the storage 506 may include removable or non-removable (or fixed) media, where appropriate.
  • the storage 506 may be internal or external to computer system 500, where appropriate.
  • the storage 506 is non-volatile, solid-state memory.
  • the storage 506 includes read-only memory (ROM).
  • this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these.
  • This disclosure contemplates mass storage 506 taking any suitable physical form.
  • the storage 506 may include one or more storage control units facilitating communication between processor 504 and storage 506, where appropriate. Where appropriate, the storage 506 may include one or more storages 506. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
  • the I/O Interface 508 includes hardware, software, or both, providing one or more interfaces for communication between computer system 500 and one or more I/O devices.
  • the computer system 500 may include one or more of these I/O devices, where appropriate.
  • One or more of these I/O devices may enable communication between a person and computer system 500.
  • an I/O device may include a keyboard, keypad, microphone, monitor, screen, display panel, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these.
  • An I/O device may include one or more sensors.
  • the I/O Interface 508 may include one or more device or software drivers enabling processor 504 to drive one or more of these I/O devices.
  • the I/O interface 508 may include one or more I/O interfaces 508, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface or combination of I/O interfaces.
  • communication interface 510 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 500 and one or more other computer systems 500 or one or more networks 512.
  • communication interface 510 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or any other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a Wi-Fi network.
  • This disclosure contemplates any suitable network 512 and any suitable communication interface 510 for it.
  • the network 512 may include one or more of an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these.
  • computer system 500 may communicate with a wireless PAN (WPAN) (such as, for example, a Bluetooth® WPAN), a WI-FI network, a WIMAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or any other suitable wireless network or a combination of two or more of these.
  • Computer system 500 may include any suitable communication interface 510 for any of these networks, where appropriate.
  • Communication interface 510 may include one or more communication interfaces 510, where appropriate.
  • although this disclosure describes and illustrates particular communication interface implementations, this disclosure contemplates any suitable communication interface implementation.
  • the computer system 500 may also include a bus.
  • the bus may include hardware, software, or both and may communicatively couple the components of the computer system 500 to each other.
  • the bus may include an Accelerated Graphics Port (AGP) or any other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these.
  • the bus may include one or more buses, where appropriate.
  • a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other types of integrated circuits (ICs) (e.g., field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate.
  • “about,” “approximately” and “substantially” are understood to refer to numbers in a range of numerals, for example the range of -10% to +10% of the referenced number, preferably -5% to +5% of the referenced number, more preferably -1% to +1% of the referenced number, most preferably -0.1% to +0.1% of the referenced number.

Landscapes

  • Image Analysis (AREA)

Abstract

A machine learning-based computer vision classifier for classifying an implanted thoracolumbar pedicle screw captured in a digital imaging and communications in medicine (DICOM) image, such as a radiograph, is provided. A lateral view image and an anterior-posterior view image of a patient's lumbar spine including the implanted thoracolumbar pedicle screw are each provided to a machine learning model. A fused image that combines the lateral view and anterior-posterior view images is also generated and provided to the machine learning model. Based on this input, the machine learning model outputs a classification for the implanted thoracolumbar pedicle screw, such as a manufacturer of the thoracolumbar pedicle screw.

Description

COMPUTER VISION SYSTEMS AND METHODS FOR CLASSIFYING IMPLANTED THORACOLUMBAR PEDICLE SCREWS
PRIORITY CLAIM
[0001] The present application claims priority to and the benefit of U.S. Provisional Application 63/374,877, filed September 7, 2022, the entirety of which is herein incorporated by reference.
TECHNICAL FIELD
[0002] The present application relates generally to digital medical systems. More specifically, the present application provides an automated computer vision approach to classifying implanted screw and rod systems from digital imaging of a patient.
BACKGROUND
[0003] The spinal column is one of the most important parts of the body that helps with movement and balance, upright posture, protection of the spinal cord, and shock absorption. Adults typically have 24 vertebrae in their spinal column. Each vertebra has two cylinder-shaped projections called pedicles, which are hard bones that stick out from the back part of the vertebral body. The pedicles provide protection for the spinal cord and nerves, and serve as a bridge to join the front and back parts of the vertebra.
[0004] Many back problems occur in the lumbar section of the spine. In the lumbar spine, pedicle screw fixation is the most widespread technique to achieve spinal fusion and stabilization. Other options for spinal fusion and stabilization use wires, bands, and hooks; however, pedicle screw fixation has many biomechanical advantages. Despite pedicle screws being used as clinical best practice, screw loosening and breakage are recurring mechanical issues of spinal fixation that lead to revision surgery in about 6% of cases. Additional reasons for revision surgery are disc herniation, scar tissue, hardware issues, and bone fragments.
[0005] Revision surgical fusion requires decortication of the involved vertebrae with implantation of native bone or allograft into the target area, and may be augmented with bone morphogenic protein and/or external bone growth stimulator to promote bone formation. Previous instrumentation often must be removed and replaced during this procedure to allow the new bone graft to heal properly. Accordingly, revision surgery can be made faster and safer if information is known about which fixation components (e.g., which manufacturer) were used for the initial surgery. This information is often unavailable, however, due to patients being referred by other centers, or due to missing information in the patients’ records. A surgeon or other expert medical professional can sometimes decipher which fixation component was used from a radiograph of a patient but doing so can take significant time away from the surgeon or other expert medical professional and is still subject to error. It was against these drawbacks that the present disclosure was conceived.
SUMMARY
[0006] The present disclosure involves new and innovative machine learning-based systems and methods for classifying implanted thoracolumbar pedicle screws from captured images of a patient in which the thoracolumbar pedicle screws are implanted. The classification system therefore provides information regarding the pedicle screw that is implanted in a patient, such as a manufacturer of the pedicle screw, and provides that information for a surgeon to use prior to a revision surgery, which allows for adequate pre-procedure preparation in planning and materials. This process may therefore reduce the time of a revision surgery as compared to a surgeon or other expert medical professional manually reviewing radiographs. The classification system also improves upon identification accuracy and speed compared to a surgeon or other medical professional manually reviewing radiographs to classify a pedicle screw.
[0007] In an example, a method includes receiving, by a processor, image data associated with a patient. The image data includes a lateral view image of a lumbar spine of the patient and an anterior-posterior view image of the lumbar spine of the patient. A fused image is generated by the processor that combines the lateral view image and the anterior-posterior view image. Using at least one machine learning model with the fused image as an input, the processor determines a classification of a thoracolumbar pedicle screw in the fused image.
[0008] In a first aspect, a method includes receiving image data associated with a patient. The image data includes a lateral view image of a lumbar spine of the patient and an anterior-posterior view image of the lumbar spine of the patient. A fused image is generated that combines the lateral view image and the anterior-posterior view image. Using at least one machine learning model with the fused image as an input, a classification of a thoracolumbar pedicle screw in the fused image is determined.
[0009] In a second aspect, which may be combined with other aspects described herein (e.g., the 1st aspect), determining the classification of the thoracolumbar pedicle screw includes determining a manufacturer of the thoracolumbar pedicle screw.
[0010] In a third aspect, which may be combined with other aspects described herein (e.g., the 1st aspect through the 2nd aspect), determining the classification of the thoracolumbar pedicle screw further includes providing the lateral view image and the anterior-posterior view image as the input to the at least one machine learning model.
[0011] In a fourth aspect, which may be combined with other aspects described herein (e.g., the 1st aspect through the 3rd aspect), the method further includes processing, prior to generating the fused image, at least one of the lateral view image and the anterior-posterior view image to crop out an interbody cage in the at least one of the lateral view image and the anterior-posterior view image.
[0012] In a fifth aspect, which may be combined with other aspects described herein (e.g., the 1st aspect through the 4th aspect), generating the fused image includes processing the image data to resize at least one of the lateral view image and the anterior-posterior view image such that the lateral view image and the anterior-posterior view image are at the same scale.
[0013] In a sixth aspect, which may be combined with other aspects described herein (e.g., the 5th aspect), generating the fused image further includes processing the image data to adjust a contrast of at least one of the lateral view image and the anterior-posterior view image such that the respective contrasts of the lateral view and anterior-posterior view images are within the same range.
[0014] In a seventh aspect, which may be combined with other aspects described herein (e.g., the 6th aspect), the lateral view and anterior-posterior view images are combined such that the fused image inputs each of the lateral view image and the anterior-posterior view image individually to the machine learning model at the same time.
[0015] In an eighth aspect, which may be combined with other aspects described herein (e.g., the 1st aspect through the 7th aspect), each of the lateral view and anterior-posterior view images is a radiograph.
[0016] In a ninth aspect, which may be combined with other aspects described herein (e.g., the 1st aspect through the 8th aspect), the at least one machine learning model includes a first machine learning model trained on lateral view images, a second machine learning model trained on anterior-posterior view images, and a third machine learning model trained on fused images.
[0017] In a tenth aspect, which may be combined with other aspects described herein (e.g., the 1st aspect through the 9th aspect), the at least one machine learning model is implemented as one or more of a support vector machine and a neural network.
[0018] In an eleventh aspect, which may be combined with other aspects described herein (e.g., the 2nd aspect, 3rd aspect, and 5th aspect through the 10th aspect), a system includes a memory and a processor in communication with the memory. The processor is configured to receive image data associated with a patient that includes a lateral view image of a lumbar spine of the patient and an anterior-posterior view image of the lumbar spine of the patient; generate a fused image that combines the lateral view image and the anterior-posterior view image; and determine, using at least one machine learning model with the fused image as an input, a classification of a thoracolumbar pedicle screw in the fused image.
[0019] In a twelfth aspect, which may be combined with other aspects described herein (e.g., the 11th aspect), the processor is further configured to process, prior to generating the fused image, at least one of the lateral view image and the anterior-posterior view image to crop out an interbody cage in the at least one of the lateral view image and the anterior-posterior view image.
[0020] In a thirteenth aspect, which may be combined with other aspects described herein (e.g., the 11th aspect), to generate the fused image, the processor is further configured to process the image data to adjust a contrast of at least one of the lateral view image and the anterior-posterior view image such that the respective contrasts of the lateral view and anterior-posterior view images are within a same range.
[0021] In a fourteenth aspect, which may be combined with other aspects described herein (e.g., the 11th aspect), to generate the fused image, the processor is configured to process the image data to resize at least one of the lateral view image and the anterior-posterior view image such that the lateral view image and the anterior-posterior view image are at the same scale.
[0022] Additional features and advantages of the disclosed method and apparatus are described in, and will be apparent from, the following Detailed Description and the Figures. The features and advantages described herein are not all-inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the figures and description. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and not to limit the scope of the inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] FIG. 1 illustrates a block diagram of an example computer vision-based system for classifying an implanted thoracolumbar pedicle screw, according to an aspect of the present disclosure.
[0024] FIG. 2 illustrates a flow chart of a method for classifying an implanted thoracolumbar pedicle screw, according to an aspect of the present disclosure.
[0025] FIG. 3 illustrates a fused image, according to an aspect of the present disclosure.
[0026] FIG. 4 illustrates a flow chart of a pre-processing method for generating a fused image, according to an aspect of the present disclosure.
[0027] FIG. 5 illustrates a block diagram of a computing and networking environment, according to an aspect of the present disclosure.
DETAILED DESCRIPTION
[0028] The present disclosure involves an automated computer vision approach to classifying implanted thoracolumbar pedicle screw and rod systems from digital imaging and communications in medicine (DICOM) images, such as radiographs. Previously implanted thoracolumbar pedicle screw and rod systems may be removed during revision surgical fusion surgery to allow the new bone graft to heal properly. Revision surgery can therefore be made faster and safer if information is known about which thoracolumbar pedicle screws were used for the initial spinal fusion and stabilization surgery. This information is often unavailable, however, due to patients being referred by other centers, or due to missing information in the patients’ records. The provided classification system uses machine learning to process a DICOM image, such as a radiograph, containing a pedicle screw in order to classify the pedicle screw. For example, the classification system may determine a manufacturer that produced the pedicle screw contained in the image.
[0029] The classification system may receive a lateral view image and an anterior-posterior view image of a patient’s lumbar spine having an implanted pedicle screw. Each of the images may be a DICOM image, such as a radiograph. From the received images, the classification system generates a fused image that combines the lateral view image and the anterior-posterior view image. In some cases, the fused image alone is provided to a machine learning model that is trained to output a classification of a pedicle screw contained in the lateral view and anterior-posterior view images of the fused image. In other cases, the fused image along with the lateral view image and/or the anterior-posterior view image may be provided to a machine learning model that is trained to output a classification of a pedicle screw contained in the images. In this way, the classification system provides information regarding the pedicle screw that is implanted in a patient and provides that information for a surgeon to use prior to a revision surgery, which allows for adequate pre-procedure preparation in planning and materials. This process may therefore reduce the time of a revision surgery as compared to a surgeon or other expert medical professional manually reviewing radiographs. The classification system also improves upon identification accuracy and speed compared to manual expert review.
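As a rough sketch of the workflow just described, assuming pydicom is used to read the radiographs and that the fusion, feature-extraction, and trained-model components are available as callables (all names here are illustrative and not part of the disclosure):

```python
import pydicom

def classify_implanted_screw(lateral_path, ap_path, fuse, to_features, classifier):
    """End-to-end sketch: read the two radiographs, fuse them, classify the screw.

    `fuse`, `to_features`, and `classifier` are illustrative stand-ins for the
    pre-processing, feature-extraction, and trained-model components sketched
    elsewhere in this document; none of these names come from the disclosure.
    """
    # Read the DICOM radiographs and take their raw pixel arrays.
    lateral = pydicom.dcmread(lateral_path).pixel_array
    ap = pydicom.dcmread(ap_path).pixel_array

    # Combine the lateral and anterior-posterior views into a single fused image.
    fused = fuse(lateral, ap)

    # Convert the fused image to a feature vector (e.g., a visual-word histogram)
    # and predict a label such as the screw manufacturer.
    features = to_features(fused)
    return classifier.predict([features])[0]
```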
[0030] FIG. 1 illustrates an example computer network 10 (e.g., a telecommunications network) that may be used to implement various aspects of the present application. Generally, the computer network 10 includes various devices communicating and functioning together in the gathering, transmitting, and/or requesting of data related to classifying thoracolumbar pedicle screws contained in an image, such as a radiograph. In other examples, the components of the computer network 10, and the subcomponents of each component, may be combined, rearranged, removed, or provided on a separate device or server.
[0031] A medical professional may capture images of a patient with an image capture device 100. The image capture device 100 may be, for example, an x-ray machine or another suitable device for imaging a screw implanted in a patient. The captured images (e.g., a radiograph) may be communicated to a classification system 110 through the network 130 or a computing device 140. The computing device 140 may be a personal computer, workstation, tablet device, or other suitable processing device that is physically in the medical environment in which the image capture device 100 is located. For example, the computing device 140 may be in the same room of a hospital as the image capture device 100, or alternatively may be located in a different room of the hospital. Additionally, the computing device 140 may include one or more processors that process software or other machine-readable instructions and may include a memory to store the software or other machine-readable instructions and data.
[0032] As illustrated, a communications network 130 allows for communication in the computer network 10. The communications network 130 may include one or more wireless networks such as, but not limited to, one or more of a Local Area Network (LAN), Wireless Local Area Network (WLAN), a Personal Area Network (PAN), Campus Area Network (CAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), Global System for Mobile Communications (GSM), Personal Communications Service (PCS), Digital Advanced Mobile Phone Service (D-Amps), Bluetooth, Wi-Fi, Fixed Wireless Data, 2G, 2.5G, 3G, 4G, LTE networks, enhanced data rates for GSM evolution (EDGE), General packet radio service (GPRS), enhanced GPRS, messaging protocols such as TCP/IP, SMS, MMS, extensible messaging and presence protocol (XMPP), real time messaging protocol (RTMP), instant messaging and presence protocol (IMPP), instant messaging, USSD, IRC, or any other wireless data networks or messaging protocols. The communications network 130 may also include wired networks. For example, the image capture device 100 may have a wired connection to the computing device 140, as illustrated.
[0033] The classification system 110 may receive the captured images and classify a thoracolumbar pedicle screw contained in the captured images using a machine learning model (e.g., the model 116), such as by identifying a manufacturer of the thoracolumbar pedicle screw. Each manufacturer may design its thoracolumbar pedicle screws in its own distinct way such that the manufacturer is identifiable by an image of its thoracolumbar pedicle screws. For example, the spacing between each turn of the screw (the pitch), the tapering of the screw from full diameter to its tip, and the screw thread to diameter ratio vary across manufacturers. These features are identifiable in lateral radiographs. Additionally, the junction between the screw and the rod may differ across manufacturers, which may be seen in the anterior-posterior view. To carry out this process, the classification system 110 may include a processor in communication with a memory 114. The processor may be a CPU 112, an ASIC, or any other similar device. In some aspects, the memory 114 may store at least the model 116. One or more images may be input to the model 116, which thereafter outputs a classification for a thoracolumbar pedicle screw contained in the input image(s). For example, the input images may include one or more of a lateral view of a patient’s lumbar spine, an anterior-posterior view of a patient’s lumbar spine, and a fused image 300 (e.g., FIG. 3) combining the lateral and anterior-posterior views.
[0034] The model 116 may be trained using the Bag of Visual Words (BoVW) technique. BoVW is a technique to compactly describe images and compute similarities between images, and is used for image classification. In BoVW, an input image is broken down into a set of independent features, such as by a feature extractor (e.g., KAZE, Scale Invariant Feature Transform (SIFT), Binary Robust Invariant Scalable Keypoints (BRISK), etc.). In one example, the KAZE feature extractor is used to train the model 116. A feature extractor may return an array containing computed values (e.g., descriptors) of keypoints for the input image. In BoVW, all the input images are converted to histograms and the model 116 may be trained with the histograms for classifying the images. In various aspects, the model 116 may be implemented as one or more of a neural network and a support vector machine (e.g., linear, polynomial, or Gaussian).
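For illustration only, the following sketch shows one common way a BoVW pipeline of this kind is assembled, using OpenCV’s KAZE detector, k-means clustering to build the visual vocabulary, and a linear support vector machine over the resulting histograms. The disclosure does not prescribe this implementation; the vocabulary size, the 8-bit grayscale input format, and the training interface are assumptions made for the example.

```python
# Illustrative BoVW sketch (not the disclosed implementation): KAZE features,
# a k-means visual vocabulary, per-image histograms, and a linear SVM.
# Images are assumed to be 8-bit grayscale numpy arrays.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

kaze = cv2.KAZE_create()

def kaze_descriptors(image_u8):
    """Return an array of KAZE descriptors (one row per detected keypoint)."""
    _, descriptors = kaze.detectAndCompute(image_u8, None)
    return descriptors if descriptors is not None else np.empty((0, 64), np.float32)

def build_vocabulary(images, n_words=200):
    """Cluster all training descriptors into n_words visual words."""
    all_descriptors = np.vstack([kaze_descriptors(img) for img in images])
    return KMeans(n_clusters=n_words, random_state=0).fit(all_descriptors)

def bovw_histogram(image_u8, vocabulary):
    """Convert an image into a normalized histogram over the visual words."""
    descriptors = kaze_descriptors(image_u8)
    if descriptors.shape[0] == 0:
        return np.zeros(vocabulary.n_clusters)
    words = vocabulary.predict(descriptors)
    histogram, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return histogram / histogram.sum()

def train_bovw_classifier(images, labels, n_words=200):
    """Train a linear SVM on BoVW histograms; labels could be manufacturer names."""
    vocabulary = build_vocabulary(images, n_words)
    features = np.array([bovw_histogram(img, vocabulary) for img in images])
    classifier = SVC(kernel="linear").fit(features, labels)
    return vocabulary, classifier
```

A polynomial or Gaussian kernel could be substituted in the SVC call to match the other kernels mentioned above; the linear kernel here is simply one of the listed options.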
[0035] In some aspects, the classification system 110 may employ a mixture of experts model. In such aspects, the memory 114 may store a machine learning model trained as an expert for each of the lateral view images (e.g., the model 118), the anterior-posterior view images (e.g., the model 120), and the fused images (e.g., the model 122). In the mixture of experts model, the model 116 serves as a gating model that learns which expert model 118-122 to trust based on the input to be predicted, and combines the predictions to classify a thoracolumbar pedicle screw in the images.
[0036] Machine learning models (e.g., the models 116-122), as described herein, may include support vector machines, logistic regression techniques, linear discriminant analysis, linear regression analysis, artificial neural networks, machine learning classifier algorithms, or classification/regression trees in some embodiments. In various other embodiments, machine learning systems may employ Naive Bayes predictive modeling analysis of several varieties, learning vector quantization artificial neural network algorithms, or implementation of boosting algorithms such as Adaboost or stochastic gradient boosting systems for iteratively updating weighting to train a machine learning classifier to determine a relationship between an influencing attribute, such as received image data, and a thoracolumbar pedicle screw classification and/or a degree to which such an influencing attribute affects the outcome of such thoracolumbar pedicle screw classification.
[0037] After the classification system 110 classifies the thoracolumbar pedicle screw contained in the captured image(s), the classification system 110, in this example, may communicate the classification to the computing device 140 so that a medical professional (e.g., surgeon) may view the classification. The surgeon may use the information to prepare for revision surgery.
[0038] In some aspects of the present application, the components of the computer network 10 may be combined, rearranged, or removed. For example, two or more of the image capture device 100, the classification system 110, and the computing device 140 may be combined into a single system. In another example, subcomponents of one or more of the image capture device 100, the classification system 110, and the computing device 140 may be combined, rearranged, or removed.
[0039] FIG. 2 illustrates a flow chart of an example method 200 for classifying a thoracolumbar pedicle screw implanted in a patient. At block 202, image data associated with a patient is received by a processor (e.g., the processor 112 of the classification system 110). The image data includes a lateral view image of the patient’s lumbar spine and an anterior-posterior view image of the patient’s lumbar spine. Each of the lateral view and anterior-posterior view images may be a digital imaging and communications in medicine (DICOM) image, such as a radiograph. The lateral view and anterior-posterior view images of the patient’s lumbar spine capture a thoracolumbar pedicle screw implanted in a patient. In some aspects, the images capture both the implanted thoracolumbar pedicle screw and an implanted rod, which are both components of a posterior thoracolumbar instrumentation system. In other aspects, the images capture the implanted thoracolumbar pedicle screw but not the implanted rod.
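For illustration of block 202 only, the sketch below reads the two DICOM radiographs into arrays using the pydicom library; the file paths are hypothetical placeholders, and the disclosure does not require any particular library.

```python
# Illustrative sketch: loading the lateral and anterior-posterior DICOM
# radiographs as numpy arrays. File paths are hypothetical placeholders.
import numpy as np
import pydicom

def load_radiograph(path):
    """Read a DICOM radiograph and return its pixel data as a float32 array."""
    dataset = pydicom.dcmread(path)
    return dataset.pixel_array.astype(np.float32)

lateral_image = load_radiograph("lateral_lumbar.dcm")
anterior_posterior_image = load_radiograph("ap_lumbar.dcm")
```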
[0040] In some aspects, the method 200 may include pre-processing the image data. For example, the lateral view image and/or the anterior-posterior view image may be cropped, such as to remove one or more interbody cages from the image(s). Interbody cages are titanium cylinders that are placed in the disc space. The cages are porous and allow the bone graft to grow from the vertebral body through the cage and into the next vertebral body. Cropping out the interbody cages from the received image(s) can help eliminate noise from the image(s) by removing extraneous hardware, which can improve the learning and classifications of the classification system 110.
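As a minimal sketch of this cropping step (continuing the loading example above), the region containing an interbody cage can be excluded with simple array slicing; the bounding-box coordinates below are hypothetical and would in practice come from manual annotation or a separate detector.

```python
# Illustrative cropping sketch: keep only a region of interest that excludes
# an interbody cage. Coordinates are hypothetical placeholders.
def crop_to_region(image, top, bottom, left, right):
    """Return the sub-image inside the given pixel bounding box."""
    return image[top:bottom, left:right]

lateral_image = crop_to_region(lateral_image, top=100, bottom=900, left=250, right=800)
```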
[0041] At block 204, a fused image is generated by the processor that combines the lateral view image and the anterior-posterior view image. FIG. 3 shows a representative example in which the combination is such that the fused image 300 is the lateral view image adjacent the anterior-posterior view image. While the lateral view image is shown to the left of the anterior-posterior view image in FIG. 3, other orientations of the lateral view image being adjacent to the anterior-posterior view image are possible. An example method 400 for pre-processing the lateral view image and/or the anterior-posterior view image to generate the fused image is shown in FIG. 4, which will be described below.
[0042] At block 206, a classification of a thoracolumbar pedicle screw in the fused image 300 is determined by the processor 112 using at least one machine learning model (e.g., the model 116) with the fused image 300 as an input. Stated differently, the fused image 300 is provided to the model 116, which outputs a classification for the thoracolumbar pedicle screw contained in the fused image 300. For example, the classification may be the manufacturer that produces the thoracolumbar pedicle screw contained in the fused image 300 and/or the model number of the thoracolumbar pedicle screw, or other identifying information such as length, diameter, material, or unique features. In some aspects, the fused image 300 and the lateral view image and/or the anterior-posterior view image are individually provided to the model 116 for classifying the thoracolumbar pedicle screw.
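Continuing the BoVW sketch above, and purely as an illustration of block 206, classifying a new fused image might look like the following; the 8-bit rescaling step is an assumption required by the feature detector, not a step recited in the disclosure.

```python
# Illustrative inference sketch, reusing bovw_histogram, the vocabulary, and
# the classifier from the training sketch above.
import numpy as np

def to_uint8(image):
    """Rescale an arbitrary-range radiograph to 0-255 for the feature detector."""
    low, high = float(image.min()), float(image.max())
    return np.uint8(255 * (image - low) / max(high - low, 1e-6))

def classify_fused_image(fused_image, vocabulary, classifier):
    """Return the predicted label (e.g., manufacturer) for the screw in the fused image."""
    feature = bovw_histogram(to_uint8(fused_image), vocabulary).reshape(1, -1)
    return classifier.predict(feature)[0]
```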
[0043] By inputting the fused image 300 to the model 116, the lateral view image and the anterior-posterior view image are input to the model 116 at the same time rather than individually. Stated differently, the lateral view image and the anterior-posterior view image are each input individually at the same time to the model 116 such that the model 116 can use information from the lateral view image, the anterior-posterior view image, or both to determine an output. As such, the fused image 300 can be beneficial for the classification process because each of the lateral view image and the anterior-posterior view image carries salient information, and the fused image 300 therefore allows the classification system 110 to take information from either the lateral view image, the anterior-posterior view image, or both. For instance, the inventors have found during testing that, in 6-way classification, providing the fused image 300 to the model 116 performed better than providing only the lateral view image or only the anterior-posterior view image. This indicates that, as classification gets more complex, having more available data to pull from in the form of the fused image 300 allows for better training of the model 116 compared to training on only lateral view images or only anterior-posterior view images. Accordingly, classification accuracy can be improved over typical systems that do not employ a fused image. The inventors have also found that the classification system 110 performs with greater accuracy at classifying implanted pedicle screws than manual expert review by a surgeon or other expert medical professional. When testing 412 images across eight different manufacturers and classifying between the two most common manufacturers, the inventors found that the classification accuracy of the model 116 was 93.2%. Classification accuracy was 82.4% for three-way classification.
[0044] The fused image 300 also enables, in some aspects, using mixture of expert models. In such aspects, the received lateral view image may be provided to a machine learning model (e.g., the model 118) trained to be an expert on classifying thoracolumbar pedicle screws in lateral view images, the received anterior-posterior view image may be provided to a machine learning model (e.g., the model 120) trained to be an expert on classifying thoracolumbar pedicle screws in anterior-posterior view images, and the received fused image 300 may be provided to a machine learning model (e.g., the model 122) trained to be an expert on classifying thoracolumbar pedicle screws in fused images. The classifications from each of the models 118, 120, and 122 may be provided to a machine learning model (e.g., the model 116) trained to combine the classifications in order to generate a final classification of the thoracolumbar pedicle screw in the images.
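One way such a mixture-of-experts arrangement could be wired together is sketched below; stacking the experts’ class probabilities and gating them with a logistic regression model are illustrative choices rather than choices recited in the disclosure, and the expert models are assumed to expose predict_proba as scikit-learn estimators do.

```python
# Illustrative mixture-of-experts sketch: the lateral, anterior-posterior, and
# fused experts each output class probabilities, and a gating model combines
# them into a final classification. Logistic regression as the gate is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

def expert_features(lateral_hist, ap_hist, fused_hist, experts):
    """Concatenate the class-probability outputs of the three expert models."""
    probabilities = [
        experts["lateral"].predict_proba(lateral_hist.reshape(1, -1)),
        experts["anterior_posterior"].predict_proba(ap_hist.reshape(1, -1)),
        experts["fused"].predict_proba(fused_hist.reshape(1, -1)),
    ]
    return np.hstack(probabilities).ravel()

def train_gating_model(feature_rows, labels):
    """Fit the gating model that learns how much to trust each expert."""
    return LogisticRegression(max_iter=1000).fit(np.vstack(feature_rows), labels)
```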
[0045] FIG. 4 illustrates a flow chart of an example pre-processing method 400 that may be performed at block 204 of the method 200 for generating a fused image 300. At block 402, the image data is processed, by a processor (e.g., the processor 112 of the classification system 110), to resize at least one of the lateral view image and the anterior-posterior view image such that the lateral view image and the anterior-posterior view image are at substantially, or exactly, the same scale. Resizing one or both of the lateral view and anterior-posterior view images in this way helps prevent the model 116 from favoring the lateral view image or the anterior-posterior view image solely because one or more features are larger in one of the images as compared to the other.
[0046] At block 404, the image data is processed, by the processor 112, to adjust a contrast of at least one of the lateral view image and the anterior-posterior view image such that the respective contrasts of the lateral view and anterior-posterior view images are within the same range. For example, if the lateral image had a minimum brightness of 0 at some pixels and a maximum brightness of 3000, while the anterior-posterior image had a minimum brightness of 0 and a maximum brightness of 2000, the brightness values of the anterior-posterior image would be multiplied by 1.5 (3000/2000) so that both the lateral and anterior-posterior image brightness would be on the scale of 0-3000. A typical maximum brightness of radiographs ranges from 1500-3500. Adjusting the contrast of one or both of the lateral view and anterior-posterior view images in this way helps prevent the model 116 from favoring the lateral view image or the anterior-posterior view image solely because one is brighter or darker than the other. At block 406, the processor 112 combines the lateral view image and the anterior-posterior view image, one or both of which may be altered from blocks 402 and 404, to generate a fused image. For example, the processor 112 positions the lateral view image adjacent the anterior-posterior view image and creates a new image file of the combination.
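Purely as an illustration of blocks 402-406, the sketch below resizes the anterior-posterior view to the lateral view’s height, scales its brightness range to match (mirroring the 0-3000 example above), and places the two views side by side; these specific choices are assumptions consistent with FIG. 3 rather than requirements of the disclosure.

```python
# Illustrative sketch of pre-processing method 400: resize to a common scale
# (block 402), match brightness ranges (block 404), and concatenate the views
# side by side into a fused image (block 406).
import cv2
import numpy as np

def fuse_views(lateral, anterior_posterior):
    """Return a fused image with the lateral view adjacent the anterior-posterior view."""
    # Block 402: resize the anterior-posterior view to the lateral view's height.
    height = lateral.shape[0]
    scale = height / anterior_posterior.shape[0]
    ap_resized = cv2.resize(
        anterior_posterior.astype(np.float32),
        (int(round(anterior_posterior.shape[1] * scale)), height),
        interpolation=cv2.INTER_LINEAR,
    )

    # Block 404: scale brightness so both views share the same range, e.g.,
    # multiply by 1.5 when the maxima are 3000 and 2000 respectively.
    lateral_f = lateral.astype(np.float32)
    if ap_resized.max() > 0:
        ap_resized = ap_resized * (lateral_f.max() / ap_resized.max())

    # Block 406: place the lateral view to the left of the anterior-posterior view.
    return np.hstack([lateral_f, ap_resized])
```

The resulting array could then be written out as a new image file, as described for block 406.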
[0047] The flow charts of FIGS. 2 and 4 are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of aspects of the disclosed method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagram, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
[0048] FIG. 5 illustrates an example computer system 500 that may be utilized to implement one or more of the devices and/or components of the disclosed system. In particular embodiments, one or more computer systems 500 perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems 500 provide the functionalities described or illustrated herein. In particular embodiments, software running on one or more computer systems 500 performs one or more steps of one or more methods described or illustrated herein or provides the functionalities described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 500. Herein, a reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, a reference to a computer system may encompass one or more computer systems, where appropriate.
[0049] This disclosure contemplates any suitable number of computer systems 500. This disclosure contemplates the computer system 500 taking any suitable physical form. As an example and not by way of limitation, the computer system 500 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, the computer system 500 may include one or more computer systems 500; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 500 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 500 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 500 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
[0050] In particular embodiments, computer system 500 includes a processor 504, memory 502, storage 506, an input/output (I/O) interface 508, and a communication interface 510. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
[0051] In particular embodiments, the processor 504 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, the processor 504 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 502, or storage 506; decode and execute the instructions; and then write one or more results to an internal register, internal cache, memory 502, or storage 506. In particular embodiments, the processor 504 may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates the processor 504 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, the processor 504 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 502 or storage 506, and the instruction caches may speed up retrieval of those instructions by the processor 504. Data in the data caches may be copies of data in memory 502 or storage 506 that are to be operated on by computer instructions; the results of previous instructions executed by the processor 504 that are accessible to subsequent instructions or for writing to memory 502 or storage 506; or any other suitable data. The data caches may speed up read or write operations by the processor 504. The TLBs may speed up virtual-address translation for the processor 504. In particular embodiments, processor 504 may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates the processor 504 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, the processor 504 may include one or more arithmetic logic units (ALUs), be a multi-core processor, or include one or more processors 504. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
[0052] In particular embodiments, the memory 502 includes main memory for storing instructions for the processor 504 to execute or data for processor 504 to operate on. As an example, and not by way of limitation, computer system 500 may load instructions from storage 506 or another source (such as another computer system 500) to the memory 502. The processor 504 may then load the instructions from the memory 502 to an internal register or internal cache. To execute the instructions, the processor 504 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, the processor 504 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. The processor 504 may then write one or more of those results to the memory 502. In particular embodiments, the processor 504 executes only instructions in one or more internal registers or internal caches or in memory 502 (as opposed to storage 506 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 502 (as opposed to storage 506 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple the processor 504 to the memory 502. The bus may include one or more memory buses, as described in further detail below. In particular embodiments, one or more memory management units (MMUs) reside between the processor 504 and memory 502 and facilitate accesses to the memory 502 requested by the processor 504. In particular embodiments, the memory 502 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. The memory 502 may include one or more memories 502, where appropriate. Although this disclosure describes and illustrates particular memory implementations, this disclosure contemplates any suitable memory implementation.
[0053] In particular embodiments, the storage 506 includes mass storage for data or instructions. As an example and not by way of limitation, the storage 506 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. The storage 506 may include removable or non-removable (or fixed) media, where appropriate. The storage 506 may be internal or external to computer system 500, where appropriate. In particular embodiments, the storage 506 is non-volatile, solid-state memory. In particular embodiments, the storage 506 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 506 taking any suitable physical form. The storage 506 may include one or more storage control units facilitating communication between processor 504 and storage 506, where appropriate. Where appropriate, the storage 506 may include one or more storages 506. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
[0054] In particular embodiments, the I/O Interface 508 includes hardware, software, or both, providing one or more interfaces for communication between computer system 500 and one or more I/O devices. The computer system 500 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 500. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, screen, display panel, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. Where appropriate, the I/O Interface 508 may include one or more device or software drivers enabling processor 504 to drive one or more of these I/O devices. The I/O interface 508 may include one or more I/O interfaces 508, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface or combination of I/O interfaces.
[0055] In particular embodiments, communication interface 510 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 500 and one or more other computer systems 500 or one or more networks 512. As an example and not by way of limitation, communication interface 510 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or any other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a Wi-Fi network. This disclosure contemplates any suitable network 512 and any suitable communication interface 510 for it. As an example and not by way of limitation, the network 512 may include one or more of an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 500 may communicate with a wireless PAN (WPAN) (such as, for example, a Bluetooth® WPAN), a WI-FI network, a WIMAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or any other suitable wireless network or a combination of two or more of these. Computer system 500 may include any suitable communication interface 510 for any of these networks, where appropriate. Communication interface 510 may include one or more communication interfaces 510, where appropriate. Although this disclosure describes and illustrates a particular communication interface implementation, this disclosure contemplates any suitable communication interface implementation.
[0056] The computer system 500 may also include a bus. The bus may include hardware, software, or both and may communicatively couple the components of the computer system 500 to each other. As an example and not by way of limitation, the bus may include an Accelerated Graphics Port (AGP) or any other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. The bus may include one or more buses, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.
[0057] Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other types of integrated circuits (ICs) (e.g., field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
[0058] Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.
[0059] As used herein, “about,” “approximately” and “substantially” are understood to refer to numbers in a range of numerals, for example the range of -10% to +10% of the referenced number, preferably -5% to +5% of the referenced number, more preferably -1% to +1% of the referenced number, most preferably -0.1% to +0.1% of the referenced number.
[0060] Although the present disclosure and certain representative advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. For example, although processors are described throughout the detailed description, aspects of the invention may be applied to the design of or implemented on different kinds of processors, such as graphics processing units (GPUs), central processing units (CPUs), and digital signal processors (DSPs). As another example, although processing of certain kinds of data may be described in example embodiments, other kinds or types of data may be processed through the methods and devices described above. As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, means, methods, or steps.

Claims

CLAIMS
The invention is claimed as follows:
1. A method comprising: receiving image data associated with a patient, wherein the image data includes a lateral view image of a lumbar spine of the patient and an anterior-posterior view image of the lumbar spine of the patient; generating a fused image that combines the lateral view image and the anterior-posterior view image; and determining, using at least one machine learning model with the fused image as an input, a classification of a thoracolumbar pedicle screw in the fused image.
2. The method of claim 1, wherein determining the classification of the thoracolumbar pedicle screw includes determining a manufacturer of the thoracolumbar pedicle screw.
3. The method of claim 1, wherein determining the classification of the thoracolumbar pedicle screw further includes the lateral view image and the anterior-posterior view image as the input to the at least one machine learning model.
4. The method of claim 1, further comprising processing, prior to generating the fused image, at least one of the lateral view image and the anterior-posterior view image to crop out an interbody cage in the at least one of the lateral view image and the anterior-posterior view image.
5. The method of claim 1, wherein generating the fused image includes processing the image data to resize at least one of the lateral view image and the anterior-posterior view image such that the lateral view image and the anterior-posterior view image are at the same scale.
6. The method of claim 5, wherein generating the fused image further includes processing the image data to adjust a contrast of at least one of the lateral view image and the anterior-posterior view image such that the respective contrasts of the lateral view and anterior-posterior view images are within the same range.
7. The method of claim 6, wherein the lateral view and anterior-posterior view images are combined such that the fused image inputs each of the lateral view image and the anterior-posterior view image individually to the machine learning model at the same time.
8. The method of claim 1, wherein each of the lateral view and anterior-posterior view images is a radiograph.
9. The method of claim 1, wherein the at least one machine learning model includes a first machine learning model trained on lateral view images, a second machine learning model trained on anterior-posterior view images, and a third machine learning model trained on fused images.
10. The method of claim 1, wherein the at least one machine learning model is implemented as one or more of a support vector machine and a neural network.
11. A system comprising: a memory; and a processor in communication with the memory, the processor configured to: receive image data associated with a patient, wherein the image data includes a lateral view image of a lumbar spine of the patient and an anterior-posterior view image of the lumbar spine of the patient; generate a fused image that combines the lateral view image and the anterior-posterior view image; and determine, using at least one machine learning model with the fused image as an input, a classification of a thoracolumbar pedicle screw in the fused image.
12. The system of claim 11, wherein determining the classification of the thoracolumbar pedicle screw includes determining a manufacturer of the thoracolumbar pedicle screw.
13. The system of claim 11, wherein determining the classification of the thoracolumbar pedicle screw further includes the lateral view image and the anterior-posterior view image as the input to the at least one machine learning model.
14. The system of claim 11, wherein the processor is further configured to process, prior to generating the fused image, at least one of the lateral view image and the anterior-posterior view image to crop out an interbody cage in the at least one of the lateral view image and the anterior-posterior view image.
15. The system of claim 11, wherein, to generate the fused image, the processor is configured to process the image data to resize at least one of the lateral view image and the anterior-posterior view image such that the lateral view image and the anterior-posterior view image are at the same scale.
16. The system of claim 15, wherein, to generate the fused image, the processor is further configured to process the image data to adjust a contrast of at least one of the lateral view image and the anterior-posterior view image such that the respective contrasts of the lateral view and anterior-posterior view images are within a same range.
17. The system of claim 16, wherein the lateral view and anterior-posterior view images are combined such that the fused image inputs each of the lateral view image and the anterior-posterior view image individually to the machine learning model at the same time.
18. The system of claim 11, wherein each of the lateral view and anterior-posterior view images is a radiograph.
19. The system of claim 11, wherein the at least one machine learning model includes a first machine learning model trained on lateral view images, a second machine learning model trained on anterior-posterior view images, and a third machine learning model trained on fused images.
20. The system of claim 11, wherein the at least one machine learning model is implemented as one or more of a support vector machine and a neural network.
PCT/US2023/071817 2022-09-07 2023-08-08 Computer vision systems and methods for classifying implanted thoracolumbar pedicle screws WO2024054737A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263374877P 2022-09-07 2022-09-07
US63/374,877 2022-09-07

Publications (1)

Publication Number Publication Date
WO2024054737A1 true WO2024054737A1 (en) 2024-03-14

Family

ID=90191857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/071817 WO2024054737A1 (en) 2022-09-07 2023-08-08 Computer vision systems and methods for classifying implanted thoracolumbar pedicle screws

Country Status (1)

Country Link
WO (1) WO2024054737A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110282189A1 (en) * 2010-05-12 2011-11-17 Rainer Graumann Method and system for determination of 3d positions and orientations of surgical objects from 2d x-ray images
US20200107883A1 (en) * 2015-10-30 2020-04-09 Orthosensor Inc A spine measurement system and method therefor
US20200138518A1 (en) * 2017-01-16 2020-05-07 Philipp K. Lang Optical guidance for surgical, medical, and dental procedures


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23863893

Country of ref document: EP

Kind code of ref document: A1