CN116492052B - Mixed-reality spinal three-dimensional visualization surgical navigation system - Google Patents
Mixed-reality spinal three-dimensional visualization surgical navigation system
- Publication number
- CN116492052B (application CN202310450119.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- dimensional
- data
- spine
- mixed reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/361—Image-producing devices, e.g. surgical cameras
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/374—NMR or MRI
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/376—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
- A61B2090/3762—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy using computed tomography systems [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
An embodiment of the invention provides a mixed-reality spinal three-dimensional visualization surgical navigation system, comprising: a mixed reality device provided with a first navigation unit; a C-arm imaging device provided with a second navigation unit; an optical surgical positioning navigator; and a cross-modal image generation module. The optical surgical positioning navigator establishes a three-dimensional spatial coordinate system from the coordinate information of the first navigation unit and of the second navigation unit, and dynamically tracks the three-dimensional spatial coordinates of the mixed reality device. The cross-modal image generation module generates three-dimensional spine focus fusion image data and sends them to the mixed reality device, which performs registration according to the real-time coordinate information, the position information, the real scene image and the three-dimensional spine focus fusion image to obtain a mixed reality three-dimensional image. The system can reduce manual registration errors and improve navigation accuracy, thereby shortening operation time and reducing radiation exposure for both doctor and patient.
Description
Technical Field
The invention relates to the technical field of data processing, and in particular to a mixed-reality spinal three-dimensional visualization surgical navigation system.
Background
Lumbar disc herniation is the most common orthopedic disease, with an incidence of about 20%; 10-15% of these patients require surgical treatment. With the aging of the population and changes in modern working and living habits, the number of patients requiring surgery is increasing at a rate of 20% per year. Most current surgical treatments require cutting the skin at the lesion site and separating muscle and fascia, then resecting to varying degrees parts of the structures that form the vertebral canal (such as the lamina, articular processes, joint capsules and ligaments) to open a window in the canal wall, and finally exposing and removing the diseased intervertebral disc inside the canal. This approach damages the lumbar structure to varying degrees and causes sequelae such as lumbar degeneration and lumbar instability; in some cases the stability of the spine must even be reconstructed after the operation by fusion and internal fixation. Spinal surgical navigation systems, with their advantages of precision and minimal invasiveness, have gradually entered clinical use in recent years, but they still have limitations. A conventional screen-based navigation system usually presents the surgeon with separate two-dimensional views of the lesion (axial, sagittal and coronal), so the relative positions of the tissue structures and the surgical instruments lack an intuitive three-dimensional display; the surgeon must mentally reconstruct the three-dimensional image, which places extremely high demands on spatial imagination and makes the learning curve very steep for young surgeons. In addition, because the navigation interface is separated from the surgical field, the surgeon's gaze must switch back and forth between the navigation screen and the operative area, which reduces efficiency; poor hand-eye coordination can then degrade the accuracy of navigated surgery and affect the surgical outcome.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The main aim of the invention is to provide a mixed-reality spinal three-dimensional visualization surgical navigation system that reduces manual registration errors and improves navigation accuracy, thereby improving the treatment of lumbar disc herniation, reducing complications, shortening operation time and reducing radiation exposure for both doctor and patient.
In a first aspect, an embodiment of the present invention provides a mixed-reality spinal three-dimensional visualization surgical navigation system, comprising:
A mixed reality device comprising a first navigation unit;
A C-arm imaging device comprising a second navigation unit, the C-arm imaging device being used to acquire radiographic image data of the patient's spine during the operation;
An optical surgical positioning navigator used to establish a three-dimensional spatial coordinate system from the coordinate information of the first navigation unit and of the second navigation unit acquired while the patient's spine is scanned, and to dynamically track the three-dimensional spatial coordinates of the mixed reality device so as to obtain real-time coordinate information of the mixed reality device during the operation;
A cross-modal image generation module used to generate three-dimensional spine focus fusion image data and send them to the mixed reality device, where the three-dimensional spine focus fusion image data comprise a three-dimensional spine focus fusion image and position information within that image; the three-dimensional spine focus fusion image is obtained by registering and fusing the radiographic image data with first image data, the first image data are obtained by three-dimensionally reconstructing the lumbar lesion site from the CT and/or MRI images of the patient's spine acquired before the operation, and the position information is derived from the coordinate information of the C-arm imaging device and the three-dimensional spine focus fusion image;
The mixed reality device is further used to acquire a real scene image and to perform registration according to the real-time coordinate information, the position information, the real scene image and the three-dimensional spine focus fusion image, obtaining a mixed reality three-dimensional image.
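To make the coordinate relationships above concrete, the following minimal sketch (Python/NumPy, not part of the patent text) shows how a pose reported for the first navigation unit on the mixed reality device and a pose reported for the second navigation unit on the C-arm could be composed so that a point defined in the fusion-image frame is expressed in the device frame. All frame names, function names and the use of 4x4 homogeneous matrices are illustrative assumptions.

```python
import numpy as np

def rigid_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def fusion_point_in_device_frame(T_tracker_from_device: np.ndarray,
                                 T_tracker_from_carm: np.ndarray,
                                 p_carm: np.ndarray) -> np.ndarray:
    """Map a 3D point defined in the C-arm / fusion-image frame into the MR device frame.

    T_tracker_from_device : pose of the first navigation unit reported by the optical navigator
    T_tracker_from_carm   : pose of the second navigation unit on the C-arm imaging device
    p_carm                : a 3D point (e.g. a lesion position) in the C-arm image frame
    """
    T_device_from_carm = np.linalg.inv(T_tracker_from_device) @ T_tracker_from_carm
    return (T_device_from_carm @ np.append(p_carm, 1.0))[:3]

# With identity poses the point is unchanged; in use, both poses are refreshed every frame
# so the hologram stays anchored to the patient as the headset moves.
p_device = fusion_point_in_device_frame(rigid_transform(np.eye(3), np.zeros(3)),
                                        np.eye(4), np.array([10.0, 0.0, 5.0]))
```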
In some alternative embodiments, the first navigation unit includes a first navigation frame and a plurality of infrared reflective spheres arranged on the first navigation frame, and the second navigation unit includes a second navigation frame and a plurality of infrared reflective spheres arranged on the second navigation frame.
In some optional embodiments, the mixed reality device is further configured to perform a point cloud registration process on the real scene image and the three-dimensional spine focus fusion image to obtain a three-dimensional image of the mixed reality spine;
the mixed reality device is further used for obtaining a mixed reality three-dimensional image according to the real-time coordinate information, the position information and the mixed reality spine three-dimensional image.
In some optional embodiments, the mixed reality device is further configured to obtain first iteration output data of a point cloud registration algorithm model, source point cloud data and target point cloud data, where the source point cloud data is point cloud data extracted from a patient skin region image of the three-dimensional spine focus fusion image, and the target point cloud data is point cloud data extracted from a patient skin region image of the real scene image;
The mixed reality device is further configured to input the first iteration output data, the source point cloud data and the target point cloud data into the point cloud registration algorithm model so as to output second iteration output data, repeating until the second iteration output data meet a preset convergence criterion, where the first iteration output data are the output of the iteration immediately preceding the second iteration output data;
The point cloud registration algorithm model comprises a first rigid body transformation unit, a second rigid body transformation unit, a first feature extraction unit, a second feature extraction unit and a matching matrix calculation unit. The first rigid body transformation unit applies a first rigid body transformation to the first iteration output data and the source point cloud data to obtain rigidly transformed source point cloud data; the second rigid body transformation unit applies a second rigid body transformation to the rigidly transformed source point cloud data and the target point cloud data to obtain rigidly transformed target point cloud data; the first feature extraction unit extracts features from the rigidly transformed source point cloud data to obtain source point cloud feature data; the second feature extraction unit extracts features from the target point cloud data to obtain target point cloud feature data; and the matching matrix calculation unit computes a matching matrix from the rigidly transformed target point cloud data, the source point cloud feature data and the target point cloud feature data, and then registers the target point cloud data with the matching matrix to obtain the second iteration output data.
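The following is a simplified, hedged sketch of an iterative registration loop of the kind described above: in each iteration the source cloud is rigidly transformed, simple features (here just the point coordinates) drive a soft matching matrix, and the matching matrix yields correspondences from which an updated rigid transform is estimated until convergence. The learned feature extraction units and the exact matching rule of the patent are not reproduced; NumPy is assumed.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) taking the src points onto the dst points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def register(source, target, n_iter=30, temperature=1.0):
    """Iteratively estimate R, t aligning `source` to `target` with a soft matching matrix."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        moved = source @ R.T + t                                   # rigid-transform the source cloud
        d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        M = np.exp(-d2 / temperature)                              # matching matrix
        M /= M.sum(axis=1, keepdims=True)
        corresp = M @ target                                       # soft correspondences in the target
        R, t = best_rigid_transform(source, corresp)
        if np.mean((source @ R.T + t - corresp) ** 2) < 1e-8:      # crude convergence check
            break
    return R, t
```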
In some optional embodiments, the cross-modal image generating module further includes a surgical path calculating unit, where the surgical path calculating unit is configured to perform identification processing on the first image data to obtain a nerve compression target, and calculate a puncture path in the first image data according to the nerve compression target.
In some optional embodiments, the cross-modal image generation module further includes an image fusion unit and a data transmission unit, where the image fusion unit is configured to perform vertebra segmentation and recognition on the CT and/or MRI images of the patient's spine acquired before the operation to obtain the first image data;
The image fusion unit is also used to perform symmetric alignment and centroid alignment between the radiographic image data of the patient's spine acquired during the operation and the first image data, to obtain aligned preliminary fusion image data;
The image fusion unit is also used to fuse the preliminary fusion image data with a point cloud fine registration (ICP) algorithm to obtain the three-dimensional spine focus fusion image;
The data transmission unit is used to transmit the three-dimensional spine focus fusion image data generated by the image fusion unit to the mixed reality device.
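As a rough illustration of this coarse-to-fine fusion step, the sketch below uses the Open3D library, which provides a point-to-point ICP implementation. The library choice, the correspondence threshold and the point-set inputs are assumptions, not the patent's implementation, and only centroid alignment is used as the initial guess (the symmetric-alignment step is omitted for brevity).

```python
import numpy as np
import open3d as o3d  # assumed library choice; any ICP implementation would do

def fuse_point_sets(preop_points: np.ndarray, intraop_points: np.ndarray,
                    max_corr_dist: float = 5.0) -> np.ndarray:
    """Return a 4x4 transform taking the preoperative point set onto the intraoperative one."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(preop_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(intraop_points))
    # Coarse step: centroid alignment used as the initial guess.
    init = np.eye(4)
    init[:3, 3] = tgt.get_center() - src.get_center()
    # Fine step: point-to-point ICP refinement.
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```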
In some alternative embodiments, the first image data are obtained from both a CT image and an MRI image;
The image fusion unit is further used to localize the CT image and the MRI image with a vertebra positioning network, obtaining positioning point information for each vertebra in the CT image and for each vertebra in the MRI image;
Segmenting each vertebra in the CT image and in the MRI image according to the positioning point information, to obtain first vertebra image data for each vertebra in the CT image and second vertebra image data for each vertebra in the MRI image;
Obtaining the vertebra-segmented CT image and MRI image from the first vertebra image data of each vertebra in the CT image and the second vertebra image data of each vertebra in the MRI image;
And registering the vertebra-segmented CT image and MRI image to obtain the first image data.
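Purely to illustrate how per-vertebra positioning points can be turned into separate vertebra regions, the toy sketch below crops a fixed-size box around each positioning point in a CT volume and keeps the bone voxels inside it. The box size, the HU threshold and the simple thresholding itself are assumptions that stand in for the segmentation described above.

```python
import numpy as np

def vertebra_masks(ct_volume: np.ndarray, positioning_points_zyx, box=(40, 60, 60),
                   hu_threshold: float = 200.0):
    """Return one boolean mask per positioning point, i.e. one rough mask per vertebra."""
    masks = []
    for cz, cy, cx in positioning_points_zyx:
        lo = [max(0, c - b // 2) for c, b in zip((cz, cy, cx), box)]
        hi = [min(s, c + b // 2) for c, b, s in zip((cz, cy, cx), box, ct_volume.shape)]
        mask = np.zeros(ct_volume.shape, dtype=bool)
        roi = ct_volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = roi > hu_threshold  # keep bone voxels
        masks.append(mask)
    return masks
```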
In some alternative embodiments, the image fusion unit is further configured to:
Extract the landmark points of the vertebra-segmented CT image and the landmark points of the vertebra-segmented MRI image;
Take equal numbers of landmark points from the CT image and from the MRI image in sequence;
Align the centroids of the CT landmark points with the centroids of the MRI landmark points in sequence to obtain an initially aligned image;
And perform a rigid transformation and a pyramid-based non-rigid registration on the initially aligned image to obtain the first image data.
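A minimal sketch of the landmark-based initial alignment follows, assuming the CT and MRI landmark points have already been extracted and paired in equal numbers: the centroids are aligned and a best-fit rotation is estimated with SciPy. The subsequent pyramid-based non-rigid registration is not shown.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def landmark_rigid_align(ct_landmarks: np.ndarray, mri_landmarks: np.ndarray):
    """Rigidly align paired MRI landmarks to CT landmarks (centroid alignment + best-fit rotation)."""
    c_ct, c_mri = ct_landmarks.mean(axis=0), mri_landmarks.mean(axis=0)
    rot, _ = Rotation.align_vectors(ct_landmarks - c_ct, mri_landmarks - c_mri)
    R = rot.as_matrix()
    t = c_ct - R @ c_mri
    return R, t  # an MRI point p maps into CT space as R @ p + t
```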
In a second aspect, an embodiment of the present invention provides a mixed-reality spinal three-dimensional visualization display method, applied to the mixed-reality spinal three-dimensional visualization surgical navigation system of the first aspect, the method comprising:
Acquiring CT images and/or MRI images of the spine of a patient before an operation;
Performing three-dimensional reconstruction on the CT image and/or the MRI image to obtain first image data;
acquiring radiographic image data by the C-arm imaging device;
Registering and fusing the ray image data and the first image data to obtain a three-dimensional spine focus fused image;
acquiring coordinate information of the mixed reality equipment and coordinate information of the C-arm imaging equipment through the first navigation unit and the second navigation unit respectively;
Establishing, by the optical surgical positioning navigator, a three-dimensional spatial coordinate system from the coordinate information of the first navigation unit and of the second navigation unit acquired while the patient's spine is scanned, and dynamically tracking the three-dimensional spatial coordinates of the mixed reality device to obtain real-time coordinate information of the mixed reality device during the operation;
Processing the three-dimensional spine focus fusion image according to the coordinate information of the C-arm imaging equipment to obtain the position information in the three-dimensional spine focus fusion image;
Acquiring a real scene image through the mixed reality equipment;
And carrying out registration processing according to the real-time coordinate information, the position information, the real scene image and the three-dimensional spine focus fusion image to obtain a mixed reality three-dimensional image.
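Read as a data flow, the steps above can be summarised by the sketch below, in which every processing step is passed in as a callable; all names are placeholders introduced for illustration, and no concrete implementation is implied.

```python
from typing import Callable

def build_mixed_reality_image(preop_volumes,
                              reconstruct_3d: Callable,     # CT/MRI -> first image data
                              acquire_xray: Callable,       # C-arm radiographic image data
                              fuse: Callable,               # register/fuse X-ray with first image data
                              track_device: Callable,       # real-time MR device coordinates
                              locate_in_fusion: Callable,   # position info from C-arm coordinates
                              capture_scene: Callable,      # real scene image from the MR device
                              register_to_scene: Callable): # final registration step
    first_image = reconstruct_3d(preop_volumes)
    fusion_image = fuse(acquire_xray(), first_image)
    device_pose = track_device()
    position_info = locate_in_fusion(fusion_image)
    scene = capture_scene()
    return register_to_scene(device_pose, position_info, scene, fusion_image)
```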
In a third aspect, an embodiment of the present invention provides a controller comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the mixed-reality spinal three-dimensional visualization display method of the second aspect.
In a fourth aspect, a computer-readable storage medium stores computer-executable instructions for performing the mixed-reality spinal three-dimensional visualization display method of the second aspect.
The beneficial effects of the invention include the following. A mixed-reality spinal three-dimensional visualization surgical navigation system comprises a mixed reality device, a C-arm imaging device, an optical surgical positioning navigator and a cross-modal image generation module. The mixed reality device comprises a first navigation unit. The C-arm imaging device comprises a second navigation unit and is used to acquire radiographic image data of the patient's spine during the operation. The optical surgical positioning navigator establishes a three-dimensional spatial coordinate system from the coordinate information of the first and second navigation units acquired while the patient's spine is scanned, and dynamically tracks the three-dimensional spatial coordinates of the mixed reality device to obtain its real-time coordinate information during the operation. The cross-modal image generation module generates three-dimensional spine focus fusion image data and sends them to the mixed reality device; these data comprise a three-dimensional spine focus fusion image and position information within that image, where the fusion image is obtained by registering and fusing the radiographic image data with the first image data, the first image data are obtained by three-dimensionally reconstructing the lumbar lesion site from the CT and/or MRI images of the patient's spine acquired before the operation, and the position information is derived from the coordinate information of the C-arm imaging device and the fusion image. The mixed reality device further acquires a real scene image and performs registration according to the real-time coordinate information, the position information, the real scene image and the three-dimensional spine focus fusion image to obtain a mixed reality three-dimensional image.
With this mixed-reality spinal three-dimensional visualization surgical navigation system, the preoperative first image data (CT and/or MRI images of the patient's spine) can be fused with the radiographic image data acquired intraoperatively by the C-arm imaging device to obtain a three-dimensional spine focus fusion image, which the mixed reality device then registers to the corresponding position in the real scene image. The virtual three-dimensional lumbar model is thus registered to the patient's real anatomical position in the mixed reality device accurately, rapidly and automatically, and important structures of the patient's spine (such as bones, nerves, blood vessels, ligaments and the diseased intervertebral disc) can be displayed dynamically at the patient's real anatomical position. During the operation the surgeon sees these structures directly and accurately through the mixed reality device, which reduces manual registration error, improves navigation accuracy, improves the treatment of lumbar disc herniation, reduces complications, shortens operation time and reduces radiation exposure for both doctor and patient.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
FIG. 1 is a schematic illustration of a three-dimensional visualization surgical navigation system based on a mixed reality spine provided in one embodiment of the present invention;
FIG. 2 is a schematic diagram of a controller in a cross-modality image display module of a mixed reality spinal three-dimensional visualization surgical navigation system provided in accordance with one embodiment of the present invention;
FIG. 3 is a flow chart of a method for three-dimensional visualization of a mixed reality spine for use with a three-dimensional visualization surgical navigation system based on the mixed reality spine provided in one embodiment of the present invention;
FIG. 4 is a schematic view of a CT image provided in accordance with one embodiment of the present invention;
FIG. 5 is a schematic illustration of an MRI image provided in accordance with an embodiment of the present invention;
FIG. 6 is a schematic illustration of a vertebrae positioning network for vertebrae segmentation processing of CT images, MRI images, according to an embodiment of the present invention;
FIG. 7 is a schematic illustration of radiographic image data provided by one embodiment of the invention;
FIG. 8 is a schematic representation of a three-dimensional spinal lesion fusion image provided in accordance with one embodiment of the present invention;
FIG. 9 is a schematic diagram of the operation principle of a mixed reality spinal three-dimensional visualization surgical navigation system provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of a point cloud registration algorithm model in a three-dimensional visualization display method of a mixed reality spine according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description, in the claims and in the above-described figures, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
Lumbar disc herniation is the most common orthopedic disease, with an incidence of about 20%; 10-15% of these patients require surgical treatment. With the aging of the population and changes in modern working and living habits, the number of patients requiring surgery is increasing at a rate of 20% per year. Most current surgical treatments require cutting the skin at the lesion site and separating muscle and fascia, then resecting to varying degrees parts of the structures that form the vertebral canal (such as the lamina, articular processes, joint capsules and ligaments) to open a window in the canal wall, and finally exposing and removing the diseased intervertebral disc inside the canal. This approach damages the lumbar structure to varying degrees and causes sequelae such as lumbar degeneration and lumbar instability; in some cases the stability of the spine must even be reconstructed after the operation by fusion and internal fixation. Spinal surgical navigation systems, with their advantages of precision and minimal invasiveness, have gradually entered clinical use in recent years, but they still have limitations. A conventional screen-based navigation system usually presents the surgeon with separate two-dimensional views of the lesion (axial, sagittal and coronal), so the relative positions of the tissue structures and the surgical instruments lack an intuitive three-dimensional display; the surgeon must mentally reconstruct the three-dimensional image, which places extremely high demands on spatial imagination and makes the learning curve very steep for young surgeons. In addition, because the navigation interface is separated from the surgical field, the surgeon's gaze must switch back and forth between the navigation screen and the operative area, which reduces efficiency; poor hand-eye coordination can then degrade the accuracy of navigated surgery and affect the surgical outcome.
In order to solve the above problems, an embodiment of the present invention provides a mixed-reality spinal three-dimensional visualization surgical navigation system comprising a mixed reality device, a C-arm imaging device, an optical surgical positioning navigator and a cross-modal image generation module. The mixed reality device comprises a first navigation unit. The C-arm imaging device comprises a second navigation unit and is used to acquire radiographic image data of the patient's spine during the operation. The optical surgical positioning navigator establishes a three-dimensional spatial coordinate system from the coordinate information of the first and second navigation units acquired while the patient's spine is scanned, and dynamically tracks the three-dimensional spatial coordinates of the mixed reality device to obtain its real-time coordinate information during the operation. The cross-modal image generation module generates three-dimensional spine focus fusion image data and sends them to the mixed reality device; these data comprise a three-dimensional spine focus fusion image and position information within that image, where the fusion image is obtained by registering and fusing the radiographic image data with the first image data, the first image data are obtained by three-dimensionally reconstructing the lumbar lesion site from the CT and/or MRI images of the patient's spine acquired before the operation, and the position information is derived from the coordinate information of the C-arm imaging device and the fusion image. The mixed reality device further acquires a real scene image and performs registration according to the real-time coordinate information, the position information, the real scene image and the three-dimensional spine focus fusion image to obtain a mixed reality three-dimensional image.
In this embodiment, the patient's spine is scanned intraoperatively by the C-arm imaging device to obtain radiographic image data, and the cross-modal image generation module registers and fuses the radiographic image data with the first image data to generate a three-dimensional spine focus fusion image. The optical surgical positioning navigator establishes a three-dimensional spatial coordinate system from the coordinate information of the first and second navigation units acquired while the patient's spine is scanned, and dynamically tracks the three-dimensional spatial coordinates of the mixed reality device to obtain its real-time coordinate information during the operation. Because the fusion image is derived from the radiographic image data acquired by the C-arm imaging device, the coordinates of the C-arm imaging device at acquisition time can be matched to coordinates on the fusion image; and because the spatial coordinate system is built from the coordinate information of the first navigation unit and of the second navigation unit on the C-arm imaging device, the fusion image can be registered to the real scene image according to the real-time coordinate information of the mixed reality device and the position information of the fusion image. On this basis, the preoperative first image data (CT and/or MRI images of the patient's spine) are fused with the intraoperative radiographic image data to obtain the three-dimensional spine focus fusion image, which the mixed reality device registers to the corresponding position in the real scene image. The virtual three-dimensional lumbar model is thus registered to the patient's real anatomical position accurately, rapidly and automatically, important structures of the spine (such as bones, nerves, blood vessels, ligaments and the diseased intervertebral disc) are displayed dynamically at the patient's real anatomical position, and the surgeon sees them directly during the operation, which reduces manual registration error, improves navigation accuracy, improves the treatment of lumbar disc herniation, reduces complications, shortens operation time and reduces radiation exposure for both doctor and patient.
Embodiments of the present invention will be further described below with reference to the accompanying drawings.
Referring to FIG. 1, a mixed-reality spinal three-dimensional visualization surgical navigation system according to an embodiment of the present invention comprises a mixed reality device 110, a C-arm imaging device 120, an optical surgical positioning navigator 130 and a cross-modal image generation module 140. The mixed reality device 110 comprises a first navigation unit 111 and the C-arm imaging device 120 comprises a second navigation unit; the C-arm imaging device 120 is used to acquire radiographic image data of the patient's spine during the operation. The optical surgical positioning navigator 130 acquires the coordinate information of the first navigation unit 111 and of the second navigation unit and establishes a three-dimensional spatial coordinate system from them, so that the three-dimensional spatial coordinates of the mixed reality device 110 can be tracked dynamically in this coordinate system to obtain its real-time coordinate information. The cross-modal image generation module 140 generates three-dimensional spine focus fusion image data and sends them to the mixed reality device 110; these data comprise a three-dimensional spine focus fusion image and position information within that image, where the fusion image is obtained by registering and fusing the radiographic image data with the first image data, the first image data are obtained by three-dimensionally reconstructing the CT and/or MRI images of the patient's spine acquired before the operation, and the position information is derived from the coordinate information of the C-arm imaging device 120 and the fusion image.
In some alternative embodiments, the first navigation unit 111 includes a first navigation frame and a plurality of infrared reflective spheres arranged on it, and the second navigation unit includes a second navigation frame and a plurality of infrared reflective spheres arranged on it. The optical surgical positioning navigator 130 is an NDI VEGA optical tracker, with which the position of the mixed reality device 110 can be tracked dynamically to obtain its real-time coordinate information; the relative relationship between the real scene image acquired by the mixed reality device 110 and the three-dimensional spine focus fusion image fused with the radiographic image data acquired by the C-arm imaging device 120 can then be determined from the real-time coordinate information of the mixed reality device 110 and the position information of the fusion image. It should be understood that "a plurality of infrared reflective spheres" means at least three spheres; the exact number is not particularly limited in this embodiment.
In some alternative embodiments, the infrared reflective device (the first navigation unit 111) is attached to a Microsoft HoloLens 2 head-mounted MR device (the mixed reality device 110), referred to below as the HoloLens 2 glasses. An NDI VEGA optical tracker (the optical surgical positioning navigator 130) dynamically tracks the three-dimensional spatial coordinates of the HoloLens 2 glasses, and the inertial tracking system, accelerometer and gyroscope of the HoloLens 2 are used so that the holographic content presented in the MR field of view changes correspondingly with the movement of the surgeon's head and viewing angle, while the spatial position of the virtual three-dimensional lumbar model does not shift sharply and always remains highly coincident with the real anatomical position under optical tracking. This prevents the loss of navigation accuracy caused by image drift when the surgeon's head moves substantially during the operation. Furthermore, the optical tracking device, that is, the associated infrared emitting and receiving hardware, can be fully integrated on the head-mounted MR device, dispensing with the traditional binocular camera, reducing the operating-room space occupied by the navigation equipment, and achieving a more tightly integrated combination of mixed reality and navigation positioning.
It should be noted that when Microsoft HoloLens 2 is chosen as the mixed reality (MR) deployment device, a computer workstation based on the ARM64 architecture is set up through the UWP platform and the corresponding development software is deployed on it: the Windows 10 SDK 10.0.18362.0 is installed, the lumbar multimodal fusion image and the planned surgical path data are converted in format and imported into a Unity 2019.4 environment for MR scene construction, and a C# project is compiled and generated with the MRTK (Mixed Reality Toolkit) 2.4.0 toolkit. The generated C# solution is deployed to the Microsoft HoloLens 2 through Microsoft Visual Studio 2019; the surgeon then wears the HoloLens 2, starts the deployed application, and views the virtual lumbar three-dimensional visualization model for interaction between the virtual content and the real space.
In some optional embodiments, the mixed reality device 110 is further configured to perform point cloud registration between the real scene image and the three-dimensional spine focus fusion image to obtain a mixed reality spine three-dimensional image, and then to obtain the mixed reality three-dimensional image from the real-time coordinate information, the position information and the mixed reality spine three-dimensional image. In actual use the mixed reality device 110 is worn by medical staff and moves during the operation; because the mixed reality three-dimensional image is obtained from the real-time coordinate information, the position information and the mixed reality spine three-dimensional image, the position of the mixed reality spine three-dimensional image in the displayed image does not change as the device 110 moves, and the three-dimensional spine focus fusion image remains fixed on the target spinal region of the patient in the real scene image. This improves registration accuracy and avoids the registration deviation between the real scene image and the fusion image that would otherwise be caused by head movement of the staff wearing the mixed reality device 110 during the operation.
Specifically, the point cloud registration process is as follows. First iteration output data of a point cloud registration algorithm model, source point cloud data and target point cloud data are acquired, where the source point cloud data are extracted from the patient skin region of the three-dimensional spine focus fusion image and the target point cloud data are extracted from the patient skin region of the real scene image. The mixed reality device inputs the first iteration output data, the source point cloud data and the target point cloud data into the point cloud registration algorithm model to output second iteration output data, repeating until the second iteration output data meet a preset convergence criterion; the first iteration output data are the output of the immediately preceding iteration. The point cloud registration algorithm model comprises a first rigid body transformation unit, a second rigid body transformation unit, a first feature extraction unit, a second feature extraction unit and a matching matrix calculation unit. The first rigid body transformation unit applies a first rigid body transformation to the first iteration output data and the source point cloud data to obtain rigidly transformed source point cloud data; the second rigid body transformation unit applies a second rigid body transformation to the rigidly transformed source point cloud data and the target point cloud data to obtain rigidly transformed target point cloud data; the first feature extraction unit extracts features from the rigidly transformed source point cloud data to obtain source point cloud feature data; the second feature extraction unit extracts features from the target point cloud data to obtain target point cloud feature data; and the matching matrix calculation unit computes a matching matrix from the rigidly transformed target point cloud data, the source point cloud feature data and the target point cloud feature data, and then registers the target point cloud data with the matching matrix to obtain the second iteration output data.
In some optional embodiments, the cross-modal image generation module 140 is configured to perform vertebra segmentation and recognition on the CT and/or MRI images of the patient's spine acquired before the operation to obtain the first image data; to perform symmetric alignment and centroid alignment between the radiographic image data of the patient's spine acquired during the operation and the first image data, obtaining aligned preliminary fusion image data; and then to fuse the preliminary fusion image data with a point cloud fine registration (ICP) algorithm to obtain the three-dimensional spine focus fusion image, which is sent to the mixed reality device.
In some alternative embodiments, the first image data are obtained from both a CT image and an MRI image. Because the differences in shape and gray level between adjacent vertebrae are small, the vertebrae are difficult to distinguish in automatic segmentation: although bone can be extracted by thresholding, the spaces between the vertebrae remain connected and hard to separate. Vertebra segmentation is therefore treated as multi-class segmentation, whose goal is to segment each vertebra (including the lumbar vertebrae and the sacrum) and the joint capsules between them individually, in support of subsequent puncture planning, navigation and CT-MRI fusion. The image fusion unit can thus also be used to localize the CT image and the MRI image with a vertebra positioning network, obtaining positioning point information for each vertebra in the CT image and in the MRI image; to segment each vertebra in the CT image and in the MRI image according to the positioning point information, obtaining first vertebra image data for each vertebra in the CT image and second vertebra image data for each vertebra in the MRI image; to obtain the vertebra-segmented CT image and MRI image from these data; and to register the vertebra-segmented CT image and MRI image to obtain the first image data.
In some alternative embodiments, because a similarity measure between multimodal images is difficult to define, the CT/MRI multimodal images are registered automatically on the basis of the vertebra segmentation of the CT image and the MRI image. The image fusion unit can further be used to extract the landmark points of the vertebra-segmented CT image and of the vertebra-segmented MRI image; to take equal numbers of landmark points from the CT image and from the MRI image in sequence; to align the centroids of the CT landmark points with those of the MRI landmark points in sequence to obtain an initially aligned image; and then to apply a rigid transformation and a pyramid-based non-rigid registration to the initially aligned image to obtain the first image data. The registration network adopts a Laplacian pyramid network, which has performed well in several computer vision tasks such as super-resolution image reconstruction and optical flow estimation. The network consists of three parts: feature encoding, several residual modules, and feature decoding. Two 3x3 convolution layers serve as the feature extraction layers, one convolution with stride 2 performs downsampling, five residual modules follow, and one deconvolution plus two successive convolutions serve as the feature decoding layers; a long skip connection is added, and the output is a three-channel deformation vector field of the same size as the input.
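Because the paragraph above pins down the layer layout, a rough PyTorch sketch is given below: two 3x3 convolutions for feature extraction, a stride-2 convolution for downsampling, five residual modules, a deconvolution plus two convolutions for decoding, a long skip connection, and a three-channel deformation vector field of the input size. The channel width, the use of 3D convolutions and the even-sized-input assumption are mine; this is not the patent's exact network.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class RegistrationNet(nn.Module):
    """Assumed layout: encode -> downsample -> 5 residual blocks -> decode, with a long skip."""
    def __init__(self, in_ch=2, ch=32):
        super().__init__()
        self.encode = nn.Sequential(                      # two 3x3 feature-extraction convolutions
            nn.Conv3d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Conv3d(ch, ch, 3, stride=2, padding=1)         # stride-2 downsampling
        self.res = nn.Sequential(*[ResidualBlock(ch) for _ in range(5)])
        self.up = nn.ConvTranspose3d(ch, ch, 4, stride=2, padding=1)  # deconvolution
        self.head = nn.Sequential(                        # two convolutions -> deformation field
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 3, 3, padding=1))

    def forward(self, moving, fixed):
        # moving/fixed: (N, 1, D, H, W) volumes; even D, H, W so sizes match after up-sampling
        feat = self.encode(torch.cat([moving, fixed], dim=1))
        y = self.up(self.res(self.down(feat)))
        return self.head(y + feat)                        # long skip connection, 3-channel output
```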
In some optional embodiments, the cross-modality image generation module 140 further includes a surgical path calculation unit. The surgical path calculation unit is configured to identify a nerve compression target in the first image data, calculate a puncture path in the first image data according to the nerve compression target, and send the puncture path to the display screen unit for display. With this unit, the intelligent puncture path planning module of the navigation system automatically identifies the herniated lumbar disc and nearby structures such as bone, ligament, and the compressed nerves or blood vessels by means of an intelligent algorithm, and proposes safe and effective puncture path schemes for the operating surgeon to review and select, on the premise of not damaging important tissue. A deep learning algorithm automatically learns to segment the vertebrae, intervertebral discs, nerves, and blood vessels around the lumbar intervertebral foramen. Taking the lumbar disc herniation as the center, parameters such as the centers of candidate surgical path targets, the adjacent important nerves and vessels, the facet joint distance, and the corresponding Kambin triangle area are calculated, the associated percutaneous puncture angles and depths are recorded, and a convolutional neural network is trained iteratively, finally proposing one or more optimal surgical path schemes for the operator to review and select.
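As a simple illustration of how candidate puncture paths could be screened against the segmented structures, the sketch below ranks straight entry-to-target paths by their clearance from nerve and vessel point sets. The sampling density and safety margin are illustrative assumptions and not parameters disclosed in the patent.

```python
# Rank candidate puncture paths by minimum clearance from critical structures,
# assuming segmented nerve/vessel voxel coordinates are already available (in mm).
import numpy as np

def path_clearance(entry, target, obstacle_points, n_samples=50):
    """Smallest distance from the straight entry->target line to any obstacle point."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    samples = entry[None, :] * (1 - t) + target[None, :] * t      # points on the path
    d = np.linalg.norm(samples[:, None, :] - obstacle_points[None, :, :], axis=2)
    return d.min()

def rank_paths(candidates, target, nerve_pts, vessel_pts, margin_mm=3.0):
    """Keep paths that clear nerves/vessels by margin_mm, sorted by clearance."""
    scored = []
    for entry in candidates:
        clearance = min(path_clearance(entry, target, nerve_pts),
                        path_clearance(entry, target, vessel_pts))
        if clearance >= margin_mm:
            scored.append((clearance, entry))
    return [e for _, e in sorted(scored, key=lambda s: -s[0])]
```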
Fig. 2 is a schematic diagram of a controller for performing a three-dimensional visualization display method of a mixed reality spine according to an embodiment of the present application, where the controller is disposed in a cross-modal image display module of the three-dimensional visualization surgical navigation system of fig. 1.
In the example of fig. 2, the controller 100 is provided with a processor 110 and a memory 120, where the processor 110 and the memory 120 may be connected by a bus or otherwise; connection by a bus is taken as an example in fig. 2.
Memory 120, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs. In addition, memory 120 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 120 optionally includes memory remotely located relative to the processor 110, which may be connected to the controller 100 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The controller may be a programmable controller, or may be other controllers, which are not particularly limited in this embodiment.
It will be appreciated by those skilled in the art that the controller shown in fig. 2 is not limiting of the embodiments of the application and may include more or fewer components than shown, or certain components may be combined, or a different arrangement of components.
Based on the above controller, embodiments of the three-dimensional visualization display method for a mixed reality spine, applied to the three-dimensional visualization surgical navigation system based on a mixed reality spine, are described below.
Referring to fig. 3, fig. 3 is a flowchart of a three-dimensional visualization display method of a mixed reality spine according to an embodiment of the present invention, and the three-dimensional visualization display method of the mixed reality spine according to the embodiment of the present invention may include, but is not limited to, step S100, step S200, step S300, step S400, step S500, step S600, step S700, step S800, and step S900.
Step S100, CT images and/or MRI images of the spine of the patient before the operation are acquired.
Specifically, the CT image shown in fig. 4 clearly shows bone tissue, while the MRI image shown in fig. 5 shows soft tissues in fine detail, such as skeletal muscle, the diseased intervertebral disc or ligament, and compressed nerves or blood vessels, so that pathological conditions that cannot be detected by ultrasound can be identified.
Step S200, three-dimensional reconstruction of the lumbar lesion region is performed on the CT image and/or the MRI image to obtain first image data.
Specifically, the fusion image obtained by three-dimensional reconstruction of the CT image and the MRI image allows the doctor to identify, before the operation, lesions such as bone and gas that are not easily detected by intraoperative ultrasound, and regions that ultrasound scanning cannot display in the same plane can be seen simultaneously in the fused CT-MRI image. The cross-modality image fusion technique yields first image data with better resolution and a larger field of view than ultrasound.
In some optional embodiments, first positioning processing is performed on the CT image and the MRI image through a vertebra positioning network to obtain positioning point information corresponding to each vertebra of the CT image and positioning point information corresponding to each vertebra of the MRI image; each vertebra in the CT image and the MRI image is then segmented according to the positioning point information to obtain first vertebra image data of each vertebra in the CT image and second vertebra image data of each vertebra in the MRI image; the vertebra-segmented CT image and MRI image are obtained from the first vertebra image data and the second vertebra image data; and the vertebra-segmented CT image and MRI image are registered to obtain the first image data.
In some optional embodiments, in the step of registering the vertebra-segmented CT image and MRI image to obtain the first image data, landmark points of the vertebra-segmented CT image and landmark points of the vertebra-segmented MRI image may first be extracted; the same number of landmark points are then taken from the CT image and from the MRI image in sequence; the landmark points of the CT image are aligned with those of the MRI image in sequence to obtain an initial aligned image; and rigid transformation and pyramid-based non-rigid registration are then applied to the initial aligned image to obtain the first image data.
In some alternative embodiments, referring to fig. 6, the vertebra positioning network consists of two parts: the first part is an automatic positioning network for the vertebrae, and the second part segments the vertebra at each automatically located positioning point. Each positioning-point heat map is regressed in the vertebra positioning network by a semantic segmentation network. To generate the heat map, the centroid x_i of the i-th vertebra is first calculated, and a gold-standard heat map is generated from the centroid with a Gaussian blur function:

H_i(x) = exp(−||x − x_i||² / (2σ_i²))
The gray value is larger the closer a voxel is to the target point x_i, where σ_i is a learnable parameter, so the heat map resembles a spot image. The initially generated heat map H_LA is coarse: multiple spots may appear, or the spots may be blurred. The spot image is fed into several consecutive convolution kernels (shown in green in fig. 6) whose goal is to eliminate the false spots; a larger convolution kernel scale is used to enlarge the receptive field of the features and extract the target point from a global perspective, yielding a more accurate heat map H_SC. H_LA has a small field of view and extracts local features, so it predicts the target point more accurately but suffers from a high false-positive rate for specific target points; H_SC has a large receptive field and a low false-positive rate, but its predicted position is less accurate. The advantages of the two are therefore combined, and the target heat map is obtained by element-wise multiplication:

H_i = H_LA,i ⊙ H_SC,i
The objective function employs an L2 loss function between the predicted heat map and the gold-standard heat map:

L_heat = Σ_i ||H_i − Ĥ_i||²
The final positioning point takes the coordinates of the point with the largest gray value in the heat map:

x̂_i = argmax_x H_i(x)
According to the obtained heat map points, a local original gray-level image of the vertebra is extracted by cropping; the heat map and the gray-level image are taken as the input of the segmentation network U-Net, which then focuses on segmenting the vertebra where the positioning point is located. The loss function adopts cross entropy together with the DSC loss:

L_seg = L_CE + L_DSC
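A minimal sketch of the gold-standard heat map generation and positioning-point extraction described above is given below, assuming a 3-D label volume; the fixed sigma value stands in for the network's learnable parameter σ_i.

```python
# Gaussian heat map centered at a vertebra centroid, and anchor extraction as the
# argmax of the product of a local and a global heat map.
import numpy as np

def gaussian_heatmap(shape, centroid, sigma=4.0):
    """Heat map whose gray value grows as a voxel approaches the vertebra centroid."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
    d2 = ((grid - np.asarray(centroid)) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def extract_anchor(heatmap_local, heatmap_global):
    """Combine the local and global heat maps by multiplication and take the argmax."""
    combined = heatmap_local * heatmap_global
    return np.unravel_index(np.argmax(combined), combined.shape)

# usage: centroid of the i-th vertebra from its label mask
# centroid_i = np.argwhere(label == i).mean(axis=0)
```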
Step S300, radiographic image data are acquired by the C-arm imaging device.
Specifically, although the first image data obtained by fusing the CT image and the MRI image can identify the pathological condition, it cannot reflect the real-time condition of the patient; during the operation, the patient's spine therefore needs to be scanned by the C-arm imaging device to obtain real-time radiographic image data as shown in fig. 7.
Step S400, the radiographic image data and the first image data are registered and fused to obtain a three-dimensional spine focus fusion image.
Specifically, the three-dimensional spine focus fusion image shown in fig. 8 is obtained by registering and fusing the radiographic image data with the first image data. While providing a real-time image of the patient's spine, the fusion image also carries the lesion information from the preoperative CT and MRI images. The three-dimensional spine focus fusion image obtained by the cross-modality image fusion technique has the following advantages: 1. superimposing the radiographic image data on the MRI/CT images effectively increases the clarity and coverage of intraoperative imaging; 2. real-time fast imaging makes the registration adjustment between preoperative and intraoperative images simpler, which is very advantageous for guiding interventional examination and treatment.
In some optional embodiments, the radiographic image data of the patient's spine acquired during the operation and the first image data are first subjected to symmetry alignment and centroid alignment to obtain aligned primary fusion image data; the image fusion module unit then fuses the primary fusion image data using a point cloud fine registration ICP algorithm to obtain the three-dimensional spine focus fusion image and transmits it to the display screen unit.
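The following is a minimal NumPy/SciPy sketch of this coarse-then-fine scheme: centroid alignment followed by an ICP loop with closest-point correspondences and a least-squares rigid update. It assumes both images have already been converted to point clouds in a common millimetre scale; the iteration count and convergence threshold are illustrative, and the symmetry alignment step is omitted.

```python
# Centroid alignment followed by a basic ICP fine registration loop.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, max_iter=50, tol=1e-4):
    src = source - source.mean(axis=0) + target.mean(axis=0)   # centroid alignment
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)               # closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                       # apply the rigid update
        err = dist.mean()
        if abs(prev_err - err) < tol:             # convergence criterion
            break
        prev_err = err
    return src, err
```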
Step S500, the coordinate information of the mixed reality device and the coordinate information of the C-arm imaging device are acquired through the first navigation unit and the second navigation unit, respectively.
Specifically, after the three-dimensional spine focus fusion image has been generated, in order to ensure that the fusion image remains registered to the real scene image even when the mixed reality device moves during the operation, the coordinate information of the mixed reality device and of the C-arm imaging device needs to be acquired through the first navigation unit and the second navigation unit, respectively.
Step S600, a three-dimensional space coordinate system is established from the coordinate information of the first navigation unit and the coordinate information of the second navigation unit acquired while the optical surgical positioning navigator scans the patient's spine, and the three-dimensional space coordinates of the mixed reality device are dynamically tracked to obtain real-time coordinate information of the mixed reality device during the operation.
Specifically, after the coordinate information of the first navigation unit and of the second navigation unit is acquired, a common three-dimensional space coordinate system of the mixed reality device and the C-arm imaging device is established from this information while the optical surgical positioning navigator scans the patient's spine. The mixed reality device is then tracked dynamically in this coordinate system to obtain its real-time coordinate information, so that the relative positional relationship between the real scene image and the three-dimensional spine focus fusion image can be determined from the real-time coordinate information of the mixed reality device and the coordinate information of the C-arm imaging device.
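As an illustration of how the tracked coordinates can be combined, the sketch below derives the pose of the C-arm relative to the moving mixed reality headset from the two navigation units, assuming the optical surgical positioning navigator reports each unit's pose as a 4×4 homogeneous matrix in the navigator frame.

```python
# Relative pose of the C-arm in the headset frame from two tracked 4x4 poses.
import numpy as np

def inverse_pose(T):
    """Invert a 4x4 rigid transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def carm_in_headset_frame(T_nav_headset, T_nav_carm):
    """Transform taking C-arm coordinates into the headset coordinate frame."""
    return inverse_pose(T_nav_headset) @ T_nav_carm

# usage: a point from the fusion image (C-arm frame) mapped into the headset frame
# p_headset = (carm_in_headset_frame(T_h, T_c) @ np.append(p_carm, 1.0))[:3]
```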
Step S700, the position information in the three-dimensional spine focus fusion image is obtained according to the coordinate information of the C-arm imaging device and the three-dimensional spine focus fusion image.
Specifically, the position information in the three-dimensional spine focus fusion image is obtained by processing the coordinate information of the C-arm imaging device together with the fusion image. This position information gives, for each pixel of the fusion image, its corresponding position in the three-dimensional space coordinate system. Because the fusion image is fused from the radiographic image data acquired by the C-arm imaging device, its position information can be related to the positions of the images acquired by the mixed reality device.
Step S800, a real scene image is acquired by the mixed reality device.
Specifically, in order to register the three-dimensional spine focus fusion image to the real scene facing the mixed reality device worn by the medical staff, a real scene image is acquired through the mixed reality device. In some optional embodiments, at the same time as step S500, the mixed reality device acquires the real scene image and the C-arm imaging device acquires the radiographic image data, and a three-dimensional space coordinate system is established from the coordinate information of the first navigation unit and of the second navigation unit acquired at that time point. The coordinates of each pixel of the real scene image in this coordinate system can then be obtained, and likewise the coordinates of each pixel of the synchronously acquired radiographic image data. In this way, the coordinate information of the real scene image acquired by the mixed reality device can be associated with the coordinate information of the radiographic image data in three-dimensional space, that is, with the position information of the three-dimensional spine focus fusion image.
Step S900, registration processing is performed according to the real-time coordinate information, the position information, the real scene image and the three-dimensional spine focus fusion image to obtain the mixed reality three-dimensional image.
Specifically, point cloud registration processing is performed on the real scene image and the three-dimensional spine focus fusion image to obtain a mixed reality spine three-dimensional image; the mixed reality three-dimensional image is then obtained according to the real-time coordinate information, the position information and the mixed reality spine three-dimensional image.
In some alternative embodiments, as shown in figs. 9-10, the point cloud registration processing of the real scene image and the three-dimensional spine focus fusion image that yields the mixed reality spine three-dimensional image proceeds as follows. First, the first iteration output data of a point cloud registration algorithm model, the source point cloud data and the target point cloud data are acquired, where the source point cloud data is extracted from the patient skin area of the three-dimensional spine focus fusion image and the target point cloud data is extracted from the patient skin area of the real scene image. The mixed reality device then inputs the first iteration output data, the source point cloud data and the target point cloud data into the point cloud registration algorithm model, which outputs second iteration output data; this repeats until the second iteration output data meets a preset convergence criterion, the first iteration output data being the previous output of the second iteration output data. The point cloud registration algorithm model comprises a first rigid body transformation unit, a second rigid body transformation unit, a first feature extraction unit, a second feature extraction unit and a calculation matching matrix unit. The first rigid body transformation unit performs a first rigid body transformation on the first iteration output data and the source point cloud data to obtain rigid-body-transformed source point cloud data; the second rigid body transformation unit performs a second rigid body transformation on the rigid-body-transformed source point cloud data and the target point cloud data to obtain rigid-body-transformed target point cloud data; the first feature extraction unit extracts features from the rigid-body-transformed source point cloud data to obtain source point cloud feature data; the second feature extraction unit extracts features from the target point cloud data to obtain target point cloud feature data; the calculation matching matrix unit processes the rigid-body-transformed target point cloud data, the source point cloud feature data and the target point cloud feature data to obtain a matching matrix, and then registers the target point cloud data through the matching matrix to obtain the second iteration output data.
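A minimal sketch of one iteration of such a model is given below (rigid body transformation of the source, feature extraction of source and target, a soft matching matrix, and an updated transform estimate). The shared-MLP features and softmax matching are illustrative stand-ins for the trained units described above, and the sketch is written for inference with an already trained feature extractor.

```python
# One iteration of a learned point cloud registration step.
import torch
import torch.nn as nn

class PointFeat(nn.Module):
    """Per-point feature extraction unit (shared MLP over an N x 3 point cloud)."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, pts):
        return self.mlp(pts)

def apply_rigid(T, pts):
    """Apply a 4x4 rigid body transform to an N x 3 point cloud."""
    return pts @ T[:3, :3].T + T[:3, 3]

def svd_rigid(src, dst):
    """Least-squares 4x4 rigid transform mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = torch.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if torch.det(R) < 0:                               # reflection-safe rotation
        R = Vt.T @ torch.diag(torch.tensor([1.0, 1.0, -1.0])) @ U.T
    T = torch.eye(4)
    T[:3, :3], T[:3, 3] = R, cd - R @ cs
    return T

@torch.no_grad()
def registration_iteration(T_prev, source, target, feat, temperature=0.1):
    src = apply_rigid(T_prev, source)                  # rigid body transformation unit
    f_src, f_tgt = feat(src), feat(target)             # feature extraction units
    match = torch.softmax(f_src @ f_tgt.T / temperature, dim=1)   # matching matrix
    corres = match @ target                            # soft correspondences in target
    return svd_rigid(source, corres)                   # output of this iteration
```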
In some alternative embodiments, as shown in fig. 9: (1) the CT and MRI images scanned before the operation are segmented and modeled to obtain model data of the patient's spinal surgical area, including bones, blood vessels, nerves and the target lesion area, and the model data are imported into the HoloLens operating system; (2) during the operation, the HoloLens is used, and its three-dimensional reconstruction module provides intraoperative three-dimensional model data of the region to be operated on; (3) the model data obtained in steps (1) and (2) are registered to obtain the transformation between the preoperative multi-modal image space and the intraoperative physical space of the patient; (4) once the preoperative-to-intraoperative spatial transformation is obtained, the important model data obtained by preoperative segmentation, such as the lesion, nerves and blood vessels, are mapped into the intraoperative physical space of the patient (that is, the real scene image) through the registration transformation; (5) the surgeon wears the HoloLens (mixed reality device) during the operation, and its mixed reality rendering function superimposes a visualized preoperative multi-tissue three-dimensional model of the spine onto the real surgical area of the patient, so that accurate surgery can be performed under mixed reality navigation.
In the solution of the alternative embodiments described above, the most important step is the registration between the preoperative and intraoperative data. For this purpose, a point cloud registration algorithm is introduced to obtain the transformation between the preoperative and intraoperative coordinate spaces. As shown in fig. 10, the essence of point cloud registration is to bring point cloud data acquired in different coordinate systems into a common coordinate system; the key is that after the point cloud data in one coordinate system is transformed into the other coordinate system through the coordinate transformation parameters R (rotation matrix) and T (translation vector), the distance between corresponding points is minimized. Taking the efficiency of the registration algorithm into account as well, the problem can be solved with a point cloud registration algorithm based on a deep learning framework. The registration of a pair of point clouds is pairwise registration: one point cloud data set is accurately registered to the other (target) data set by applying a 4×4 rigid body transformation matrix representing translation and rotation. The concrete procedure is as follows: first, key points are extracted from both data sets according to the same criterion; feature descriptors are then computed for all selected key points; the correspondences between the two data sets are then estimated from the similarity of the feature descriptors and their coordinate positions, giving a preliminary set of corresponding point pairs; assuming the data are noisy, the erroneous point pairs that would affect the registration are then removed; finally, the rigid body transformation is estimated from the remaining correct correspondences to complete the registration.
It should be noted that, in the key point extraction step, similar features must be found between the two sets of feature descriptors (from the source point cloud and the target point cloud) to determine the overlapping portions of the data before registration. Depending on the type of feature, different methods are used to search for correspondences; common strategies include brute-force search and k-d tree nearest-neighbor search, and this embodiment is not particularly limited in this respect.
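The sketch below illustrates the k-d tree nearest-neighbor strategy for finding correspondences between two sets of precomputed feature descriptors; the ratio test used to discard ambiguous matches is an added assumption and not part of the patent text.

```python
# Nearest-neighbor correspondence search between M x D and N x D descriptor arrays.
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc_src, desc_tgt, ratio=0.8):
    """k-d tree matching with a ratio test to prune ambiguous correspondences."""
    tree = cKDTree(desc_tgt)
    dist, idx = tree.query(desc_src, k=2)          # two nearest neighbors per source
    keep = dist[:, 0] < ratio * dist[:, 1]         # ambiguous matches are discarded
    src_idx = np.nonzero(keep)[0]
    return np.stack([src_idx, idx[keep, 0]], axis=1)   # pairs of (source, target)
```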
In the correspondence removal step, because of noise, not all estimated correspondences are correct, and incorrect correspondences have a negative influence on the estimation of the final rigid transformation matrix. Random sample consensus estimation is therefore used to remove the incorrect correspondences, and only a certain proportion of the correspondences is kept, which both improves the accuracy of the transformation matrix estimate and speeds up the registration.
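A minimal sketch of the random sample consensus rejection step is given below, assuming preliminary point pairs have already been established; the iteration count and inlier threshold are illustrative assumptions.

```python
# RANSAC rejection of erroneous correspondences before rigid transform estimation.
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rotation and translation from matched point pairs (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_correspondences(src, dst, n_iter=500, inlier_thresh=2.0, sample=3):
    """Return the inlier mask of the correspondence set (src[i] <-> dst[i])."""
    best_inliers = np.zeros(len(src), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        idx = rng.choice(len(src), sample, replace=False)
        R, t = rigid_from_pairs(src[idx], dst[idx])
        residual = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = residual < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers        # only this proportion of correspondences is kept
```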
It should be noted that, for the estimation of the transformation matrix, an error metric is first defined on the basis of the correspondences; the rigid body transformation is then estimated under the camera pose (motion estimation) while minimizing the error metric; the structure of the points is then optimized; the rigid transformation is used to rotate and translate the source point cloud into the coordinate system of the target, and an internal ICP loop is computed using all points, a subset of the points, or the key points; the process then iterates until the convergence criterion is met. The convergence criterion may be set according to the actual situation and is not specifically limited in this embodiment.
In the technical solution in the above embodiment, the following technical effects can be achieved:
(1) Multi-modal image fusion of preoperative spinal CT and MRI with intraoperative three-dimensional C-arm scans is used to build a three-dimensional visual model of the diseased region containing bones, muscles, nerves, blood vessels, intervertebral discs and pathological tissue; the nerve compression sites are presented and serve as radiofrequency treatment targets, providing a basis for guiding the radiofrequency tool bit to treat the nerve-compressing pathological tissue accurately during the operation.
(2) Through the collection and organization of large-scale clinical data on lumbar disc herniation patients, artificial intelligence analysis assists in determining the surgical indications, automatically identifies the lesion target, and calculates the optimal surgical path, so that the nerve compression can be released accurately by radiofrequency without damaging or destroying important neurovascular tissue or the original stable structure of the lumbar spine.
(3) A virtual holographic three-dimensional visual model of the lumbar spine is generated in the actual surgical area using mixed reality technology, displaying the herniated disc and nearby structures such as bone, nerves and blood vessels clearly in three dimensions within the visible surgical field, so that the operator understands more clearly the relative position and distance between the treatment target and the surrounding important structures. At the same time, automatic registration between the MR model and the patient's actual anatomical position is realized, and an optical navigation system tracks the patient's position and the dynamic position changes of the surgical instruments in real time, truly achieving accurate radiofrequency treatment of the compressed nerve's pathological tissue at the target.
(4) The complete three-dimensional visual surgical navigation system for the mixed reality spine is expected to be intelligent, precise, minimally invasive, safe and efficient, requiring a skin incision of only 1-2 mm and allowing the patient to walk and be discharged on the same day after the operation. It introduces minimally invasive surgery, digital surgical navigation, artificial intelligence, precision surgery and mixed reality technology into lumbar disc herniation surgery simultaneously, establishing a new technical platform for lumbar spine surgery.
Furthermore, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions for performing the above-described mixed reality spinal three-dimensional visualization method, for example, performing the above-described method steps S100 to S900 in fig. 3.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically include computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. The computer-readable storage medium may be nonvolatile or volatile.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit and scope of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.
Claims (6)
1. A three-dimensional visual surgical navigation system based on a mixed reality spine, comprising:
A mixed reality device comprising a first navigation unit;
A C-arm imaging device comprising a second navigation unit for acquiring radiographic image data of a patient's spine during a surgical procedure;
The optical surgery positioning navigator is used for establishing a three-dimensional space coordinate system according to the acquired coordinate information of the first navigation unit and the coordinate information of the second navigation unit in the process of scanning the spine of the patient, and dynamically tracking the three-dimensional space coordinate of the mixed reality equipment to obtain real-time coordinate information of the mixed reality equipment in the process of surgery;
The cross-modal image generation module is used for generating three-dimensional spine focus fusion image data and sending the three-dimensional spine focus fusion image data to the mixed reality equipment, wherein the three-dimensional spine focus fusion image data comprises a three-dimensional spine focus fusion image and position information in the three-dimensional spine focus fusion image, the three-dimensional spine focus fusion image is an image obtained by registering and fusing the ray image data and first image data, the first image data is an image obtained by carrying out three-dimensional reconstruction on a lumbar vertebra lesion part on a CT image and/or an MRI image of a spine of a patient acquired before an operation, and the position information is obtained according to coordinate information of the C-arm imaging equipment and the three-dimensional spine focus fusion image;
The mixed reality device is also used for acquiring a real scene image, and carrying out registration processing according to the real-time coordinate information, the position information, the real scene image and the three-dimensional spine focus fusion image to obtain a mixed reality three-dimensional image;
The cross-mode image generation module further comprises an image fusion module unit and a data transmission unit, wherein the image fusion module unit is used for performing vertebrae segmentation recognition processing on the CT image and/or the MRI image of the spine of the patient acquired before an operation to obtain the first image data;
The image fusion module unit is also used for carrying out symmetrical alignment and centroid alignment treatment on the radiographic image data of the patient spine and the first image data acquired in the operation process to obtain aligned primary fusion image data;
The image fusion module unit is also used for carrying out fusion processing on the primary fusion image data by utilizing a point cloud fine registration ICP algorithm to obtain a three-dimensional spine focus fusion image;
the data transmission unit is used for transmitting the three-dimensional spine focus fusion image data generated by the image fusion module unit to the mixed reality equipment;
The first image comprises a CT image and an MRI image, and the image fusion module unit is further used for performing first positioning processing on the CT image and the MRI image through vertebrae positioning networks respectively to obtain positioning point information corresponding to each vertebra of the CT image and positioning point information corresponding to each vertebra of the MRI image;
Dividing each vertebra in the CT image and the MRI image through the positioning point information respectively to obtain first vertebra image data of each vertebra in the CT image and second vertebra image data of each vertebra in the MRI image;
obtaining the CT image and the MRI image after vertebrae segmentation according to the first vertebra image data of each vertebra in the CT image and the second vertebra image data of each vertebra in the MRI image;
registering the CT image and the MRI image after vertebrae segmentation to obtain the first image data;
The heat map of each positioning point is regressed in the vertebra positioning network through a semantic segmentation network, and the heat map is generated by the following formula:
H_i(x) = exp(−||x − x_i||² / (2σ_i²));
wherein x_i is the centroid of the i-th vertebra and σ_i is a learnable parameter; the target heat map is obtained by the following formula:
H_i = H_LA,i ⊙ H_SC,i;
wherein H_LA is the initial heat map and H_SC is the heat map used for eliminating false light spots in H_LA, obtained with a larger convolution kernel scale that enlarges the receptive field of the features and extracts the target point from a global perspective; the target heat map is obtained by multiplying the two;
the objective function employs an L2 loss function between the predicted heat map H_i and the gold-standard heat map Ĥ_i:
L_heat = Σ_i ||H_i − Ĥ_i||²;
the positioning point takes the coordinates of the point with the maximum gray value in the heat map:
x̂_i = argmax_x H_i(x);
Extracting a local vertebra original gray-level image by cropping according to the coordinates of the obtained heat map points, taking the heat map and the gray-level image as the input of the segmentation network U-Net, and focusing on segmenting the vertebra where the positioning point is located, wherein the loss function adopts the cross entropy and DSC loss functions:
L_seg = L_CE + L_DSC.
2. The three-dimensional visual surgical navigation system based on a mixed reality spinal column of claim 1, wherein the first navigation unit comprises a first navigation frame and a plurality of infrared reflective spheres disposed on the first navigation frame, and the second navigation unit comprises a second navigation frame and a plurality of infrared reflective spheres disposed on the second navigation frame.
3. The three-dimensional visual surgical navigation system based on a mixed reality spinal column according to claim 1, wherein,
The mixed reality device is further used for carrying out point cloud registration processing on the real scene image and the three-dimensional spine focus fusion image to obtain a mixed reality spine three-dimensional image;
the mixed reality device is further used for obtaining a mixed reality three-dimensional image according to the real-time coordinate information, the position information and the mixed reality spine three-dimensional image.
4. The three-dimensional visual surgical navigation system based on the mixed reality spinal column according to claim 3, wherein
The mixed reality device is further configured to obtain first iteration output data of a point cloud registration algorithm model, source point cloud data and target point cloud data, where the source point cloud data is point cloud data extracted from a patient skin area image of the three-dimensional spine focus fusion image, and the target point cloud data is point cloud data extracted from a patient skin area image of the real scene image;
the mixed reality device is further configured to input the first iteration output data, the source point cloud data, and the target point cloud data to the point cloud registration algorithm model, so as to output second iteration output data through the point cloud registration algorithm model until the second iteration output data meets a preset convergence judgment standard, where the first iteration output data is last output data of the second iteration output data;
the point cloud registration algorithm model comprises a first rigid body transformation unit, a second rigid body transformation unit, a first feature extraction unit, a second feature extraction unit and a calculation matching matrix unit, wherein the first rigid body transformation unit is used for performing first rigid body transformation processing on the first iteration output data and the source point cloud data to obtain source point cloud data after rigid body transformation, the second rigid body transformation unit is used for performing second rigid body transformation processing on the source point cloud data after rigid body transformation and the target point cloud data to obtain target point cloud data after rigid body transformation, the first feature extraction unit is used for performing feature extraction on the source point cloud data after rigid body transformation to obtain source point cloud feature data, the second feature extraction unit is used for performing feature extraction on the target point cloud data to obtain target point cloud feature data, the calculation matching matrix unit is used for performing calculation processing on the target point cloud data after rigid body transformation, the source point cloud feature data and the target point cloud feature data to obtain a matching matrix, and the calculation matching matrix unit is further used for performing registration processing on the target point cloud data through the matching matrix to obtain the second iteration output data.
5. The three-dimensional visualization surgical navigation system based on the mixed reality spinal column according to claim 1, wherein the cross-modal image generation module further comprises a surgical path calculation unit, the surgical path calculation unit is used for performing recognition processing on first image data to obtain a nerve compression target point, and calculating a puncture path in the first image data according to the nerve compression target point.
6. The mixed reality spine-based three-dimensional visualization surgical navigation system of claim 1, wherein the image fusion module unit is further configured to:
extracting landmark points of the CT image and landmark points of the MRI image after vertebrae segmentation;
sequentially taking the same number of landmark points from the CT image and the MRI image;
sequentially aligning the landmark points of the CT image with the landmark points of the MRI image by centroid alignment to obtain an initial alignment image;
And performing rigid transformation and pyramid-based non-rigid registration processing on the initial alignment image to obtain the first image data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310450119.8A CN116492052B (en) | 2023-04-24 | 2023-04-24 | Three-dimensional visual operation navigation system based on mixed reality backbone |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116492052A CN116492052A (en) | 2023-07-28 |
CN116492052B true CN116492052B (en) | 2024-04-23 |
Family
ID=87322465
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310450119.8A Active CN116492052B (en) | 2023-04-24 | 2023-04-24 | Three-dimensional visual operation navigation system based on mixed reality backbone |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116492052B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116942317B (en) * | 2023-09-21 | 2023-12-26 | 中南大学 | Surgical navigation positioning system |
CN117598782B (en) * | 2023-09-28 | 2024-06-04 | 苏州盛星医疗器械有限公司 | Surgical navigation method, device, equipment and medium for percutaneous puncture surgery |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7280710B1 (en) * | 2002-05-24 | 2007-10-09 | Cleveland Clinic Foundation | Architecture for real-time 3D image registration |
CN106846330A (en) * | 2016-12-22 | 2017-06-13 | 浙江大学宁波理工学院 | Human liver's feature modeling and vascular pattern space normalizing method |
CN109925057A (en) * | 2019-04-29 | 2019-06-25 | 苏州大学 | A kind of minimally invasive spine surgical navigation methods and systems based on augmented reality |
CN109925058A (en) * | 2017-12-18 | 2019-06-25 | 吕海 | A kind of minimally invasive spinal surgery operation guiding system |
CN111932533A (en) * | 2020-09-22 | 2020-11-13 | 平安科技(深圳)有限公司 | Method, device, equipment and medium for positioning vertebrae by CT image |
CN112990373A (en) * | 2021-04-28 | 2021-06-18 | 四川大学 | Convolution twin point network blade profile splicing system based on multi-scale feature fusion |
CN215937645U (en) * | 2020-03-31 | 2022-03-04 | 吴昀效 | Novel mixed reality technique spinal surgery segment location device |
CN115105207A (en) * | 2022-06-28 | 2022-09-27 | 北京触幻科技有限公司 | Operation holographic navigation method and system based on mixed reality |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107595387B (en) * | 2017-07-28 | 2020-08-07 | 浙江大学 | Spine image generation system based on ultrasonic rubbing technology and spine operation navigation and positioning system |
US20210346093A1 (en) * | 2020-05-06 | 2021-11-11 | Warsaw Orthopedic, Inc. | Spinal surgery system and methods of use |
US20220370146A1 (en) * | 2021-05-19 | 2022-11-24 | Globus Medical, Inc. | Intraoperative alignment assessment system and method |
Non-Patent Citations (1)
Title |
---|
Three-dimensional registration technology for surgical navigation based on the ICP algorithm; Wang Junchen; Wang Tianmiao; Xu Yuan; Fang Liming; Journal of Beijing University of Aeronautics and Astronautics; 2009-04-15 (No. 04); 45-49 *
Also Published As
Publication number | Publication date |
---|---|
CN116492052A (en) | 2023-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11304680B2 (en) | Spinal image generation system based on ultrasonic rubbing technique and navigation positioning system for spinal surgery | |
CN116492052B (en) | Three-dimensional visual operation navigation system based on mixed reality backbone | |
CN111494009B (en) | Image registration method and device for surgical navigation and surgical navigation system | |
Gueziri et al. | The state-of-the-art in ultrasound-guided spine interventions | |
US20180153620A1 (en) | Spinal Navigation Method, Spinal Navigation System and Computer Program Product | |
US20150371390A1 (en) | Three-Dimensional Image Segmentation Based on a Two-Dimensional Image Information | |
US20230008386A1 (en) | Method for automatically planning a trajectory for a medical intervention | |
TWI836491B (en) | Method and navigation system for registering two-dimensional image data set with three-dimensional image data set of body of interest | |
EP2950735A1 (en) | Registration correction based on shift detection in image data | |
US20210330250A1 (en) | Clinical diagnosis and treatment planning system and methods of use | |
CN116421313A (en) | Augmented reality fusion method in navigation of lung tumor resection operation under thoracoscope | |
CN109771052B (en) | Three-dimensional image establishing method and system based on multi-view imaging and multi-polarization state imaging | |
CN115105204A (en) | Laparoscope augmented reality fusion display method | |
CN116570370B (en) | Spinal needle knife puncture navigation system | |
KR101988531B1 (en) | Navigation system for liver disease using augmented reality technology and method for organ image display | |
CN113907879A (en) | Personalized cervical endoscope positioning method and system | |
Maharjan et al. | A novel visualization system of using augmented reality in knee replacement surgery: Enhanced bidirectional maximum correntropy algorithm | |
US11564767B2 (en) | Clinical diagnosis and treatment planning system and methods of use | |
Naik et al. | Feature-based registration framework for pedicle screw trajectory registration between multimodal images | |
Sun | A Review of 3D-2D Registration Methods and Applications based on Medical Images | |
EP3800617B1 (en) | A computer-implemented method for registering low dimensional images with a high dimensional image, a method for training an artificial neural network useful in finding landmarks in low dimensional images, a computer program and a system for registering low dimensional images with a high dimensional image | |
US11430203B2 (en) | Computer-implemented method for registering low dimensional images with a high dimensional image, a method for training an aritificial neural network useful in finding landmarks in low dimensional images, a computer program and a system for registering low dimensional images with a high dimensional image | |
CN117836776A (en) | Imaging during medical procedures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||