CN116958132B - Surgical navigation system based on visual analysis - Google Patents

Surgical navigation system based on visual analysis

Info

Publication number
CN116958132B
CN116958132B (application CN202311202306.0A)
Authority
CN
China
Prior art keywords
image
data
mri
representing
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311202306.0A
Other languages
Chinese (zh)
Other versions
CN116958132A (en)
Inventor
熊力
张江杰
罗建书
毛彬睿
马程远
林良武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN202311202306.0A
Publication of CN116958132A
Application granted
Publication of CN116958132B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2065Tracking using image or pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computer Graphics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Robotics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a surgical navigation system based on visual analysis, comprising an image acquisition module, a data preprocessing module, a data classification module, a registration module, a three-dimensional reconstruction module and a tracking feedback module. The invention belongs to the technical field of surgical navigation and addresses the technical problems that a three-dimensional model constructed from single-modality medical images has poor image quality and provides only a single viewing angle, and therefore cannot comprehensively reflect the focus position.

Description

Surgical navigation system based on visual analysis
Technical Field
The invention belongs to the technical field of surgical navigation, and particularly relates to a surgical navigation system based on visual analysis.
Background
A surgical navigation system is a computer software system for improving the accuracy, safety and precision of surgery. Existing surgical navigation systems based on visual analysis suffer from several technical problems: the image quality available when constructing a three-dimensional model from medical images is poor; a three-dimensional model constructed from single-modality medical images provides only a single viewing angle and therefore cannot comprehensively reflect the focus position; class imbalance, in which common diseases account for most samples while rare diseases account for only a small fraction, leads to inaccurate system judgments; and large differences between the floating image and the reference image easily make the registration method inaccurate, reducing the overall accuracy of surgical navigation.
Disclosure of Invention
In view of this situation, and in order to overcome the defects of the prior art, the invention provides a surgical navigation system based on visual analysis. Aiming at the technical problem of poor image quality when constructing a three-dimensional model from medical images, the invention performs adaptive non-local mean filtering and denoising on the images and can provide clear images. Aiming at the technical problem that a three-dimensional model constructed from single-modality medical images provides only a single viewing angle and cannot comprehensively reflect the focus position, the invention acquires multi-modality medical images to construct models with different viewing angles and physical information. Aiming at the technical problem that class imbalance, in which common diseases account for most samples while rare diseases account for only a few, makes the system judgment inaccurate, the invention classifies with a weighted decision tree, judging whether an image belongs to a common or a rare disease and saving positioning and navigation time. Aiming at the technical problem that large differences between the floating image and the reference image easily make the registration method inaccurate and reduce the overall accuracy of surgical navigation, the invention adopts a coarse-to-fine medical registration method combining progressive images with the speeded-up robust features (SURF) algorithm.
The technical scheme adopted by the invention is as follows: the invention provides a surgical navigation system based on visual analysis, comprising an image acquisition module, a data preprocessing module, a data classification module, a registration module, a three-dimensional reconstruction module and a tracking feedback module. The image acquisition module acquires an original MRI image, an original CT image and a real-time video of the patient, sends the acquired original MRI image and original CT image to the data preprocessing module, and sends the real-time video to the tracking feedback module. The data preprocessing module receives the original MRI image and the original CT image from the image acquisition module, performs denoising and filtering using an adaptive non-local mean filtering algorithm, and sends the denoised images to the data classification module and the registration module. The data classification module receives the denoised images from the data preprocessing module, performs feature extraction and data classification, and sends the classified data to the registration module. The registration module receives the images from the data preprocessing module and the classified data from the data classification module, performs feature extraction and registration on the MRI image and the CT image, and sends the registered data to the three-dimensional reconstruction module. The three-dimensional reconstruction module receives the registered data from the registration module, constructs a three-dimensional reconstruction model from the data, and sends the constructed model to the tracking feedback module. The tracking feedback module receives the real-time video from the image acquisition module and the model from the three-dimensional reconstruction module, and realizes real-time tracking and feedback. This pipeline is sketched in code below.
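The data flow between the modules can be summarized in the following minimal Python sketch. All class and method names are illustrative assumptions rather than part of the invention, and each method body is a stub standing in for the algorithm detailed in the corresponding section below.

```python
# Minimal sketch of the module pipeline, assuming NumPy arrays as images.
# Only the data flow follows the text: acquisition -> preprocessing ->
# classification -> registration -> 3D reconstruction -> tracking feedback.
import numpy as np

class SurgicalNavigationPipeline:
    def run(self, mri_raw, ct_raw, video_frame):
        i_mri = self.denoise(mri_raw)                 # data preprocessing module
        i_ct = self.denoise(ct_raw)
        label = self.classify(i_mri, i_ct)            # data classification module
        registered = self.register(i_mri, i_ct, label)  # registration module
        model = self.reconstruct(registered)          # 3D reconstruction module
        return self.track(model, video_frame)         # tracking feedback module

    def denoise(self, img):        # adaptive non-local mean filtering (stub)
        return img
    def classify(self, mri, ct):   # CNN features + weighted decision tree (stub)
        return "common"
    def register(self, mri, ct, label):   # progressive images + SURF (stub)
        return (mri + ct) / 2
    def reconstruct(self, registered):    # surface / curved-surface model (stub)
        return registered
    def track(self, model, frame):        # compare model with live frame (stub)
        return float(np.abs(model - frame).mean())

pipeline = SurgicalNavigationPipeline()
print(pipeline.run(np.ones((8, 8)), np.zeros((8, 8)), np.full((8, 8), 0.5)))
```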
Further, the image acquisition module acquires an original MRI image, an original CT image and a real-time video of the patient.
Further, the data preprocessing module performs denoising and filtering using an adaptive non-local mean filtering algorithm: the CT image is denoised and filtered to obtain the filtered, denoised CT image I_CT, the MRI image is denoised and filtered to obtain the filtered, denoised MRI image I_MRI, and the brightness and contrast of the CT image are adjusted to match those of the MRI image.
Further, the data classification module performs data classification using a convolutional neural network algorithm and a weighted decision tree algorithm: the filtered, denoised CT image I_CT and the filtered, denoised MRI image I_MRI are each passed through a convolutional neural network for feature extraction, and the extracted features are then classified with a weighted decision tree. The steps of feature extraction and data classification comprise:
Feature extraction of the filtered, denoised CT image I_CT through the convolutional neural network, which specifically comprises the following steps:

Constructing a convolution layer, specifically a 3×3 convolution module, and inputting the filtered, denoised CT image I_CT into the convolution layer to obtain an output feature map O_CT; the convolution is computed as

C_{l+1}(i, j) = (C_l ⊗ w_{l+1})(i, j) + b

where b denotes the convolution-layer bias, C_l the input to the convolution layer, C_{l+1} its output, C_{l+1}(i, j) the output value at the position (i, j) of a center pixel i and its similar center pixel j, w_{l+1} the convolution-layer weights, and ⊗ the convolution of the input with the weights;
Applying a ReLU activation function to the convolution-layer output to obtain activated data; the nonlinear operation is

f(O_CT) = max(0, O_CT)

that is, when an input value x is greater than 0, x is output and the corresponding data in the feature map are activated; when x is less than or equal to 0, 0 is output and the data are not activated;
Constructing a pooling layer, specifically sending the activated data into the pooling layer, selecting suitable features through the pooling operation, and filtering information to obtain a pooled feature map F_CT; the pooling operation is

F_CT(i, j) = [ Σ_{c=0}^{f-1} Σ_{d=0}^{f-1} O_CT(s_0·i + c, s_0·j + d)^a ]^(1/a)

where F_CT(i, j) denotes the output value of the pooling layer, f the size of the pooling kernel, c the row offset and d the column offset within the pooling window, s_0 the stride, and a a pre-specified parameter; as a tends to infinity the operation returns the maximum pooling value; i denotes a center pixel and j a similar center pixel of the center pixel;
Constructing a fully connected layer, specifically flattening the feature map so that it loses its spatial topology, and outputting the feature vector of the CT image at the fully connected layer;

Repeating the above operations on the filtered, denoised MRI image I_MRI to obtain its feature vector after feature extraction;
the step of classifying data by adopting a weighted decision tree algorithm comprises the following steps:
collecting rare diseases in CT images and MRI images as a sample data set, presetting the sample data set as Q, wherein the data comprises detected diseases and corresponding labels thereof, the detected diseases are feature vectors, and the corresponding labels are various rare diseases;
Counting the number of samples of each class in the data set Q and calculating the sample weight as

w = 1 / t, with t = p / |Q|

where w denotes the sample weight, p the number of samples in a class, and t the proportion of that class among all samples, so that rare classes receive larger weights;
Constructing a decision tree, which comprises the following steps:

Dividing the data set Q on a characteristic variable to obtain subset Q_1 and subset Q_2;

Calculating the Gini index of subset Q_1, which is used to measure the purity of a node:

G(Q_1) = 1 - Σ_{t=1}^{T} (w_t · p_t)²

where G(Q_1) is the Gini index of subset Q_1, T the number of class labels, p_t the proportion of the t-th class label, and w_t the sample weight of that class;
Calculating the Gini index of subset Q_2 in the same way;
Calculating the total Gini index, selecting the characteristic variable and combination mode with the minimum conditional Gini index for division, and continuing to split the subsets until every subset belongs to a single category or can no longer be divided:

G(Q, a) = (|Q_1| / |Q|) · G(Q_1) + (|Q_2| / |Q|) · G(Q_2)

where G(Q, a) is the Gini index of the data set Q when the division condition is a, |Q| the size of the set Q, |Q_1| the size of subset Q_1, and |Q_2| the size of subset Q_2;
Randomly acquiring MRI and CT images to construct a test data set; the test data traverse the decision tree to judge whether the disease is a rare disease.
Further, the registration module adopts a coarse-to-fine medical registration method combining progressive images with the speeded-up robust features (SURF) algorithm, comprising the following steps:
Selecting the MRI image of a leaf node as the floating image and the CT image as the reference image, and averaging each pixel of the floating image with the corresponding pixel of the reference image to obtain a progressive image:

M_0(x, y) = ( E(x, y) + F(x, y) ) / 2

where M_0(x, y) denotes the pixel value of the progressive image at coordinates (x, y), E(x, y) the pixel value of the floating image at (x, y), and F(x, y) the pixel value of the reference image at (x, y);
Taking the generated progressive image M_0(x, y) as the new reference image and repeating the calculation with the floating image E(x, y), progressive images M_1, M_2, …, M_i are generated in turn;
Calculating the reference-image and floating-image pixel values with an integral image:

I(x, y) = Σ_{x'≤x} Σ_{y'≤y} i(x', y')

where I(x, y) denotes the pixel value obtained by integrating the image, x' the lateral position variable, y' the longitudinal position variable, and i(x', y') the pixel value at position (x', y');
Extracting feature points from the calculated pixel values using the matrix of second partial derivatives:

H(x, y, σ) = [ K_xx(x, y, σ)   K_xy(x, y, σ) ]
             [ K_xy(x, y, σ)   K_yy(x, y, σ) ]

where H(x, y, σ) denotes the feature-point data, σ the scale factor, K_xx(x, y, σ) the second derivative of the image in the x direction at coordinates (x, y), K_yy(x, y, σ) the second derivative in the y direction, and K_xy(x, y, σ) the mixed second derivative in the x and y directions;
Calculating the Gaussian kernel function used to extract the feature points:

G(x, y, σ) = ( 1 / (2πσ²) ) · exp( -(x² + y²) / (2σ²) )

where G(x, y, σ) denotes the Gaussian kernel function, σ the scale factor, exp the exponential operation, and x and y the abscissa and ordinate of a pixel point;
Establishing an image feature-point database and calculating the Euclidean distance between feature points:

D = sqrt( Σ_t ( x_t - x'_t )² )

where D denotes the Euclidean distance, x_t a feature point of the floating image, and x'_t the corresponding feature point of the intermediate progressive image;
Carrying out affine transformation on the feature-point data in the image feature-point database and applying the obtained parameters to the floating image to obtain a coarse registration result; the affine transformation is

[ x' ]   [ R_00  R_01 ] [ x ]   [ T_x ]
[ y' ] = [ R_10  R_11 ] [ y ] + [ T_y ]

where R_00, R_01, R_10 and R_11 denote the transformation parameters, T_x and T_y the displacement parameters, x and y the coordinates of a matching point in the reference image, and x' and y' the coordinates of the corresponding matching point in the floating image;
repeating the operation on the reference image and the initial coarse registration image to obtain a fine registration result.
Further, the three-dimensional reconstruction module performs three-dimensional reconstruction using an algorithm that constructs a surface model and a curved-surface model, comprising the following steps:
Performing surface interpolation along the x-axis and y-axis directions and calculating the plane coordinates of a point (a, b) in the image:

x_b = b / (m - 1),  y_a = a / (n - 1)

where x_b denotes the abscissa and y_a the ordinate of the corresponding interpolated point, m the number of points in one row of the grid, n the number of points in one column of the grid, and a the abscissa of the plane coordinate;
Calculating the depth value at the plane coordinates (a, b):

z_ab = a_0 + a_1·x_b + a_2·y_a

where z_ab denotes the depth value at the plane point (x_b, y_a), a_0, a_1 and a_2 the parameters of the plane, and x_b and y_a the abscissa and ordinate of the corresponding interpolated point;
Modeling the surface from the three directions x, y and z by a function of two variables; the surface model takes the form

f(x, y, z) = a_0 + a_1·x + a_2·y + a_3·x² + a_4·x·y + a_5·y² - z

where f(x, y, z) denotes the modeling function and a_0, a_1, a_2, a_3, a_4 and a_5 the parameters of the function;
Regressing the constructed surface model using the least median of squares method:

M = min_c med_a ( z_a - f(x_b, y_a) )²

where M denotes the least median of squares, med the median, z_a the depth value at a point a of the plane, c the coefficients of the function, and f(x_b, y_a) the value of the function at the corresponding point (x_b, y_a);
Reconstructing the curved-surface model as a grid-function expansion:

f(x, y) = Σ_{k=0}^{m-1} a_k·g_k(x, y)

where f(·) denotes the modeling function, x and y the abscissa and ordinate, g_k the grid basis functions, and a_0, a_1, a_2, a_3, a_4, …, a_{m-1} the parameters of the grid function.
Further, the tracking feedback module compares the three-dimensional reconstruction result with the image captured in real time, updates the extracted feature information, calculates the position change, and feeds the result back to the related equipment and the display screen.
Compared with the prior art, the invention has the following beneficial effects:
(1) Aiming at the technical problem of poor image quality when constructing a three-dimensional model from medical images, the invention performs adaptive non-local mean filtering and denoising on the images and can provide clear images.
(2) Aiming at the technical problem that a three-dimensional model constructed from single-modality medical images provides only a single viewing angle and cannot comprehensively reflect the focus position, the invention acquires multi-modality medical images to construct models with different viewing angles and physical information.
(3) Aiming at the technical problem that class imbalance, in which common diseases account for most samples while rare diseases account for only a few, makes the system judgment inaccurate, the invention classifies with a weighted decision tree, judging whether an image belongs to a common or a rare disease and saving positioning and navigation time.
(4) Aiming at the technical problem that large differences between the floating image and the reference image easily make the registration method inaccurate and reduce the overall accuracy of surgical navigation, the invention adopts a coarse-to-fine medical registration method combining progressive images with the SURF algorithm to improve registration accuracy.
Drawings
FIG. 1 is a system block diagram of a visual analysis-based surgical navigational positioning method provided by the invention;
FIG. 2 is a flowchart illustrating steps performed by the data classification module for classifying data using a convolutional neural network algorithm and a weighted decision tree algorithm;
FIG. 3 is a flow chart of steps of a medical registration method from coarse to fine in which a registration module employs a combination of progressive images and an accelerated robust feature algorithm;
fig. 4 is a flow chart illustrating the steps of the three-dimensional reconstruction module performing three-dimensional reconstruction using an algorithm for constructing a surface model and a curved surface model.
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and fully with reference to the accompanying drawings. It is evident that the described embodiments are only some, rather than all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate orientation or positional relationships based on those shown in the drawings, merely to facilitate description of the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
Referring to fig. 1, the surgical navigation system based on visual analysis provided by the invention comprises an image acquisition module, a data preprocessing module, a data classification module, a registration module, a three-dimensional reconstruction module and a tracking feedback module. The image acquisition module acquires an original MRI image, an original CT image and a real-time video of the patient, sends the acquired original MRI image and original CT image to the data preprocessing module, and sends the real-time video to the tracking feedback module. The data preprocessing module receives the original MRI image and the original CT image from the image acquisition module, performs denoising and filtering using an adaptive non-local mean filtering algorithm, and sends the denoised images to the data classification module and the registration module. The data classification module receives the denoised images from the data preprocessing module, performs feature extraction and data classification, and sends the classified data to the registration module. The registration module receives the images from the data preprocessing module and the classified data from the data classification module, performs feature extraction and registration on the MRI image and the CT image, and sends the registered data to the three-dimensional reconstruction module. The three-dimensional reconstruction module receives the registered data from the registration module, constructs a three-dimensional reconstruction model from the data, and sends the constructed model to the tracking feedback module. The tracking feedback module receives the real-time video from the image acquisition module and the model from the three-dimensional reconstruction module, and realizes real-time tracking and feedback.
Referring to fig. 1, the image acquisition module acquires an original MRI image, an original CT image, and a real-time video of a patient according to the above embodiment.
Referring to fig. 1, in this embodiment, based on the foregoing embodiment, the step of denoising and filtering by the data preprocessing module using an adaptive non-local mean filtering algorithm includes:
Setting the original CT image of the patient as M, taking any pixel in the CT image as a center pixel i and a similar pixel of the center pixel as a similar center pixel j, and defining the noisy image as

T(i) = R(i) + N(i)

where T(i) denotes the noisy image, N(i) additive noise with mean 0 and variance σ², R(i) the original image uncontaminated by noise, and i any pixel point in the image M;
Calculating the weight W(i, j) of the noisy image:

W(i, j) = exp( -‖N_i - N_j‖²_{2,a} / m² )

where N_i denotes the image block centered on pixel i, N_j the image block centered on pixel j, ‖·‖²_{2,a} the Gaussian-weighted Euclidean distance between the two blocks, m the filtering parameter, and exp the exponential operation;
Calculating the normalization constant Z(i) of the noisy-image weights:

Z(i) = Σ_{j∈M} W(i, j)

where Z(i) denotes the normalization constant of the weights, W(i, j) the weight, i a center pixel, j a similar pixel of the center pixel i, and M the original CT image;
Calculating the non-local mean estimate X(i) of the noisy image:

X(i) = ( 1 / Z(i) ) · Σ_{j∈M} W(i, j) · T(j)

where X(i) denotes the adaptive non-local mean estimate, i a center pixel, j a similar center pixel of the center pixel, W(i, j) the weight, and T(j) the noisy image; processing the image in this way yields the filtered, denoised CT image I_CT;
Repeating the above operations on the MRI image to obtain the filtered, denoised MRI image I_MRI;

The brightness and contrast of the CT image are adjusted to match those of the MRI image.
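For illustration, the adaptive non-local mean filtering step can be sketched in Python as follows. This is a minimal sketch assuming a grayscale floating-point image; the patch size, search window and filtering parameter m are illustrative choices, not values fixed by the invention.

```python
# Sketch of non-local means denoising: X(i) = (1/Z(i)) * sum_j W(i,j) * T(j).
import numpy as np

def nlm_denoise(noisy, patch=3, search=7, m=0.1):
    half_p, half_s = patch // 2, search // 2
    padded = np.pad(noisy, half_p + half_s, mode="reflect")
    out = np.zeros_like(noisy)
    h, w = noisy.shape
    for y in range(h):
        for x in range(w):
            cy, cx = y + half_p + half_s, x + half_p + half_s
            ref = padded[cy - half_p:cy + half_p + 1,
                         cx - half_p:cx + half_p + 1]
            weights, values = [], []
            for dy in range(-half_s, half_s + 1):
                for dx in range(-half_s, half_s + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = padded[ny - half_p:ny + half_p + 1,
                                  nx - half_p:nx + half_p + 1]
                    d2 = np.mean((ref - cand) ** 2)       # ||N_i - N_j||^2
                    weights.append(np.exp(-d2 / m ** 2))  # W(i, j)
                    values.append(padded[ny, nx])         # T(j)
            weights = np.asarray(weights)
            out[y, x] = np.dot(weights, values) / weights.sum()  # X(i)
    return out

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(0.0, 0.1, clean.shape)   # T(i) = R(i) + N(i)
denoised = nlm_denoise(noisy)
print("residual std before/after:",
      np.std(noisy - clean), np.std(denoised - clean))
```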
By executing the above operations, the invention performs adaptive non-local mean filtering and denoising on the images, addressing the technical problem of poor image quality when constructing a three-dimensional model from medical images, and can provide clear images.
In a fourth embodiment, referring to fig. 1 and fig. 2, based on the above embodiment, the data classification module performs data classification using a convolutional neural network algorithm and a weighted decision tree algorithm: the filtered, denoised CT image I_CT and the filtered, denoised MRI image I_MRI are each passed through a convolutional neural network for feature extraction, and the extracted features are then classified with a weighted decision tree. The steps of feature extraction and data classification comprise:
Feature extraction of the filtered, denoised CT image I_CT through the convolutional neural network, which specifically comprises the following steps:

Constructing a convolution layer, specifically a 3×3 convolution module, and inputting the filtered, denoised CT image I_CT into the convolution layer to obtain an output feature map O_CT; the convolution is computed as

C_{l+1}(i, j) = (C_l ⊗ w_{l+1})(i, j) + b

where b denotes the convolution-layer bias, C_l the input to the convolution layer, C_{l+1} its output, C_{l+1}(i, j) the output value at the position (i, j) of a center pixel i and its similar center pixel j, w_{l+1} the convolution-layer weights, and ⊗ the convolution of the input with the weights;
Applying a ReLU activation function to the convolution-layer output to obtain activated data; the nonlinear operation is

f(O_CT) = max(0, O_CT)

that is, when an input value x is greater than 0, x is output and the corresponding data in the feature map are activated; when x is less than or equal to 0, 0 is output and the data are not activated;
Constructing a pooling layer, specifically sending the activated data into the pooling layer, selecting suitable features through the pooling operation, and filtering information to obtain a pooled feature map F_CT; the pooling operation is

F_CT(i, j) = [ Σ_{c=0}^{f-1} Σ_{d=0}^{f-1} O_CT(s_0·i + c, s_0·j + d)^a ]^(1/a)

where F_CT(i, j) denotes the output value of the pooling layer, f the size of the pooling kernel, c the row offset and d the column offset within the pooling window, s_0 the stride, and a a pre-specified parameter; as a tends to infinity the operation returns the maximum pooling value; i denotes a center pixel and j a similar center pixel of the center pixel;
Constructing a fully connected layer, specifically flattening the feature map so that it loses its spatial topology, and outputting the feature vector of the CT image at the fully connected layer;

Repeating the above operations on the filtered, denoised MRI image I_MRI to obtain its feature vector after feature extraction;
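For illustration, the feature extractor described above (a 3×3 convolution layer, ReLU activation, max pooling, flattening and a fully connected layer) can be sketched in PyTorch as follows. The channel count, feature dimension and input size are illustrative assumptions, not values fixed by the invention.

```python
# Sketch of the CNN feature extractor: conv -> ReLU -> pool -> flatten -> FC.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)  # C_{l+1} = C_l (*) w + b
        self.act = nn.ReLU()                                    # f(O_CT) = max(0, x)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)       # a -> infinity case
        self.flatten = nn.Flatten()                             # drop spatial topology
        self.fc = nn.Linear(16 * 64 * 64, feat_dim)             # feature vector

    def forward(self, x):
        return self.fc(self.flatten(self.pool(self.act(self.conv(x)))))

# Usage on a denoised 128x128 slice (batch of 1, single channel):
extractor = FeatureExtractor()
i_ct = torch.randn(1, 1, 128, 128)   # stand-in for the denoised CT image I_CT
features = extractor(i_ct)
print(features.shape)                # torch.Size([1, 128])
```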
the step of classifying data by adopting a weighted decision tree algorithm comprises the following steps:
collecting rare diseases in CT images and MRI images as a sample data set, presetting the sample data set as Q, wherein the data comprises detected diseases and corresponding labels thereof, the detected diseases are feature vectors, and the corresponding labels are various rare diseases;
Counting the number of samples of each class in the data set Q and calculating the sample weight as

w = 1 / t, with t = p / |Q|

where w denotes the sample weight, p the number of samples in a class, and t the proportion of that class among all samples, so that rare classes receive larger weights;
Constructing a decision tree, which comprises the following steps:

Dividing the data set Q on a characteristic variable to obtain subset Q_1 and subset Q_2;

Calculating the Gini index of subset Q_1, which is used to measure the purity of a node:

G(Q_1) = 1 - Σ_{t=1}^{T} (w_t · p_t)²

where G(Q_1) is the Gini index of subset Q_1, T the number of class labels, p_t the proportion of the t-th class label, and w_t the sample weight of that class;
Calculating the Gini index of subset Q_2 in the same way;
Calculating the total Gini index, selecting the characteristic variable and combination mode with the minimum conditional Gini index for division, and continuing to split the subsets until every subset belongs to a single category or can no longer be divided:

G(Q, a) = (|Q_1| / |Q|) · G(Q_1) + (|Q_2| / |Q|) · G(Q_2)

where G(Q, a) is the Gini index of the data set Q when the division condition is a, |Q| the size of the set Q, |Q_1| the size of subset Q_1, and |Q_2| the size of subset Q_2;
Randomly acquiring MRI and CT images to construct a test data set; the test data traverse the decision tree to judge whether the disease is a rare disease.
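For illustration, the weighted Gini computation and split scoring can be sketched as follows. The inverse-frequency weight w = 1/t is one plausible reading of the sample weighting above, and the toy labels merely mimic the common/rare class imbalance.

```python
# Sketch of weighted Gini index and split selection for the weighted decision tree.
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: w = 1/t with t = p/|Q| (assumed reading)."""
    n = len(labels)
    return {c: n / p for c, p in Counter(labels).items()}

def weighted_gini(labels, weights):
    """G = 1 - sum over classes of (weighted class proportion)^2."""
    total = sum(weights[y] for y in labels)
    g = 1.0
    for c, p in Counter(labels).items():
        g -= (p * weights[c] / total) ** 2
    return g

def split_gini(left, right, weights):
    """G(Q, a) = |Q1|/|Q| * G(Q1) + |Q2|/|Q| * G(Q2)."""
    n = len(left) + len(right)
    return (len(left) / n) * weighted_gini(left, weights) \
         + (len(right) / n) * weighted_gini(right, weights)

labels = ["common"] * 90 + ["rare"] * 10   # imbalanced toy data set Q
w = class_weights(labels)                  # common -> ~1.11, rare -> 10.0
left, right = labels[:85], labels[85:]     # candidate split isolating rare cases
print("weights:", w)
print("split Gini:", round(split_gini(left, right, w), 4))
```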
By executing the above operations, aiming at the problem that class imbalance, in which common diseases account for most samples while rare diseases account for only a few, makes the judgment inaccurate, the invention classifies with a weighted decision tree, judging whether an image belongs to a common or a rare disease and saving positioning and navigation time.
In a fifth embodiment, referring to fig. 1 and fig. 3, based on the foregoing embodiment, the registration module adopts a coarse-to-fine medical registration method combining progressive images with the speeded-up robust features (SURF) algorithm, comprising the following steps:
Selecting the MRI image of a leaf node as the floating image and the CT image as the reference image, and averaging each pixel of the floating image with the corresponding pixel of the reference image to obtain a progressive image:

M_0(x, y) = ( E(x, y) + F(x, y) ) / 2

where M_0(x, y) denotes the pixel value of the progressive image at coordinates (x, y), E(x, y) the pixel value of the floating image at (x, y), and F(x, y) the pixel value of the reference image at (x, y);
generating progressive image M 0 (x, y) as a reference image and a floating image F (x, y) by repeating the calculation, continuing to generate M 1 ,M 2 ,…,M i
Calculating the reference-image and floating-image pixel values with an integral image:

I(x, y) = Σ_{x'≤x} Σ_{y'≤y} i(x', y')

where I(x, y) denotes the pixel value obtained by integrating the image, x' the lateral position variable, y' the longitudinal position variable, and i(x', y') the pixel value at position (x', y');
Extracting feature points from the calculated pixel values using the matrix of second partial derivatives:

H(x, y, σ) = [ K_xx(x, y, σ)   K_xy(x, y, σ) ]
             [ K_xy(x, y, σ)   K_yy(x, y, σ) ]

where H(x, y, σ) denotes the feature-point data, σ the scale factor, K_xx(x, y, σ) the second derivative of the image in the x direction at coordinates (x, y), K_yy(x, y, σ) the second derivative in the y direction, and K_xy(x, y, σ) the mixed second derivative in the x and y directions;
Calculating the Gaussian kernel function used to extract the feature points:

G(x, y, σ) = ( 1 / (2πσ²) ) · exp( -(x² + y²) / (2σ²) )

where G(x, y, σ) denotes the Gaussian kernel function, σ the scale factor, exp the exponential operation, and x and y the abscissa and ordinate of a pixel point;
Establishing an image feature-point database and calculating the Euclidean distance between feature points:

D = sqrt( Σ_t ( x_t - x'_t )² )

where D denotes the Euclidean distance, x_t a feature point of the floating image, and x'_t the corresponding feature point of the intermediate progressive image;
Carrying out affine transformation on the feature-point data in the image feature-point database and applying the obtained parameters to the floating image to obtain a coarse registration result; the affine transformation is

[ x' ]   [ R_00  R_01 ] [ x ]   [ T_x ]
[ y' ] = [ R_10  R_11 ] [ y ] + [ T_y ]

where R_00, R_01, R_10 and R_11 denote the transformation parameters, T_x and T_y the displacement parameters, x and y the coordinates of a matching point in the reference image, and x' and y' the coordinates of the corresponding matching point in the floating image;
repeating the operation on the reference image and the initial coarse registration image to obtain a fine registration result.
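For illustration, the coarse-registration step (matching feature points by Euclidean distance and estimating the affine parameters R and T) can be sketched as follows. The point sets are toy assumptions standing in for the feature points detected above, and a least-squares solve stands in for the parameter estimation.

```python
# Sketch of Euclidean-distance matching and affine parameter estimation.
import numpy as np

def match_points(feat_float, feat_ref):
    """Pair each floating feature with the reference feature at minimum D."""
    d = np.linalg.norm(feat_float[:, None, :] - feat_ref[None, :, :], axis=2)
    return d.argmin(axis=1)

def estimate_affine(src, dst):
    """Least-squares fit of dst ~ [R | T] @ [x, y, 1]."""
    ones = np.ones((len(src), 1))
    coef, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
    return coef.T   # rows: [R00, R01, Tx] and [R10, R11, Ty]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_A = np.array([[1.0, 0.1, 2.0], [-0.1, 1.0, 3.0]])
dst = (true_A @ np.hstack([src, np.ones((4, 1))]).T).T

pairs = match_points(dst, dst)            # here each point matches itself
print(np.round(estimate_affine(src, dst[pairs]), 3))   # recovers true_A
```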
By executing the above operations, aiming at the problem that large differences between the floating image and the reference image often make the registration method inaccurate, the scheme adopts a coarse-to-fine medical registration method using progressive images and the SURF algorithm to improve registration accuracy.
In a sixth embodiment, referring to fig. 1 and fig. 4, based on the above embodiment, the three-dimensional reconstruction module performs three-dimensional reconstruction using an algorithm that constructs a surface model and a curved-surface model, comprising the following steps:
Performing surface interpolation along the x-axis and y-axis directions and calculating the plane coordinates of a point (a, b) in the image:

x_b = b / (m - 1),  y_a = a / (n - 1)

where x_b denotes the abscissa and y_a the ordinate of the corresponding interpolated point, m the number of points in one row of the grid, n the number of points in one column of the grid, and a the abscissa of the plane coordinate;
Calculating the depth value at the plane coordinates (a, b):

z_ab = a_0 + a_1·x_b + a_2·y_a

where z_ab denotes the depth value at the plane point (x_b, y_a), a_0, a_1 and a_2 the parameters of the plane, and x_b and y_a the abscissa and ordinate of the corresponding interpolated point;
Modeling the surface from the three directions x, y and z by a function of two variables; the surface model takes the form

f(x, y, z) = a_0 + a_1·x + a_2·y + a_3·x² + a_4·x·y + a_5·y² - z

where f(x, y, z) denotes the modeling function and a_0, a_1, a_2, a_3, a_4 and a_5 the parameters of the function;
Regressing the constructed surface model using the least median of squares method:

M = min_c med_a ( z_a - f(x_b, y_a) )²

where M denotes the least median of squares, med the median, z_a the depth value at a point a of the plane, c the coefficients of the function, and f(x_b, y_a) the value of the function at the corresponding point (x_b, y_a);
Reconstructing the curved-surface model as a grid-function expansion:

f(x, y) = Σ_{k=0}^{m-1} a_k·g_k(x, y)

where f(·) denotes the modeling function, x and y the abscissa and ordinate, g_k the grid basis functions, and a_0, a_1, a_2, a_3, a_4, …, a_{m-1} the parameters of the grid function.
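For illustration, the plane fit and the least-median-of-squares regression can be sketched as follows, using toy grid data; candidate planes are fitted to random minimal subsets and the one with the smallest median squared residual is kept.

```python
# Sketch of plane fitting with a least-median-of-squares (LMedS) criterion.
import numpy as np

def fit_plane(x, y, z):
    """Least-squares estimate of the plane z = a0 + a1*x + a2*y."""
    A = np.column_stack([np.ones_like(x), x, y])
    params, *_ = np.linalg.lstsq(A, z, rcond=None)
    return params

def median_of_squares(params, x, y, z):
    """med (z_a - f(x_b, y_a))^2 : the LMedS score of a candidate plane."""
    a0, a1, a2 = params
    return np.median((z - (a0 + a1 * x + a2 * y)) ** 2)

rng = np.random.default_rng(1)
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
x, y = xs.ravel(), ys.ravel()
z = 2.0 + 0.5 * x - 0.3 * y + rng.normal(0, 0.01, x.shape)  # noisy depths
z[::13] += 1.0                                              # a few outliers

best, best_score = None, np.inf
for _ in range(50):                        # LMedS over random minimal subsets
    idx = rng.choice(len(x), 3, replace=False)
    cand = fit_plane(x[idx], y[idx], z[idx])
    score = median_of_squares(cand, x, y, z)
    if score < best_score:
        best, best_score = cand, score
print("estimated a0, a1, a2:", np.round(best, 2))   # close to 2.0, 0.5, -0.3
```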
By executing the above operations, the scheme addresses the technical problem that a three-dimensional model constructed from single-modality medical images provides only a single viewing angle and cannot comprehensively reflect the focus position.
In a seventh embodiment, referring to fig. 1, the tracking feedback module compares the three-dimensional reconstruction result with the image captured in real time, updates the extracted feature information, calculates the position change, and feeds the result back to the related equipment and the display screen.
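For illustration, the position-change computation of the tracking feedback module can be sketched as follows; the matched model and frame feature points are toy assumptions, standing in for features extracted by the detector described earlier.

```python
# Sketch of position-change feedback from matched feature points.
import numpy as np

def position_change(model_pts, frame_pts):
    """Mean displacement between matched model and frame feature points."""
    deltas = frame_pts - model_pts
    return deltas.mean(axis=0), np.linalg.norm(deltas, axis=1).mean()

model_pts = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
frame_pts = model_pts + np.array([1.5, -0.5])      # simulated patient motion
shift, mean_dist = position_change(model_pts, frame_pts)
print("shift:", shift, "mean distance:", round(mean_dist, 2))
```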
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The invention and its embodiments have been described above without limitation, and the actual construction is not limited to the embodiments shown in the drawings. In summary, if a person of ordinary skill in the art, informed by this disclosure, devises a structure or embodiment similar to this technical solution without creative effort and without departing from the gist of the invention, it shall fall within the scope of protection of the invention.

Claims (6)

1. Surgical navigation system based on visual analysis, its characterized in that: the system comprises an image acquisition module, a data preprocessing module, a data classification module, a registration module, a three-dimensional reconstruction module and a tracking feedback module;
the image acquisition module acquires an original MRI image, an original CT image and a real-time video of a patient, sends the acquired original MRI image and original CT image of the patient to the data preprocessing module, and sends the real-time video to the tracking feedback module;
the data preprocessing module receives the original MRI image and the original CT image from the image acquisition module, performs denoising and filtering using an adaptive non-local mean filtering algorithm, and transmits the denoised images to the data classification module and the registration module, wherein the denoised images comprise a filtered, denoised CT image I_CT and a filtered, denoised MRI image I_MRI;
The data classification module receives the denoised image from the data preprocessing module, performs feature extraction, performs data classification, and sends the classified data to the registration module;
the feature extraction extracts feature vectors of the filtered, denoised CT image I_CT and of the filtered, denoised MRI image I_MRI; specifically, the filtered, denoised CT image I_CT and the filtered, denoised MRI image I_MRI are each passed through a convolutional neural network for feature extraction, comprising the following steps:
constructing a convolution layer; applying a ReLU activation function to the convolution-layer output to obtain activated data; constructing a pooling layer; constructing a fully connected layer; repeating the above operations on the filtered, denoised MRI image I_MRI to obtain its feature vector after feature extraction;
the data classification is used for judging whether the disease is rare or not according to the MRI image and the CT image, specifically, the data classification is carried out by adopting a weighted decision tree algorithm, and the method comprises the following steps:
collecting rare diseases in CT images and MRI images as a sample data set, presetting the sample data set as Q, wherein the data comprises detected diseases and corresponding labels thereof, the detected diseases are feature vectors, and the corresponding labels are various rare diseases; counting the number of each sample in the data set Q, and calculating sample weight; constructing a decision tree; randomly acquiring an MRI image and a CT image to construct a test data set, traversing a decision tree by the test data, and judging whether the disease is rare or not;
the registration module receives the images from the data preprocessing module and the classified data from the data classification module, performs feature extraction and registration on the MRI image and the CT image, and sends the registered data to the three-dimensional reconstruction module;
the three-dimensional reconstruction module receives the data from the registration module after registration, constructs a three-dimensional reconstruction model by using the data, and sends the constructed three-dimensional reconstruction model to the tracking feedback module;
the three-dimensional reconstruction module adopts an algorithm for constructing a surface model and a curved surface model to carry out three-dimensional reconstruction, and comprises the following steps:
performing surface interpolation along the x-axis and y-axis directions and calculating the plane coordinates of a point (a, b) in the image:

x_b = b / (m - 1),  y_a = a / (n - 1)

where x_b denotes the abscissa and y_a the ordinate of the corresponding interpolated point, m the number of points in one row of the grid, n the number of points in one column of the grid, and a the abscissa of the plane coordinate;
calculating the depth value at the plane coordinates (a, b):

z_ab = a_0 + a_1·x_b + a_2·y_a

where z_ab denotes the depth value at the plane point (x_b, y_a), a_0, a_1 and a_2 the parameters of the plane, and x_b and y_a the abscissa and ordinate of the corresponding interpolated point;
modeling the surface from the three directions x, y and z by a function of two variables; the surface model takes the form

f(x, y, z) = a_0 + a_1·x + a_2·y + a_3·x² + a_4·x·y + a_5·y² - z

where f(x, y, z) denotes the modeling function and a_0, a_1, a_2, a_3, a_4 and a_5 the parameters of the function;
regressing the constructed surface model using the least median of squares method:

M = min_c med_a ( z_a - f(x_b, y_a) )²

where M denotes the least median of squares, med the median, z_a the depth value at a point a of the plane, c the coefficients of the function, and f(x_b, y_a) the value of the function at the corresponding point (x_b, y_a);
reconstructing the curved-surface model as a grid-function expansion:

f(x, y) = Σ_{k=0}^{m-1} a_k·g_k(x, y)

where f(·) denotes the modeling function, x and y the abscissa and ordinate, g_k the grid basis functions, and a_0, a_1, a_2, a_3, a_4, …, a_{m-1} the parameters of the grid function;
the tracking feedback module receives the real-time video from the image acquisition module and the model constructed by the three-dimensional reconstruction module, and realizes real-time tracking and feedback.
2. The visual analysis-based surgical navigation system of claim 1, wherein the feature extraction of the filtered, denoised CT image I_CT and of the filtered, denoised MRI image I_MRI through the convolutional neural network comprises the following steps:
feature extraction of the filtered, denoised CT image I_CT through the convolutional neural network, which specifically comprises the following steps:

constructing a convolution layer, specifically a 3×3 convolution module, and inputting the filtered, denoised CT image I_CT into the convolution layer to obtain an output feature map O_CT; the convolution is computed as

C_{l+1}(i, j) = (C_l ⊗ w_{l+1})(i, j) + b

where b denotes the convolution-layer bias, C_l the input to the convolution layer, C_{l+1} its output, C_{l+1}(i, j) the output value at the position (i, j) of a center pixel i and its similar center pixel j, w_{l+1} the convolution-layer weights, and ⊗ the convolution of the input with the weights;
applying a ReLU activation function to the convolution-layer output to obtain activated data; the nonlinear operation is

f(O_CT) = max(0, O_CT)

that is, when an input value x is greater than 0, x is output and the corresponding data in the feature map are activated; when x is less than or equal to 0, 0 is output and the data are not activated;
constructing a pooling layer, specifically sending the activated data into the pooling layer, selecting suitable features through the pooling operation, and filtering information to obtain a pooled feature map F_CT; the pooling operation is

F_CT(i, j) = [ Σ_{c=0}^{f-1} Σ_{d=0}^{f-1} O_CT(s_0·i + c, s_0·j + d)^a ]^(1/a)

where F_CT(i, j) denotes the output value of the pooling layer, f the size of the pooling kernel, c the row offset and d the column offset within the pooling window, s_0 the stride, and a a pre-specified parameter; as a tends to infinity the operation returns the maximum pooling value; i denotes a center pixel and j a similar center pixel of the center pixel;
constructing a fully connected layer, specifically flattening the feature map so that it loses its spatial topology, and outputting the feature vector of the CT image at the fully connected layer;

repeating the above operations on the filtered, denoised MRI image I_MRI to obtain its feature vector after feature extraction;
the step of classifying data by adopting a weighted decision tree algorithm comprises the following steps:
collecting rare diseases in CT images and MRI images as a sample data set, presetting the sample data set as Q, wherein the data comprises detected diseases and corresponding labels thereof, the detected diseases are feature vectors, and the corresponding labels are various rare diseases;
counting the number of samples of each class in the data set Q and calculating the sample weight as

w = 1 / t, with t = p / |Q|

where w denotes the sample weight, p the number of samples in a class, and t the proportion of that class among all samples, so that rare classes receive larger weights;
constructing a decision tree, which comprises the following steps:

dividing the data set Q on a characteristic variable to obtain subset Q_1 and subset Q_2;

calculating the Gini index of subset Q_1, which is used to measure the purity of a node:

G(Q_1) = 1 - Σ_{t=1}^{T} (w_t · p_t)²

where G(Q_1) is the Gini index of subset Q_1, T the number of class labels, p_t the proportion of the t-th class label, and w_t the sample weight of that class;
calculating the Gini index of subset Q_2 in the same way;
calculating the total Gini index, selecting the characteristic variable and combination mode with the minimum conditional Gini index for division, and continuing to split the subsets until every subset belongs to a single category or can no longer be divided:

G(Q, a) = (|Q_1| / |Q|) · G(Q_1) + (|Q_2| / |Q|) · G(Q_2)

where G(Q, a) is the Gini index of the data set Q when the division condition is a, |Q| the size of the set Q, |Q_1| the size of subset Q_1, and |Q_2| the size of subset Q_2;
and randomly acquiring MRI images and CT images to construct a test data set, passing the test data through the decision tree, and judging whether the disease is a rare disease.
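A compact sketch of the weighted Gini computation and split scoring described above (illustrative Python under the assumed weighting; variable names are not part of the claim):

    import numpy as np

    def weighted_gini(labels, weights):
        # G = 1 - sum_t (weighted proportion of class t)^2
        w = np.array([weights[c] for c in labels], dtype=float)
        total = w.sum()
        return 1.0 - sum((w[np.array(labels) == c].sum() / total) ** 2
                         for c in set(labels))

    def conditional_gini(left_labels, right_labels, weights):
        # G(Q, a) = |Q1|/|Q| * G(Q1) + |Q2|/|Q| * G(Q2)
        n = len(left_labels) + len(right_labels)
        return (len(left_labels) / n) * weighted_gini(left_labels, weights) \
             + (len(right_labels) / n) * weighted_gini(right_labels, weights)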
3. The visual analysis-based surgical navigation system of claim 2, wherein: the registration module adopts a coarse-to-fine medical registration method combining progressive images with the speeded-up robust features (SURF) algorithm, comprising the following steps:
selecting an MRI image of a leaf node as the floating image and the CT image as the reference image, and averaging each pixel of the floating image with the corresponding pixel of the reference image to obtain a progressive image, the averaging being as follows:

    M_0(x, y) = (E(x, y) + F(x, y)) / 2

wherein M_0(x, y) represents the pixel value of the progressive image at coordinates (x, y), E(x, y) the floating image pixel value at coordinates (x, y), and F(x, y) the reference image pixel value at coordinates (x, y);
taking the generated progressive image M_0(x, y) as the new reference image and repeating the calculation with the floating image, continuing to generate M_1, M_2, …, M_i;
calculating the reference image and floating image pixel values by integrating the image, as follows:

    I(x, y) = Σ_{x'≤x} Σ_{y'≤y} i(x', y')

wherein I(x, y) represents the pixel value calculated by integrating the image, x' the lateral position variable, y' the longitudinal position variable, and i(x', y') the pixel value at position (x', y');
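A one-line NumPy sketch of this integral-image computation:

    import numpy as np

    def integral_image(img):
        # I(x, y) = sum of all pixels i(x', y') with x' <= x and y' <= y
        return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)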
and extracting characteristic points from the calculated pixel values using the matrix of second partial derivatives, as follows:

    H(x, y, σ) = | K_xx(x, y, σ)  K_xy(x, y, σ) |
                 | K_xy(x, y, σ)  K_yy(x, y, σ) |

wherein H(x, y, σ) represents the feature point data, σ the scale factor, K_xx(x, y, σ) the second derivative of the image in the x-direction at coordinates (x, y), K_yy(x, y, σ) the second derivative in the y-direction at coordinates (x, y), and K_xy(x, y, σ) the mixed second derivative in the x- and y-directions at coordinates (x, y);
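The second-derivative responses can be approximated with Gaussian derivative filters; a SciPy sketch (the determinant-of-Hessian score as the SURF-style interest measure is an assumption consistent with the claim's second-derivative matrix):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hessian_response(img, sigma):
        img = img.astype(np.float64)
        kxx = gaussian_filter(img, sigma, order=(0, 2))  # second derivative along x (columns)
        kyy = gaussian_filter(img, sigma, order=(2, 0))  # second derivative along y (rows)
        kxy = gaussian_filter(img, sigma, order=(1, 1))  # mixed derivative
        return kxx * kyy - kxy ** 2  # det(H); feature points are its local maxima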
and calculating the Gaussian kernel function used for extracting the feature points, as follows:

    G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

wherein G(x, y, σ) represents the Gaussian kernel function, σ the scale factor, exp the exponential operation, and x and y the abscissa and ordinate of the pixel point;
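The kernel can be materialized directly from this formula (the kernel size and normalization are assumptions):

    import numpy as np

    def gaussian_kernel(size, sigma):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        # G(x, y, sigma) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2)
        g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
        return g / g.sum()  # normalized so the kernel sums to 1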
establishing an image feature point database and calculating the Euclidean distance between feature points, as follows:

    D = √( Σ_t (x_t − x_t')² )

wherein D represents the Euclidean distance, x_t the feature points of the floating image, and x_t' the feature points of the intermediate progressive image;
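Matching over the descriptor database by this Euclidean distance might look as follows (the ratio test is a commonly used heuristic added here, not stated in the claim; at least two database descriptors are assumed):

    import numpy as np

    def match_features(desc_float, desc_prog, ratio=0.8):
        # D = sqrt(sum_t (x_t - x_t')^2) between floating and progressive-image descriptors
        matches = []
        for i, d in enumerate(desc_float):
            dist = np.sqrt(((desc_prog - d) ** 2).sum(axis=1))
            order = np.argsort(dist)
            if dist[order[0]] < ratio * dist[order[1]]:
                matches.append((i, int(order[0])))
        return matches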
carrying out affine transformation on the feature point data in the image feature point database and applying the obtained parameters to the floating image to obtain a coarse registration result, the affine transformation being as follows:

    | x' |   | R_00  R_01 | | x |   | T_x |
    | y' | = | R_10  R_11 | | y | + | T_y |

wherein R_00, R_01, R_10, R_11 represent the transformation parameters, T_x and T_y the displacement parameters, x and y the coordinates of the corresponding matching point in the reference image, and x' and y' the coordinates of the corresponding matching point in the floating image;
repeating the operation on the reference image and the initial coarse registration image to obtain a fine registration result.
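With OpenCV, estimating the affine parameters from matched point pairs and applying them to the floating image can be sketched as below (cv2.estimateAffine2D uses RANSAC internally; all names are illustrative and this is not the patented implementation):

    import cv2
    import numpy as np

    def coarse_register(floating, pts_float, pts_ref):
        # Estimate [R00 R01 Tx; R10 R11 Ty] from matched point pairs
        M, _ = cv2.estimateAffine2D(np.float32(pts_float), np.float32(pts_ref))
        h, w = floating.shape[:2]
        return cv2.warpAffine(floating, M, (w, h))  # apply the parameters to the floating image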
4. A visual analysis-based surgical navigation system according to claim 3, wherein: the data preprocessing module performs denoising and filtering by adopting an adaptive non-local means filtering algorithm, performing the denoising and filtering operation on the CT image to obtain a filtered, denoised CT image I_CT, denoising and filtering the MRI image to obtain a filtered, denoised MRI image I_MRI, and adjusting the brightness and contrast of the CT image to be the same as those of the MRI image.
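The stock OpenCV non-local means denoiser gives the flavor of this step; the adaptivity of the claimed algorithm (for example, choosing the filter strength h from a noise estimate) is assumed and is not part of this stock call:

    import cv2

    def denoise(img_u8, h=10):
        # Non-local means on an 8-bit single-channel image
        return cv2.fastNlMeansDenoising(img_u8, None, h,
                                        templateWindowSize=7, searchWindowSize=21)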
5. The visual analysis-based surgical navigation system of claim 4, wherein: the tracking feedback module compares the three-dimensional reconstruction result with the image captured in real time, updates the extracted feature information, calculates the position change, and feeds the position change back to the related equipment and the display screen.
6. The visual analysis-based surgical navigation system of claim 5, wherein: the image acquisition module acquires an original MRI image, an original CT image and a real-time video of a patient.
CN202311202306.0A 2023-09-18 2023-09-18 Surgical navigation system based on visual analysis Active CN116958132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311202306.0A CN116958132B (en) 2023-09-18 2023-09-18 Surgical navigation system based on visual analysis

Publications (2)

Publication Number Publication Date
CN116958132A CN116958132A (en) 2023-10-27
CN116958132B true CN116958132B (en) 2023-12-26

Family

ID=88451484

Country Status (1)

Country Link
CN (1) CN116958132B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3509013A1 (en) * 2018-01-04 2019-07-10 Holo Surgical Inc. Identification of a predefined object in a set of images from a medical image scanner during a surgical procedure

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101843992B1 (en) * 2017-11-30 2018-05-14 재단법인 구미전자정보기술원 Augmented reality based cannula guide system for interventional cardiology procedures and method thereof
CN113450294A (en) * 2021-06-07 2021-09-28 刘星宇 Multi-modal medical image registration and fusion method and device and electronic equipment
WO2022257345A1 (en) * 2021-06-07 2022-12-15 刘星宇 Medical image fusion method and system, model training method, and storage medium
CN113298853A (en) * 2021-06-28 2021-08-24 郑州轻工业大学 Step-by-step progressive two-stage medical image registration method
CN116485850A (en) * 2023-03-22 2023-07-25 华南师范大学 Real-time non-rigid registration method and system for surgical navigation image based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Advanced Nanotechnology Leading the Way to Multimodal Imaging-Guided Precision Surgical Therapy; Cong Wang et al.; WILEY VCH; 1-122 *
Application of multimodal image automatic registration and fusion technology combined with a surgical robot in the diagnosis and treatment of tumors of the deep lateral facial region; Jin Nenghao et al.; Chinese Journal of Medical Imaging; Vol. 31, No. 6; 572-576 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant