CN114792326A - Surgical navigation point cloud segmentation and registration method based on structured light - Google Patents
- Publication number: CN114792326A
- Application number: CN202210333748.8A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- patient
- preoperative
- body surface
- structured light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11 — Region-based segmentation
- G06F18/2135 — Feature extraction based on approximation criteria, e.g. principal component analysis
- G06N3/045 — Combinations of neural networks
- G06N3/08 — Neural network learning methods
- G06T19/003 — Navigation within 3D models or images
- G06T7/33 — Image registration using feature-based methods
- G06T7/85 — Stereo camera calibration
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses a structured-light-based surgical navigation point cloud segmentation and registration method comprising the following steps: calibrating the structured light system; projecting coded structured light onto the patient's body surface during surgery to acquire the intraoperative body surface point cloud; performing a preoperative CT scan of the patient and a three-dimensional reconstruction to obtain a preoperative three-dimensional image, applying a threshold to extract the preoperative body surface image, and sampling it to obtain the preoperative body surface point cloud; segmenting the back region out of both the preoperative and intraoperative body surface point clouds with a point cloud segmentation algorithm; and registering the preoperative and intraoperative back point clouds with a point cloud registration algorithm to obtain the coordinate transformation between the preoperative CT image space and the intraoperative patient space. Built on structured light equipment, the method matches the preoperative and intraoperative patient spaces and thereby completes the registration step of surgical navigation without human intervention, with the advantages of being radiation-free, low in complexity, high in precision and fast.
Description
Technical Field
The invention relates to the field of surgical navigation and point cloud processing, in particular to a structured light-based surgical navigation point cloud segmentation and registration method.
Background
With the continuing progress of science and technology and the interdisciplinary fusion of medicine and engineering, computer-assisted techniques such as three-dimensional positioning, image processing, visualization, surgical simulation and navigation have advanced greatly in surgical practice, alongside developments in clinical diagnosis and treatment, artificial intelligence and big data. Digital, intelligent navigated surgery is a core technology for making surgical diagnosis and treatment precise and minimally invasive, and it has therefore attracted wide attention and in-depth research.
Surgical navigation combines medicine, computer technology, image processing and robotics: surgical planning is performed on preoperative image data (such as ultrasound, X-ray, MRI or CT), and during the operation these images serve as the reference that guides and assists a doctor or a robot in performing the procedure.
An important step in surgical navigation is patient space registration, that is, obtaining the transformation between the actual space occupied by the patient during surgery and the preoperative medical image space; registration accuracy directly determines navigation precision. Traditional registration requires the doctor to cut through skin and soft tissue to expose the lesion and to compare the intraoperative view against the preoperatively planned scheme. The current clinical approach is to attach markers to the patient's body surface, implant markers in bone, or use anatomical landmark points: the patient is scanned together with the markers before surgery, X-ray imaging is introduced during surgery to locate the markers at the lesion, and the doctor manually selects corresponding marker points to match the two spaces. This manual, point-by-point matching is tedious, time-consuming and prone to deviation; markers can shift or fall off during surgery; implanted markers injure the patient; and repeated intraoperative X-ray navigation greatly increases the radiation exposure of both doctor and patient. Research into fast, accurate registration methods that avoid markers and reduce X-ray radiation is therefore of great significance for surgical navigation.
Compared with locating the lesion intraoperatively by X-ray, structured light is completely radiation-free, accurate and efficient, and requires no pasted or implanted markers. The surgical registration problem in navigation is thereby converted into registering the preoperative image-space point cloud against the intraoperative structured light point cloud: the surgical region is segmented out of each cloud, and the segmented preoperative and intraoperative region point clouds are registered to obtain the coordinate transformation between the preoperative CT image space and the intraoperative patient space. Designing an accurate and effective point cloud segmentation and registration algorithm is therefore essential to improving navigation accuracy.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention provides a structured-light-based surgical navigation point cloud segmentation and registration method that uses structured light to acquire the intraoperative patient point cloud without radiation.
The structured-light-based surgical navigation point cloud segmentation and registration method disclosed by the invention specifically comprises the following steps:
step 1, calibrating a structured light system, and projecting a coding pattern to obtain a body surface point cloud of a patient in operation;
step 2, scanning a patient through CT before an operation and carrying out three-dimensional reconstruction to obtain a three-dimensional model, obtaining a three-dimensional image of the body surface of the patient before the operation through setting a threshold value, and sampling to obtain the body surface point cloud of the patient before the operation;
step 3, constructing a dynamic graph convolution network model based on feature reuse and attention mechanism, and segmenting the acquired preoperative and intraoperative patient body surface point cloud to obtain back point cloud;
and 4, constructing a point cloud registration network based on principal component analysis and dynamic graph convolution, and matching the segmented preoperative and intraoperative patient back point clouds to obtain a conversion relation between two space coordinate systems.
Preferably, the step 1 specifically comprises the following steps:
step 1-1: for camera calibration, acquire checkerboard calibration board images from multiple angles, detect the feature points to obtain their pixel coordinates, solve initial values of the intrinsic and extrinsic parameters, estimate the distortion coefficients, refine the parameters by maximum likelihood estimation, and compute the reprojection error; if it is less than 0.2 pixel, output the camera parameters;
step 1-2: for projector calibration, acquire horizontal and vertical complementary Gray code images, decode them to obtain the decoded values, solve sub-pixel values using a local homography matrix, and compute the reprojection error; if it is less than 0.2 pixel, output the projector parameters;
where p denotes the set of camera pixels inside a local rectangular window and q the corresponding set of decoded projector pixels, a local homography H being fitted over the window such that q ≃ Hp,
and with the target corner located at the centre x_c of the window, applying the local homography gives the final projector corner coordinate x_p = H·x_c; the pixel coordinates obtained in this way are of sub-pixel precision;
and 1-3, project the coded positive and negative complementary Gray code patterns, decode them, and extract the three-dimensional coordinates of the intraoperative body surface points using the PCL point cloud library, thereby obtaining the intraoperative patient body surface point cloud.
Preferably, the step 2 specifically comprises the following steps:
step 2-1: perform a plain chest-and-abdomen CT scan of the patient before surgery and reconstruct it in three dimensions with Mimics software to obtain the preoperative three-dimensional model;
step 2-2: different tissues yield different CT values according to their density and X-ray absorption; the preoperative body surface three-dimensional image is obtained by keeping CT values greater than -200 Hu and less than 50 Hu, and the PCL point cloud library is used to sample this image into the preoperative patient body surface point cloud.
Preferably, the specific process of step 3 is as follows:
step 3-1: feature reuse is achieved by densely connecting the different edge convolution layers, where an edge convolution layer is expressed as:

x_i^{l+1} = max_{j∈N(i)} σ(BN(h_Θ(x_i^l, x_j^l − x_i^l)))

where x_i^l is the i-th point at the l-th layer and x_i^{l+1} its output after the parameter update of the edge convolution, σ denotes the ReLU activation function, BN denotes batch normalization, and e_ij^l = h_Θ(x_i^l, x_j^l − x_i^l) denotes the edge features obtained by the current layer;
step 3-2: based on the dynamic graph convolution network model, spatial attention is added to every edge convolution layer to strengthen the expression of the geometric relation between each sampled point and its neighbourhood nodes; spatial attention is expressed as:

A_s(F) = σ(g(F_s(max) ‖ F_s(avg))) ⊗ F

where σ denotes the sigmoid activation function, g a convolution operation, F_s(max) and F_s(avg) the max pooling and average pooling operations respectively, ‖ the concatenation operation, and A_s(F) the resulting new feature map;
Step 3-3: classify every point of the input preoperative and intraoperative patient body surface point clouds with the feature-reuse-and-attention dynamic graph convolution network model, collect all points classified as back, and store them as the segmented back point cloud.
Preferably, the step 4 specifically includes the following steps:
step 4-1: using principal component analysis, compute the centroids of the preoperative and intraoperative back point clouds to be registered together with the covariance matrix of each cloud; perform singular value decomposition on each covariance matrix to obtain its eigenvalues and eigenvectors; take the eigenvector with the largest eigenvalue of each cloud as that cloud's principal component vector; and from the principal component vectors compute the rotation matrix R and translation vector t between the two clouds, completing their initial registration;
step 4-2: use a dynamic graph convolution network, namely the feature-reuse-and-attention model constructed in step 3, as the feature extractor; treating the feature extractor as an imaging function, an improved LK algorithm computes the feature projection error between the two clouds' features and minimizes the feature difference, i.e. the objective

min_{R,t} ||φ(P_T) − φ(G·P_S)||₂²

thereby obtaining the optimal transformation matrix G, where the rotation matrix R ∈ SO(3), the translation vector t ∈ R³, and φ: R^{N×3} → R^K denotes the feature extraction function of the dynamic graph convolution, K being the dimension of the extracted feature.
The beneficial effects of the invention are as follows:
the invention provides a structured light-based surgical navigation point cloud segmentation and registration method, which is used for collecting body surface point clouds of a human body in an operation through structured light, can be used for replacing perspective scanning of X-ray in the operation, and is characterized by no need of contact, no radiation, high precision, high speed and good real-time property. The human body surface point cloud before the operation can be well obtained through the three-dimensional reconstruction and threshold setting of the CT image. The point cloud segmentation is carried out on the dynamic graph convolution network model based on the feature reuse and attention mechanism, the problem of high requirements on point cloud similarity structures in the point cloud registration process is solved, the registration difficulty is reduced, and the registration precision is improved. The point cloud registration method based on principal component analysis and dynamic graph convolution solves the problem of overlarge difference of rotation and translation between the preoperative CT image space patient point cloud and the intraoperative actual space patient point cloud, and can complete the operation registration process in operation navigation quickly and accurately.
Drawings
FIG. 1 is a flow chart of the structured light-based surgical navigation point cloud segmentation and registration method of the present invention.
FIG. 2 is a flow chart of point cloud segmentation by a dynamic graph convolution network model based on feature reuse and attention mechanism in the method of the invention.
FIG. 3 is a flow chart of point cloud registration based on principal component analysis and dynamic graph convolution in the method of the present invention.
Detailed Description
Referring to fig. 1, the surgical navigation point cloud segmentation and registration method based on structured light provided by the invention mainly comprises the following processes:
step 1, calibrating the structured light system, and projecting a coding pattern to obtain a body surface point cloud of a patient in an operation.
Step 2, scanning the patient through CT before the operation and carrying out three-dimensional reconstruction to obtain a three-dimensional model, obtaining a three-dimensional image of the body surface of the patient before the operation through setting a threshold value, and carrying out sampling to obtain the body surface point cloud of the patient before the operation.
And 3, constructing a dynamic graph convolution network model based on feature reuse and attention mechanism, and segmenting the acquired preoperative and intraoperative patient body surface point clouds to obtain back point clouds.
And 4, constructing a point cloud registration network based on principal component analysis and dynamic graph convolution, and matching the segmented preoperative and intraoperative patient back point clouds to obtain a conversion relation between two space coordinate systems.
The step 1 is to calibrate the structured light system and project the coding pattern to obtain the body surface point cloud of the patient in the operation. The method specifically comprises the following steps:
1-1, to calibrate the camera, acquire checkerboard images from multiple angles, detect the feature points to obtain their pixel coordinates, solve initial values of the intrinsic and extrinsic parameters, estimate the distortion coefficients, refine the parameters by maximum likelihood estimation, and compute the reprojection error; if the error is smaller than the threshold (0.2 pixel), output the camera parameters.
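The accept/reject rule above can be sketched as follows. This is a minimal numpy illustration, not the patent's implementation; the function names and the RMS form of the error are assumptions:

```python
import numpy as np

def rms_reprojection_error(observed, reprojected):
    """RMS distance in pixels between detected corners and reprojected corners."""
    d = np.linalg.norm(observed - reprojected, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def accept_calibration(observed, reprojected, threshold=0.2):
    """Accept the calibration only if the RMS reprojection error is below the threshold."""
    return rms_reprojection_error(observed, reprojected) < threshold
```

The same check applies to both camera and projector calibration, only the source of the reprojected points differs.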
1-2, calibrating the projector, acquiring horizontal and vertical complementary Gray code images, decoding to obtain a decoded value, solving a sub-pixel value by using a local homography matrix, calculating a reprojection error, and outputting projector parameters if the reprojection error is smaller than a threshold (0.2 pixel).
where p denotes the set of camera pixels inside a local rectangular window and q the corresponding set of decoded projector pixels; a local homography H is fitted over the window such that q ≃ Hp.
With the target corner located at the centre x_c of the window, applying the local homography gives the final projector corner coordinate x_p = H·x_c; the pixel coordinates obtained in this way have sub-pixel accuracy.
1-3, project the coded positive and negative complementary Gray code patterns, decode them, and extract the three-dimensional coordinates of the intraoperative body surface points using the PCL point cloud library, thereby obtaining the intraoperative patient body surface point cloud.
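The Gray code coding and decoding used here can be sketched with the standard binary-reflected Gray code. A minimal numpy illustration with illustrative names; a real decoder would additionally threshold the positive/negative complementary image pairs to binarize each pixel:

```python
import numpy as np

def gray_encode(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by cumulative XOR of the progressively shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def decode_pattern_stack(bits):
    """bits: (num_bits, H, W) binarized pattern images, most significant bit first.
    Returns the per-pixel decoded stripe index."""
    gray = np.zeros(bits.shape[1:], dtype=np.int64)
    for b in range(bits.shape[0]):
        gray = (gray << 1) | bits[b].astype(np.int64)  # stack bits into a Gray value
    return np.vectorize(gray_decode)(gray)             # invert Gray code per pixel
```

Consecutive stripe indices differ in exactly one bit, which is what makes Gray code patterns robust at stripe boundaries.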
In step 2, the patient is scanned by CT before surgery and reconstructed in three dimensions to obtain a three-dimensional model; a three-dimensional image of the preoperative body surface is obtained by thresholding, and sampling it yields the preoperative patient body surface point cloud. The step specifically comprises:
and 2-1, performing chest and abdomen flat scanning CT on the patient before the operation, and performing three-dimensional reconstruction through Mimics software to obtain a three-dimensional reconstruction model of the patient before the operation.
And 2-2, different tissues yield different CT values (in Hu) according to their density and X-ray absorption; the preoperative body surface three-dimensional image is obtained by keeping CT values greater than -200 Hu and less than 50 Hu, and the PCL point cloud library is used to sample this image into the preoperative patient body surface point cloud.
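The HU-window thresholding and sampling of step 2-2 can be sketched as below; a minimal numpy stand-in for the Mimics/PCL pipeline, with illustrative names and a naive uniform sampler rather than PCL's voxel-grid filter:

```python
import numpy as np

def body_surface_points(volume, spacing, lo=-200.0, hi=50.0):
    """Select voxels with -200 HU < value < 50 HU (skin/soft-tissue window)
    and return their physical coordinates as an N x 3 point cloud."""
    mask = (volume > lo) & (volume < hi)
    idx = np.argwhere(mask)               # voxel indices (z, y, x)
    return idx * np.asarray(spacing)      # scale by voxel spacing to millimetres

def downsample(points, step):
    """Naive uniform sampling: keep every `step`-th point."""
    return points[::step]
```

In practice the surface would also be extracted from the thresholded mask (e.g. by marching cubes) before sampling; this sketch only shows the HU window itself.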
And 3, constructing a dynamic graph convolution network model based on feature reuse and attention mechanism, and segmenting the acquired preoperative and intraoperative patient body surface point clouds to obtain back point clouds. The flow chart is shown in fig. 2, and the specific process is as follows:
feature reuse is achieved using dense connections for different edge convolution layers.
The mathematical expression of an edge convolution layer is:

x_i^{l+1} = max_{j∈N(i)} σ(BN(h_Θ(x_i^l, x_j^l − x_i^l)))

where x_i^l is the i-th point at the l-th layer and x_i^{l+1} its output after the parameter update of the edge convolution, σ denotes the ReLU activation function, BN denotes batch normalization, and e_ij^l = h_Θ(x_i^l, x_j^l − x_i^l) denotes the edge features obtained by the current layer.
The feature reuse process is as follows. Let f_0 denote the features of the initial input point cloud. Passing layer by layer through a network of l layers, each applying a nonlinear edge-convolution transformation T_l, feature reuse lets the l-th layer learn not only from the previous layer but from the outputs of all earlier layers, its own output in turn feeding every later layer:

f_l = T_l([f_0, f_1, …, f_{l−1}])

where T_l(·) is the nonlinear transformation of the l-th layer and [f_0, …, f_{l−1}] is the concatenation of the initial features and all previous layer outputs along the channel dimension. This lets f_l aggregate multi-level shape semantics and multi-scale shape information.
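The dense reuse rule f_l = T_l([f_0, …, f_{l−1}]) can be illustrated with a toy forward pass. A plain linear map plus ReLU stands in for the edge convolution T_l here, so this is a shape-level sketch of the concatenation pattern only, not the network itself:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def densely_connected_forward(points_feat, weights):
    """Each layer consumes the channel-wise concatenation of the initial
    features and all earlier layer outputs; a linear map + ReLU stands in
    for the edge-convolution transformation T_l."""
    outputs = [points_feat]                        # f_0: (N, C0)
    for W in weights:                              # W: (sum of prev channel dims, C_l)
        stacked = np.concatenate(outputs, axis=1)  # reuse all earlier features
        outputs.append(relu(stacked @ W))          # f_l = T_l([f_0, ..., f_{l-1}])
    return np.concatenate(outputs, axis=1)         # multi-level feature map

# toy shapes: f_0 has 3 channels, then two layers of 4 channels each
rng = np.random.default_rng(0)
f0 = rng.normal(size=(10, 3))
weights = [rng.normal(size=(3, 4)), rng.normal(size=(7, 4))]
out = densely_connected_forward(f0, weights)       # shape (10, 3 + 4 + 4)
```

Note how the second weight matrix has 7 input channels (3 + 4): each layer's input width grows with every layer reused.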
Based on the dynamic graph convolution network model, space attention is added into each edge convolution layer to enhance the relation expression of the sampling point cloud center and the neighborhood node geometric structure.
The mathematical expression of spatial attention is:

A_s(F) = σ(g(F_s(max) ‖ F_s(avg))) ⊗ F

where σ denotes the sigmoid activation function, g a convolution operation, F_s(max) and F_s(avg) the max pooling and average pooling operations respectively, ‖ the concatenation operation, and A_s(F) the resulting new feature map.
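The spatial attention formula can be sketched directly. A toy per-point version in numpy, where the convolution g is reduced to a single weight vector over the two pooled channels (an assumption made for brevity):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(F, w):
    """A_s(F) = sigmoid(g(max-pool || avg-pool)) * F, pooling over channels.
    F: (N, C) per-point features; w: (2,) weights standing in for the convolution g."""
    f_max = F.max(axis=1, keepdims=True)              # (N, 1) channel-wise max pooling
    f_avg = F.mean(axis=1, keepdims=True)             # (N, 1) channel-wise avg pooling
    pooled = np.concatenate([f_max, f_avg], axis=1)   # splice: (N, 2)
    attn = sigmoid(pooled @ w)[:, None]               # (N, 1) per-point attention weight
    return attn * F                                   # re-weighted feature map A_s(F)
```

Each point's features are scaled by a weight in (0, 1) derived from its own pooled statistics, which is what emphasizes geometrically informative points.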
During network training the batch size is set to 8, training runs for 200 epochs with the Adam optimizer, and the learning rate lr is 0.001. The convolution layers are normalized with BN (batch normalization), the activation function is LeakyReLU, and a Dropout rate of 0.5 in the last two fully connected layers prevents overfitting. The data set is randomly split into training, test and validation sets in the ratio 8:1:1, and 1024 points are sampled from each point cloud as input.
Every point of the input preoperative and intraoperative patient body surface point clouds is classified with the feature-reuse-and-attention dynamic graph convolution network model; all points classified as back are collected and stored as the segmented back point cloud.
In step 4, a point cloud registration network based on principal component analysis and dynamic graph convolution is constructed, and the segmented preoperative and intraoperative patient back point clouds are matched to obtain the transformation between the two spatial coordinate systems, specifically as follows:
calculating the centroid of the preoperative and intraoperative patient back point clouds to be registered by using Principal Component Analysis (PCA), simultaneously calculating to obtain a covariance matrix of the two point clouds, performing Singular Value Decomposition (SVD) on the covariance matrix to obtain corresponding characteristic values and characteristic vectors, respectively taking the characteristic vector with the maximum characteristic value in the two point clouds as the principal component vector of the two point clouds, calculating a rotation matrix R and a translational vector t between the two point clouds through the principal component vector, and finishing the initial registration of the two point clouds.
A dynamic graph convolution network, namely the feature-reuse-and-attention model constructed in step 3, serves as the feature extractor. Treating the feature extractor as an imaging function, an improved LK algorithm computes the feature projection error between the two clouds' features and minimizes the feature difference, i.e. the objective

min_{R,t} ||φ(P_T) − φ(G·P_S)||₂²

thereby obtaining the optimal transformation matrix G, where the rotation matrix R ∈ SO(3), the translation vector t ∈ R³, and φ: R^{N×3} → R^K denotes the feature extraction function of the dynamic graph convolution, K being the dimension of the extracted feature.
The LK method on three-dimensional point clouds uses the inverse compositional (IC) algorithm, which inversely transforms the template point cloud toward the source point cloud so that the Jacobian matrix needs to be computed only once, greatly reducing the computational cost. Each column of the Jacobian is approximated by a finite difference obtained from an infinitesimal perturbation; once the Jacobian is known, the corresponding rotation and translation parameters ξ are solved.
The registration algorithm iterates: each iteration yields new rotation and translation parameters ξ and an incremental transformation ΔG, which is used to update the source point cloud. The updates continue until the iteration limit is reached or the increment falls below a threshold, at which point the final predicted transformation G_est is obtained:

G_est = ΔG_n · … · ΔG_i · … · ΔG_0

where ΔG_i is the increment of the transformation matrix at iteration i and G_est is the final predicted transformation matrix.
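The accumulation G_est = ΔG_n · … · ΔG_0 can be illustrated with homogeneous 4x4 matrices; pure translations are used as the increments for simplicity:

```python
import numpy as np

def compose_increments(deltas):
    """Chain the per-iteration increments: G_est = dG_n @ ... @ dG_1 @ dG_0."""
    G = np.eye(4)
    for dG in deltas:          # deltas in iteration order: dG_0 first
        G = dG @ G             # left-multiply each newer increment
    return G

def translation_increment(v):
    """Homogeneous 4x4 transform for a pure translation v (illustration only)."""
    dG = np.eye(4)
    dG[:3, 3] = v
    return dG
```

With rotational increments included, the same left-multiplication order applies; only the 3x3 block of each ΔG changes.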
During registration network training, the feature-reuse-and-attention dynamic graph convolution network uses the Adam optimizer with a learning rate of 0.001, 200 training epochs, a maximum of 10 iterations and a batch size of 8; a BN layer follows every convolution layer in the AFRDGCNN for normalization, Dropout is not used, and 2048 points are sampled from each input point cloud.
Registering the segmented structured light scan point cloud against the CT scan point cloud with the steps above yields a final transformation matrix that is exactly the transformation between the preoperative CT image space and the intraoperative patient coordinate system, completing the surgical registration step of surgical navigation.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the protection scope of the present invention.
Claims (5)
1. A surgical navigation point cloud segmentation and registration method based on structured light is characterized by comprising the following steps:
step 1, calibrating a structured light system, and projecting a coding pattern to obtain a body surface point cloud of a patient in an operation;
step 2, scanning a patient through CT before an operation and carrying out three-dimensional reconstruction to obtain a three-dimensional model, obtaining a three-dimensional image of the body surface of the patient before the operation through setting a threshold value, and sampling to obtain the body surface point cloud of the patient before the operation;
step 3, constructing a dynamic graph convolution network model based on feature reuse and attention mechanism, and segmenting the acquired preoperative and intraoperative patient body surface point clouds to obtain back point clouds;
and 4, constructing a point cloud registration network based on principal component analysis and dynamic graph convolution, and matching the segmented preoperative and intraoperative patient back point clouds to obtain a conversion relation between two space coordinate systems.
2. The structured light-based surgical navigation point cloud segmentation and registration method as claimed in claim 1, wherein the step 1 specifically comprises the following steps:
step 1-1: for camera calibration, acquiring checkerboard calibration-board images from multiple angles, detecting the feature points to obtain their pixel coordinates, solving initial values of the intrinsic and extrinsic parameters and estimating the distortion coefficients, refining the parameters by maximum-likelihood estimation, computing the reprojection error, and outputting the camera parameters if the reprojection error is below 0.2 pixel;
step 1-2: for projector calibration, acquiring horizontal and vertical complementary Gray-code images, decoding them to obtain decoded values, solving sub-pixel values using local homography matrices, computing the reprojection error, and outputting the projector parameters if the reprojection error is below 0.2 pixel;
wherein p denotes the set of camera pixel points within a small rectangular region around a checkerboard corner and q denotes the corresponding set of decoded projector pixel points; the local homography matrix H is estimated as H = argmin_H Σ ||q − H p||², and since the target corner point lies at the center of the region, applying the local homography to it, q̄ = H p̄, yields the final projector corner pixel coordinates, which are of sub-pixel precision;
and step 1-3, projecting the coded positive and negative complementary Gray-code patterns, decoding them, and extracting the three-dimensional coordinates of the intraoperative body-surface points with the PCL point cloud library, thereby obtaining the intraoperative body-surface point cloud of the patient.
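The complementary Gray-code projection in steps 1-2 and 1-3 relies on the standard reflected-binary encoding, in which adjacent stripe indices differ in exactly one bit, so a single mis-thresholded pattern shifts the recovered correspondence by at most one column. A minimal sketch of that encode/decode step (helper names are illustrative, not from the patent):

```python
def gray_encode(n: int) -> int:
    """Convert a binary stripe index to its reflected (Gray) code."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Recover the stripe index from a per-pixel decoded Gray-code value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def patterns_for(width: int) -> int:
    """Number of Gray-code patterns needed to label `width` stripe columns."""
    return max(1, (width - 1).bit_length())
```

`patterns_for` gives the pattern count for a given projector width; the patent additionally projects the negative (inverted) image of each pattern so that the per-pixel bit decision becomes a robust comparison of two intensities rather than a fixed threshold.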
3. The structured light-based surgical navigation point cloud segmentation and registration method as claimed in claim 1, wherein the step 2 specifically comprises the steps of:
step 2-1: performing a preoperative plain CT scan of the chest and abdomen of the patient and a three-dimensional reconstruction in Mimics software to obtain the preoperative three-dimensional model of the patient;
step 2-2: different parts of the human body yield different CT values according to their density and degree of X-ray absorption; a three-dimensional image of the preoperative body surface is obtained by keeping CT values greater than −200 Hu and less than 50 Hu, and this image is sampled with the PCL point cloud library to obtain the preoperative body-surface point cloud of the patient.
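Step 2-2 amounts to a window threshold on the Hounsfield volume followed by collecting the surviving voxel coordinates. The function below is a hypothetical NumPy stand-in for the Mimics/PCL pipeline the claim actually uses (array names and spacing are illustrative):

```python
import numpy as np

def surface_point_cloud(volume_hu, spacing=(1.0, 1.0, 1.0)):
    """Return an (N, 3) array of physical coordinates for voxels whose
    CT value lies in the open interval (-200 Hu, 50 Hu)."""
    mask = (volume_hu > -200) & (volume_hu < 50)   # soft-tissue window
    idx = np.argwhere(mask)                        # voxel indices (z, y, x)
    return idx * np.asarray(spacing)               # scale to millimetres
```

A real pipeline would additionally keep only voxels on the outer boundary of the mask (the skin) rather than every in-range voxel, e.g. via morphological erosion, before downsampling the cloud.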
4. The method for segmenting and registering the surgical navigation point cloud based on the structured light as claimed in claim 1, wherein the specific process of the step 3 is as follows:
step 3-1: feature reuse is achieved by densely connecting the different edge convolution layers, where an edge convolution layer is mathematically expressed as:

x_i^(l+1) = max_{j∈N(i)} e_ij^l,  with e_ij^l = σ(BN(h_Θ(x_i^l, x_j^l − x_i^l)))

wherein x_i^l is the i-th point in layer l and x_i^(l+1) is its output after the edge-convolution parameter update, σ denotes the ReLU activation function, BN denotes batch normalization, and e_ij^l denotes the edge features obtained in the current layer l;
step 3-2: based on the dynamic graph convolution network model, adding spatial attention to each edge convolution layer to strengthen the expression of the geometric relation between each sampled point and its neighborhood nodes, the spatial attention being mathematically expressed as:

A_s(F) = σ(g(F_s(max) ‖ F_s(avg))) × F

where σ denotes the sigmoid activation function, g denotes a convolution operation, F_s(max) and F_s(avg) denote the max-pooling and average-pooling operations respectively, ‖ denotes concatenation, and A_s(F) is the resulting feature map;
Step 3-3: classifying each point of the input preoperative and intraoperative body-surface point clouds with the feature-reuse and attention dynamic graph convolution network model, collecting all points classified as the back class, and saving them as the segmented back point cloud.
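The spatial attention of step 3-2 can be sketched numerically: channel-wise max and average pooling give two scalars per point, which are concatenated, passed through the convolution g (reduced here to a 2-weight linear map, an assumption for illustration), squashed by a sigmoid, and used to re-weight the feature map. A minimal NumPy version:

```python
import numpy as np

def spatial_attention(F, w, b=0.0):
    """A_s(F) = sigmoid(g(F_max || F_avg)) * F  for F of shape (N, C).

    `w` is a (2, 1) weight standing in for the convolution g; `b` its bias.
    """
    f_max = F.max(axis=1, keepdims=True)              # (N, 1) max pooling
    f_avg = F.mean(axis=1, keepdims=True)             # (N, 1) average pooling
    pooled = np.concatenate([f_max, f_avg], axis=1)   # (N, 2) concatenation
    score = 1.0 / (1.0 + np.exp(-(pooled @ w + b)))   # sigmoid(g(...)), (N, 1)
    return score * F                                  # re-weight every point
```

In the network of the claim, g is a learned layer and the attention is applied inside every edge convolution block; this sketch only shows the arithmetic of the formula above.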
5. The structured light-based surgical navigation point cloud segmentation and registration method as claimed in claim 1, wherein the step 4 comprises the following steps:
step 4-1: computing with principal component analysis the centroids of the preoperative and intraoperative point clouds to be registered, computing the covariance matrix of each point cloud, performing singular value decomposition on the covariance matrices to obtain the corresponding eigenvalues and eigenvectors, taking for each point cloud the eigenvector with the largest eigenvalue as its principal component vector, and computing from the principal component vectors the rotation matrix R and translation vector t between the two point clouds, thereby completing the initial registration;
step 4-2: using the dynamic graph convolution network, namely the feature-reuse and attention model of step 3, as a feature extractor φ, and applying an improved LK algorithm in which φ is treated as an imaging function; the feature projection error between the two point clouds is computed and the feature difference between them is minimized, yielding the optimal transformation matrix, with the objective function expressed as:

G* = argmin_G || φ(P_intra) − φ(G · P_pre) ||²

where P_pre and P_intra are the segmented preoperative and intraoperative back point clouds and G is the rigid transformation sought.
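The initial registration of step 4-1 can be sketched with NumPy: centroids and covariance matrices are computed, each covariance is decomposed, and a coarse rotation is built from the principal axes. The sketch ignores the sign/reflection ambiguity of PCA axes, which a practical implementation must resolve (function names are illustrative):

```python
import numpy as np

def pca_axes(P):
    """Centroid and principal axes (columns, by descending eigenvalue)
    of an (N, 3) point cloud."""
    c = P.mean(axis=0)
    cov = np.cov((P - c).T)          # 3x3 covariance matrix
    _, _, vt = np.linalg.svd(cov)    # SVD of a symmetric matrix = eigendecomposition
    return c, vt.T                   # eigenvectors as columns

def pca_initial_registration(src, dst):
    """Coarse rigid transform (R, t) aligning src's principal axes to dst's."""
    c_s, V_s = pca_axes(src)
    c_d, V_d = pca_axes(dst)
    R = V_d @ V_s.T                  # rotate source axes onto destination axes
    t = c_d - R @ c_s                # then match the centroids
    return R, t
```

Because each principal axis is only defined up to sign, a robust version tests the candidate axis flips (and rejects reflections via the determinant) before handing R, t to the feature-based fine registration of step 4-2.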
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210333748.8A CN114792326A (en) | 2022-03-30 | 2022-03-30 | Surgical navigation point cloud segmentation and registration method based on structured light |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114792326A true CN114792326A (en) | 2022-07-26 |
Family
ID=82462618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210333748.8A Pending CN114792326A (en) | 2022-03-30 | 2022-03-30 | Surgical navigation point cloud segmentation and registration method based on structured light |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114792326A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102784003A (en) * | 2012-07-20 | 2012-11-21 | 北京先临华宁医疗科技有限公司 | Pediculus arcus vertebrae internal fixation operation navigation system based on structured light scanning |
CN113205547A (en) * | 2021-03-18 | 2021-08-03 | 北京长木谷医疗科技有限公司 | Point cloud registration method, bone registration method, device, equipment and storage medium |
KR20210104466A (en) * | 2020-02-17 | 2021-08-25 | 숭실대학교산학협력단 | Method for fine face registration for 3d surgical navigation system, recording medium and device for performing the method |
WO2021257094A1 (en) * | 2020-06-19 | 2021-12-23 | Hewlett-Packard Development Company, L.P. | Point cloud alignment |
Non-Patent Citations (5)
Title |
---|
DENGZHI LIU ET AL.: "PDC-Net:Robust point cloud registration using deep cyclic neural network combined with PCA", 《APPLIED OPTICS》 * |
XIAOLONG LU ET AL.: "Linked Attention-Based Dynamic Graph Convolution Module for Point Cloud Classification", 《2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)》 * |
SONG WEI ET AL.: "Point Cloud Classification and Segmentation Combining Dynamic Graph Convolution and Spatial Attention", 《JOURNAL OF IMAGE AND GRAPHICS》 * |
QIN TINGWEI ET AL.: "Point Cloud Registration Algorithm Based on Residual Attention Mechanism", 《JOURNAL OF COMPUTER APPLICATIONS》 * |
XIAO WEIHU: "Research on Marker-Free Surgical Navigation Technology Based on Structured Light", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, MEDICINE & HEALTH SCIENCES》 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115830080A (en) * | 2022-10-27 | 2023-03-21 | 上海神玑医疗科技有限公司 | Point cloud registration method and device, electronic equipment and storage medium |
CN115830080B (en) * | 2022-10-27 | 2024-05-03 | 上海神玑医疗科技有限公司 | Point cloud registration method and device, electronic equipment and storage medium |
CN115880469A (en) * | 2023-02-20 | 2023-03-31 | 江苏省人民医院(南京医科大学第一附属医院) | Registration method of surface point cloud data and three-dimensional image |
CN117408908A (en) * | 2023-12-15 | 2024-01-16 | 南京邮电大学 | Preoperative and intraoperative CT image automatic fusion method based on deep neural network |
CN117408908B (en) * | 2023-12-15 | 2024-03-15 | 南京邮电大学 | Preoperative and intraoperative CT image automatic fusion method based on deep neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111091589B (en) | Ultrasonic and nuclear magnetic image registration method and device based on multi-scale supervised learning | |
CN112508965B (en) | Automatic outline sketching system for normal organs in medical image | |
CN114792326A (en) | Surgical navigation point cloud segmentation and registration method based on structured light | |
JP2021521993A (en) | Image enhancement using a hostile generation network | |
CN111599432B (en) | Three-dimensional craniofacial image feature point marking analysis system and method | |
CN112598649B (en) | 2D/3D spine CT non-rigid registration method based on generation of countermeasure network | |
KR102442090B1 (en) | Point registration method in surgical navigation system | |
KR102442093B1 (en) | Methods for improving surface registration in surgical navigation systems | |
CN111260702B (en) | Laser three-dimensional point cloud and CT three-dimensional point cloud registration method | |
CN114187293A (en) | Oral cavity palate part soft and hard tissue segmentation method based on attention mechanism and integrated registration | |
CN115578320A (en) | Full-automatic space registration method and system for orthopedic surgery robot | |
CN115830016A (en) | Medical image registration model training method and equipment | |
CN116258732A (en) | Esophageal cancer tumor target region segmentation method based on cross-modal feature fusion of PET/CT images | |
CN116650115A (en) | Orthopedic surgery navigation registration method based on UWB mark points | |
CN111192268A (en) | Medical image segmentation model construction method and CBCT image bone segmentation method | |
CN116824173A (en) | Medical image processing method, medical image processing device and storage medium | |
CN116612166A (en) | Registration fusion algorithm for multi-mode images | |
CN114820730B (en) | CT and CBCT registration method based on pseudo CT | |
CN112825619A (en) | Training machine learning algorithm using digitally reconstructed radiological images | |
CN115239740A (en) | GT-UNet-based full-center segmentation algorithm | |
EP3910597A1 (en) | Body representations | |
Van Houtte et al. | A deep learning approach to horse bone segmentation from digitally reconstructed radiographs | |
CN113850710A (en) | Cross-modal medical image accurate conversion method | |
CN113256693A (en) | Multi-view registration method based on K-means and normal distribution transformation | |
CN116012526B (en) | Three-dimensional CT image focus reconstruction method based on two-dimensional image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||