CN111784727A - Method and device for applying to vessel intervention operation navigation based on 3D/2D registration - Google Patents
- Publication number
- CN111784727A (application CN202010554304.8A)
- Authority
- CN
- China
- Prior art keywords
- blood vessel
- registration
- data
- image
- vessel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G06T5/73—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Abstract
A method and device for vascular interventional surgery navigation based on 3D/2D registration, the method comprising the following steps: (1) for the preoperative static three-dimensional CTA data, segmenting the image with a convolutional-neural-network-based method; (2) for the intraoperative dynamic XRA data, segmenting the sequence images with an RPCA-based foreground/background separation method; (3) constructing 3D and 2D vascular topological models respectively, and registering the vessel data with a 3D/2D vessel registration method; for the real-time intraoperative dynamic vessel data, rapidly compensating the deformation of the 2D vessel with a distance-transformation-based method; (4) projecting the 3D vessel model onto the XRA imaging plane according to the transformation from the 3D vessel to the XRA imaging space, and fusing the 3D vessel with the 2D image using a depth-perception-enhanced visualization method.
Description
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a method and device for vascular interventional surgery navigation based on 3D/2D registration.
Background
The fusion of a preoperative or intraoperative 3D image and a real-time XRA image is the current mainstream vascular interventional navigation mode based on multi-mode image fusion, such as the Vessel Assist system of GE and the Vessel navigator system of Philips.
However, these systems require manual registration of the multi-modality images and lack rapid elastic registration, thereby limiting the widespread use of surgical navigation systems.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for vascular interventional surgery navigation based on 3D/2D registration, which registers the multi-modality images automatically, achieves rapid elastic registration, and thereby promotes the wide application of surgical navigation systems.
The technical scheme of the invention is as follows. The method for vascular interventional surgery navigation based on 3D/2D registration comprises the following steps:
(1) for the preoperative static three-dimensional CTA data, segmenting the image with a convolutional-neural-network-based method;
(2) for the intraoperative dynamic XRA data, segmenting the sequence images with an RPCA-based foreground/background separation method;
(3) constructing 3D and 2D vascular topological models respectively, and registering the vessel data with a 3D/2D vessel registration method; for the real-time intraoperative dynamic vessel data, rapidly compensating the deformation of the 2D vessel with a distance-transformation-based method;
(4) projecting the 3D vessel model onto the XRA imaging plane according to the transformation from the 3D vessel to the XRA imaging space, and fusing the 3D vessel with the 2D image using a depth-perception-enhanced visualization method.
The invention applies the 3D/2D vessel registration method to vascular interventional surgery navigation: the 3D and 2D vessels are segmented and modeled with CTA and XRA vessel segmentation and extraction techniques, real-time dynamic vessel registration results are obtained with the 3D/2D registration method, and the 3D vessel and the 2D angiographic image are fused and displayed with a depth-perception-enhanced visualization method, thereby registering the multi-modality images automatically, achieving rapid elastic registration, and promoting the wide application of surgical navigation systems.
Also provided is a device for vascular interventional surgery navigation based on 3D/2D registration, comprising:
a CTA vascular structure extraction module for segmenting the preoperative static three-dimensional CTA data with a convolutional-neural-network-based method;
an XRA vascular structure extraction module for segmenting the intraoperative dynamic XRA sequence images with an RPCA-based foreground/background separation method;
a registration module for constructing 3D and 2D vascular topological models respectively and registering the vessel data with a 3D/2D vessel registration method, and for rapidly compensating the deformation of the 2D vessel with a distance-transformation-based method for the real-time intraoperative dynamic vessel data;
and a 3D/2D virtual-real fusion display module for projecting the 3D vessel model onto the XRA imaging plane according to the transformation from the 3D vessel to the XRA imaging space, and fusing the 3D vessel with the 2D image using a depth-perception-enhanced visualization method.
Drawings
FIG. 1 shows the 3D U-Net-based deep network architecture for CTA vessel segmentation.
Fig. 2 shows the architecture of the generative-adversarial-network-based aorta XRA segmentation method.
Fig. 3 shows a schematic of vessel 3D/2D + t registration.
Fig. 4 shows a flow chart of a method according to the invention for application to vessel intervention surgical navigation based on 3D/2D registration.
Detailed Description
As shown in fig. 4, the method for vascular interventional surgery navigation based on 3D/2D registration comprises the following steps:
(1) for the preoperative static three-dimensional CTA data, segmenting the image with a convolutional-neural-network-based method;
(2) for the intraoperative dynamic XRA data, segmenting the sequence images with an RPCA-based foreground/background separation method;
(3) constructing 3D and 2D vascular topological models respectively, and registering the vessel data with a 3D/2D vessel registration method; for the real-time intraoperative dynamic vessel data, rapidly compensating the deformation of the 2D vessel with a distance-transformation-based method;
(4) projecting the 3D vessel model onto the XRA imaging plane according to the transformation from the 3D vessel to the XRA imaging space, and fusing the 3D vessel with the 2D image using a depth-perception-enhanced visualization method.
The invention applies the 3D/2D vessel registration method to vascular interventional surgery navigation: the 3D and 2D vessels are segmented and modeled with CTA and XRA vessel segmentation and extraction techniques, real-time dynamic vessel registration results are obtained with the 3D/2D registration method, and the 3D vessel and the 2D angiographic image are fused and displayed with a depth-perception-enhanced visualization method, thereby registering the multi-modality images automatically, achieving rapid elastic registration, and promoting the wide application of surgical navigation systems.
The advantage of the deep learning method is that no interaction is needed; its accuracy depends on the quality and quantity of the training samples and on the learning capability of the model. Preferably, in step (1), a two-level deep 3D U-Net is used as the convolutional network framework. As shown in fig. 1, the framework comprises a compression path from shallow to deep layers, an expansion path from deep to shallow layers, and feature-copy paths connecting the corresponding layers. Each cube in the figure represents a multi-channel feature map, with the feature size and number of channels (size³ × channels) indicated above it. The input of the network is a single-channel image sub-block of size P × P × P. Feature propagation within the same level uses convolution (Conv) + rectified-linear-unit (ReLU) excitation + batch normalization (BN), indicated by light blue arrows; down-sampling from the shallow to the deep level uses 2 × 2 max pooling (Max pooling), indicated by dark blue arrows; up-sampling from the deep to the shallow level uses deconvolution (DeConv) + ReLU excitation, indicated by green arrows. The up-sampled deep features are concatenated with the copied features (red cubes), and the final layer outputs the segmentation result using Conv + Sigmoid excitation.
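As a rough illustration (not the patent's implementation), the feature-map sizes along the compression and expansion paths can be traced for a P × P × P input; the channel counts and stage names below are assumptions for demonstration:

```python
# Illustrative sketch: trace feature-map (size, channels) through a
# two-level 3D U-Net as described above. "Same"-padded convolutions are
# assumed to preserve spatial size; channel counts are assumptions.

def unet3d_shapes(p, base_channels=16):
    """Stages for a single-channel P x P x P input sub-block."""
    return [
        # Compression path: Conv + ReLU + BN, then 2x2 max pooling.
        ("enc_shallow", p, base_channels),
        ("enc_deep", p // 2, base_channels * 2),
        # Expansion path: deconvolution doubles the spatial size again.
        ("dec_upsampled", p, base_channels * 2),
        # Upsampled features concatenated with the copied shallow features.
        ("dec_concat", p, base_channels * 2 + base_channels),
        # Final layer: Conv + Sigmoid, single-channel probability map.
        ("output", p, 1),
    ]

# Patch sizes from the description: P = 32 (coronary CTA), P = 64 (aorta).
for stage in unet3d_shapes(32):
    print(stage)
```

The skip concatenation shows up as the channel sum in `dec_concat`, which is what allows the shallow features to reach the expansion path.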
Preferably, in step (1), pairs of training images and corresponding segmentation gold standards are used to train the network; the loss function of the network is the cross entropy between the output result and the gold standard, and the parameters are optimized with stochastic gradient descent. The same network model is used for segmenting both the coronary artery and the aorta; owing to the difference in vessel size, the patch size P is 32 for the coronary CTA data and 64 for the aorta data. Data preprocessing removes non-vascular tissue to speed up training. For the coronary CTA data, 30 coronary CTA cases with corresponding gold standards serve as the training set; vesselness filtering is used to enhance the data, bone and lung tissue are screened out by thresholding, and the vessel candidate region of the training set is obtained by morphological dilation of the enhanced region. 66944 pairs of 64 × 64 × 64 image sub-blocks are uniformly selected within the vessel candidate region as the training set and 12223 pairs as the test set, with dense sampling at vessel bifurcations to counter the imbalance between branch and bifurcation samples. In both training and testing on coronary CTA data, the image sub-blocks are selected from the candidate region for computation. For the aortic CTA data, 20 CTA datasets with corresponding segmentation gold standards are used for training and testing; a total of 89634 64 × 64 × 64 image sub-blocks are extracted as network training input and 16446 as test sub-blocks.
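The loss described above — cross entropy between the network output and the segmentation gold standard — can be sketched in a few lines (a toy pure-Python illustration over flattened voxel probabilities, not the patent's code):

```python
import math

# Toy illustration of the training loss: binary cross entropy between the
# network output (voxel probabilities) and the segmentation gold standard,
# which is then minimized by stochastic gradient descent.

def cross_entropy(pred, gold, eps=1e-7):
    """Mean binary cross entropy over flattened voxel probabilities."""
    total = 0.0
    for p, g in zip(pred, gold):
        p = min(max(p, eps), 1.0 - eps)   # clamp to avoid log(0)
        total += -(g * math.log(p) + (1 - g) * math.log(1 - p))
    return total / len(pred)

pred = [0.9, 0.2, 0.8, 0.1]   # network output for four voxels
gold = [1, 0, 1, 0]           # gold-standard labels
print(round(cross_entropy(pred, gold), 4))
```

The loss shrinks toward zero as the predicted probabilities approach the gold-standard labels, which is what the stochastic-gradient-descent updates drive toward.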
Preferably, in step (1), maximum-connected-component processing is applied to the initial segmentation result to remove local outlier noise; a vessel surface model is constructed with a surface-mesh reconstruction method, and the model is used for the 3D/2D fusion display; and the 3D vessel centerline is obtained from the surface model with a triangulated-mesh surface skeleton-line extraction method, with the 3D vessel topological model built from the connectivity of the skeleton lines for topology-based 3D/2D registration.
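The maximum-connected-component step can be sketched as a breadth-first search over voxel coordinates (6-connectivity assumed; a toy illustration, not the patent's implementation):

```python
from collections import deque

# Sketch of the post-processing step above: keep only the largest connected
# component of a binary segmentation to remove local outlier noise.
# Voxels are given as a set of (x, y, z) coordinates; 6-connectivity assumed.

def largest_component(voxels):
    voxels = set(voxels)
    best, seen = set(), set()
    for start in voxels:
        if start in seen:
            continue
        comp = {start}
        seen.add(start)
        queue = deque([start])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                nb = (x + dx, y + dy, z + dz)
                if nb in voxels and nb not in seen:
                    seen.add(nb)
                    comp.add(nb)
                    queue.append(nb)
        if len(comp) > len(best):
            best = comp
    return best

# A 3-voxel vessel fragment plus an isolated noise voxel:
seg = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (10, 10, 10)]
print(sorted(largest_component(seg)))
```

The isolated voxel is dropped because it forms its own one-voxel component, smaller than the vessel fragment.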
Preferably, in step (2), a U-Net network is used as the basis of the generator; because the initial feature maps retain low-level features such as edges and blobs, which are suitable for accurate segmentation, U-Net uses inter-layer skip connections to guarantee that the initial feature maps can propagate to the final layer. The angiographic image is fed into the generator, which outputs a vessel probability map of the same size, with pixel values in [0, 1] representing the probability that the pixel belongs to a vessel. The discriminator receives a pair consisting of an angiographic image and a vessel segmentation image and judges whether the segmentation is a manually annotated gold standard or an output of the generator; the overall framework is shown in fig. 2.
Preferably, in the step (2),
the generator G is represented as a mapping G from the contrast image x to the vessel segmentation image y: x → y, and the discriminator D is expressed from the image pair { x, y } to the binary classification result {0,1}NWherein 0 and 1 denote y as machine-generated and manually labeled, respectively, and N denotes the dimension of the discrimination result; discriminating by using an ImageGAN model, wherein N is 1; the objective function of the network is expressed as formula (1)
The generator optimization is described as a countermeasure process with the discriminator, since G is similar to the conditional GAN model with the image as input, the GAN network aims to solve the equation (2) optimization problem,
accurately judging for training a discriminator D, wherein the maximum value of D (x, y) is taken, and the minimum value of D (x, G (x)) is taken as much as possible; on the other hand, the generator should prevent the arbiter from making the correct decisions by producing an output that is not visible to the real data; since the final goal is to obtain the actual output from the generator, the objective function is defined as the minimum maximum of the objective function.
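The adversarial objective can be illustrated numerically. The sketch below (an illustration with hypothetical discriminator scores, not code from the patent) evaluates the conditional-GAN value function of formula (1):

```python
import math

# Numeric sketch of the GAN value function: the discriminator D scores real
# pairs (x, y) and generated pairs (x, G(x)). D maximizes this value by
# scoring real pairs near 1 and generated pairs near 0; G minimizes it by
# making its output indistinguishable from real data. Scores are illustrative.

def gan_value(d_real_scores, d_fake_scores):
    """L_GAN = mean log D(x, y) + mean log(1 - D(x, G(x)))."""
    real_term = sum(math.log(s) for s in d_real_scores) / len(d_real_scores)
    fake_term = sum(math.log(1 - s) for s in d_fake_scores) / len(d_fake_scores)
    return real_term + fake_term

strong_d = gan_value([0.95, 0.9], [0.05, 0.1])   # D discriminates well
weak_d = gan_value([0.6, 0.55], [0.45, 0.5])     # D barely better than chance
print(strong_d > weak_d)
```

A well-trained discriminator attains a higher value; a generator that fools the discriminator pushes the value back down, which is the minimax dynamic described above.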
Preferably, in the step (2),
the golden standard image is used for strengthening the segmentation task, the cross entropy of the segmentation result and the golden standard is increased in the loss function according to the formula (3), so that the aim of punishing the distance between the gold segmentation result and the golden standard is achieved,
summing the GAN network objective function and the segmentation task loss function to obtain an overall objective function as a formula (4)
G*=arg minG[maxDLGAN(G,D)]+λLSEG(G) (4)
Wherein λ is a weight coefficient that balances the contributions of the two part objective functions;
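As a small numeric illustration (not the patent's code) of how the weight λ balances the two terms of the overall objective:

```python
# Sketch of objective (4): the adversarial term and the segmentation
# cross-entropy term are combined with a weight lambda. Values illustrative.

def total_objective(l_gan, l_seg, lam):
    """G* minimizes max_D L_GAN(G, D) + lambda * L_SEG(G)."""
    return l_gan + lam * l_seg

# A larger lambda shifts emphasis toward matching the gold-standard segmentation:
print(total_objective(l_gan=-0.5, l_seg=0.2, lam=1.0))
print(total_objective(l_gan=-0.5, l_seg=0.2, lam=10.0))
```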
preferably, in step (3), a combined intraoperative 3D/2D + t registration strategy is employed, as shown in fig. 3. Selecting a frame from the XRA sequence as a key frame, and registering the 3D blood vessel to the key frame 2D blood vessel to obtain initial transformationThe initial transformation comprises large-range rigid transformation and local elastic transformation, and one of the 3D/2D registration methods is selected for key frame registration to obtain the initial transformation
After the initial transformation of the key frame is obtained, the registration of the 2D vessel in frame t + 1 takes the registration result of the previous frame as its reference and involves only minor elastic deformation. For fast inter-frame registration, the fast distance-transformation-based 3D/2D registration algorithm DT-Plus is used: for the 3D vessel points y_i and the 2D vessel points x_j, DT-Plus measures the registration accuracy with formula (5),

E = (1/N) Σ_i min_j ‖ P(T(y_i)) − x_j ‖   (5)

where T denotes the transformation, including the elastic deformation of the vessel, which is controlled by a manifold-regularized vessel deformation model, P denotes the projection onto the imaging plane, and N the number of 3D vessel points. The minimum distance between the 3D and 2D points, min_j ‖ P(T(y_i)) − x_j ‖, is obtained by precomputing the distance transformation of the 2D vessel centerline, and the registration result is obtained by optimizing the objective function with the Nelder-Mead algorithm.
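A minimal sketch of the distance-transform idea behind this fast registration, with a brute-force distance map standing in for a real distance-transform algorithm (function names and grid values are illustrative assumptions):

```python
import math

# Sketch of the DT-Plus idea: precompute a distance map of the 2D vessel
# centerline once, then score any candidate transformation by looking up the
# distance at each projected 3D point. Brute-force distances on a tiny grid;
# a real implementation would use a fast distance-transform algorithm.

def distance_transform(centerline, width, height):
    """dist[y][x] = distance from pixel (x, y) to the nearest centerline point."""
    return [[min(math.hypot(x - cx, y - cy) for cx, cy in centerline)
             for x in range(width)] for y in range(height)]

def registration_error(projected_points, dist):
    """Mean centerline distance of the projected 3D vessel points (eq. 5 idea)."""
    return sum(dist[y][x] for x, y in projected_points) / len(projected_points)

centerline = [(2, 2), (3, 2), (4, 2)]     # 2D vessel points x_j
dist = distance_transform(centerline, 8, 8)

aligned = [(2, 2), (4, 2)]                # projected 3D points P(T(y_i))
misaligned = [(2, 5), (4, 5)]
print(registration_error(aligned, dist), registration_error(misaligned, dist))
```

Because the distance map is computed once per frame, each candidate transformation costs only one lookup per point, which is what makes the inter-frame optimization fast.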
The 3D/2D virtual-real fusion display aims to encode the stereoscopic information of the preoperative 3D vessel (virtual) into the plane of the intraoperative real-time 2D image (real). Since human eyes perceive the stereoscopic information of an object through its contour and transparency, the fused display of the 3D vessel and the XRA image is realized with a depth-perception-enhanced visualization method. Preferably, as shown in fig. 4, in step (4),
the object contour is defined as the surface of an object with a normal vector n vertical to the sight line direction v, and an included angle theta between the vectors is obtained through vector dot product, wherein theta is arccos (| n · v |); the basic contour factor k associated with the voxel opacity value is calculated by equation (6)
κ=eS(θ-ψ)(6)
The formula uses two parameters to control the sharpness and thickness of an object, whereFor controlling the sharpness of the contour, # ∈ [0 ]; π/2]A threshold for controlling the thickness of the profile; the contour factor is used for adjusting the color opacity of the voxel, the surface with a smaller included angle theta has higher transparency, a smaller threshold psi can result in a thicker contour, and the higher S is, the clearer the contour is;
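A small sketch of the contour factor, assuming unit vectors n and v; the values of S and ψ below are illustrative assumptions:

```python
import math

# Illustrative sketch of the contour factor of formula (6):
# theta = arccos(|n . v|), kappa = exp(S * (theta - psi)).

def contour_factor(n, v, sharpness_s=8.0, threshold_psi=math.pi / 3):
    dot = abs(sum(a * b for a, b in zip(n, v)))
    theta = math.acos(min(dot, 1.0))   # angle between normal and view direction
    return math.exp(sharpness_s * (theta - threshold_psi))

view = (0.0, 0.0, 1.0)
facing = contour_factor((0.0, 0.0, 1.0), view)      # surface facing the eye
silhouette = contour_factor((1.0, 0.0, 0.0), view)  # normal perpendicular to view
print(silhouette > facing)
```

Silhouette surfaces (θ near π/2) receive a large κ and stay opaque, while surfaces facing the viewer become transparent, which is what conveys the contour cue.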
the depth perception of the object is achieved by an alpha-blending method, calculating the integral of the projection ray through the voxel, whose discrete form is equation (7),
α thereiniAnd CiOpacity and color value of the voxel, respectively; depth ratio gammad∈[0,1]For defining coefficients linearly related to the depth values,
wherein D is the visual depth in the Z-axis direction, DnearAnd DfarDefining a depth range of the object;
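The depth ratio and the discrete alpha-blending sum can be sketched as follows (a toy illustration with scalar intensities standing in for RGB colors; the names are assumptions):

```python
# Toy sketch of depth-enhanced alpha blending: front-to-back compositing of
# (opacity, intensity) samples along a ray, and the linear depth ratio used
# to modulate voxel color.

def depth_ratio(d, d_near, d_far):
    """gamma_d in [0, 1], linear in the viewing depth d, clamped to the range."""
    return min(max((d - d_near) / (d_far - d_near), 0.0), 1.0)

def composite(voxels):
    """voxels: list of (alpha_i, C_i) ordered front to back along the ray."""
    color, transmitted = 0.0, 1.0
    for alpha, c in voxels:
        color += transmitted * alpha * c
        transmitted *= (1.0 - alpha)   # light remaining after this voxel
    return color

print(depth_ratio(5.0, 0.0, 10.0))          # halfway through the depth range
print(composite([(0.5, 1.0), (0.5, 1.0)]))  # two semi-transparent voxels
```

A fully opaque front voxel (α = 1) blocks everything behind it, matching the Π(1 − α_j) attenuation term of formula (7).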
the color RGB values C of the voxels are encoded by the formula (9),
equation (8) defines the color coding of the RGB mode. In addition, different color coding modes, such as an RB mode and a gray mode, may be provided, whose color coding formulas are respectively as follows,
C=(1.0-γd,0,γd) (10)
C=(1.0-γd,1.0-γd,1.0-γd) (11)
in practical application, the color coding mode can be set according to the requirement of an operator.
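A sketch of the selectable color-coding modes; only the RB and gray modes of formulas (10) and (11) are implemented here, since the RGB-mode formula (9) is not reproduced in this text:

```python
# Sketch of the depth-based color-coding modes: RB mode maps near depths to
# red and far depths to blue (eq. 10); gray mode fades with depth (eq. 11).

def encode_color(gamma_d, mode="rb"):
    if mode == "rb":        # eq. (10): near -> red, far -> blue
        return (1.0 - gamma_d, 0.0, gamma_d)
    if mode == "gray":      # eq. (11): near -> bright, far -> dark
        g = 1.0 - gamma_d
        return (g, g, g)
    raise ValueError("unknown mode")

print(encode_color(0.0))          # nearest point: pure red
print(encode_color(1.0))          # farthest point: pure blue
print(encode_color(0.25, "gray"))
```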
It will be understood by those skilled in the art that all or part of the steps of the method in the above embodiments may be implemented by hardware instructed by a program; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method of the above embodiments; the storage medium may be ROM/RAM, a magnetic disk, an optical disk, a memory card, or the like. Therefore, corresponding to the method of the invention, the invention also comprises a device for vascular interventional surgery navigation based on 3D/2D registration, generally expressed as functional modules corresponding to the steps of the method. The device comprises:
a CTA vascular structure extraction module for segmenting the preoperative static three-dimensional CTA data with a convolutional-neural-network-based method;
an XRA vascular structure extraction module for segmenting the intraoperative dynamic XRA sequence images with an RPCA-based foreground/background separation method;
a registration module for constructing 3D and 2D vascular topological models respectively and registering the vessel data with a 3D/2D vessel registration method, and for rapidly compensating the deformation of the 2D vessel with a distance-transformation-based method for the real-time intraoperative dynamic vessel data;
and a 3D/2D virtual-real fusion display module for projecting the 3D vessel model onto the XRA imaging plane according to the transformation from the 3D vessel to the XRA imaging space, and fusing the 3D vessel with the 2D image using a depth-perception-enhanced visualization method.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.
Claims (10)
1. A method for vascular interventional surgery navigation based on 3D/2D registration, characterized in that it comprises the following steps:
(1) for the preoperative static three-dimensional CTA data, segmenting the image with a convolutional-neural-network-based method;
(2) for the intraoperative dynamic XRA data, segmenting the sequence images with an RPCA-based foreground/background separation method;
(3) constructing 3D and 2D vascular topological models respectively, and registering the vessel data with a 3D/2D vessel registration method; for the real-time intraoperative dynamic vessel data, rapidly compensating the deformation of the 2D vessel with a distance-transformation-based method;
(4) projecting the 3D vessel model onto the XRA imaging plane according to the transformation from the 3D vessel to the XRA imaging space, and fusing the 3D vessel with the 2D image using a depth-perception-enhanced visualization method.
2. The method for vascular interventional surgery navigation based on 3D/2D registration according to claim 1, characterized in that: in step (1), a two-level deep 3D U-Net is used as the convolutional network framework, which comprises a compression path from shallow to deep layers, an expansion path from deep to shallow layers, and feature-copy paths connecting the corresponding layers; feature propagation within the same level uses convolution (Conv) + rectified-linear-unit (ReLU) excitation + batch normalization (BN); down-sampling from the shallow to the deep level uses 2 × 2 max pooling (Max pooling); up-sampling from the deep to the shallow level uses deconvolution (DeConv) + ReLU excitation; the up-sampled deep features are concatenated with the copied features, and the final layer outputs the segmentation result using Conv + Sigmoid excitation.
3. The method for vascular interventional surgery navigation based on 3D/2D registration according to claim 2, characterized in that: in step (1), pairs of training images and corresponding segmentation gold standards are used to train the network; the loss function of the network is the cross entropy between the output result and the gold standard, and the parameters are optimized with stochastic gradient descent; the same network model is used for segmenting both the coronary artery and the aorta, and owing to the difference in vessel size, the patch size P is 32 for the coronary CTA data and 64 for the aorta data; data preprocessing removes non-vascular tissue to speed up training; for the coronary CTA data, 30 coronary CTA cases with corresponding gold standards serve as the training set, vesselness filtering is used to enhance the data, bone and lung tissue are screened out by thresholding, and the vessel candidate region of the training set is obtained by morphological dilation of the enhanced region; 66944 pairs of 64 × 64 × 64 image sub-blocks are uniformly selected within the vessel candidate region as the training set and 12223 pairs as the test set, with dense sampling at vessel bifurcations to counter the imbalance between branch and bifurcation samples; in both training and testing on coronary CTA data, the image sub-blocks are selected from the candidate region for computation; for the aortic CTA data, 20 CTA datasets with corresponding segmentation gold standards are used for training and testing, with a total of 89634 64 × 64 × 64 image sub-blocks extracted as network training input and 16446 as test sub-blocks.
4. The method for vessel interventional surgical navigation based on 3D/2D registration as claimed in claim 3, wherein: in step (1), largest-connected-component processing is applied to the initial segmentation result to remove local outlier noise; a vessel surface model is constructed by a surface mesh reconstruction method and used for 3D/2D fusion display; the 3D vessel centerline is extracted from the surface model by a triangulated-mesh skeleton-line extraction method, and a 3D vessel topological model is constructed from the connectivity of the skeleton lines for topology-based 3D/2D registration.
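The largest-connected-component step above can be sketched as a multi-pass BFS; this 2D, 4-connected version is illustrative only (the claim applies the same idea to a 3D volume, where 6- or 26-connectivity would be used):

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected component of a binary 2D mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:  # flood-fill one component
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 1]]
print(largest_component(mask))  # only the 3-voxel column survives
```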
5. The method for vessel interventional surgical navigation based on 3D/2D registration according to claim 4, wherein: in step (2), a U-Net network is used as the backbone of the generator; the inter-layer skip connections of U-Net preserve the initial feature maps so that they propagate to the final layer.
6. The method for vessel interventional surgical navigation based on 3D/2D registration according to claim 5, wherein: in step (2), the generator G is expressed as a mapping G: x → y from the contrast image x to the vessel segmentation image y, and the discriminator D is expressed as a mapping from the image pair {x, y} to the binary classification result {0, 1}^N, where 0 and 1 denote that y is machine-generated or a manual annotation, respectively, and N denotes the dimension of the discrimination output; an ImageGAN model is used for discrimination, with N = 1; the objective function of the network is expressed as formula (1); the optimization of the generator is described as an adversarial process against the discriminator, and since G resembles a conditional GAN model that takes an image as input, the GAN network aims to solve the optimization problem of formula (2): to train the discriminator D to judge accurately, D(x, y) should be maximized while D(x, G(x)) should be made as small as possible; the generator, conversely, should prevent the discriminator from judging correctly by producing outputs indistinguishable from the real data; since the final goal is to obtain realistic outputs from the generator, the objective is defined as the minimax of the objective function.
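Formulas (1) and (2) are embedded images in the original patent and are not reproduced in this text; under the definitions above they would take the standard conditional-GAN form (a reconstruction consistent with the surrounding description, not a verbatim quote of the claim):

```latex
\mathcal{L}_{\mathrm{GAN}}(G, D) =
  \mathbb{E}_{x,y}\!\left[\log D(x, y)\right] +
  \mathbb{E}_{x}\!\left[\log\bigl(1 - D(x, G(x))\bigr)\right] \tag{1}

G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{\mathrm{GAN}}(G, D) \tag{2}
```

Maximizing over D rewards correct real/fake classification; minimizing over G rewards outputs the discriminator cannot distinguish from manual annotations, matching the minimax formulation the claim describes.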
7. The method for vessel interventional surgical navigation based on 3D/2D registration as claimed in claim 6, wherein: in step (2), the gold-standard image is used to reinforce the segmentation task: the cross entropy between the segmentation result and the gold standard is added to the loss function as formula (3), so as to penalize the distance between the segmentation result and the gold standard; the GAN network objective function and the segmentation-task loss function are summed to give the overall objective function as formula (4), where λ is a weighting coefficient that balances the contributions of the two parts of the objective function.
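Formulas (3) and (4) are likewise not reproduced in this text; a binary-cross-entropy segmentation term and its weighted combination with the GAN objective of formula (1), consistent with the description above, would read (a reconstruction, not a verbatim quote):

```latex
\mathcal{L}_{\mathrm{seg}}(G) =
  \mathbb{E}_{x,y}\!\left[-\,y \log G(x) - (1 - y)\log\bigl(1 - G(x)\bigr)\right] \tag{3}

G^{*} = \arg\min_{G}\max_{D}\;
  \mathcal{L}_{\mathrm{GAN}}(G, D) + \lambda\,\mathcal{L}_{\mathrm{seg}}(G) \tag{4}
```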
8. The method for vessel interventional surgical navigation based on 3D/2D registration as claimed in claim 7, wherein: in step (3), a combined intraoperative 3D/2D + t registration strategy is adopted: one frame of the XRA sequence is selected as the key frame, and the 3D vessel is registered to the 2D vessel of the key frame to obtain the initial transformation, which comprises a large-range rigid transformation and a local elastic transformation; one of the 3D/2D registration methods is selected for the key-frame registration to obtain this initial transformation; after the initial transformation of the key frame is obtained, the registration of the 2D vessel in frame t + 1 takes the registration result of the previous frame as reference and involves only minor elastic deformation; for fast inter-frame registration, the fast distance-transform-based 3D/2D registration algorithm DT-Plus is used: for 3D vessel points y_i and 2D vessel points x_j, DT-Plus measures the registration accuracy by formula (5), where the elastic deformation of the vessel is controlled by a manifold-regularized vessel deformation model; the minimum distance between 3D and 2D points is obtained by pre-computing the distance transform of the 2D vessel centerline, and the registration result is obtained by optimizing the objective function with the Nelder-Mead algorithm.
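The precomputation that makes the inter-frame registration fast can be sketched as follows; this is an illustrative toy (a grid-step BFS distance map and a mean point-to-curve lookup), not the patented DT-Plus algorithm or its formula (5):

```python
from collections import deque

def distance_map(centerline, shape):
    """Approximate distance transform of a 2D vessel centerline via
    multi-source BFS (4-neighbor grid-step distance)."""
    h, w = shape
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y, x in centerline:
        dist[y][x] = 0
        q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def registration_error(projected_points, dist):
    """Mean lookup of the precomputed map at the projected 3D points:
    each candidate transformation is scored by cheap table lookups
    instead of nearest-neighbor searches."""
    return sum(dist[y][x] for y, x in projected_points) / len(projected_points)

dt = distance_map({(2, 0), (2, 1), (2, 2)}, (5, 5))
print(registration_error([(2, 1), (4, 2)], dt))  # one point on the curve, one 2 steps off → 1.0
```

An optimizer such as Nelder-Mead would then minimize `registration_error` over the transformation parameters, re-projecting the 3D points at each iteration while the distance map stays fixed.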
9. The method for vessel interventional surgical navigation based on 3D/2D registration according to claim 8, wherein: in step (4), the object contour is defined as the surface of the object whose normal vector n is perpendicular to the viewing direction v; the angle θ between the vectors is obtained from the vector dot product as θ = arccos(|n · v|); the basic contour factor κ associated with the voxel opacity value is calculated by formula (6),

κ = e^{S(θ − ψ)} (6)

where two parameters control the sharpness and thickness of the contour: S controls the sharpness of the contour, and ψ ∈ [0, π/2] is a threshold controlling the thickness of the contour; the contour factor is used to adjust the color opacity of the voxel, so that surfaces with a smaller angle θ are more transparent; a smaller threshold ψ yields a thicker contour, and a larger S yields a sharper contour;
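The contour factor of formula (6) can be sketched directly; the S and ψ values here are illustrative defaults, not taken from the patent, and note that as written the factor is unbounded above, so a renderer would typically clamp it before using it as an opacity multiplier (an assumption, not stated in the claim):

```python
import math

def contour_factor(n, v, S=10.0, psi=math.pi / 4):
    """kappa = exp(S * (theta - psi)), with theta = arccos(|n . v|);
    n and v are unit vectors (surface normal and view direction)."""
    dot = abs(sum(a * b for a, b in zip(n, v)))
    theta = math.acos(min(1.0, dot))  # clamp guards against rounding above 1
    return math.exp(S * (theta - psi))

# Silhouette voxel: normal perpendicular to the view ray → theta = pi/2, large kappa.
edge = contour_factor((1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
# Face-on voxel: normal parallel to the view ray → theta = 0, kappa near zero.
flat = contour_factor((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
print(edge > 1.0 > flat)  # → True
```

Raising S steepens the exponential, sharpening the transition between contour and non-contour voxels; lowering ψ moves the transition toward smaller angles, thickening the contour, as the claim states.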
the depth perception of the object is achieved by an alpha-blending method, calculating the integral of the projection ray through the voxel, whose discrete form is equation (7),
α thereiniAnd CiOpacity and color value of the voxel, respectively; depth ratio gammad∈[0,1]For defining coefficients linearly related to the depth values,
wherein D is the visual depth in the Z-axis direction, DnearAnd DfarDefining a depth range of the object; the color RGB values C of the voxels are encoded by the formula (9),
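Formula (7) is not reproduced in this text; a standard discrete front-to-back alpha-blending accumulation of the kind described (an illustrative sketch, with scalar "colors" standing in for RGB values) is:

```python
def alpha_blend(samples):
    """Front-to-back compositing: samples is a list of (alpha_i, C_i)
    pairs ordered from nearest to farthest along the projection ray."""
    color, alpha = 0.0, 0.0
    for a_i, c_i in samples:
        color += (1.0 - alpha) * a_i * c_i  # contribution attenuated by accumulated opacity
        alpha += (1.0 - alpha) * a_i
        if alpha >= 1.0:  # early ray termination: nothing behind is visible
            break
    return color, alpha

# A fully opaque first voxel hides everything behind it.
print(alpha_blend([(1.0, 0.5), (1.0, 0.9)]))  # → (0.5, 1.0)
```

A depth ratio of the linear form γ_d = (D − D_near) / (D_far − D_near) (an assumed reconstruction of formula (8), consistent with the D_near/D_far range above) could then modulate each C_i so that deeper voxels are rendered darker, enhancing depth perception.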
10. A guide-wire 3D simulation tracking device based on 3D/2D registration, characterized in that it comprises:

a CTA vascular structure extraction module for segmenting preoperative static three-dimensional CTA data using a convolutional-neural-network-based method;

an XRA vascular structure extraction module for segmenting the sequence images of intraoperative dynamic XRA data using an RPCA-based foreground/background separation method;

a registration module for respectively constructing the 3D and 2D vessel topological models and registering the vessel data by a vessel 3D/2D registration method, wherein for real-time dynamic intraoperative vessel data, the deformation of the 2D vessel is rapidly compensated by a distance-transform-based method; and

a 3D/2D virtual-real fusion display module for projecting the 3D vessel model onto the XRA imaging plane according to the transformation from the 3D vessel to the XRA imaging space, and fusing the 3D vessel with the 2D image by an enhanced depth-perception visualization method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010554304.8A CN111784727B (en) | 2020-06-17 | 2020-06-17 | Method and device for applying to vessel intervention operation navigation based on 3D/2D registration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111784727A true CN111784727A (en) | 2020-10-16 |
CN111784727B CN111784727B (en) | 2023-04-07 |
Family
ID=72757167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010554304.8A Active CN111784727B (en) | 2020-06-17 | 2020-06-17 | Method and device for applying to vessel intervention operation navigation based on 3D/2D registration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111784727B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6351513B1 (en) * | 2000-06-30 | 2002-02-26 | Siemens Corporate Research, Inc. | Fluoroscopy based 3-D neural navigation based on co-registration of other modalities with 3-D angiography reconstruction data |
CN103892861A (en) * | 2012-12-28 | 2014-07-02 | 北京思创贯宇科技开发有限公司 | CT-XA-image- multi-dimensional fused-based simulation navigation system and method |
CN103914814A (en) * | 2012-12-28 | 2014-07-09 | 北京思创贯宇科技开发有限公司 | Image fusion method and system for CT coronary image and XA angiography image |
CN109993730A (en) * | 2019-03-20 | 2019-07-09 | 北京理工大学 | 3D/2D blood vessel method for registering and device |
CN111260704A (en) * | 2020-01-09 | 2020-06-09 | 北京理工大学 | Vascular structure 3D/2D rigid registration method and device based on heuristic tree search |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348860A (en) * | 2020-10-27 | 2021-02-09 | 中国科学院自动化研究所 | Vessel registration method, system and device for endovascular aneurysm surgery |
CN115082428A (en) * | 2022-07-20 | 2022-09-20 | 江苏茂融智能科技有限公司 | Metal spot detection method and system based on neural network |
CN115035001A (en) * | 2022-08-11 | 2022-09-09 | 北京唯迈医疗设备有限公司 | Intraoperative navigation system based on DSA imaging device, computing device and program product |
CN115035001B (en) * | 2022-08-11 | 2022-12-09 | 北京唯迈医疗设备有限公司 | Intraoperative navigation system, computing device and program product based on DSA imaging device |
CN117539791A (en) * | 2023-12-09 | 2024-02-09 | 广州翼辉信息技术有限公司 | Automatic test system of embedded software |
CN117539791B (en) * | 2023-12-09 | 2024-04-30 | 广州翼辉信息技术有限公司 | Automatic test system of embedded software |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||