CN113469258B - X-ray angiography image matching method and system based on two-stage CNN - Google Patents


Publication number
CN113469258B
CN113469258B (application CN202110772730.3A)
Authority
CN
China
Prior art keywords
image
contrast
branch
matching
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110772730.3A
Other languages
Chinese (zh)
Other versions
CN113469258A (en)
Inventor
刘市祺 (Liu Shiqi)
谢晓亮 (Xie Xiaoliang)
侯增广 (Hou Zengguang)
曲新凯 (Qu Xinkai)
韩文正 (Han Wenzheng)
周小虎 (Zhou Xiaohu)
马西瑶 (Ma Xiyao)
周彦捷 (Zhou Yanjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Huadong Hospital
Original Assignee
Institute of Automation of Chinese Academy of Science
Huadong Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science and Huadong Hospital
Priority to CN202110772730.3A
Publication of CN113469258A
Application granted
Publication of CN113469258B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention belongs to the field of image recognition, and particularly relates to an X-ray angiography image matching method and system based on a two-stage CNN, aiming to solve the prior-art problem that an accurate coronary angiographic image cannot be acquired and matched with a non-contrast image. The invention comprises the following steps: acquiring an angiographic image sequence containing coronary arteries; dividing the image sequence into a contrast image set and a non-contrast image set through a lightweight classification CNN network; extracting, from the contrast image set, sequences each covering at least one heartbeat cycle to obtain contrast image sequences, and storing all the extracted contrast image sequences as a contrast image library; and searching a matching contrast image for each non-contrast image from the contrast image library through a lightweight matching CNN network. The invention enlarges the Euclidean distance of unmatched data pairs and reduces the Euclidean distance of matched data pairs, thereby improving the matching accuracy between non-contrast and contrast images.

Description

X-ray angiography image matching method and system based on two-stage CNN
Technical Field
The invention belongs to the field of image recognition, and particularly relates to an X-ray angiography image matching method and system based on two-stage CNN.
Background
In an intravascular interventional operation, human vascular tissue is penetrated by X-rays and is therefore invisible in an X-ray image, so in order to determine the relative pose of a catheter or other interventional instrument and a blood vessel, the doctor needs to inject a high-density contrast medium into the blood vessel to display its shape. Because coronary artery pressure is high and the blood flow rate is fast, the injected contrast agent is washed away within a few seconds; in clinical practice, one frame of the X-ray angiographic sequence is usually fixed after each contrast injection as the intraoperative reference image. Under the influence of breathing and heartbeat, the coronary arteries undergo severe elastic deformation, so the doctor must inject contrast agent frequently to update the reference image.
However, excessive injection of contrast medium places an additional excretory burden on the patient's kidneys and may even cause physical harm to patients with poor renal function. On the other hand, the prevalence of cancer is increased among doctors exposed to X-ray radiation over long periods; to guard against radiation injury, doctors must operate wearing heavy lead protective clothing, and standing under this load for a long time can itself damage their bodies.
If X-ray angiographic and non-angiographic images under the influence of the heartbeat could be captured and classified from a small number of angiographic video sequences, salient features could be extracted from the angiographic and non-angiographic images respectively and similar information searched among those features, thereby training a model that matches X-ray angiographic and non-angiographic images sharing the same deformation. Such a model could then be used during surgery to match pre-extracted X-ray angiographic images with intraoperative real-time non-angiographic images.
At present there is little research on blood-vessel matching, and most of it addresses matching of the scleral vessels of the eye or of the abdominal aorta. The scleral capillaries of the eye are red, so their structure is clearly visible without contrast agent; the abdominal aorta is far from the heart, its displacements and deformations are very slight, and its vessels are thicker than the coronary arteries, so its deformation is almost negligible relative to the vessel diameter. Matching research on coronary angiographic images is therefore more complex and difficult, and highly innovative.
Disclosure of Invention
In order to solve the above-mentioned problems in the prior art, namely that the prior art cannot acquire an accurate angiographic image of a coronary artery and match it with a non-contrast image, the present invention provides an X-ray angiographic image matching method based on a two-stage CNN, specifically comprising:
step S100, acquiring an angiography image sequence containing coronary arteries;
step S200, dividing the angiography image sequence containing the coronary artery into an angiography image set and an angiography-free image set through a classification branch of a lightweight classification matching CNN network;
step S300, extracting a sequence at least comprising one heartbeat cycle based on the contrast image set to obtain a contrast image sequence, and storing all the extracted contrast image sequences as a contrast image library;
and step S400, based on the non-contrast image set, searching a matching contrast image for each non-contrast image from the contrast image library through the matching branch of the lightweight classification matching CNN network.
In some preferred embodiments, the classification branch of the lightweight classification matching CNN network is improved from the Xception network; the input images are denoised by a parallel three-channel preprocessing method and classified into a contrast image set and a non-contrast image set.
In some preferred embodiments, the parallel three-channel preprocessing method comprises performing Gaussian filtering, mean filtering, and histogram equalization respectively on the three channels of an input image before classification.
In some preferred embodiments, the matching branch of the lightweight classification matching CNN network is a dual-loss auxiliary training network built on a pseudo-Siamese network, and specifically comprises a main path and an auxiliary path;
the main path comprises a pre-branching portion and a pseudo-twinning dense module;
the front branch part comprises a first front branch part and a second front branch part, the first front branch part and the second front branch part have the same structure and do not share weights, and the structure of the first front branch part is the same as that of the first three convolution modules of VGG 19;
the pseudo-twin dense module is a Pseudo-Siamese Dense Block network;
the auxiliary path comprises a post-branch part and an auxiliary path sharing part;
the post-branching part comprises a first post-branching part and a second post-branching part, which have the same structure but do not share weights; the structure of the first post-branching part is the same as the last two convolution modules of VGG19. The first branch rear part and the second branch rear part are connected respectively to the outputs of the first branch front part and the second branch front part; the outputs of the first and second post-branch parts are concatenated and followed by a convolutional layer and two fully connected layers.
In some preferred embodiments, the test method of the matching branch of the lightweight classification matching CNN network is as follows:
step A100, inputting the non-contrast image set and the contrast image sequences in the contrast image library into the first branch front part and the second branch front part respectively, and obtaining a non-contrast characteristic image and a contrast characteristic image through the first branch front part and the second branch front part respectively;
step A200, sending the non-contrast characteristic image and the contrast characteristic image to the pseudo-twin dense module after a maximum pooling operation;
step A300, the pseudo-twin dense module outputs the Euclidean distance between the non-contrast characteristic image and the contrast characteristic image;
step A400, the contrast image corresponding to the pair of non-contrast characteristic image and contrast characteristic image with the minimum Euclidean distance is the matching contrast image.
In some preferred embodiments, the training method of the matching branch of the lightweight classification matching CNN network is as follows:
step B100, acquiring a labeled contrast image training sequence and a non-contrast image training sequence;
step B200, based on the contrast image training sequence and the non-contrast image training sequence, obtaining a non-contrast characteristic image and a contrast characteristic image by the method of step A100, and obtaining a matched contrast image by the method of steps A100 to A400;
step B300, based on the non-contrast characteristic image and the contrast characteristic image, generating an auxiliary-path non-contrast characteristic image and an auxiliary-path contrast characteristic image through the first branch rear part and the second branch rear part;
step B400, merging the auxiliary-path non-contrast characteristic image and the auxiliary-path contrast characteristic image and generating a binary classification result through the auxiliary-path shared part;
step B500, adjusting the auxiliary-path parameters by a backpropagation optimization method based on the binary classification result until the cross-entropy loss function is lower than a preset threshold;
adjusting the main-path parameters by a backpropagation optimization method based on the matched contrast image until the contrastive loss function is lower than a preset threshold;
and when both the cross-entropy loss function and the contrastive loss function are lower than their preset thresholds, the trained matching branch of the lightweight classification matching CNN network is obtained.
In some preferred embodiments, the cross-entropy loss function is:

L_CE = −(1/N) Σ_{i=1}^{N} [ y_i·log(p_i) + (1 − y_i)·log(1 − p_i) ]

wherein y_i denotes the label of the i-th sample, p_i the probability that the i-th sample is matched correctly, N the total number of samples, and i the sample index.
In some preferred embodiments, the contrastive loss function L is:

L = Σ_{i=1}^{P} l(Y_i, X1_i, X2_i)

l = Y·D(X1, X2)² + (1 − Y)·[max(0, m − D(X1, X2))]²

D(X1, X2) = ‖G(X1) − G(X2)‖₂

wherein the contrastive loss function L is the sum of the individual losses l over all data pairs, P is the number of data pairs, X1 and X2 respectively denote the input non-contrast image set and contrast image sequence, Y is the sample label, i is the sample index, G denotes the output of each input after passing through its main path, D is the Euclidean distance between the two output vectors after the network, and m is a preset contribution threshold.
In some preferred embodiments, the dense module concatenates the non-contrast characteristic image and the contrast characteristic image along the channel dimension after maximum pooling, generates a concatenated characteristic image after convolution and ReLU activation, and splits it into a first dense module branch and a second dense module branch, which have the same structure but do not share weights. Both branches apply hierarchical dense connections to the concatenated characteristic image; the first dense module branch additionally feeds the non-contrast characteristic image into its hierarchical dense connections, and the second dense module branch additionally feeds the contrast characteristic image into its hierarchical dense connections, yielding a densely connected non-contrast image feature vector and a densely connected contrast image feature vector.
On the other hand, the invention provides an X-ray angiography image matching system based on two-stage CNN, which comprises an image acquisition unit, an image classification unit, an angiography image sequence extraction unit and an image matching unit;
the image acquisition unit is configured to acquire a contrast image sequence containing coronary arteries;
the image classification unit is configured to divide the angiography image sequence containing the coronary artery into a contrast image set and a non-contrast image set through the classification branch of the lightweight classification matching CNN network;
the contrast image sequence extraction unit is configured to extract a sequence at least including one heartbeat cycle based on the contrast image set to obtain a contrast image sequence, and store all the extracted contrast image sequences as a contrast image library;
the image matching unit is configured to search, based on the non-contrast image set, a matching contrast image for each non-contrast image from the contrast image library through the matching branch of the lightweight classification matching CNN network.
The invention has the beneficial effects that:
(1) The X-ray angiography image matching method based on the two-stage CNN first passes the complete angiography process image sequence through the classification branch and then the matching branch of the classification matching CNN network, enlarging the Euclidean distance of unmatched data pairs and reducing that of matched data pairs, thereby improving the matching accuracy between non-contrast and contrast images.
(2) The X-ray angiography image matching method based on the two-stage CNN preprocesses images with the three parallel channels jointly rather than with the three methods separately, so different noise types are filtered comprehensively without significantly affecting the final classification task, improving the accuracy of matching non-contrast and contrast images.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart of an X-ray angiography image matching method based on two-stage CNN according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a lightweight matching CNN network of an X-ray angiography image matching method based on a two-stage CNN according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a pseudo-twinning dense module of an X-ray angiography image matching method based on two-stage CNN according to an embodiment of the present invention;
FIG. 4 is a diagram of the matching effect of the two-stage CNN-based X-ray angiography image matching method according to the embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides an X-ray angiography image matching method based on a two-stage CNN, which first passes the complete angiography process image sequence through the classification branch and then the matching branch of the classification matching CNN network, enlarging the Euclidean distance of unmatched data pairs and reducing that of matched data pairs, thereby improving the matching accuracy between non-contrast and contrast images.
The invention discloses an X-ray angiography image matching method based on two-stage CNN, which comprises the following steps:
step S100, acquiring an angiography image sequence containing coronary arteries;
step S200, dividing the angiography image sequence containing the coronary artery into an angiography image set and an angiography-free image set through a classification branch of a lightweight classification matching CNN network;
step S300, extracting a sequence at least comprising one heartbeat cycle based on the contrast image set to obtain a contrast image sequence, and storing all the extracted contrast image sequences as a contrast image library;
and step S400, based on the non-contrast image set, searching a matching contrast image for each non-contrast image from the contrast image library through the matching branch of the lightweight classification matching CNN network.
In order to more clearly describe the two-stage CNN-based X-ray angiography image matching method of the present invention, the following describes the steps in the embodiment of the present invention in detail with reference to fig. 1.
The two-stage CNN-based X-ray angiography image matching method according to the first embodiment of the present invention includes steps S100 to S400, each described in detail as follows:
step S100, acquiring an angiography image sequence containing coronary arteries;
step S200, dividing the angiography image sequence containing the coronary artery into an angiography image set and an angiography-free image set through a classification branch of a lightweight classification matching CNN network;
in this embodiment, the lightweight classification is matched with the classification branch of the CNN network, and based on the Xception network improvement, the input image is denoised and classified into an image set with contrast and an image set without contrast by a parallel three-channel preprocessing method.
In this embodiment, the parallel three-channel preprocessing method comprises performing Gaussian filtering, mean filtering, and histogram equalization respectively on the three channels of an input image before classification.
Because the input image is grayscale and the pixel values of its three channels are identical, the proposed parallel three-channel preprocessing filters different noise types comprehensively. Although Gaussian and mean filtering blur the edges of the vessel image, the weakened edge features do not excessively affect the classification task of this step, and the classification accuracy of this step is still greatly improved.
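As an illustrative sketch (not the patent's implementation), the parallel three-channel preprocessing described above can be prototyped in NumPy as follows; the function names, filter parameters (sigma, window size), and the 8-bit value range are assumptions:

```python
import numpy as np

def _filter2d(img, kernel):
    """Naive 2-D correlation with edge padding (illustrative, not fast)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def preprocess_three_channel(gray, sigma=1.0, size=3):
    """Build one 3-channel input from a grayscale frame:
    channel 0 Gaussian-filtered, channel 1 mean-filtered,
    channel 2 histogram-equalized (parameter values are illustrative)."""
    gray = gray.astype(np.float32)
    # Gaussian kernel built from the separable 1-D profile
    ax = np.arange(size) - size // 2
    g1 = np.exp(-ax**2 / (2.0 * sigma**2))
    gk = np.outer(g1, g1)
    gk /= gk.sum()
    ch_gauss = _filter2d(gray, gk)                                   # Gaussian filtering
    ch_mean = _filter2d(gray, np.full((size, size), 1.0 / size**2))  # mean filtering
    # histogram equalization on the 8-bit image via the cumulative histogram
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float32)
    cdf = 255.0 * cdf / cdf[-1]
    ch_eq = np.interp(gray.ravel(), np.arange(256), cdf).reshape(gray.shape).astype(np.float32)
    return np.stack([ch_gauss, ch_mean, ch_eq], axis=-1)
```

The stacked output can then be fed to the classification network in place of the three identical grayscale channels.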
Step S300, extracting a sequence at least comprising one heartbeat cycle based on the contrast image set to obtain a contrast image sequence, and storing all the extracted contrast image sequences as a contrast image library;
And step S400, based on the non-contrast image set, searching a matching contrast image for each non-contrast image from the contrast image library through the matching branch of the lightweight classification matching CNN network. In this embodiment, each non-contrast image is matched against the contrast image sequences, and the order and continuity of the non-contrast images are not restricted during matching. The matching effect of the present invention is shown in fig. 4.
In this embodiment, the matching branch of the lightweight classification matching CNN network is shown in fig. 2, where 1, 2, 3, 4, 5 and 6 form the main path and 7, 8, 9, 10, 11 and 12 form the auxiliary path; the connections between 2 and 3, between 3 and 7, between 7 and 8, and between 3 and 4 are max pooling, and the connections between 9 and 10 and between 4 and 5 are global average pooling. The branch is a dual-loss auxiliary training network built on a pseudo-Siamese network; all convolution operations in the invention are preferably realized with 3×3 convolution kernels. The branch specifically comprises a main path and an auxiliary path;
the main path comprises a pre-branching portion and a pseudo-twinning dense module;
the front branch part comprises a first branch front part and a second branch front part, which have the same structure but do not share weights; the structure of the first branch front part is the same as that of the first three convolution modules of VGG19. Networks with the same structure but unshared weights can better capture associated features between images with larger differences.
The pseudo-twin dense module is a Pseudo-Siamese Dense Block network;
the auxiliary path comprises a post-branch part and an auxiliary path sharing part;
the post-branching part comprises a first post-branching part and a second post-branching part, which have the same structure but do not share weights; the structure of the first post-branching part is the same as the last two convolution modules of VGG19. The first branch rear part and the second branch rear part are connected respectively to the outputs of the first branch front part and the second branch front part; the outputs of the first and second post-branch parts are concatenated and followed by a convolutional layer and two fully connected layers.
In this embodiment, the lightweight matching network generates Euclidean distances between the non-contrast image and the contrast image sequences through the main path only in the test phase, and is trained with two-way backpropagation through both the main path and the auxiliary path in the training phase.
In this embodiment, the test method of the matching branch of the lightweight classification matching CNN network is as follows:
step A100, inputting the non-contrast image set and the contrast image sequences in the contrast image library into the first branch front part and the second branch front part respectively, and obtaining a non-contrast characteristic image and a contrast characteristic image through the first branch front part and the second branch front part respectively;
step A200, the non-contrast characteristic image and the contrast characteristic image are sent to the pseudo-twinning dense module after maximum pooling operation;
step A300, the pseudo-twin dense module outputs the Euclidean distance between the non-contrast characteristic image and the contrast characteristic image;
in this embodiment, as shown in fig. 3, the dense module concatenates the non-contrast characteristic image and the contrast characteristic image in the channel dimension after maximum pooling, generates a concatenated characteristic image after convolution and ReLU activation, and subdivides the concatenated characteristic image into a first dense module branch and a second dense module branch which have the same structure but do not share a weight, where the first dense module branch and the second dense module branch perform hierarchical dense connection on the concatenated characteristic image, the first dense module branch further participates the non-contrast characteristic image in the hierarchical dense connection, and the second dense module branch further considers the contrast characteristic image to participate in the hierarchical dense connection, so as to obtain a densely connected non-contrast image characteristic vector and a densely connected contrast image characteristic vector. The layers in fig. 3 that are connected by the same curved arrow represent densely connecting the same feature map.
Step A400, the contrast image corresponding to the pair of non-contrast characteristic image and contrast characteristic image with the minimum Euclidean distance is the matching contrast image.
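The nearest-neighbour retrieval of step A400 can be sketched as follows, assuming each image has already been reduced to a feature vector by the network; the names are illustrative:

```python
import numpy as np

def best_match(nc_vec, contrast_vecs):
    """Return the index of the contrast feature vector closest (in
    Euclidean distance) to the non-contrast feature vector, and that distance."""
    d = np.linalg.norm(contrast_vecs - nc_vec, axis=1)  # distance to every library entry
    j = int(np.argmin(d))
    return j, float(d[j])
```

In practice the library side would hold one vector per frame of each stored contrast sequence, so the returned index identifies the matching contrast image.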
In this embodiment, the training method of the matching branch of the lightweight classification matching CNN network is as follows:
step B100, acquiring a labeled contrast image training sequence and a non-contrast image training sequence;
step B200, based on the contrast image training sequence and the non-contrast image training sequence, obtaining a non-contrast characteristic image and a contrast characteristic image by the method of step A100, and obtaining a matched contrast image by the method of steps A100 to A400;
step B300, based on the non-contrast characteristic image and the contrast characteristic image, generating an auxiliary-path non-contrast characteristic image and an auxiliary-path contrast characteristic image through the first branch rear part and the second branch rear part;
step B400, merging the auxiliary-path non-contrast characteristic image and the auxiliary-path contrast characteristic image and generating a binary classification result through the auxiliary-path shared part, wherein the binary classification result indicates whether the match is correct or wrong;
step B500, adjusting the auxiliary-path parameters and the branch front part parameters by a backpropagation optimization method based on the binary classification result until the cross-entropy loss function is lower than a preset threshold;
in this embodiment, the cross entropy loss function is shown in equation (1):
Figure GDA0003496675530000121
wherein, yiLabel representing the ith sample, piThe probability that the ith sample is matched correctly is shown, N is the total number of samples, and i is the sample serial number.
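Assuming the standard binary cross-entropy form consistent with the symbol definitions above (an assumption, since the original equation is rendered only as an image), the loss can be computed as:

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """Mean binary cross-entropy over N samples.

    y: labels y_i in {0, 1}; p: predicted probabilities p_i of a correct match.
    eps clips p away from 0 and 1 so the logarithms stay finite."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
```

A perfect prediction drives the loss toward zero, while a maximally uncertain prediction (p = 0.5) gives log 2 per sample.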
Based on the matched contrast image, the main-path parameters are adjusted by a backpropagation optimization method until the contrastive loss function is lower than a preset threshold;
in this embodiment, the contrast loss function L is:
Figure GDA0003496675530000122
Figure GDA0003496675530000123
D(X1,X2)=‖G(X1)-G(X2)‖2 (4)
wherein the contrast loss function L represents the sum of the individual loss functions L of all data pairs, P represents the number of data pairs, X1And X2The method comprises the steps of respectively representing input non-contrast images and input contrast image sequences, wherein Y represents a label of a sample, i represents a sample serial number, G represents an output of each input after passing through a main path where the input is located, D represents a Euclidean distance between output vectors of the two inputs after passing through a network, and m is a preset contribution threshold value. Equation (3) represents the specific content of l, the first term sum term represents the loss function for the matching data pair, and is the Euclidean distance between the two input corresponding output vectorsThe smaller the distance between outputs, the smaller the loss function, the squared distance of the distance; the second term sum term is a loss function for unmatched data, and is 0 or the square of the difference between a preset contribution threshold m and the Euclidean distance, and the larger the distance between outputs is, the smaller the loss function is. The purpose of setting the contribution threshold m is to set a radius so that only unmatched data with a certain range of euclidean distances will contribute to model optimization. D of formula (4) represents the Euclidean distance between the output vectors of two inputs after passing through the network, G represents the output of every input after passing through the branch front part, through this loss function, can make the Euclidean distance of the matching data pair diminish, and the Euclidean distance of the mismatching data pair enlarges, and then has increased the accuracy of matching.
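Under the convention that Y = 1 marks a matched pair (an assumption, since the patent text does not state the label convention explicitly), equations (2) to (4) can be sketched in NumPy:

```python
import numpy as np

def contrastive_loss(G1, G2, Y, m=1.0):
    """Contrastive loss over P data pairs.

    G1, G2: main-path outputs for the two inputs, shape (P, feat_dim)
    Y: labels, assumed 1 for matched pairs and 0 for unmatched pairs
    m: preset contribution (margin) threshold."""
    D = np.linalg.norm(G1 - G2, axis=1)                    # Eq. (4)
    # Eq. (3): matched pairs are pulled together; unmatched pairs
    # contribute only while their distance is still below the margin m
    l = Y * D**2 + (1.0 - Y) * np.maximum(0.0, m - D)**2
    return float(l.sum())                                   # Eq. (2)
```

Note that an unmatched pair already farther apart than m contributes zero, which is exactly the radius behaviour described above.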
And when the cross entropy loss function and the contrast loss function are both lower than a preset threshold value, obtaining a matching branch of the trained lightweight classified matching CNN network.
The X-ray angiography image matching method based on two-stage CNN of the present invention is, at present, the first method proposed for matching angiography images. Its rank-1 accuracy reaches 59.28%, its rank-3 accuracy reaches 91.00%, the rank-1 recall of the guide wire within the vessel reaches 78.22%, and the Hausdorff distance reaches 4.80341, achieving good results.
The optimizer adopted in this embodiment is the Adam optimization algorithm with a learning rate of 0.001. Each model is trained with a batch size of 10 for 100 epochs.
In order to evaluate the contribution of the selected network structure to the classification-stage method of the present invention, this embodiment performed comparative experiments on different backbone networks; the results are shown in Table 1. None of the experiments in Table 1 used any image preprocessing.
TABLE 1 statistical table of performance comparison of Xceptions with other networks
As can be seen from Table 1, the Xception network achieves the highest classification accuracy and precision, and the smallest video memory occupation of the networks compared. Although the classification performance of VGG19 is close to Xception, its video memory occupation is more than 1.5 times that of Xception. While holding a clear advantage in model size, Xception also maintains strong classification performance and fast convergence, making it an ideal choice for the classification-stage task.
In order to verify the effectiveness of the image preprocessing method, this embodiment compares different image preprocessing methods; the results are shown in Table 2.
TABLE 2 model Performance comparison statistical tables under different preprocessing methods
As can be seen from Table 2, the three-channel preprocessing method improves the experimental results compared with the model without preprocessing, whereas Frangi filtering and CLAHE actually degrade the results.
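The parallel three-channel preprocessing (Gaussian filtering, mean filtering, and histogram equalization applied in parallel and stacked as three channels) might be sketched as below. The filter sizes and the 8-bit equalization are assumptions; the patent does not specify them.

```python
import numpy as np
from scipy import ndimage

def three_channel_preprocess(img):
    """Expand a single-channel X-ray frame into three preprocessed channels.

    Channel 1: Gaussian-filtered image (sigma is an assumed value).
    Channel 2: mean-filtered image (window size is an assumed value).
    Channel 3: histogram-equalized image (8-bit equalization assumed).
    """
    img = img.astype(np.float64)
    gauss = ndimage.gaussian_filter(img, sigma=1.0)
    mean = ndimage.uniform_filter(img, size=3)
    # histogram equalization via the normalized cumulative histogram
    u8 = np.clip(img, 0, 255).astype(np.uint8)
    hist = np.bincount(u8.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()
    equalized = cdf[u8] * 255.0
    return np.stack([gauss, mean, equalized], axis=-1)
```

The stacked (H, W, 3) result is then fed to the Xception-based classification branch in place of a plain grayscale input.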
In the two-stage CNN provided by the present invention, the matching stage is a double-loss auxiliary training network based on a pseudo-twin network. The main path of the network contains a pseudo-twin dense module and adopts a contrast loss function; the auxiliary path consists of several convolutional layers and adopts a cross entropy loss function. Both paths are back-propagated during training; only the main-path network is used during testing.
A twin (Siamese) network consists of two identical CNNs that share weights, while a pseudo-twin network consists of two identical CNNs that do not share weights. The inputs of the matching CNN at this stage are a pair of corresponding contrast and non-contrast images, which differ appreciably from each other; a pseudo-twin structure is therefore adopted, as it better captures the correlated features between images with larger differences.
Due to the twin structure, the model is twice the size of a single backbone network, so an excessively deep convolutional neural network is unsuitable: the model would be too large, the program too slow to run, or even beyond what the hardware can support. VGGNet is a classical convolutional neural network with a simple structure and strong performance; all of its convolution kernels are 3 x 3, which captures more effective information at a lower computational cost than larger kernels. In the proposed double-loss auxiliary training network, all convolution operations therefore use 3 x 3 kernels; the three convolution modules before the branch point have the same structure as the first three convolution modules of VGG19, and the two convolution modules of the auxiliary path after the branch point are the same as the last two convolution modules of VGG19.
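The distinction between shared and non-shared branch weights can be illustrated with a toy sketch. The `Branch` class below is a hypothetical linear stand-in for the VGG19-style branch fronts, purely to show the weight layout; it is not the actual network.

```python
import numpy as np

class Branch:
    """Toy stand-in for one branch front (three VGG19-style conv modules).
    A single linear map is used only to illustrate weight sharing."""
    def __init__(self, in_dim, out_dim, rng):
        self.W = rng.standard_normal((out_dim, in_dim))

    def __call__(self, x):
        return self.W @ x

rng = np.random.default_rng(0)
in_dim, out_dim = 8, 4

# Twin network: both inputs pass through the SAME weights.
shared = Branch(in_dim, out_dim, rng)
twin = (shared, shared)

# Pseudo-twin network: same architecture, separately initialized weights,
# so each branch can specialize to its own modality (contrast vs non-contrast).
pseudo_twin = (Branch(in_dim, out_dim, rng), Branch(in_dim, out_dim, rng))

x1, x2 = rng.standard_normal(in_dim), rng.standard_normal(in_dim)
d_twin = np.linalg.norm(twin[0](x1) - twin[1](x2))
d_pseudo = np.linalg.norm(pseudo_twin[0](x1) - pseudo_twin[1](x2))
```

In the twin case the two tuple entries are one and the same object; in the pseudo-twin case they hold independent weight matrices, which is exactly the property that doubles the model size.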
For this task, the rank-n matching accuracy can be counted by analogy with the pedestrian re-identification problem. The Euclidean distances of the feature vectors of all input image pairs are sorted in ascending order; the rank-n accuracy is the rate at which the image labelled as matching appears among the n images with the smallest Euclidean distance. However, since the images to be matched are consecutive frames of a video sequence, with only small differences in morphology and position between neighbours, a prediction is considered correct in this task as long as a labelled matching image appears among the frames adjacent to the predicted image.
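The rank-n criterion with adjacent-frame tolerance might be computed as follows; a sketch, with a tolerance of one frame on either side as described (the function name and signature are illustrative).

```python
import numpy as np

def rank_n_correct(distances, true_idx, n, tol=1):
    """Rank-n hit test with adjacent-frame tolerance.

    distances : (K,) Euclidean distances between the non-contrast query
                and each of the K candidate contrast frames.
    true_idx  : index of the frame labelled as the match.
    n         : how many nearest candidates to inspect.
    tol       : frames within +/- tol of the label also count as correct,
                since neighbouring frames of the sequence are nearly identical.
    """
    top_n = np.argsort(distances)[:n]          # n smallest distances
    return bool(np.any(np.abs(top_n - true_idx) <= tol))
```

The reported rank-n accuracy is then the fraction of queries for which this test returns True.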
Because in an intravascular interventional operation the guide wire is usually inserted first, to guide the subsequent interventional instruments, and the guide wire can only move inside the vessel, the proportion of the guide wire lying within the vessel is adopted as an evaluation index. Recall is an index of how much of the positive sample is predicted correctly; since the correctly predicted positive sample in this task is the part of the guide wire lying over the vessel region of the predicted image, recall can here be defined as the ratio of the area where the guide wire overlaps the vessel to the total area of the guide wire. The larger this ratio, the better the matching. When Recall is 1, the guide wire lies entirely inside the vessel and the matching is best; when Recall is 0, the guide wire lies entirely outside the vessel and the matching is worst.
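Given binary segmentation masks, this guide-wire recall reduces to a ratio of pixel counts; a minimal sketch (the function name and mask representation are assumptions):

```python
import numpy as np

def guidewire_recall(wire_mask, vessel_mask):
    """Recall = (area where guide wire overlaps vessel) / (total wire area).

    wire_mask   : boolean array, True where the guide wire is segmented
                  in the non-contrast image.
    vessel_mask : boolean array, True where the vessel is segmented
                  in the matched contrast image.
    """
    wire_area = wire_mask.sum()
    if wire_area == 0:
        return 0.0
    overlap = np.logical_and(wire_mask, vessel_mask).sum()
    return float(overlap / wire_area)
```

A value of 1.0 means the wire lies entirely inside the vessel; 0.0 means it lies entirely outside.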
The Hausdorff distance is an index used to measure the distance between subsets of a metric space, and indicates the maximum degree of mismatch between two point sets. Compared with the Euclidean distance, the Hausdorff distance is more flexible and better suited to the irregular point-set shapes in this task. In this experiment, the match between the two images was judged by computing the Hausdorff distance between the guide wire in the non-contrast image and the vessel in the X-ray contrast image. The smaller the Hausdorff distance, the better the positions of the guide wire and the vessel agree, and the more likely the two images are in the same cardiac-cycle state, i.e. the better they match.
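The symmetric Hausdorff distance between the two point sets can be computed with SciPy's `directed_hausdorff`; a sketch, where representing the guide wire and vessel as (N, 2) arrays of pixel coordinates is an assumption:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets, each an
    (N, 2) array of pixel coordinates -- e.g. guide-wire pixels from
    the non-contrast image vs. vessel pixels from the contrast image."""
    d_ab = directed_hausdorff(points_a, points_b)[0]  # sup over a of inf over b
    d_ba = directed_hausdorff(points_b, points_a)[0]  # sup over b of inf over a
    return max(d_ab, d_ba)
```

`directed_hausdorff` returns a (distance, index, index) tuple, so `[0]` extracts the distance; taking the maximum of the two directed distances gives the symmetric measure used for comparison.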
In order to evaluate the effectiveness and superiority of the network structure, this embodiment compares the double-loss auxiliary training network proposed by the present invention with the twin network, the pseudo-twin network, and TS-Net, a three-branch network combining the twin and pseudo-twin networks; the results are shown in Table 3.
TABLE 3 comparative experimental results
wherein Siamese denotes the twin network and Pseudo-Siamese denotes the pseudo-twin network.
As is clear from Table 3, the method provided by the present invention significantly outperforms several classical structures on all indices; in particular, the rank-1 accuracy is improved by more than 20%.
Compared with the traditional approach of repeatedly injecting contrast medium to obtain a current angiography image and then freezing a certain frame as the reference image, this model lets the doctor refer to a dynamic vessel image throughout the operation, which is more accurate and more convenient than a single static frame. It effectively alleviates the difficulty of operating blind, improves surgical efficiency, and further reduces the doctor's radiation exposure and the spinal strain caused by protective clothing. For the patient, the method reduces the number and amount of contrast-agent injections during the intervention, thereby reducing post-operative side effects and protecting the patient's health. The invention therefore enjoys high acceptance in clinical application and has important practical clinical significance.
The X-ray angiography image matching system based on two-stage CNN comprises an image acquisition unit, an image classification unit, a contrast image sequence extraction unit, and an image matching unit;
the image acquisition unit is configured to acquire a contrast image sequence containing coronary arteries;
the image classification unit is configured to classify the angiography image sequence containing the coronary artery into a contrast image set and a non-contrast image set through the classification branch of a lightweight classification-matching CNN network;
the contrast image sequence extraction unit is configured to extract a sequence at least including one heartbeat cycle based on the contrast image set to obtain a contrast image sequence, and store all the extracted contrast image sequences as a contrast image library;
the image matching unit is configured to select a matching branch of a non-contrast image through a lightweight classification matching CNN network based on the non-contrast image set, and search a matching contrast image of the non-contrast image set from the contrast image library.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the two-stage CNN-based X-ray angiography image matching system provided in the above embodiment is only illustrated by the division of the above functional modules, and in practical applications, the above function allocation may be completed by different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the above embodiment may be combined into one module, or may be further split into a plurality of sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
An electronic apparatus according to a third embodiment of the present invention includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the processor to implement the two-stage CNN based X-ray angiographic image matching method described above.
A computer-readable storage medium of a fourth embodiment of the present invention stores computer instructions for execution by the computer to implement the two-stage CNN-based X-ray angiography image matching method described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (10)

1. A two-stage CNN-based X-ray angiography image matching method, the method comprising:
step S100, acquiring an angiography image sequence containing coronary arteries;
step S200, dividing the angiography image sequence containing the coronary artery into an angiography image set and an angiography-free image set through lightweight classification matching of classification branches of a CNN network;
step S300, extracting a sequence at least comprising one heartbeat cycle based on the contrast image set to obtain a contrast image sequence, and storing all the extracted contrast image sequences as a contrast image library;
and S400, selecting a matching branch of a non-contrast image through a lightweight classification matching CNN network based on the non-contrast image set, and searching a matching contrast image of the non-contrast image set from the contrast image library.
2. The two-stage CNN-based X-ray angiography image matching method of claim 1, wherein the classification branch of the lightweight classification matching CNN network is improved from the Xception network, and the input images are denoised by a parallel three-channel preprocessing method and classified into a contrast image set and a non-contrast image set.
3. The two-stage CNN-based X-ray angiography image matching method according to claim 2, wherein the parallel three-channel preprocessing method is to perform gaussian filtering, mean filtering, and histogram equalization processing on three channels of the input image, respectively, and then perform classification processing.
4. The two-stage CNN-based X-ray angiography image matching method according to claim 2, wherein the lightweight classification matches matching branches of a CNN network, and a pseudo-twin network-based double loss auxiliary training network is constructed, specifically comprising a main road and an auxiliary road;
the main path comprises a pre-branching portion and a pseudo-twinning dense module;
the front branch part comprises a first front branch part and a second front branch part, the first front branch part and the second front branch part have the same structure and do not share weights, and the structure of the first front branch part is the same as that of the first three convolution modules of VGG 19;
the Pseudo-twin Dense module is a Pseudo-Siemese Dense BLOCK network;
the auxiliary path comprises a post-branch part and an auxiliary path sharing part;
the post-branching part comprises a first post-branching part and a second post-branching part, the first post-branching part and the second post-branching part have the same structure but do not share weights, and the structure of the first post-branching part is the same as the last two convolution modules of VGG19; the first post-branching part and the second post-branching part are respectively connected to the outputs of the first front branch part and the second front branch part; the outputs of the first and second post-branching parts are concatenated and connected to a convolutional layer and two fully connected layers.
5. The two-stage CNN-based X-ray angiography image matching method of claim 4, wherein the lightweight classification matches matching branches of a CNN network, and the test method is:
step A100, inputting the non-contrast images of the non-contrast image set and the contrast image sequences of the contrast image library into a first branch front part and a second branch front part respectively, and obtaining a non-contrast characteristic image and a contrast characteristic image through the first branch front part and the second branch front part respectively;
step A200, the non-contrast characteristic image and the contrast characteristic image are sent to the pseudo-twinning dense module after maximum pooling operation;
step A300, the pseudo-twin dense module outputs Euclidean distances of the non-contrast characteristic image and the contrast characteristic image;
step A400, taking the contrast image corresponding to the pair of non-contrast characteristic image and contrast characteristic image with the minimum Euclidean distance as the matched contrast image.
6. The two-stage CNN-based X-ray angiography image matching method of claim 5, wherein the lightweight classification matches matching branches of a CNN network, and the training method comprises:
step B100, acquiring a labeled contrast image training sequence and a non-contrast image training sequence;
step B200, obtaining a non-contrast characteristic image and a contrast characteristic image by the method of step A100 based on the contrast image training sequence and the non-contrast image training sequence, and obtaining a matched contrast image by the method of steps A100-A400;
step B300, based on the non-contrast characteristic image and the contrast characteristic image, generating an auxiliary-path non-contrast characteristic image and an auxiliary-path contrast characteristic image through the first post-branching part and the second post-branching part;
step B400, merging the auxiliary-path non-contrast characteristic image and the auxiliary-path contrast characteristic image and generating a binary classification result through the auxiliary-path sharing part;
step B500, adjusting the auxiliary-path parameters by a back propagation optimization method based on the binary classification result until the cross entropy loss function is lower than a preset threshold value;
based on the matched contrast image, adjusting main path parameters by a back propagation optimization method until a contrast loss function is lower than a preset threshold value;
and when the cross entropy loss function and the contrast loss function are both lower than a preset threshold value, obtaining a matching branch of the trained lightweight classified matching CNN network.
7. The two-stage CNN-based X-ray angiography image matching method of claim 6, wherein the cross-entropy loss function is:
L_{ce} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log p_i + (1 - y_i) \log(1 - p_i) \right]
wherein y_i represents the label of the i-th sample, p_i represents the probability that the i-th sample is matched correctly, N represents the total number of samples, and i represents the sample index.
8. The two-stage CNN-based X-ray angiography image matching method of claim 6, wherein the contrast loss function L is:
L = \sum_{i=1}^{P} l^{(i)}

l = Y \cdot D(X_1, X_2)^2 + (1 - Y) \cdot \max\big(m - D(X_1, X_2),\, 0\big)^2

D(X_1, X_2) = \lVert G(X_1) - G(X_2) \rVert_2
wherein the contrast loss function L represents the sum of the individual loss functions l over all data pairs, P represents the number of data pairs, X1 and X2 respectively represent the input non-contrast image set and the input contrast image sequence, Y represents the label of a sample, i represents the sample index, G represents the output of each input after passing through the branch front part of its path, D represents the Euclidean distance between the output vectors of the two inputs after passing through the network, and m is a preset contribution threshold.
9. The two-stage CNN-based X-ray angiography image matching method of claim 5, wherein the pseudo-twin dense module concatenates the non-contrast characteristic image and the contrast characteristic image along the channel dimension after maximum pooling, generates a concatenated characteristic image after convolution and ReLU activation, and then divides into a first dense-module branch and a second dense-module branch which have the same structure but do not share weights; the first dense-module branch and the second dense-module branch each perform hierarchical dense connection on the concatenated characteristic image, the first dense-module branch additionally including the non-contrast characteristic image in its hierarchical dense connections and the second dense-module branch additionally including the contrast characteristic image in its hierarchical dense connections, thereby obtaining the densely connected non-contrast image feature vector and the densely connected contrast image feature vector.
10. An X-ray angiography image matching system based on two-stage CNN, characterized by comprising an image acquisition unit, an image classification unit, a contrast image sequence extraction unit, and an image matching unit;
the image acquisition unit is configured to acquire a contrast image sequence containing coronary arteries;
the image classification unit is configured to classify the angiography image sequence containing the coronary artery into a contrast image set and a non-contrast image set through the classification branch of a lightweight classification-matching CNN network;
the contrast image sequence extraction unit is configured to extract a sequence at least including one heartbeat cycle based on the contrast image set to obtain a contrast image sequence, and store all the extracted contrast image sequences into a contrast image library;
the image matching unit is configured to select a matching branch of a non-contrast image through a lightweight classification matching CNN network based on the non-contrast image set, and search a matching contrast image of the non-contrast image set from the contrast image library.
CN202110772730.3A 2021-07-08 2021-07-08 X-ray angiography image matching method and system based on two-stage CNN Active CN113469258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110772730.3A CN113469258B (en) 2021-07-08 2021-07-08 X-ray angiography image matching method and system based on two-stage CNN

Publications (2)

Publication Number Publication Date
CN113469258A CN113469258A (en) 2021-10-01
CN113469258B true CN113469258B (en) 2022-03-11

Family

ID=77879180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110772730.3A Active CN113469258B (en) 2021-07-08 2021-07-08 X-ray angiography image matching method and system based on two-stage CNN

Country Status (1)

Country Link
CN (1) CN113469258B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198184A (en) * 2018-01-09 2018-06-22 北京理工大学 The method and system of contrastographic picture medium vessels segmentation
CN111192320A (en) * 2019-12-30 2020-05-22 上海联影医疗科技有限公司 Position information determining method, device, equipment and storage medium
CN112184690A (en) * 2020-10-12 2021-01-05 推想医疗科技股份有限公司 Coronary vessel trend prediction method, prediction model training method and device
CN112348860A (en) * 2020-10-27 2021-02-09 中国科学院自动化研究所 Vessel registration method, system and device for endovascular aneurysm surgery
CN112465813A (en) * 2020-12-17 2021-03-09 北京工业大学 Intravascular ultrasonic elasticity analysis method based on stress strain

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10413256B2 (en) * 2017-09-13 2019-09-17 LiteRay Medical, LLC Systems and methods for ultra low dose CT fluoroscopy
CN111179288A (en) * 2019-12-20 2020-05-19 浙江理工大学 Interactive contrast blood vessel segmentation method and system
CN111369528B (en) * 2020-03-03 2022-09-09 重庆理工大学 Coronary artery angiography image stenosis region marking method based on deep convolutional network
CN111986181B (en) * 2020-08-24 2021-07-30 中国科学院自动化研究所 Intravascular stent image segmentation method and system based on double-attention machine system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Automatic Coronary Artery Segmentation in X-ray Angiograms by multiple Convolutional Neural Networks; YANG SY et al.; 3rd International Conference on Multimedia and Image Processing (ICMIP); 2018-03-18; abstract *
Analysis of clinical characteristics of coronary heart disease patients with different plaque properties on CT coronary angiography; Qu Xinkai et al.; Journal of Shanghai Jiao Tong University (Medical Science); 2014-08-28; Vol. 34, No. 08; abstract *
A review of subtraction enhancement techniques for angiographic image sequences; Du Chenbing et al.; Life Science Instruments; 2018-10-25; abstract *
Research on coronary structure recognition and matching methods for angiographic images; Xiao Ruoxiu; China Doctoral Dissertations Full-text Database; 2014-04-15; No. 04; abstract *

Similar Documents

Publication Publication Date Title
CN111145206B (en) Liver image segmentation quality assessment method and device and computer equipment
CN107563983A (en) Image processing method and medical imaging devices
JP2019193808A (en) Diagnostically useful results in real time
CN111932554B (en) Lung vessel segmentation method, equipment and storage medium
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN116630334B (en) Method, device, equipment and medium for real-time automatic segmentation of multi-segment blood vessel
CN112184690A (en) Coronary vessel trend prediction method, prediction model training method and device
CN112308846A (en) Blood vessel segmentation method and device and electronic equipment
CN113674291A (en) Full-type aortic dissection real-false lumen image segmentation method and system
CN113724203B (en) Model training method and device applied to target feature segmentation in OCT image
CN113469258B (en) X-ray angiography image matching method and system based on two-stage CNN
Zheng et al. Efficient detection of native and bypass coronary ostia in cardiac CT volumes: Anatomical vs. pathological structures
KR101927298B1 (en) Vessel Segmentation in Angiogram
Liu et al. Yolo-angio: an algorithm for coronary anatomy segmentation
CN111353989B (en) Coronary artery vessel complete angiography image identification method
Moalla et al. Automatic Coronary Angiogram Keyframe Extraction.
M'hiri et al. Hierarchical segmentation and tracking of coronary arteries in 2D X-ray Angiography sequences
Ciompi et al. Learning to detect stent struts in intravascular ultrasound
Ghofrani et al. Liver Segmentation in CT Images Using Deep Neural Networks
Cui Supervised Filter Learning for Coronary Artery Vesselness Enhancement Diffusion in Coronary CT Angiography Images
US20240161285A1 (en) Determining estimates of hemodynamic properties based on an angiographic x-ray examination
Condurache et al. Vessel segmentation in 2D-projection images using a supervised linear hysteresis classifier
CN116630386B (en) CTA scanning image processing method and system thereof
CN116994067B (en) Method and system for predicting fractional flow reserve based on coronary artery calcification
CN114693648B (en) Blood vessel center line extraction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant