CN114612407A - Coronary artery intima boundary segmentation method based on adjacent frame consistency - Google Patents
- Publication number
- CN114612407A (application CN202210210195.7A)
- Authority
- CN
- China
- Prior art keywords
- adjacent frame
- consistency
- deep learning
- intima
- coronary artery
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06N3/04 — Neural networks; Architecture, e.g. interconnection topology
- G06N3/08 — Neural networks; Learning methods
- G06T7/11 — Region-based segmentation (G06T7/10 Segmentation; Edge detection)
- G06T2207/10101 — Optical tomography; Optical coherence tomography [OCT]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30101 — Blood vessel; Artery; Vein; Vascular (G06T2207/30004 Biomedical image processing)
Abstract
The invention discloses a coronary artery intima boundary segmentation method based on adjacent frame consistency, which comprises the following steps: acquiring an OCT image to be detected and inputting it into a deep learning network; extracting image features and establishing the correspondence between the optical coherence tomography image and the coronary artery intima based on a multi-scale densely connected dilated convolution module; calculating the consistency of adjacent frames in the optical coherence tomography image based on an adjacent frame information supplement module, and obtaining the supplementary relationship between adjacent frames; and outputting the intima boundary segmentation result according to the correspondence between the optical coherence tomography image and the coronary artery intima and the adjacent-frame supplementary relationship. The invention segments the coronary artery intima boundary rapidly and accurately, and can be widely applied in the field of medical image processing.
Description
Technical Field
The invention relates to the field of medical image processing, in particular to a coronary artery intima boundary segmentation method based on adjacent frame consistency.
Background
Identification of the intima of the coronary vessels is the first step in the clinical quantification of coronary cross-sectional data. Quantified vessel indices are important references for guiding clinical diagnosis, and intima identification is also the first step in identifying vulnerable plaques, evaluating the effect of interventional therapy, and assessing intimal coverage after drug-eluting stent implantation. Accurate identification of the coronary intima is therefore important for diagnosing patients with coronary artery disease. Clinically, optical coherence tomography (OCT) can clearly show the cross section and internal structure of the coronary arteries, so OCT images are widely used to examine the coronary intima. However, manual segmentation of the coronary intima and adventitia in OCT images by clinicians is time-consuming and inefficient.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a coronary artery intima boundary segmentation method based on adjacent frame consistency, which can rapidly and accurately segment the coronary artery intima boundary.
The first technical solution adopted by the invention is as follows: a coronary artery intima boundary segmentation method based on adjacent frame consistency, comprising the following steps:
acquiring an OCT image to be detected and inputting it into a deep learning network;
the deep learning network comprises a multi-scale densely connected dilated convolution module and an adjacent frame information supplement module;
extracting image features and establishing the correspondence between the optical coherence tomography image and the coronary artery intima based on the multi-scale densely connected dilated convolution module;
calculating the consistency of adjacent frames in the optical coherence tomography image based on the adjacent frame information supplement module, and obtaining the supplementary relationship between adjacent frames;
and outputting the intima boundary segmentation result according to the correspondence between the optical coherence tomography image and the coronary artery intima and the adjacent-frame supplementary relationship.
Further, the step in which the adjacent frame information supplement module calculates the consistency of adjacent frames in the optical coherence tomography image and obtains the supplementary relationship between adjacent frames specifically includes:
calculating a cosine similarity map between adjacent frame features based on the adjacent frame information supplement module;
judging the consistency of adjacent frames in the corresponding optical coherence tomography image according to the cosine similarity map between adjacent frame features;
and obtaining the supplementary relationship between adjacent frames.
Further, the adjacent frame information supplement module is formulated as follows:

c_{a,b}(i,j; f_a, f_b) = max σ(f_a(i,j), f_b(i,j))

In the above formula, σ denotes the cosine similarity map between the features of two adjacent frames, and c_{a,b}(i,j; f_a, f_b) denotes the maximum consistency between the pixel features f_a(i,j) and f_b(i,j) of the adjacent frames.
Further, before the step of acquiring the OCT image to be measured and inputting the OCT image to the deep learning network, the method further includes:
and constructing a training data set and training the deep learning network based on the training data set to obtain the deep learning network after training.
Further, the step of constructing a training data set and training the deep learning network based on the training data set to obtain a trained deep learning network specifically includes:
collecting OCT images of different health conditions and corresponding coronary artery intima segmentation results as a training data set;
based on a training data set, taking OCT images of different health conditions as input, taking a corresponding coronary artery intima segmentation result as output, and training a deep learning network;
and adjusting the network parameters of the deep learning network until the error rate reaches a preset range, and obtaining the deep learning network after training.
Further, the network parameters of the deep learning network include the number of convolution layers, the number of dilated convolution layers, the number of BN layers, the number of ReLU layers, the number of pooling layers, the number of upsampling layers, the number of output layers, the initial weights, and the bias values.
The method has the following beneficial effects: by exploiting the self-learning capability of a deep learning network, the method establishes the correspondence between OCT images and coronary intima segmentation results and determines the intima segmentation result corresponding to the current image features. This improves the efficiency of the coronary intima segmentation process and makes the segmentation result more accurate; the supplementation of adjacent-frame features enriches the features of the current frame, and the method has strong extensibility.
Drawings
FIG. 1 illustrates the coronary intima boundary segmentation method based on adjacent frame consistency according to the present invention;
FIG. 2 is a schematic structural diagram of a deep learning network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of the multi-scale densely connected dilated convolution module according to an embodiment of the present invention;
FIG. 4 is a block diagram of an adjacent frame information supplement module according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the segmentation results according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
As shown in fig. 1, the present invention provides a coronary intima boundary segmentation method based on adjacent frame consistency, which includes the following steps:
and S0, constructing a training data set and training the deep learning network based on the training data set to obtain the deep learning network after training.
S0.1, collecting OCT images of different health conditions and corresponding coronary artery intima segmentation results as a training data set;
s0.2, training a deep learning network by taking OCT images of different health conditions as input and corresponding coronary artery intima segmentation results as output based on a training data set;
and S0.3, adjusting the network parameters of the deep learning network until the error rate reaches a preset range, and obtaining the deep learning network after training.
The network parameters of the deep learning network include the number of convolution layers, dilated convolution layers, BN layers, ReLU layers, pooling layers, upsampling layers and output layers, as well as the initial weights and bias values.
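For illustration only, the network parameters listed above might be collected in a configuration object like the following. Every value here is a hypothetical placeholder, not a setting disclosed by the patent.

```python
# Hypothetical configuration grouping the network parameters named above.
# All values are illustrative placeholders, not the patent's actual settings.
network_params = {
    "num_conv_layers": 8,          # number of convolution layers
    "num_dilated_conv_layers": 4,  # number of dilated convolution layers
    "num_bn_layers": 8,            # number of batch-normalization layers
    "num_relu_layers": 8,          # number of ReLU layers
    "num_pooling_layers": 3,       # number of pooling layers
    "num_upsampling_layers": 3,    # number of upsampling layers
    "num_output_layers": 1,        # number of output layers
    "initial_weight_std": 0.02,    # spread used for weight initialization
    "initial_bias": 0.0,           # initial bias value
}

# A trainer could validate the configuration before building the network.
assert all(v >= 0 for v in network_params.values())
```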
Specifically, OCT image sequences from several volunteers, together with their coronary intima and adventitia segmentation results, are selected as sample data to train the neural network. By adjusting the network structure and the weights between network nodes, the neural network is made to fit the relationship between OCT images and coronary intima segmentation results, so that it can accurately fit the correspondence between the OCT image sequences of different patients and their coronary intima segmentation results.
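The training loop of steps S0.1–S0.3 can be sketched as follows. This is a deliberately minimal illustration with a toy per-pixel logistic model and synthetic data; the function name, learning rate, and preset error threshold (`max_error`) are all hypothetical, and the patent's actual deep learning network is far more complex.

```python
import numpy as np

def train_until_error_in_range(images, masks, max_error=0.1, lr=0.5, max_epochs=500):
    """Toy stand-in for S0.2/S0.3: fit a per-pixel logistic model and stop
    once the pixel error rate falls inside the preset range."""
    x = images.reshape(-1) - 0.5        # flatten and center all pixel intensities
    y = masks.reshape(-1)               # corresponding intima labels (0/1)
    w, b = 0.0, 0.0                     # "initial weight" and "bias value"
    error_rate = 1.0
    for _ in range(max_epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))      # sigmoid prediction
        error_rate = float(np.mean((p > 0.5) != y))
        if error_rate <= max_error:                 # preset error range reached
            break
        grad = p - y                                # cross-entropy gradient
        w -= lr * float(np.mean(grad * x))
        b -= lr * float(np.mean(grad))
    return w, b, error_rate

# Synthetic "training set": bright pixels (intensity > 0.5) belong to the intima.
rng = np.random.default_rng(0)
imgs = rng.random((4, 16, 16))                      # four toy OCT frames
labels = (imgs > 0.5).astype(float)                 # toy intima segmentation masks
w, b, err = train_until_error_in_range(imgs, labels)
```

The same structure — iterate, measure the error rate, adjust the parameters, stop inside the preset range — carries over to the real network, with the logistic model replaced by the deep learning network of FIG. 2.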
S1, acquiring an OCT image to be detected and inputting it into the deep learning network;
S2, the deep learning network comprises a multi-scale densely connected dilated convolution module and an adjacent frame information supplement module;
S3, extracting image features and establishing the correspondence between the optical coherence tomography image and the coronary artery intima based on the multi-scale densely connected dilated convolution module;
In particular, the extracted features include high-level abstract coronary intima features.
The multi-scale densely connected dilated convolution module (S110) is used to establish the correspondence between the OCT image and the coronary intima; its sub-network structure is shown in FIG. 3. C_{n1×n1}(n2)@n3 denotes a convolution operation with an n1×n1 kernel, a stride of n2 and n3 convolution kernels; D_{n1×n1}(n2 r n4)@n3 denotes a dilated convolution operation with an n1×n1 kernel, a stride of n2, n3 convolution kernels and a dilation rate of n4, where n1, n2, n3 and n4 take the values shown at the corresponding positions in the figure.
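The dilated convolution operation D described by the notation above can be sketched in NumPy as follows. This single-channel, valid-padding version illustrates the operation itself, not the patent's module; the function name and defaults are assumptions.

```python
import numpy as np

def dilated_conv2d(x, kernel, stride=1, dilation=1):
    """Single-channel 2-D dilated convolution with valid padding.
    A k x k kernel with dilation d covers a (d*(k-1)+1)^2 receptive field."""
    k = kernel.shape[0]
    span = dilation * (k - 1) + 1                  # effective receptive field
    h = (x.shape[0] - span) // stride + 1
    w = (x.shape[1] - span) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # sample the input on a dilated grid and correlate with the kernel
            patch = x[i * stride : i * stride + span : dilation,
                      j * stride : j * stride + span : dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3))
plain = dilated_conv2d(x, k, stride=1, dilation=1)    # 3x3 output
dilated = dilated_conv2d(x, k, stride=1, dilation=2)  # 1x1 output, 5x5 receptive field
```

The comparison shows why dilated convolutions suit this module: the dilation-2 kernel sees a 5×5 region with the same nine weights, enlarging the receptive field at multiple scales without extra parameters.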
S4, calculating the consistency of adjacent frames in the optical coherence tomography image based on the adjacent frame information supplement module, and obtaining the supplementary relationship between adjacent frames;
S4.1, calculating a cosine similarity map between adjacent frame features based on the adjacent frame information supplement module;
S4.2, judging the consistency of adjacent frames in the corresponding optical coherence tomography image according to the cosine similarity map between adjacent frame features;
S4.3, obtaining the supplementary relationship between adjacent frames.
Specifically, referring to FIG. 4, the sub-network structure of the adjacent frame information supplement module is expressed as follows:

c_{a,b}(i,j; f_a, f_b) = max σ(f_a(i,j), f_b(i,j))

In the above formula, σ denotes the cosine similarity map between the features of two adjacent frames, and c_{a,b}(i,j; f_a, f_b) denotes the maximum consistency between the pixel features f_a(i,j) and f_b(i,j) of the adjacent frames.

c*_{a,b}(i,j; f_a, f_b, y_a, y_b) = max σ(f_a(i,j), f_b(i,j))

c*_{a,b}(i,j; f_a, f_b, y_a, y_b) denotes the maximum consistency of neighboring pixel points supervised by the segmentation results y_a and y_b.
S5, outputting the intima boundary segmentation result according to the correspondence between the optical coherence tomography image and the coronary artery intima and the adjacent-frame supplementary relationship.
To quantify the perceptual consistency of the segmentation decisions of the two frames, c_{a,b}(i,j; f_a, f_b) and c*_{a,b}(i,j; f_a, f_b, y_a, y_b) are calculated and fused into the adjacent frames.
ρ(·) denotes the similarity of each frame to its neighboring frame, where H is the height and W the width of the image frame; it represents the mean consistency over the pixels of different frames.
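The adjacent-frame consistency computation of steps S4.1–S4.3 and the mean consistency ρ can be sketched as follows. The patent does not specify the domain over which the maximum of σ is taken; this sketch assumes a small search window around (i, j) in the adjacent frame, and all function names are hypothetical.

```python
import numpy as np

def cosine_sim(u, v, eps=1e-8):
    """Cosine similarity (sigma) between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def max_consistency(fa, fb, radius=1):
    """c_{a,b}(i,j): maximum cosine similarity between f_a(i,j) and the
    features of the adjacent frame f_b within a (2*radius+1)^2 window.
    The search window is an assumption; the patent only states a max of sigma."""
    H, W, _ = fa.shape
    c = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            best = -1.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < H and 0 <= x < W:
                        best = max(best, cosine_sim(fa[i, j], fb[y, x]))
            c[i, j] = best
    return c

def mean_frame_consistency(c):
    """rho: mean of the consistency map over the H x W pixels of a frame."""
    return float(np.mean(c))

# Two identical adjacent feature maps are perfectly consistent.
rng = np.random.default_rng(1)
fa = rng.random((4, 4, 8))          # H x W x C features of frame a
c = max_consistency(fa, fa.copy())
rho = mean_frame_consistency(c)
```

High values of c mark regions where the adjacent frame confirms the current frame's features and can safely supplement them; ρ summarizes this agreement over the whole frame.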
A coronary intima boundary segmentation system based on adjacent frame consistency comprises the following modules:
a training module for constructing a training data set and training the deep learning network based on the training data set to obtain a trained deep learning network;
an acquisition module for acquiring an OCT image to be detected and inputting it into the deep learning network, the deep learning network comprising a multi-scale densely connected dilated convolution module and an adjacent frame information supplement module;
the multi-scale densely connected dilated convolution module, for extracting image features and establishing the correspondence between the optical coherence tomography image and the coronary intima;
the adjacent frame information supplement module, for calculating the consistency of adjacent frames in the optical coherence tomography image and obtaining the supplementary relationship between adjacent frames;
and an output module for outputting the intima boundary segmentation result according to the correspondence between the optical coherence tomography image and the coronary intima and the adjacent-frame supplementary relationship.
A coronary artery intimal boundary segmentation device based on adjacent frame consistency comprises:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the coronary intima boundary segmentation method based on adjacent frame consistency described above.
The contents in the method embodiments are all applicable to the device embodiments, the functions specifically implemented by the device embodiments are the same as those in the method embodiments, and the beneficial effects achieved by the device embodiments are also the same as those achieved by the method embodiments.
A storage medium having stored therein processor-executable instructions, wherein the processor-executable instructions, when executed by a processor, are used to implement the coronary intima boundary segmentation method based on adjacent frame consistency described above.
The contents in the above method embodiments are all applicable to the present storage medium embodiment, the functions specifically implemented by the present storage medium embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present storage medium embodiment are also the same as those achieved by the above method embodiments.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (6)
1. A coronary intima boundary segmentation method based on adjacent frame consistency, characterized by comprising the following steps:
acquiring an OCT image to be detected and inputting it into a deep learning network;
the deep learning network comprises a multi-scale densely connected dilated convolution module and an adjacent frame information supplement module;
extracting image features and establishing the correspondence between the optical coherence tomography image and the coronary artery intima based on the multi-scale densely connected dilated convolution module;
calculating the consistency of adjacent frames in the optical coherence tomography image based on the adjacent frame information supplement module, and obtaining the supplementary relationship between adjacent frames;
and outputting the intima boundary segmentation result according to the correspondence between the optical coherence tomography image and the coronary artery intima and the adjacent-frame supplementary relationship.
2. The coronary intima boundary segmentation method based on adjacent frame consistency according to claim 1, wherein the step in which the adjacent frame information supplement module calculates the consistency of adjacent frames in the optical coherence tomography image and obtains the supplementary relationship between adjacent frames specifically comprises:
calculating a cosine similarity map between adjacent frame features based on the adjacent frame information supplement module;
judging the consistency of adjacent frames in the corresponding optical coherence tomography image according to the cosine similarity map between adjacent frame features;
and obtaining the supplementary relationship between adjacent frames.
3. The method for coronary intimal boundary segmentation based on adjacent frame consistency according to claim 2, wherein the adjacent frame information supplementing module is formulated as follows:
c_{a,b}(i,j; f_a, f_b) = max σ(f_a(i,j), f_b(i,j))

In the above formula, σ denotes the cosine similarity map between the features of two adjacent frames, and c_{a,b}(i,j; f_a, f_b) denotes the maximum consistency between the pixel features f_a(i,j) and f_b(i,j) of the adjacent frames.
4. The method for segmenting the intima boundary of the coronary artery based on the consistency of adjacent frames as claimed in claim 3, wherein before the step of acquiring the OCT image to be measured and inputting the OCT image to the deep learning network, the method further comprises:
and constructing a training data set and training the deep learning network based on the training data set to obtain the deep learning network after training.
5. The method for segmenting the intimal boundary of coronary artery based on the consistency of adjacent frames as set forth in claim 4, wherein the step of constructing a training data set and training the deep learning network based on the training data set to obtain the trained deep learning network specifically comprises:
collecting OCT images of different health conditions and corresponding coronary artery intima segmentation results as a training data set;
based on a training data set, training a deep learning network by taking OCT images of different health conditions as input and taking a corresponding coronary artery intima segmentation result as output;
and adjusting the network parameters of the deep learning network until the error rate reaches a preset range, and obtaining the deep learning network after training.
6. The method according to claim 5, wherein the network parameters of the deep learning network include the number of convolution layers, the number of dilated convolution layers, the number of BN layers, the number of ReLU layers, the number of pooling layers, the number of upsampling layers, the number of output layers, the initial weights, and the bias values.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210210195.7A CN114612407A (en) | 2022-03-03 | 2022-03-03 | Coronary artery intima boundary segmentation method based on adjacent frame consistency |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114612407A (en) | 2022-06-10 |
Family
ID=81861837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210210195.7A (Pending) | Coronary artery intima boundary segmentation method based on adjacent frame consistency | 2022-03-03 | 2022-03-03 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114612407A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |