CN110853045A - Vascular wall segmentation method and device based on nuclear magnetic resonance image and storage medium - Google Patents

Vascular wall segmentation method and device based on nuclear magnetic resonance image and storage medium

Info

Publication number
CN110853045A
CN110853045A
Authority
CN
China
Prior art keywords
wall
segmentation
probability map
inner cavity
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910906518.4A
Other languages
Chinese (zh)
Other versions
CN110853045B (en)
Inventor
辛景民
蔡卓桐
武佳懿
郑南宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201910906518.4A
Publication of CN110853045A
Application granted
Publication of CN110853045B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Abstract

The invention belongs to the field of medical image processing and discloses a vessel wall segmentation method, device and storage medium based on nuclear magnetic resonance images, wherein the vessel wall segmentation method comprises the following steps. S1: regions of interest (ROI) are selected from three adjacent slices of a T1-weighted MRI image to form a triple input sequence, which is input into a segmentation neural network. S2: the segmentation neural network performs several downsampling operations. S3: the segmentation neural network performs several upsampling operations combined with skip connections, and the results of each upsampling are superposed in parallel to obtain a feature map of the same size as the triple input sequence. S4: the feature maps are fused and activated to obtain an initial lumen probability map and an initial outer wall probability map, which are optimized through a segmentation loss function to obtain the final lumen probability map and final outer wall probability map, completing the segmentation. The invention adopts an end-to-end network that can automatically segment the lumen and outer wall of the carotid artery from MRI, and carotid atherosclerotic plaques can then be rapidly detected from the segmentation results.

Description

Vascular wall segmentation method and device based on nuclear magnetic resonance image and storage medium
Technical Field
The invention belongs to the field of medical image processing, and relates to a vascular wall segmentation method and device based on a nuclear magnetic resonance image and a storage medium.
Background
Atherosclerosis is a leading cause of morbidity and mortality worldwide, and one of its most clinically significant sites in humans is the carotid artery. Carotid atherosclerosis is a progressive systemic disease characterized by the formation of atherosclerotic plaques, manifested as thickening of the vessel wall leading to stenosis, and structural changes in the vessel wall that can lead to stroke. Thus, early detection and appropriate treatment of carotid atherosclerosis can prevent cardiovascular disease.
Clinically, plaque burden is commonly used to measure plaque size and to determine the type of plaque on the vessel wall; it is therefore important to monitor the progression of carotid atherosclerosis, and to determine whether it has occurred, by carotid artery imaging. High-resolution vessel wall MRI (VW-MRI), a modern imaging technique for characterizing vessel wall pathology, can obtain cross-sectional images of arteries and can detect early vessel wall abnormalities. Plaque burden measurement is accomplished by identifying the lumen and outer wall boundaries of the vessel wall on a VW-MRI image. However, due to the complex signal features near the vessel wall, slice-by-slice analysis of VW-MRI images suffers from limited reproducibility, which complicates plaque burden evaluation.
Existing segmentation methods mostly use energy functions, and these functions depend on the lumen boundary lying at the position of the strongest dark-to-bright gradient and the outer wall boundary at the inverse, bright-to-dark gradient. They therefore have difficulty segmenting the lumen and outer wall in pathological regions and low-quality images, resulting in inaccurate vessel wall measurements, so these methods mostly study carotid atherosclerosis only by quantitatively analyzing carotid atherosclerotic plaque burden, such as vessel wall thickness and the normalized wall index. However, because of the unusual shapes of the lumen and outer wall boundaries near severely diseased arteries and the carotid bifurcation, measuring vessel wall thickness is itself a challenging task that is difficult to apply well when analyzing carotid atherosclerotic plaque burden. In addition, most of these methods segment the lumen boundary first and then the outer wall boundary, which results in long training times.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, in which the lumen and outer wall are segmented independently, segmentation takes a long time, and the lumen and outer wall are difficult to segment in pathological regions and low-quality images, making vessel wall measurements inaccurate. The invention provides a vessel wall segmentation method, device and storage medium based on nuclear magnetic resonance images.
In order to achieve the purpose, the invention adopts the following technical scheme to realize the purpose:
A vessel wall segmentation method based on nuclear magnetic resonance images comprises the following steps:
S1: selecting three adjacent slices from all slices of a T1-weighted MRI image, selecting a region of interest (ROI) from each of the three adjacent slices, forming the three ROIs into a triple input sequence, and inputting the triple input sequence into a segmentation neural network;
S2: sequentially downsampling the triple input sequence several times along the contraction path of the segmentation neural network, wherein two convolution operations are performed before each downsampling, and each convolution operation comprises a 3x3 convolution, batch normalization, and rectified linear (ReLU) activation performed in sequence;
S3: along the expansion path of the segmentation neural network, performing several upsampling operations combined with skip connections on the result of each downsampling, and superposing the results of each upsampling in parallel to obtain a feature map of the same size as the triple input sequence;
S4: after a 1x1 convolution of the feature map, obtaining an initial lumen probability map and an initial outer wall probability map through sigmoid activation, and optimizing the lumen and outer wall probability maps a preset number of times through a segmentation loss function to obtain the final lumen probability map and final outer wall probability map, completing the segmentation.
The vascular wall segmentation method based on the nuclear magnetic resonance image is further improved as follows:
the specific method for selecting the region of interest ROI in S1 is as follows:
a manual center point is set on a slice of the T1 weighted MRI image, from which a region of interest ROI of 80x80 pixels is acquired.
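The windowed ROI extraction can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function name `crop_roi` is our own, and clamping the window at image borders is our assumption (the patent only specifies an 80x80 window around the manual center point).

```python
def crop_roi(image, center, size=80):
    """Crop a size x size region of interest around a manually set
    (row, col) center point, clamping the window to the image bounds.
    `image` is a 2D list of pixel intensities."""
    h, w = len(image), len(image[0])
    half = size // 2
    # Clamp the top-left corner so the whole window stays inside the image.
    top = max(0, min(center[0] - half, h - size))
    left = max(0, min(center[1] - half, w - size))
    return [row[left:left + size] for row in image[top:top + size]]
```

For example, `crop_roi(mri_slice, (120, 95))` would return an 80x80 sub-image centered (as far as the borders allow) on the manual point.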
The specific method in S1 for forming the regions of interest (ROI) of the three adjacent slices into the triple input sequence is as follows:
Each ROI is adjusted by formula (1) to obtain the adjusted image $u_i$:

$$u_i = \frac{v_i - v_{\min}}{v_{\max} - v_{\min}} \qquad (1)$$

wherein $v_i$ is the intensity of pixel $i$ in the region of interest (ROI) to be adjusted, $v_{\min}$ is the minimum intensity of the pixels in the ROI to be adjusted, and $v_{\max}$ is the maximum intensity of the pixels in the ROI to be adjusted;
the three adjusted images are combined to form a triple input sequence.
In step S4, the specific method for obtaining the initial lumen probability map and initial outer wall probability map, after the 1x1 convolution of the feature map and sigmoid activation, is as follows:
The feature map is convolved by 1x1 and activated by sigmoid to obtain the initial lumen probability map $\hat{Y}_{lumen}$ and outer wall probability map $\hat{Y}_{outerwall}$:

$$\hat{Y}_{lumen} = \sigma\Big(\sum_{m=1}^{M} w_m^{lumen} f^{(m)}\Big)$$

$$\hat{Y}_{outerwall} = \sigma\Big(\sum_{m=1}^{M} w_m^{outerwall} f^{(m)}\Big)$$

where $\sigma(\cdot)$ is the sigmoid activation function computed at each pixel, $M$ is the number of upsamplings, $f^{(m)}$ is the result of the m-th upsampling, $w_m^{lumen}$ is the preset fusion weight for the m-th layer lumen prediction, and $w_m^{outerwall}$ is the preset fusion weight for the m-th layer outer wall prediction.
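The weighted fusion of the upsampled side outputs followed by pixel-wise sigmoid activation can be sketched as follows. This is a simplified, illustrative sketch: it assumes each side output has already been flattened to a 1D list of per-pixel scores and that the fusion weights are scalars per layer; the function name is our own.

```python
import math

def fuse_side_outputs(side_outputs, weights):
    """Pixel-wise weighted fusion of M upsampled side outputs followed
    by sigmoid activation: p_i = sigma(sum_m w_m * f_m[i])."""
    n = len(side_outputs[0])
    fused = [sum(w * f[i] for w, f in zip(weights, side_outputs))
             for i in range(n)]
    return [1.0 / (1.0 + math.exp(-z)) for z in fused]
```

Running the fusion twice, once with the lumen weights and once with the outer wall weights, yields the two probability maps from the same side outputs.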
The segmentation loss function $\mathcal{L}_{seg}$ in S4 is:

$$\mathcal{L}_{seg}(W_l, W_o, W_v) = \alpha \mathcal{L}_{Dice}^{lumen}(W_l) + \beta \mathcal{L}_{Dice}^{outerwall}(W_o) + \gamma \mathcal{L}_{Dice}^{wall}(W_v)$$

wherein $W_l$ is the set of all standard network layer parameters used to calculate the lumen probability map, $W_o$ is the set of all standard network layer parameters used to calculate the outer wall probability map, and $W_v$ is the set of all standard network layer parameters used to calculate the vessel wall probability map; $\alpha$, $\beta$ and $\gamma$ are three hyper-parameters, each set to 1; $\mathcal{L}_{Dice}^{lumen}$ is the lumen Dice loss, $\mathcal{L}_{Dice}^{outerwall}$ is the outer wall Dice loss, and $\mathcal{L}_{Dice}^{wall}$ is the vessel wall Dice loss:

$$\mathcal{L}_{Dice}^{lumen} = 1 - \frac{2\sum_{i=1}^{N} \hat{y}_i^{lumen} y_i^{lumen}}{\sum_{i=1}^{N} \hat{y}_i^{lumen} + \sum_{i=1}^{N} y_i^{lumen}}$$

$$\mathcal{L}_{Dice}^{outerwall} = 1 - \frac{2\sum_{i=1}^{N} \hat{y}_i^{outerwall} y_i^{outerwall}}{\sum_{i=1}^{N} \hat{y}_i^{outerwall} + \sum_{i=1}^{N} y_i^{outerwall}}$$

$$\mathcal{L}_{Dice}^{wall} = 1 - \frac{2\sum_{i=1}^{N} (\hat{y}_i^{outerwall} - \hat{y}_i^{lumen})(y_i^{outerwall} - y_i^{lumen})}{\sum_{i=1}^{N} (\hat{y}_i^{outerwall} - \hat{y}_i^{lumen}) + \sum_{i=1}^{N} (y_i^{outerwall} - y_i^{lumen})}$$

wherein $N$ is the number of pixels in the image, $\hat{y}_i^{lumen}$ is the value of the lumen probability map $\hat{Y}_{lumen}$ at pixel $i$, $\hat{y}_i^{outerwall}$ is the value of the outer wall probability map $\hat{Y}_{outerwall}$ at pixel $i$, $y_i^{lumen}$ is the ground-truth value of the lumen $Y_{lumen}$ at pixel $i$, and $y_i^{outerwall}$ is the ground-truth value of the outer wall $Y_{outerwall}$ at pixel $i$.
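The triple segmentation loss can be sketched as below. This assumes the common soft-Dice form $1 - 2\sum p\hat{p}/(\sum p + \sum \hat{p})$ and computes the vessel wall term on the (outer wall minus lumen) ring maps; probability maps are flattened to 1D pixel lists, and all function names are our own.

```python
def dice_loss(pred, truth, eps=1e-7):
    """Soft Dice loss: 1 - 2*sum(p*y) / (sum(p) + sum(y))."""
    inter = sum(p * y for p, y in zip(pred, truth))
    return 1.0 - 2.0 * inter / (sum(pred) + sum(truth) + eps)

def triple_dice_loss(p_lumen, p_outer, y_lumen, y_outer,
                     alpha=1.0, beta=1.0, gamma=1.0):
    """Triple segmentation loss: lumen Dice + outer wall Dice + a vessel
    wall term computed on the (outer wall minus lumen) ring maps."""
    p_wall = [po - pl for po, pl in zip(p_outer, p_lumen)]
    y_wall = [yo - yl for yo, yl in zip(y_outer, y_lumen)]
    return (alpha * dice_loss(p_lumen, y_lumen)
            + beta * dice_loss(p_outer, y_outer)
            + gamma * dice_loss(p_wall, y_wall))
```

A perfect prediction drives all three terms toward zero; since lumen pixels are also outer wall pixels, the wall term directly penalizes predictions whose ring (outer minus lumen) disagrees with the ground-truth ring.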
In another aspect of the present invention, a computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above-mentioned vessel wall segmentation method based on nuclear magnetic resonance image when executing the computer program.
In yet another aspect of the present invention, a computer-readable storage medium stores a computer program, which when executed by a processor implements the steps of the above-mentioned vessel wall segmentation method based on nuclear magnetic resonance images.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides an effective segmentation neural network, which realizes the simultaneous segmentation of a pair of corresponding cavity and outer wall areas of the carotid artery on a T1 weighted MRI image. Firstly, 2.5D information is obtained by adding two adjacent slices for enhancing overall information, and the problem that a two-dimensional slice along the central line of the carotid artery lacks spatial information is solved. Secondly, a U-shaped convolution network is formed by dividing a contraction path and an expansion path of the network, weighting fusion is realized by parallelly superposing the up-sampling results each time, and the U-shaped convolution neural network and a weighting fusion layer are used as main bodies of the neural network, so that multi-scale characteristic layer information in the up-sampling process is fused, and the segmentation effect is improved. In addition, the lumen region segmentation and the outer wall region segmentation are combined together, the segmentation of the lumen region and the outer wall region is researched as a multi-label problem in a single segmentation subnetwork, only one segmentation network is needed to solve the problem, new triple segmentation loss functions are designed to supervise network learning, the relationship between the lumen region and the outer wall region is formulated by applying the segmentation loss functions, the inner cavity and the outer wall in the carotid artery can be automatically segmented from a T1 weighted MRI image through an end-to-end network, and the carotid atherosclerotic plaque can be quickly detected through the segmentation result based on the high-precision segmentation result.
Drawings
FIG. 1 is a schematic diagram of the segmentation neural network of the present invention;
FIG. 2 is a schematic diagram of the detection neural network;
FIG. 3 is an overall diagram of the segmentation neural network and the detection neural network;
FIG. 4 is a schematic diagram of the process of acquiring a triple input sequence from an MRI image according to the present invention;
FIG. 5 shows a T1-weighted image and the corresponding segmentation and detection results obtained by the method of the present invention;
FIG. 6 shows lumen and outer wall segmentations produced by physicians and by various methods, including the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1 to 4, the blood vessel wall segmentation method based on the nuclear magnetic resonance image of the invention comprises the following steps:
s1: three adjacent slices are selected from all the slices of the T1 weighted MRI image, a region of interest ROI is selected from all the three adjacent slices, and the three region of interest ROIs are formed into a triple input sequence and input into a segmentation neural network. The specific method for selecting the ROI comprises the following steps: a manual center point is set on a slice of the T1 weighted MRI image, and a region of 80 × 80 pixels is acquired from the manual center point, resulting in a region of interest ROI. The specific method for selecting the ROI of three adjacent slices to form the triple input sequence comprises the following steps: adjusting each ROI by the formula (1) to obtainAdjusting an image ui
Figure BDA0002213434580000061
Wherein: v. ofiIs the intensity, v, of the pixel in the region of interest ROI to be adjustedminIs the minimum intensity, v, of the pixel in the region of interest ROI to be adjustedmaxIs the maximum intensity of the pixels in the region of interest ROI to be adjusted; the three adjusted images are combined to form a triple input sequence.
S2: The triple input sequence is sequentially downsampled several times along the contraction path of the segmentation neural network, wherein two convolution operations are performed before each downsampling, and each convolution operation comprises a 3x3 convolution, batch normalization, and rectified linear (ReLU) activation performed in sequence.
S3: Along the expansion path of the segmentation neural network, several upsampling operations combined with skip connections are performed on the result of each downsampling, and the results of each upsampling are superposed in parallel to obtain a feature map of the same size as the triple input sequence. Upsampling performs a transposed convolution operation on the input and restores it to a feature map of a particular size.
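The transposed-convolution upsampling can be illustrated for a single-channel feature map with a 2x2 kernel and stride 2, which doubles the spatial size. The function name and kernel values are illustrative, not the network's learned parameters.

```python
def upsample_transposed(feat, kernel):
    """2x2, stride-2 transposed convolution on a single-channel feature
    map: each input pixel is scattered onto a 2x2 output patch scaled by
    the kernel, doubling the spatial size."""
    h, w = len(feat), len(feat[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            for kr in range(2):
                for kc in range(2):
                    out[2 * r + kr][2 * c + kc] += feat[r][c] * kernel[kr][kc]
    return out
```

With stride 2 and a 2x2 kernel the scattered patches do not overlap, so the output is exactly twice the input size in each dimension, matching the size of the corresponding contraction-path feature map for concatenation.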
S4: After a 1x1 convolution of the feature map, the initial lumen probability map and initial outer wall probability map are obtained through sigmoid activation, and the lumen and outer wall probability maps are optimized a preset number of times through the segmentation loss function to obtain the final lumen probability map and final outer wall probability map, completing the segmentation. The specific method for obtaining the initial lumen and outer wall probability maps after the 1x1 convolution and sigmoid activation is as follows:
The feature map is convolved by 1x1 and activated by sigmoid to obtain the initial lumen probability map $\hat{Y}_{lumen}$ and outer wall probability map $\hat{Y}_{outerwall}$:

$$\hat{Y}_{lumen} = \sigma\Big(\sum_{m=1}^{M} w_m^{lumen} f^{(m)}\Big)$$

$$\hat{Y}_{outerwall} = \sigma\Big(\sum_{m=1}^{M} w_m^{outerwall} f^{(m)}\Big)$$

where $\sigma(\cdot)$ is the sigmoid activation function computed at each pixel, $M$ is the number of upsamplings, $f^{(m)}$ is the result of the m-th upsampling, $w_m^{lumen}$ is the fusion weight for the m-th layer lumen prediction, and $w_m^{outerwall}$ is the fusion weight for the m-th layer outer wall prediction. The fusion weights are first generated randomly and then updated during training until the optimal weights are reached, with $w^{lumen} = (w_1^{lumen}, \ldots, w_M^{lumen})$ and $w^{outerwall} = (w_1^{outerwall}, \ldots, w_M^{outerwall})$ being the respective fusion weight vectors.
segmentation loss function
Figure BDA0002213434580000081
Comprises the following steps:
Figure BDA0002213434580000082
wherein: wlSet of all standard network layer parameters, W, used for calculating the intracavity probability mapoSet of all standard network layer parameters, W, used for computing the outer wall probability mapvFor the set of all standard network layer parameters used to calculate the vessel wall probability map, α and gamma are three hyper-parameters and are each 1,
Figure BDA0002213434580000083
is the loss of the Dice in the inner cavity,
Figure BDA0002213434580000084
is the loss of the Dice on the outer wall,
Figure BDA0002213434580000085
is the vessel wall Dice loss and:
Figure BDA0002213434580000086
Figure BDA0002213434580000087
Figure BDA0002213434580000088
wherein N is the number of pixels of the image,
Figure BDA0002213434580000089
is a probability map of the lumenThe value at the pixel i is such that,
Figure BDA00022134345800000811
probability map of outer wallThe value at the pixel i is such that,
Figure BDA00022134345800000813
is an inner cavity YlumenA true value at the pixel i is given,
Figure BDA00022134345800000814
is an outer wall YouterwallThe true value at pixel i.
The following describes the specific principle and design concept of the vessel wall segmentation method based on nuclear magnetic resonance images in detail:
The invention provides a vessel wall segmentation method based on nuclear magnetic resonance images, comprising a segmentation neural network for segmenting the lumen and the outer wall; a segmentation loss function is also added to the network to constrain the segmentation neural network. The method is divided into the following parts:
and constructing a segmentation network. A modified U-shaped convolution network (U-Net) is used as a main body of the segmented neural network, wherein the segmented neural network comprises a contraction path and an expansion path, and further comprises a deep network comprising four edge layers, and a weighted fusion layer is added in the training process to fuse the edge layers and learn fusion weights. Compared with the traditional U-shaped convolution network, the network structure provided by the invention has the following three differences: (i) each convolutional layer (Conv) is followed by a Batch Normalization (BN) and a linear correction unit (ReLU), which is referred to herein as a composite layer (Conv-BN-ReLU). (ii) Since the main information is concentrated in the center of the image, a padding operation is employed in each convolution layer of the shrink path. Therefore, the upsampling step of each kernel of 2 × 2 in the extension path is restored to the size of the corresponding feature map in the contraction path, so that the splicing operation without clipping is realized. (iii) The weighted fusion layer combines the multi-scale features of the different layers, improving segmentation performance, similar to previous work such as Full Convolution Networks (FCNs).
Specifically, a simple upsampling operation is realized by a transposed convolution, restoring the size of the final layer. The proposed segmentation neural network structure therefore comprises a deep network with four side output layers, and a weighted fusion layer is added during training to fuse the side outputs and learn the fusion weights. In the method of the invention, the fusion layer generates pixel-level probability maps of the lumen and outer wall by a two-channel 1x1 convolution followed by sigmoid activation. The fusion layer outputs comprise the lumen probability map $\hat{Y}_{lumen}$ and the outer wall probability map $\hat{Y}_{outerwall}$, respectively:

$$\hat{Y}_{lumen} = \sigma\Big(\sum_{m=1}^{M} w_m^{lumen} f^{(m)}\Big), \qquad \hat{Y}_{outerwall} = \sigma\Big(\sum_{m=1}^{M} w_m^{outerwall} f^{(m)}\Big)$$

where $\sigma(\cdot)$ is the sigmoid activation function computed at each pixel, $M$ is the number of side output layers ($M = 4$ in this embodiment), $f^{(m)}$ is the result of the m-th side output layer, $w_m^{lumen}$ is the predicted network weight for the m-th layer lumen, and $w_m^{outerwall}$ is the predicted network weight for the m-th layer outer wall; $w^{lumen}$ and $w^{outerwall}$ are the respective fusion weight vectors.
the input of the segmented network is collected. A square image of 80x80 pixels is placed at the manual center point of the MRI image slice and the region of interest (ROI) of the MRI slice weighted for T1 is extracted and the ROIs of three consecutive slices are extracted simultaneously. The ROIs from three consecutive MRI slices form a triple input sequence, with the first and third slices serving as the second slice, which is the target slice segmented and detected under study, to provide 2.5D information of the second slice, which gives a linear transformation in order to adjust the intensity of each cropped image to be relatively uniform and increase the contrast of the image, in the following formula:
Figure BDA0002213434580000101
wherein v isiAnd uiIs to crop the intensity of the pixels in the image and adjust the image accordingly, and vminAnd vmaxIs the minimum intensity and the maximum intensity of the cropped image.
Segmenting the lumen and outer wall with the segmentation neural network. The lumen and outer wall segmentation tasks are treated as a multi-label classification problem over the image features: the triple input sequence is taken as input, and a triple loss function is added as the loss function of the segmentation neural network to obtain the final lumen and outer wall segmentation results. A new triple Dice loss function is considered for simultaneous segmentation of the lumen and outer wall. While cross-entropy loss optimizes network parameters for multi-label segmentation through back propagation, it is generally suited to learning weak associations between classes in an image. Here, however, the lumen region is covered by the outer wall region, meaning that pixels labeled as lumen are also labeled as outer wall, and the outer wall region excluding the lumen region should form a ring. The relationship between the lumen region and the outer wall region is therefore exploited in the loss design. Denoting the set of all standard network layer parameters as $W_S$, the Dice loss function of the segmentation neural network $\mathcal{L}_{seg}$ is defined as:

$$\mathcal{L}_{seg}(W_l, W_o, W_v) = \alpha \mathcal{L}_{Dice}^{lumen}(W_l) + \beta \mathcal{L}_{Dice}^{outerwall}(W_o) + \gamma \mathcal{L}_{Dice}^{wall}(W_v)$$

wherein $W_l$ is the set of all standard network layer parameters used to calculate the lumen probability map, $W_o$ is the set used to calculate the outer wall probability map, and $W_v$ is the set used to calculate the vessel wall probability map; $\alpha$, $\beta$ and $\gamma$ are three hyper-parameters, each set to 1; $\mathcal{L}_{Dice}^{lumen}$ is the lumen Dice loss, $\mathcal{L}_{Dice}^{outerwall}$ is the outer wall Dice loss, and $\mathcal{L}_{Dice}^{wall}$ is the vessel wall Dice loss:

$$\mathcal{L}_{Dice}^{lumen} = 1 - \frac{2\sum_{i=1}^{N} \hat{y}_i^{lumen} y_i^{lumen}}{\sum_{i=1}^{N} \hat{y}_i^{lumen} + \sum_{i=1}^{N} y_i^{lumen}}$$

$$\mathcal{L}_{Dice}^{outerwall} = 1 - \frac{2\sum_{i=1}^{N} \hat{y}_i^{outerwall} y_i^{outerwall}}{\sum_{i=1}^{N} \hat{y}_i^{outerwall} + \sum_{i=1}^{N} y_i^{outerwall}}$$

$$\mathcal{L}_{Dice}^{wall} = 1 - \frac{2\sum_{i=1}^{N} (\hat{y}_i^{outerwall} - \hat{y}_i^{lumen})(y_i^{outerwall} - y_i^{lumen})}{\sum_{i=1}^{N} (\hat{y}_i^{outerwall} - \hat{y}_i^{lumen}) + \sum_{i=1}^{N} (y_i^{outerwall} - y_i^{lumen})}$$

wherein $N$ is the number of pixels in the image, $\hat{y}_i^{lumen}$ is the value of the lumen probability map $\hat{Y}_{lumen}$ at pixel $i$, $\hat{y}_i^{outerwall}$ is the value of the outer wall probability map $\hat{Y}_{outerwall}$ at pixel $i$, $y_i^{lumen}$ is the ground-truth value of the lumen $Y_{lumen}$ at pixel $i$, and $y_i^{outerwall}$ is the ground-truth value of the outer wall $Y_{outerwall}$ at pixel $i$. In addition, the vessel wall loss term is designed as a constraint term, accounting for the relationship that the outer wall region excluding the lumen region should form a ring.
In particular, a detection method is also disclosed for detecting plaques on the segmented vessel wall. First, a detection neural network is constructed. Two CNNs with the same structure are developed to learn the features of the two proposed streams in the detection neural network, where the composite layer (Conv-BN-ReLU) again serves as the unit that learns the hierarchical features of each CNN stream. Specifically, in each stream, one max-pooling layer of fixed size 2x2 is added after every two composite layers, and the number of kernels of the convolutional layers doubles every two composite layers, from 16 to 64. Since the vessel wall in some MRI images is only 2 or 3 pixels thick, convolution kernels of size 3x3 with stride 1 are chosen. Inspired by global average pooling (GAP), a GAP layer is added to the convolutional feature map after each max-pooling layer and used as the feature vector of the fully connected layer to generate the required output. Finally, after the GAP layer, the remaining layer is a fully connected layer consisting of one sigmoid-activated neuron, which is used to detect the probability of atherosclerotic plaque.
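The GAP-plus-sigmoid-neuron detection head can be sketched as follows. The function names and weights are illustrative (in the actual network the fully connected weights are learned), and feature maps are represented as 2D lists.

```python
import math

def gap(feature_maps):
    """Global average pooling: one scalar per convolutional feature map."""
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

def detect(feature_maps, fc_weights, bias):
    """Single sigmoid-activated neuron on the GAP feature vector,
    producing the probability that a plaque is present."""
    vec = gap(feature_maps)
    z = bias + sum(w * v for w, v in zip(fc_weights, vec))
    return 1.0 / (1.0 + math.exp(-z))
```

GAP collapses each feature map to one number regardless of its spatial size, which keeps the fully connected layer small and is well suited to the thin (2-3 pixel) vessel wall structures mentioned above.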
To further exploit the multi-modal information in the morphological and image streams, the two corresponding convolutional feature maps are taken from the outputs (the i-th layer) of the two streams for detection. Specifically, the two streams are concatenated after a max-pooling layer, and the result is fed into a network comprising two composite layers (128 kernels of size 3x3), a 2x2 max-pooling layer, a GAP layer, and a fully connected layer with one sigmoid-activated neuron. Three outputs are thus generated: a morphology output, an image output, and a joint output, where the joint output is the final classification output of the detection neural network.
The carotid atherosclerotic plaque is then detected with the detection neural network. Using the constructed detection neural network, the output of the segmentation neural network is combined with the triple input sequence as input, and a loss function with layer weights that minimizes the classification error of the output layer is applied to obtain the detection result for carotid atherosclerotic plaque. The vessel wall map fed to the detection neural network is obtained by subtracting the lumen probability map from the outer wall probability map; the detection neural network comprises a morphological stream and an image stream, which are then merged at the feature level to produce the final detection output, thereby mitigating the influence of segmentation errors. Two further outputs, from the morphological stream and the image stream, are also generated, and two additional losses are exploited to provide deep supervision that "guides" the feature learning of each stream.
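The subtraction that produces the morphological-stream input can be written directly. The outer-wall-minus-lumen operation is from the text; clipping the result to [0, 1] is our defensive assumption for keeping a valid probability map:

```python
import numpy as np

def vessel_wall_map(lumen_prob, outer_wall_prob):
    """Morphological-stream input: outer wall probability map minus
    lumen probability map (per the text), clipped to [0, 1] as a
    defensive assumption of this sketch."""
    return np.clip(outer_wall_prob - lumen_prob, 0.0, 1.0)

# Toy 2x2 probability maps: the wall is where the outer-wall map is
# high but the lumen map is low.
outer = np.array([[0.9, 0.8], [0.7, 0.1]])
lumen = np.array([[0.8, 0.1], [0.0, 0.0]])
wall = vessel_wall_map(lumen, outer)
```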
In addition, a loss function is defined for the detection neural network, where W_m, W_i and W_j denote the layer weights of the morphological output, the image output and the joint output of the detection sub-network, respectively, and W_D denotes the set of all standard network layer parameters. The overall loss of the detection neural network, L_det(W_D, W_m, W_i, W_j), is defined as:

L_det(W_D, W_m, W_i, W_j) = α_m · ℓ_m + α_i · ℓ_i + α_j · ℓ_j

wherein: ℓ_m denotes the morphological output loss, ℓ_i denotes the image output loss, and ℓ_j denotes the joint output loss; the three hyper-parameters α_m, α_i and α_j are all set to 1. Each of ℓ_m, ℓ_i and ℓ_j is obtained by formula (2), the binary cross-entropy:

ℓ = − Σ_i [ y_i · log p(x_i) + (1 − y_i) · log(1 − p(x_i)) ]    (2)

wherein: y_i is the true label, and p(x_i) is the morphological output, image output or joint output for any set of triple input sequence and vessel wall map.
Compared with other existing methods, the proposed DeepMAD network (comprising the segmentation neural network and the detection neural network) achieves strong foreground segmentation performance (a Dice of 0.9594 for the lumen and 0.9657 for the outer wall) and better carotid atherosclerotic plaque detection (an AUC of 0.9503 and an accuracy of 0.8916) on a test dataset that includes unseen subjects from the same source as the training dataset. In addition, the trained DeepMAD model can be successfully transferred to another test dataset for the segmentation and detection tasks with remarkable performance (a Dice of 0.9475 for the lumen, a Dice of 0.9542 for the outer wall, an AUC of 0.9227, and a detection accuracy of 0.8679). See Tables 1 and 2 for the detailed performance figures.
TABLE 1 comparison of segmentation results for different methods
TABLE 2 comparison of test results of different methods
Method               AUC      Accuracy   Recall   Precision
Fine-tuned VGG-16    0.7183   0.4795     0.6376   0.6935
Image Network        0.9283   0.7288     0.8239   0.8593
DeepMAD w/o DS       0.9367   0.7870     0.8206   0.8878
DeepMAD-IS           0.9254   0.7768     0.7717   0.8688
DeepMAD-MS           0.9406   0.7766     0.8239   0.8796
The invention        0.9503   0.8029     0.8326   0.8916
The proposed DeepMAD network also automatically segments the lumen and outer wall together, enabling high-performance detection of atherosclerotic carotid plaque. The DeepMAD network may be used in clinical practice to relieve radiologists of tedious reading tasks such as screening examinations, separating normal carotid arteries from atherosclerotic ones, and delineating vessel wall contours.
Referring to fig. 4, the process for obtaining a triple input sequence from an MRI image is schematically illustrated, wherein (a) shows three regions of interest (marked by white squares) in consecutive carotid artery slices, (b) shows the corresponding cropped images of 80 × 80 pixels, and (c) shows the intensity-adjusted versions of the cropped images.
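The Fig. 4 pipeline, cropping an 80 × 80 ROI around a manually set center point in three adjacent slices, min-max adjusting each crop, and stacking them, can be sketched as below. The boundary handling (the center must lie at least 40 pixels from each edge) and function names are assumptions of this sketch:

```python
import numpy as np

def adjust(roi):
    """Min-max intensity normalisation, u_i = (v_i - v_min) / (v_max - v_min),
    computed over the pixels of one cropped ROI."""
    v_min, v_max = roi.min(), roi.max()
    return (roi - v_min) / (v_max - v_min)

def triple_input_sequence(slices, center, size=80):
    """Crop a size x size ROI around the (row, col) center point in each
    of three adjacent slices, adjust each, and stack them into the
    triple input sequence (our sketch of the Fig. 4 pipeline)."""
    r, c, h = center[0], center[1], size // 2
    rois = [adjust(s[r - h:r + h, c - h:c + h]) for s in slices]
    return np.stack(rois, axis=0)          # shape (3, size, size)

# Three adjacent 160x160 toy slices with a center point at (80, 80).
slices = [np.random.rand(160, 160) for _ in range(3)]
seq = triple_input_sequence(slices, center=(80, 80))
```

Each channel ends up in [0, 1] regardless of the scanner's raw intensity range, which is what the adjustment in formula (1) is for.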
Referring to fig. 5, T1-weighted images and the corresponding segmentation and detection results obtained from the deep morphological network are shown; row i shows the carotid bifurcation and the internal carotid artery, rows ii and iii show the corresponding lumen and outer wall segmentations produced by the proposed deep morphological network, row iv shows the expert's segmentation results, and row v shows the detection results, where NC indicates the absence of atherosclerotic plaque and AC indicates its presence.
Referring to fig. 6, the lumen and outer wall segmentation results obtained by physicians and by different methods are shown; (a-f) are the labels of trained radiologists, (g-l) are the results of the conventional U-Net method, (m-r) are the results of the proposed segmentation sub-network without the vessel wall loss, and (s-x) are the results of our segmentation network.
The method for segmenting the vascular wall based on the nuclear magnetic resonance image can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
If the vessel wall segmentation method based on the nuclear magnetic resonance image is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments are implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. Computer-readable storage media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals. The computer storage medium may be any available medium or data storage device that can be accessed by a computer, including but not limited to magnetic memory (e.g., floppy disk, hard disk, magnetic tape, magneto-optical disk (MO), etc.), optical memory (e.g., CD, DVD, BD, HVD, etc.), and semiconductor memory (e.g., ROM, EPROM, EEPROM, non-volatile memory (NAND flash), solid state disk (SSD), etc.).
In an exemplary embodiment, a computer device is also provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method for vessel wall segmentation based on magnetic resonance images when executing the computer program. The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc.
Different from traditional model-based segmentation methods, an effective segmentation neural network is provided that simultaneously segments the paired lumen and outer wall regions of the carotid artery in BB-VW-MRI images. A U-shaped convolutional neural network is used, in which the feature layers obtained at each upsampling step are superimposed to form a weighted fusion layer. In addition, the lumen region segmentation and the outer wall region segmentation are combined, and a new triple Dice loss is designed to supervise network learning, in which the relation between the lumen surface and the outer wall region can be formulated; finally, the probability maps of the carotid lumen and outer wall are obtained. The proposed DeepMAD network is an integral, end-to-end network in which the segmentation and detection neural networks are cascaded. To overcome the lack of spatial information in 2D slices along the carotid centerline, two additional adjacent slices are also input into the proposed DeepMAD network to provide 2.5D information, so that the DeepMAD network can achieve promising segmentation and detection performance on test datasets, including unseen subjects from the same source as the training dataset. The DeepMAD network provided by the invention segments the two-dimensional carotid vessel wall region and detects carotid atherosclerosis slices; a CNN segments the lumen and outer wall regions together, so that carotid atherosclerotic plaque is detected automatically. The method is convenient to use, simple to operate, and highly practical.
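The triple Dice loss described above, separate soft Dice terms for the lumen, the outer wall, and their difference (the vessel wall), can be sketched in numpy. The outer-minus-lumen form of the wall term follows the subtraction described earlier; the exact normalisation constants are an assumption of this sketch:

```python
import numpy as np

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 - 2*|P.Y| / (|P| + |Y|), over all pixels."""
    return 1.0 - 2.0 * np.sum(p * y) / (np.sum(p) + np.sum(y) + eps)

def triple_dice_loss(p_lumen, p_outer, y_lumen, y_outer,
                     alpha=1.0, beta=1.0, gamma=1.0):
    """Triple Dice loss: lumen Dice + outer-wall Dice + vessel-wall Dice,
    where the wall terms are the outer-minus-lumen differences.
    Hyper-parameters are all 1 per the text; the rest is our sketch."""
    wall_p = p_outer - p_lumen
    wall_y = y_outer - y_lumen
    return (alpha * dice_loss(p_lumen, y_lumen)
            + beta * dice_loss(p_outer, y_outer)
            + gamma * dice_loss(wall_p, wall_y))

# Toy 2x2 masks: the lumen lies inside the outer wall region.
y_lumen = np.array([[0., 1.], [0., 0.]])
y_outer = np.array([[1., 1.], [1., 0.]])
loss = triple_dice_loss(y_lumen, y_outer, y_lumen, y_outer)  # perfect prediction
```

A perfect prediction drives all three terms to (nearly) zero, while the third term explicitly penalises inconsistencies between the lumen and outer wall maps, which is the stated purpose of coupling the two segmentations.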
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (7)

1. A vascular wall segmentation method based on a nuclear magnetic resonance image is characterized by comprising the following steps:
s1: selecting three adjacent slices from all slices of the T1 weighted MRI image, selecting interested regions ROI from the three adjacent slices, forming a triple input sequence from the three interested regions ROI and inputting the triple input sequence into a segmentation neural network;
s2: sequentially carrying out a plurality of times of downsampling on the triple input sequence by segmenting a contraction path of the neural network, wherein convolution operation is carried out twice before each downsampling, and each convolution operation comprises 3x3 convolution, batch normalization and linear correction which are sequentially carried out;
s3: by dividing an expansion path of the neural network, carrying out a plurality of times of upsampling operations through a combined jump link according to the result of each downsampling, and parallelly superposing the results of each upsampling to obtain a feature map with the same size as the triple input sequence;
s4: after the feature map is convolved by 1x1, an initial inner cavity probability map and an initial outer wall probability map are obtained through sigmoid activation, the inner cavity probability map and the outer wall probability map are optimized for preset times through a segmentation loss function, a final inner cavity probability map and a final outer wall probability map are obtained, and segmentation is completed.
2. The vessel wall segmentation method based on the magnetic resonance image according to claim 1, wherein the specific method for selecting the region of interest ROI in S1 is as follows:
a manual center point is set on a slice of the T1 weighted MRI image, from which a region of interest ROI of 80x80 pixels is acquired.
3. The vessel wall segmentation method based on the magnetic resonance image according to claim 1, wherein the specific method for selecting the regions of interest ROI of three adjacent slices to constitute the triple input sequence in S1 is as follows:

each region of interest ROI is adjusted by formula (1) to obtain an adjusted image u_i:

u_i = (v_i − v_min) / (v_max − v_min)    (1)

wherein: v_i is the intensity of pixel i in the region of interest ROI to be adjusted, v_min is the minimum intensity of the pixels in the region of interest ROI to be adjusted, and v_max is the maximum intensity of the pixels in the region of interest ROI to be adjusted;

the three adjusted images are combined to form the triple input sequence.
4. The method of segmenting a blood vessel wall based on a magnetic resonance image according to claim 1, wherein the specific method in S4 for obtaining the initial inner cavity probability map and the outer wall probability map, by convolving the feature map by 1x1 and activating it by sigmoid, is as follows:

the feature map is convolved by 1x1 and activated by sigmoid to obtain the initial inner cavity probability map P_lumen and outer wall probability map P_outerwall:

P_lumen = σ( Σ_{m=1}^{M} w_m^l · F_m )

P_outerwall = σ( Σ_{m=1}^{M} w_m^o · F_m )

where σ(·) is the sigmoid activation function computed at each pixel, M is the number of upsamplings, F_m is the result of the m-th upsampling, w_m^l is the preset fusion weight for the m-th layer lumen prediction, and w_m^o is the preset fusion weight for the m-th layer outer wall prediction.
5. The method of claim 1, wherein the segmentation loss function L_seg(W_l, W_o, W_v) in S4 is:

L_seg(W_l, W_o, W_v) = α · ℓ_lumen + β · ℓ_outerwall + γ · ℓ_wall

wherein: W_l is the set of all standard network layer parameters used to calculate the inner cavity probability map, W_o is the set of all standard network layer parameters used to calculate the outer wall probability map, and W_v is the set of all standard network layer parameters used to calculate the vessel wall probability map; α, β and γ are three hyper-parameters and are each 1; ℓ_lumen is the inner cavity Dice loss, ℓ_outerwall is the outer wall Dice loss, ℓ_wall is the vessel wall Dice loss, and:

ℓ_lumen = 1 − ( 2 Σ_{i=1}^{N} p_i^l · y_i^l ) / ( Σ_{i=1}^{N} p_i^l + Σ_{i=1}^{N} y_i^l )

ℓ_outerwall = 1 − ( 2 Σ_{i=1}^{N} p_i^o · y_i^o ) / ( Σ_{i=1}^{N} p_i^o + Σ_{i=1}^{N} y_i^o )

ℓ_wall = 1 − ( 2 Σ_{i=1}^{N} (p_i^o − p_i^l) · (y_i^o − y_i^l) ) / ( Σ_{i=1}^{N} (p_i^o − p_i^l) + Σ_{i=1}^{N} (y_i^o − y_i^l) )

wherein N is the number of pixels of the image, p_i^l is the value of the inner cavity probability map P_lumen at pixel i, p_i^o is the value of the outer wall probability map P_outerwall at pixel i, y_i^l is the true value of the inner cavity Y_lumen at pixel i, and y_i^o is the true value of the outer wall Y_outerwall at pixel i.
6. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 5 when executing the computer program.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
CN201910906518.4A 2019-09-24 2019-09-24 Vascular wall segmentation method and device based on nuclear magnetic resonance image and storage medium Active CN110853045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910906518.4A CN110853045B (en) 2019-09-24 2019-09-24 Vascular wall segmentation method and device based on nuclear magnetic resonance image and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910906518.4A CN110853045B (en) 2019-09-24 2019-09-24 Vascular wall segmentation method and device based on nuclear magnetic resonance image and storage medium

Publications (2)

Publication Number Publication Date
CN110853045A true CN110853045A (en) 2020-02-28
CN110853045B CN110853045B (en) 2022-02-11

Family

ID=69596189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910906518.4A Active CN110853045B (en) 2019-09-24 2019-09-24 Vascular wall segmentation method and device based on nuclear magnetic resonance image and storage medium

Country Status (1)

Country Link
CN (1) CN110853045B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353442A (en) * 2020-03-03 2020-06-30 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium
CN112560945A (en) * 2020-12-14 2021-03-26 珠海格力电器股份有限公司 Equipment control method and system based on emotion recognition
WO2022033015A1 (en) * 2020-08-11 2022-02-17 天津拓影科技有限公司 Method and apparatus for processing abnormal region in image, and image segmentation method and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460758A (en) * 2018-02-09 2018-08-28 河南工业大学 The construction method of Lung neoplasm detection model
US20190015059A1 (en) * 2017-07-17 2019-01-17 Siemens Healthcare Gmbh Semantic segmentation for cancer detection in digital breast tomosynthesis
CN109377500A (en) * 2018-09-18 2019-02-22 平安科技(深圳)有限公司 Image partition method and terminal device neural network based
CN109801294A (en) * 2018-12-14 2019-05-24 深圳先进技术研究院 Three-dimensional atrium sinistrum dividing method, device, terminal device and storage medium
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network
CN110136157A (en) * 2019-04-09 2019-08-16 华中科技大学 A kind of three-dimensional carotid ultrasound image vascular wall dividing method based on deep learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHIRAG BALAKRISHNA ET AL: "Automatic detection of lumen and media in the IVUS images using U-Net with VGG16 Encoder", 《ARXIV》 *
刘丰伟等: "人工智能在医学影像诊断中的应用", 《北京生物医学工程》 *


Also Published As

Publication number Publication date
CN110853045B (en) 2022-02-11

Similar Documents

Publication Publication Date Title
Hasan et al. DSNet: Automatic dermoscopic skin lesion segmentation
Sun et al. Saunet: Shape attentive u-net for interpretable medical image segmentation
Yang et al. Robust segmentation of arterial walls in intravascular ultrasound images using Dual Path U-Net
CN110852987B (en) Vascular plaque detection method and device based on deep morphology and storage medium
Sevastopolsky Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network
CN110853045B (en) Vascular wall segmentation method and device based on nuclear magnetic resonance image and storage medium
Praveen et al. Ischemic stroke lesion segmentation using stacked sparse autoencoder
Acharya et al. Detection of acute lymphoblastic leukemia using image segmentation and data mining algorithms
Chen et al. 3D intracranial artery segmentation using a convolutional autoencoder
CN112529839A (en) Method and system for extracting carotid artery blood vessel center line in nuclear magnetic resonance image
Abdelmaguid et al. Left ventricle segmentation and volume estimation on cardiac mri using deep learning
Agarwal et al. Review on Deep Learning based Medical Image Processing
Adegun et al. An enhanced deep learning framework for skin lesions segmentation
Chalakkal et al. Improved vessel segmentation using curvelet transform and line operators
Mustafa et al. Infrared and visible image fusion based on dilated residual attention network
Banerjee et al. A CADe system for gliomas in brain MRI using convolutional neural networks
CN112102259A (en) Image segmentation algorithm based on boundary guide depth learning
Salahuddin et al. Multi-resolution 3d convolutional neural networks for automatic coronary centerline extraction in cardiac CT angiography scans
Singh et al. Deep-learning based system for effective and automatic blood vessel segmentation from Retinal fundus images
Brahim et al. A 3D network based shape prior for automatic myocardial disease segmentation in delayed-enhancement MRI
Bindhu et al. Segmentation of skin cancer using Fuzzy U-network via deep learning
Abbasi et al. Automatic brain ischemic stroke segmentation with deep learning: A review
Selvathi et al. Brain region segmentation using convolutional neural network
Kanse et al. HG-SVNN: harmonic genetic-based support vector neural network classifier for the glaucoma detection
Singamshetty et al. Brain Tumor Detection Using the Inception Deep Learning Technique

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant