CN113052860A - Three-dimensional cerebral vessel segmentation method and storage medium - Google Patents

Three-dimensional cerebral vessel segmentation method and storage medium

Info

Publication number
CN113052860A
CN113052860A (application CN202110360853.6A)
Authority
CN
China
Prior art keywords
edge
cerebrovascular
features
feature
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110360853.6A
Other languages
Chinese (zh)
Other versions
CN113052860B (en)
Inventor
夏立坤
张浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capital Normal University
Original Assignee
Capital Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Capital Normal University filed Critical Capital Normal University
Priority to CN202110360853.6A priority Critical patent/CN113052860B/en
Publication of CN113052860A publication Critical patent/CN113052860A/en
Application granted granted Critical
Publication of CN113052860B publication Critical patent/CN113052860B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional cerebrovascular segmentation method and a storage medium. Preprocessed MRA data are segmented by a convolutional neural network that effectively and fully extracts cerebrovascular features, generating edge-optimized cerebrovascular features; the final binarized cerebrovascular segmentation result is then produced with the maximum between-class variance (Otsu) algorithm. Experimental results show that, under the same conditions, the proposed three-dimensional cerebrovascular segmentation framework based on an improved encoding-decoding convolutional neural network structure outperforms the current state-of-the-art frameworks, effectively addresses the difficulty of extracting cerebrovascular edge features, and markedly improves the cerebrovascular segmentation result.

Description

Three-dimensional cerebral vessel segmentation method and storage medium
Technical Field
The present invention relates to the field of computer technology, and in particular, to a three-dimensional cerebrovascular segmentation method and a storage medium.
Background
Automatic extraction of cerebral vessels is of great importance for understanding the pathogenesis, diagnosis and treatment of cerebrovascular diseases, and accurate reconstruction of cerebral vessels from Magnetic Resonance Angiography (MRA) supports many clinical applications, including early diagnosis of vascular-related diseases, optimal treatment and neurosurgical planning. However, manual labeling of cerebrovascular structures is tedious even for experts, and conventional computer-aided systems cannot reliably extract and segment cerebrovascular structures because of the high degree of anatomical variation.
With the rapid development of deep neural networks, feature extraction and segmentation methods based on deep learning have proliferated; because deep neural networks do not require hand-designed features, they save a large amount of tedious manual design work. Existing MRA-based cerebrovascular segmentation mainly comprises two steps: data preprocessing, and cerebrovascular segmentation based on a neural network algorithm. Data preprocessing reduces the storage space and computation required for neural network training, and a convolutional-neural-network-based segmentation algorithm extracts the relevant cerebrovascular features for segmentation. Fully extracting cerebrovascular edge features is the key part of such algorithms, yet existing methods cannot extract these edge features well, which degrades the final cerebrovascular segmentation result.
Disclosure of Invention
The invention provides a three-dimensional cerebrovascular segmentation method and a storage medium, which are used for solving the problem that the existing method cannot well extract the edge characteristics of a cerebrovascular vessel and improving the accuracy of final cerebrovascular segmentation.
In a first aspect, the present invention provides a method for segmenting a three-dimensional cerebral blood vessel, the method comprising: segmenting the preprocessed cerebrovascular MRA data through a cerebrovascular feature extraction network Edge-Net to generate edge-optimized cerebrovascular features; and automatically generating an optimal threshold from the pixel probability values of the image using the maximum between-class variance (Otsu) algorithm to produce the final binarized cerebrovascular segmentation result.
Optionally, the method further comprises: preprocessing MRA data;
the preprocessing of the MRA data includes:
removing skull in the MRA data;
randomly cutting the MRA data after the skull is removed into data blocks with preset sizes;
performing MRA data expansion on each data block by randomly rotating preset degrees;
and carrying out zero mean and unit variance normalization processing on the expanded MRA data to obtain preprocessed MRA data.
Optionally, the segmenting the preprocessed cerebrovascular MRA data through the cerebrovascular feature extraction network Edge-Net to generate Edge-optimized cerebrovascular features includes:
and weighting the cerebrovascular features from the encoder and the decoder through a cerebrovascular feature extraction network Edge-Net so as to adaptively select the Edge-optimized cerebrovascular features.
Optionally, the weighting, by the cerebrovascular feature extraction network Edge-Net, the cerebrovascular features from the encoder and the decoder to adaptively select the Edge-optimized cerebrovascular features includes:
edge weighting is carried out on the features from the encoder in each layer, a reverse edge attention mechanism REAM is embedded between the MRA features in different layers, and edge feature extraction of the cerebrovascular features is carried out;
screening and fusing the edge features extracted by the encoder and the edge features recovered by the decoder;
and updating the neural network parameters of the edge features after screening and fusion by using a preset edge optimization loss function to obtain edge-optimized cerebrovascular features.
Optionally, the edge weighting of the features from the encoder in each layer, embedding a reverse edge attention mechanism REAM between different layers, and performing edge feature extraction on the cerebrovascular features includes:
setting the characteristics of the encoder output of each layer to PoutAt PoutUsing a x a convolution to produce
Figure BDA0003005466310000021
Figure BDA0003005466310000022
The minimal vessel boundary attention enhancement weight a is:
Figure BDA0003005466310000023
wherein h, w and d represent the size of the feature mapping, namely height, width and depth, c represents the number of channels, and sigma is a Sigmoid function;
setting the upper layer characteristic as B epsilon Rh×w×d×cBoundary feature
Figure BDA0003005466310000031
Comprises the following steps:
Figure BDA0003005466310000032
h is the height of the feature mapping, w is the width of the feature mapping, d is the depth of the feature mapping, c is the number of channels, a is a natural number, ☉ is matrix multiplication, the upper layer feature B is multiplied by the attention weight A to obtain an edge feature, and REAM is embedded between different layers to extract the edge feature of the cerebral vessels.
Optionally, the filtering and fusing the edge features extracted by the encoder and the edge features restored by the decoder includes:
let the output of the encoder and the output of the decoder in the same layer be F1,F2To F1,F2Carrying out fusion, i.e. on F1And F2Adding element by element to obtain a fusion characteristic U;
carrying out global average pooling operation on the fusion characteristics U to obtain global information s on each channel;
and performing full connection operation on the global information s to find the proportion of each channel, selecting information with different weights through attention among the channels, and applying a softmax function to each channel to obtain the corresponding weight. F is to be1,F2And multiplying the edge features by corresponding weights respectively and then adding the weights to obtain the fused edge features.
Optionally, the updating of the neural network parameters by using a preset edge optimization loss function on the edge features after the screening and the fusion to obtain edge-optimized cerebrovascular features includes:
optimizing the fused edge characteristics sequentially through the mask label and the soft edge label, when the training result meets the preset requirement, generating the soft edge label from the mask label by using a Laplacian operator, and guiding a neural network by using the generated soft edge label to obtain the edge-optimized cerebrovascular characteristics.
Optionally, the optimizing the fused edge feature through the mask label includes:
optimizing the fused edge characteristics by using a mask label through a Dice loss function;
the optimizing the fused edge feature by the soft edge label includes:
combining the Dice loss function and the BCE loss function, the soft edge label is used to further optimize the fused edge features that have already been optimized with the mask label.
Optionally, the combining of the Dice loss function and the BCE loss function, using the soft edge label to further optimize the fused edge features already optimized with the mask label, includes:
the overall loss function formed by combining the Dice loss function and the BCE loss function is:
L = δ · L_Dice(p, g) + τ · L_BCE(p, g_edge) + log(1 + δτ)
with L_Dice(p, g) = 1 − DSC(p, g)
wherein p denotes the prediction output by the neural network, g is the original mask label, g_edge is the generated soft edge label, DSC is the evaluation index (Dice similarity coefficient) used during neural network training, λ is a hyper-parameter, δ and τ are the weight balance coefficients of the Dice loss function and the BCE loss function, updated automatically as the neural network trains, and log(1 + δτ) is an auxiliary term.
In a second aspect, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by at least one processor, implements any one of the above three-dimensional cerebrovascular segmentation methods.
The invention has the following beneficial effects:
the invention segments the preprocessed MRA data through the convolutional neural network, and the convolutional neural network can effectively and more fully extract the cerebrovascular characteristics, thereby generating edge-optimized cerebrovascular characteristics, and finally generating the final binarization cerebrovascular segmentation result by using the OSTU algorithm. The experimental result shows that compared with the existing network framework at present, the three-dimensional cerebrovascular segmentation framework based on the improved coding-decoding convolutional neural network structure provided by the invention is superior to the most advanced framework at present under the same condition, the problem that the edge characteristics of the cerebrovascular vessel cannot be well extracted by the existing method is effectively solved, and the accuracy of the final cerebrovascular segmentation result is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram of a three-dimensional cerebrovascular segmentation process based on an improved encoding-decoding convolutional neural network structure according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a feature selection module according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Aiming at the problem in the prior art that cerebrovascular edge features cannot be extracted well, which affects the final segmentation result, the embodiment of the invention provides an end-to-end automatic segmentation method for three-dimensional cerebral vessels. Based on a convolutional neural network, the method fully extracts vessel edge structure information from Magnetic Resonance Angiography (MRA) data by combining a reverse edge attention mechanism module, a feature selection module and an edge optimization loss function, obtaining the best cerebrovascular segmentation results on an open-source database. The present invention will be described in further detail below with reference to the drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The embodiment of the invention provides a three-dimensional cerebrovascular segmentation method, and referring to fig. 1, the method comprises the following steps:
s101, segmenting the preprocessed cerebrovascular MRA data through a cerebrovascular feature extraction network Edge-Net to generate Edge-optimized cerebrovascular features after cerebrovascular probability mapping;
in the embodiment of the present invention, the preprocessing the MRA data includes:
removing skull in the MRA data;
randomly cutting the MRA data after the skull is removed into data blocks with preset sizes;
performing MRA data expansion on each data block by randomly rotating preset degrees;
and carrying out zero mean and unit variance normalization processing on the expanded MRA data to obtain preprocessed MRA data.
It should be noted that, the data block with the preset size and the preset random rotation angle described in the embodiment of the present invention may be set according to actual operations, and the present invention is not limited to this specifically.
In a specific implementation, the embodiment of the invention uses the magnetic resonance brain image segmentation tool FSL-BET (Brain Extraction Tool) to remove the skull from the MRA data and randomly crops patches of size 96 × 96 × 96, which reduces the memory occupied during training while ensuring each patch contains sufficient vessel information. Each patch is then randomly rotated by 90 degrees for data expansion, which increases data diversity, makes the trained model more robust and accelerates convergence of the neural network; finally, the MRA data are normalized to zero mean and unit variance.
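As a concrete illustration, the cropping, rotation and normalization steps above can be sketched in a few lines of NumPy. This is a minimal sketch, assuming skull stripping with FSL-BET has already been performed; the function name and the choice of a single random 90-degree rotation per patch are illustrative, not the patent's exact pipeline.

```python
import numpy as np

def preprocess_patch(volume, patch_size=96, rng=None):
    """Crop a random patch, apply a random 90-degree rotation for data
    expansion, and normalize to zero mean and unit variance."""
    if rng is None:
        rng = np.random.default_rng()
    # Random crop of patch_size^3 voxels.
    starts = [rng.integers(0, s - patch_size + 1) for s in volume.shape]
    patch = volume[starts[0]:starts[0] + patch_size,
                   starts[1]:starts[1] + patch_size,
                   starts[2]:starts[2] + patch_size].astype(np.float32)
    # Random 90-degree rotation about a random pair of axes (augmentation).
    axes = tuple(int(a) for a in rng.choice(3, size=2, replace=False))
    patch = np.rot90(patch, k=int(rng.integers(0, 4)), axes=axes)
    # Zero-mean, unit-variance normalization.
    patch = (patch - patch.mean()) / (patch.std() + 1e-8)
    return patch
```

In practice the crop would also be constrained to contain sufficient vessel voxels, as the description notes.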
In a specific implementation of S101 in the embodiment of the present invention, the cerebrovascular features from the encoder and the decoder are weighted and fused by the cerebrovascular feature extraction network Edge-Net to adaptively select features, generating the edge-optimized cerebrovascular features, i.e. the cerebrovascular probability map.
That is, in the embodiment of the present invention, edge weighting is performed on the features from the encoder in each layer, a reverse edge attention mechanism REAM is embedded between the features in different layers, and edge feature extraction of cerebrovascular features is performed;
screening and fusing the edge features extracted by the encoder and the edge features recovered by the decoder;
and updating the neural network parameters of the edge features after screening and fusion by using a preset edge optimization loss function to obtain edge-optimized cerebrovascular features.
Existing neural-network-based cerebrovascular segmentation algorithms fuse the recovered features from the decoder and the encoded features from the encoder through a simple operation that ignores the relative weights of the two kinds of features, resulting in low segmentation accuracy. To address this, the invention innovatively proposes a Reverse Edge Attention Module (REAM) to extract cerebrovascular edge features, and a Feature Selection Module (FSM) to weight the edge features from the encoder and the recovered features from the decoder so as to adaptively select features, improving feature utilization and ultimately improving the MRA cerebrovascular segmentation result.
And S102, automatically generating an optimal threshold from the pixel probability values of the image using the maximum between-class variance (Otsu) algorithm to obtain the final binarized cerebrovascular segmentation result.
Specifically, the embodiment of the invention uses the Otsu algorithm to automatically generate an optimal threshold from the pixel probability values of the image, producing the final binarized cerebrovascular segmentation result in which vessel pixels are 1 and background pixels are 0.
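The Otsu thresholding step above can be sketched with a plain NumPy histogram-based implementation. The bin count and the [0, 1] value range are assumptions appropriate for a probability map; the patent does not specify these details.

```python
import numpy as np

def otsu_binarize(prob_map, bins=256):
    """Maximum between-class variance (Otsu) thresholding of a probability
    map: vessel voxels become 1, background voxels 0."""
    hist, edges = np.histogram(prob_map.ravel(), bins=bins, range=(0.0, 1.0))
    hist = hist.astype(np.float64)
    total = hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    mean_all = (hist * centers).sum() / total
    best_t, best_var = 0.0, -1.0
    w0 = 0.0    # cumulative background weight
    sum0 = 0.0  # cumulative background mean contribution
    for i in range(bins - 1):
        w0 += hist[i] / total
        sum0 += hist[i] * centers[i] / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = sum0 / w0
        mu1 = (mean_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return (prob_map > best_t).astype(np.uint8)
```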
In general, the method of embodiments of the present invention includes three stages: a data preprocessing stage, a cerebrovascular feature extraction stage and a segmentation result generation stage. The embodiment of the invention first preprocesses the data so that the convolutional neural network can extract cerebrovascular features more effectively and fully, then fully extracts the cerebrovascular features with the REAM- and FSM-based cerebrovascular feature extraction network Edge-Net, selects the encoder and decoder features according to weight information, further optimizes the cerebrovascular edges with the edge optimization loss function, and finally generates the final binarized cerebrovascular segmentation result with the Otsu algorithm. Experimental results show that, under the same conditions, this three-dimensional cerebrovascular segmentation framework based on an improved encoding-decoding convolutional neural network structure outperforms the current state-of-the-art frameworks.
In the embodiment of the present invention, the edge weighting of the features from the encoder in each layer, embedding a reverse edge attention mechanism REAM between the features in different layers, and performing edge feature extraction of the cerebrovascular features includes:
setting the features output by the encoder at each layer to P_out and applying an a×a×a convolution (a is a natural number; 1 × 1 × 1 in the implementation) to produce e, with the small vessel boundary attention enhancement weight:
A = 1 − σ(e)
Setting the upper-layer feature to B ∈ R^(h×w×d×c), the boundary feature E_edge ∈ R^(h×w×d×c) is:
E_edge = B ⊙ A
wherein ⊙ denotes multiplication of the upper-layer feature B by the attention weight A to obtain the edge feature; embedding REAM between different layers extracts the edge features of the cerebral vessels.
Specifically, the data processed in the first stage are sent to the designed U-shaped convolutional neural network Edge-Net for training; the network is an end-to-end deep neural network based on an encoder-decoder structure. The proposed cerebrovascular feature extraction network (Edge-Net) segments the preprocessed cerebrovascular data to generate cerebrovascular probability features. The specific steps are as follows:
the encoder is based on the ResNet block comprising four encoder stages including a ResBlock and a max-pooling layer, and the decoder comprising three decoding stages including a deconvolution and a ResBlock.
The reverse edge attention mechanism module REAM in the embodiment of the invention performs edge weighting on the features of each layer from the encoder, thereby realizing edge feature extraction. Define the features output by the encoder at each layer as P_out. First, a 1 × 1 × 1 convolution is applied to P_out to produce e ∈ R^(h×w×d×1).
The boundary attention enhancement weight for small blood vessels is defined as:
A = 1 − σ(e)
Define the upper-layer feature as B ∈ R^(h×w×d×c); the boundary feature E_edge ∈ R^(h×w×d×c) can then be defined as:
E_edge = B ⊙ A
where ⊙ denotes multiplication of the upper-layer feature B by the attention weight A to obtain the edge feature; embedding REAM between different layers thus realizes extraction of the cerebrovascular edge features.
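As a minimal NumPy sketch of the REAM computation described above: the 1 × 1 × 1 convolution is written as a per-voxel channel projection, the reverse attention weight follows the reconstructed form A = 1 − σ(e), and broadcasting stands in for the ⊙ product. The function and parameter names are illustrative, not the patent's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ream(p_out, upper_feature, conv_weight):
    """Reverse edge attention sketch: e = Conv1x1x1(P_out),
    A = 1 - sigmoid(e), edge feature = B * A (broadcast over channels)."""
    # A 1x1x1 convolution is a per-voxel linear map over the channel axis.
    e = np.tensordot(p_out, conv_weight, axes=([-1], [0]))  # (h, w, d, 1)
    a = 1.0 - sigmoid(e)   # reverse attention: confident regions are suppressed,
                           # uncertain boundary regions get high weight
    return upper_feature * a
```

Because A lies strictly in (0, 1), the module only attenuates the upper-layer feature, emphasizing voxels near vessel boundaries where the prediction is uncertain.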
In addition, in the embodiment of the present invention, the screening and fusing the edge features extracted by the encoder and the edge features restored by the decoder includes:
let the output of the encoder and the output of the decoder in the same layer be F1,F2To F1And F2Performing fusion, namely adding the sums element by element to obtain a fusion characteristic U;
carrying out global average pooling (gp) operation on the fusion characteristics U to obtain global information s on each channel;
and performing full connection (fc) operation on the global information s to find the proportion of each channel, selecting information with different weights through attention among the channels, and applying softmax operation on each channel to obtain the corresponding weight.
Multiply F_1 and F_2 by their corresponding weights to obtain the two branch outputs V_1 and V_2, then add them to obtain the fused edge feature, as shown in fig. 2.
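The FSM fusion above can be sketched as follows in NumPy. The fully connected weights `fc_w1` and `fc_w2` are hypothetical stand-ins for the learned parameters, and the softmax is taken across the two branches for each channel, so the branch weights sum to 1 per channel.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_selection(f1, f2, fc_w1, fc_w2):
    """FSM sketch: element-wise sum, global average pooling, per-branch
    fully connected projections, channel-wise softmax, weighted recombination."""
    u = f1 + f2                              # fuse by element-wise addition
    s = u.mean(axis=(0, 1, 2))               # global average pooling -> (c,)
    z1 = fc_w1 @ s                           # channel logits, branch 1
    z2 = fc_w2 @ s                           # channel logits, branch 2
    w = softmax(np.stack([z1, z2]), axis=0)  # softmax across branches per channel
    v1 = f1 * w[0]                           # weighted branch outputs
    v2 = f2 * w[1]
    return v1 + v2                           # fused edge feature
```

A useful consequence of the per-channel softmax: when the two inputs are identical, the weights sum to 1 and the module reduces to the identity.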
In addition, in the embodiment of the present invention, the updating of the neural network parameters by using a preset edge optimization loss function on the edge features after the screening and the fusion to obtain edge-optimized cerebrovascular features includes: optimizing the fused edge characteristics sequentially through the mask label and the soft edge label, when the training result meets the preset requirement, generating the soft edge label from the mask label by using a Laplacian operator, and guiding a neural network by using the generated soft edge label to obtain the edge-optimized cerebrovascular characteristics.
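Soft edge label generation with a Laplacian operator, as described above, can be sketched like this: a 6-neighbor discrete 3D Laplacian cancels out in the interior of the mask and responds only at boundary voxels. The normalization to [0, 1] is an assumption about how the "soft" labels are obtained; the patent does not give the exact formula.

```python
import numpy as np

def soft_edge_label(mask):
    """Generate a soft edge label from a binary mask label using a
    6-neighbor discrete Laplacian; only boundary voxels get a nonzero
    response, normalized to [0, 1]."""
    m = mask.astype(np.float32)
    lap = -6.0 * m
    for axis in range(3):  # sum of the six face neighbors
        lap += np.roll(m, 1, axis=axis) + np.roll(m, -1, axis=axis)
    edge = np.abs(lap)
    return edge / (edge.max() + 1e-8)
```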
In specific implementation, the optimization of the fused edge features through the mask label in the embodiment of the present invention includes:
optimizing the fused edge characteristics by using a mask label through a Dice loss function;
the optimizing the fused edge feature by the soft edge label includes:
combining the Dice loss function and the BCE loss function, the soft edge label is used to further optimize the fused edge features that have already been optimized with the mask label.
Specifically, the combining of the Dice loss function and the BCE loss function, using the soft edge label to further optimize the fused edge features already optimized with the mask label, includes:
the overall loss function formed by combining the Dice loss function and the BCE loss function is:
L = δ · L_Dice(p, g) + τ · L_BCE(p, g_edge) + log(1 + δτ)
with L_Dice(p, g) = 1 − DSC(p, g)
wherein p denotes the prediction output by the neural network, g is the original mask label, g_edge is the generated soft edge label, DSC is the evaluation index (Dice similarity coefficient) used during neural network training, λ is a hyper-parameter set to 0.8, and δ and τ are the weight balance coefficients of the Dice loss function and the BCE loss function, updated automatically as the neural network trains; log(1 + δτ) is an auxiliary term.
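A sketch of the combined loss, under the reconstruction above. This is a hedged reading of the patent's equations (whose images are not reproduced in the text): the δ-weighted Dice term uses the mask label, the τ-weighted BCE term uses the soft edge label, and log(1 + δτ) is the auxiliary term; the hyper-parameter λ and the exact learned-weight update are omitted.

```python
import numpy as np

def dice_loss(p, g, eps=1e-6):
    """1 - DSC, the Dice similarity coefficient loss."""
    inter = (p * g).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + g.sum() + eps)

def bce_loss(p, g, eps=1e-7):
    """Binary cross-entropy, with clipping for numerical stability."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(g * np.log(p) + (1 - g) * np.log(1 - p)).mean())

def edge_loss(p, g, g_edge, delta, tau):
    """Combined loss sketch: delta-weighted Dice on the mask label g,
    tau-weighted BCE on the soft edge label g_edge, plus log(1 + delta*tau)."""
    return (delta * dice_loss(p, g)
            + tau * bce_loss(p, g_edge)
            + np.log(1.0 + delta * tau))
```

In the patent, δ and τ are themselves updated automatically during training rather than fixed as here.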
In a specific implementation of the Edge-Net network, the embodiment of the invention first extracts high-level semantic features of the cerebral vessels from the input with the encoder; second, the high-level semantic features obtained by the encoder are sent to the decoder for feature recovery; third, edge features are extracted from each layer of the encoder through REAM, the Feature Selection Module (FSM) performs weighted adaptive selection and fusion of the REAM features and the recovered decoder features, and finally the edges are optimized through the edge optimization loss function.
In general, the method of the embodiment of the present invention addresses the problem that existing neural network segmentation algorithms cannot fuse encoder and decoder features according to their weights: edge weighting is performed on the encoding features of each layer through REAM, and the REAM-weighted encoding features and the decoder features are then adaptively fused by the Feature Selection Module (FSM), improving feature utilization and thereby the final segmentation result. Meanwhile, the edge optimization loss function guides Edge-Net training in two stages; compared with training only on the original cerebrovascular labels, generating soft edge labels with the Laplacian operator better guides the neural network to extract edge features. The loss also exploits the complementary advantages of the Dice loss function and the BCE loss function: it focuses on the classification information of the vessels while alleviating the class imbalance in cerebrovascular data, making the finally generated cerebrovascular edges more accurate. Compared with other existing methods, the method achieves the highest accuracy on the open-source data set.
In addition, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by at least one processor, implements any one of the above three-dimensional cerebrovascular segmentation methods. For details, reference is made to the method embodiments above, which are not repeated here.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, and the scope of the invention should not be limited to the embodiments described above.

Claims (10)

1. A method of three-dimensional cerebrovascular segmentation, comprising:
segmenting the preprocessed cerebrovascular MRA data through a cerebrovascular feature extraction network Edge-Net to generate Edge-optimized cerebrovascular features;
and automatically generating an optimal threshold according to the pixel probability values of the image using the maximum between-class variance (OTSU) algorithm on the cerebrovascular features, to generate a final binarized cerebrovascular segmentation result.
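The OTSU thresholding step of claim 1 can be sketched in plain NumPy. This is an illustrative reading rather than the patented implementation; the 256-bin histogram over probability values in [0, 1] and the toy bimodal data are assumptions:

```python
import numpy as np

def otsu_threshold(probs, bins=256):
    """Maximum between-class variance (OTSU) threshold selection."""
    hist, edges = np.histogram(probs.ravel(), bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                 # class-0 (background) probability
    w1 = 1.0 - w0                        # class-1 (vessel) probability
    mu0 = np.cumsum(hist * centers)      # cumulative first moment
    mu_t = mu0[-1]                       # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu0) ** 2 / (w0 * w1)   # between-class variance
    sigma_b = np.nan_to_num(sigma_b)
    return centers[int(np.argmax(sigma_b))]

# A toy bimodal "probability map": mostly background, a few vessel voxels.
probs = np.concatenate([
    np.random.default_rng(0).uniform(0.0, 0.2, 900),
    np.random.default_rng(1).uniform(0.8, 1.0, 100),
])
t = otsu_threshold(probs)
mask = (probs > t).astype(np.uint8)      # final binarized segmentation
```

Because the network outputs voxel probabilities, OTSU removes the need for a hand-tuned cutoff: the threshold adapts to each volume's probability distribution.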
2. The method of claim 1, further comprising: preprocessing MRA data;
the preprocessing of the MRA data includes:
removing skull in the MRA data;
randomly cutting the MRA data after the skull is removed into data blocks with preset sizes;
augmenting the MRA data by randomly rotating each data block by a preset number of degrees;
and carrying out zero mean and unit variance normalization processing on the expanded MRA data to obtain preprocessed MRA data.
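The preprocessing pipeline of claim 2 can be sketched as follows. The patch size (32) and rotation step (90 degrees) stand in for the claim's "preset" values, `scipy.ndimage.rotate` is one possible rotation implementation, and skull stripping is assumed to have been done upstream:

```python
import numpy as np
from scipy.ndimage import rotate

def preprocess(volume, patch=32, angle=90, rng=None):
    """Random crop -> rotation augmentation -> z-score normalization."""
    rng = rng if rng is not None else np.random.default_rng()
    # Randomly cut a data block of preset size from the skull-stripped volume.
    z, y, x = (int(rng.integers(0, s - patch + 1)) for s in volume.shape)
    block = volume[z:z + patch, y:y + patch, x:x + patch]
    # Expand the data by rotating a random multiple of the preset angle.
    k = int(rng.integers(0, 360 // angle))
    block = rotate(block, k * angle, axes=(1, 2), reshape=False, order=1)
    # Zero-mean, unit-variance normalization.
    return (block - block.mean()) / (block.std() + 1e-8)

vol = np.random.default_rng(0).normal(size=(64, 64, 64))
block = preprocess(vol, rng=np.random.default_rng(42))
```

Normalizing per block (rather than per volume) is an assumption; either choice satisfies the claim's zero-mean, unit-variance requirement.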
3. The method of claim 1, wherein the segmenting the preprocessed cerebrovascular MRA data through the cerebrovascular feature extraction network Edge-Net to generate Edge-optimized cerebrovascular features comprises:
and weighting the cerebrovascular features from the encoder and the decoder through a cerebrovascular feature extraction network Edge-Net so as to adaptively select the Edge-optimized cerebrovascular features.
4. The method of claim 3, wherein the weighting of the cerebrovascular features from the encoder and decoder by the cerebrovascular feature extraction network Edge-Net to adaptively select Edge-optimized cerebrovascular features comprises:
edge weighting is carried out on the features from the encoder in each layer, a reverse edge attention mechanism REAM is embedded between the features in different layers, and edge feature extraction of cerebral vessels is carried out;
screening and fusing the edge features extracted by the encoder and the edge features recovered by the decoder;
and updating the neural network parameters of the edge features after screening and fusion by using a preset edge optimization loss function to obtain edge-optimized cerebrovascular features.
5. The method of claim 4, wherein the edge weighting is performed on the features from the encoder in each layer, the reverse edge attention mechanism REAM is embedded between the features in different layers, and the edge feature extraction of the cerebral vessels is performed, comprising:
setting the feature output by the encoder of each layer to P_out; applying an a×a convolution to P_out to produce P̂ ∈ R^{h×w×d×c};
the minimal vessel boundary attention enhancement weight A is:
A = 1 − σ(P̂)
wherein h, w and d represent the size of the feature map, namely its height, width and depth, c represents the number of channels, and σ is the Sigmoid function;
setting the upper-layer feature as B ∈ R^{h×w×d×c}, the boundary feature B_edge ∈ R^{h×w×d×c} is:
B_edge = A ☉ B
wherein h is the height of the feature map, w is its width, d is its depth, c is the number of channels, a is a natural number, and ☉ denotes element-wise matrix multiplication; the upper-layer feature B is multiplied by the attention weight A to obtain the edge feature, and REAM is embedded between different layers to extract the edge features of the cerebral vessels.
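The reverse edge attention of claim 5 can be sketched in NumPy. The a×a convolution is replaced by an optional callable (identity by default) since the kernel size a is left open in the claim, and the claimed product ☉ is read here as element-wise multiplication:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ream(p_out, b, conv=None):
    """Reverse edge attention mechanism (REAM) sketch.

    p_out : encoder output of the current layer, shape (h, w, d, c).
    b     : upper-layer feature, same shape.
    conv  : stand-in for the a x a convolution; identity by default.
    """
    p_hat = conv(p_out) if conv is not None else p_out
    # Reverse the attention: confident foreground responses get low weight,
    # pushing the focus toward thin-vessel boundaries.
    a_weight = 1.0 - sigmoid(p_hat)
    # Element-wise product with the upper-layer feature yields edge features.
    return a_weight * b

rng = np.random.default_rng(0)
p_out = rng.normal(size=(4, 4, 4, 2))
b_feat = rng.normal(size=(4, 4, 4, 2))
edge_feat = ream(p_out, b_feat)
```

The reversal (1 − σ) is what distinguishes this from ordinary attention: high-confidence vessel interiors are suppressed, so the residual signal concentrates at the boundaries of small vessels.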
6. The method of claim 5, wherein the filtering and fusing the edge features extracted by the encoder and the edge features recovered by the decoder comprises:
letting the outputs of the encoder and the decoder in the same layer be F1 and F2, fusing F1 and F2 by element-wise addition to obtain a fused feature U;
performing a global average pooling operation on the fused feature U to obtain global information s on each channel;
performing a fully connected operation on the global information s to find the proportion of each channel, selecting information of different weights through inter-channel attention, and applying a softmax function to each channel to obtain the corresponding weights; multiplying F1 and F2 by their corresponding weights respectively and adding the results to obtain the fused edge features.
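The Feature Selection Module of claim 6 can be sketched as below. The single fully connected layer per branch (random weights here) is an assumption; the claim only specifies a fully connected operation followed by an inter-branch softmax:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_select(f1, f2, w1, w2):
    """Feature Selection Module (FSM) sketch: fuse, pool, weigh, recombine.

    f1, f2 : same-layer encoder / decoder features, shape (h, w, d, c).
    w1, w2 : (c, c) fully connected weights producing per-branch channel
             logits; illustrative stand-ins for the learned FC layer.
    """
    u = f1 + f2                               # element-wise fusion
    s = u.mean(axis=(0, 1, 2))                # global average pooling -> (c,)
    logits = np.stack([s @ w1, s @ w2])       # channel proportions per branch
    weights = softmax(logits, axis=0)         # softmax across the two branches
    return weights[0] * f1 + weights[1] * f2  # weighted adaptive selection

rng = np.random.default_rng(0)
c = 3
f1 = rng.normal(size=(2, 2, 2, c))
f2 = rng.normal(size=(2, 2, 2, c))
fused = feature_select(f1, f2, rng.normal(size=(c, c)), rng.normal(size=(c, c)))
```

Because the softmax runs across the two branches, each channel of the output is a convex combination of the encoder and decoder features, which is what lets the module adaptively favor whichever source is more informative per channel.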
7. The method according to claim 6, wherein the updating of neural network parameters using a preset edge optimization loss function on the filtered and fused edge features to obtain edge-optimized cerebrovascular features comprises:
optimizing the fused edge features sequentially through the mask label and the soft edge label: when the training result meets a preset requirement, generating the soft edge label from the mask label using the Laplacian operator, and guiding the neural network with the generated soft edge label to obtain the edge-optimized cerebrovascular features.
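The soft-edge-label generation of claim 7 can be sketched with `scipy.ndimage.laplace`. Taking the absolute value and clipping to [0, 1] is an assumed normalization; the claim only names the Laplacian operator:

```python
import numpy as np
from scipy.ndimage import laplace

def soft_edge_label(mask):
    """Generate a soft edge label from a binary vessel mask.

    The Laplacian is nonzero only where the mask value changes, i.e. at
    vessel boundaries, so it isolates the edge from the mask label.
    """
    return np.clip(np.abs(laplace(mask.astype(float))), 0.0, 1.0)

mask = np.zeros((8, 8, 8))
mask[2:6, 2:6, 2:6] = 1.0        # toy "vessel" block
g_edge = soft_edge_label(mask)
```

On this toy mask the label is 1 on the block's boundary voxels and 0 both deep inside the block and in the background, which is exactly the supervision signal the edge branch needs.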
8. The method of claim 7,
the optimization of the fused edge features through the mask label comprises the following steps:
optimizing the fused edge characteristics through a Dice loss function;
the optimizing the fused edge feature by the soft edge label includes:
edge features are further optimized by combining a Dice loss function and a Binary Cross Entropy (BCE) loss function and extracting the edge features using soft edge labels.
9. The method according to claim 8, wherein the further optimizing of the edge features by combining a Dice loss function and a BCE loss function and extracting the edge features using the soft edge label comprises:
the loss function formed by combining the Dice loss function and the BCE loss function is as follows:
L_Dice(p, g) = 1 − DSC(p, g)
L_edge = δ·L_Dice(p, g_edge) + τ·L_BCE(p, g_edge) + λ·log(1 + δτ)
wherein p represents the prediction result output by the neural network, g is the original mask label, g_edge is the generated soft edge label, DSC is the evaluation index of the neural network training process, λ is a hyper-parameter, δ and τ are the weight balance coefficients of the Dice loss function and the BCE loss function, updated automatically during training, and log(1+δτ) is an auxiliary term.
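A hedged sketch of one plausible reading of the combined loss in claim 9 (the original formula images are not reproduced in this text): δ and τ balance the Dice and BCE terms, and log(1 + δτ) is the stated auxiliary term. In the patent δ and τ are updated during training; they are fixed constants here:

```python
import numpy as np

def dice_loss(p, g, eps=1e-6):
    """Soft Dice loss, 1 - DSC; eps guards empty masks."""
    inter = (p * g).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + g.sum() + eps)

def bce_loss(p, g, eps=1e-7):
    """Binary cross entropy over voxel probabilities."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(g * np.log(p) + (1.0 - g) * np.log(1.0 - p)).mean())

def edge_loss(p, g_edge, delta=1.0, tau=1.0, lam=1.0):
    """Combined Dice + BCE edge loss with the log(1 + delta*tau) term."""
    return (delta * dice_loss(p, g_edge)
            + tau * bce_loss(p, g_edge)
            + lam * np.log(1.0 + delta * tau))

rng = np.random.default_rng(0)
g_edge = (rng.uniform(size=(4, 4, 4)) > 0.9).astype(float)  # sparse edge label
l_good = edge_loss(g_edge, g_edge)                          # perfect prediction
l_bad = edge_loss(np.clip(g_edge + rng.normal(0.0, 0.3, g_edge.shape), 0, 1),
                  g_edge)                                   # noisy prediction
```

The pairing matches the motivation stated in the description: the Dice term counters the extreme foreground/background imbalance of thin-vessel edges, while the BCE term preserves per-voxel classification information.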
10. A computer-readable storage medium, in which a computer program of a signal mapping is stored, which computer program, when being executed by at least one processor, is adapted to carry out the method of three-dimensional cerebrovascular segmentation according to any one of claims 1 to 9.
CN202110360853.6A 2021-04-02 2021-04-02 Three-dimensional cerebral vessel segmentation method and storage medium Active CN113052860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110360853.6A CN113052860B (en) 2021-04-02 2021-04-02 Three-dimensional cerebral vessel segmentation method and storage medium

Publications (2)

Publication Number Publication Date
CN113052860A true CN113052860A (en) 2021-06-29
CN113052860B CN113052860B (en) 2022-07-19

Family

ID=76517199

Country Status (1)

Country Link
CN (1) CN113052860B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165660A (en) * 2018-06-20 2019-01-08 扬州大学 A kind of obvious object detection method based on convolutional neural networks
CN110222672A (en) * 2019-06-19 2019-09-10 广东工业大学 The safety cap of construction site wears detection method, device, equipment and storage medium
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN111680695A (en) * 2020-06-08 2020-09-18 河南工业大学 Semantic segmentation method based on reverse attention model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHUHAN CHEN et al.: "Reverse Attention for Salient Object Detection", arXiv *
Li Tianpei et al.: "Retinal Vessel Segmentation Based on a Dual-Attention Encoder-Decoder Architecture", Computer Science *
Xiang Shengkai et al.: "Image Saliency Detection Using a Dense Weak Attention Mechanism", Journal of Image and Graphics *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649371A (en) * 2024-01-30 2024-03-05 西安交通大学医学院第一附属医院 Image processing method and device for brain blood vessel intervention operation simulator
CN117649371B (en) * 2024-01-30 2024-04-09 西安交通大学医学院第一附属医院 Image processing method and device for brain blood vessel intervention operation simulator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant