CN111784713A - Attention mechanism-introduced U-shaped heart segmentation method - Google Patents

Attention mechanism-introduced U-shaped heart segmentation method

Info

Publication number
CN111784713A
CN111784713A (application CN202010727090.XA)
Authority
CN
China
Prior art keywords
training
data
segmentation
heart
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010727090.XA
Other languages
Chinese (zh)
Inventor
崔晓娟
杨铁军
白鑫昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Technology filed Critical Henan University of Technology
Priority to CN202010727090.XA priority Critical patent/CN111784713A/en
Publication of CN111784713A publication Critical patent/CN111784713A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Aiming at the low accuracy of the classical U-shaped segmentation network in cardiac substructure segmentation, the invention provides an attention mechanism-introduced U-shaped network (AU-Net). Building on the classic U-Net structure, the algorithm first applies cropping preprocessing to the cardiac CT images, alleviating the class-imbalance problem by reducing the number of background pixels fed to the network. An attention mechanism is then introduced to filter out uninformative regions and emphasize regions containing useful information, which enhances the resolution of the feature maps, strengthens the expression of detail features, and yields more detailed features. Finally, a residual block is introduced through an Add operation to improve the skip connections of the training network, fusing the low-level features from the contracting path and combining regional with global information, which improves the network's ability to recognize cardiac detail and produces a more accurate heart segmentation result. The invention has good application prospects in automatic heart segmentation.

Description

Attention mechanism-introduced U-shaped heart segmentation method
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a method for cardiac substructure segmentation.
Background
According to the 2019 heart disease and stroke statistics report of the American Heart Association (AHA), about 1,055,000 coronary heart disease events were expected in the United States in 2019, including 720,000 new and 335,000 recurrent coronary cases, and this figure continues to rise year by year. Cardiovascular disease is currently one of the leading causes of non-accidental death; its incidence is high and follows no fixed pattern, and it increasingly affects younger people, posing a constant threat to human health. Accurate computation, modeling, and analysis of the whole cardiac structure are critical for medical research and for the effective treatment and prevention of these diseases. At present, the cardiac substructures are usually segmented manually by doctors or experts on the basis of existing medical knowledge, medical conditions, and clinical experience; this approach is time-consuming, labor-intensive, and highly subjective, and the segmentation results vary from person to person. With the advent of large-scale labeled data and advances in computing, automatic heart segmentation with deep learning algorithms has become a hot topic of current research. In 2015, Olaf Ronneberger proposed the U-Net model for medical image segmentation based on the fully convolutional network (FCN) model. Both U-Net and FCN have a classical encoder-decoder topology, but U-Net has a symmetric network structure and skip connections, and U-Net outperforms FCN in the segmentation of cardiac images. To address the difficulty of improving cardiac image segmentation accuracy, the improvements that researchers have made on the basis of U-Net can be roughly divided into two categories: improvements based on the 2D U-Net framework and improvements based on the 3D U-Net framework. Although 2D network segmentation requires less computation and storage, it typically discards the spatial information between slices, whereas 3D network segmentation can make full use of the information across slice sequences. In summary, to improve segmentation accuracy, the invention provides a 3D U-shaped heart segmentation network with an attention mechanism, AU-Net, mainly aimed at solving the problem of low accuracy in segmenting the cardiac substructures.
Disclosure of Invention
The invention aims to solve the problem of low accuracy in cardiac substructure segmentation and provides an automatic method for accurately segmenting the cardiac substructures.
The invention is realized through the following technical scheme: a method for segmenting U-shaped cardiac images with an attention mechanism. First, the images are cropped and scaled in preprocessing to reduce the number of training parameters while still covering global information; then the AU-Net network of the invention is trained for 100 epochs and the trained parameters are saved; finally, the trained weights are used to segment the test data set, producing the final segmentation result maps.
(1) Data preprocessing: the label data of the CT images of the 10 training volumes in the MM-WHS2017 data set are first re-encoded so that they are suitable for a multi-class task, and the re-encoded label values are then one-hot encoded to generate labels suitable for the multi-class segmentation task of a neural network. The data are then randomly cropped into sub-volumes of size 96 × 96 × 96; because the cropped sub-blocks cannot cover enough global information, the data are also scaled with a scaling ratio of 0.6. Since the gray values of the raw data range from tens to thousands and training the neural network directly on them is slow, the data are normalized in preprocessing to facilitate training.
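For illustration only, the following minimal Python/NumPy sketch shows preprocessing of the kind described above; the label-value mapping, function names, and crop logic are assumptions of the editor and are not taken from the patent.

import numpy as np
from scipy.ndimage import zoom

# Hypothetical mapping from raw MM-WHS2017 label values to consecutive class ids;
# the actual re-encoding used by the invention is not specified here.
LABEL_MAP = {0: 0, 205: 1, 420: 2, 500: 3, 550: 4, 600: 5, 820: 6, 850: 7}

def recode_and_one_hot(label_volume, num_classes=8):
    # Re-encode the raw label values, then convert them to one-hot vectors.
    recoded = np.zeros_like(label_volume, dtype=np.int32)
    for raw, cls in LABEL_MAP.items():
        recoded[label_volume == raw] = cls
    return np.eye(num_classes, dtype=np.float32)[recoded]

def downscale(volume, factor=0.6):
    # Scale the volume (ratio 0.6) so that a crop covers more global context.
    return zoom(volume, factor, order=1)

def random_crop(volume, label, size=(96, 96, 96)):
    # Randomly crop a 96 x 96 x 96 sub-volume from the CT image and its label.
    starts = [np.random.randint(0, s - c + 1) for s, c in zip(volume.shape[:3], size)]
    sl = tuple(slice(st, st + c) for st, c in zip(starts, size))
    return volume[sl], label[sl]

def normalize(volume):
    # Zero-mean, unit-variance normalization of the CT intensities.
    return (volume - volume.mean()) / (volume.std() + 1e-8)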
(2) Training stage: in the training stage, a TensorFlow deep learning framework is used to learn the model parameters on the training set; convolution kernels of size 3 × 3 are adopted, the stride of each convolution is set to 2, and the number of feature channels is kept unchanged. 50% of the data in the training set are selected as a validation set, the data are trained with a cross-entropy loss function, and in the experiments the weight coefficients at the minimum validation loss are selected as the final training weights, providing the weight parameters for the subsequent testing stage. The main innovations of the proposed AU-Net network are as follows (a schematic sketch of both components follows this list):
a. Improved skip connections: a residual unit (in the style of ResNet) is introduced through an Add operation, as shown in FIG. 1, which expands the receptive field of the low-level features from the contracting path, better fuses the feature maps of the contracting and expanding paths, and combines global and local information.
b. Attention mechanism (soft attention): by introducing an attention mechanism, uninformative regions are filtered out and regions containing useful information are emphasized, which enhances the resolution of the feature maps, strengthens the expression of detail features, and allows more detailed features to be obtained.
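To make the two components above concrete, here is a minimal, hypothetical TensorFlow/Keras sketch of (a) a residual block merged through an Add operation and (b) a soft attention gate applied to a skip connection; the layer sizes, gating form, and function names are assumptions and do not reproduce the patented implementation.

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Residual unit: two 3x3x3 convolutions plus a shortcut combined by Add.
    shortcut = layers.Conv3D(filters, 1, padding="same")(x)   # match channel count
    y = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv3D(filters, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))

def attention_gate(skip, gating, inter_filters):
    # Soft attention: weight the skip-connection features by a learned mask
    # computed from the coarser gating signal of the expanding path
    # (the gating signal is assumed to have half the spatial size of the skip).
    theta = layers.Conv3D(inter_filters, 1)(skip)
    phi = layers.UpSampling3D(size=2)(layers.Conv3D(inter_filters, 1)(gating))
    attn = layers.Activation("relu")(layers.Add()([theta, phi]))
    attn = layers.Conv3D(1, 1, activation="sigmoid")(attn)    # per-voxel weights in [0, 1]
    return layers.Multiply()([skip, attn])

def improved_skip_connection(skip, gating, filters):
    # Low-level features from the contracting path pass through the residual
    # block and are then attention-weighted before being fused with the
    # expanding path (e.g. by concatenation).
    return attention_gate(residual_block(skip, filters), gating, filters // 2)

In an AU-Net-style decoder, the output of improved_skip_connection would typically be concatenated with the upsampled decoder features before the next convolution; the exact fusion used by the invention is described only at the level of the Add-based residual block and the attention weighting above.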
(3) Testing stage: the test data of the MM-WHS2017 data set are used for evaluation. The CT images, with slices of size 512 × 512, are input directly into the testing stage, and the final weight parameters obtained in the training stage are used to perform cardiac substructure segmentation on the test images, finally yielding the segmented cardiac substructure result maps.
Aiming at the shortcomings that U-shaped convolutional networks segment cardiac images with limited accuracy and produce blurred boundaries between regions, the invention provides the improved AU-Net cardiac image segmentation algorithm. The skip connections are improved with a residual block so that shallow features are combined with deep features, and an attention mechanism is introduced to increase feature reuse and fuse the shallow features with the corresponding high-level features, finally forming a trainable end-to-end segmentation algorithm. Compared with the classic U-Net segmentation algorithm, the proposed algorithm has a finer structure, can effectively alleviate the over-segmentation and under-segmentation problems in heart segmentation, and yields more accurate segmentation results.
Drawings
FIG. 1 is a schematic diagram of a residual block;
FIG. 2 is a flow chart of the network training method for cardiac segmentation.
Detailed Description
To verify the cardiac substructure segmentation performance of the present invention, we selected the MM-WHS2017 dataset for training and testing.
Step one, the CT image data are preprocessed using Spyder software, with image normalization together with rotation, translation transformation, and contrast enhancement.
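As a rough illustration of this step, the sketch below applies random rotation, translation, and a simple contrast adjustment with SciPy/NumPy; the parameter ranges are assumed by the editor and are not specified by the invention.

import numpy as np
from scipy.ndimage import rotate, shift

def augment(volume, label):
    # Random in-plane rotation (assumed +/- 10 degrees).
    angle = np.random.uniform(-10, 10)
    volume = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
    label = rotate(label, angle, axes=(1, 2), reshape=False, order=0)
    # Random translation (assumed up to 5 voxels per axis).
    offset = np.random.uniform(-5, 5, size=3)
    volume = shift(volume, offset, order=1)
    label = shift(label, offset, order=0)
    # Simple contrast enhancement via gamma adjustment after rescaling to [0, 1].
    vmin, vmax = volume.min(), volume.max()
    volume = (volume - vmin) / (vmax - vmin + 1e-8)
    volume = volume ** np.random.uniform(0.8, 1.2)
    return volume, label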
Step two, the AU-Net network is trained in Spyder software with batch_size = 8 and learning_rate = 0.001, using the Adam optimizer; L2 regularization is used to prevent overfitting, with the regularization coefficient set to 0.0005. Training runs for 60000 epochs, the training set and validation set are split at a ratio of 1:1, and the two phases proceed without interruption while the network parameters are adjusted until the network converges, at which point training ends.
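The hyperparameters listed in this step could be wired up roughly as follows in TensorFlow/Keras; the stand-in model, checkpoint file name, and the commented-out fit call are placeholders, not the patented code.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Stand-in model: a single 3D convolution in place of the full AU-Net,
# used only to show how the stated hyperparameters would be configured.
l2 = regularizers.l2(0.0005)                               # regularization coefficient
inputs = layers.Input(shape=(96, 96, 96, 1))
outputs = layers.Conv3D(8, 3, padding="same", activation="softmax",
                        kernel_regularizer=l2)(inputs)     # 8 output classes assumed
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy")

# Keep the weights with the lowest validation loss, as described in the training stage.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "au_net_best.h5", monitor="val_loss", save_best_only=True)

# model.fit(x_train, y_train, batch_size=8, validation_split=0.5,
#           epochs=60000, callbacks=[checkpoint])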
Step three, the AU-Net network is tested on the test set of the MM-WHS2017 data set. To evaluate the segmentation results, two common evaluation criteria are used, the Dice similarity coefficient (Dice) and the Jaccard index, as shown in Table 1 (a generic sketch of the two metrics follows the table).
Table 1. Comparison with related work (Dice index). [The table is reproduced as an image in the original publication.]
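For reference, the two reported metrics are commonly computed per substructure as in the generic NumPy sketch below; this is not the evaluation code of the experiments.

import numpy as np

def dice_coefficient(pred, target):
    # Dice = 2|A n B| / (|A| + |B|) for binary masks of one cardiac substructure.
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + 1e-8)

def jaccard_index(pred, target):
    # Jaccard = |A n B| / |A u B| for binary masks of one cardiac substructure.
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / (union + 1e-8)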
The experimental results show that the proposed algorithm has a finer structure, can effectively alleviate the over-segmentation and under-segmentation problems in heart segmentation, achieves higher segmentation accuracy, and ensures the completeness and accuracy of the cardiac substructure segmentation.

Claims (1)

1. A heart segmentation method based on a U-shaped network with an attention mechanism, comprising the following steps:
(1) data preprocessing: the label data of the CT images of the 10 training volumes in the MM-WHS2017 data set are first re-encoded so that they are suitable for a multi-class task, and the re-encoded label values are then one-hot encoded to generate labels suitable for the multi-class segmentation task of a neural network; the data are then randomly cropped into sub-volumes of size 96 × 96 × 96 and, because the cropped sub-blocks cannot cover enough global information, scaled with a scaling ratio of 0.6; since the gray values of the raw data range from tens to thousands and training the neural network directly on them is slow, the data are normalized in preprocessing to facilitate training;
(2) a training stage: in the training stage, parameter training is performed on the proposed AU-Net network; a TensorFlow deep learning framework is used to learn the model parameters on the training set, convolution kernels of size 3 × 3 are adopted, the stride of each convolution is set to 2, and the number of feature channels is kept unchanged; 50% of the data in the training set are selected as a validation set, the data are trained with a cross-entropy loss function, and the weight coefficients at the minimum validation loss are selected as the final training weights in the training stage, providing the weight parameters for the subsequent testing stage;
(3) a testing stage: the 10 training volumes in the MM-WHS2017 data set are randomly selected as the training set; the CT images, of size 512 × 512, are input directly into the testing stage, and the final weight parameters obtained in the training stage are used to perform cardiac substructure segmentation on the test images, finally yielding the segmented cardiac substructure result image.
CN202010727090.XA 2020-07-26 2020-07-26 Attention mechanism-introduced U-shaped heart segmentation method Pending CN111784713A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010727090.XA CN111784713A (en) 2020-07-26 2020-07-26 Attention mechanism-introduced U-shaped heart segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010727090.XA CN111784713A (en) 2020-07-26 2020-07-26 Attention mechanism-introduced U-shaped heart segmentation method

Publications (1)

Publication Number Publication Date
CN111784713A true CN111784713A (en) 2020-10-16

Family

ID=72764175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010727090.XA Pending CN111784713A (en) 2020-07-26 2020-07-26 Attention mechanism-introduced U-shaped heart segmentation method

Country Status (1)

Country Link
CN (1) CN111784713A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634285A (en) * 2020-12-23 2021-04-09 西南石油大学 Method for automatically segmenting abdominal CT visceral fat area
CN112927224A (en) * 2021-03-30 2021-06-08 太原理工大学 Heart nuclear magnetic image recognition method, device and equipment based on deep learning and random forest and storage medium
CN113139972A (en) * 2021-03-22 2021-07-20 杭州电子科技大学 Cerebral apoplexy MRI image focus region segmentation method based on artificial intelligence


Similar Documents

Publication Publication Date Title
EP4002271A1 (en) Image segmentation method and apparatus, and storage medium
Kumar et al. Breast cancer classification of image using convolutional neural network
El-Shafai et al. Efficient Deep-Learning-Based Autoencoder Denoising Approach for Medical Image Diagnosis.
CN111784713A (en) Attention mechanism-introduced U-shaped heart segmentation method
CN112270666A (en) Non-small cell lung cancer pathological section identification method based on deep convolutional neural network
CN110533683B (en) Image omics analysis method fusing traditional features and depth features
Popescu et al. Retinal blood vessel segmentation using pix2pix gan
Osadebey et al. Three-stage segmentation of lung region from CT images using deep neural networks
Pradhan et al. Lung cancer detection using 3D convolutional neural networks
CN113781461A (en) Intelligent patient monitoring and sequencing method
Jiang et al. [Retracted] Application of Deep Learning in Lung Cancer Imaging Diagnosis
CN113744209A (en) Heart segmentation method based on multi-scale residual U-net network
CN114066883A (en) Liver tumor segmentation method based on feature selection and residual fusion
CN113408603A (en) Coronary artery stenosis degree identification method based on multi-classifier fusion
CN113344933A (en) Glandular cell segmentation method based on multi-level feature fusion network
Al-Ani et al. A review on detecting brain tumors using deep learning and magnetic resonance images.
Vavekanand A Deep Learning Approach for Medical Image Segmentation Integrating Magnetic Resonance Imaging to Enhance Brain Tumor Recognition
Medi et al. Skinaid: A gan-based automatic skin lesion monitoring method for iomt frameworks
CN113744210A (en) Heart segmentation method based on multi-scale attention U-net network
CN111798455A (en) Thyroid nodule real-time segmentation method based on full convolution dense cavity network
Wang et al. Optic disc detection based on fully convolutional neural network and structured matrix decomposition
Lee et al. Cardiac CT Image Segmentation for Deep Learning-Based Coronary Calcium Detection Using K-Means Clustering and Grabcut Algorithm.
CN115471512A (en) Medical image segmentation method based on self-supervision contrast learning
Essaf et al. Review on deep learning methods used for computer-aided lung cancer detection and diagnosis
Zeeshan Aslam et al. AML‐Net: Attention‐based multi‐scale lightweight model for brain tumour segmentation in internet of medical things

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2020-10-16