CN113744210A - Heart segmentation method based on multi-scale attention U-net network - Google Patents

Heart segmentation method based on multi-scale attention U-net network

Info

Publication number
CN113744210A
Authority
CN
China
Prior art keywords
training
data
segmentation
heart
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110964471.4A
Other languages
Chinese (zh)
Inventor
崔晓娟
白鑫昊
杨铁军
李磊
樊超
巩跃洪
苗建雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Technology filed Critical Henan University of Technology
Priority to CN202110964471.4A priority Critical patent/CN113744210A/en
Publication of CN113744210A publication Critical patent/CN113744210A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4046 Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/60 Rotation of whole images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Aiming at the problem that the classical U-shaped segmentation network segments the cardiac substructures with low accuracy, the present invention proposes a multi-scale attention U-Net network for cardiac segmentation. Building on the classic U-Net structure, the algorithm first crops the CT heart images to reduce the number of background pixels fed into the network, and then applies Z-score standardization to remove differences in the gray-level distribution of the pixels. An attention mechanism is then introduced to make full use of shallow-layer information: a spatial attention mechanism is added to the backbone network and a channel attention mechanism is added to the skip connections, so that the network can fully exploit the information extracted by the shallow convolution layers, retaining useful information and discarding redundant information. At the same time, Inception modules with convolution kernels of different scales are introduced to extract and fuse feature information at different scales and achieve accurate segmentation. The invention has good application prospects for automatic segmentation of the heart.

Description

Heart segmentation method based on multi-scale attention U-net network
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a method for cardiac substructure segmentation.
Background
According to the 2019 heart disease and stroke statistics report of the American Heart Association (AHA), about 1,055,000 coronary heart disease cases were expected in the United States in 2019, including 720,000 new and 335,000 recurrent coronary artery cases, and this figure is increasing year by year. Cardiovascular diseases are currently among the main causes of non-accidental human death; their morbidity is high and follows no fixed pattern, they increasingly affect younger people, and they constantly threaten healthy human life. Accurate computation, modeling, and analysis of the entire cardiac structure are critical for medical research and applications aimed at the effective treatment and prevention of these diseases. At present, the cardiac substructures are generally segmented manually by doctors or experts on the basis of existing medical knowledge, medical conditions, and clinical experience; this approach is time-consuming, labor-intensive, and highly subjective, and the segmentation results differ from person to person. With the emergence of large-scale labeled data and advances in computing, automatic heart segmentation using deep learning algorithms has become a hot topic of current research. In 2015, Olaf Ronneberger proposed the U-Net model for medical image segmentation based on the Fully Convolutional Network (FCN) model. Both U-Net and FCN have classical encoding-decoding topologies, but U-Net has a symmetric network structure and skip connections, and its results are superior to those of FCN in the segmentation of cardiac images. Because the segmentation accuracy of heart images is difficult to improve, the improvements made by researchers on the basis of U-Net can be roughly divided into two types: studies based on the 2D U-Net framework and studies based on the 3D U-Net framework. Although 2D network segmentation is less demanding in computation and storage, it typically discards the spatial information between slices, whereas 3D network segmentation can make full use of the information between slice sequences. In summary, with the aim of improving segmentation accuracy, the invention provides a 3D U-shaped heart segmentation network with an attention mechanism (AMU-Net), which mainly addresses the problem of low segmentation accuracy for the cardiac substructures.
Disclosure of Invention
The invention aims to solve the problem of low precision of heart substructure segmentation, and provides an automatic method for accurately segmenting the heart substructure.
The invention is realized by the following technical scheme: a heart segmentation method based on a multi-scale attention U-net network. First, the images are cropped and scaled during preprocessing so as to reduce the number of training parameters while still covering the global information; the AMU-Net network of the invention is then trained for 30000 iterations and the trained parameters are saved; finally, the test data set is segmented with the trained weights to obtain the final segmentation result maps.
(1) Data preprocessing: image preprocessing first re-encodes the label data of the CT images of the 10 training volumes in the MM-WHS2017 dataset to make them suitable for the multi-classification task. The data are then randomly cropped, the images are cropped to a size of 256 × 16, and light data augmentation is applied to the data and labels. At random, the data are rotated between -15 and +15 degrees and scaled by a factor between 0.9 and 1.1. This adds slight variability and improves the robustness of network training.
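As a concrete illustration, the following is a minimal Python (NumPy/SciPy) sketch of the random cropping, light rotation/scaling augmentation, and the Z-score standardization mentioned in the abstract; the 3-D crop shape, array layout, and function names are assumptions for illustration, not the patent's actual code.

    import numpy as np
    from scipy.ndimage import rotate, zoom

    def zscore_normalize(volume):
        # Z-score standardization: zero mean, unit variance per volume.
        return (volume - volume.mean()) / (volume.std() + 1e-8)

    def light_augment(volume, label):
        # Random in-plane rotation of -15..+15 degrees and scaling by 0.9..1.1.
        angle = np.random.uniform(-15, 15)
        volume = rotate(volume, angle, axes=(0, 1), reshape=False, order=1)
        label = rotate(label, angle, axes=(0, 1), reshape=False, order=0)
        factor = np.random.uniform(0.9, 1.1)
        volume = zoom(volume, (factor, factor, 1.0), order=1)
        label = zoom(label, (factor, factor, 1.0), order=0)
        return volume, label

    def random_crop(volume, label, crop_shape):
        # Randomly crop a sub-volume of crop_shape from the image and its label.
        starts = [np.random.randint(0, s - c + 1) for s, c in zip(volume.shape, crop_shape)]
        sl = tuple(slice(st, st + c) for st, c in zip(starts, crop_shape))
        return volume[sl], label[sl]

    # Example usage with an assumed 3-D crop shape (the patent states 256 × 16):
    # volume, label = light_augment(volume, label)
    # patch, patch_label = random_crop(zscore_normalize(volume), label, (256, 256, 16))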
(2) Training stage: in the training stage, the AMU-Net network provided by the invention undergoes parameter training. Model parameters are learned on the training set using the TensorFlow deep learning framework with the Adam optimizer, a batch size of 4, and 30000 training iterations. 50% of the data in the training set is selected as the validation set, the data are trained with a cross-entropy loss function, and the weight coefficients at the point of minimum validation loss are selected as the final training weights, providing the weight parameters for the subsequent testing stage.
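A rough sketch of this training configuration in TensorFlow/Keras is shown below: Adam optimizer, batch size 4, a cross-entropy loss, and a checkpoint that keeps the weights at minimum validation loss. The model constructor, checkpoint file name, and epoch count are placeholders, since the patent only specifies 30000 iterations in total.

    import tensorflow as tf

    def train_amu_net(model, x_train, y_train, x_val, y_val):
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
            loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        )
        # Keep only the weights that reach the lowest validation loss.
        checkpoint = tf.keras.callbacks.ModelCheckpoint(
            "amu_net_best.h5", monitor="val_loss", save_best_only=True
        )
        model.fit(
            x_train, y_train,
            validation_data=(x_val, y_val),
            batch_size=4,
            epochs=100,  # illustrative; chosen so that total iterations approach 30000
            callbacks=[checkpoint],
        )
        return model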
The main innovations of the AMU-Net network provided by the invention are as follows:
a. Introducing a multi-scale Inception module: by introducing Inception modules with convolution kernels of different scales, as shown in fig. 1, feature information at different scales is extracted and fused, global and local information are better combined, and more features are obtained.
b. Introducing an attention mechanism: by introducing an attention mechanism, uninformative regions are filtered out and regions containing useful information are emphasized, which enhances the resolution of the feature maps, improves the expressive power of detail features, and allows more detail features to be obtained. A minimal sketch of both modules follows this list.
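The sketch referred to above shows, assuming a tf.keras implementation, one plausible form of the two modules: an Inception-style block with parallel 1/3/5 convolution kernels fused by concatenation, and a squeeze-and-excitation style channel-attention gate applied to a skip connection. Kernel sizes, channel counts, and layer choices are illustrative, not the patent's exact design.

    import tensorflow as tf
    from tensorflow.keras import layers

    def inception_block(x, filters):
        # Parallel branches with different receptive fields, fused by concatenation.
        b1 = layers.Conv3D(filters, 1, padding="same", activation="relu")(x)
        b3 = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
        b5 = layers.Conv3D(filters, 5, padding="same", activation="relu")(x)
        return layers.Concatenate()([b1, b3, b5])

    def channel_attention(skip, channels, reduction=4):
        # Reweight the channels of a skip connection before it is fused with
        # the decoder features; 'channels' must match the skip tensor's depth.
        w = layers.GlobalAveragePooling3D()(skip)
        w = layers.Dense(channels // reduction, activation="relu")(w)
        w = layers.Dense(channels, activation="sigmoid")(w)
        w = layers.Reshape((1, 1, 1, channels))(w)
        return layers.Multiply()([skip, w])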
(3) Testing stage: 10% of the training volumes in the MM-WHS2017 dataset are randomly selected as the test set. A CT image of size 256 × 16 is input directly to the testing stage, and the final weight parameters obtained in the training stage are used to segment the cardiac substructures of the test image, finally yielding the segmented cardiac substructure result map.
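A minimal sketch of this testing step, assuming the trained model was saved with Keras and that inputs carry explicit batch and channel dimensions (both assumptions, not stated in the patent):

    import numpy as np
    import tensorflow as tf

    def segment_volume(model_path, test_volume):
        model = tf.keras.models.load_model(model_path, compile=False)
        x = test_volume[np.newaxis, ..., np.newaxis]   # add batch and channel axes
        probs = model.predict(x)                        # per-voxel class probabilities
        return np.argmax(probs, axis=-1)[0]             # cardiac substructure label map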
The invention provides an improved AMU-Net heart image segmentation algorithm to address the shortcomings of U-shaped convolutional networks, namely low heart image segmentation accuracy and blurred region boundaries. By introducing a multi-scale Inception module, feature information at different scales is extracted and fused. An attention mechanism is also introduced, which increases the reusability of features and fuses shallow features with the corresponding high-level features, finally forming a trainable end-to-end segmentation algorithm. Compared with the classic U-Net segmentation algorithm, this algorithm captures finer structure, can effectively alleviate the over-segmentation and under-segmentation problems of heart segmentation, and yields segmentation results of higher accuracy.
Drawings
Fig. 1 is a schematic diagram of a multi-scale Inception structure.
Fig. 2 is a flow chart of a network training method for cardiac segmentation.
Detailed Description
To verify the cardiac substructure segmentation performance of the present invention, we selected the MM-WHS2017 dataset for training and testing.
Step one, the CT image data are preprocessed using Spyder software: the images are normalized and processed with rotation, translation transformations, and contrast enhancement.
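The translation and contrast-enhancement operations named in step one could, for instance, be realized as follows; the shift range and percentile limits are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import shift

    def random_translate(volume, label, max_shift=10):
        # Shift image and label in-plane by up to max_shift voxels.
        dx, dy = np.random.randint(-max_shift, max_shift + 1, size=2)
        volume = shift(volume, (dx, dy, 0), order=1)
        label = shift(label, (dx, dy, 0), order=0)
        return volume, label

    def enhance_contrast(volume, low=1.0, high=99.0):
        # Simple percentile-based contrast stretching to [0, 1].
        lo, hi = np.percentile(volume, [low, high])
        return np.clip((volume - lo) / (hi - lo + 1e-8), 0.0, 1.0)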
Step two, the AMU-Net network is trained in Spyder software with batch_size of 4 and learning_rate of 0.001, using the Adam optimizer and L2 regularization with a coefficient of 0.0005 to prevent overfitting. 60000 epochs are trained, the training set and the validation set are split at a ratio of 1:1, and training continues, with the network parameters adjusted, until the network converges, at which point training ends.
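For the L2 regularization mentioned in step two, a tf.keras convolution layer might attach the stated coefficient of 0.0005 to its kernel as in the sketch below; the block structure itself is illustrative.

    import tensorflow as tf
    from tensorflow.keras import layers, regularizers

    def regularized_conv(x, filters):
        # 3x3x3 convolution with L2 weight decay (coefficient 0.0005 from step two).
        return layers.Conv3D(
            filters, 3, padding="same", activation="relu",
            kernel_regularizer=regularizers.l2(0.0005),
        )(x)

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)  # learning_rate from step two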
Step three, the AMU-Net network is tested on the test set of the MM-WHS2017 dataset. To evaluate the segmentation results, two common evaluation criteria, the Dice similarity coefficient and the Jaccard index, were used, as shown in Table 1.
TABLE 1 Comparison with other work (Dice index)
Experimental results show that the algorithm of the invention captures finer structure, can effectively alleviate the over-segmentation and under-segmentation problems of heart segmentation, produces segmentation results of higher accuracy, and ensures the completeness and accuracy of heart segmentation.
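The two evaluation criteria used in step three can be computed per substructure from binary masks, for example as in the following sketch (array names are placeholders):

    import numpy as np

    def dice_coefficient(pred, truth):
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

    def jaccard_index(pred, truth):
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return inter / (union + 1e-8)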

Claims (1)

1. A heart segmentation method based on a multi-scale attention U-net network, comprising the following steps:
(1) data preprocessing: first, the label data of the CT images of the 10 training volumes in the MM-WHS2017 dataset are re-encoded to make them suitable for the multi-classification task; the data are then randomly cropped, the images are cropped to a size of 256 × 16, and light data augmentation is applied to the data and labels; at random, the data are rotated between -15 and +15 degrees and scaled by a factor between 0.9 and 1.1; this adds slight variability and improves the robustness of network training;
(2) training stage: in the training stage, the AMU-Net network provided by the invention undergoes parameter training; model parameters are learned on the training set using the TensorFlow deep learning framework with the Adam optimizer, a batch size of 4, and 30000 training iterations; 50% of the data in the training set is selected as the validation set, the data are trained with a cross-entropy loss function, and the weight coefficients at the point of minimum validation loss are selected as the final training weights, providing the weight parameters for the subsequent testing stage;
(3) testing stage: 10% of the training volumes in the MM-WHS2017 dataset are randomly selected as the test set; a CT image of size 256 × 16 is input directly to the testing stage, and the final weight parameters obtained in the training stage are used to segment the cardiac substructures of the test image, finally yielding the segmented cardiac substructure result map.
CN202110964471.4A 2021-08-22 2021-08-22 Heart segmentation method based on multi-scale attention U-net network Withdrawn CN113744210A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110964471.4A CN113744210A (en) 2021-08-22 2021-08-22 Heart segmentation method based on multi-scale attention U-net network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110964471.4A CN113744210A (en) 2021-08-22 2021-08-22 Heart segmentation method based on multi-scale attention U-net network

Publications (1)

Publication Number Publication Date
CN113744210A true CN113744210A (en) 2021-12-03

Family

ID=78732134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110964471.4A Withdrawn CN113744210A (en) 2021-08-22 2021-08-22 Heart segmentation method based on multi-scale attention U-net network

Country Status (1)

Country Link
CN (1) CN113744210A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066913A (en) * 2022-01-12 2022-02-18 广东工业大学 A kind of heart image segmentation method and system
CN114066913B (en) * 2022-01-12 2022-04-22 广东工业大学 A kind of heart image segmentation method and system

Similar Documents

Publication Publication Date Title
Salido et al. Using deep learning to detect melanoma in dermoscopy images
CN107154043B (en) Pulmonary nodule false positive sample inhibition method based on 3DCNN
Zhang et al. Automatic skin lesion segmentation by coupling deep fully convolutional networks and shallow network with textons
Uysal et al. Computer-aided retinal vessel segmentation in retinal images: convolutional neural networks
Zhang et al. A novel denoising method for CT images based on U-net and multi-attention
Rajee et al. Gender classification on digital dental x-ray images using deep convolutional neural network
Zhang et al. A novel denoising method for low-dose CT images based on transformer and CNN
Sert et al. Ensemble of convolutional neural networks for classification of breast microcalcification from mammograms
Tan et al. Analysis of segmentation of lung parenchyma based on deep learning methods
CN110706225A (en) Tumor identification system based on artificial intelligence
US20240395023A1 (en) A computer-implemented method, data processing apparatus, and computer program for active learning for computer vision in digital images
Yang et al. RADCU-Net: Residual attention and dual-supervision cascaded U-Net for retinal blood vessel segmentation
CN111784713A (en) A U-shaped Heart Segmentation Method Introducing Attention Mechanism
CN118196013B (en) Multi-task medical image segmentation method and system supporting collaborative supervision of multiple doctors
CN110033448B (en) An AI-assisted Hamilton grading prediction analysis method for AGA clinical images
CN115689993A (en) Skin cancer image segmentation method and system based on attention and multi-feature fusion
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
Kong et al. Data enhancement based on M2-Unet for liver segmentation in Computed Tomography
CN116778250A (en) Coronary artery lesion classification method based on transfer learning and CBAM
CN114913164B (en) Super-pixel-based two-stage weak supervision new crown focus segmentation method
Bhardwaj et al. Detection and classification of lung cancer CT images using mask R-CNN based generated mask method
Malaiarasan et al. Towards Enhanced Deep CNN For Early And Precise Skin Cancer Diagnosis
Wang et al. An efficient hierarchical optic disc and cup segmentation network combined with multi-task learning and adversarial learning
Lv et al. An improved residual U-Net with morphological-based loss function for automatic liver segmentation in computed tomography
CN113744210A (en) Heart segmentation method based on multi-scale attention U-net network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20211203

WW01 Invention patent application withdrawn after publication