CN112315451A - Brain tissue segmentation method based on image clipping and convolutional neural network - Google Patents


Info

Publication number
CN112315451A
CN112315451A (application CN202011376522.3A)
Authority
CN
China
Prior art keywords
image
brain
brain tissue
segmentation
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011376522.3A
Other languages
Chinese (zh)
Inventor
宫照煊
张国栋
郭薇
周唯
刘智
孔令宇
国翠
柳昱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Aerospace University
Original Assignee
Shenyang Aerospace University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Aerospace University
Priority to CN202011376522.3A
Publication of CN112315451A
Legal status: Withdrawn

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00 adapted for image acquisition of a particular organ or body part
    • A61B5/0042 Features or image-related aspects of imaging apparatus classified in A61B5/00 adapted for image acquisition of a particular organ or body part for the brain
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device
    • A61B2576/00 Medical imaging apparatus involving image processing or analysis
    • A61B2576/02 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • A61B2576/026 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part for the brain

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Neurology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a brain tissue segmentation method based on image cropping and a convolutional neural network, comprising the following steps: S1, cropping the brain MR image to obtain the brain tissue region of interest; S2, extracting features from the cropped image with a convolutional neural network model; S3, enlarging the training set of cropped images to obtain an expanded data set; and S4, training a SegNet model on the expanded data set, feeding the image to be tested into the trained network and taking the network output as the initial brain tissue segmentation. Testing with different amounts of data yields several groups of initial segmentations, and random-selection fusion is applied to these results to achieve accurate segmentation of the brain tissue. The proposed method accurately segments brain structural tissue.

Description

Brain tissue segmentation method based on image clipping and convolutional neural network
Technical Field
The invention belongs to the technical field of image segmentation, and relates to a brain tissue segmentation method based on image clipping and a convolutional neural network.
Background
Brain diseases are characterized by high morbidity, high mortality, high disability rates, high recurrence rates and complex complications. Changes in the position, volume and shape of brain structures such as the hippocampus, amygdala and thalamus are closely related to many diseases, and these structures must be accurately segmented before such changes can be measured and analyzed; studying their position, volume and shape can therefore support clinical research on many diseases. However, these brain structures are anatomically complex, mostly located in the middle of the brain, and very close in gray level to the surrounding tissue; combined with the bias-field effect of the MR image itself, the partial volume effect and tissue motion, manual segmentation is a great challenge even for the most experienced radiologists.
Therefore, how to segment brain tissue quickly, accurately and effectively is a problem that urgently needs to be solved in medical image analysis.
Disclosure of Invention
The invention aims to provide a brain tissue segmentation method based on image cropping and a convolutional neural network.
The purpose of the invention can be realized by the following technical scheme:
a brain tissue segmentation method based on image cropping and a convolutional neural network comprises the following steps:
S1, cropping the brain MR image to obtain the brain tissue region of interest;
S2, extracting features from the cropped image with a convolutional neural network model;
S3, enlarging the training set of cropped images to obtain an expanded data set;
and S4, training a SegNet model on the expanded data set, feeding the image to be tested into the trained network and taking the network output as the initial brain tissue segmentation; testing with different amounts of data yields several groups of initial segmentations, and random-selection fusion is applied to these results to achieve accurate segmentation of the brain tissue.
Further, a 128 × 128 sub-image at the center of the original image is cut out as the input image for subsequent deep learning; this sub-image contains the entire brain tissue region.
Further, the data cropping method is as follows:
S11, the image is searched from top to bottom, from bottom to top, from left to right and from right to left; in each direction, the first row or column containing a pixel value greater than 0 defines a brain boundary line, and the four boundary lines form the bounding box of the brain region and give four vertex coordinates;
S12, a line equation is determined from each pair of opposite vertices, giving the equations of the two diagonals of the bounding box; the intersection of the diagonals is the center point of the brain region;
S13, a 128 × 128 region centered on this point is cut from the original image, yielding the cropped image region.
Further, the convolutional neural network model consists of two stages, a top-down stage and a bottom-up stage; every convolutional layer in the top-down stage is 3 × 3, every pooling layer is 2 × 2, and each convolutional layer is followed by a rectified linear unit (ReLU) activation; the bottom-up stage uses up-sampling, pooling and ReLU activations, and the final layer is a 1 × 1 convolutional layer that produces the segmentation.
The invention has the beneficial effects that:
the method realizes automatic extraction of brain tissue by using a deep learning and multi-map random selection method, firstly cuts the brain MR image to obtain the brain tissue region of interest, and convolves the cut data to more effectively learn image characteristics, thereby improving the segmentation precision of the deep learning. And then increasing the number of training data sets by rotating, translating and other operations on the cut images, training the expanded data sets by using a Segnet model, inputting the images to be tested into a trained network, outputting the network as an initial segmentation result of the brain tissue, testing by using different numbers of data to obtain the initial segmentation results of a plurality of groups of brain tissues, and applying random selection and fusion to the results to realize accurate segmentation of the brain tissue.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method for segmenting brain tissue based on image clipping and convolutional neural network according to the present invention;
FIG. 2 is a schematic diagram of an image cropping process of a brain tissue segmentation method based on image cropping and a convolutional neural network according to the present invention;
FIG. 3 is a schematic diagram of a brain tissue structure accurately segmented by the method of the present invention.
Detailed Description
The invention is explained in detail by the following examples in conjunction with fig. 1, 2 and 3:
As shown in fig. 1, the present invention provides a brain tissue segmentation method based on image cropping and a convolutional neural network, comprising the following steps:
S1, cropping the brain MR image to obtain the brain tissue region of interest;
S2, extracting features from the cropped image with a convolutional neural network model; convolving the cropped data lets the network learn image features more effectively and thus improves the segmentation accuracy of the deep learning.
S3, enlarging the training set of cropped images to obtain an expanded data set; specifically, the number of training samples can be increased by rotating, translating and similarly transforming the cropped images;
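The rotation and translation augmentation of step S3 can be sketched as follows. The patent does not specify which angles or offsets are used, so the choices below (right-angle rotations and small circular shifts) are illustrative assumptions only:

```python
import numpy as np

def augment(image, mask, shifts=((8, 0), (0, 8), (-8, 0), (0, -8))):
    """Expand one (cropped image, label mask) pair by rotations and translations.

    Assumed parameters: 90/180/270 degree rotations and the given pixel
    shifts; the patent only states that rotation, translation and similar
    operations are applied to the cropped images.
    """
    pairs = [(image, mask)]
    for k in (1, 2, 3):  # 90, 180 and 270 degree rotations
        pairs.append((np.rot90(image, k), np.rot90(mask, k)))
    for dy, dx in shifts:  # circular shifts as a simple stand-in for translation
        pairs.append((np.roll(image, (dy, dx), axis=(0, 1)),
                      np.roll(mask, (dy, dx), axis=(0, 1))))
    return pairs
```

Each original sample thus yields eight training pairs (the original, three rotations, four shifts), applied identically to the image and its label mask so that annotations stay aligned.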
and S4, training a SegNet model on the expanded data set, feeding the image to be tested into the trained network and taking the network output as the initial brain tissue segmentation; testing with different amounts of data yields several groups of initial segmentations, and random-selection fusion is applied to these results to achieve accurate segmentation of the brain tissue.
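One plausible reading of the random-selection fusion in step S4 is that, for every pixel, the label predicted by one randomly chosen candidate segmentation is kept. The patent does not spell out the fusion rule, so the sketch below (function name included) is an assumption:

```python
import numpy as np

def random_selection_fusion(masks, seed=0):
    """Fuse several initial segmentations of the same image.

    For each pixel, the label of one randomly selected candidate mask is
    kept. This is only one possible interpretation of the patent's
    "random selection fusion"; the seed makes the sketch reproducible.
    """
    masks = np.stack(masks)                       # shape (n, H, W)
    rng = np.random.default_rng(seed)
    choice = rng.integers(0, masks.shape[0], size=masks.shape[1:])
    rows, cols = np.indices(masks.shape[1:])
    return masks[choice, rows, cols]              # shape (H, W)
```

Where all candidate networks agree, the fused result is unchanged; disagreements are resolved by the random per-pixel draw, so the fused labels are always drawn from the candidate set.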
Further, a 128 × 128 sub-image at the center of the original image is cut out as the input image for subsequent deep learning; this sub-image contains the entire brain tissue region.
As shown in fig. 2, the original MR image is typically 256 × 256, and the brain structures of interest (hippocampus, thalamus, amygdala, etc.) usually lie in the central region of the imaged brain. Training the network directly on the full original image can lead to poor segmentation or even complete failure (e.g., an all-black output) at test time. To solve this problem, the invention provides a fully automatic image cropping method that extracts the 128 × 128 sub-image at the center of the original image, containing the entire brain tissue region, as the input for subsequent deep learning.
Further, the data cropping method is as follows:
S11, the image is searched from top to bottom, from bottom to top, from left to right and from right to left; in each direction, the first row or column containing a pixel value greater than 0 defines a brain boundary line (line 1 in fig. 2(a)). The four boundary lines form the bounding box of the brain region and give four vertex coordinates;
S12, a line equation is determined from each pair of opposite vertices, giving the equations of the two diagonals of the bounding box (line 2 in fig. 2(a)); the intersection of the diagonals is the center point of the brain region;
S13, a 128 × 128 region centered on this point (line 3 in fig. 2(b)) is cut from the original image, yielding the cropped image region.
The invention uses a convolutional neural network to extract brain tissue. The network model consists of a top-down stage and a bottom-up stage. The top-down stage uses conventional convolution and pooling operations: every convolutional layer is 3 × 3, every pooling layer is 2 × 2, and each convolutional layer is followed by a rectified linear unit (ReLU) activation. The bottom-up stage uses up-sampling, pooling and ReLU activations, and the final layer is a 1 × 1 convolutional layer that produces the segmentation. The convolutional neural network structure used in the invention is as follows:
[Network structure table, reproduced only as an image (Figure BDA0002807297840000051) in the original publication.]
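The encoder/decoder building blocks named above (3 × 3 convolution, ReLU, 2 × 2 max-pooling, up-sampling, 1 × 1 convolution) can be illustrated on a single-channel toy image. This is a minimal NumPy sketch of the individual operations, not the full SegNet model, and the kernel weights are arbitrary assumptions:

```python
import numpy as np

def conv2d(x, w):
    """'Same' 2-D convolution of a single-channel image with a k x k kernel."""
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + k, j:j + k] * w).sum()
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2x2(x):
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2x(x):
    return x.repeat(2, axis=0).repeat(2, axis=1)

# One encoder step (3x3 conv -> ReLU -> 2x2 pool) and one decoder step
# (2x up-sampling -> 1x1 convolution, a per-pixel weight for one channel).
img = np.arange(64, dtype=float).reshape(8, 8)
w3 = np.full((3, 3), 1.0 / 9.0)          # toy 3x3 averaging kernel
enc = maxpool2x2(relu(conv2d(img, w3)))  # 8x8 -> 4x4
dec = upsample2x(enc) * 0.5              # 4x4 -> 8x8
```

Stacking such encoder steps and mirroring them with decoder steps, with a final 1 × 1 convolution over the feature channels, yields the SegNet-style architecture the description refers to.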
Using the method described in this embodiment, the brain tissue structures were accurately segmented as shown in FIG. 3, which contains six tissues: 4, the globus pallidus; 5, the hippocampus; 6, the amygdala; 7, the caudate nucleus; 8, the lenticular nucleus; and 9, the thalamus.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (4)

1. A brain tissue segmentation method based on image cropping and a convolutional neural network, characterized by comprising the following steps:
S1, cropping the brain MR image to obtain the brain tissue region of interest;
S2, extracting features from the cropped image with a convolutional neural network model;
S3, enlarging the training set of cropped images to obtain an expanded data set;
and S4, training a SegNet model on the expanded data set, feeding the image to be tested into the trained network and taking the network output as the initial brain tissue segmentation; testing with different amounts of data yields several groups of initial segmentations, and random-selection fusion is applied to these results to achieve accurate segmentation of the brain tissue.
2. The method of claim 1, characterized in that a 128 × 128 sub-image at the center of the original image is cut out as the input image for subsequent deep learning, and the sub-image contains the entire brain tissue region.
3. The brain tissue segmentation method based on image cropping and a convolutional neural network as claimed in claim 2, characterized in that the data cropping method is as follows:
S11, the image is searched from top to bottom, from bottom to top, from left to right and from right to left; in each direction, the first row or column containing a pixel value greater than 0 defines a brain boundary line, and the four boundary lines form the bounding box of the brain region and give four vertex coordinates;
S12, a line equation is determined from each pair of opposite vertices, giving the equations of the two diagonals of the bounding box; the intersection of the diagonals is the center point of the brain region;
S13, a 128 × 128 region centered on this point is cut from the original image, yielding the cropped image region.
4. The method of claim 1, characterized in that the convolutional neural network model consists of a top-down stage and a bottom-up stage; every convolutional layer in the top-down stage is 3 × 3, every pooling layer is 2 × 2, and each convolutional layer is followed by a rectified linear unit (ReLU) activation; the bottom-up stage uses up-sampling, pooling and ReLU activations, and the final layer is a 1 × 1 convolutional layer that produces the segmentation.
CN202011376522.3A 2020-11-30 2020-11-30 Brain tissue segmentation method based on image clipping and convolutional neural network Withdrawn CN112315451A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011376522.3A CN112315451A (en) 2020-11-30 2020-11-30 Brain tissue segmentation method based on image clipping and convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011376522.3A CN112315451A (en) 2020-11-30 2020-11-30 Brain tissue segmentation method based on image clipping and convolutional neural network

Publications (1)

Publication Number Publication Date
CN112315451A 2021-02-05

Family

ID=74308204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011376522.3A Withdrawn CN112315451A (en) 2020-11-30 2020-11-30 Brain tissue segmentation method based on image clipping and convolutional neural network

Country Status (1)

Country Link
CN (1) CN112315451A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016681A (en) * 2017-03-29 2017-08-04 浙江师范大学 Brain MRI lesion segmentation approach based on full convolutional network
CN108898140A (en) * 2018-06-08 2018-11-27 天津大学 Brain tumor image segmentation algorithm based on improved full convolutional neural networks
CN109671086A (en) * 2018-12-19 2019-04-23 深圳大学 A kind of fetus head full-automatic partition method based on three-D ultrasonic
US20190333222A1 (en) * 2018-04-26 2019-10-31 NeuralSeg Ltd. Systems and methods for segmenting an image
CN111161275A (en) * 2018-11-08 2020-05-15 腾讯科技(深圳)有限公司 Method and device for segmenting target object in medical image and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邢波涛 et al.: "Brain tumor image segmentation using an improved fully convolutional neural network", 《信号处理》 (Journal of Signal Processing) *

Similar Documents

Publication Publication Date Title
US11373305B2 (en) Image processing method and device, computer apparatus, and storage medium
CN108647585B (en) Traffic identifier detection method based on multi-scale circulation attention network
CN109410219B (en) Image segmentation method and device based on pyramid fusion learning and computer readable storage medium
CN111524135B (en) Method and system for detecting defects of tiny hardware fittings of power transmission line based on image enhancement
CN111640125B (en) Aerial photography graph building detection and segmentation method and device based on Mask R-CNN
CN111161273B (en) Medical ultrasonic image segmentation method based on deep learning
CN108492248A (en) Depth map super-resolution method based on deep learning
CN109191476A (en) The automatic segmentation of Biomedical Image based on U-net network structure
CN110223304B (en) Image segmentation method and device based on multipath aggregation and computer-readable storage medium
CN111582104B (en) Remote sensing image semantic segmentation method and device based on self-attention feature aggregation network
CN106845529A (en) Image feature recognition methods based on many visual field convolutional neural networks
CN111553858B (en) Image restoration method and system based on generation countermeasure network and application thereof
CN111445478A (en) Intracranial aneurysm region automatic detection system and detection method for CTA image
CN107633522A (en) Brain image dividing method and system based on local similarity movable contour model
CN104217459B (en) A kind of spheroid character extracting method
CN111724401A (en) Image segmentation method and system based on boundary constraint cascade U-Net
CN111476794B (en) Cervical pathological tissue segmentation method based on UNET
CN108764250A (en) A method of extracting essential image with convolutional neural networks
CN111161278A (en) Deep network aggregation-based fundus image focus segmentation method
CN107481224A (en) Method for registering images and device, storage medium and equipment based on structure of mitochondria
CN110599495B (en) Image segmentation method based on semantic information mining
CN112257810A Submarine biological target detection method based on improved Faster R-CNN
CN112315451A (en) Brain tissue segmentation method based on image clipping and convolutional neural network
CN107464272A (en) The interpolation method of central diffusion type meteorological causes isopleth based on key point
CN116778164A Semantic segmentation method improving the DeepLabV3+ network based on a multi-scale structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210205