CN112132842A - Brain image segmentation method based on SEEDS algorithm and GRU network - Google Patents

Brain image segmentation method based on SEEDS algorithm and GRU network

Info

Publication number
CN112132842A
CN112132842A (application CN202011037578.6A)
Authority
CN
China
Prior art keywords
image
brain image
brain
superpixel
gru network
Prior art date
Legal status
Pending
Application number
CN202011037578.6A
Other languages
Chinese (zh)
Inventor
文颖
顾安琪
Current Assignee
East China Normal University
Original Assignee
East China Normal University
Priority date
Filing date
Publication date
Application filed by East China Normal University
Priority to CN202011037578.6A
Publication of CN112132842A

Classifications

    • G06T 7/11: Image analysis; Segmentation; Edge detection; Region-based segmentation
    • G06F 18/241: Pattern recognition; Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045: Neural networks; Combinations of networks
    • G06N 3/048: Neural networks; Activation functions
    • G06N 3/049: Neural networks; Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/084: Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
    • G06T 7/187: Segmentation; Edge detection involving region growing, region merging or connected component labelling
    • G06T 2207/10088: Image acquisition modality; Tomographic images; Magnetic resonance imaging [MRI]
    • G06T 2207/20081: Special algorithmic details; Training; Learning
    • G06T 2207/20084: Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/30016: Subject of image; Biomedical image processing; Brain

Abstract

The invention provides a brain image segmentation method based on the energy-driven sampling superpixel segmentation (SEEDS) algorithm and a Gated Recurrent Unit (GRU) network. First, the multi-modal images are preprocessed to obtain effective brain tissue images; the original brain image is then divided into a certain number of superpixel blocks with the SEEDS superpixel segmentation method, and the spatial features of the superpixel blocks are extracted. Each superpixel block is then classified by a trained GRU network, and the classification results are combined with the original brain image to obtain the final brain tissue segmentation result. The invention constructs the spatial features of the brain image with superpixels as the basic unit, which preserves the local features and edge information of the brain tissue and accurately describes the structure of each brain tissue; at the same time, the GRU network fuses the original image information with the spatial features, greatly improving segmentation accuracy.

Description

Brain image segmentation method based on SEEDS algorithm and GRU network
Technical Field
The invention belongs to the technical field of image segmentation, and particularly relates to a brain image segmentation method based on the SEEDS algorithm and a GRU network.
Background
In recent years, brain diseases have attracted increasing attention because of their high morbidity and high risk, and extracting brain tissue is an important step in diagnosing functional brain disorders. As a non-invasive, radiation-free imaging modality, brain magnetic resonance (MR) imaging has become an important tool for diagnosing and treating brain diseases. By setting and adjusting different acquisition parameters, MR images of the T1, T2 and PD modalities can be obtained, and each modality reflects different brain information, so making full use of the multi-modal information facilitates deeper study. Segmenting the brain lesion area from the MR image can also serve as navigation for subsequent treatment.
Image segmentation refers to dividing an image into distinct, characteristic regions according to certain similarity criteria. Traditional segmentation methods usually take pixels as the basic unit and rarely consider the relationships among pixels, and their efficiency drops sharply when the image is very large. Superpixels address this by grouping pixels into perceptually meaningful atomic regions that can replace the rigid structure of the traditional pixel grid. They capture image redundancy, greatly improve segmentation efficiency and reduce the complexity of image processing tasks; they can extract the features of local image regions while reflecting the spatial structure of the image, which makes them convenient for computing image features, so they are widely used as a preprocessing step in many image processing pipelines. Superpixels are now used in many fields, such as computer vision, depth estimation, and target tracking and recognition.
With more and more superpixel algorithms being proposed, superpixel segmentation can be roughly divided into two categories. The first is graph-based methods; representative algorithms include Graph-based Cut and Entropy Rate Superpixel (ERS). Graph-based methods treat pixels as nodes of a graph, so that each superpixel is the minimum spanning tree of its constituent pixels; they adhere well to image boundaries but tend to produce superpixels of very irregular size and shape. Entropy-rate-based algorithms achieve segmentation by maximizing an objective function containing a balance term and a random-walk entropy rate, and can produce superpixels with more regular shapes. The second category is gradient-ascent-based methods, including Simple Linear Iterative Clustering (SLIC) and the Mean Shift (MS) algorithm. Mean shift generates regularly shaped superpixels by searching for pixels belonging to the same cluster along the direction of increasing density, but it is slow and cannot control the number of superpixels. SLIC is a classical superpixel segmentation method that generates superpixels with K-means clustering; it has low time complexity and can control both the size and the compactness of the superpixels.
Disclosure of Invention
In view of the above defects in the prior art, and in order to improve upon existing image segmentation methods, the present invention aims to provide a brain image segmentation method based on the Superpixels Extracted via Energy-Driven Sampling (SEEDS) algorithm and a Gated Recurrent Unit (GRU) network. First, the multi-modal images are preprocessed to obtain effective brain tissue images; the original brain image is then divided into a certain number of superpixel blocks with the SEEDS superpixel segmentation method, and the spatial features of the superpixel blocks are extracted; each superpixel block is then classified by the trained GRU network, and the classification results are combined with the original brain image to obtain the final brain tissue segmentation result. The invention constructs the spatial features of the brain image with superpixels as the basic unit, which preserves the local features and edge information of the brain tissue and accurately describes the structure of each brain tissue; at the same time, the GRU network fuses the original image information with the spatial features, greatly improving segmentation accuracy.
In order to achieve the above object, the present invention provides a brain image segmentation method based on an SEEDS algorithm and a GRU network, comprising the following steps:
step S1: preprocessing the brain image set to remove a skull part in the brain image;
step S2: carrying out primary image segmentation on the images of the training set in the brain image set by using an SEEDS algorithm to construct a superpixel undirected graph;
step S3: constructing a characteristic sequence training set and a truth set according to the superpixel undirected graph obtained in the step S2;
step S4: taking the characteristic sequence training set and the truth value set constructed in the step S3 as the input of the GRU network to train the GRU network;
step S5: for a test image, steps S2 and S3 are repeated as steps S2' and S3', i.e., step S2': carrying out a preliminary image segmentation of the images of the test set in the brain image set using the SEEDS algorithm to construct a superpixel undirected graph; step S3': constructing a feature sequence from the superpixel undirected graph obtained in step S2'; the feature sequence obtained in step S3' is then input into the trained GRU network obtained in step S4 and classified to obtain the classification result of each superpixel block, and the classification results are returned to the original image to obtain the segmented brain image.
Further, segmenting the brain image means segmenting the three important brain tissues of the human brain in the image: gray matter, white matter and cerebrospinal fluid.
Further, the brain image set is from the medical image database BrainWeb.
Further, the brain image segmentation method based on the SEEDS algorithm and the GRU network further comprises a step S0 before step S1: dividing the brain image set into a training set and a test set, wherein the training set accounts for more than 50%, preferably more than 70%, of the images in the brain image set.
Further, the preprocessing of the brain image set in step S1 to remove the skull portion of the brain images is carried out with the BET algorithm.
Further, constructing the superpixel undirected graph in step S2 consists of pre-segmenting the image with the SEEDS algorithm to obtain a superpixel pre-segmentation map, then taking each superpixel region as a node and connecting adjacent superpixel regions with edges to construct the superpixel undirected graph.
Further, the step S3 specifically includes the following steps:
step S3-1: for each superpixel node, constructing a D × B feature sequence, wherein D is the input feature dimension and B is the sequence length;
step S3-2: constructing a truth value vector according to a given segmentation truth value of each training image;
step S3-3: and repeating the steps S3-1 and S3-2 for each image of the training set to construct a characteristic sequence training set and a truth set suitable for the GRU network.
Further, the step S4 specifically includes the following steps:
step S4-1: initializing the GRU network structure; setting the number of hidden units numHidden of the GRU network to 50, the number of segmentation classes numClass to 3, the maximum number of training epochs maxEpoch to 50, and the batch size miniBatchSize to 512;
step S4-2: selecting Adam as the network optimization algorithm, cross-entropy loss as the loss function and ReLU as the activation function, and training the GRU network; the training process is divided into forward propagation and backward propagation: first, the feature sequence training set and truth set constructed in step S3 are input, forward propagation obtains a prediction result through the ReLU activation function, and then in each iteration the GRU parameters are updated by computing the cross-entropy loss and backpropagating it.
Further, the step S5 specifically includes the following steps:
step S2': carrying out primary image segmentation on the images of the test set in the brain image set by using an SEEDS algorithm to construct a superpixel undirected graph;
step S3': constructing a characteristic sequence according to the super-pixel undirected graph obtained in the step S2';
step S5-1: inputting the characteristic sequence obtained in the step S3' into the trained GRU network obtained in the step S4 for classification to obtain a classification result of each superpixel block;
step S5-2: and returning the classification label of each super pixel block to the corresponding region of each super pixel block in the original test image as the label of each region to obtain a final segmentation result, namely a segmented brain image.
In the invention, the multi-modal images are first preprocessed to obtain effective brain tissue images; the original brain image is divided into a certain number of superpixel blocks with the SEEDS superpixel segmentation method and the spatial features of the superpixel blocks are extracted; each superpixel block is then classified by the trained GRU network, and the classification results are combined with the original brain image to obtain the final brain tissue segmentation result. The method constructs the spatial features of the brain image with superpixels as the basic unit, which preserves the local features and edge information of the brain tissue and accurately describes the structure of each brain tissue, and the GRU network fuses the original image information with the spatial features, greatly improving segmentation accuracy.
Drawings
FIG. 1 is a flow chart of a brain image segmentation method based on SEEDS algorithm and GRU network according to a preferred embodiment of the present invention;
FIG. 2 is a gray scale image of an original brain image according to a preferred embodiment of the present invention;
FIG. 3 is an image of a skull removed according to a preferred embodiment of the present invention;
FIG. 4 is an image pre-segmented by the SEEDS algorithm in accordance with a preferred embodiment of the present invention;
FIG. 5 is a flow chart of a GRU classification network in accordance with a preferred embodiment of the present invention;
FIG. 6 is a block diagram of a GRU classification network in accordance with a preferred embodiment of the present invention;
FIG. 7 shows the final segmentation result of a preferred embodiment of the present invention, where (a) is the segmentation ground truth and (b) is the segmentation result obtained in this embodiment.
Detailed Description
The following examples illustrate the present invention in detail and describe specific embodiments and procedures, but the scope of the present invention is not limited to these examples.
In a preferred embodiment, the main segmentation task of the present invention is to separate the three important brain tissues of the human brain: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF); the rest is Background (BG). As shown in FIG. 1, the brain image segmentation method based on the SEEDS algorithm and a GRU network according to the present invention comprises the following steps:
step S0: in this embodiment, the medical image database BrainWeb is used; it contains anatomical-model-based magnetic resonance scans simulating a normal brain and includes MR images of the three modalities T1, T2 and PD. 400 MR images of each of the three modalities are selected as the brain image set, and the MR images in the brain image set are divided into training and test samples in a 7:3 ratio, i.e., 280 images are randomly selected from each modality to form the training set {B_i}, i = 1, 2, …, N (840 images in total), and the remaining 120 images of each modality form the test set (360 images in total);

step S1: the images of the training set {B_i} are preprocessed. A brain image can be divided into brain tissue structures and non-brain-tissue structures; the brain tissue mainly comprises gray matter, white matter and cerebrospinal fluid, while the non-brain tissue is mainly the skull, eyeballs, etc. The main segmentation task of this embodiment is to separate the three brain tissues, so each brain image in the training set should retain only these three most important parts. The BET algorithm is a deformable-model method that pushes contour points toward the brain tissue boundary under the interaction of three forces and thereby extracts the brain tissue inside the skull. For each image, skull stripping with the BET algorithm is given by the following formulas (1) and (2):
M_i = bet(B_i, F, G)    (1)

B'_i = B_i × M_i    (2)

where M_i denotes the skull-stripping mask obtained by the BET algorithm; B_i is the original brain image; F is the image intensity threshold, taken as the default value 0.5 in this embodiment; G is the vertical gradient threshold, taken as the default value 0 in this embodiment; and B'_i, i = 1, 2, …, N, is obtained by multiplying the mask image with the original image. As shown in FIGS. 2-3, FIG. 2 is an original brain image and FIG. 3 is the image after skull stripping.
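As an illustration of formulas (1) and (2), the following Python sketch applies a precomputed BET brain mask to an MR image; it assumes the mask has already been produced by an external BET implementation (for example FSL's bet tool), that the nibabel library is available for reading the volumes, and that the file names are hypothetical.

```python
import numpy as np
import nibabel as nib  # assumed to be available for reading MR volumes


def skull_strip(original_path: str, mask_path: str) -> np.ndarray:
    """Apply formula (2): B'_i = B_i x M_i, where M_i is a binary BET brain mask."""
    B_i = nib.load(original_path).get_fdata()   # original brain image B_i
    M_i = nib.load(mask_path).get_fdata() > 0   # brain mask M_i from BET, formula (1)
    return B_i * M_i                            # element-wise product keeps brain tissue only


# hypothetical file names, for illustration only
stripped = skull_strip("subject01_T1.nii.gz", "subject01_T1_brain_mask.nii.gz")
```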
Step S2: segment the brain image with the SEEDS algorithm and construct a superpixel undirected graph. The method adopts the energy-driven sampling superpixel segmentation algorithm to segment the brain image. A superpixel is a small region composed of a series of adjacent pixels with similar characteristics such as color, brightness and texture; such small regions retain most of the information useful for further image segmentation and generally do not destroy the boundary information of objects in the image. The SEEDS algorithm first selects a regular, complete superpixel partition and then optimizes it by moving superpixel boundaries or exchanging pixels between adjacent superpixels. Boundary updates are performed on blocks defined in a hierarchical structure: starting from larger blocks, the block size gradually shrinks to the pixel level as the algorithm iterates, and the superpixel blocks finally produced preserve accurate boundary information while the algorithm runs fast. Superpixel segmentation with the SEEDS algorithm is given by formula (3):

S_i = superpixelSEEDS(image_size, num_superpixels, num_levels)    (3)

where S_i is the superpixel pre-segmentation map; image_size specifies the size of the input image, namely the width image_width, the height image_height and the number of channels image_channels of the input image; num_superpixels is the desired number of superpixels, set to 4000 in this embodiment; num_levels is the number of block levels: the larger the value, the more accurate the segmentation and the smoother the superpixel shapes, at the cost of more memory and CPU time; num_levels is set to 12 in this embodiment. FIG. 4 shows the pre-segmentation produced by the SEEDS algorithm.

Each superpixel block obtained from the pre-segmentation is regarded as a node of an undirected graph, adjacent nodes are connected with undirected edges, and an adjacency matrix is built from the nodes and undirected edges to construct the superpixel undirected graph G(V, E), as sketched below.
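One possible realization of formula (3) and of the adjacency-matrix construction, sketched with OpenCV's ximgproc module (the opencv-contrib package is assumed); the parameter values follow this embodiment (4000 superpixels, 12 block levels), and the 4-neighborhood label comparison is only one of several equivalent ways to detect adjacent superpixels.

```python
import cv2
import numpy as np


def seeds_superpixels(img: np.ndarray, num_superpixels: int = 4000, num_levels: int = 12):
    """Pre-segment an image with SEEDS and build the superpixel adjacency matrix of G(V, E)."""
    h, w = img.shape[:2]
    channels = img.shape[2] if img.ndim == 3 else 1
    seeds = cv2.ximgproc.createSuperpixelSEEDS(w, h, channels, num_superpixels, num_levels)
    seeds.iterate(img, 10)                     # run the boundary-update iterations
    labels = seeds.getLabels()                 # per-pixel superpixel label map S_i
    n_nodes = seeds.getNumberOfSuperpixels()

    adjacency = np.zeros((n_nodes, n_nodes), dtype=bool)
    # two superpixels are adjacent if their labels touch horizontally or vertically
    horiz = labels[:, :-1] != labels[:, 1:]
    vert = labels[:-1, :] != labels[1:, :]
    a, b = labels[:, :-1][horiz], labels[:, 1:][horiz]
    c, d = labels[:-1, :][vert], labels[1:, :][vert]
    adjacency[np.concatenate([a, c]), np.concatenate([b, d])] = True
    adjacency = adjacency | adjacency.T        # undirected edges
    return labels, adjacency
```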
Step S3: feature sequence training set T for building GRU network input according to superpixel segmentation graphtrainIt is shown in the following formula (4):
Figure BDA0002705589790000053
wherein the content of the first and second substances,
Figure BDA0002705589790000054
representing a characteristic sequence formed by jth super pixel nodes of the ith training sample; d is the input characteristic dimension; b isi,jIs the characteristic sequence length; n represents the number of samples and,
Figure BDA0002705589790000055
representing the number of superpixel nodes for the ith training sample. In particular, the signature sequence Ti,jThe construction of (2) is shown in the following equation (5):
Figure BDA0002705589790000056
wherein, Ti,jFrom Li,jAnd Ni,jComposition is carried out; l isi,jA feature vector representing a current superpixel node; n is a radical ofi,jAnd the feature vectors represent the feature vectors of the nodes of the adjacent domains of the current super pixel node. L isi,jAnd Ni,jAs shown in the following equations (6) and (7):
Li,j=maxB′i(h,w),(h,w)∈Ri,j (6)
Figure BDA0002705589790000051
Figure BDA0002705589790000052
wherein L isi,jThe maximum value of the pixels in the domain where the current super-pixel node is located; n is a radical ofi,jThe method comprises the following steps of (1) forming a maximum value of a node pixel of an adjacent domain of a current super pixel node;
Figure BDA0002705589790000061
representing an image area where an s adjacent node of a j super pixel node of an ith training sample is located; n is a radical ofi,jIndicating the number of nodes neighboring the current node.
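A minimal Python sketch of formulas (5) to (7), assuming the labels and adjacency arrays produced by the SEEDS sketch above; the per-node feature is simply the maximum intensity of the skull-stripped image over each region, as in formula (6), and the ordering of the neighbor entries is an illustrative choice.

```python
import numpy as np


def node_feature_sequence(stripped: np.ndarray, labels: np.ndarray,
                          adjacency: np.ndarray, j: int) -> np.ndarray:
    """Build T_{i,j} = [L_{i,j}, N_{i,j}] for superpixel node j of one training image."""
    # L_{i,j}: maximum pixel value inside the current superpixel region R_{i,j} (formula (6))
    L = stripped[labels == j].max()
    # N_{i,j}: maximum pixel value of each adjacent superpixel region (formula (7))
    neighbors = np.flatnonzero(adjacency[j])
    N = [stripped[labels == s].max() for s in neighbors]
    return np.array([L] + N, dtype=np.float32)   # sequence of length 1 + number of neighbors
```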
The truth set G_train is defined by formulas (8) and (9):

G_train = {g_{i,j} | i = 1, 2, …, N; j = 1, 2, …, n_i}    (8)

g_{i,j} = mode D_i(h, w), (h, w) ∈ R_{i,j}    (9)

where g_{i,j} denotes the truth label corresponding to each superpixel node; D_i denotes the ground truth corresponding to the i-th training sample, and in this embodiment D_i takes values in {1, 2, 3}; and mode denotes taking the mode of the labels within the region of the current node.
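Correspondingly, formula (9) can be sketched as the mode of the ground-truth labels inside a superpixel region; the bincount-based mode below assumes non-negative integer labels, as in this embodiment.

```python
import numpy as np


def node_truth_label(truth: np.ndarray, labels: np.ndarray, j: int) -> int:
    """g_{i,j} = mode of the ground-truth labels D_i(h, w) over region R_{i,j} (formula (9))."""
    region = truth[labels == j].astype(np.int64)   # ground-truth values, here in {1, 2, 3}
    return int(np.bincount(region).argmax())       # the most frequent label is the mode
```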
Step S4: the GRU classification network is trained. The GRU network can be used for processing time sequence data, past information can be memorized through a gating mechanism, meanwhile, unimportant information can be selectively forgotten, and long-term context and other relations can be modeled, the dependence relation with larger time step distance in the time sequence can be better captured, the GRU network is a variant network of the LSTM long-term memory network, the problem that the recurrent neural network RNN cannot better process long-distance dependence and the problems of gradient loss and gradient explosion caused by long-term dependence can be solved, and on the basis of optimizing LSTM structure complexity, training time is shorter, and the method is easier to achieve.
As shown in FIG. 5, in this embodiment the feature sequences constructed for the superpixel nodes in step S3 are used as the input of the GRU classification network. The sequence is processed by the GRU units, and the result then passes through a fully connected layer and a softmax layer, finally producing an output vector of length 3.
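A possible PyTorch sketch of the classifier of FIG. 5, under the assumption that each superpixel node is presented as a feature sequence of shape (sequence length, D); the layer sizes follow this embodiment (50 hidden units, 3 classes), while the class and variable names are illustrative. Since nn.CrossEntropyLoss applies the softmax internally during training, the forward pass returns raw scores and the softmax of FIG. 5 is applied in predict.

```python
import torch
import torch.nn as nn


class GRUSuperpixelClassifier(nn.Module):
    """GRU units -> fully connected layer; softmax gives the length-3 class probabilities (FIG. 5)."""

    def __init__(self, input_dim: int = 1, hidden_dim: int = 50, num_classes: int = 3):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, input_dim); the final hidden state summarizes the sequence
        _, h_last = self.gru(x)
        return self.fc(h_last.squeeze(0))            # raw class scores per superpixel node

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.forward(x), dim=-1)   # length-3 probability vector per node
```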
As shown in FIG. 6, the basic GRU network structure used in this embodiment is defined as follows. In a GRU unit, the input X_t of the current time step and the hidden state h_{t-1} of the previous time step serve as the inputs of the reset gate and the update gate, both computed with the sigmoid activation function; the hidden state h_t of the current time step, obtained from h_{t-1} and the candidate hidden state h̃_t, is the final output of this stage and the input of the next stage. The GRU has only two gates, the reset gate r_t and the update gate z_t. The reset gate r_t controls whether the computation of the candidate state depends on the previous state h_{t-1}, i.e. how much past information is discarded, which helps capture short-term dependencies in the sequence. The update gate z_t merges the input gate and the forget gate of the LSTM into one; it controls how much of the previously memorized information is kept at the current time step, which helps capture long-term dependencies in the sequence. These two gates together determine which information finally becomes the output of the GRU network. Here W_r, W_z and W_h are the corresponding weight matrices, X_t is the input of the current time step, h_{t-1} is the hidden state at the previous time step t-1, h_t is the final hidden state at the current time step t, and h̃_t is the candidate hidden state at the current time step t; the two parts are balanced by the weight z_t, so that the final hidden state h_t at time step t is updated by formula (10):
z_t = σ(W_z · [h_{t-1}, X_t])

r_t = σ(W_r · [h_{t-1}, X_t])

h̃_t = tanh(W_h · [r_t * h_{t-1}, X_t])

h_t = z_t * h_{t-1} + (1 - z_t) * h̃_t    (10)

where * denotes element-wise (Hadamard) multiplication and σ denotes the sigmoid function.
z_t is the activation of the update gate and likewise controls the inflow of information in gated form. The Hadamard product of z_t and h_{t-1} represents the information from the previous time step that is retained in the final memory; adding it to the part of the current candidate memory that is retained yields the final output of the GRU, as in formula (10).
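For illustration, a NumPy sketch of a single GRU time step following formula (10) and the gating convention described above; bias terms are omitted and the weight shapes are illustrative (each weight matrix has shape (hidden_dim, hidden_dim + input_dim)).

```python
import numpy as np


def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))


def gru_step(x_t, h_prev, W_z, W_r, W_h):
    """One GRU time step per formula (10); gate inputs are the concatenation [h_{t-1}, X_t]."""
    hx = np.concatenate([h_prev, x_t])
    z_t = sigmoid(W_z @ hx)                                        # update gate
    r_t = sigmoid(W_r @ hx)                                        # reset gate
    h_cand = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))    # candidate hidden state
    return z_t * h_prev + (1.0 - z_t) * h_cand                     # final hidden state h_t
```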
In this embodiment, the number of GRU hidden units numHidden is set to 50, the number of segmentation classes numClass to 3, the maximum number of training epochs maxEpoch to 50 and the batch size miniBatchSize to 512, and the GRU network is trained. Adam is selected as the optimization algorithm, cross-entropy loss as the loss function and ReLU as the activation function. Training consists of forward propagation and backward propagation: forward propagation produces a prediction through the ReLU activation function, and in each iteration the GRU parameters are updated by computing the cross-entropy loss and backpropagating it.
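A compact PyTorch training sketch matching these settings (Adam, cross-entropy loss, 50 epochs, batch size 512); it assumes the GRUSuperpixelClassifier sketch above and tensors X and y prepared as in step S3, with the feature sequences padded to a common length and the labels shifted to {0, 1, 2}. The ReLU mentioned in the text could be inserted between the GRU output and the fully connected layer; it is omitted here for brevity.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def train_gru(X: torch.Tensor, y: torch.Tensor, max_epoch: int = 50, batch_size: int = 512):
    """Train the GRU classifier with Adam and cross-entropy loss (step S4)."""
    model = GRUSuperpixelClassifier(input_dim=X.shape[-1], hidden_dim=50, num_classes=3)
    optimizer = torch.optim.Adam(model.parameters())
    criterion = nn.CrossEntropyLoss()                 # applies softmax internally
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)

    for epoch in range(max_epoch):
        for batch_x, batch_y in loader:               # batch_y: class indices in {0, 1, 2}
            optimizer.zero_grad()
            loss = criterion(model(batch_x), batch_y)  # forward pass + cross-entropy loss
            loss.backward()                            # backpropagation
            optimizer.step()                           # Adam parameter update
    return model
```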
Step S5: steps S2 and S3 are repeated for the images of the test set, forming steps S2' and S3': a preliminary segmentation of each test image in the brain image set is performed with the SEEDS algorithm to construct a superpixel undirected graph, and a feature sequence is constructed from the resulting superpixel undirected graph. The feature sequences are input into the trained GRU network for classification, yielding a classification result for each superpixel block; the classification label of each superpixel block is assigned back to the corresponding region of the original test image as the label of that region. The final segmentation result is shown in FIG. 7(b) and is compared with the segmentation ground truth shown in FIG. 7(a).
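The mapping of superpixel labels back to pixels (step S5-2) can be sketched as follows, assuming the labels array from the SEEDS sketch and a per-node prediction array node_pred (indexed by superpixel id, values in {1, 2, 3}) obtained from the trained classifier.

```python
import numpy as np


def labels_to_segmentation(labels: np.ndarray, node_pred: np.ndarray) -> np.ndarray:
    """Assign each superpixel's predicted class to every pixel of its region (step S5-2)."""
    return node_pred[labels]   # fancy indexing: each pixel receives the label of its superpixel node
```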
The foregoing is a detailed description of preferred embodiments of the invention. It should be understood that those skilled in the art could devise numerous modifications and variations in light of the present teachings without departing from the inventive concept. Therefore, any technical solution that a person skilled in the art can obtain through logical analysis, reasoning or limited experimentation based on the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (10)

1. A brain image segmentation method based on an SEEDS algorithm and a GRU network is characterized by comprising the following steps:
step S1: preprocessing the brain image set to remove a skull part in the brain image;
step S2: carrying out primary image segmentation on the images of the training set in the brain image set by using an SEEDS algorithm to construct a superpixel undirected graph;
step S3: constructing a characteristic sequence training set and a truth set according to the superpixel undirected graph obtained in the step S2;
step S4: taking the characteristic sequence training set and the truth value set constructed in the step S3 as the input of the GRU network to train the GRU network;
step S5: carrying out a preliminary image segmentation of the images of the test set in the brain image set by using the SEEDS algorithm to construct a superpixel undirected graph; constructing a feature sequence according to the obtained superpixel undirected graph; inputting the obtained feature sequence into the trained GRU network obtained in step S4 and classifying to obtain a classification result of each superpixel block, and returning the classification result to the original image to obtain a segmented brain image.
2. The method for brain image segmentation based on SEEDS algorithm and GRU network as claimed in claim 1, wherein the segmentation of the brain image is to segment gray matter, white matter and cerebrospinal fluid in the brain image.
3. The method for brain image segmentation based on the SEEDS algorithm and the GRU network according to claim 1, wherein the set of brain images is from a medical image database BrainWeb.
4. The method for brain image segmentation based on SEEDS algorithm and GRU network as claimed in claim 1, further comprising step S0 before step S1: the method comprises the steps of dividing a brain image set into a training set and a testing set, wherein the number of images of the training set accounts for more than 50% of the number of images of the brain image set.
5. The method of claim 4, wherein the number of images of the training set is more than 70% of the number of images of the brain image set.
6. The method for brain image segmentation based on SEEDS algorithm and GRU network as claimed in claim 1, wherein the pre-processing of the brain image set to remove the skull portion in the brain image in step S1 is processing of the brain image set by BET algorithm.
7. The method for segmenting the brain image based on the SEEDS algorithm and the GRU network as claimed in claim 1, wherein the constructing the superpixel undirected graph in the step S2 is implemented by pre-segmenting the image through the SEEDS algorithm to obtain the superpixel pre-segmented graph, and then using each superpixel area as a node, and connecting the adjacent superpixel areas by using edges to construct the superpixel undirected graph.
8. The method for segmenting the brain image based on the SEEDS algorithm and the GRU network as claimed in claim 1, wherein the step S3 specifically comprises the following steps:
step S3-1: for each superpixel node, constructing a D × B feature sequence, wherein D is the input feature dimension and B is the sequence length;
step S3-2: constructing a truth value vector according to a given segmentation truth value of each training image;
step S3-3: and repeating the steps S3-1 and S3-2 for each image of the training set to construct a characteristic sequence training set and a truth set suitable for the GRU network.
9. The method for segmenting the brain image based on the SEEDS algorithm and the GRU network as claimed in claim 1, wherein the step S4 specifically comprises the following steps:
step S4-1: initializing the GRU network structure; setting the number of hidden units numHidden of the GRU network to 50, the number of segmentation classes numClass to 3, the maximum number of training epochs maxEpoch to 50, and the batch size miniBatchSize to 512;
step S4-2: selecting Adam as the network optimization algorithm, cross-entropy loss as the loss function and ReLU as the activation function, and training the GRU network; the training process is divided into forward propagation and backward propagation: first, the feature sequence training set and truth set constructed in step S3 are input, forward propagation obtains a prediction result through the ReLU activation function, and then in each iteration the GRU parameters are updated by computing the cross-entropy loss and backpropagating it.
10. The method for segmenting the brain image based on the SEEDS algorithm and the GRU network as claimed in claim 1, wherein the step S5 specifically comprises the following steps:
step S2': carrying out primary image segmentation on the images of the test set in the brain image set by using an SEEDS algorithm to construct a superpixel undirected graph;
step S3': constructing a characteristic sequence according to the super-pixel undirected graph obtained in the step S2';
step S5-1: inputting the characteristic sequence obtained in the step S3' into the trained GRU network obtained in the step S4 for classification to obtain a classification result of each superpixel block;
step S5-2: and returning the classification label of each super pixel block to the region corresponding to each super pixel block in the original test image as the label of each region to obtain the segmented brain image.
CN202011037578.6A 2020-09-28 2020-09-28 Brain image segmentation method based on SEEDS algorithm and GRU network Pending CN112132842A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011037578.6A CN112132842A (en) 2020-09-28 2020-09-28 Brain image segmentation method based on SEEDS algorithm and GRU network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011037578.6A CN112132842A (en) 2020-09-28 2020-09-28 Brain image segmentation method based on SEEDS algorithm and GRU network

Publications (1)

Publication Number Publication Date
CN112132842A true CN112132842A (en) 2020-12-25

Family

ID=73840820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011037578.6A Pending CN112132842A (en) 2020-09-28 2020-09-28 Brain image segmentation method based on SEEDS algorithm and GRU network

Country Status (1)

Country Link
CN (1) CN112132842A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745473A (en) * 2014-01-16 2014-04-23 南方医科大学 Brain tissue extraction method
CN106651886A (en) * 2017-01-03 2017-05-10 北京工业大学 Cloud image segmentation method based on superpixel clustering optimization CNN
CN106919950A (en) * 2017-01-22 2017-07-04 山东大学 Probability density weights the brain MR image segmentation of geodesic distance
CN109035252A (en) * 2018-06-29 2018-12-18 山东财经大学 A kind of super-pixel method towards medical image segmentation
CN109102512A (en) * 2018-08-06 2018-12-28 西安电子科技大学 A kind of MRI brain tumor image partition method based on DBN neural network
CN109741341A (en) * 2018-12-20 2019-05-10 华东师范大学 A kind of image partition method based on super-pixel and long memory network in short-term
CN110163822A (en) * 2019-05-14 2019-08-23 武汉大学 The netted analyte detection and minimizing technology and system cut based on super-pixel segmentation and figure

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Shuchun, Beijing: China Machine Press *

Similar Documents

Publication Publication Date Title
CN109145939B (en) Semantic segmentation method for small-target sensitive dual-channel convolutional neural network
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN109345508B (en) Bone age evaluation method based on two-stage neural network
Kumar et al. Breast cancer classification of image using convolutional neural network
CN108491766B (en) End-to-end crowd counting method based on depth decision forest
Gao et al. Fusion of medical images based on salient features extraction by PSO optimized fuzzy logic in NSST domain
CN110859624A (en) Brain age deep learning prediction system based on structural magnetic resonance image
Yang et al. Heterogeneous SPCNN and its application in image segmentation
CN114677403A (en) Liver tumor image segmentation method based on deep learning attention mechanism
Ninh et al. Skin lesion segmentation based on modification of SegNet neural networks
CN112348059A (en) Deep learning-based method and system for classifying multiple dyeing pathological images
CN116884623B (en) Medical rehabilitation prediction system based on laser scanning imaging
CN112396587A (en) Method for detecting crowding degree in bus compartment based on cooperative training and density map
Priyadharshini et al. A novel hybrid Extreme Learning Machine and Teaching–Learning-Based​ Optimization algorithm for skin cancer detection
Senthilkumaran et al. Brain image segmentation
Bhimavarapu et al. Analysis and characterization of plant diseases using transfer learning
Chen et al. Skin lesion segmentation using recurrent attentional convolutional networks
CN110033448B (en) AI-assisted male baldness Hamilton grading prediction analysis method for AGA clinical image
CN111611919B (en) Road scene layout analysis method based on structured learning
Elmenabawy et al. Deep segmentation of the liver and the hepatic tumors from abdomen tomography images
Yuan et al. Explore double-opponency and skin color for saliency detection
CN108765384B (en) Significance detection method for joint manifold sequencing and improved convex hull
Chang Research on sports video image based on fuzzy algorithms
CN115909016A (en) System, method, electronic device, and medium for analyzing fMRI image based on GCN
CN112132842A (en) Brain image segmentation method based on SEEDS algorithm and GRU network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201225