CN112686902B - Two-stage calculation method for brain glioma identification and segmentation in nuclear magnetic resonance image


Info

Publication number
CN112686902B
CN112686902B
Authority
CN
China
Prior art keywords
tumor
image
feature
area
neural network
Prior art date
Legal status
Active
Application number
CN201910987988.8A
Other languages
Chinese (zh)
Other versions
CN112686902A (en)
Inventor
陈皓
夏雨
李广
Current Assignee
Xi'an University of Posts and Telecommunications
Original Assignee
Xi'an University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Xi'an University of Posts and Telecommunications
Priority to CN201910987988.8A
Publication of CN112686902A
Application granted
Publication of CN112686902B
Legal status: Active
Anticipated expiration


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a two-stage calculation method for identifying and segmenting brain glioma in magnetic resonance imaging (MRI), comprising the following steps: 1. a convolutional neural network identifies and coarsely localizes suspected tumor areas on the gridded image; 2. 4 × 104 first-order and second-order radiomics features are extracted and fused with 128-dimensional high-order features obtained from a convolutional neural network to form a feature set; 3. feature selection by L1 regularization generates a 178-dimensional feature vector; 4. an ensemble learning method performs pixel-level classification and tumor boundary labeling on the suspected tumor areas to obtain the brain glioma segmentation result. The two-stage operation of tumor identification with coarse localization of suspected regions followed by pixel-level fine localization improves overall computational efficiency; fusing radiomics features with the high-order information captured by the convolutional neural network forms a more comprehensive feature set; and feature selection generates more effective feature vectors, improving overall recognition accuracy.

Description

Two-stage calculation method for brain glioma identification and segmentation in nuclear magnetic resonance image
Technical Field
The invention belongs to the field of computer vision and pattern recognition, and particularly relates to a two-stage calculation method for brain glioma identification and segmentation in magnetic resonance imaging (MRI) images.
Background
Glioblastomas are common invasive brain tumors and have the highest mortality among brain tumors. They can appear in any form, at any time, and at any location in the brain, and the large differences in tissue structure across brain regions make precise segmentation of glioma difficult. Meanwhile, traditional manual segmentation consumes substantial manpower and time and demands a high level of expertise from the person performing the segmentation. Automatic and accurate segmentation of brain glioma therefore compensates for the shortcomings of manual segmentation and benefits early tumor screening as well as later treatment and recovery. It can effectively improve the automation of computer-aided diagnosis and is an important means of improving diagnostic efficiency and accuracy.
Currently, segmentation methods for brain glioma fall roughly into two categories. One is segmentation based on image gray-scale information, such as region growing and level-set techniques; this approach is poorly suited to high-grade gliomas and may fail on non-enhancing tumor images. The other is based primarily on convolutional neural network (CNN) methods. A CNN obtains image information directly through its convolution kernels and has achieved major breakthroughs in image classification, image segmentation, target detection and the like. However, such methods focus more on image texture information and mostly consider the segmentation result globally, lacking discrimination at the pixel level. The present method constructs a richer feature set by extracting radiomics information from the image and fusing it with the high-order image information obtained through a CNN, while a two-stage, ensemble-learning-based computation improves both overall image recognition efficiency and tumor segmentation accuracy.
Disclosure of Invention
Aimed at the problems of the traditional methods, in which the segmentation process requires manual intervention and the segmentation result is easily affected by image noise and intensity non-uniformity, a novel brain glioma segmentation method is provided. The method combines the advantages of convolutional neural networks and ensemble learning: a convolutional neural network model identifies tumors on the gridded MRI image and coarsely localizes them, and an ensemble learning method then performs fine tumor segmentation in the multi-modal magnetic resonance images. The key technical problems to be solved are: rapid tumor identification; feature extraction, fusion and selection; and refined segmentation.
In order to achieve the purpose, the specific technical scheme of the invention is as follows:
a two-stage computational method for brain glioma identification and segmentation in Magnetic Resonance Imaging (MRI), the method comprising the steps of:
Step 1: data preparation, specifically:
Case data are prepared in the following structure; the MRI data of one case contain images of four sequences, Flair, T1, T1ce and T2, which can be represented as:
I = {I_1, I_2, ..., I_N}
I_n = {I_n^Flair, I_n^T1, I_n^T1ce, I_n^T2}
where N denotes the total number of layers of the MRI image sequence and I_n is the n-th layer image set, of size 4 × 240 × 240; I_n^Flair, I_n^T1, I_n^T1ce and I_n^T2 respectively denote the Flair, T1, T1ce and T2 images in I_n;
Step 2: initialization of the rapid tumor identification method, specifically:
Step 2.1: MRI image gridding: the MRI image is divided into a number of equidistant rectangular areas in each of the Flair, T1, T1ce and T2 sequences. Let the rectangular area size be g_l × g_l; for an MRI image of size t × t, starting from the (0, 0) point of the original image, the original image is divided into equidistant rectangular areas with a step length of g_l/2, and an edge area smaller than a full rectangular area is zero-padded into a complete rectangular area. Let a rectangular area be p_(x,y), i.e. p_(x,y) is the rectangular area determined by the points (x, y) and (x + g_l, y + g_l); if the set of rectangular areas is Ω(P), the data of each rectangular area are the pixels of the four sequence images lying between (x, y) and (x + g_l, y + g_l).
An experimental search over g_l found that model localization accuracy is highest when g_l = 26, so p_(x,y) has size 4 × 26 × 26;
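By way of illustration, a minimal numpy sketch of the step 2.1 gridding, assuming the g_l/2 step length reconstructed above and zero padding at the border (the function and variable names are ours, not the patent's):

```python
import math
import numpy as np

def grid_slice(img, g_l=26, step=13):
    # Slide a g_l x g_l window over a 4 x t x t MRI slice with step g_l/2,
    # zero-padding the border so edge windows are complete rectangles.
    _, t, _ = img.shape
    n = math.ceil((t - g_l) / step) + 1            # window positions per axis
    pad = (n - 1) * step + g_l - t                 # zero padding for edge areas
    img = np.pad(img, ((0, 0), (0, pad), (0, pad)))
    return {(x, y): img[:, x:x + g_l, y:y + g_l]   # p_(x,y), size 4 x 26 x 26
            for x in range(0, (n - 1) * step + 1, step)
            for y in range(0, (n - 1) * step + 1, step)}  # the set Omega(P)
```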
Step 2.2: train the rapid recognition network, specifically:
Step 2.2.1: construct a two-channel convolutional neural network in which the input of each channel is 4 × 26 × 26. Channel A consists of a 13 × 13 convolutional layer and a Dropout layer with a drop rate of 0.5 and performs only a single convolution; channel B has a typical structure: a 5 × 5 convolutional layer, a 4 × 4 pooling layer, a 3 × 3 convolutional layer, a 2 × 2 pooling layer and a Dropout layer with a drop rate of 0.5. The several small convolution kernels in channel B allow the model to capture image detail; finally, the feature maps of the two channels are merged through a fusion channel to complete the classification model (a code sketch follows step 2.2.2);
Step 2.2.2: train the model. Each MRI image in the training sample is divided into tumor and non-tumor regions according to the segmentation standard; 4 × 26 × 26 rectangular areas are randomly sampled from the training data set for training, each with its corresponding label. Training uses a batch size of 16 and 20 data iterations with an Adam optimizer (learning rate 0.005, learning-rate decay factor 0.1, momentum 0.9);
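A PyTorch sketch of the step 2.2.1 network; the patent fixes the kernel and pooling sizes, the dropout rate and the training settings, while the channel widths (16/32) and the two-class fusion head are assumptions:

```python
import torch
import torch.nn as nn

class DualChannelCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Channel A: a single 13x13 convolution over the 4x26x26 patch + dropout.
        self.chan_a = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=13), nn.ReLU(), nn.Dropout(0.5))
        # Channel B: 5x5 conv -> 4x4 pool -> 3x3 conv -> 2x2 pool -> dropout.
        self.chan_b = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout(0.5))
        self.head = nn.LazyLinear(2)   # fusion channel -> tumor / non-tumor

    def forward(self, x):              # x: batch x 4 x 26 x 26
        a = torch.flatten(self.chan_a(x), 1)
        b = torch.flatten(self.chan_b(x), 1)
        return self.head(torch.cat([a, b], dim=1))

model = DualChannelCNN()
# lr per step 2.2.2; the decay factor and momentum of the patent's Adam
# configuration would map onto a scheduler / the betas in this framework.
opt = torch.optim.Adam(model.parameters(), lr=0.005)
```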
Step 3: tumor identification and coarse localization of suspected areas in the MRI image, specifically:
Step 3.1: identification of suspected tumor areas. For an input data image I_n, starting from the (0, 0) point of the slice image, the set of rectangular areas Ω(P) is traversed with a step length of g_l/2 and each rectangular area p_(x,y) is examined, i.e. the convolutional neural network model classifies whether each rectangular area contains a tumor region. If the classification result is 0, no suspected tumor tissue was found in the input area block; if the classification result is 1, the input area block contains suspected tumor tissue, a marking matrix Mask is initialized, and the corresponding rectangular area in Mask is marked 1. Mask has size 240 × 240 with initial value 0 and is divided into rectangular areas of the same size as p_(x,y);
Step 3.2: mark the boundary of the suspected tumor area. First, the areas marked 1 in the marking matrix Mask are merged to form complete suspected tumor areas; then the corresponding p_(x,y) are found in I_n; finally, all p_(x,y) area coordinates are recorded in a queue L, whose stored data can be expressed as:
L = {L_1, L_2, ..., L_k}
L_i = {(x_1, y_1), (x_2, y_2), ..., (x_{n_i}, y_{n_i})}
where k denotes the total number of coarse localization areas in I_n, L_i is the set of boundary coordinates of the i-th coarse localization area, and n_i is the number of boundary coordinates of the i-th area;
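A sketch of the step 3.1-3.2 traversal, assuming a classify() wrapper around the trained dual-channel model; merging the flagged patches into region boundaries is simplified here to queueing the flagged patch corners:

```python
import numpy as np

def coarse_localize(slice_img, classify, g_l=26, step=13):
    # Traverse Omega(P) with step g_l/2; mark suspected patches in Mask
    # and queue their coordinates for the refinement stage.
    _, t, _ = slice_img.shape
    mask = np.zeros((t, t), dtype=np.uint8)        # marking matrix Mask
    L = []                                         # queue of suspected areas
    for x in range(0, t - g_l + 1, step):
        for y in range(0, t - g_l + 1, step):
            if classify(slice_img[:, x:x + g_l, y:y + g_l]) == 1:
                mask[x:x + g_l, y:y + g_l] = 1     # 1 = suspected tumor tissue
                L.append((x, y))
    return mask, L
```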
Step 4: determine the feature extraction area in the pixel search, specifically:
Since only low-order intensity features can be extracted from a single pixel, richer and more varied feature information must be extracted from a small local area centred on the pixel being processed. The feature extraction domain is the g_s × g_s sampling area G_(x,y) centred on the pixel (x, y) to be processed, i.e. G_(x,y) is the rectangular area determined by the points (x − g_s/2, y − g_s/2) and (x + g_s/2, y + g_s/2).
Experiments found that accuracy is highest when g_s = 10, so G_(x,y) has size 4 × 10 × 10;
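A sketch of the step 4 sampling region; clamping at the image border is an assumption added for pixels near the edge:

```python
def extraction_region(slice_img, x, y, g_s=10):
    # G_(x,y): the 4 x g_s x g_s block centred on the pixel (x, y).
    half = g_s // 2
    _, t, _ = slice_img.shape
    x0 = min(max(x - half, 0), t - g_s)            # corner (x - g_s/2, y - g_s/2)
    y0 = min(max(y - half, 0), t - g_s)
    return slice_img[:, x0:x0 + g_s, y0:y0 + g_s]  # size 4 x 10 x 10
```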
Step 5: construct the segmentation model, specifically:
Step 5.1: train a convolutional neural network to extract high-order features, specifically:
Step 5.1.1: construct the convolutional neural network. High-order features are extracted mainly with a convolutional neural network feature extractor structured as follows: two convolutional layers with feature map sizes of 16 × 10 × 10 and 32 × 5 × 5, each followed by a max-pooling layer, with feature map sizes of 16 × 5 × 5 and 32 × 2 × 2; finally, a fully connected layer gives the final output feature;
Step 5.1.2: training process. G_(x,y) rectangular areas are randomly extracted from the images for training, with a batch size of 16 and 20 data iterations, using an SGD optimizer (learning rate 0.005, learning-rate decay factor 0.0, momentum 0.9); the accuracy of the trained model must exceed 85%;
Step 5.1.3: feature extraction. Through the convolutional neural network, a 128-dimensional convolutional neural network feature F_CNN is extracted from G_(x,y);
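A PyTorch sketch of the step 5.1 extractor, reading the listed layer sizes as output feature-map shapes (an assumption); the 3 × 3 kernels with padding are chosen to reproduce those shapes:

```python
import torch
import torch.nn as nn

class FeatureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # 4x10x10 -> 16x10x10 -> 16x5x5
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # 16x5x5 -> 32x5x5 -> 32x2x2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.LazyLinear(128)       # final 128-dimensional feature F_CNN

    def forward(self, g_xy):               # g_xy: batch x 4 x 10 x 10
        return self.fc(torch.flatten(self.features(g_xy), 1))
```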
Step 5.2: the method comprises the following steps of (1) extracting image omics characteristics, specifically:
for G x,y In each layer of images for performing the cinematologyExtracting characteristics, extracting 104 characteristics of the imaged omics, and combining the characteristics of four sequences of Flair, T1ce and T2
Figure GDA0004001529500000041
And
Figure GDA0004001529500000042
forming an image omics feature set F radiomics
Figure GDA0004001529500000043
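A sketch of step 5.2-style extraction using scikit-image; only a few representative first-order statistics and GLCM second-order features are computed per sequence here, not the patent's full set of 104:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def radiomics_features(g_xy):
    # g_xy: 4 x 10 x 10 block, one 10x10 image per sequence (Flair/T1/T1ce/T2).
    feats = []
    for channel in g_xy:
        img = channel.astype(np.float64)
        feats += [img.mean(), img.std(), img.min(), img.max()]  # first-order
        # Quantize to a few gray levels, then compute GLCM texture features.
        q = np.digitize(img, np.linspace(img.min(), img.max() + 1e-9, 8)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0], levels=9, symmetric=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats.append(graycoprops(glcm, prop)[0, 0])         # second-order
    return np.asarray(feats)               # truncated stand-in for F_radiomics
```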
Step 5.3: feature fusion forms a feature set, and the image omics features and the convolutional neural network features are fused to improve the expression capability of the feature set on the focus information, so that the original overall feature set can be expressed as:
F=F radiomics +F CNN
wherein, F CNN Representing 128-dimensional convolutional neural network features, F radiomics Representing the image omics characteristics on four sequences of 4 × 104 dimensions, F comprises 544-dimensional characteristics in total;
step 5.4: l1 regularization feature selection, wherein an L1 regularization Lasso algorithm (Lasso) is adopted for feature selection, so that redundant features possibly existing between two feature sets are reduced, and a final feature set F fin Only 178 features are included, wherein the first-order feature and the second-order feature of the imagery omics comprise 32 and 82-dimensional convolution neural network features, and the final feature set F fin Can be expressed as:
F fin =F fin-radiomics +F fin-CNN
wherein, F fin-CNN Representing the feature of the convolutional neural network after feature selection, F fin-radiomics Representing image group characteristics on the four sequences after characteristic selection;
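A scikit-learn sketch of steps 5.3-5.4; the Lasso alpha is an assumption, as the patent does not state it:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import SelectFromModel

def select_features(F_radiomics, F_CNN, labels, alpha=0.01):
    # Concatenate the 4 x 104 radiomics features with the 128-d CNN features
    # into the 544-d set F, then keep features with non-zero Lasso weight.
    F = np.hstack([F_radiomics, F_CNN])            # per-sample 544-d feature set
    selector = SelectFromModel(Lasso(alpha=alpha)).fit(F, labels)
    # transform() yields F_fin (178-dimensional in the patent's experiments).
    return selector.transform(F), selector.get_support()
```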
Step 5.5: train the pixel-level classification model, specifically:
The classification model of this step uses the XGBoost algorithm with the following parameters: learning rate 0.1, maximum tree depth 6, 125 decision trees, minimum leaf-node sample weight sum 1, minimum loss reduction required for a node split 0.1, per-tree row subsampling ratio 0.8, per-tree feature subsampling ratio 0.8, positive-sample weight 1, and a minimum of 10 samples per leaf node; the classification accuracy after training must exceed 95%;
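A sketch of the step 5.5 configuration with the listed hyper-parameters mapped onto the xgboost API; the name mapping, and the omission of a direct equivalent for the 10-sample leaf minimum, are ours:

```python
from xgboost import XGBClassifier

clf = XGBClassifier(
    learning_rate=0.1, max_depth=6, n_estimators=125,
    min_child_weight=1,        # minimum leaf-node sample weight sum
    gamma=0.1,                 # minimum loss reduction required to split a node
    subsample=0.8,             # per-tree random sampling of rows
    colsample_bytree=0.8,      # per-tree random sampling of features
    scale_pos_weight=1,        # positive-sample weight
)
# clf.fit(F_fin, pixel_labels) would then train the pixel-level classifier.
```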
Step 6: refined segmentation of the suspected tumor area, specifically:
Step 6.1: if the suspected-tumor-area position queue L is not empty, randomly take a new coordinate point c(x, y) from L and make it the current processing point C; otherwise, end the calculation;
Step 6.2: if the current processing point and the pixel points in the eight adjacent directions have all been computed and judged, return to step 6.1; otherwise, obtain the feature extraction area G_(x,y) from the coordinates c(x, y) via step 4, and obtain the classification result of G_(x,y) through the segmentation model constructed in step 5;
Step 6.3: if G_(x,y) is classified as peritumoral edema, non-enhancing tumor or enhancing tumor, label the category of the centre pixel c(x, y) of the area with the corresponding label, select a new direction in turn from the eight directions 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315° as the moving direction, move one pixel in that direction to obtain a new pixel coordinate c, and return to step 6.2; if G_(x,y) is classified as non-tumor, label the centre pixel of the area as non-tumor, return the coordinate point to the current processing point C, i.e. c(x, y) = C, and return to step 6.2 (a search sketch follows);
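A sketch of the step 6 eight-direction search, reusing the extraction_region() helper sketched under step 4; classify_region() (a wrapper around the step 5 model returning integer labels, 0 = non-tumor) and the bounds handling are assumptions:

```python
DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def refine(slice_img, L, classify_region, I_seg):
    # I_seg: per-pixel label array recording the refined segmentation result.
    h, w = I_seg.shape
    while L:
        C = L.pop()                           # step 6.1: current processing point
        for dx, dy in DIRS:                   # step 6.2: the eight directions
            c = C
            while 0 <= c[0] < h and 0 <= c[1] < w:
                label = classify_region(extraction_region(slice_img, *c))
                I_seg[c] = label              # step 6.3: label the centre pixel
                if label == 0:                # non-tumor: fall back to C
                    break
                c = (c[0] + dx, c[1] + dy)    # keep moving in this direction
    return I_seg
```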
Step 7: refined marking of tumor boundaries and different tissues, specifically:
According to the classification label obtained for each pixel point, the pixel points (x, y) with the same classification result are aggregated to form the tumor boundary and the labels of the different tumor tissues, and I_seg records the refined segmentation result of the tumor tissue.
The beneficial effects of the invention are as follows: the invention performs automatic identification and segmentation of brain glioma in magnetic resonance imaging (MRI) images; it can effectively localize the tumor, extracts and integrates multi-dimensional image feature information, and achieves automatic tumor segmentation. In addition, the method offers high computational efficiency and recognition accuracy during identification and segmentation, saving substantial manpower and material resources. Verification shows that the method completes coarse tumor localization and refined tumor tissue segmentation on the BRATS2017 data set.
Drawings
FIG. 1 is a diagram of four sequence information of MRI images used in an embodiment of the present invention;
FIG. 2 shows the gridding of the MRI image according to an embodiment of the present invention;
FIG. 3 is a two-channel convolutional neural network used in an embodiment of the present invention;
FIG. 4 shows the result of a suspected tumor area in an embodiment of the present invention;
FIG. 5 is a diagram illustrating the determination of feature extraction areas according to an embodiment of the present invention;
FIG. 6 is a convolutional neural network used in an embodiment of the present invention;
FIG. 7 is a method of feature fusion and selection used in embodiments of the present invention;
FIG. 8 shows the final refined segmentation result in accordance with an embodiment of the present invention;
fig. 9 is an overall flow chart of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only intended to illustrate the technical solution of the invention more clearly and do not limit its protection scope.
Step 1: an example MRI data image is input; referring to fig. 1, the MRI images are organized in this manner;
Step 2: grid each layer of the MRI data image; see fig. 2 for the gridding result of each image layer;
Step 3: referring to fig. 3, construct the two-channel convolutional neural network and train the classification model.
Step 4: classify all equidistant rectangular areas of the gridded MRI image with the trained classification model, aggregate the results of all rectangular areas, and output the coarse localization result of the suspected tumor areas; see fig. 4 for the result;
Step 5: traverse the boundary coordinates of the suspected tumor areas and perform a pixel-level search on each pixel coordinate in eight directions. Referring to fig. 5, each time a new pixel coordinate is reached, a rectangular area is constructed centred on it and the feature extraction area is determined.
Step 6: referring to fig. 6, train the convolutional neural network. Use this network to extract convolutional neural network features from the rectangular area constructed in step 5, extract radiomics features from the corresponding rectangular area, perform feature selection and fusion on the extracted convolutional neural network and radiomics features with reference to fig. 7, and output the final feature vector;
Step 7: classify the feature extraction area using the XGBoost classification model and the fused feature vector, determining the classification label of the area and thus the label of its centre pixel.
Step 8: aggregate the pixel points of the same classification label to form the tumor boundaries and the labels of the different tumor tissues; see fig. 8 for the result.

Claims (1)

1. A two-stage computing device for brain glioma identification and segmentation in magnetic resonance imaging (MRI), the device implementing brain glioma identification and segmentation in magnetic resonance images using the following method steps:
Step 1: data preparation, specifically:
Case data are prepared in the following structure; the MRI data of one case contain images of four sequences, Flair, T1, T1ce and T2, which can be represented as:
I = {I_1, I_2, ..., I_N}
I_n = {I_n^Flair, I_n^T1, I_n^T1ce, I_n^T2}
where N denotes the total number of layers of the MRI image sequence and I_n is the n-th layer image set, of size 4 × 240 × 240; I_n^Flair, I_n^T1, I_n^T1ce and I_n^T2 respectively denote the Flair, T1, T1ce and T2 images in I_n;
Step 2: initialization of the rapid tumor identification method, specifically:
Step 2.1: MRI image gridding: the MRI image is divided into a number of equidistant rectangular areas in each of the Flair, T1, T1ce and T2 sequences. Let the rectangular area size be g_l × g_l; for an MRI image of size t × t, starting from the (0, 0) point of the original image, the original image is divided into equidistant rectangular areas with a step length of g_l/2, and an edge area smaller than a full rectangular area is zero-padded into a complete rectangular area. Let a rectangular area be p_(x,y), i.e. p_(x,y) is the rectangular area determined by the points (x, y) and (x + g_l, y + g_l); if the set of rectangular areas is Ω(P), the data of each rectangular area are the pixels of the four sequence images lying between (x, y) and (x + g_l, y + g_l);
an experimental search over g_l found that model localization accuracy is highest when g_l = 26, so p_(x,y) has size 4 × 26 × 26;
Step 2.2: train the rapid recognition network, specifically:
Step 2.2.1: construct a two-channel convolutional neural network in which the input of each channel is 4 × 26 × 26; channel A consists of a 13 × 13 convolutional layer and a Dropout layer with a drop rate of 0.5 and performs only a single convolution; channel B has a typical structure: a 5 × 5 convolutional layer, a 4 × 4 pooling layer, a 3 × 3 convolutional layer, a 2 × 2 pooling layer and a Dropout layer with a drop rate of 0.5; the several small convolution kernels in channel B allow the model to capture image detail, and finally the feature maps of the two channels are merged through a fusion channel to complete the classification model;
Step 2.2.2: train the model; each MRI image in the training sample is divided into tumor and non-tumor regions according to the segmentation standard, 4 × 26 × 26 rectangular areas are randomly sampled from the training data set for training, and each rectangular area receives its corresponding label; training uses a batch size of 16 and 20 data iterations with an Adam optimizer (learning rate 0.005, learning-rate decay factor 0.1, momentum 0.9);
Step 3: tumor identification and coarse localization of suspected areas in the MRI image, specifically:
Step 3.1: identification of suspected tumor areas; for an input data image I_n, starting from the (0, 0) point of the slice image, the set of rectangular areas Ω(P) is traversed with a step length of g_l/2 and each rectangular area p_(x,y) is examined, i.e. the convolutional neural network model classifies whether each rectangular area contains a tumor region; if the classification result is 0, no suspected tumor tissue was found in the input area block; if the classification result is 1, the input area block contains suspected tumor tissue, a marking matrix Mask is initialized, and the corresponding rectangular area in Mask is marked 1; Mask has size 240 × 240 with initial value 0 and is divided into rectangular areas of the same size as p_(x,y);
Step 3.2: mark the boundary of the suspected tumor area; first, the areas marked 1 in the marking matrix Mask are merged to form complete suspected tumor areas; then the corresponding p_(x,y) are found in I_n; finally, all p_(x,y) area coordinates are recorded in a queue L, whose stored data can be expressed as:
L = {L_1, L_2, ..., L_k}
L_i = {(x_1, y_1), (x_2, y_2), ..., (x_{n_i}, y_{n_i})}
where k denotes the total number of coarse localization areas in I_n, L_i is the set of boundary coordinates of the i-th coarse localization area, and n_i is the number of boundary coordinates of the i-th area;
Step 4: determine the feature extraction area in the pixel search, specifically:
since only low-order intensity features can be extracted from a single pixel, richer and more varied feature information must be extracted from a small local area centred on the pixel being processed; the feature extraction domain is the g_s × g_s sampling area G_(x,y) centred on the pixel (x, y) to be processed, i.e. G_(x,y) is the rectangular area determined by the points (x − g_s/2, y − g_s/2) and (x + g_s/2, y + g_s/2);
experiments found that accuracy is highest when g_s = 10, so G_(x,y) has size 4 × 10 × 10;
Step 5: construct the segmentation model, specifically:
Step 5.1: train a convolutional neural network to extract high-order features, specifically:
Step 5.1.1: construct the convolutional neural network; high-order features are extracted mainly with a convolutional neural network feature extractor structured as follows: two convolutional layers with feature map sizes of 16 × 10 × 10 and 32 × 5 × 5, each followed by a max-pooling layer, with feature map sizes of 16 × 5 × 5 and 32 × 2 × 2; finally, a fully connected layer gives the final output feature;
Step 5.1.2: training process; G_(x,y) rectangular areas are randomly extracted from the images for training, with a batch size of 16 and 20 data iterations, using an SGD optimizer (learning rate 0.005, learning-rate decay factor 0.0, momentum 0.9); the accuracy of the trained model must exceed 85%;
Step 5.1.3: feature extraction; through the convolutional neural network, a 128-dimensional convolutional neural network feature F_CNN is extracted from G_(x,y);
Step 5.2: radiomics feature extraction, specifically:
radiomics features are extracted from each layer image of G_(x,y); 104 radiomics features are extracted per sequence, and the features of the four sequences Flair, T1, T1ce and T2, namely F_radiomics^Flair, F_radiomics^T1, F_radiomics^T1ce and F_radiomics^T2, are combined to form the radiomics feature set F_radiomics:
F_radiomics = {F_radiomics^Flair, F_radiomics^T1, F_radiomics^T1ce, F_radiomics^T2}
Step 5.3: feature fusion forms a feature set, and the image omics features and the convolutional neural network features are fused to improve the expression capability of the feature set on the focus information, so that the original overall feature set can be expressed as:
F=F radiomics +F CNN
wherein, F CNN Representing 128-dimensional convolutional neural network features, F radiomics Representing the characteristics of the image group on four sequences with dimensions of 4 multiplied by 104, F contains 544-dimensional characteristics;
step 5.4: l1 regularization feature selection, wherein feature selection is performed by adopting an L1 regularization Lasso algorithm, so that redundant features possibly existing between two feature sets are reduced, and finally a feature set F fin Only comprising 178 features, wherein the first-order feature and the second-order feature of the imagery omics comprise 32 and 82-dimensional convolution neural network features, and the final feature set F fin Can be expressed as:
F fin =F fin-radiomics +F fin-CNN
wherein, F fin-CNN Representing features of the convolutional neural network after feature selection, F fin-radiomics Representing image group characteristics on the four sequences after characteristic selection;
Step 5.5: train the pixel-level classification model, specifically:
the classification model of this step uses the XGBoost algorithm with the following parameters: learning rate 0.1, maximum tree depth 6, 125 decision trees, minimum leaf-node sample weight sum 1, minimum loss reduction required for a node split 0.1, per-tree row subsampling ratio 0.8, per-tree feature subsampling ratio 0.8, positive-sample weight 1, and a minimum of 10 samples per leaf node; the classification accuracy after training must exceed 95%;
Step 6: refined segmentation of the suspected tumor area, specifically:
Step 6.1: if the suspected-tumor-area position queue L is not empty, randomly take a new coordinate point c(x, y) from L and make it the current processing point C; otherwise, end the calculation;
Step 6.2: if the current processing point and the pixel points in the eight adjacent directions have all been computed and judged, return to step 6.1; otherwise, obtain the feature extraction area G_(x,y) from the coordinates c(x, y) via step 4, and obtain the classification result of G_(x,y) through the segmentation model constructed in step 5;
Step 6.3: if G_(x,y) is classified as peritumoral edema, non-enhancing tumor or enhancing tumor, label the category of the centre pixel c(x, y) of the area with the corresponding label, select a new direction in turn from the eight directions 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315° as the moving direction, move one pixel in that direction to obtain a new pixel coordinate c, and return to step 6.2; if G_(x,y) is classified as non-tumor, label the centre pixel of the area as non-tumor, return the coordinate point to the current processing point C, i.e. c(x, y) = C, and return to step 6.2;
Step 7: refined marking of tumor boundaries and different tissues, specifically:
according to the classification label obtained for each pixel point, the pixel points (x, y) with the same classification result are aggregated to form the tumor boundary and the labels of the different tumor tissues, and I_seg records the refined segmentation result of the tumor tissue.
CN201910987988.8A 2019-10-17 2019-10-17 Two-stage calculation method for brain glioma identification and segmentation in nuclear magnetic resonance image Active CN112686902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910987988.8A CN112686902B (en) 2019-10-17 2019-10-17 Two-stage calculation method for brain glioma identification and segmentation in nuclear magnetic resonance image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910987988.8A CN112686902B (en) 2019-10-17 2019-10-17 Two-stage calculation method for brain glioma identification and segmentation in nuclear magnetic resonance image

Publications (2)

Publication Number Publication Date
CN112686902A CN112686902A (en) 2021-04-20
CN112686902B (en) 2023-02-03

Family

ID=75444463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910987988.8A Active CN112686902B (en) 2019-10-17 2019-10-17 Two-stage calculation method for brain glioma identification and segmentation in nuclear magnetic resonance image

Country Status (1)

Country Link
CN (1) CN112686902B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516653B (en) * 2021-08-11 2024-03-15 中科(厦门)数据智能研究院 Method for identifying glioma recurrence and necrosis through multi-feature fusion calculation
CN114677537B (en) * 2022-03-06 2024-03-15 西北工业大学 Glioma classification method based on multi-sequence magnetic resonance images
CN114332547B (en) * 2022-03-17 2022-07-08 浙江太美医疗科技股份有限公司 Medical object classification method and apparatus, electronic device, and storage medium
CN114863165B (en) * 2022-04-12 2023-06-16 南通大学 Vertebral bone density classification method based on fusion of image histology and deep learning features
CN117092255A (en) * 2023-10-19 2023-11-21 广州恒广复合材料有限公司 Quality detection and analysis method and device for quaternary ammonium salt in washing and caring composition


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018082084A1 (en) * 2016-11-07 2018-05-11 中国科学院自动化研究所 Brain tumor automatic segmentation method by means of fusion of full convolutional neural network and conditional random field

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101292871A (en) * 2007-04-25 2008-10-29 中国科学院自动化研究所 Method for specification extraction of magnetic resonance imaging brain active region based on pattern recognition
CN104008536A (en) * 2013-11-04 2014-08-27 无锡金帆钻凿设备股份有限公司 Multi-focus noise image fusion method based on CS-CHMT and IDPCNN
CN106296699A (en) * 2016-08-16 2017-01-04 电子科技大学 Cerebral tumor dividing method based on deep neural network and multi-modal MRI image
WO2019001208A1 (en) * 2017-06-28 2019-01-03 苏州比格威医疗科技有限公司 Segmentation algorithm for choroidal neovascularization in oct image
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
OHGAKI H et al., "Population-based studies on incidence, survival rates, and genetic alterations in astrocytic and oligodendroglial gliomas", Journal of Neuropathology & Experimental Neurology, 2005, Vol. 64, No. 6, pp. 479-489. *
程晓悦, "Real-time semantic segmentation based on dilated convolution smoothing and lightweight upsampling", Laser & Optoelectronics Progress, 2019-07-23, full text. *

Also Published As

Publication number Publication date
CN112686902A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN112686902B (en) Two-stage calculation method for brain glioma identification and segmentation in nuclear magnetic resonance image
CN109614985B (en) Target detection method based on densely connected feature pyramid network
CN107016405B (en) A kind of pest image classification method based on classification prediction convolutional neural networks
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
Wan et al. Accurate segmentation of overlapping cells in cervical cytology with deep convolutional neural networks
CN113076871B (en) Fish shoal automatic detection method based on target shielding compensation
CN110633758A (en) Method for detecting and locating cancer region aiming at small sample or sample unbalance
CN113505670B (en) Remote sensing image weak supervision building extraction method based on multi-scale CAM and super-pixels
CN109886271B (en) Image accurate segmentation method integrating deep learning network and improving edge detection
CN110008899B (en) Method for extracting and classifying candidate targets of visible light remote sensing image
CN104699781B (en) SAR image search method based on double-deck anchor figure hash
CN114092487A (en) Target fruit instance segmentation method and system
CN112037221B (en) Multi-domain co-adaptation training method for cervical cancer TCT slice positive cell detection model
CN112990282B (en) Classification method and device for fine-granularity small sample images
CN112528058B (en) Fine-grained image classification method based on image attribute active learning
CN106203496A (en) Hydrographic curve extracting method based on machine learning
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN114600155A (en) Weakly supervised multitask learning for cell detection and segmentation
CN114781514A (en) Floater target detection method and system integrating attention mechanism
CN108564582B (en) MRI brain tumor image automatic optimization method based on deep neural network
CN115546466A (en) Weak supervision image target positioning method based on multi-scale significant feature fusion
CN116310466A (en) Small sample image classification method based on local irrelevant area screening graph neural network
CN116883650A (en) Image-level weak supervision semantic segmentation method based on attention and local stitching
Deepa et al. FHGSO: Flower Henry gas solubility optimization integrated deep convolutional neural network for image classification
CN114943902A (en) Urban vegetation unmanned aerial vehicle remote sensing classification method based on multi-scale feature perception network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant