CN113052812B - AmoebaNet-based MRI prostate cancer detection method
- Publication number: CN113052812B (application CN202110301547.5A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T7/0012: Biomedical image inspection
- G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/24: Classification techniques
- G06N3/045: Combinations of networks
- G06N3/08: Learning methods
- G06T3/04: Context-preserving transformations, e.g. by using an importance map
- G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/90: Dynamic range modification of images or parts thereof
- G06T2207/10088: Magnetic resonance imaging [MRI]
- G06T2207/20081: Training; Learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/30081: Prostate
Abstract
The invention belongs to the technical field of image recognition and relates to an AmoebaNet-based MRI prostate cancer detection method comprising the following steps: data set construction, data preprocessing, data set division, model construction, model training and model evaluation. The data set construction reads the picture data of the PROSTATEx data set and the corresponding labels, and stores both in matrix form; the data preprocessing enhances and scales the data; the data set division recombines the data into several training-set/test-set combinations by K-fold cross-validation; the model construction establishes a multi-scale nonlinear deep learning network based on AmoebaNet, adds 1 × 1 convolutions to raise the data dimensionality, and uses a fully connected layer for the final classification of the features; the model training trains the model on each of the combinations; and the model identification effect is evaluated with accuracy, recall and F1-Score.
Description
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to an AmoebaNet-based MRI prostate cancer detection method.
Background
MRI is a highly reliable method for detecting prostate cancer, but accurate interpretation of MRI images demands considerable expertise and experience from radiologists and takes a great deal of time, so MRI-based diagnosis is currently applied at a low rate.
Shortcomings of the prior art: transrectal ultrasound-guided needle biopsy is currently the most reliable method for diagnosing prostate cancer; however, this conventional approach not only causes the patient great pain, but the sampled region may also miss the lesion, delaying treatment of the disease, or the procedure may lead to over-diagnosis and over-treatment.
Disclosure of Invention
To address the technical problem that image recognition models see limited practical application in this setting, the invention provides an AmoebaNet-based MRI prostate cancer detection method with high recognition efficiency, low cost and strong generalization capability.
In order to solve the technical problems, the invention adopts the technical scheme that:
an AmoebaNet-based MRI prostate cancer detection method comprising the steps of:
s100, data set construction: reading the picture data of the PROSTATEx data set and the corresponding labels, and storing both in matrix form;
s200, data preprocessing: enhancing and scaling the data;
s300, data set division: recombining the data set into a plurality of training-set and test-set combinations by a K-fold cross-validation method;
s400, model construction: establishing a multi-scale nonlinear deep learning network based on AmoebaNet, adding 1 × 1 convolutions to raise the data dimensionality, and finally classifying the features with a fully connected layer;
s500, model training: training the network by adopting specified parameters, and training the model by using a plurality of combinations obtained by K-fold cross validation;
s600, model evaluation: and evaluating the model identification effect by adopting the accuracy, the recall rate and F1-Score.
In the S100 data set construction, the PROSTATEx competition data set is used: 84 images containing prostate cancer lesion features and 280 images of benign cases are read. All picture data are stored in npy format as a matrix, and their labels are likewise stored in npy format as a matrix. The data matrix has format (n, x, y), where n is the data index and x and y are the two-dimensional pixel matrix of each image; the label matrix has format (n, l), where n is the label index and l is the data label, with 1 denoting a prostate cancer picture and 0 a normal picture. Labels correspond one-to-one with the data.
In the S200 data preprocessing, the data are enhanced by two transformations, mirror transformation and contrast transformation. The mirror transformation swaps corresponding pixel points on either side of the image's central axis. The contrast transformation first normalizes the grey-level data as $x'_i = \frac{x_i - \min(X)}{\max(X) - \min(X)}$, where $x'_i$ is the normalized result, $x_i$ is the pixel point to be processed and $X$ is the set of all values of the image; the normalized data are multiplied by coefficients 0.8, 0.9, 1.1 and 1.2 respectively, normalized again after the calculation, and multiplied by 255 to restore the image. The enhanced data and the original data are mixed and randomly shuffled, and all data are scaled to 400 × 400.
In the step S300 data set division, a K-fold cross-validation method is used to divide the data into training, validation and test sets with K = 5: all data are evenly divided into 5 folds, numbered a, b, c, d and e. For each round of model training, 4 folds are selected as the training set for model parameter training and the remaining fold is used as the test set to evaluate the model identification effect.
In the S400 model construction, the model is built on AmoebaNet-A. The data first pass through two 1 × 1 convolutions, which raise them to 8 channels, and features are then extracted by AmoebaNet modules. Each AmoebaNet-A module consists of 5 calculation blocks. The first block consists of a 3 × 3 average pooling layer and a 3 × 3 maximum pooling layer, both taking the previous layer's input; their results are combined by element-wise addition (ADD) to give feature map1. The second block performs one 3 × 3 average pooling on the current layer's input and ADDs the pooled result with the previous layer's input to give feature map2. The third block performs a 5 × 5 convolution on feature map1 and a 3 × 3 convolution on the current layer's input, and ADDs the two results to give feature map3. The fourth block performs one 3 × 3 convolution on feature map1, and the result is ADDed with the current layer's input to give feature map4. The fifth block performs one 3 × 3 average pooling on feature map3 and one 3 × 3 convolution on the previous layer's input, and ADDs the two results to give feature map5. Finally, feature map2, feature map4 and feature map5 are concatenated to form the features extracted by the current layer. After the features pass through 5 AmoebaNet modules, a fully connected layer completes the final classification.
In the S500 model training, after the network is built its parameters are trained with the training set data. SGD is adopted as the optimizer with an initial learning rate of 0.01 decayed by 50% every 100 epochs, a maximum of 500 training epochs, a batch size of 32 and the cross-entropy loss function. Training stops when the model loss has not decreased for 20 consecutive epochs, and the model is saved. The model is trained on each of the 5 data groups obtained by K-fold cross-validation, giving 5 parameter models, whose prediction results on the corresponding validation sets are evaluated and compared. If the models perform similarly, training is considered complete, the model is saved and model building is finished; if the 5 models differ markedly in performance, the data set is re-divided by K-fold cross-validation and the learning rate is adjusted for retraining until the optimal model is obtained.
In the S600 model evaluation, the trained model performs prostate cancer MRI classification prediction on the test set data; the prediction results are compared with the corresponding labels to evaluate the recognition effect. The evaluation measure is F1-Score, and the higher the F1-Score, the better the recognition effect. The formulas are:

$$F1 = \frac{2AR}{A + R}, \qquad A = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$$

where F1 is the F1-Score, A is the accuracy rate, R is the recall rate, TP is the number of positive samples predicted as positive, FP is the number of negative samples predicted as positive, FN is the number of positive samples predicted as negative, and TN is the number of negative samples predicted as negative.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, through a data enhancement method, the effective expansion of the MRI image data of the front face adenocarcinoma is realized, so that the identification performance of a deep network is ensured, the generalization capability and the robustness of the network are enhanced, an intelligent network is constructed on the basis of the AmoebaNet, the network can efficiently identify the MRI image without manual participation, the detection flow of the front face adenocarcinoma is simplified, and the detection of the prostate cancer is greatly accelerated.
Drawings
FIG. 1 is a flow chart of the main steps of the present invention;
FIG. 2 is a diagram of the network architecture of the present invention;
FIG. 3 is a block diagram of an AmoebaNet module of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An AmoebaNet-based MRI prostate cancer detection method, as shown in fig. 1, comprising the steps of:
s100, data set construction: reading the picture data of the PROSTATEx data set and the corresponding labels, and storing both in matrix form;
s200, data preprocessing: enhancing and scaling the data;
s300, data set division: recombining the data set into a plurality of training-set and test-set combinations by a K-fold cross-validation method;
s400, model construction: establishing a multi-scale nonlinear deep learning network based on AmoebaNet, adding 1 × 1 convolutions to raise the data dimensionality, and finally classifying the features with a fully connected layer;
s500, model training: training the network by adopting specified parameters, and training the model by using a plurality of combinations obtained by K-fold cross validation;
s600, model evaluation: and evaluating the model identification effect by adopting the accuracy, the recall rate and F1-Score.
Further, in the step S100 data set construction, the PROSTATEx competition data set is used: 84 images containing prostate cancer lesion features and 280 images of benign cases are read. All picture data are stored in npy format as a matrix, and their labels are likewise stored in npy format as a matrix. The data matrix has format (n, x, y), where n is the data index and x and y are the two-dimensional pixel matrix of each image; the label matrix has format (n, l), where n is the label index and l is the data label, with 1 denoting a prostate cancer image and 0 a normal image. Labels correspond one-to-one with the data.
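As an illustration only (the helper and file names below are assumptions, not from the patent), this storage step might look like the following in Python/NumPy:

```python
import numpy as np

def build_dataset(cancer_slices, benign_slices):
    """Stack the 84 lesion slices and 280 benign slices into one data matrix
    plus a one-to-one label matrix, then persist both in npy format."""
    images = np.stack(list(cancer_slices) + list(benign_slices)).astype(np.float32)
    labels = np.array([1] * len(cancer_slices) + [0] * len(benign_slices),
                      dtype=np.int64).reshape(-1, 1)   # 1 = prostate cancer, 0 = normal
    np.save("prostatex_images.npy", images)  # data matrix, format (n, x, y)
    np.save("prostatex_labels.npy", labels)  # label matrix, format (n, l)
    return images, labels
```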
Further, in the step S200 data preprocessing: because the data volume of the data set is small, the model is prone to overfitting and cannot reach the optimal recognition effect, and data collected by different magnetic resonance devices also differ. Therefore, to strengthen the generalization capability and robustness of the model, the data are enhanced by two transformations, mirror transformation and contrast transformation. The mirror transformation swaps corresponding pixel points on either side of the image's central axis. The contrast transformation first normalizes the grey-level data as $x'_i = \frac{x_i - \min(X)}{\max(X) - \min(X)}$, where $x'_i$ is the normalized result, $x_i$ is the pixel point to be processed and $X$ is the set of all values of the image; the normalized data are multiplied by coefficients 0.8, 0.9, 1.1 and 1.2 respectively, normalized again after the calculation, and multiplied by 255 to restore the image. The enhanced data and the original data are mixed and randomly shuffled, and all data are scaled to 400 × 400.
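A minimal NumPy sketch of this augmentation, under the assumption that the normalization is min-max scaling (which the multiply-by-255 restoration step implies); the function names are illustrative and the final 400 × 400 resize is left to an image library:

```python
import numpy as np

def minmax(img):
    # Min-max normalization x' = (x - min(X)) / (max(X) - min(X));
    # the small epsilon guards against a constant image.
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def augment(images):
    """images: (n, h, w) grey-level array. Returns originals, a mirror about
    the vertical central axis, and four contrast variants, mixed and shuffled."""
    variants = [images, images[:, :, ::-1]]            # mirror transformation
    for k in (0.8, 0.9, 1.1, 1.2):                     # contrast coefficients
        scaled = np.stack([minmax(minmax(img) * k) * 255.0 for img in images])
        variants.append(scaled)
    mixed = np.concatenate(variants, axis=0)
    return mixed[np.random.permutation(len(mixed))]    # random shuffle
```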
Further, in the step S300 data set division, a K-fold cross-validation method is adopted to divide the data into training, validation and test sets with K = 5: all data are evenly divided into 5 folds, numbered a, b, c, d and e. For each round of model training, 4 folds are selected as the training set for model parameter training and the remaining fold is used as the test set to evaluate the model identification effect.
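A sketch of how the 5-fold scheme could be realized; the generator below is an assumed implementation, not code from the patent:

```python
import numpy as np

def five_fold_splits(n_samples, k=5, seed=0):
    """Evenly divide sample indices into k folds; each round yields 4 folds
    for training and the remaining fold for evaluation."""
    order = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(order, k)                   # folds a, b, c, d, e
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]                          # (training set, held-out fold)
```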
Further, in the step S400 model construction, as shown in fig. 2: because the features of prostate cancer in MRI images are weak, the model is built on AmoebaNet-A, whose evolved architecture can extract more effective features from the data. The network first passes the data through two 1 × 1 convolutions, raising them to 8 channels, and then extracts features with AmoebaNet modules. As shown in fig. 3, each AmoebaNet-A module consists of 5 calculation blocks. The first block consists of a 3 × 3 average pooling layer and a 3 × 3 maximum pooling layer, both taking the previous layer's input; their results are combined by element-wise addition (ADD) to give feature map1. The second block performs one 3 × 3 average pooling on the current layer's input and ADDs the pooled result with the previous layer's input to give feature map2. The third block performs a 5 × 5 convolution on feature map1 and a 3 × 3 convolution on the current layer's input, and ADDs the two results to give feature map3. The fourth block performs one 3 × 3 convolution on feature map1, and the result is ADDed with the current layer's input to give feature map4. The fifth block performs one 3 × 3 average pooling on feature map3 and one 3 × 3 convolution on the previous layer's input, and ADDs the two results to give feature map5. Finally, feature map2, feature map4 and feature map5 are concatenated to form the features extracted by the current layer. After the features pass through 5 AmoebaNet modules, a fully connected layer completes the final classification.
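A PyTorch sketch of one such module, reading "prev" as the previous layer's input and "cur" as the current layer's input; the final 1 × 1 projection that lets modules stack is an assumption, since the patent does not spell out the channel bookkeeping between modules:

```python
import torch
import torch.nn as nn

class AmoebaCellA(nn.Module):
    """One five-block AmoebaNet-A module; all ops keep spatial size (stride 1)."""
    def __init__(self, c):
        super().__init__()
        self.avg = nn.AvgPool2d(3, stride=1, padding=1)
        self.max = nn.MaxPool2d(3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(c, c, 5, padding=2)
        self.conv3a = nn.Conv2d(c, c, 3, padding=1)
        self.conv3b = nn.Conv2d(c, c, 3, padding=1)
        self.conv3c = nn.Conv2d(c, c, 3, padding=1)
        self.proj = nn.Conv2d(3 * c, c, 1)  # assumed: fold 3c channels back to c

    def forward(self, prev, cur):
        map1 = self.avg(prev) + self.max(prev)       # block 1: two poolings, ADD
        map2 = self.avg(cur) + prev                  # block 2
        map3 = self.conv5(map1) + self.conv3a(cur)   # block 3
        map4 = self.conv3b(map1) + cur               # block 4
        map5 = self.avg(map3) + self.conv3c(prev)    # block 5
        out = torch.cat([map2, map4, map5], dim=1)   # concatenate the unused maps
        return self.proj(out)
```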
Further, in the step S500 model training, after the network is built its parameters are trained with the training set data. SGD is adopted as the optimizer with an initial learning rate of 0.01 decayed by 50% every 100 epochs, a maximum of 500 training epochs, a batch size of 32 and the cross-entropy loss function. Training stops when the model loss has not decreased for 20 consecutive epochs, and the model is saved. The model is trained on each of the 5 data groups obtained by K-fold cross-validation, giving 5 parameter models, whose prediction results on the corresponding validation sets are evaluated and compared. If the models perform similarly, training is considered complete, the model is saved and model building is finished; if the 5 models differ markedly in performance, the data set is re-divided by K-fold cross-validation and the learning rate is adjusted for retraining until the optimal model is obtained.
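A training-loop sketch with the stated hyper-parameters (SGD, initial learning rate 0.01 halved every 100 epochs, at most 500 epochs, cross-entropy loss, early stopping after 20 stagnant epochs); the data loaders, device handling and checkpoint file name are assumptions:

```python
import torch
import torch.nn as nn

def train_one_fold(model, train_loader, val_loader, device="cpu"):
    model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100, gamma=0.5)
    loss_fn = nn.CrossEntropyLoss()
    best, stale = float("inf"), 0
    for epoch in range(500):                       # at most 500 epochs
        model.train()
        for x, y in train_loader:                  # loaders batch at size 32
            opt.zero_grad()
            loss = loss_fn(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
        sched.step()                               # halve the lr every 100 epochs
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x.to(device)), y.to(device)).item()
                      for x, y in val_loader)
        if val < best:
            best, stale = val, 0
            torch.save(model.state_dict(), "best_fold.pt")
        else:
            stale += 1
            if stale >= 20:                        # loss flat for 20 epochs: stop
                break
```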
Further, in the step S600 model evaluation, the trained model performs prostate cancer MRI classification prediction on the test set data; the prediction results are compared with the corresponding labels to evaluate the recognition effect. The evaluation measure is F1-Score, and the higher the F1-Score, the better the recognition effect. The formulas are:

$$F1 = \frac{2AR}{A + R}, \qquad A = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$$

where F1 is the F1-Score, A is the accuracy rate, R is the recall rate, TP is the number of positive samples predicted as positive, FP is the number of negative samples predicted as positive, FN is the number of positive samples predicted as negative, and TN is the number of negative samples predicted as negative.
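These metrics reduce to a few lines; the sketch below counts the confusion-matrix cells directly and assumes labels and predictions are 0/1 NumPy arrays:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 = 2AR / (A + R) with A = TP/(TP+FP) and R = TP/(TP+FN)."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    a = tp / (tp + fp) if tp + fp else 0.0   # accuracy rate (precision-style)
    r = tp / (tp + fn) if tp + fn else 0.0   # recall rate
    return 2 * a * r / (a + r) if a + r else 0.0
```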
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are included in the scope of the present invention.
Claims (6)
1. An AmoebaNet-based MRI prostate cancer detection method, characterized in that it comprises the following steps:
s100, data set construction: reading the picture data of the PROSTATEx data set and the corresponding labels, and storing both in matrix form;
s200, data preprocessing: enhancing and scaling the data;
s300, data set division: recombining the data set into a plurality of training-set and test-set combinations by a K-fold cross-validation method;
s400, model construction: establishing a multi-scale nonlinear deep learning network based on AmoebaNet, adding 1 × 1 convolutions to raise the data dimensionality, and finally classifying the features with a fully connected layer;
in the S400 model construction, the model is built on AmoebaNet-A: the data first pass through two 1 × 1 convolutions, which raise them to 8 channels, and features are then extracted by AmoebaNet modules; each AmoebaNet-A module consists of 5 calculation blocks, wherein the first block consists of a 3 × 3 average pooling layer and a 3 × 3 maximum pooling layer, both taking the previous layer's input, whose results are combined by element-wise addition (ADD) to give feature map1; the second block performs one 3 × 3 average pooling on the current layer's input and ADDs the pooled result with the previous layer's input to give feature map2; the third block performs a 5 × 5 convolution on feature map1 and a 3 × 3 convolution on the current layer's input and ADDs the two results to give feature map3; the fourth block performs one 3 × 3 convolution on feature map1 and ADDs the result with the current layer's input to give feature map4; the fifth block performs one 3 × 3 average pooling on feature map3 and one 3 × 3 convolution on the previous layer's input and ADDs the two results to give feature map5; finally, feature map2, feature map4 and feature map5 are concatenated to form the features extracted by the current layer, and after the features pass through 5 AmoebaNet modules a fully connected layer completes the final classification;
s500, model training: training the network by adopting specified parameters, and training the model by using a plurality of combinations obtained by K-fold cross validation;
s600, model evaluation: and evaluating the model identification effect by adopting the accuracy, the recall rate and F1-Score.
2. The AmoebaNet-based MRI prostate cancer detection method of claim 1, wherein in the S100 data set construction, the PROSTATEx competition data set is used: 84 images containing prostate cancer lesion features and 280 images of benign cases are read; all picture data are stored in npy format as a matrix, and their labels are likewise stored in npy format as a matrix; the data matrix has format (n, x, y), where n is the data index and x and y are the two-dimensional pixel matrix of each image; the label matrix has format (n, l), where n is the label index and l is the data label, with 1 denoting a prostate cancer picture and 0 a normal picture; labels correspond one-to-one with the data.
3. The AmoebaNet-based MRI prostate cancer detection method of claim 1, wherein in the S200 data preprocessing, the data are enhanced by two transformations, mirror transformation and contrast transformation; the mirror transformation swaps corresponding pixel points on either side of the image's central axis; the contrast transformation first normalizes the grey-level data as $x'_i = \frac{x_i - \min(X)}{\max(X) - \min(X)}$, where $x'_i$ is the normalized result, $x_i$ is the pixel point to be processed and $X$ is the set of all values of the image; the normalized data are multiplied by coefficients 0.8, 0.9, 1.1 and 1.2 respectively, normalized again after the calculation, and multiplied by 255 to restore the image; the enhanced data and the original data are mixed and randomly shuffled, and all data are scaled to 400 × 400.
4. The AmoebaNet-based MRI prostate cancer detection method of claim 1, wherein in the S300 data set division, a K-fold cross-validation method is used to divide the data into training, validation and test sets with K = 5: all data are evenly divided into 5 folds, numbered a, b, c, d and e; for each round of model training, 4 folds are selected as the training set for model parameter training and the remaining fold is used as the test set to evaluate the model identification effect.
5. The AmoebaNet-based MRI prostate cancer detection method of claim 1, wherein in the S500 model training, after the network is built its parameters are trained with the training set data; SGD is adopted as the optimizer with an initial learning rate of 0.01 decayed by 50% every 100 epochs, a maximum of 500 training epochs, a batch size of 32 and the cross-entropy loss function; training stops when the model loss has not decreased for 20 consecutive epochs, and the model is saved; the model is trained on each of the 5 data groups obtained by K-fold cross-validation, giving 5 parameter models, whose prediction results on the corresponding validation sets are evaluated and compared; if the models perform similarly, training is considered complete, the model is saved and model building is finished; if the 5 models differ markedly in performance, the data set is re-divided by K-fold cross-validation and the learning rate is adjusted for retraining until the optimal model is obtained.
6. The AmoebaNet-based MRI prostate cancer detection method of claim 1, wherein in the S600 model evaluation, the trained model performs prostate cancer MRI classification prediction on the test set data; the prediction results are compared with the corresponding labels to evaluate the recognition effect; the evaluation measure is F1-Score, the higher the F1-Score the better the recognition effect, with formulas $F1 = \frac{2AR}{A + R}$, $A = \frac{TP}{TP + FP}$ and $R = \frac{TP}{TP + FN}$, where F1 is the F1-Score, A is the accuracy rate, R is the recall rate, TP is the number of positive samples predicted as positive, FP is the number of negative samples predicted as positive, FN is the number of positive samples predicted as negative, and TN is the number of negative samples predicted as negative.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110301547.5A CN113052812B (en) | 2021-03-22 | 2021-03-22 | AmoebaNet-based MRI prostate cancer detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110301547.5A CN113052812B (en) | 2021-03-22 | 2021-03-22 | AmoebaNet-based MRI prostate cancer detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113052812A CN113052812A (en) | 2021-06-29 |
CN113052812B true CN113052812B (en) | 2022-06-24 |
Family
ID=76514224
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110301547.5A Active CN113052812B (en) | 2021-03-22 | 2021-03-22 | AmoebaNet-based MRI prostate cancer detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113052812B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113254435B (en) * | 2021-07-15 | 2021-10-29 | 北京电信易通信息技术股份有限公司 | Data enhancement method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105718952A (en) * | 2016-01-22 | 2016-06-29 | 武汉科恩斯医疗科技有限公司 | Method for focus classification of sectional medical images by employing deep learning network |
CN107256544A (en) * | 2017-04-21 | 2017-10-17 | 南京天数信息科技有限公司 | A kind of prostate cancer image diagnosing method and system based on VCG16 |
CN111325713A (en) * | 2020-01-21 | 2020-06-23 | 浙江省北大信息技术高等研究院 | Wood defect detection method, system and storage medium based on neural network |
CN111414815A (en) * | 2020-03-04 | 2020-07-14 | 清华大学深圳国际研究生院 | Pedestrian re-identification network searching method and pedestrian re-identification method |
WO2020176435A1 (en) * | 2019-02-25 | 2020-09-03 | Google Llc | Systems and methods for producing an architecture of a pyramid layer |
CN111652225A (en) * | 2020-04-29 | 2020-09-11 | 浙江省北大信息技术高等研究院 | Non-invasive camera reading method and system based on deep learning |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109086799A (en) * | 2018-07-04 | 2018-12-25 | 江苏大学 | A kind of crop leaf disease recognition method based on improvement convolutional neural networks model AlexNet |
CN110175671B (en) * | 2019-04-28 | 2022-12-27 | 华为技术有限公司 | Neural network construction method, image processing method and device |
CN112215332B (en) * | 2019-07-12 | 2024-05-14 | 华为技术有限公司 | Searching method, image processing method and device for neural network structure |
US20210073612A1 (en) * | 2019-09-10 | 2021-03-11 | Nvidia Corporation | Machine-learning-based architecture search method for a neural network |
CN110782015B (en) * | 2019-10-25 | 2024-10-15 | 腾讯科技(深圳)有限公司 | Training method, device and storage medium for network structure optimizer of neural network |
CN112257794B (en) * | 2020-10-27 | 2022-10-28 | 东南大学 | YOLO-based lightweight target detection method |
- 2021-03-22: Application CN202110301547.5A filed (CN); patent CN113052812B (en), status Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105718952A (en) * | 2016-01-22 | 2016-06-29 | 武汉科恩斯医疗科技有限公司 | Method for focus classification of sectional medical images by employing deep learning network |
CN107256544A (en) * | 2017-04-21 | 2017-10-17 | 南京天数信息科技有限公司 | A kind of prostate cancer image diagnosing method and system based on VCG16 |
WO2020176435A1 (en) * | 2019-02-25 | 2020-09-03 | Google Llc | Systems and methods for producing an architecture of a pyramid layer |
CN111325713A (en) * | 2020-01-21 | 2020-06-23 | 浙江省北大信息技术高等研究院 | Wood defect detection method, system and storage medium based on neural network |
CN111414815A (en) * | 2020-03-04 | 2020-07-14 | 清华大学深圳国际研究生院 | Pedestrian re-identification network searching method and pedestrian re-identification method |
CN111652225A (en) * | 2020-04-29 | 2020-09-11 | 浙江省北大信息技术高等研究院 | Non-invasive camera reading method and system based on deep learning |
Non-Patent Citations (4)
Title |
---|
Regularized Evolution for Image Classifier Architecture Search; Esteban Real et al.; The Thirty-Third AAAI Conference on Artificial Intelligence; 2019-12-31; pp. 4780-4789 *
Perception and computing for autonomous driving based on multimodal fusion; Zhang Yanyong et al.; Journal of Computer Research and Development; 2020-09-01 (No. 09); pp. 5-23 *
A deep-learning-based classification method for gastric cancer pathology images; Zhang Zezhong et al.; Computer Science; 2018-11-15; pp. 273-278 *
Classification of prostate magnetic resonance images based on cascaded convolutional neural networks; Liu Kewen et al.; Chinese Journal of Magnetic Resonance; 2020-06-09 (No. 02); pp. 29-38 *
Also Published As
Publication number | Publication date |
---|---|
CN113052812A (en) | 2021-06-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |