CN109919187B - Method for classifying thyroid follicular picture by using bagging fine tuning CNN - Google Patents


Info

Publication number
CN109919187B
CN109919187B (application CN201910079367.XA)
Authority
CN
China
Prior art keywords
picture
model
fine
tuning
thyroid follicular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910079367.XA
Other languages
Chinese (zh)
Other versions
CN109919187A (en)
Inventor
杨柏林
闫早明
董芳芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN201910079367.XA priority Critical patent/CN109919187B/en
Publication of CN109919187A publication Critical patent/CN109919187A/en
Application granted granted Critical
Publication of CN109919187B publication Critical patent/CN109919187B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a method for classifying ultrasound images of thyroid follicular nodules and ultrasound images of thyroid follicular carcinoma, based on a bagging ensemble of fine-tuned Inception V3 and Inception-ResNet V2 networks. The method takes segmented ultrasound images as the data set, fine-tunes and saves 9 models via the bagging method, then feeds the test set into each of the 9 saved models and takes the average of the 9 models' output scores as the final classification criterion, thereby classifying the two image types. The invention can produce classification scores for the two image types directly from ultrasound images, with high accuracy.

Description

Method for classifying thyroid follicular picture by using bagging fine tuning CNN
Technical Field
The invention belongs to the technical field of image recognition and classification, and in particular relates to a method for classifying ultrasound images of thyroid follicular nodules and ultrasound images of thyroid follicular carcinoma by fine-tuning Inception V3 and Inception-ResNet V2 with bagging and then ensembling the fine-tuned models.
Background
The basic idea of bagging is that, given a weak classifier whose individual accuracy is modest, combining the votes of many weak classifiers improves the final accuracy. Research on convolutional neural networks began in the 1980s and 1990s, with LeNet-5 among the earliest models. Modern convolutional neural networks have developed rapidly and achieve good recognition rates on images, but they have a drawback: if the data set is small, the network cannot be trained to good results. Two problems arise here. First, ultrasound images of thyroid follicular carcinoma are scarce. Second, images of thyroid follicular carcinoma look very similar to images of thyroid follicular nodules, which makes classification challenging. In 2017, Gachon University in Korea published a paper that enlarged the data set by sub-sampling images and then trained an AlexNet convolutional neural network to classify the two image types, but the classification results were unsatisfactory.
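The voting intuition above can be sketched with a small simulation (illustrative only, not part of the patent): if each weak classifier is independently correct 60% of the time, a 9-way majority vote is correct noticeably more often.

```python
import random

random.seed(0)

def weak_vote(p_correct=0.6):
    """One weak classifier: True when it predicts correctly."""
    return random.random() < p_correct

def majority_correct(n_classifiers=9, p_correct=0.6):
    """Majority vote over independent weak classifiers."""
    votes = sum(weak_vote(p_correct) for _ in range(n_classifiers))
    return votes > n_classifiers // 2

trials = 10000
single = sum(weak_vote() for _ in range(trials)) / trials
ensemble = sum(majority_correct() for _ in range(trials)) / trials
print(f"single: {single:.3f}, 9-vote ensemble: {ensemble:.3f}")
```

For independent classifiers the 9-vote accuracy is the binomial tail P(X ≥ 5; n=9, p=0.6) ≈ 0.73, clearly above the 0.6 of a single classifier.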
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for classifying thyroid follicular images using bagging-based fine-tuning of CNNs.
The method mainly performs bagging fine-tuning on a small data set and saves the model multiple times; the two classes of segmented images are fed into the saved models; finally, the average of the per-class scores produced by the saved models is taken as the final score, and the class with the higher score is the more likely class.
The technical scheme adopted by the invention for solving the technical problem is as follows:
the invention comprises the following steps:
the method comprises the following steps: and (5) dividing the picture. The picture is segmented using a GAC model that changes the gradient boundary stopping function to a phase-consistent boundary stopping function.
Step two: and dividing the divided picture data into a training set and a picture set to be classified, and respectively cutting the picture into small pieces of 299 pixels by 299 pixels in the training set and the picture set to be classified. Training with patches in the training set. The small pieces in the set of pictures to be sorted are used for testing.
Step three: fine-tune the parameters of Inception V3 using the bagging method, specifically:
3-1: in each fine-tuning batch, 59 patches are randomly selected from the training set as the fine-tuning batch set, comprising 24 thyroid follicular carcinoma patches and 35 thyroid follicular nodule patches.
3-2: layers to fine-tune: via transfer learning, all other weight parameters of the network model are frozen, and only the weights of the Logits and AuxLogits layers are trained.
3-3: fine-tuning hyperparameters: the learning rate is 0.001, and L2 regularization is used to avoid overfitting. The RMSProp gradient descent method is adopted. The activation function is ReLU. Fine-tuning iterates for 250 to 300 batches.
Step four: Inception V3 is saved as model V3-1 after 300 batches of iterative fine-tuning. After Inception V3 is fine-tuned several more times as in step three, it is saved as models V3-2, V3-3, V3-4, and V3-5 respectively.
Step five: fine-tune Inception-ResNet V2 multiple times as in step three and save the fine-tuned models; after 4 rounds of fine-tuning they are saved as models V2-1, V2-2, V2-3, and V2-4.
Step six: feed the patches from the set of images to be classified into each of the 9 saved models; each model outputs two scores. One score is the likelihood that the patch is an ultrasound image of thyroid follicular carcinoma, and the other is the likelihood that it is an ultrasound image of a thyroid follicular nodule. The same-class scores of the 9 results are summed and divided by 9 to obtain an average, which serves as the final result for deciding to which class the ultrasound image belongs.
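The score averaging of step six amounts to a per-class mean over the 9 models' outputs; a sketch with hypothetical score pairs:

```python
import numpy as np

def ensemble_scores(model_outputs: np.ndarray) -> np.ndarray:
    """Average the per-class scores of the saved models.
    model_outputs has shape (9, 2): one (carcinoma, nodule)
    score pair per model."""
    return model_outputs.mean(axis=0)

# Hypothetical scores from the 9 models for one patch.
scores = np.array([
    [0.81, 0.19], [0.74, 0.26], [0.68, 0.32],
    [0.55, 0.45], [0.79, 0.21], [0.62, 0.38],
    [0.71, 0.29], [0.66, 0.34], [0.77, 0.23],
])
avg = ensemble_scores(scores)
label = ["follicular carcinoma", "follicular nodule"][int(avg.argmax())]
print(avg, label)
```

Summing the 9 same-class scores and dividing by 9, as the patent phrases it, is exactly this mean.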
Beneficial effects of the invention: compared with the prior art, fine-tuning convolutional neural networks with bagging to distinguish these two similar image types greatly improves recognition accuracy.
Detailed Description
The embodiment comprises the following steps:
the method comprises the following steps: and (5) dividing the picture. The picture is segmented using a GAC model that changes the gradient boundary stopping function to a phase-consistent boundary stopping function. Since the boundaries of the images of thyroid follicular carcinoma and thyroid follicular nodule are unclear and difficult to segment, in order to enable better segmentation, the phase consistency boundary stop function is used instead of the boundary stop function of the original image.
Step two: and dividing the segmented picture data into a training set and a picture set to be classified, and then cutting the segmented picture into small pieces of 299 pixels in the training set and the picture set to be classified respectively. Training with patches, patches in the set of pictures to be classified are used for testing. The 299 x 299 pixel level is selected to reduce the interference and influence of a black background on classification after segmentation, and the number of images of thyroid follicular nodules and thyroid follicular carcinoma is insufficient, so that a convolution network model cannot be trained, and the data are increased by cutting into small pieces.
Step three: fine-tune the parameters of Inception V3 using the bagging method. Bagging training is used to address the imbalance between the two image classes: if the imbalance ratio is 1:4, even always guessing the majority class achieves 80% accuracy, which severely distorts classification accuracy. Fine-tuning is adopted to avoid the overfitting caused by an insufficient number of images; training a deep convolutional network on a small data set leads to overfitting, and an overfitted model is inaccurate on new data.
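The imbalance argument is easy to verify numerically (a hypothetical 1:4 split, not data from the patent):

```python
# At a 1:4 class imbalance, always predicting the majority class
# already scores 80% accuracy -- a misleadingly strong baseline.
labels = ["carcinoma"] * 20 + ["nodule"] * 80  # hypothetical 1:4 split
predictions = ["nodule"] * len(labels)         # trivial majority guess
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.8
```

Balanced bagging batches (24 + 35 patches) prevent the network from collapsing onto this trivial majority-class solution.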
3-1: in each trim batch, 59 pieces were randomly selected from the training set as a trim batch set, including 24 pieces of thyroid follicular carcinoma and 35 pieces of thyroid follicular nodule.
3-2: the number of fine-tuning layers: and (3) fixing other weight parameters of the network model through transfer learning, and only training the weight parameters of the logs layer and the Auxlogs layer.
3-3: fine adjustment of parameters: the learning rate is 0.001, which should not be too large, otherwise, the learning rate cannot fall into the local minimum. And (3) avoiding the over-fitting condition by using an L2 regular form, selecting a RMSProp gradient descent method, selecting a Relu function as an activation function, and iteratively fine-tuning by using 250-300 batches.
Step four: Inception V3 is saved as model V3-1 after 300 batches of iterative fine-tuning. After 4 more rounds of fine-tuning of Inception V3 as in step three, the models are saved as V3-2, V3-3, V3-4, and V3-5 respectively. In this embodiment it was found that the Inception V3 network classifies thyroid follicular nodule images very accurately, while the Inception-ResNet V2 network, whose residual connections bypass the convolution and pooling operations, clearly retains the detail information in thyroid follicular carcinoma images; retaining this detail helps distinguish carcinoma images, so both networks are fine-tuned for the classification.
Step five: fine-tune Inception-ResNet V2 multiple times as in step three and save the fine-tuned models; after 4 rounds of fine-tuning they are saved as models V2-1, V2-2, V2-3, and V2-4. The reasons for saving the fine-tuned models are as follows. First, the training set is small, so training ends at a local rather than global minimum of the weights. Second, a network trapped in a local minimum is normally undesirable; however, it was found that saving models at different local minima and ensembling them classifies better than a single global-minimum model. Third, because each batch is drawn randomly from the training set, each descent takes a different direction, so the local minimum reached each time differs. Fourth, since the ensembled local minima differ, each saved model produces a different classification on the test set; combining these different behaviors better avoids overfitting and is more robust.
Step six: feed the patches of the set of images to be classified into each of the 9 saved models; each model outputs two scores. One score is the likelihood that the patch is an ultrasound image of thyroid follicular carcinoma, and the other is the likelihood that it is an ultrasound image of a thyroid follicular nodule; the same-class scores of the 9 results are summed and divided by 9 to obtain an average, which is used as the final result for deciding to which class the image belongs.
The experimental results were measured by accuracy, sensitivity, and specificity; the results of this embodiment compared with the published Gachon University paper are shown in the table below:
TABLE 1 ultrasound image of thyroid follicular cancer and ultrasound image of thyroid follicular nodule
[Table 1 is provided as an image in the original document and is not reproduced in this text.]

Claims (2)

1. A method for classifying thyroid follicular images using bagging-based fine-tuning of CNNs, characterized by comprising the following steps:
step one: segmenting the image;
step two: dividing the segmented image data into a training set and a set of images to be classified, and cropping the images in both sets into 299 x 299-pixel patches; wherein training is performed with the training-set patches, and testing with the patches in the set of images to be classified;
step three: fine-tuning the parameters of Inception V3 using a bagging method, specifically:
3-1: in each fine-tuning batch, randomly selecting 59 patches from the training set as the fine-tuning batch set, comprising 24 thyroid follicular carcinoma patches and 35 thyroid follicular nodule patches;
3-2: layers to fine-tune: freezing the other weight parameters of the network model via transfer learning, and training only the weights of the Logits and AuxLogits layers;
3-3: fine-tuning hyperparameters: the learning rate is 0.001, and L2 regularization is used to avoid overfitting; the RMSProp gradient descent method is selected; the activation function is ReLU; fine-tuning iterates for 250 to 300 batches;
step four: saving Inception V3 as model V3-1 after 300 batches of iterative fine-tuning; fine-tuning Inception V3 several more times as in step three and saving it as models V3-2, V3-3, V3-4, and V3-5 respectively;
step five: fine-tuning Inception-ResNet V2 multiple times in the manner of step three and saving the fine-tuned models, which after 4 rounds of fine-tuning are saved as models V2-1, V2-2, V2-3, and V2-4;
step six: feeding the patches of the set of images to be classified into each of the 9 saved models, each model outputting two scores, one being the likelihood that the image is an ultrasound image of thyroid follicular carcinoma and the other the likelihood that it is an ultrasound image of a thyroid follicular nodule; summing the same-class scores of the 9 results and dividing by 9 to obtain an average, which is used as the final result to decide to which class the ultrasound image belongs.
2. The method for classifying thyroid follicular images using bagging-based fine-tuning of CNNs as claimed in claim 1, characterized in that: in step one, the image is segmented using a GAC model in which the gradient-based boundary stopping function is replaced with a phase-congruency boundary stopping function.
CN201910079367.XA 2019-01-28 2019-01-28 Method for classifying thyroid follicular picture by using bagging fine tuning CNN Active CN109919187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910079367.XA CN109919187B (en) 2019-01-28 2019-01-28 Method for classifying thyroid follicular picture by using bagging fine tuning CNN


Publications (2)

Publication Number Publication Date
CN109919187A CN109919187A (en) 2019-06-21
CN109919187B true CN109919187B (en) 2021-02-12

Family

ID=66960894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910079367.XA Active CN109919187B (en) 2019-01-28 2019-01-28 Method for classifying thyroid follicular picture by using bagging fine tuning CNN

Country Status (1)

Country Link
CN (1) CN109919187B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834943A (en) * 2015-05-25 2015-08-12 电子科技大学 Brain tumor classification method based on deep learning
CN106951928A (en) * 2017-04-05 2017-07-14 广东工业大学 The Ultrasound Image Recognition Method and device of a kind of thyroid papillary carcinoma
CN108898160A (en) * 2018-06-01 2018-11-27 中国人民解放军战略支援部队信息工程大学 Breast cancer tissue's Pathologic Grading method based on CNN and image group Fusion Features

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9547061B2 (en) * 2012-01-27 2017-01-17 Koninklijke Philips N.V. Tumor segmentation and tissue classification in 3D multi-contrast
CN107835654B (en) * 2015-05-21 2020-03-13 奥林巴斯株式会社 Image processing apparatus, image processing method, and recording medium
CN106056595B (en) * 2015-11-30 2019-09-17 浙江德尚韵兴医疗科技有限公司 Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
CN108520518A (en) * 2018-04-10 2018-09-11 复旦大学附属肿瘤医院 A kind of thyroid tumors Ultrasound Image Recognition Method and its device
CN109002831A (en) * 2018-06-05 2018-12-14 南方医科大学南方医院 A kind of breast density classification method, system and device based on convolutional neural networks


Also Published As

Publication number Publication date
CN109919187A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN107194318B (en) Target detection assisted scene identification method
US20210019872A1 (en) Detecting near-duplicate image
CN110222681A (en) A kind of casting defect recognition methods based on convolutional neural networks
CN109558821B (en) Method for calculating number of clothes of specific character in video
CN110781897B (en) Semantic edge detection method based on deep learning
CN109753995B (en) Optimization method of 3D point cloud target classification and semantic segmentation network based on PointNet +
CN108009222B (en) Three-dimensional model retrieval method based on better view and deep convolutional neural network
CN109635643B (en) Fast face recognition method based on deep learning
Liu et al. Age classification using convolutional neural networks with the multi-class focal loss
CN103927534A (en) Sprayed character online visual detection method based on convolutional neural network
CN110032938A (en) A kind of Tibetan language recognition method, device and electronic equipment
CN109902757B (en) Face model training method based on Center Loss improvement
CN105989001B (en) Image search method and device, image search system
CN106372624B (en) Face recognition method and system
JP2019106171A5 (en)
CN111027590B (en) Breast cancer data classification method combining deep network features and machine learning model
CN113408605A (en) Hyperspectral image semi-supervised classification method based on small sample learning
TWI709188B (en) Fusion-based classifier, classification method, and classification system
CN108154158B (en) Building image segmentation method for augmented reality application
CN110781941A (en) Human ring labeling method and device based on active learning
CN111833322B (en) Garbage multi-target detection method based on improved YOLOv3
CN113221956B (en) Target identification method and device based on improved multi-scale depth model
CN106372111A (en) Local feature point screening method and system
WO2018006631A1 (en) User level automatic segmentation method and system
CN111461120A (en) Method for detecting surface defects of convolutional neural network object based on region

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant