CN109376777A - Cervical cancer tissues pathological image analysis method and equipment based on deep learning - Google Patents

Cervical cancer tissues pathological image analysis method and equipment based on deep learning

Info

Publication number
CN109376777A
CN109376777A (application CN201811212019.7A)
Authority
CN
China
Prior art keywords
trained
fully connected layer
image
convolutional neural
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201811212019.7A
Other languages
Chinese (zh)
Inventor
李晨
孔繁捷
蒋涛
许宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Smart Motion Muniu Intelligent Technology Co Ltd
Original Assignee
Sichuan Muniuliuma Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Muniuliuma Intelligent Technology Co Ltd
Priority to CN201811212019.7A
Publication of CN109376777A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30096: Tumor; Lesion

Abstract

The invention discloses a deep-learning-based method and device for analysing cervical cancer histopathology images. The method comprises: acquiring cervical cancer histopathology images and setting an image label for each image; training two convolutional neural networks on the training images to obtain two trained convolutional neural networks; fixing the parameters of the two trained convolutional neural networks and training fully connected layer 1 on the training images to obtain a trained fully connected layer 1; and inputting an image to be tested into the trained classifier, where the two convolutional neural networks each extract a feature vector from the image, the output feature vectors f1 and f2 are spliced together and input into fully connected layer 1, which outputs a feature vector f3, and the classification result is determined by the largest element of f3. The invention can automatically classify and identify the differentiation degree of the original histopathology slide micrographs acquired by doctors, assisting doctors in diagnosis.

Description

Cervical cancer tissues pathological image analysis method and equipment based on deep learning
Technical field
The present invention relates to the medical field, and more particularly to a deep-learning-based cervical cancer histopathology image analysis method and device.
Background technique
Cervical cancer is one of the most common gynaecological malignant tumours. At present, computer-aided diagnosis research on cervical cancer histopathology micrographs focuses mainly on applying classical image feature extraction and machine learning classification methods to segment cervical histopathology images and screen for pathological abnormalities; little research has addressed computer-aided diagnosis of the differentiation degree of cervical cancer histopathology images.
In the prior art, computer vision methods are used to extract features from histopathology images containing only a single cervical cancer cell, and conventional machine learning methods are then used to classify the extracted features. As shown in Fig. 1, the method comprises five steps, from top to bottom:
(1) Data preprocessing: the colour histopathology image is converted to a grey-level image with the image size unchanged; the image is enhanced to filter out noise interference and strengthen edge information.
(2) Cell image segmentation: the image is segmented twice to distinguish the cell and the nucleus; after conversion to a binary image, segmentation is realised with a threshold-based algorithm.
(3) Feature extraction: after thresholding the image, morphological features of the cell and the nucleus are extracted using 8-connected chain codes; the extracted features include perimeter, area, circularity, rectangularity and the nucleus-to-cytoplasm ratio, and the feature values are then standardised.
(4) Machine learning (artificial neural network or support vector machine) learning stage: after the cell feature values have been extracted and standardised, the standardised feature values are learnt and the classifier parameter weights are trained until the back-propagated error e reaches the required set value.
(5) Machine learning classification stage: this is the core part of the system, which all the previous parts prepare for; after a series of parameter learning it obtains classification weights that yield a high final classification accuracy, so that the test cell image data can be classified accurately.
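To make the hand-crafted features named in step (3) concrete, here is a minimal numpy sketch of a few of them. It is an illustration under assumptions, not the prior art's exact method: the prior art derives the perimeter from 8-connected chain codes, whereas this sketch simply counts boundary pixels of a binary mask, and the function names are invented for the example.

```python
import numpy as np

def morph_features(mask):
    """Mask-based approximations of some hand-crafted cell features:
    area, perimeter, circularity (4*pi*A/P^2), rectangularity (A / bounding box).
    `mask` is a 2-D boolean array from the binary segmentation step."""
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # interior pixels: object pixels whose 4 neighbours are all object pixels
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())   # boundary pixel count
    ys, xs = np.nonzero(mask)
    bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    circularity = 4 * np.pi * area / perimeter ** 2
    rectangularity = area / bbox_area
    return area, perimeter, circularity, rectangularity
```

The nucleus-to-cytoplasm ratio mentioned in the text would follow the same pattern: the nucleus area divided by the cytoplasm area, using the two segmentations from step (2).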
However, this method can only handle single-cell histopathology pictures with low noise and high clarity; it cannot directly process a complete histopathology microsection image, so single-cell images must be extracted manually. It uses 27 hand-engineered morphological features as the machine learning input, which transfers poorly to other settings and is prone to over-fitting during learning. Moreover, it can only classify the category of individual cells; it cannot globally diagnose the patient's cancer differentiation degree and thereby assist in judging the patient's condition.
Summary of the invention
The technical problem to be solved by the present invention is: in view of the problems of the prior art, the present invention provides a deep-learning-based cervical cancer histopathology image analysis method and device that can classify histopathology pictures containing many cells. By classifying the differentiation degree of cervical cancer histopathology images, the grade of malignancy of the cancer can be judged, helping doctors formulate better treatment plans and treat the cancer promptly and effectively.
The cervical cancer histopathology image analysis method based on deep learning provided by the invention comprises:
Step 1: acquire cervical cancer histopathology images and set an image label for each image;
Step 2: train the classifier on the training images to obtain a trained classifier. The classifier comprises two convolutional neural networks and fully connected layer 1, the input of fully connected layer 1 being connected to the outputs of the two convolutional neural networks. The classifier is trained as follows: the two convolutional neural networks are each trained on the training images to obtain two trained convolutional neural networks; the parameters of the two trained convolutional neural networks are then fixed and fully connected layer 1 is trained on the training images to obtain a trained fully connected layer 1;
Step 3: input the image to be tested into the trained classifier; the two convolutional neural networks each extract a feature vector from the image, the output feature vectors f1 and f2 are spliced together and input into fully connected layer 1, which outputs a feature vector f3; the classification result is determined by the largest element of f3.
Further, fully connected layer 2 and fully connected layer 3 are respectively connected behind the two convolutional neural networks in the classifier; in this case the input of fully connected layer 1 is connected to the outputs of fully connected layers 2 and 3, and in step 2 fully connected layers 2 and 3 are trained at the same time as the two convolutional neural networks.
Further, step 1 also includes: dividing each acquired image x_i into 16 equal-sized sub-images z_ij; padding each sub-image with its mirror image to obtain a picture z'_ij of equal length and width; and, for each picture z'_ij, carrying out rotations of 0°, 90°, 180° and 270° and horizontal flip, vertical flip and channel flip operations, where i = 1, 2, ..., n, j = 1, 2, ..., 16, and n is the total number of images.
Further, the classifier training method in step 2 specifically includes: separately setting the hyperparameters of the two convolutional neural networks and of fully connected layers 2 and 3; importing the model parameters downloaded from ImageNet into the two convolutional neural networks; training the two convolutional neural networks and fully connected layers 2 and 3 on the training images to obtain two trained convolutional neural networks and trained fully connected layers 2 and 3; resetting and fixing the hyperparameters of the two convolutional neural networks and of fully connected layers 2 and 3; and training fully connected layer 1 on the training images to obtain a trained fully connected layer 1.
Further, the method includes step 4: performance assessment of the classifier. The evaluation indices include accuracy, precision, recall and the F1-measure, calculated as follows:

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 * precision * recall / (precision + recall)

where TP is the number of positive samples the trained convolutional neural network predicts as positive, FP is the number of negative samples predicted as positive, FN is the number of positive samples predicted as negative, and TN is the number of negative samples predicted as negative.
Further, the two convolutional neural networks are VGG16 and Inception-V3 respectively.
Another aspect of the invention provides a deep-learning-based cervical cancer histopathology image analysis device, comprising:
an image acquisition means for acquiring cervical cancer histopathology images and setting an image label for each image;
a classifier training means for training the classifier on the training images to obtain a trained classifier, the classifier comprising two convolutional neural networks and fully connected layer 1, the input of fully connected layer 1 being connected to the outputs of the two convolutional neural networks, and the classifier being trained as follows: the two convolutional neural networks are each trained on the training images to obtain two trained convolutional neural networks; the parameters of the two trained convolutional neural networks are fixed and fully connected layer 1 is trained on the training images to obtain a trained fully connected layer 1;
a classification result output means for inputting the image to be tested into the trained classifier, where the two convolutional neural networks each extract a feature vector from the image, the output feature vectors f1 and f2 are spliced together and input into fully connected layer 1, which outputs a feature vector f3, and the classification result is determined by the largest element of f3.
Further, fully connected layer 2 and fully connected layer 3 are respectively connected behind the two convolutional neural networks in the classifier; in this case the input of fully connected layer 1 is connected to the outputs of fully connected layers 2 and 3, and the classifier training means trains fully connected layers 2 and 3 at the same time as the two convolutional neural networks.
Further, the device also includes a classifier performance assessment means for assessing the classifier's performance. The evaluation indices include accuracy, precision, recall and the F1-measure, calculated as follows:

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 * precision * recall / (precision + recall)

where TP is the number of positive samples the trained convolutional neural network predicts as positive, FP is the number of negative samples predicted as positive, FN is the number of positive samples predicted as negative, and TN is the number of negative samples predicted as negative.
A further aspect of the invention provides a computer-readable storage medium on which a computer program is stored, characterised in that the computer program, when executed by a processor, realises the steps of the method described above.
Compared with the prior art, the invention enhances the intelligence of cervical cancer histopathology picture classification: it can automatically classify and identify the differentiation degree of the original histopathology slide micrographs acquired by doctors, assisting doctors in diagnosis.
Brief description of the drawings
Examples of the present invention will be described by way of reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of a prior-art method for classifying cervical cancer histopathology images;
Fig. 2 is a schematic diagram of the image data enhancement of the embodiment of the present invention;
Fig. 3 is a schematic diagram of the cervical cancer histopathology image analysis method of the embodiment of the present invention;
Fig. 4 is a scatter plot of the F1-measure of the classifier trained in the embodiment of the present invention;
Fig. 5 shows cervical cancer histopathology images successfully classified in the embodiment of the present invention.
Specific embodiment
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
Unless specifically stated otherwise, any feature disclosed in this specification may be replaced by other equivalent or similarly purposed alternative features. That is, unless specifically stated, each feature is only one example of a series of equivalent or similar features.
The specific scheme of the deep-learning-based cervical cancer histopathology image analysis method provided by the invention is as follows:
One, sample data acquisition and enhancement
The histopathology micrographs were prepared by the pathology department of China Medical University from stained cervical cancer tissue sections; the cancer pathological type, differentiation degree and tumour size were recorded. Immunohistochemical staining was performed with a Leica BOND-MAX fully automatic immunohistochemical stainer (Leica). AQP-1 monoclonal antibody (Abcam, Shanghai) stock solution was diluted to a 1:300 working solution and injected into an open reagent bottle for dilution; VEGF polyclonal antibody (Abcam, Shanghai) stock solution was diluted to a 1:50 working solution. Antigen retrieval was then carried out with antigen retrieval buffer ER1 for 20 min. Dewaxing, exposure of antigenic determinants, primary antibody incubation, blocking, DAB oxidation and colour development, haematoxylin counterstaining and dehydration were completed automatically by computer, followed by manual mounting. For each section, 3 high-power fields (x400) filled with cervical cancer tissue were randomly selected, and images were captured with NIS-Elements F 3.2 image acquisition software.
Since the existing cervical cancer histopathology micrograph dataset contains only 307 images in total, over-fitting is likely. At the same time, cervical cancer histopathology micrographs are rotation-invariant, so rotation and mirroring can be used for data enhancement, as shown in Fig. 2. Each sample image x_i (i = 1, 2, ..., n, where n is the total number of images in the sample set X) is divided into 16 equal-sized sub-images z_ij (j = 1, 2, ..., 16), and each sub-image is padded with its mirror image to obtain a picture z'_ij of equal length and width. For each picture z'_ij we carry out two kinds of data enhancement: first, rotation by 0°, 90°, 180° and 270°; second, horizontal flip, vertical flip and channel flip. In this way each sub-image z'_ij produces 16 enhanced pictures, whose labels remain the label of the original sample image x_i. Each sample image x_i thus yields 256 pictures after enhancement, so the original dataset of 307 images is expanded to 78,592, of which 45,824 are used for training in one specific embodiment. The results after dataset amplification in the embodiment are shown in Table 1; AQP, HIF and VEGF denote three different staining methods for cervical cancer pathological sections. In some embodiments, pixel-level pre-segmentation with the K-means segmentation method or Mask-RCNN can be applied to the histopathology images to remove useless information.
Table 1: cervical cancer histopathology micrograph dataset
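The enhancement scheme described above (16 tiles per image, 16 variants per tile, 256 pictures per image) can be sketched in numpy as follows. This is a minimal illustration under assumptions: the tile grid, the padding direction and the exact combination of rotations and flips are interpretations of the text, and the function names are invented for the example.

```python
import numpy as np

def augment_tile(tile):
    """Produce 16 variants of one padded sub-image z'_ij:
    4 rotations x {identity, horizontal flip, vertical flip, channel flip}."""
    variants = []
    for k in range(4):                                   # 0, 90, 180, 270 degrees
        r = np.rot90(tile, k)
        variants.extend([r, r[:, ::-1], r[::-1, :], r[..., ::-1]])
    return variants

def augment_image(img, grid=4):
    """Split an HxWxC image into grid*grid equal tiles, mirror-pad each tile
    to a square, and return all augmented variants (grid*grid*16 pictures)."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    out = []
    for i in range(grid):
        for j in range(grid):
            tile = img[i * h:(i + 1) * h, j * w:(j + 1) * w]
            side = max(h, w)
            # mirror-pad so the tile's length equals its width
            tile = np.pad(tile, ((0, side - h), (0, side - w), (0, 0)),
                          mode="reflect")
            out.extend(augment_tile(tile))
    return out
```

With 307 source images this scheme yields 307 x 256 = 78,592 pictures, matching the dataset size stated in the text.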
Two, classifier training
The classifier is trained on the training images to obtain a trained classifier. The classifier comprises two convolutional neural networks and fully connected layer 1; the input of fully connected layer 1 is connected to the outputs of the two convolutional neural networks, and the fully connected layers are ordinary deep neural networks (DNNs). The classifier is trained as follows: the two convolutional neural networks are each trained on the training images to obtain two trained convolutional neural networks; their parameters are fixed; then fully connected layer 1 is trained on the training images to obtain a trained fully connected layer 1.
In an embodiment of the invention, the enhanced images are divided into a training set, a validation set and a test set. The training set pictures and the corresponding binarised sample labels (high, medium and low differentiation degree) are input into the classifier for training, which outputs a 1x3 feature vector used for classification:

y_i = [y_{i,1}  y_{i,2}  y_{i,3}]   (1)

Each element of the output feature vector is the probability that the corresponding input picture belongs to the high, medium or low differentiation degree, and the final output is the differentiation degree whose element has the largest probability value.
In building the convolutional neural network models, transfer learning can be employed. Transfer learning suppresses over-fitting and improves the performance of classifiers trained on small datasets. The embodiment of the invention realises transfer learning by importing the parameters of other pre-trained models.
In a specific embodiment of the invention, the VGG16 and Inception-V3 convolutional neural network models are chosen; other kinds of neural network can be chosen in other embodiments, for example replacing the traditional VGG convolutional neural network with a deep residual network (ResNet) to raise model complexity and improve feature extraction ability. Before training, the embodiment first imports pre-trained model parameters, obtained by pre-training on the ImageNet dataset. Pre-trained parameters usually either cannot be changed during training or can only be fine-tuned; the embodiment uses fine-tuning, applying a learning rate of 0.0001 to the last 8 layers of VGG16 and to the last 249 layers of Inception-V3.
Preferably, to let training proceed smoothly, fully connected layers can be added behind each convolutional neural network model, as shown in Fig. 3. The feature maps output by the convolutional neural networks are flattened and then fed into the subsequent fully connected layers; flattening unrolls the features extracted by the convolutional neural networks into a one-dimensional feature vector. In some embodiments, batch normalisation and drop-out layers can be inserted among the fully connected layers behind the convolutional models to suppress vanishing gradients, exploding gradients and over-fitting, with a drop-out rate of 0.5. The last fully connected layer outputs to a softmax layer for classification; the loss function is cross-entropy, the optimiser is AdamOptimizer with a learning rate of 0.0005, the batch size (the number of pictures input at one time) selected during training is 64, and the number of training epochs (one epoch being the process in which all pictures in the training set are input into the network for training; it can be understood as the number of times the whole training set is used) is 80. Finally, the model parameter checkpoint with the highest validation accuracy is saved as the final parameters of the two convolutional neural networks.
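The softmax output layer and cross-entropy loss named above can be sketched in numpy as follows. This is a minimal illustration of the objective only, not the patent's training code: a full Keras/TensorFlow loop with Adam (learning rate 0.0005, batch size 64, 80 epochs) would wrap these two functions, and the function names here are invented for the example.

```python
import numpy as np

def softmax(logits):
    """Softmax over the 3 differentiation classes (last axis)."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, onehot):
    """Mean cross-entropy between softmax outputs and binarised one-hot labels."""
    return float(-np.mean(np.sum(onehot * np.log(probs + 1e-12), axis=-1)))
```

The 1e-12 term only guards against log(0); the binarised labels correspond to the high/medium/low differentiation degrees described in the embodiment.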
After the fully connected layers have been trained, we fine-tune the convolutional neural network models, training the pre-trained parameters with a small learning rate. Once the VGG16 and Inception-V3 networks have been trained, the 1x3 feature vectors they output, f_VGG16 and f_Inception-v3, are the feature vectors extracted by the deep learning method. Finally these two feature vectors are spliced together and input into a new fully connected neural network, which outputs the final classification result; before this new fully connected network is used, it too must be trained on the training set pictures.
Specifically, the classifier training method of the embodiment of the invention comprises:
Separately set the hyperparameters of the convolutional neural networks VGG16 and Inception-V3 and of the fully connected layers behind them, i.e. fully connected layers 2 and 3; the hyperparameters are shown in Table 2.
Table 2: hyperparameters set before training
learning rate: 0.0005; fine-tune learning rate: 0.0001; epochs: 80; batch size: 64; drop-out rate: 0.5
Import the model parameters downloaded from ImageNet into the VGG16 and Inception-V3 models.
Train the VGG16 and Inception-V3 networks and fully connected layers 2 and 3 on the training images to obtain two trained convolutional neural networks and trained fully connected layers 2 and 3.
After training, reset and fix the hyperparameters of the two convolutional neural networks and of fully connected layers 2 and 3; the reset hyperparameters are shown in Table 3.
Table 3: hyperparameters reset after training
learning rate: 0.0; fine-tune learning rate: 0.000; epochs: 80; batch size: 64; drop-out rate: 0.5
Train fully connected layer 1 on the training images to obtain a trained fully connected layer 1.
Three, classifier testing
The image to be tested is input into the trained classifier. The VGG16 and Inception-V3 convolutional neural networks each extract a feature vector from the image; the output feature vectors f1 and f2 are spliced together and input into fully connected layer 1 for further feature dimension reduction, which outputs a 1x3 feature vector f3 used for the final classification. The classification result is determined by the largest element of f3.
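The fusion step just described, splicing f1 and f2 and reducing them through fully connected layer 1 to f3, can be sketched as follows. The weight shapes, the class ordering and the function names are illustrative assumptions; in the real system W and b would be the trained parameters of fully connected layer 1.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_classify(f1, f2, W, b, grades=("high", "medium", "low")):
    """f1, f2: 1x3 feature vectors from VGG16 and Inception-V3.
    W (3x6), b (3,): assumed parameters of fully connected layer 1.
    Returns the 1x3 vector f3 and the grade of its largest element."""
    f = np.concatenate([f1, f2])       # splice f1 and f2 into a 1x6 vector
    f3 = softmax(W @ f + b)            # fully connected layer 1 reduces to 1x3
    return f3, grades[int(np.argmax(f3))]
```

With, say, W = [I | I] (summing the two branches' class scores), two branches that both favour the middle class yield the "medium" differentiation degree.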
Four, classifier performance assessment
In the machine learning field, assessing classifier performance is an important task; the usual evaluation indices are accuracy, precision, recall and the F1-measure. Accuracy is the ratio of correctly classified samples to the total number of samples in a given test set; precision reflects the proportion of samples judged positive by the classifier that are truly positive; recall reflects the proportion of all positive samples that are correctly identified; and the F1-measure is an index that considers precision and recall together. The four indices are calculated as follows:

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 * precision * recall / (precision + recall)

where TP (true positive) is the number of positive samples the trained convolutional neural network predicts as positive, FP (false positive) is the number of negative samples predicted as positive, FN (false negative) is the number of positive samples predicted as negative, and TN (true negative) is the number of negative samples predicted as negative. In the multi-class statistics of the invention (cervical cancer is divided into high, medium and low stages, each stage regarded as one class), the samples of the class under study are the positive samples and the samples of all other classes are the negative samples.
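The indices and the one-vs-rest counting defined above can be expressed directly in a few lines of Python. This is a plain restatement of the definitions; the function names are invented for the example.

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from the four counts defined above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

def one_vs_rest_counts(y_true, y_pred, cls):
    """Counts for one class in the multi-class setting: samples of `cls`
    are positive, samples of all other classes are negative."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn
```

Computing these counts once per differentiation degree (high, medium, low) gives the per-class figures of the kind reported in Table 4.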
The classifier performance assessment results of the embodiment of the invention are shown in Table 4. As Table 4 shows, the final classification accuracy reaches 96.05% for cervical cancer histopathology pictures of low differentiation degree, 58.68% for medium differentiation and 85.39% for high differentiation.
Table 4: classifier performance assessment results
Fig. 4 is a scatter plot of the F1-measure of the classifier trained in the embodiment of the invention; the F1-measure represents the classifier's performance, and the higher the F1 value, the stronger the classifier's robustness.
Fig. 5 shows cervical cancer histopathology images successfully classified by the classifier. In images of low differentiation degree the cell shapes are very irregular and the cell structure is hard to distinguish; in images of medium differentiation degree the cells are arranged irregularly but largely retain their structure; in images of high differentiation degree the cells are arranged neatly and compactly and their shapes are relatively full and regular. Because the features of medium-differentiation histopathology images are not obvious enough, lying between those of low and high differentiation, they are easily confused, and their classification accuracy is lower than that of the other two classes.
Another aspect of the invention also provides a deep-learning-based cervical cancer histopathology image analysis device comprising an image acquisition means, a classifier training means and a classification result output means, and preferably also a classifier performance assessment means; each means corresponds one-to-one with the steps of the analysis method described above.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, which may include read-only memory (ROM), random access memory (RAM), a magnetic disk or an optical disc, etc.
The invention is not limited to the aforementioned specific embodiments. The invention extends to any new feature or any new combination disclosed in this specification, and to the steps of any new method or process disclosed or any new combination thereof.

Claims (10)

1. A cervical cancer histopathology image analysis method based on deep learning, characterised by comprising:
step 1, acquiring cervical cancer histopathology images and setting an image label for each image;
step 2, training a classifier on the training images to obtain a trained classifier, the classifier comprising two convolutional neural networks and fully connected layer 1, the input of fully connected layer 1 being connected to the outputs of the two convolutional neural networks, and the classifier being trained as follows: the two convolutional neural networks are each trained on the training images to obtain two trained convolutional neural networks; the parameters of the two trained convolutional neural networks are fixed and fully connected layer 1 is trained on the training images to obtain a trained fully connected layer 1;
step 3, inputting the image to be tested into the trained classifier, where the two convolutional neural networks each extract a feature vector from the image, the output feature vectors f1 and f2 are spliced together and input into fully connected layer 1, which outputs a feature vector f3, and the classification result is determined by the largest element of f3.
2. The deep-learning-based cervical cancer histopathology image analysis method according to claim 1, characterized in that a second fully connected layer and a third fully connected layer are respectively connected behind the two convolutional neural networks in the classifier, the input of the first fully connected layer then being connected to the outputs of the second and third fully connected layers, and in step 2 the second and third fully connected layers are trained together with the two convolutional neural networks.
3. The deep-learning-based cervical cancer histopathology image analysis method according to claim 1, characterized in that step 1 further comprises: dividing each obtained image x_i into 16 equal-sized sub-images z_ij; filling each sub-image by mirror padding into a picture z'_ij whose length equals its width; and, for each picture z'_ij, performing 0°, 90°, 180° and 270° rotation operations as well as horizontal flip, vertical flip and channel flip operations, where i = 1, 2, …, n, j = 1, 2, …, 16, and n is the total number of images.
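The augmentation of claim 3 can be sketched as follows; a minimal illustration assuming a 4×4 grid split, mirror padding of the shorter side, and the seven listed variants (four rotations plus three flips) per patch. The 64×96×3 input size is hypothetical.

```python
import numpy as np

def augment(image):
    """Sketch of claim 3: split one image into a 4x4 grid of 16 equal
    sub-images, mirror-pad each to a square, then generate the four
    rotations and three flips of every padded patch."""
    h, w, _ = image.shape
    ph, pw = h // 4, w // 4
    out = []
    for i in range(4):
        for j in range(4):
            z = image[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
            # mirror-pad the shorter side so length equals width
            side = max(ph, pw)
            zp = np.pad(z, ((0, side - ph), (0, side - pw), (0, 0)),
                        mode='reflect')
            for k in range(4):              # 0, 90, 180, 270 degrees
                out.append(np.rot90(zp, k))
            out.append(zp[:, ::-1])         # horizontal flip
            out.append(zp[::-1, :])         # vertical flip
            out.append(zp[:, :, ::-1])      # channel flip
    return out

patches = augment(np.zeros((64, 96, 3)))    # 16 patches x 7 variants = 112
```

Each source image thus yields 112 training patches, all square, which suits the fixed input sizes expected by ImageNet-pretrained networks.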
4. The deep-learning-based cervical cancer histopathology image analysis method according to claim 2, characterized in that the method of training the classifier in step 2 specifically comprises: setting the hyperparameters of the two convolutional neural networks and of the second and third fully connected layers; importing model parameters downloaded from ImageNet into the two convolutional neural networks; importing the training images and training the two convolutional neural networks together with the second and third fully connected layers to obtain two trained convolutional neural networks and trained second and third fully connected layers; resetting the hyperparameters of the two convolutional neural networks and of the second and third fully connected layers and fixing their parameters; and importing the training images and training the first fully connected layer to obtain a trained first fully connected layer.
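The two-stage scheme of claim 4 — fine-tune the ImageNet-initialized networks with their own heads, then freeze them and train only the first fully connected layer — can be sketched at the level of parameter updates. The trained extractors are stubbed here as fixed linear maps, and all dimensions, the learning rate, and the single-sample squared-error objective are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage-1 result: two "trained" extractors, now frozen (claim 4); in the
# claim these are ImageNet-initialized CNNs fine-tuned with fully
# connected layers two and three.
frozen_A = rng.standard_normal((8, 32))
frozen_B = rng.standard_normal((8, 32))
A0, B0 = frozen_A.copy(), frozen_B.copy()   # snapshots to show they stay fixed

# Stage 2: only fully connected layer one (W) is trainable.
W = np.zeros((2, 16))

def forward(x):
    f1, f2 = frozen_A @ x, frozen_B @ x     # frozen feature extraction
    f = np.concatenate([f1, f2])            # splice f1 and f2
    return f, W @ f                         # fully connected layer one

x = rng.standard_normal(32)
target = np.array([1.0, 0.0])               # one-hot label for one sample
for _ in range(300):                        # gradient steps update W only
    f, y = forward(x)
    W -= 0.0005 * np.outer(y - target, f)   # squared-error gradient step
```

Because the gradient only flows into W, the stage-1 parameters are untouched, which is exactly the "fix and train the first fully connected layer" step of the claim.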
5. The deep-learning-based cervical cancer histopathology image analysis method according to claim 1, characterized by further comprising a step 4 of performing a performance evaluation of the classifier, the evaluation indices comprising accuracy, precision, recall and the F1-measure, each index being calculated as follows:
accuracy = (TP + TN) / (TP + FP + FN + TN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 × precision × recall / (precision + recall)
where TP is the number of positive samples predicted as positive by the trained convolutional neural networks, FP is the number of negative samples predicted as positive, FN is the number of positive samples predicted as negative, and TN is the number of negative samples predicted as negative.
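The four indices of the performance evaluation step follow directly from the confusion-matrix counts; a minimal sketch using the standard definitions, with the counts below chosen purely for illustration:

```python
def evaluate(tp, fp, fn, tn):
    """Compute accuracy, precision, recall and F1-measure from the
    confusion-matrix counts TP, FP, FN, TN."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# hypothetical counts for a binary pathology classifier
acc, p, r, f1 = evaluate(tp=40, fp=10, fn=5, tn=45)
```

Note that precision and recall (and hence F1) ignore TN, so on class-imbalanced pathology datasets they complement accuracy rather than duplicate it.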
6. The deep-learning-based cervical cancer histopathology image analysis method according to any one of claims 1 to 5, characterized in that the two convolutional neural networks are VGG16 and Inception-V3, respectively.
7. A deep-learning-based cervical cancer histopathology image analysis device, characterized by comprising:
an image acquisition means for obtaining cervical cancer histopathology images and setting an image label for each image;
a classifier training means for training a classifier on the training images to obtain a trained classifier, the classifier comprising two convolutional neural networks and a first fully connected layer, the input of the first fully connected layer being connected to the outputs of the two convolutional neural networks, wherein the method of training the classifier comprises: training the two convolutional neural networks separately on the training images to obtain two trained convolutional neural networks; and fixing the parameters of the two trained convolutional neural networks and training the first fully connected layer on the training images to obtain a trained first fully connected layer; and
a classification result output means for inputting an image to be tested into the trained classifier, wherein the two convolutional neural networks each extract a feature vector from the image, the output feature vectors f1 and f2 are spliced together and input to the first fully connected layer, which outputs a feature vector f3, and the classification result is determined by the element of f3 with the largest value.
8. The deep-learning-based cervical cancer histopathology image analysis device according to claim 7, characterized in that a second fully connected layer and a third fully connected layer are respectively connected behind the two convolutional neural networks in the classifier, the input of the first fully connected layer then being connected to the outputs of the second and third fully connected layers, and the classifier training means trains the second and third fully connected layers together with the two convolutional neural networks.
9. The deep-learning-based cervical cancer histopathology image analysis device according to claim 7, characterized by further comprising a classifier performance evaluation means for performing a performance evaluation of the classifier, the evaluation indices comprising accuracy, precision, recall and the F1-measure, each index being calculated as follows:
accuracy = (TP + TN) / (TP + FP + FN + TN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 × precision × recall / (precision + recall)
where TP is the number of positive samples predicted as positive by the trained convolutional neural networks, FP is the number of negative samples predicted as positive, FN is the number of positive samples predicted as negative, and TN is the number of negative samples predicted as negative.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN201811212019.7A 2018-10-18 2018-10-18 Cervical cancer tissues pathological image analysis method and equipment based on deep learning Withdrawn CN109376777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811212019.7A CN109376777A (en) 2018-10-18 2018-10-18 Cervical cancer tissues pathological image analysis method and equipment based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811212019.7A CN109376777A (en) 2018-10-18 2018-10-18 Cervical cancer tissues pathological image analysis method and equipment based on deep learning

Publications (1)

Publication Number Publication Date
CN109376777A true CN109376777A (en) 2019-02-22

Family

ID=65400787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811212019.7A Withdrawn CN109376777A (en) 2018-10-18 2018-10-18 Cervical cancer tissues pathological image analysis method and equipment based on deep learning

Country Status (1)

Country Link
CN (1) CN109376777A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978771A (en) * 2019-04-08 2019-07-05 哈尔滨理工大学 Cell image rapid fusion method based on content analysis
CN110009097A (en) * 2019-04-17 2019-07-12 电子科技大学 The image classification method of capsule residual error neural network, capsule residual error neural network
CN110046670A (en) * 2019-04-24 2019-07-23 北京京东尚科信息技术有限公司 Feature vector dimension reduction method and device
CN110084270A (en) * 2019-03-22 2019-08-02 上海鹰瞳医疗科技有限公司 Pathological section image-recognizing method and equipment
CN110458223A (en) * 2019-08-06 2019-11-15 湖南省华芯医疗器械有限公司 Tumor of bronchus automatic testing method and detection system under a kind of scope
CN110472676A (en) * 2019-08-05 2019-11-19 首都医科大学附属北京朝阳医院 Stomach morning cancerous tissue image classification system based on deep neural network
CN111062411A (en) * 2019-11-06 2020-04-24 北京大学 Method, apparatus and device for identifying multiple compounds from mass spectrometry data
CN111134735A (en) * 2019-12-19 2020-05-12 复旦大学附属中山医院 Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium
CN111242131A (en) * 2020-01-06 2020-06-05 北京十六进制科技有限公司 Method, storage medium and device for image recognition in intelligent marking
CN111274903A (en) * 2020-01-15 2020-06-12 合肥工业大学 Cervical cell image classification method based on graph convolution neural network
CN111462076A (en) * 2020-03-31 2020-07-28 湖南国科智瞳科技有限公司 Method and system for detecting fuzzy area of full-slice digital pathological image
CN111783571A (en) * 2020-06-17 2020-10-16 陕西中医药大学 Cervical cell automatic classification model establishment and cervical cell automatic classification method
CN111882001A (en) * 2020-08-05 2020-11-03 武汉呵尔医疗科技发展有限公司 Cervical cell image classification method based on cell biological characteristic-convolutional neural network
CN112309068A (en) * 2020-10-29 2021-02-02 电子科技大学中山学院 Forest fire early warning method based on deep learning
CN112861916A (en) * 2021-01-13 2021-05-28 武汉希诺智能医学有限公司 Invasive cervical carcinoma pathological image classification method and system
CN113408620A (en) * 2021-06-21 2021-09-17 西安工业大学 Classification method for breast tissue pathological images
CN113762379A (en) * 2021-09-07 2021-12-07 福州迈新生物技术开发有限公司 Method for generating training data based on immunohistochemistry and storage device
CN115908954A (en) * 2023-03-01 2023-04-04 四川省公路规划勘察设计研究院有限公司 Geological disaster hidden danger identification system and method based on artificial intelligence and electronic equipment
CN117173485A (en) * 2023-09-18 2023-12-05 西安交通大学医学院第二附属医院 Intelligent classification system method and system for lung cancer tissue pathological images

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6823129B1 (en) * 2000-02-04 2004-11-23 Quvis, Inc. Scaleable resolution motion image recording and storage system
CN106548178A (en) * 2016-09-26 2017-03-29 深圳大学 A kind of semantic feature auto-scoring method and system based on Lung neoplasm CT images
CN107045720A (en) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 Artificial neural network and system for recognizing eye fundus image lesion
CN107545302A (en) * 2017-08-02 2018-01-05 北京航空航天大学 A kind of united direction of visual lines computational methods of human eye right and left eyes image
KR20180092453A (en) * 2017-02-09 2018-08-20 한국기술교육대학교 산학협력단 Face recognition method Using convolutional neural network and stereo image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6823129B1 (en) * 2000-02-04 2004-11-23 Quvis, Inc. Scaleable resolution motion image recording and storage system
CN106548178A (en) * 2016-09-26 2017-03-29 深圳大学 A kind of semantic feature auto-scoring method and system based on Lung neoplasm CT images
KR20180092453A (en) * 2017-02-09 2018-08-20 한국기술교육대학교 산학협력단 Face recognition method Using convolutional neural network and stereo image
CN107045720A (en) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 Artificial neural network and system for recognizing eye fundus image lesion
CN107545302A (en) * 2017-08-02 2018-01-05 北京航空航天大学 A kind of united direction of visual lines computational methods of human eye right and left eyes image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AKILAN T ET AL: "Effect of fusing features from multiple DCNN architectures in image classification", IET IMAGE PROCESSING *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084270A (en) * 2019-03-22 2019-08-02 上海鹰瞳医疗科技有限公司 Pathological section image-recognizing method and equipment
CN109978771A (en) * 2019-04-08 2019-07-05 哈尔滨理工大学 Cell image rapid fusion method based on content analysis
CN110009097A (en) * 2019-04-17 2019-07-12 电子科技大学 The image classification method of capsule residual error neural network, capsule residual error neural network
CN110009097B (en) * 2019-04-17 2023-04-07 电子科技大学 Capsule residual error neural network and image classification method of capsule residual error neural network
CN110046670A (en) * 2019-04-24 2019-07-23 北京京东尚科信息技术有限公司 Feature vector dimension reduction method and device
CN110046670B (en) * 2019-04-24 2021-04-30 北京京东尚科信息技术有限公司 Feature vector dimension reduction method and device
CN110472676A (en) * 2019-08-05 2019-11-19 首都医科大学附属北京朝阳医院 Stomach morning cancerous tissue image classification system based on deep neural network
CN110458223A (en) * 2019-08-06 2019-11-15 湖南省华芯医疗器械有限公司 Tumor of bronchus automatic testing method and detection system under a kind of scope
CN111062411A (en) * 2019-11-06 2020-04-24 北京大学 Method, apparatus and device for identifying multiple compounds from mass spectrometry data
CN111134735A (en) * 2019-12-19 2020-05-12 复旦大学附属中山医院 Lung cell pathology rapid on-site evaluation system and method and computer readable storage medium
CN111242131A (en) * 2020-01-06 2020-06-05 北京十六进制科技有限公司 Method, storage medium and device for image recognition in intelligent marking
CN111274903A (en) * 2020-01-15 2020-06-12 合肥工业大学 Cervical cell image classification method based on graph convolution neural network
CN111274903B (en) * 2020-01-15 2022-12-06 合肥工业大学 Cervical cell image classification method based on graph convolution neural network
CN111462076A (en) * 2020-03-31 2020-07-28 湖南国科智瞳科技有限公司 Method and system for detecting fuzzy area of full-slice digital pathological image
CN111783571A (en) * 2020-06-17 2020-10-16 陕西中医药大学 Cervical cell automatic classification model establishment and cervical cell automatic classification method
CN111882001A (en) * 2020-08-05 2020-11-03 武汉呵尔医疗科技发展有限公司 Cervical cell image classification method based on cell biological characteristic-convolutional neural network
CN112309068B (en) * 2020-10-29 2022-09-06 电子科技大学中山学院 Forest fire early warning method based on deep learning
CN112309068A (en) * 2020-10-29 2021-02-02 电子科技大学中山学院 Forest fire early warning method based on deep learning
CN112861916A (en) * 2021-01-13 2021-05-28 武汉希诺智能医学有限公司 Invasive cervical carcinoma pathological image classification method and system
CN113408620A (en) * 2021-06-21 2021-09-17 西安工业大学 Classification method for breast tissue pathological images
CN113762379A (en) * 2021-09-07 2021-12-07 福州迈新生物技术开发有限公司 Method for generating training data based on immunohistochemistry and storage device
CN113762379B (en) * 2021-09-07 2022-06-07 福州迈新生物技术开发有限公司 Method for generating training data based on immunohistochemistry and storage device
WO2023035728A1 (en) * 2021-09-07 2023-03-16 福州迈新生物技术开发有限公司 Method for generating training data based on immunohistochemistry, and storage device
CN115908954A (en) * 2023-03-01 2023-04-04 四川省公路规划勘察设计研究院有限公司 Geological disaster hidden danger identification system and method based on artificial intelligence and electronic equipment
CN117173485A (en) * 2023-09-18 2023-12-05 西安交通大学医学院第二附属医院 Intelligent classification system method and system for lung cancer tissue pathological images
CN117173485B (en) * 2023-09-18 2024-02-13 西安交通大学医学院第二附属医院 Intelligent classification system method and system for lung cancer tissue pathological images

Similar Documents

Publication Publication Date Title
CN109376777A (en) Cervical cancer tissues pathological image analysis method and equipment based on deep learning
Man et al. Classification of breast cancer histopathological images using discriminative patches screened by generative adversarial networks
Gertych et al. Machine learning approaches to analyze histological images of tissues from radical prostatectomies
Dov et al. Weakly supervised instance learning for thyroid malignancy prediction from whole slide cytopathology images
Beevi et al. Automatic mitosis detection in breast histopathology images using convolutional neural network based deep transfer learning
CN104933711B (en) A kind of automatic fast partition method of cancer pathology image
JP5315411B2 (en) Mitotic image detection device and counting system, and method for detecting and counting mitotic images
CN112215117A (en) Abnormal cell identification method and system based on cervical cytology image
CN111985536A (en) Gastroscope pathological image classification method based on weak supervised learning
Dov et al. Thyroid cancer malignancy prediction from whole slide cytopathology images
CN108416379A (en) Method and apparatus for handling cervical cell image
US20200372638A1 (en) Automated screening of histopathology tissue samples via classifier performance metrics
CN107871314A (en) A kind of sensitive image discrimination method and device
WO2013019856A1 (en) Automated malignancy detection in breast histopathological images
Zhang et al. Automatic detection of invasive ductal carcinoma based on the fusion of multi-scale residual convolutional neural network and SVM
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
BenTaieb et al. Automatic diagnosis of ovarian carcinomas via sparse multiresolution tissue representation
Rampun et al. Breast density classification using local ternary patterns in mammograms
Abbasi-Sureshjani et al. Molecular subtype prediction for breast cancer using H&E specialized backbone
Zhang et al. Research on application of classification model based on stack generalization in staging of cervical tissue pathological images
Tsaku et al. Texture-based deep learning for effective histopathological cancer image classification
CN113420793A (en) Improved convolutional neural network ResNeSt 50-based gastric ring cell carcinoma classification method
CN111680553A (en) Pathological image identification method and system based on depth separable convolution
Arar et al. High-quality immunohistochemical stains through computational assay parameter optimization
Guo et al. Pathological Detection of Micro and Fuzzy Gastric Cancer Cells Based on Deep Learning.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191122

Address after: 610043 No.129, middle Juqiao street, Wuhou District, Chengdu, Sichuan Province

Applicant after: Sichuan Smart Motion Muniu Intelligent Technology Co., Ltd.

Address before: West high tech Zone Fucheng Road in Chengdu city of Sichuan province 610000 399 No. 7 Building 1 unit 11 floor No. 1107

Applicant before: SICHUAN MONIULIUMA INTELLIGENT TECHNOLOGY CO., LTD.

WW01 Invention patent application withdrawn after publication

Application publication date: 20190222
