CN109300530B - Pathological picture identification method and device - Google Patents
Pathological picture identification method and device
- Publication number
- CN109300530B CN201810896157.5A CN201810896157A
- Authority
- CN
- China
- Prior art keywords
- neural network
- pathological
- deep neural
- training
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Primary Health Care (AREA)
- Biomedical Technology (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a pathological picture identification method and a pathological picture identification device, wherein the method comprises the following steps: acquiring a pathological picture to be recognized; inputting the pathological picture to be recognized into a plurality of different types of deep neural network models generated by pre-training, and recognizing the pathological picture to be recognized, each type of deep neural network model obtaining a preliminary recognition result, wherein the plurality of deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples; and fusing the preliminary recognition results obtained by the plurality of different types of deep neural network models to obtain a final recognition result of the pathological picture to be recognized. According to the technical scheme, the efficiency and the accuracy of pathological picture identification are improved.
Description
Technical Field
The invention relates to the technical field of medical treatment, in particular to a pathological picture identification method and device.
Background
Lymph node metastasis is the most common mode of tumor metastasis. Radical resection of advanced gastric cancer involves complete resection of the primary focus of gastric cancer, the metastatic lymph nodes, and the affected tissues and organs. Post-operative pathological diagnosis is the gold standard for gastric cancer diagnosis and provides an important basis for staging and treatment of patients. In pathological diagnosis, assessing whether the lymph nodes contain metastases is particularly critical: a pathologist needs to carefully observe each lymph node one by one, so the whole process is time-consuming and labor-intensive, depends on experience, and is not ideal in accuracy, and doctors with different experience may reach different identification conclusions on the same pathological picture.
Disclosure of Invention
The embodiment of the invention provides a pathological picture identification method, which is used for improving the efficiency and accuracy of pathological picture identification and comprises the following steps:
acquiring a pathological picture to be identified;
inputting the pathological picture to be recognized into a plurality of different types of deep neural network models generated by pre-training, recognizing the pathological picture to be recognized, and obtaining a preliminary recognition result by each type of deep neural network model; the deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples;
and fusing the primary recognition results obtained by the deep neural network models of different types to obtain the final recognition result of the pathological picture to be recognized.
The embodiment of the invention also provides a device for identifying the pathological picture, which is used for improving the efficiency and the accuracy of pathological picture identification, and comprises the following components:
the acquiring unit is used for acquiring a pathological picture to be identified;
the recognition unit is used for inputting the pathological pictures to be recognized into a plurality of different types of deep neural network models generated by pre-training, recognizing the pathological pictures to be recognized, and obtaining a primary recognition result by each type of deep neural network model; the deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples;
and the fusion unit is used for fusing the primary recognition results obtained by the deep neural network models of different types to obtain the final recognition result of the pathological picture to be recognized.
The embodiment of the invention also provides computer equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the identification method of the pathological picture when executing the computer program.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program for executing the above pathological image identification method is stored.
According to the technical scheme provided by the embodiment of the invention, a pathological picture to be recognized is obtained first; the pathological picture to be recognized is then input into a plurality of different types of deep neural network models generated by pre-training and is recognized, each type of deep neural network model obtaining a preliminary recognition result, where the plurality of deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples; finally, the preliminary recognition results obtained by the plurality of different types of deep neural network models are fused to obtain the final recognition result of the pathological picture to be recognized. The trained deep neural network models have the capability of automatically recognizing pathological pictures: when a pathological picture is input into a deep neural network model, the regions with malignant lesions on the pathological picture can be recognized, so that benign/malignant classification of the pathological picture is realized. The whole process is time-saving and labor-saving, improves the efficiency of pathological picture recognition, and does not depend on the personal experience of doctors; the final recognition result is obtained by fusing the preliminary recognition results of the different types of deep neural network models, which greatly improves the accuracy of pathological picture recognition.
The technical scheme provided by the embodiment of the invention can be applied to the identification of the gastric lymph node cancer metastasis pathological picture and can also be applied to the identification of other cancer pathological pictures.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts. In the drawings:
fig. 1 is a schematic flowchart of a method for identifying a pathological image according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an embodiment of a method for identifying a pathological image according to the present invention;
FIG. 3 is a schematic diagram illustrating a multi-GPU parallel training principle according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a pathological image recognition device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
Before describing embodiments of the present invention, the terminology related to the present invention will be described first.
1. False positive rate: the proportion, among all negative samples, of samples that are actually negative but are predicted to be positive by the model.
2. False negative rate: the proportion, among all positive samples, of samples that are actually positive but are predicted to be negative by the model.
3. Accuracy: accuracy = (number of correctly predicted samples) / (total number of samples). A computational sketch of these three patch-level metrics is given after this list of terms.
4. Training set: digital pathological pictures of gastric lymph node cancer metastasis with labeled cancer cell areas (lesion areas) that are input to model training.
5. Test set: digital pathological pictures of gastric lymph node cancer metastasis with labeled cancer cell areas (lesion areas) that are not input to model training.
6. Verification set: digital pathological pictures of gastric lymph node cancer metastasis without labeled cancer cell areas (lesion areas).
7. Transfer learning: migrating trained model parameters to a new model to accelerate the training of the new model.
8. scn file: a medical picture storage format that requires special processing when read.
9. Top-5 error rate: ImageNet images usually have 1000 possible categories; 5 category labels are predicted simultaneously for each image, and the prediction is counted as correct if any one of the 5 labels is correct and as wrong only when all 5 are wrong; the resulting classification error rate is called the top-5 error rate.
10. HE staining: the hematoxylin-eosin staining method. The hematoxylin staining solution is alkaline and mainly stains the chromatin in the cell nucleus and the nucleic acids in the cytoplasm blue; eosin is an acid dye that mainly stains components in the cytoplasm and the extracellular matrix red.
11. Negative sample: a sample that does not contain cancer cells, i.e., a pathological picture of normal tissue or a benign lesion.
12. Positive sample: a sample that contains cancer cells, i.e., a pathological picture of a malignant lesion.
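To make the patch-level metric definitions in terms 1-3 above concrete, a minimal computational sketch follows; the function and variable names are illustrative and not part of the patent:

```python
# Patch-level metrics as defined above; labels use 1 = positive patch
# (contains cancer cells) and 0 = negative patch.
def patch_metrics(true_labels, predicted_labels):
    tp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == 1 and p == 0)

    false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0  # among all negative samples
    false_negative_rate = fn / (fn + tp) if (fn + tp) else 0.0  # among all positive samples
    accuracy = (tp + tn) / len(true_labels)                     # correct predictions / total samples
    return false_positive_rate, false_negative_rate, accuracy
```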
The embodiment of the invention is aimed at the following scenario: pathological sections of the lymph nodes dissected during surgery on a gastric cancer patient are diagnosed as containing or not containing cancer cells according to various characteristics of the lymph nodes on the pathological tissue section, and if cancer cells exist, their position is displayed.
With the increasing proportion of people aged 60 and older in the general population, the number of people suffering from cancer will increase rapidly according to the cancer incidence rate in the population. This will put more strain on medical resources. In the course of cancer diagnosis, pathological diagnosis is the gold standard for a definitive diagnosis. The traditional diagnosis of lymph node cancer metastasis requires a pathologist to repeatedly observe lymph nodes under a microscope to determine the number of lymph nodes and the presence or absence of cancer metastasis in each lymph node. Limited by the doctor's experience and fatigue, misdiagnosis and missed diagnosis occur with a certain probability. The accuracy of the present method reaches 99.80% at the patch level and the false positive rate is below 0.06% at the patch level, which effectively assists doctors in diagnosis, reduces their misdiagnosis and missed-diagnosis rates, and ultimately improves the medical experience of patients.
Next, the process by which the inventors identified the technical problems and arrived at the embodiments of the present invention is described.
Since the 1970s, machine learning technology has developed continuously and rapidly and has improved human productivity. Throughout its development, machine learning has been constrained by hardware performance and the amount of effective data. Around 2010, hardware performance improved greatly and a large amount of high-quality data had accumulated, which promoted deep learning, an important branch of machine learning, to make great breakthroughs in algorithms and applications. In image processing, deep learning models have made striding progress in tasks such as classification, detection and segmentation. When the amount of effective data is large, modeling the data with a suitable deep learning model usually outperforms traditional machine learning, and deep learning models have transfer-learning capability across different data sets, which significantly reduces the cost of the feature engineering required in traditional machine learning. In the present invention the inventors therefore mainly use deep neural network models to model the data.
At present, the diagnosis of medical digital pathological images generally follows a rough pipeline of patch cutting, classification model training, and prediction. In this pipeline, the cut patches mainly adopt three specifications: 256 × 256, 512 × 512 and 1024 × 1024. The classification model mainly adopts models with good Top-5 accuracy on the academic data set ImageNet, such as the Inception series, the ResNet series and the VGG series. During prediction, a single trained model is used to obtain the result for new data. Because the ImageNet data set and digital pathological images have many similarities in terms of feature extraction, a model that works well on ImageNet can also obtain good results when applied to a digital pathological image data set. During model training/testing, the existing implementation scheme continuously adjusts the model according to the existing data to achieve a better effect.
However, the inventors found the following technical problems in the existing solutions and proposed corresponding solutions to them:
A. When determining the model to be used, only one deep model is selected for training and testing, subject to code implementation and hardware limitations. Limited by the expression capability of a single model of a given depth, such a model optimizes one performance index while lowering other performance indexes. Considering this technical problem, the inventors propose the following technical solution: first, 6 models are selected for training and the characteristics of each model are analyzed; then the results of 3 of the models are combined to obtain the final result, so that the false positive rate is controlled at 0.06% at the patch level while the false negative rate is kept below 0.3% and the overall accuracy above 99.8%.
B. When model training is carried out with a single GPU, the training speed is low and the parallel computing advantage of multiple GPUs is not fully utilized. Considering this technical problem, the inventors propose training with multiple GPUs, so that, for the same amount of training data, the model training time is reduced, the model debugging time in the training phase is shortened, and developers' time is saved.
C. In the model improvement stage, the existing implementation scheme only mines more information from the existing data to improve the model. If a cancer cell type that the model has never processed is encountered, the model cannot recognize it, and the iteration direction of the model does not fit actual pathological diagnosis. Considering this technical problem, the inventors propose the following technical solution: by continuously incorporating medical professional knowledge and supplementing digital pathological pictures that the model has not processed, the model can recognize more cancer cell morphologies, the missed-detection rate is effectively reduced, and the direction of model iteration better meets the recognition requirements of pathological pictures, i.e., the actual needs of pathological diagnosis.
The approach proposed by the inventors relates to cancer cell detection based on deep learning classification models (deep neural network models). The flow of the method may comprise: 1. obtaining positive patch samples (positive samples) and negative patch samples (negative samples) from digital pathology files in scn format; 2. comprehensively analyzing the conditions of the negative and positive patch samples and selecting 6 appropriate deep learning classification models so as to make full use of the advantages of different models; 3. training the 6 selected deep neural network models with the training set data in a multi-GPU environment, so that each model can predict whether each patch of each lymph node in a digital pathological image is benign or malignant; 4. testing the performance of each model in detail on the test set, including the false positive rate, false negative rate and accuracy; 5. selecting 3 models according to the performance of each model and fusing the results of the 3 models on the verification set, so as to improve accuracy and reduce the false positive and false negative rates; 6. analyzing the final prediction results, fully combining the professional knowledge of doctors, supplementing new training data, and iterating the model in a targeted way to further improve the recognition accuracy. The identification scheme of the pathological picture is described in detail below.
Fig. 1 is a schematic flow chart of a method for recognizing a pathological image according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step 101: acquiring a pathological picture to be identified;
step 102: inputting the pathological picture to be recognized into a plurality of different types of deep neural network models generated by pre-training, recognizing the pathological picture to be recognized, and obtaining a preliminary recognition result by each type of deep neural network model; the multiple deep neural network models of different types are generated according to the pre-training of multiple pathological image samples;
step 103: and fusing the primary recognition results obtained by the deep neural network models of different types to obtain the final recognition result of the pathological picture to be recognized.
According to the technical scheme provided by the embodiment of the invention, a pathological picture to be recognized is obtained first; the pathological picture to be recognized is then input into a plurality of different types of deep neural network models generated by pre-training and is recognized, each type of deep neural network model obtaining a preliminary recognition result, where the plurality of deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples; finally, the preliminary recognition results obtained by the plurality of different types of deep neural network models are fused to obtain the final recognition result of the pathological picture to be recognized. The trained deep neural network models have the capability of automatically recognizing pathological pictures: when a pathological picture is input into a deep neural network model, the regions with malignant lesions on the pathological picture can be recognized, so that benign/malignant classification of the pathological picture is realized. The whole process is time-saving and labor-saving, improves the efficiency of pathological picture recognition, and does not depend on the personal experience of doctors; the final recognition result is obtained by fusing the preliminary recognition results of the different types of deep neural network models, which greatly improves the accuracy of pathological picture recognition.
The following describes each step of the identification method of pathological images in the embodiment of the present invention in detail with reference to fig. 2.
First, a process of generating a plurality of different types of deep neural network models by pre-training is introduced.
In one embodiment, the plurality of deep neural network models of different types are generated by training in advance according to the following method:
obtaining sample data, wherein the sample data comprises a positive sample and a negative sample, the positive sample is a malignant lesion pathological picture, the negative sample is a normal or benign lesion pathological picture, and a lesion area is marked on the malignant lesion pathological picture;
dividing the sample data into a training set, a test set and a verification set;
training a plurality of different types of deep neural network models in a first set by using the training set;
testing a plurality of different types of deep neural network models in the trained first set by using the test set;
according to the test result, screening out a plurality of deep neural network models from a plurality of different types of deep neural network models in the first set to serve as a second set;
and performing fusion verification on the plurality of different types of deep neural network models in the second set by using the verification set to obtain a plurality of different types of deep neural network models generated by pre-training.
In specific implementation, the process of acquiring training data (sample data) is first introduced: digital pathological pictures of gastric lymph node cancer metastasis are stored and labeled as follows.
according to the medical clinical standard operation, the lymph node tissue of a gastric cancer patient is firstly dyed in an HE dyeing mode to be made into pathological sections. Then, a digital pathological picture is obtained by 40 times of scanning by a digital pathological scanner and is stored on a magnetic disk medium, and a scn-format picture is obtained. The physical size range of one scn picture stored on a disk is 0.4 GB-8 GB, and the order of magnitude of pixels is 10^ 9-10 ^ 10. The digital pathological pictures are labeled by a pathologist with good professional ability by using Imagescope software, and the cancer area is sketched out. The sketched data is saved as an xml tag file in a specific format for program reading.
Next, the preprocessing performed on the sample data after it is acquired, i.e., obtaining positive/negative patches (positive/negative samples), is introduced:
in one embodiment, after obtaining the sample data, the sample data is further preprocessed as follows:
for each negative sample, the following pre-treatments were performed:
converting the normal or benign pathological picture from an RGB color format into an HSV color format;
adjusting the saturation of the normal or benign pathological picture with the HSV color format to a preset threshold;
extracting a plurality of patch pictures with preset pixel sizes from the foreground cell area of the normal or benign pathological picture after the saturation is adjusted to a preset threshold value;
judging a first proportion of a foreground contained in a patch picture with a preset pixel size to the whole patch picture, and deleting the patch picture with the preset pixel size when the first proportion is smaller than a first preset proportion value;
for each positive sample, the following pre-treatments were performed:
extracting a plurality of patch pictures with preset step lengths from a lesion area marked on a pathological picture of a malignant lesion;
judging a second proportion of the foreground contained in the patch picture with the preset step length to the whole patch picture, and deleting the patch picture with the preset step length when the second proportion is smaller than a second preset proportion value; the foreground contained in the patch picture with the preset step length is a lesion area.
In specific implementation, the first preset ratio value and the second preset ratio value may be flexibly set according to actual working requirements, and may be the same as or different from each other, for example, 0.85 mentioned below. The predetermined pixel size may be 224 × 224 pixels mentioned below, or may be 112 × 112, 128 × 128, 256 × 256, 512 × 512 pixels or similar. The preset step size may be 112 × 112 mentioned below, and may be 128 × 128 or similar.
In specific implementation, the number of pixels in a digital pathological picture is very large, which causes insufficient memory in the subsequent model building and training process. To overcome this hardware memory limitation, the present invention adopts patch-level classification, so this module divides the digital pathological image into patches of 224 × 224 pixels. The method comprises the following steps:
for a sample with negative whole picture, firstly converting the picture from RGB color format to HSV color format, and then determining a proper threshold value in the layer of saturation H to distinguish the cell foreground of the picture from the background blank without cells. In the foreground cell region, the corresponding patch of each foreground region is ensured according to the non-overlapping patch pictures of 224 × 224 pixels from left to right and from top to bottom. And then judging the proportion of the foreground contained in the patch to the whole patch, if the proportion is less than 0.85, indicating that the patch contains more backgrounds, and deleting the patch. And finally, the left patch is the processed data of the pathological image and is used as the input of the model in the next module. This operation is repeated for each negative pathology image sample.
For positive pathological picture samples, the xml tag file is parsed by the program; when reading the tags, different closed regions and the coordinate points describing each region need to be distinguished. The inside of the delineated region is used as the foreground, and the rest is used as the background. Since the area of the positive regions is smaller than that of the negative regions overall, in order to obtain more positive patches to balance the number of negative/positive samples, 224 × 224 pixel patches are taken in the positive regions of a positive pathological picture with an overlap of 1/2, i.e., patches are extracted sequentially from left to right and from top to bottom within the positive region with a step size of 112 × 112. At the edge of the positive region, the proportion of foreground contained in the patch is judged, and if the proportion is less than 0.85, the patch is deleted. This is repeated for each positive pathological picture sample, resulting in the data needed by the next module.
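A minimal sketch of the negative-patch extraction just described is given below: the picture is converted to HSV, the saturation channel is thresholded to separate the cell foreground from the blank background, non-overlapping 224 × 224 patches are cut, and patches whose foreground ratio is below 0.85 are discarded. The concrete saturation threshold is an assumption, since the patent only speaks of "a proper threshold"; for positive samples the same filtering applies, but patches are taken inside the delineated region with a 112-pixel step.

```python
import cv2

PATCH = 224
MIN_FOREGROUND_RATIO = 0.85
SATURATION_THRESHOLD = 20  # assumed value; the patent only requires "a proper threshold"

def extract_negative_patches(rgb_image):
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    foreground = hsv[:, :, 1] > SATURATION_THRESHOLD    # saturation separates cells from blank background

    patches = []
    height, width = foreground.shape
    for y in range(0, height - PATCH + 1, PATCH):        # top to bottom, non-overlapping
        for x in range(0, width - PATCH + 1, PATCH):     # left to right, non-overlapping
            mask = foreground[y:y + PATCH, x:x + PATCH]
            if mask.mean() >= MIN_FOREGROUND_RATIO:      # drop patches that contain too much background
                patches.append(rgb_image[y:y + PATCH, x:x + PATCH])
    return patches
```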
Then, the process of training and generating the plurality of different types of deep neural network models after preprocessing the sample data is introduced.
(1) The training/test data was divided to determine 6 models to be trained:
in an embodiment, the plurality of different types of deep neural network models in the first set includes: inception v3 model, resnet18 model, resnet34 model, resnet50 model, VGG16 model, and VGG19 model.
In specific implementation, 80% of the obtained positive and negative samples are used as the training set and 20% as the test set; the test set is used to continuously correct the model during training so that its performance reaches an ideal level. Through research on classification models and in combination with the characteristics of the data, the invention selects 6 models (the plurality of different types of deep neural network models in the first set) to be trained on the same training set: inception v3, resnet18, resnet34, resnet50, VGG16 and VGG19. These models have been widely used on academic data sets and in real classification tasks; the inventors selected the above 6 models on the basis of a large number of experiments. Each model has different characteristics: inception v3 blends different network substructures to improve model versatility; resnet18 is a shallow network and works well when the data volume is small or the data differences are small; resnet34 balances model description capability and training speed; resnet50 trains relatively fast compared with other network structures of the same depth; the VGG16 and VGG19 models have more parameters, occupy more computing resources, and have rich expression capability. Different models perform differently on this data set; by experimentally verifying the various models, the models with better effect are further optimized and the overall performance index is improved. The 6 models determined in this module provide the prerequisite for the subsequent model optimization and result fusion, and lay the foundation for the subsequent improvement of recognition accuracy.
In specific implementation, the number of models to be selected may be other numbers, or other types of models.
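By way of illustration, the six candidate models can be instantiated as in the following sketch, which uses torchvision with ImageNet-pretrained weights (consistent with the transfer learning mentioned in the terminology) and replaces the 1000-way classifier head with a 2-way negative/positive head; whether the patent's actual implementation used torchvision or pretrained weights is an assumption.

```python
import torch.nn as nn
from torchvision import models

def build_candidate_models(num_classes=2):
    candidates = {
        "inceptionv3": models.inception_v3(weights="IMAGENET1K_V1"),
        "resnet18":    models.resnet18(weights="IMAGENET1K_V1"),
        "resnet34":    models.resnet34(weights="IMAGENET1K_V1"),
        "resnet50":    models.resnet50(weights="IMAGENET1K_V1"),
        "vgg16":       models.vgg16(weights="IMAGENET1K_V1"),
        "vgg19":       models.vgg19(weights="IMAGENET1K_V1"),
    }
    # replace the ImageNet 1000-way classifier head with a 2-way (negative/positive patch) head
    for name, model in candidates.items():
        if name.startswith("resnet"):
            model.fc = nn.Linear(model.fc.in_features, num_classes)
        elif name == "inceptionv3":
            model.fc = nn.Linear(model.fc.in_features, num_classes)
            model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
        else:  # VGG variants
            model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
    return candidates
```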
(2) Model training and test iteration:
in the embodiment of the invention, the implementation of the model refers to a relevant paper and an existing implementation scheme, and selects initial network parameters according to characteristics of patch data: learning rate and trend planning thereof, patch size, prediction classification number, gradient descent optimization algorithm and initialization weight. And training and testing each model, analyzing the current performance of the model according to the test result, and adjusting the training parameters to improve each performance of the model. After 5-10 times of training and test iteration, the capability of each model is exerted to the utmost extent, and the optimal performance of the model under the data set is obtained.
In one embodiment, training a plurality of different types of deep neural network models in a first set using the training set includes: a plurality of different types of deep neural network models in the first set are trained in parallel, wherein each model is trained using 2 graphics processor GPUs.
In specific implementation, the embodiment of the invention uses multi-GPU computing resources to accelerate training, thereby shortening the overall model training and testing iteration cycle. In a multi-GPU implementation, the gradients computed by different GPUs need to be fused; the fusion modes adopted here are summation or averaging. On this data set the summation works better, so the method selects the mode of summing the gradients computed by multiple GPUs. Based on a large number of experiments, the inventors propose determining the number of GPUs used to train a single model as follows: for each model, using 2 GPUs improves the training speed by about 80%, whereas using more GPUs brings no obvious additional acceleration because of the high synchronization cost between multiple GPUs. In the training process, each model uses 2 GPUs for accelerated training, and the 6 models are trained simultaneously, so the computing resources of 12 GPUs are fully utilized.
(3) The detailed process of parallel training multiple models is described below.
In one embodiment, training each model using 2 graphics processor GPUs may include:
averagely dividing training set data into a first training data stream and a second training data stream which are not overlapped with each other;
acquiring pathological pictures with preset data volume from a first training data stream, inputting the pathological pictures into a current model, calculating the model to obtain a first loss function value, and calculating a partial derivative of each variable by the loss function to obtain a first gradient value of the variable;
acquiring pathological pictures with preset data volume from a second training data stream, inputting the pathological pictures into a current model, calculating the model to obtain a second loss function value, and calculating a partial derivative of each variable by the loss function to obtain a second gradient value of the variable;
the CPU waits for the first GPU (GPU1) and the second GPU (GPU2) to calculate the gradient values, sums the gradient values, updates the corresponding variables with the obtained gradient values to obtain new values of the variables, and transfers the updated variable values to the first GPU (GPU1) and the second GPU (GPU2) to overwrite the variable values of the original model (e.g., the VGG16 model shown in fig. 3) in the first GPU (GPU1) and the second GPU (GPU2), until training is completed.
In specific implementation, because the calculation of each model during training is independent, taking the process of training the VGG16 model on 2 GPUs as an example, the flow is shown in fig. 3:
first, the training data is divided into two non-overlapping data streams, namely training data stream 1 (the first training data stream) and training data stream 2 (the second training data stream). A fixed number of patches in data stream 1, 64 patches in this example, are fetched into the model on GPU1. The model is calculated to obtain a loss function value (the first loss function value), and the loss function calculates a partial derivative for each variable to obtain gradient value 1 (the first gradient value) of the variable. A similar operation is performed on GPU2, resulting in variable gradient value 2 (the second gradient value). After that, control is given to the CPU: the CPU waits for GPU1 and GPU2 to finish calculating the gradient values, sums the two gradient values, and then updates the corresponding variable with the obtained gradient value to obtain the new value of the variable. The CPU synchronizes the model variables on GPU1 and GPU2 each time: it passes the updated variable values to GPU1 and GPU2, overwriting the original VGG16 model variable values in GPU1 and GPU2 so that the model variable values in GPU1 and GPU2 remain consistent. The model then reads data from the data streams again, calculates the loss function, calculates the variable gradient values, and so on. The above mechanism ensures that the data in data streams 1 and 2 will be used up at the same time; at that point the data streams are refilled from the training data, and again data stream 1 and data stream 2 do not overlap and have the same data volume.
Because the CPU will wait for the gradient values calculated by GPU1 and GPU2 to be completed before performing the next gradient value summation operation, the waiting process causes time waste, and thus the training speed cannot be increased by 100% by using 2 GPUs. In the present invention, the training speed is improved by 80% using 2 GPUs.
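A minimal PyTorch sketch of the two-GPU mechanism of Fig. 3 follows: each GPU holds a replica of the model and computes gradients on its own batch from its own data stream, the gradients are summed on the CPU, the master variables are updated once, and the updated values are copied back to both replicas. This only mirrors the described flow and is not the patent's actual code; in practice PyTorch's DataParallel/DistributedDataParallel utilities could serve the same purpose.

```python
import copy
import torch
import torch.nn.functional as F

def train_two_gpu(model_cpu, loader1, loader2, optimizer, epochs=1):
    # optimizer is assumed to be built over model_cpu.parameters()
    replicas = [copy.deepcopy(model_cpu).to("cuda:0"),
                copy.deepcopy(model_cpu).to("cuda:1")]
    for _ in range(epochs):
        for (x1, y1), (x2, y2) in zip(loader1, loader2):   # two non-overlapping data streams
            grads = []
            for replica, (x, y), dev in zip(replicas,
                                            [(x1, y1), (x2, y2)],
                                            ["cuda:0", "cuda:1"]):
                replica.zero_grad()
                loss = F.cross_entropy(replica(x.to(dev)), y.to(dev))   # loss function value
                loss.backward()                                         # per-variable gradient values
                grads.append([p.grad.detach().cpu() for p in replica.parameters()])

            # CPU side: sum the two gradient values and update the master variables once
            for p_cpu, g1, g2 in zip(model_cpu.parameters(), grads[0], grads[1]):
                p_cpu.grad = g1 + g2
            optimizer.step()

            # synchronise: overwrite the replica variables with the updated values
            with torch.no_grad():
                for replica in replicas:
                    for p_rep, p_cpu in zip(replica.parameters(), model_cpu.parameters()):
                        p_rep.copy_(p_cpu.to(p_rep.device))
    return model_cpu
```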
(4) After the multiple models have been trained, the prediction process in which the 3 selected models are fused on the verification set is described below.
In one embodiment, the plurality of different types of deep neural network models in the second set includes: the resnet34 model, the VGG16 model, and the VGG19 model.
In specific implementation, the performances of the 6 models are compared; the resnet18, resnet50 and inception v3 models with poorer experimental effect are eliminated, and the resnet34 model with a low false negative rate and the VGG16 and VGG19 models with low false positive rates are selected. The results of the 3 models are fused as follows: if all 3 models predict that a patch is negative, the final prediction result for the patch is negative; if all 3 models predict that a patch is positive, the final prediction result is positive; if the 3 models give inconsistent predictions for the same patch, the final prediction result is negative.
The data of the verification set is predicted by the above method, the patch prediction results are mapped onto the digital pathological picture, the positions of the patches predicted positive by the models are marked on the pathological picture, and a visual presentation is produced in preparation for the result analysis of the next module.
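The fusion rule just described amounts to taking the intersection of the positive predictions of the three selected models, as the following sketch shows (function and argument names are illustrative):

```python
def fuse_patch_predictions(pred_resnet34, pred_vgg16, pred_vgg19):
    # each argument is 1 for a positive (malignant) patch prediction, 0 for negative
    if pred_resnet34 == pred_vgg16 == pred_vgg19 == 1:
        return 1   # all three positive -> final result positive
    return 0       # all negative, or any disagreement -> final result negative
```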
Secondly, a process of predicting by using a plurality of different types of deep neural network models trained in advance is described.
After the step of acquiring the pathological picture to be identified, the preprocessing described above for sample data may also be performed on the pathological picture to be identified; preprocessing the pathological picture to be identified further improves the efficiency and accuracy of recognition.
Thirdly, the process of fusing the preliminary recognition results obtained by the plurality of different types of deep neural network models is introduced; refer to the prediction process described above, in which the 3 selected models are fused on the verification set after the multiple models are trained.
In specific implementation, the beneficial technical effects obtained from a large number of experimental results are as follows: after 3-5 iterations according to the actual situation, the embodiment of the invention controls the patch-level false positive rate at 0.06% while keeping the false negative rate below 0.3% and the overall patch-level accuracy above 99.8%.
Fourthly, a model optimization step is introduced. During subsequent use of the models, a model optimization process may also be included, which further improves the accuracy of pathological picture recognition:
when, according to the recognition results, the false positive rate on a certain type of pathological picture is judged to be high, negative samples of that type of pathological picture are supplemented to the pathological picture sample database;
when, according to the recognition results, the false negative rate on a certain type of pathological picture is judged to be high, positive samples of that type of pathological picture are supplemented to the pathological picture sample database;
performing optimization training on a plurality of different types of deep neural network models generated by the pre-training according to the supplemented pathological picture sample database to obtain a plurality of updated different types of deep neural network models;
inputting the pathological picture to be recognized into a plurality of different types of deep neural network models generated by pre-training, and recognizing the pathological picture to be recognized, which may include:
and inputting the pathological picture to be recognized into the updated deep neural network models of different types, and recognizing the pathological picture to be recognized.
In specific implementation, data characteristics have a non-negligible effect on model performance. Model prediction errors can be divided into false positives and false negatives. Analyzing from the data point of view, the commonality of the mispredicted data is found, i.e., the errors occur on a certain category of patch. If the false positive rate is high on a certain type of patch, that type of patch appears too rarely in the negative samples, and pathological pictures with that particular negative label need to be supplemented; if the false negative rate is high, the model has not learned that type of positive patch well, and positive pathological pictures need to be supplemented. Which category the mispredicted patches belong to, and therefore what data needs to be supplemented, must be determined jointly through the pathologist's professional knowledge and the algorithm engineer's analysis. In addition, since cancer cells take many forms, supplementing more diverse pathological pictures significantly improves the model effect.
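The iteration rule described above can be sketched as follows; the per-category statistics and the thresholds are illustrative assumptions, and in the patent the final decision on what data to supplement is made jointly by the pathologist and the algorithm engineer.

```python
def data_to_supplement(category_stats, fp_threshold=0.01, fn_threshold=0.01):
    # category_stats: {patch_category: {"false_positive_rate": x, "false_negative_rate": y}}
    supplements = {}
    for category, stats in category_stats.items():
        if stats["false_positive_rate"] > fp_threshold:
            supplements[category] = "supplement negative pathological pictures of this type"
        elif stats["false_negative_rate"] > fn_threshold:
            supplements[category] = "supplement positive pathological pictures of this type"
    return supplements
```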
Based on the same inventive concept, the embodiment of the present invention further provides a device for recognizing pathological images, as described in the following embodiments. Because the principle of the device for solving the problems is similar to the identification method of the pathological picture, the implementation of the device can refer to the implementation of the identification method of the pathological picture, and repeated details are not repeated.
Fig. 4 is a schematic diagram of a pathological image recognition device according to an embodiment of the present invention, and as shown in fig. 4, the device may include:
the acquiring unit 02 is used for acquiring a pathological picture to be identified;
the identification unit 04 is used for inputting the pathological pictures to be identified into a plurality of different types of deep neural network models generated by pre-training, identifying the pathological pictures to be identified, and obtaining a primary identification result by each type of deep neural network model; the deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples;
and the fusion unit 06 is configured to fuse the preliminary identification results obtained by the multiple different types of deep neural network models to obtain a final identification result of the pathological picture to be identified.
In one embodiment, the above apparatus for recognizing a pathological image may further include: the training unit is used for generating the plurality of deep neural network models of different types through pre-training according to the following method:
obtaining sample data, wherein the sample data comprises a positive sample and a negative sample, the positive sample is a malignant lesion pathological picture, the negative sample is a normal or benign lesion pathological picture, and a lesion area is marked on the malignant lesion pathological picture;
dividing the sample data into a training set, a test set and a verification set;
training a plurality of different types of deep neural network models in a first set by using the training set;
testing a plurality of different types of deep neural network models in the trained first set by using the test set;
according to the test result, screening out a plurality of deep neural network models from a plurality of different types of deep neural network models in the first set to serve as a second set;
and performing fusion verification on the plurality of different types of deep neural network models in the second set by using the verification set to obtain a plurality of different types of deep neural network models generated by pre-training.
In one embodiment, the above apparatus for recognizing a pathological image further includes a preprocessing unit, configured to:
for each negative sample, the following pre-treatments were performed:
converting the normal or benign pathological picture from an RGB color format into an HSV color format;
adjusting the saturation of the normal or benign pathological picture with the HSV color format to a preset threshold;
extracting a plurality of patch pictures with preset pixel sizes from the foreground cell area of the normal or benign pathological picture after the saturation is adjusted to a preset threshold value;
judging a first proportion of a foreground contained in a patch picture with a preset pixel size to the whole patch picture, and deleting the patch picture with the preset pixel size when the first proportion is smaller than a first preset proportion value;
for each positive sample, the following pre-treatments were performed:
extracting a plurality of patch pictures with preset step lengths from a lesion area marked on a pathological picture of a malignant lesion;
judging a second proportion of the foreground contained in the patch picture with the preset step length to the whole patch picture, and deleting the patch picture with the preset step length when the second proportion is smaller than a second preset proportion value; the foreground contained in the patch picture with the preset step length is a lesion area.
In one embodiment, the plurality of different types of deep neural network models in the first set includes: the inception v3 model, resnet18 model, resnet34 model, resnet50 model, VGG16 model, and VGG19 model;
the plurality of different types of deep neural network models in the second set includes: the resnet34 model, the VGG16 model, and the VGG19 model.
In one embodiment, training a plurality of different types of deep neural network models in a first set using the training set includes: a plurality of different types of deep neural network models in the first set are trained in parallel, wherein each model is trained using 2 graphics processor GPUs.
In one embodiment, training each model using 2 graphics processor GPUs may include:
averagely dividing training set data into a first training data stream and a second training data stream which are not overlapped with each other;
acquiring pathological pictures with preset data volume from a first training data stream, inputting the pathological pictures into a current model, calculating the model to obtain a first loss function value, and calculating a partial derivative of each variable by the loss function to obtain a first gradient value of the variable;
acquiring pathological pictures with preset data volume from a second training data stream, inputting the pathological pictures into a current model, calculating the model to obtain a second loss function value, and calculating a partial derivative of each variable by the loss function to obtain a second gradient value of the variable;
and the CPU waits for the first GPU and the second GPU to finish calculating the gradient values, sums the two gradient values, updates the corresponding variable by using the obtained gradient values to obtain a new value of the variable, transmits the updated variable value to the first GPU and the second GPU, and covers the original model variable value in the first GPU and the second GPU until the training is finished.
In an embodiment, the above apparatus for recognizing a pathological image may further include an optimization unit, where the optimization unit is configured to:
when, according to the recognition results, the false positive rate on a certain type of pathological picture is judged to be high, negative samples of that type of pathological picture are supplemented to the pathological picture sample database;
when, according to the recognition results, the false negative rate on a certain type of pathological picture is judged to be high, positive samples of that type of pathological picture are supplemented to the pathological picture sample database;
performing optimization training on a plurality of different types of deep neural network models generated by the pre-training according to the supplemented pathological picture sample database to obtain a plurality of updated different types of deep neural network models;
inputting the pathological pictures to be recognized into a plurality of different types of deep neural network models generated by pre-training, and recognizing the pathological pictures to be recognized, wherein the method comprises the following steps:
and inputting the pathological picture to be recognized into the updated deep neural network models of different types, and recognizing the pathological picture to be recognized.
The embodiment of the invention also provides computer equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the identification method of the pathological picture when executing the computer program.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program for executing the above pathological image identification method is stored.
In summary, artificial intelligence technology is emerging again; with big data, new algorithms and the development of cloud computing, training deep neural network models has become possible, and artificial intelligence will have a far-reaching influence on various industries, "artificial intelligence + medical treatment" certainly among them. The embodiment of the invention uses the advantages of artificial intelligence in picture classification and combines them with traditional medical practice, so that pathological pictures can be correctly classified.
The technical scheme provided by the embodiment of the invention can be applied to the identification of the gastric lymph node cancer metastasis pathological picture and can also be applied to the identification of other cancer pathological pictures.
The technical scheme provided by the embodiment of the invention has the following beneficial technical effects:
① 6 models are selected in the training stage, 3 of them are preferred, and their results are fused: 6 models are selected for training with full consideration of the characteristics of each model; 3 models are then preferred according to the false positive rate and false negative rate, and the intersection of the positive prediction results of the 3 models is taken to obtain the final result. The fusion of multiple models effectively exploits the advantages of each model and thus improves the overall performance.
② During training, a multi-GPU acceleration scheme is adopted: with multiple GPUs in parallel, the data is distributed to different GPUs for gradient calculation, and the multi-GPU results are then synchronized.
③ The stage of analyzing the model results is closely combined with the doctor's professional knowledge and fits actual pathological diagnosis.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (7)
1. A pathological picture recognition device, comprising:
the acquiring unit is used for acquiring a pathological picture to be identified;
the recognition unit is used for inputting the pathological pictures to be recognized into a plurality of different types of deep neural network models generated by pre-training, recognizing the pathological pictures to be recognized, and obtaining a primary recognition result by each type of deep neural network model; the deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples;
the fusion unit is used for fusing the primary recognition results obtained by the deep neural network models of different types to obtain the final recognition result of the pathological picture to be recognized;
the identification device of the pathological picture further comprises: the training unit is used for generating the plurality of deep neural network models of different types through pre-training according to the following method:
obtaining sample data, wherein the sample data comprises a positive sample and a negative sample, the positive sample is a malignant lesion pathological picture, the negative sample is a normal or benign lesion pathological picture, and a lesion area is marked on the malignant lesion pathological picture;
dividing the sample data into a training set, a test set and a verification set;
training a plurality of different types of deep neural network models in a first set by using the training set;
testing a plurality of different types of deep neural network models in the trained first set by using the test set;
according to the test results, selecting a plurality of deep neural network models from the plurality of different types of deep neural network models in the first set to form a second set;
performing fusion verification on the plurality of different types of deep neural network models in the second set by using the verification set, to obtain the plurality of different types of deep neural network models generated by pre-training;
performing fusion verification on the plurality of different types of deep neural network models in the second set by using the verification set comprises: when the predictions of the plurality of different types of deep neural network models are all negative, the final prediction result is negative; when the predictions of the plurality of different types of deep neural network models are all positive, the final prediction result is positive; when the prediction results of the plurality of different types of deep neural network models are inconsistent, the final prediction result is negative;
training a plurality of different types of deep neural network models in a first set using the training set, including: training a plurality of different types of deep neural network models in the first set in parallel, wherein each model is trained using 2 Graphics Processing Units (GPUs);
training each model using 2 Graphics Processing Units (GPUs) includes:
evenly dividing the training set data into a first training data stream and a second training data stream that do not overlap with each other;
acquiring, on the first GPU, pathological pictures of a preset data volume from the first training data stream, inputting them into the current model, computing the model to obtain a first loss function value, and computing the partial derivative of the loss function with respect to each variable to obtain a first gradient value of the variable;
acquiring, on the second GPU, pathological pictures of a preset data volume from the second training data stream, inputting them into the current model, computing the model to obtain a second loss function value, and computing the partial derivative of the loss function with respect to each variable to obtain a second gradient value of the variable;
the CPU waits for the first GPU and the second GPU to finish calculating their gradient values, sums the gradient values, updates the corresponding variables with the summed gradient values to obtain new variable values, and transmits the updated variable values to the first GPU and the second GPU to overwrite the original model variable values; this process is repeated until training is finished;
the pathological picture identification device is applied to the identification of gastric lymph node cancer metastasis pathological pictures;
the identification device of the pathological picture further comprises a preprocessing unit, wherein the preprocessing unit is used for:
for each negative sample, the following preprocessing is performed:
converting the normal or benign lesion pathological picture from an RGB color format into an HSV color format;
adjusting the saturation of the normal or benign lesion pathological picture in the HSV color format to a preset threshold;
extracting a plurality of patch pictures of a preset pixel size from the foreground cell area of the normal or benign lesion pathological picture whose saturation has been adjusted to the preset threshold;
determining a first proportion of the foreground contained in each patch picture of the preset pixel size relative to the whole patch picture, and deleting the patch picture when the first proportion is smaller than a first preset proportion value;
for each positive sample, the following preprocessing is performed:
extracting, at a preset stride, a plurality of patch pictures from the lesion area marked on the malignant lesion pathological picture;
determining a second proportion of the foreground contained in each patch picture extracted at the preset stride relative to the whole patch picture, and deleting the patch picture when the second proportion is smaller than a second preset proportion value; the foreground contained in the patch picture extracted at the preset stride is the lesion area.
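As an illustration of the fusion rule recited in claim 1 above (unanimous negatives give a negative, unanimous positives give a positive, and any disagreement falls back to negative), a minimal Python sketch follows; representing each model's prediction as a boolean and the function name are assumptions made for the example, not part of the claimed scheme.

```python
def fuse_predictions(predictions):
    """Fuse per-model predictions (True = positive, False = negative).

    Positive only when every model predicts positive; unanimous negatives
    or any disagreement between models yields a negative final result.
    """
    predictions = list(predictions)
    if not predictions:
        raise ValueError("at least one model prediction is required")
    return all(predictions)


# Example: two models positive, one negative -> inconsistent -> negative.
print(fuse_predictions([True, True, False]))  # False
print(fuse_predictions([True, True, True]))   # True
```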
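The two-GPU training procedure in claim 1, where each GPU computes a loss and per-variable gradients on its own non-overlapping data stream while the CPU sums the gradients, updates the variables, and sends the new values back to both GPUs, amounts to synchronous data-parallel training. The PyTorch-style sketch below is one possible reading of that procedure; the model_fn builder, the learning rate, and the data streams are placeholder assumptions rather than the patent's implementation.

```python
import torch
import torch.nn as nn


def train_two_gpu(model_fn, stream_a, stream_b, lr=0.01, steps=100):
    """Synchronous data-parallel training on two GPUs with CPU-side aggregation.

    model_fn builds a fresh model; stream_a / stream_b are iterators over
    (inputs, labels) batches drawn from the two non-overlapping data streams.
    """
    master = model_fn()  # master copy of the variables lives on the CPU
    replicas = [model_fn().to(f"cuda:{i}") for i in range(2)]
    criterion = nn.CrossEntropyLoss()

    for _ in range(steps):
        summed_grads = None
        for replica, stream, device in zip(replicas, (stream_a, stream_b),
                                           ("cuda:0", "cuda:1")):
            # Each replica starts from the current master variable values.
            replica.load_state_dict(master.state_dict())
            inputs, labels = next(stream)
            loss = criterion(replica(inputs.to(device)), labels.to(device))
            replica.zero_grad()
            loss.backward()
            # Gather this replica's gradients on the CPU.
            grads = [p.grad.detach().cpu() for p in replica.parameters()]
            summed_grads = grads if summed_grads is None else [
                a + b for a, b in zip(summed_grads, grads)]

        # The CPU sums both GPUs' gradients and updates the variables; the
        # updated values are copied back to the GPUs at the next iteration.
        with torch.no_grad():
            for p, g in zip(master.parameters(), summed_grads):
                p -= lr * g
    return master
```

Summing rather than averaging the two gradients follows the claim wording; in practice the learning rate would simply be scaled to match whichever convention is used.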
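The negative-sample preprocessing in claim 1 (RGB to HSV conversion, a saturation threshold to locate foreground tissue, fixed-size patch extraction, and a foreground-ratio filter) can be sketched roughly as below; the input is assumed to be an RGB numpy array, and the patch size, saturation threshold, and minimum foreground ratio are illustrative values rather than the patent's presets.

```python
import cv2


def extract_negative_patches(rgb_image, patch_size=256, sat_threshold=20,
                             min_foreground_ratio=0.5):
    """Cut foreground patches from a normal/benign slide image (rough sketch)."""
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    # Saturated pixels roughly correspond to stained tissue (foreground);
    # pale, low-saturation pixels correspond to glass/background.
    foreground = hsv[:, :, 1] > sat_threshold

    patches = []
    height, width = foreground.shape
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            window = foreground[y:y + patch_size, x:x + patch_size]
            # "First proportion": share of the patch occupied by foreground.
            if float(window.mean()) >= min_foreground_ratio:
                patches.append(rgb_image[y:y + patch_size, x:x + patch_size])
    return patches
```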
2. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements a pathological picture identification method comprising:
acquiring a pathological picture to be identified;
inputting the pathological picture to be recognized into a plurality of different types of deep neural network models generated by pre-training, recognizing the pathological picture to be recognized, and obtaining a preliminary recognition result by each type of deep neural network model; the deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples;
fusing the preliminary recognition results obtained by the plurality of different types of deep neural network models to obtain a final recognition result of the pathological picture to be recognized;
the plurality of different types of deep neural network models are generated by pre-training according to the following method:
obtaining sample data, wherein the sample data comprises a positive sample and a negative sample, the positive sample is a malignant lesion pathological picture, the negative sample is a normal or benign lesion pathological picture, and a lesion area is marked on the malignant lesion pathological picture;
dividing the sample data into a training set, a test set and a verification set;
training a plurality of different types of deep neural network models in a first set by using the training set;
testing a plurality of different types of deep neural network models in the trained first set by using the test set;
according to the test results, selecting a plurality of deep neural network models from the plurality of different types of deep neural network models in the first set to form a second set;
performing fusion verification on the plurality of different types of deep neural network models in the second set by using the verification set, to obtain the plurality of different types of deep neural network models generated by pre-training;
performing fusion verification on the plurality of different types of deep neural network models in the second set by using the verification set comprises: when the predictions of the plurality of different types of deep neural network models are all negative, the final prediction result is negative; when the predictions of the plurality of different types of deep neural network models are all positive, the final prediction result is positive; when the prediction results of the plurality of different types of deep neural network models are inconsistent, the final prediction result is negative;
training a plurality of different types of deep neural network models in a first set using the training set, including: training a plurality of different types of deep neural network models in the first set in parallel, wherein each model is trained using 2 Graphics Processing Units (GPUs);
training each model using 2 Graphics Processing Units (GPUs) includes:
evenly dividing the training set data into a first training data stream and a second training data stream that do not overlap with each other;
acquiring, on the first GPU, pathological pictures of a preset data volume from the first training data stream, inputting them into the current model, computing the model to obtain a first loss function value, and computing the partial derivative of the loss function with respect to each variable to obtain a first gradient value of the variable;
acquiring, on the second GPU, pathological pictures of a preset data volume from the second training data stream, inputting them into the current model, computing the model to obtain a second loss function value, and computing the partial derivative of the loss function with respect to each variable to obtain a second gradient value of the variable;
the CPU waits for the first GPU and the second GPU to finish calculating their gradient values, sums the gradient values, updates the corresponding variables with the summed gradient values to obtain new variable values, and transmits the updated variable values to the first GPU and the second GPU to overwrite the original model variable values; this process is repeated until training is finished;
the pathological picture identification method is applied to the identification of gastric lymph node cancer metastasis pathological pictures;
after the sample data is obtained, the sample data is further preprocessed in the following manner:
for each negative sample, the following preprocessing is performed:
converting the normal or benign lesion pathological picture from an RGB color format into an HSV color format;
adjusting the saturation of the normal or benign lesion pathological picture in the HSV color format to a preset threshold;
extracting a plurality of patch pictures of a preset pixel size from the foreground cell area of the normal or benign lesion pathological picture whose saturation has been adjusted to the preset threshold;
determining a first proportion of the foreground contained in each patch picture of the preset pixel size relative to the whole patch picture, and deleting the patch picture when the first proportion is smaller than a first preset proportion value;
for each positive sample, the following preprocessing is performed:
extracting, at a preset stride, a plurality of patch pictures from the lesion area marked on the malignant lesion pathological picture;
determining a second proportion of the foreground contained in each patch picture extracted at the preset stride relative to the whole patch picture, and deleting the patch picture when the second proportion is smaller than a second preset proportion value; the foreground contained in the patch picture extracted at the preset stride is the lesion area.
3. The computer device of claim 2, wherein the plurality of different types of deep neural network models in the first set comprises: an Inception v3 model, a ResNet18 model, a ResNet34 model, a ResNet50 model, a VGG16 model, and a VGG19 model;
the plurality of different types of deep neural network models in the second set comprises: the ResNet34 model, the VGG16 model, and the VGG19 model.
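One minimal way to instantiate the architectures named in this claim is through torchvision, as sketched below; the two-class output heads and the use of randomly initialized (non-pretrained) weights are assumptions for illustration, since the claim does not specify those details.

```python
import torch.nn as nn
from torchvision import models


def build_first_set(num_classes=2):
    """Instantiate the first-set architectures and fit two-class output heads."""
    first_set = {
        "inception_v3": models.inception_v3(aux_logits=False, init_weights=True),
        "resnet18": models.resnet18(),
        "resnet34": models.resnet34(),
        "resnet50": models.resnet50(),
        "vgg16": models.vgg16(),
        "vgg19": models.vgg19(),
    }
    for name, net in first_set.items():
        if name.startswith("resnet") or name == "inception_v3":
            net.fc = nn.Linear(net.fc.in_features, num_classes)
        else:  # VGG variants: swap the last layer of the classifier Sequential
            net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, num_classes)
    return first_set


first_set = build_first_set()
# The second set retained after testing: ResNet34, VGG16 and VGG19.
second_set = {name: first_set[name] for name in ("resnet34", "vgg16", "vgg19")}
```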
4. The computer device of claim 2, wherein the identification method of the pathology picture further comprises:
when it is judged, according to the identification results, that the false positive rate for a type of pathological picture to be identified is high, supplementing negative samples of that type of pathological picture into a pathological picture sample database;
when it is judged, according to the identification results, that the false negative rate for a type of pathological picture to be identified is high, supplementing positive samples of that type of pathological picture into the pathological picture sample database;
performing optimization training on the plurality of different types of deep neural network models generated by the pre-training according to the supplemented pathological picture sample database, to obtain a plurality of updated deep neural network models of different types;
inputting the pathological picture to be recognized into the plurality of different types of deep neural network models generated by pre-training and recognizing the pathological picture to be recognized comprises:
inputting the pathological picture to be recognized into the updated deep neural network models of different types, and recognizing the pathological picture to be recognized.
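The feedback loop in this claim, collecting extra negative samples where false positives are frequent and extra positive samples where false negatives are frequent and then re-training the retained models on the enlarged sample database, might look like the following sketch; the list-backed sample_db and the finetune_fn routine are assumed placeholders, not anything specified by the patent.

```python
def update_models_with_hard_examples(model_dict, sample_db, new_patches, label, finetune_fn):
    """Supplement the sample database and re-train every retained model (a sketch).

    label is 0 for supplemented negative samples (high false-positive cases)
    or 1 for supplemented positive samples (high false-negative cases).
    """
    # Supplement the pathological picture sample database.
    sample_db.extend((patch, label) for patch in new_patches)

    # Optimization training on the enlarged database; the updated models are
    # then used for subsequent recognition of pathological pictures.
    return {name: finetune_fn(model, sample_db) for name, model in model_dict.items()}
```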
5. A computer-readable storage medium, characterized in that it stores a computer program which, when executed, performs a pathological picture identification method comprising:
acquiring a pathological picture to be identified;
inputting the pathological picture to be recognized into a plurality of different types of deep neural network models generated by pre-training, recognizing the pathological picture to be recognized, and obtaining a preliminary recognition result by each type of deep neural network model; the deep neural network models of different types are generated by pre-training according to a plurality of pathological image samples;
fusing the preliminary recognition results obtained by the plurality of different types of deep neural network models to obtain a final recognition result of the pathological picture to be recognized;
the plurality of different types of deep neural network models are generated by pre-training according to the following method:
obtaining sample data, wherein the sample data comprises a positive sample and a negative sample, the positive sample is a malignant lesion pathological picture, the negative sample is a normal or benign lesion pathological picture, and a lesion area is marked on the malignant lesion pathological picture;
dividing the sample data into a training set, a test set and a verification set;
training a plurality of different types of deep neural network models in a first set by using the training set;
testing a plurality of different types of deep neural network models in the trained first set by using the test set;
according to the test results, selecting a plurality of deep neural network models from the plurality of different types of deep neural network models in the first set to form a second set;
performing fusion verification on the plurality of different types of deep neural network models in the second set by using the verification set, to obtain the plurality of different types of deep neural network models generated by pre-training;
performing fusion verification on the plurality of different types of deep neural network models in the second set by using the verification set comprises: when the predictions of the plurality of different types of deep neural network models are all negative, the final prediction result is negative; when the predictions of the plurality of different types of deep neural network models are all positive, the final prediction result is positive; when the prediction results of the plurality of different types of deep neural network models are inconsistent, the final prediction result is negative;
training a plurality of different types of deep neural network models in a first set using the training set, including: training a plurality of different types of deep neural network models in the first set in parallel, wherein each model is trained using 2 Graphics Processing Units (GPUs);
training each model using 2 Graphics Processing Units (GPUs) includes:
evenly dividing the training set data into a first training data stream and a second training data stream that do not overlap with each other;
acquiring, on the first GPU, pathological pictures of a preset data volume from the first training data stream, inputting them into the current model, computing the model to obtain a first loss function value, and computing the partial derivative of the loss function with respect to each variable to obtain a first gradient value of the variable;
acquiring, on the second GPU, pathological pictures of a preset data volume from the second training data stream, inputting them into the current model, computing the model to obtain a second loss function value, and computing the partial derivative of the loss function with respect to each variable to obtain a second gradient value of the variable;
the CPU waits for the first GPU and the second GPU to finish calculating their gradient values, sums the gradient values, updates the corresponding variables with the summed gradient values to obtain new variable values, and transmits the updated variable values to the first GPU and the second GPU to overwrite the original model variable values; this process is repeated until training is finished;
the pathological picture identification method is applied to the identification of gastric lymph node cancer metastasis pathological pictures;
after the sample data is obtained, the sample data is further preprocessed in the following manner:
for each negative sample, the following preprocessing is performed:
converting the normal or benign lesion pathological picture from an RGB color format into an HSV color format;
adjusting the saturation of the normal or benign lesion pathological picture in the HSV color format to a preset threshold;
extracting a plurality of patch pictures of a preset pixel size from the foreground cell area of the normal or benign lesion pathological picture whose saturation has been adjusted to the preset threshold;
determining a first proportion of the foreground contained in each patch picture of the preset pixel size relative to the whole patch picture, and deleting the patch picture when the first proportion is smaller than a first preset proportion value;
for each positive sample, the following preprocessing is performed:
extracting, at a preset stride, a plurality of patch pictures from the lesion area marked on the malignant lesion pathological picture;
determining a second proportion of the foreground contained in each patch picture extracted at the preset stride relative to the whole patch picture, and deleting the patch picture when the second proportion is smaller than a second preset proportion value; the foreground contained in the patch picture extracted at the preset stride is the lesion area.
6. The computer-readable storage medium of claim 5, wherein the plurality of different types of deep neural network models in the first set comprises: an Inception v3 model, a ResNet18 model, a ResNet34 model, a ResNet50 model, a VGG16 model, and a VGG19 model;
the plurality of different types of deep neural network models in the second set comprises: the ResNet34 model, the VGG16 model, and the VGG19 model.
7. The computer-readable storage medium of claim 5, wherein the identification method of the pathology picture further comprises:
when it is judged, according to the identification results, that the false positive rate for a type of pathological picture to be identified is high, supplementing negative samples of that type of pathological picture into a pathological picture sample database;
when it is judged, according to the identification results, that the false negative rate for a type of pathological picture to be identified is high, supplementing positive samples of that type of pathological picture into the pathological picture sample database;
performing optimization training on the plurality of different types of deep neural network models generated by the pre-training according to the supplemented pathological picture sample database, to obtain a plurality of updated deep neural network models of different types;
inputting the pathological picture to be recognized into the plurality of different types of deep neural network models generated by pre-training and recognizing the pathological picture to be recognized comprises:
inputting the pathological picture to be recognized into the updated deep neural network models of different types, and recognizing the pathological picture to be recognized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810896157.5A CN109300530B (en) | 2018-08-08 | 2018-08-08 | Pathological picture identification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109300530A CN109300530A (en) | 2019-02-01 |
CN109300530B true CN109300530B (en) | 2020-02-21 |
Family
ID=65168188
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810896157.5A Active CN109300530B (en) | 2018-08-08 | 2018-08-08 | Pathological picture identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109300530B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109961423B (en) * | 2019-02-15 | 2024-05-31 | 平安科技(深圳)有限公司 | Lung nodule detection method based on classification model, server and storage medium |
CN110097564B (en) * | 2019-04-04 | 2023-06-16 | 平安科技(深圳)有限公司 | Image labeling method and device based on multi-model fusion, computer equipment and storage medium |
CN110335668A (en) * | 2019-05-22 | 2019-10-15 | 台州市中心医院(台州学院附属医院) | Thyroid cancer cell pathological map auxiliary analysis method and system based on deep learning |
CN110706812A (en) * | 2019-09-29 | 2020-01-17 | 医渡云(北京)技术有限公司 | Medical index time sequence prediction method, device, medium and electronic equipment |
CN111276254A (en) * | 2020-01-13 | 2020-06-12 | 印迹信息科技(北京)有限公司 | Medical open platform system and diagnosis and treatment data processing method |
CN111325103B (en) * | 2020-01-21 | 2020-11-03 | 华南师范大学 | Cell labeling system and method |
CN111710394A (en) * | 2020-06-05 | 2020-09-25 | 沈阳智朗科技有限公司 | Artificial intelligence assisted early gastric cancer screening system |
CN111815609B (en) * | 2020-07-13 | 2024-03-01 | 北京小白世纪网络科技有限公司 | Pathological image classification method and system based on context awareness and multi-model fusion |
CN111899252B (en) * | 2020-08-06 | 2023-10-27 | 腾讯科技(深圳)有限公司 | Pathological image processing method and device based on artificial intelligence |
CN112348059A (en) * | 2020-10-23 | 2021-02-09 | 北京航空航天大学 | Deep learning-based method and system for classifying multiple dyeing pathological images |
CN112507801A (en) * | 2020-11-14 | 2021-03-16 | 武汉中海庭数据技术有限公司 | Lane road surface digital color recognition method, speed limit information recognition method and system |
CN112734707B (en) * | 2020-12-31 | 2023-03-24 | 重庆西山科技股份有限公司 | Auxiliary detection method, system and device for 3D endoscope and storage medium |
CN113222928B (en) * | 2021-05-07 | 2023-09-19 | 北京大学第一医院 | Urine cytology artificial intelligence urothelial cancer identification system |
CN114693628A (en) * | 2022-03-24 | 2022-07-01 | 生仝智能科技(北京)有限公司 | Pathological index determination method, pathological index determination device, pathological index determination equipment and storage medium |
CN118378726B (en) * | 2024-06-25 | 2024-09-20 | 之江实验室 | Model training system, method, storage medium and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1815399A2 (en) * | 2004-11-19 | 2007-08-08 | Koninklijke Philips Electronics N.V. | A stratification method for overcoming unbalanced case numbers in computer-aided lung nodule false positive reduction |
WO2008035276A2 (en) * | 2006-09-22 | 2008-03-27 | Koninklijke Philips Electronics N.V. | Methods for feature selection using classifier ensemble based genetic algorithms |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN205015889U (en) * | 2015-09-23 | 2016-02-03 | 北京科技大学 | Definite system of traditional chinese medical science lingual diagnosis model based on convolution neuroid |
CN107564580B (en) * | 2017-09-11 | 2019-02-12 | 合肥工业大学 | Gastroscope visual aids processing system and method based on integrated study |
CN107909566A (en) * | 2017-10-28 | 2018-04-13 | 杭州电子科技大学 | A kind of image-recognizing method of the cutaneum carcinoma melanoma based on deep learning |
CN108288506A (en) * | 2018-01-23 | 2018-07-17 | 雨声智能科技(上海)有限公司 | A kind of cancer pathology aided diagnosis method based on artificial intelligence technology |
CN108364293A (en) * | 2018-04-10 | 2018-08-03 | 复旦大学附属肿瘤医院 | A kind of on-line training thyroid tumors Ultrasound Image Recognition Method and its device |
- 2018-08-08: CN application CN201810896157.5A, patent CN109300530B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN109300530A (en) | 2019-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109300530B (en) | Pathological picture identification method and device | |
CN113454733B (en) | Multi-instance learner for prognostic tissue pattern recognition | |
CN108596882B (en) | The recognition methods of pathological picture and device | |
CN111448582B (en) | System and method for single channel whole cell segmentation | |
Ding et al. | Multi-scale fully convolutional network for gland segmentation using three-class classification | |
Xie et al. | Statistical karyotype analysis using CNN and geometric optimization | |
CN110705403A (en) | Cell sorting method, cell sorting device, cell sorting medium, and electronic apparatus | |
CN111598875A (en) | Method, system and device for building thyroid nodule automatic detection model | |
CN112347977B (en) | Automatic detection method, storage medium and device for induced pluripotent stem cells | |
US11176412B2 (en) | Systems and methods for encoding image features of high-resolution digital images of biological specimens | |
Sikder et al. | Supervised learning-based cancer detection | |
CN112990222B (en) | Image boundary knowledge migration-based guided semantic segmentation method | |
Arbelle et al. | Dual-task ConvLSTM-UNet for instance segmentation of weakly annotated microscopy videos | |
CN112215217B (en) | Digital image recognition method and device for simulating doctor to read film | |
CN115953393B (en) | Intracranial aneurysm detection system, device and storage medium based on multitask learning | |
Wen et al. | Review of research on the instance segmentation of cell images | |
CN112883770A (en) | PD-1/PD-L1 pathological picture identification method and device based on deep learning | |
CN113096080A (en) | Image analysis method and system | |
CN105354405A (en) | Machine learning based immunohistochemical image automatic interpretation system | |
Kromp et al. | Deep Learning architectures for generalized immunofluorescence based nuclear image segmentation | |
Qamar et al. | Segmentation and Characterization of Macerated Fibers and Vessels Using Deep Learning | |
Zheng et al. | WPNet: Wide Pyramid Network for Recognition of HER2 Expression Levels in Breast Cancer Evaluation | |
Sun et al. | Semi-supervised breast cancer pathology image segmentation based on fine-grained classification guidance | |
Bruch et al. | Evaluation of semi-supervised learning using sparse labeling to segment cell nuclei | |
Kromp et al. | Machine learning framework incorporating expert knowledge in tissue image annotation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||