CN108346145B - Identification method of unconventional cells in pathological section - Google Patents

Identification method of unconventional cells in pathological section

Info

Publication number
CN108346145B
CN108346145B CN201810097641.1A
Authority
CN
China
Prior art keywords
model
cells
training
classification
unconventional
Prior art date
Legal status
Active
Application number
CN201810097641.1A
Other languages
Chinese (zh)
Other versions
CN108346145A (en)
Inventor
吴健
王彦杰
王文哲
刘雪晨
吴边
陈为
吴福理
吴朝晖
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201810097641.1A priority Critical patent/CN108346145B/en
Publication of CN108346145A publication Critical patent/CN108346145A/en
Application granted granted Critical
Publication of CN108346145B publication Critical patent/CN108346145B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

The invention discloses a method for identifying unconventional cells in pathological sections. An electronically scanned pathological section is preprocessed to obtain effective identification areas, which are input into a fully convolutional network for pre-training; the network head is then replaced with a fully connected layer and fine-tuned, giving the network the ability to extract features of unconventional cells and to locate them, so that the effective identification areas are classified more effectively. Finally, the predictions of several common classification networks are combined by voting to output a more stable classification result.

Description

Identification method of unconventional cells in pathological section
Technical Field
The invention belongs to the field of medical imaging, and particularly relates to a method for identifying unconventional cells in a pathological section.
Background
Unconventional cells (cells of abnormal morphology) in pathological sections are traditionally screened manually: a professional pathologist scans the whole section under a microscope by moving the slide and visually searches for abnormal cells. This work is heavy and time-consuming, and the error rate grows as reading time increases.
With the continuous development of science and technology, preliminary screening for unconventional cells in pathological sections can now be performed with the help of a computer.
Computer vision based on the Convolutional Neural Network (CNN) has been continuously advanced by improved architectures such as VGGNet, ResNet and DenseNet, whose accuracy on natural images now exceeds that of the human eye. Semantic segmentation, an important research direction in computer vision, is the task of classifying every pixel of an image by a computer algorithm. In medical imaging, semantic segmentation is often used to delineate organs, tissues or cells in an image for subsequent classification.
Jonathan Long proposed the fully convolutional network (FCN), which applies convolution and deconvolution to the semantic segmentation task in place of the traditional fully connected approach, and it remains one of the main families of semantic segmentation models. U-Net is a typical fully convolutional network: the network is divided into a down-sampling path and an up-sampling path; ordinary convolution and pooling layers are used for down-sampling, and during up-sampling the feature map is enlarged by bilinear interpolation or deconvolution so that it matches the size of, and can be concatenated with, the corresponding shallow feature map.
Disclosure of Invention
The invention aims to provide a method for identifying unconventional cells in a pathological section that greatly reduces the workload of manual screening and screens out unconventional cells quickly and accurately.
Conventional cells are normal human cells; unconventional cells, by contrast, are human cells of abnormal morphology.
The working principle of the technical scheme of the invention is as follows:
An electronically scanned pathological section is first divided into a number of regions. Each region is converted to the LAB color space, and the mean value of the A channel is used to judge whether the region is valid, yielding all valid discrimination regions of the section. The valid discrimination regions, preprocessed by redistribution and z-score normalization, are input into a fully convolutional network for pre-training; the network head is then replaced with a fully connected layer and fine-tuned, so that the network gains the ability to extract features of unconventional cells and to locate them, classifying the valid discrimination regions more effectively.
A method for identifying unconventional cells on a pathological section comprises the following steps:
(1) preprocessing an electronic scanning pathological section to obtain an effective discrimination area in the pathological section, wherein an unconventional cell pixel area in the effective discrimination area is a positive sample, and a conventional cell pixel area is a negative sample;
(2) training the positive and negative samples obtained in the step (1) by adopting a full convolution network algorithm, and adjusting network parameters according to the coincidence degree of the model prediction result and the label to obtain a convergent slice segmentation model;
(3) replacing the head of the slice segmentation model obtained in step (2) with a classifier, using discrimination regions containing unconventional cells as positive examples and discrimination regions containing no unconventional cells as negative examples, and fine-tuning the network parameters to fit the classification task, obtaining a segmentation pre-trained classification model;
(4) taking the effective discrimination region obtained in the step (1), taking the discrimination region containing the unconventional cells as a positive example and taking the discrimination region completely not containing the unconventional cells as a negative example, and training k common classification models by using a k-fold cross validation mode in a common convolutional neural network classification method;
the value range of k is an integer between 5 and 10;
(5) fusing the segmented pre-training classification model obtained in the step (3) with the k common classification models obtained in the step (4) by a model integration method to construct a final classification model;
(6) processing a new, unannotated pathological section as in step (1) to obtain its valid discrimination regions, inputting them into the final classification model, and outputting regions whose probability value exceeds 0.5 as unconventional cells.
In the step (1), the pretreatment step is as follows:
(1-1) dividing the pathological section magnified 20× into areas of equal size between 512×512 and 2048×2048 pixels and storing them separately;
(1-2) converting each small block into the LAB color space, taking blocks whose A-channel mean exceeds a threshold t as valid discrimination areas, and discarding the rest;
the threshold value t is 120-150.
The A-channel mean after conversion to LAB is used as the criterion for valid regions in step (1-2) because effective content such as tissue cells in a pathological section appears purple or red after staining. In the LAB color space, the A channel represents how red a pixel is, so it serves as the criterion: when its mean exceeds the threshold t, the region is considered to contain effective tissue or cells.
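The A-channel validity check above can be sketched as follows. This is an illustrative NumPy implementation, not the patent's code: the sRGB-to-LAB conversion constants (D65 white point) and the +128 offset that places a neutral pixel at 128 (consistent with the stated threshold range of 120 to 150) are our assumptions, and the function names are ours.

```python
import numpy as np

def rgb_to_a_channel(rgb):
    """Convert an HxWx3 uint8 sRGB image to the CIELAB a* channel, offset by 128."""
    c = rgb.astype(np.float64) / 255.0
    # sRGB -> linear RGB
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (D65 primaries)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    # normalize by the D65 white point
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    a = 500.0 * (f[..., 0] - f[..., 1])   # a* axis: green (negative) to red (positive)
    return a + 128.0                      # shift so a neutral pixel sits at 128

def is_valid_region(tile, t=132):
    """A tile is a valid discrimination region if its mean A value exceeds t."""
    return rgb_to_a_channel(tile).mean() > t
```

A strongly red (stained) tile yields an A mean well above 132, while an unstained gray tile sits at 128 and is discarded.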
In step (2), the degree of coincidence between the model prediction and the label can be evaluated with Dice Loss, Cross Entropy, or Mean Squared Error.
In the step (2), the method for training the convergent slice segmentation model specifically comprises the following steps:
(2-1) compressing the input valid discrimination area with an image compression algorithm into a matrix of 256×256 to 512×512 pixels; this ratio preserves most image features while discarding some tiny features that contribute little to the classification of positive and negative samples.
(2-2) normalizing and converting the above matrix to a standard normal distribution by a redistribution and z-score method;
the image (the matrix with the pixels of 256 × 256 to 512 × 512) obtained in the step (2-1) is RGB triple channels, and in order to be better learnt by the neural network, redistribution and normalization are generally required, and the specific operation flow is as follows: image pixels are first divided by 255 and projected into the [0,1] interval, and then the data is transformed to a standard normal distribution by subtracting the mean divided by the variance using a zscore normalized approach, which is calculated as follows:
$$z_i = \frac{x_i - \bar{x}}{s}$$

where $z_i$ is the final output of the z-score algorithm, $x_i$ is the input data, $\bar{x}$ is the mean of the feature, and $s$ is the standard deviation of the feature.
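The redistribution and z-score step can be sketched in a few lines of NumPy (an illustrative implementation; the function name is ours):

```python
import numpy as np

def normalize_tile(img):
    """Map uint8 pixels to [0,1], then standardize to zero mean and unit std."""
    x = img.astype(np.float64) / 255.0   # redistribution into the [0,1] interval
    return (x - x.mean()) / x.std()      # z-score: subtract mean, divide by std
```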
(2-3) applying data augmentation to the standard normally distributed images obtained in step (2-2): rotation, flipping, mirroring, brightness change, random offset and similar operations, so that the network learns features at different orientations and angles while the degree of overfitting of network prediction is reduced.
(2-4) inputting the matrix obtained in step (2-3) into the fully convolutional network and calculating the Dice Loss;
in training the segmentation model, we use a loss function for the segmentation task, Dice L oss, Dice L oss is a loss function designed for image segmentation of positive and negative case pixel imbalance, which is defined as follows:
Figure BDA0001565476200000041
where i denotes the current calculated pixel point, pi,giRespectively representing the model prediction fraction of a pixel i and the fraction corresponding to a label, N representing the total number of pixel points, D representing the coincidence degree of a binary prediction result (thermodynamic diagram) and the label, and the value of DIn the range of [0,1]Interval, the closer its value is to 1, the higher the contact ratio; during the training process, we use 1-D as a loss function;
the labels are binary matrices of consistent size with the input image, 1 represents non-regular cell pixels, 0 represents regular cell pixels, in (2-1) the valid decision region is compressed to 512x512 pixels, and similarly, the segmentation labels are also compressed to 512x512 pixels to facilitate matching with the valid decision region.
(2-5) minimizing the Dice Loss with the Adam algorithm as the optimizer until the network converges, obtaining a converged slice segmentation model.
In the fine-tuning stage, the advantage of fine-tuning the U-Net pre-trained weights with a fully connected layer is as follows: after U-Net training, the model can already extract deep features of unconventional cells, which greatly assists the classification task and increases the interpretability of the classification model. In the prediction phase, the classification result can be output together with the segmentation result to determine the region occupied by the unconventional cells.
The method of steps (2) and (3) for training the segmentation pre-trained classification model is a two-stage method: pre-training with segmentation labels and then fine-tuning with a classifier. An equivalent one-stage method trains in a single pass by minimizing the sum of the U-Net output used as an auxiliary loss function and the classification loss used as the final loss function, but in practice the one-stage method performs worse than the two-stage method.
In step (3), the network fine-tuning method includes the following steps:
(3-1) replacing the last convolution layer of the U-Net with a fully connected layer whose output is a two-way classification;
(3-2) using discrimination regions containing unconventional cells as positive examples and discrimination regions containing no unconventional cells as negative examples, optimizing the cross entropy loss function with the Adam algorithm and updating the network parameters until the classification model converges, obtaining the segmentation pre-trained classification model.
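One common way to wire a two-way fully connected head onto a convolutional feature map is global average pooling followed by a linear layer. The pooling step is our assumption: the patent only states that the last convolution is replaced by a fully connected layer with a two-class output. A NumPy sketch of what such a head computes:

```python
import numpy as np

def fc_head(feature_map, w, b):
    """Two-class head: global-average-pool a CxHxW feature map, then a linear layer."""
    pooled = feature_map.mean(axis=(1, 2))   # (C,) descriptor of the whole tile
    return w @ pooled + b                    # (2,) classification logits

def softmax(z):
    """Convert logits to class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()
```

In the actual fine-tuning, `w` and `b` would be the trainable parameters of the new head, updated by Adam while the U-Net body is held fixed or trained at a reduced learning rate.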
In step (4), the k common classification models are trained by k-fold cross validation as follows: the training data are first divided into k equal parts by stratified sampling; each time, one part serves as validation data and the remaining k-1 parts serve as training data, yielding k common classification models;
the value range of k is an integer between 5 and 10;
the training data includes a discrimination region containing an abnormal cell as a positive example and a discrimination region containing no abnormal cell at all as a negative example.
In the invention, DenseNet, which currently offers strong generalization among classification networks, is used for training. During training, the model is pre-trained with the Cross Entropy Loss, and the parameters are then fine-tuned with the Focal Loss so that the model pays more attention to difficult samples.
In step (5), the model integration and fusion method comprises:
(1) voting: taking the mode of the outputs of the several models as the final result; or,
(2) weighted mean: assigning different weights to the models, computing the weighted mean of their outputs, and deciding the final label from it; or,
(3) stacking: training a linear classifier that takes the outputs of the several models as input and serves as the basis for deciding the label.
The model integration method of the invention is preferably the weighted mean method, in which all common classification models share the same weight and the segmentation pre-trained classification model has k times that weight, k being the number of common classification models.
The weighted mean method is preferred because, compared with voting, it better balances the relative importance of the segmentation pre-trained classification model and the common classification models, effectively highlighting the contribution of the segmentation pre-trained model to the final model.
With the segmentation pre-trained classification model and the k DenseNet classification models obtained above, the k+1 models are fused by the weighted mean. To highlight the contribution of the segmentation model, the weighted average below is computed and p(x) is taken as the final prediction, where x is the input image matrix:

$$p(x) = \frac{\lambda S(x) + \sum_{i=1}^{k} d_i(x)}{\lambda + k}$$

where $S$ is the segmentation pre-trained classification model function, $d_i$ is the i-th DenseNet classification function, $k$ is the fold number, and $\lambda$ is the weight of the segmentation pre-trained classification model.
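Assuming the weighted average takes the form implied by the text (segmentation weight λ, each DenseNet weight 1, normalized by the total weight), the fusion can be sketched as:

```python
import numpy as np

def fuse(seg_score, dense_scores, lam=None):
    """p = (lam*S + sum(d_i)) / (lam + k), with lam defaulting to k."""
    k = len(dense_scores)
    if lam is None:
        lam = k   # preferred embodiment: segmentation weight is k times a common model's
    return (lam * seg_score + np.sum(dense_scores)) / (lam + k)
```

With `lam = k` the segmentation model's weight equals the combined weight of all k common models, so the final score is the average of the segmentation score and the mean DenseNet score.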
The method for identifying unconventional cells in pathological sections provided by the invention combines the feature map generated by semantic segmentation with several classification networks and achieves a good test result: measured by the F1 value, the algorithm reaches an F1 above 96%. The F1 value is the harmonic mean of precision and recall:

$$F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$$

where precision and recall are computed as:

$$\text{precision} = \frac{tp}{tp + fp}, \qquad \text{recall} = \frac{tp}{tp + fn}$$

where tp, fp and fn are the numbers of true positives, false positives and false negatives, respectively.
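The F1 computation from the counts above, directly transcribing the formulas:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, from raw confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```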
Compared with the prior art, the invention has the following beneficial effects:
1) The invention greatly reduces the heavy workload of pathologists and is especially valuable for primary and community hospitals that lack pathologist resources.
2) The invention helps doctors screen out unconventional cells quickly and accurately; measured by the F1 value, the algorithm reaches an F1 above 96%.
Drawings
Fig. 1 is a general structure of a full convolutional network U-net in an embodiment of the present invention.
Fig. 2 shows the overall structure of the generic classification model DenseNet in the embodiment of the present invention.
FIG. 3 is a general diagram of the pathological section region classification according to the embodiment of the present invention.
Fig. 4 is a flowchart of a pathological section region classification model training process according to an embodiment of the present invention.
Detailed Description
For further understanding of the present invention, the identification method of unconventional cells in a pathological section is described below with reference to a specific implementation. The invention is not limited to this implementation: insubstantial modifications and adaptations made by those skilled in the art under the core teaching of the invention still fall within its scope.
A method for identifying unconventional cells on a pathological section specifically comprises the following steps:
1) pathological section pretreatment and effective area discrimination
The input data of the invention are pathological sections magnified 20×; each section is divided into areas of 2048×2048 pixels, which are stored separately.
Each 2048×2048 area is converted to the LAB color space; areas whose A-channel mean exceeds the threshold t = 132 are taken as valid discrimination areas, and the rest are discarded.
2) Training convergent slice segmentation models
(2-1) compressing the valid discrimination areas obtained in step 1) to 512×512 pixels;
(2-2) image pixels are first divided by 255 and projected into the [0,1] interval; the data are then converted to a standard normal distribution by z-score normalization, subtracting the mean and dividing by the standard deviation:

$$z_i = \frac{x_i - \bar{x}}{s}$$

where $x_i$ is the input data, $\bar{x}$ is the mean of the feature, and $s$ is the standard deviation of the feature;
(2-3) applying data augmentation to the standard normally distributed images obtained in step (2-2): rotation, flipping, mirroring, brightness change, random offset and similar operations, so that the network learns features at different orientations and angles while the degree of overfitting of network prediction is reduced.
(2-4) inputting the matrix obtained in step (2-3) into the fully convolutional network U-Net, whose structure is shown in FIG. 1, and calculating the Dice Loss:

$$D = \frac{2\sum_{i=1}^{N} p_i g_i}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} g_i}$$

where $p_i$ and $g_i$ are respectively the model's prediction score for pixel $i$ and the corresponding label value.
The labels are binary matrices of consistent size with the input image, 1 represents non-regular cell pixels, 0 represents regular cell pixels, in (2-1) the valid decision region is compressed to 512x512 pixels, and similarly, the segmentation labels are also compressed to 512x512 pixels to facilitate matching with the valid decision region.
(2-5) minimizing the Dice Loss with the Adam algorithm as the optimizer until the network converges, obtaining a converged slice segmentation model.
3) On the basis of the slice segmentation model obtained in step 2), the last convolution layer of the U-Net is replaced with a fully connected layer whose output is a two-way classification. Using discrimination regions containing unconventional cells as positive examples and discrimination regions containing no unconventional cells as negative examples, the Cross Entropy Loss is optimized with the Adam algorithm until the classification model converges, yielding the segmentation pre-trained classification model.
To avoid damaging the trained U-Net weights with an excessively large gradient from the randomly initialized fully connected layer, fine-tuning proceeds in two steps:
a) the U-Net weights are frozen and only the fully connected layer is trained until convergence;
b) the learning rate of the U-Net part is reduced and the whole network (U-Net plus the fully connected layer) is trained.
4) k-fold cross validation DenseNet training
(4-1) the training set is divided into 5 parts; 4 parts are used for training and 1 part for validation;
(4-2) for each split, a DenseNet model (structure shown in FIG. 2) is trained on the 4 training parts, and the model is saved when its performance on the validation set is best, yielding 5 models.
5) Model fusion
With the segmentation pre-trained classification model and the 5 DenseNet classification models obtained above, the 6 models are fused by the weighted mean. To highlight the contribution of the segmentation model, the following weighted average is used:

$$p(x) = \frac{\lambda S(x) + \sum_{i=1}^{k} d_i(x)}{\lambda + k}$$

where $S$ is the segmentation pre-trained classification model function, $d_i$ is the i-th DenseNet classification function, $k$ is the fold number, and $\lambda$ is the weight of the segmentation pre-trained classification model.
After fusion, the final classification model is obtained; the model training flow is shown in fig. 4.
6) Non-routine cell recognition
The valid discrimination regions obtained by processing a new, unannotated pathological section as in step 1) (using the threshold t = 132) are input into the final classification model; regions whose predicted probability of containing unconventional cells exceeds 0.5 are output as the recognition result.

Claims (7)

1. A method for identifying unconventional cells in a pathological section, comprising:
(1) preprocessing an electronic scanning pathological section to obtain an effective discrimination area in the pathological section, wherein an unconventional cell pixel area in the effective discrimination area is a positive sample, and a conventional cell pixel area is a negative sample;
(2) training the positive and negative samples obtained in the step (1) by adopting a full convolution network algorithm, and adjusting network parameters according to the coincidence degree of the model prediction result and the label to obtain a convergent slice segmentation model;
(3) replacing the head of the slice segmentation model obtained in step (2) with a classifier, using discrimination regions containing unconventional cells as positive examples and discrimination regions containing no unconventional cells as negative examples, and fine-tuning the network parameters to fit the classification task, obtaining a segmentation pre-trained classification model; the network parameters are fine-tuned as follows:
(3-1) replacing the last convolution layer of the U-Net with a fully connected layer whose output is a two-way classification;
(3-2) using discrimination regions containing unconventional cells as positive examples and discrimination regions containing no unconventional cells as negative examples, optimizing the cross entropy loss function with the Adam algorithm and updating the network parameters until the classification model converges, obtaining the segmentation pre-trained classification model;
(4) taking the effective discrimination region obtained in the step (1), taking the discrimination region containing the unconventional cells as a positive example and taking the discrimination region completely not containing the unconventional cells as a negative example, and training k common classification models by using a k-fold cross validation mode in a common convolutional neural network classification method;
the value range of k is an integer between 5 and 10;
(5) fusing the segmented pre-training classification model obtained in the step (3) with the k common classification models obtained in the step (4) by a model integration method to construct a final classification model;
(6) inputting the effective discrimination region obtained by processing the new pathological section which is not marked in the step (1) into a final classification model, and outputting the unconventional cells with the probability value of more than 0.5 as an identification result.
2. The method for identifying abnormal cells in pathological sections according to claim 1, wherein the pretreatment in step (1) comprises:
(1-1) dividing the pathological section amplified by 20 × into small blocks with the same size as 512 × 512-2048 × 2048 pixels, and storing the small blocks respectively;
(1-2) converting each small block into the LAB color space, taking blocks whose A-channel mean exceeds a threshold t as valid discrimination areas, and discarding the rest;
the threshold value t is 120-150.
3. The method for identifying abnormal cells in pathological sections according to claim 1, wherein in step (2), the degree of coincidence between the model prediction result and the label is evaluated with Dice Loss, Cross Entropy, or Mean Squared Error.
4. The method for identifying abnormal cells in pathological section according to claim 1 or 3, wherein in step (2), the method for training the convergent section segmentation model comprises:
(2-1) compressing the input effective distinguishing area into a matrix with the pixels being 256 × 256-512 × 512 by using an image compression algorithm;
(2-2) normalizing and converting the above matrix to a standard normal distribution by a redistribution and z-score method;
(2-3) performing rotation, flipping, mirroring, brightness-change and random-offset operations on the standard normally distributed image obtained after the conversion in step (2-2) using a data enhancement technology;
(2-4) inputting the matrix obtained by the processing in the step (2-3) into a full convolution network, and calculating Dice L oss;
(2-5) minimizing Dice L oss by using an Adam algorithm as an optimization method until the network converges to obtain a converged slice segmentation model.
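Steps (2-2) and (2-3) of the training pipeline above can be sketched as follows. The specific augmentation parameters (90° rotation steps, the brightness offset magnitude) are illustrative assumptions; the claim does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

def zscore(img):
    """Step (2-2): normalize an image matrix to zero mean and unit
    variance (a standard normal distribution under the z-score method)."""
    return (img - img.mean()) / (img.std() + 1e-8)

def augment(img):
    """Step (2-3): random rotation, flip, mirror, and brightness change.
    A minimal stand-in for the patent's augmentation; offsets and
    probabilities are assumptions."""
    img = np.rot90(img, k=rng.integers(0, 4))   # rotation by 0/90/180/270 deg
    if rng.random() < 0.5:
        img = np.flipud(img)                    # vertical flip
    if rng.random() < 0.5:
        img = np.fliplr(img)                    # horizontal mirror
    return img + rng.normal(0.0, 0.1)           # global brightness offset
```

The augmented matrices would then be fed to the fully convolutional network of step (2-4) and optimized with Adam against the Dice Loss as in step (2-5).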
5. The method for identifying unconventional cells in pathological sections according to claim 1, wherein in step (4), the k common classification models are trained by k-fold cross validation as follows: the training data are first divided into k equal parts by stratified sampling; in each round, one part serves as validation data and the remaining k-1 parts serve as training data, yielding k common classification models;
wherein k is an integer between 5 and 10;
and the training data comprise discrimination regions containing unconventional cells as positive examples and discrimination regions containing no unconventional cells at all as negative examples.
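The stratified k-fold split of claim 5 can be sketched as below. This is a hand-rolled illustration to show the idea; in practice scikit-learn's StratifiedKFold would normally be used, and the function name here is an assumption.

```python
import numpy as np

def stratified_kfold(labels, k=5, seed=0):
    """Split sample indices into k folds so that each fold preserves the
    positive/negative class proportions (stratified sampling, claim 5).
    Each fold serves once as validation data while the other k-1 folds
    form the training data for one common classification model."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)                      # randomize within each class
        for i, sample in enumerate(idx):
            folds[i % k].append(int(sample))  # deal round-robin into folds
    return folds
```

Stratification matters here because regions containing unconventional cells are typically far rarer than negative regions, and an unstratified split could leave a fold with no positive examples at all.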
6. The method for identifying unconventional cells in pathological sections according to claim 1, wherein in step (5), the model ensemble method comprises voting, weighted averaging, or stacking.
7. The method for identifying unconventional cells in pathological sections according to claim 6, wherein in the weighted averaging method all common classification models share the same weight, and the weight of the segmentation pre-trained classification model is k times the weight of a common classification model, k being the number of common classification models.
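Under the weighting of claim 7, the fusion of claims 5-7 together with the 0.5 decision threshold of claim 1, step (6), can be sketched as:

```python
import numpy as np

def fuse(seg_model_prob, common_probs):
    """Weighted-average ensemble per claims 6-7: the segmentation
    pre-trained model carries weight k (the number of common models),
    each common model carries weight 1, so the weights sum to 2k.
    Regions whose fused probability exceeds 0.5 are reported as
    unconventional cells (claim 1, step (6))."""
    k = len(common_probs)
    fused = (k * seg_model_prob + np.sum(common_probs, axis=0)) / (2 * k)
    return fused, fused > 0.5
```

This weighting gives the segmentation pre-trained model exactly as much total influence as all k common classification models combined.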
CN201810097641.1A 2018-01-31 2018-01-31 Identification method of unconventional cells in pathological section Active CN108346145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810097641.1A CN108346145B (en) 2018-01-31 2018-01-31 Identification method of unconventional cells in pathological section

Publications (2)

Publication Number Publication Date
CN108346145A CN108346145A (en) 2018-07-31
CN108346145B true CN108346145B (en) 2020-08-04

Family

ID=62961468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810097641.1A Active CN108346145B (en) 2018-01-31 2018-01-31 Identification method of unconventional cells in pathological section

Country Status (1)

Country Link
CN (1) CN108346145B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190682B (en) * 2018-08-13 2020-12-18 北京安德医智科技有限公司 Method and equipment for classifying brain abnormalities based on 3D nuclear magnetic resonance image
CN109191476B (en) * 2018-09-10 2022-03-11 重庆邮电大学 Novel biomedical image automatic segmentation method based on U-net network structure
US10579924B1 (en) * 2018-09-17 2020-03-03 StradVision, Inc. Learning method, learning device with multi-feeding layers and testing method, testing device using the same
CN109242849A (en) * 2018-09-26 2019-01-18 上海联影智能医疗科技有限公司 Medical image processing method, device, system and storage medium
CN109544563B (en) * 2018-11-12 2021-08-17 北京航空航天大学 Passive millimeter wave image human body target segmentation method for security inspection of prohibited objects
CN109754403A (en) * 2018-11-29 2019-05-14 中国科学院深圳先进技术研究院 Tumour automatic division method and system in a kind of CT image
CN109685077A (en) * 2018-12-13 2019-04-26 深圳先进技术研究院 A kind of breast lump image-recognizing method and device
CN109620152B (en) * 2018-12-16 2021-09-14 北京工业大学 MutifacolLoss-densenert-based electrocardiosignal classification method
CN109785334A (en) * 2018-12-17 2019-05-21 深圳先进技术研究院 Cardiac magnetic resonance images dividing method, device, terminal device and storage medium
CN109857351A (en) * 2019-02-22 2019-06-07 北京航天泰坦科技股份有限公司 The Method of printing of traceable invoice
CN110110661A (en) * 2019-05-07 2019-08-09 西南石油大学 A kind of rock image porosity type recognition methods based on unet segmentation
CN112132166B (en) * 2019-06-24 2024-04-19 杭州迪英加科技有限公司 Intelligent analysis method, system and device for digital cell pathology image
CN110634134A (en) * 2019-09-04 2019-12-31 杭州憶盛医疗科技有限公司 Novel artificial intelligent automatic diagnosis method for cell morphology
CN110853021B (en) * 2019-11-13 2020-11-24 江苏迪赛特医疗科技有限公司 Construction of detection classification model of pathological squamous epithelial cells
CN110853022B (en) * 2019-11-14 2020-11-06 腾讯科技(深圳)有限公司 Pathological section image processing method, device and system and storage medium
CN111144488B (en) * 2019-12-27 2023-04-18 之江实验室 Pathological section visual field classification improving method based on adjacent joint prediction
CN111325103B (en) * 2020-01-21 2020-11-03 华南师范大学 Cell labeling system and method
CN111340064A (en) * 2020-02-10 2020-06-26 中国石油大学(华东) Hyperspectral image classification method based on high-low order information fusion
CN111627032A (en) * 2020-05-14 2020-09-04 安徽慧软科技有限公司 CT image body organ automatic segmentation method based on U-Net network
CN112084931B (en) * 2020-09-04 2022-04-15 厦门大学 DenseNet-based leukemia cell microscopic image classification method and system
US20220148189A1 (en) * 2020-11-10 2022-05-12 Nec Laboratories America, Inc. Multi-domain semantic segmentation with label shifts
CN112446876A (en) * 2020-12-11 2021-03-05 北京大恒普信医疗技术有限公司 anti-VEGF indication distinguishing method and device based on image and electronic equipment
CN112435259B (en) * 2021-01-27 2021-04-02 核工业四一六医院 Cell distribution model construction and cell counting method based on single sample learning
CN113034448B (en) * 2021-03-11 2022-06-21 电子科技大学 Pathological image cell identification method based on multi-instance learning
CN113192047A (en) * 2021-05-14 2021-07-30 杭州迪英加科技有限公司 Method for automatically interpreting KI67 pathological section based on deep learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101168067A (en) * 2007-09-28 2008-04-30 浙江大学 Method for decreasing immune cell surface antigenic sites immunogenicity and use
CN102289500A (en) * 2011-08-24 2011-12-21 浙江大学 Method and system for displaying pathological section multi-granularity medical information
CN106097391A (en) * 2016-06-13 2016-11-09 浙江工商大学 A kind of multi-object tracking method identifying auxiliary based on deep neural network
CN106250812A (en) * 2016-07-15 2016-12-21 汤平 A kind of model recognizing method based on quick R CNN deep neural network
CN107145867A (en) * 2017-05-09 2017-09-08 电子科技大学 Face and face occluder detection method based on multitask deep learning
US9782585B2 (en) * 2013-08-27 2017-10-10 Halo Neuro, Inc. Method and system for providing electrical stimulation to a user

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7951345B2 (en) * 2007-06-01 2011-05-31 Lary Research & Development, Llc Useful specimen transport apparatus with integral capability to allow three dimensional x-ray images


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CONVOLUTIONS OF PATHOLOGICAL SUBMEASURES; ILIJAS FARAH; Measure Theory; 2005-01-17; pp. 1-4 *
Gland segmentation in colon pathology images based on convolutional neural networks; Lü Lijing; China Master's Theses Full-text Database, Information Science and Technology; 2017-03-15 (No. 3); pp. I138-5141 *

Also Published As

Publication number Publication date
CN108346145A (en) 2018-07-31

Similar Documents

Publication Publication Date Title
CN108346145B (en) Identification method of unconventional cells in pathological section
CN108447062B (en) Pathological section unconventional cell segmentation method based on multi-scale mixed segmentation model
CN113191215B (en) Rolling bearing fault diagnosis method integrating attention mechanism and twin network structure
CN111144496B (en) Garbage classification method based on hybrid convolutional neural network
CN111798416B (en) Intelligent glomerulus detection method and system based on pathological image and deep learning
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
CN111738302B (en) System for classifying and diagnosing Alzheimer's disease based on multi-modal data
CN109993236A (en) Few sample language of the Manchus matching process based on one-shot Siamese convolutional neural networks
CN111126482A (en) Remote sensing image automatic classification method based on multi-classifier cascade model
CN111598894B (en) Retina blood vessel image segmentation system based on global information convolution neural network
CN116935384B (en) Intelligent detection method for cell abnormality sample
CN112819063A (en) Image identification method based on improved Focal loss function
CN117371511A (en) Training method, device, equipment and storage medium for image classification model
CN116503932B (en) Method, system and storage medium for extracting eye periphery characteristics of weighted key areas
CN109934248B (en) Multi-model random generation and dynamic self-adaptive combination method for transfer learning
CN116129182A (en) Multi-dimensional medical image classification method based on knowledge distillation and neighbor classification
CN115987730A (en) Signal modulation identification method based on tree-shaped perception fusion convolutional network
CN116091763A (en) Apple leaf disease image semantic segmentation system, segmentation method, device and medium
CN115423788A (en) Digestive tract recognition system and method based on deep learning
CN115081514A (en) Industrial equipment fault identification method under data imbalance condition
CN114140647A (en) Fuzzy image recognition algorithm for pole pieces of battery cell pole group
CN111126444A (en) Classifier integration method
CN114494772B (en) Unbalanced sample classification method and device
CN112926442B (en) Construction method for image target data set balance completion
Monteiro Pollen grain recognition through deep learning convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant