CN108346145A - A method for identifying unconventional cells in pathological sections - Google Patents
A method for identifying unconventional cells in pathological sections
- Publication number
- CN108346145A CN108346145A CN201810097641.1A CN201810097641A CN108346145A CN 108346145 A CN108346145 A CN 108346145A CN 201810097641 A CN201810097641 A CN 201810097641A CN 108346145 A CN108346145 A CN 108346145A
- Authority
- CN
- China
- Prior art keywords
- pathological section
- unconventional
- unconventional cell
- cell
- critical region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4007—Interpolation-based scaling, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
Abstract
The invention discloses a method for identifying unconventional cells in pathological sections, comprising: preprocessing an electronically scanned pathological section to obtain effective critical regions; pre-training a fully convolutional network on these regions; then replacing the head of the fully convolutional network with a fully connected layer and fine-tuning the network, so that the fully convolutional network acquires the ability to extract features of unconventional cells, locates them, and classifies the effective critical regions more efficiently; and combining the predictions of multiple general classification networks by voting to output more stable classification results. The method can automatically estimate, for each field of view at 20× magnification in a pathological section, the probability that unconventional cells are present, and outputs cells with a probability value of 0.5 or above as the recognition result. This greatly reduces the workload of manually screening pathological sections for unconventional cells, and screens them out quickly and accurately.
Description
Technical field
The invention belongs to the field of medical imaging, and in particular relates to a method for identifying unconventional cells in pathological sections.
Background art
Unconventional cells (i.e., cells with abnormal morphology) are traditionally screened manually in pathological sections: under a microscope, a professional pathologist moves the slide and scans the entire section by eye, searching for unconventional cells. This work is heavy and time-consuming, and as reading time grows, the error rate rises with it.
With the continuous development of science and technology, computers can assist in the preliminary screening of unconventional cells in pathological sections.
Network architectures based on the convolutional neural network (Convolutional Neural Network, CNN) paradigm, such as VGGNet, ResNet, and DenseNet, have driven successive iterations of computer vision, and on natural images their accuracy already exceeds that of the human eye. Semantic segmentation (Semantic Segmentation) is an important research direction in computer vision; its task is to classify every pixel of an image by means of a computer algorithm. In medical imaging, semantic segmentation is commonly used to delineate organs, tissues, or cells in an image in preparation for subsequent classification.
Jonathan Long et al. proposed fully convolutional networks (Fully Convolutional Networks, FCN), which use convolution and deconvolution (Deconvolution) in place of traditional fully connected layers for semantic segmentation, and which have become one of the main families of semantic segmentation models. U-Net is a typical fully convolutional network. Its main idea is to divide the network into a downsampling path and an upsampling path: the downsampling path uses ordinary convolution and pooling layers, while the upsampling path uses bilinear interpolation or deconvolution to enlarge feature maps until they match the size of the corresponding shallow feature maps. Through skip connections, the shallow feature maps are concatenated with the deep feature maps to obtain the final feature map.
Summary of the invention
The object of the present invention is to provide a method for identifying unconventional cells in pathological sections that greatly reduces the heavy workload of manually screening pathological sections for unconventional cells, and screens them out quickly and accurately.
In the present invention, conventional cells are normal human cells; unconventional cells, by contrast, are human cells with abnormal morphology.
Operating principle of the technical solution of the present invention:
The electronically scanned pathological section is preprocessed by splitting it into multiple regions; after conversion to the LAB color space, the mean of the A channel is used to judge whether each region is effective, yielding all effective critical regions in the section. The effective critical regions are preprocessed by redistribution and the z-score method and fed into a fully convolutional network for pre-training; the head of the fully convolutional network is then replaced with a fully connected layer and the network is fine-tuned. This gives the fully convolutional network the ability to extract features of unconventional cells and thereby locate them, so the effective critical regions can be classified more efficiently. By combining the predictions of multiple general classification networks through voting, more stable classification results are output, and the probability that unconventional cells are present can be automatically estimated for each 20× magnification field of view in the section.
A method for identifying unconventional cells in a pathological section, comprising:
(1) Preprocess the electronically scanned pathological section to obtain the effective critical regions in the section; within each effective critical region, pixels of unconventional cells are positive samples and pixels of conventional cells are negative samples;
(2) Train a fully convolutional network on the positive and negative samples obtained in step (1), adjusting the network parameters according to the degree of overlap between the model predictions and the labels, to obtain a converged slice segmentation model;
(3) On the basis of the slice segmentation model obtained in step (2), replace its segmentation head with a classifier; using critical regions that contain unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples, fine-tune the network parameters to adapt the network to the classification task, obtaining a segmentation-pretrained classification model;
(4) Among the effective critical regions obtained in step (1), use critical regions containing unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples, and train k general classification models by k-fold cross validation with a common convolutional neural network classification method;
The value of k is an integer between 5 and 10;
(5) Fuse the segmentation-pretrained classification model obtained in step (3) with the k general classification models obtained in step (4) by model ensembling, to build the final classification model;
(6) For a new, unlabeled pathological section, obtain its effective critical regions by step (1) and feed them into the final classification model; output cells with a probability value of 0.5 or above as the recognition result.
In step (1), the preprocessing steps are:
(1-1) Divide the 20× magnification pathological section into equally sized regions of 512*512 to 2048*2048 pixels and store them separately;
(1-2) Convert each block to the LAB color space; take blocks whose A-channel mean exceeds a threshold t as effective critical regions and discard the rest;
The threshold t is 120 to 150.
The reason for using the A-channel mean after conversion to LAB as the criterion for an effective region in step (1-2) is as follows: after staining, effective regions of a pathological section, such as tissue and cells, appear purple or red, and in the LAB color space the A channel represents how red a pixel is. The A channel is therefore used as the discriminator, and regions above the threshold t are considered to contain effective tissue or cells.
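The tiling and A-channel thresholding described above can be sketched as follows. This is a minimal illustration, assuming the A channel has already been extracted (e.g., via a LAB conversion such as OpenCV's cv2.cvtColor); the 4-pixel tile size and the toy array are for demonstration only, whereas the patent uses 512 to 2048 pixel tiles and t between 120 and 150.

```python
import numpy as np

def tile_and_filter(a_channel, tile=4, t=132):
    """Split an A-channel map into square tiles and keep the tiles
    whose mean exceeds threshold t (stained tissue reads high on A)."""
    h, w = a_channel.shape
    kept = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = a_channel[y:y + tile, x:x + tile]
            if block.mean() > t:
                kept.append((y, x))  # top-left corner of a kept tile
    return kept

# toy A-channel: left half background (~100), right half stained (~150)
a = np.full((8, 8), 100.0)
a[:, 4:] = 150.0
print(tile_and_filter(a))  # only tiles on the stained half survive
```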
In step (2), the methods for assessing the degree of overlap between model predictions and labels include Dice Loss, Cross Entropy, and Mean Squared Error.
In step (2), the training procedure for the converged slice segmentation model is as follows:
(2-1) Compress each input effective critical region with an image compression algorithm to a matrix of 256*256 to 512*512 pixels. This ratio preserves most image features while discarding some fine details that contribute little to the classification of positive and negative samples.
(2-2) Normalize the above matrix and transform it to a standard normal distribution by redistribution and the z-score method;
The image obtained in step (2-1) (a matrix of 256*256 to 512*512 pixels) has three RGB channels. To make it easier for a neural network to learn, it is generally redistributed and standardized as follows: first divide the pixel values by 255 to project them onto the interval [0, 1], then apply z-score normalization, subtracting the mean and dividing by the standard deviation, to transform the data to a standard normal distribution. The z-score is computed as:
z_i = (x_i - x̄) / s
where z_i is the final output of the z-score algorithm, x_i is the input data, x̄ is the mean of the feature, and s is its standard deviation.
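The normalization above can be sketched in a few lines (the tiny 2x2 image is illustrative only):

```python
import numpy as np

def normalize(img):
    """Divide by 255 to project onto [0, 1], then z-score:
    z_i = (x_i - mean) / std."""
    x = img.astype(np.float64) / 255.0
    return (x - x.mean()) / x.std()

img = np.array([[0, 255], [127, 128]], dtype=np.uint8)
z = normalize(img)
print(round(z.mean(), 6), round(z.std(), 6))  # mean ~0, std ~1
```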
(2-3) Apply data augmentation (Data Augmentation) to the standard-normal image obtained in step (2-2): rotation, flipping, mirroring, brightness changes, random offsets, and similar operations, so that the network can learn features at different orientations and angles while reducing overfitting of the network's predictions.
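A NumPy-only sketch of such an augmentation pipeline is shown below; the application probabilities, brightness jitter range, and offset bounds are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def augment(img):
    """Randomly apply rotation, flips, brightness jitter, and a
    crude vertical offset to a square image (sketch only)."""
    if rng.random() < 0.5:
        img = np.rot90(img, k=int(rng.integers(1, 4)))  # 90/180/270 deg
    if rng.random() < 0.5:
        img = np.fliplr(img)                 # mirror (horizontal flip)
    if rng.random() < 0.5:
        img = np.flipud(img)                 # vertical flip
    img = img * rng.uniform(0.8, 1.2)        # brightness change
    shift = int(rng.integers(-2, 3))
    img = np.roll(img, shift, axis=0)        # random offset (wrap-around)
    return img

out = augment(np.ones((4, 4)))
print(out.shape)  # augmentation preserves the image shape
```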
(2-4) Feed the matrix obtained in step (2-3) into the fully convolutional network and compute the Dice Loss;
When training the segmentation model, we use Dice Loss, a loss function designed for image segmentation tasks in which positive and negative example pixels are imbalanced. It is defined as:
D = 2 Σ_{i=1}^{N} p_i g_i / (Σ_{i=1}^{N} p_i + Σ_{i=1}^{N} g_i)
where i indexes the pixel currently being computed, p_i and g_i are respectively the model's predicted score and the label score for pixel i, N is the total number of pixels, and D measures the degree of overlap between the two-class prediction (heat map) and the label. D lies in the interval [0, 1]; the closer it is to 1, the higher the overlap. During training we use 1 - D as the loss function;
The label is a binary matrix of the same size as the input image, where 1 denotes unconventional cell pixels and 0 denotes conventional cell pixels. Since the effective critical region is compressed to 512*512 pixels in (2-1), the segmentation label is likewise compressed to 512*512 pixels to match the effective critical region.
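The Dice computation follows directly from the definition above, with 1 - D as the loss; the small eps guard against division by zero is an implementation convenience, not from the patent.

```python
import numpy as np

def dice_loss(p, g, eps=1e-7):
    """1 - D, with D = 2*sum(p_i*g_i) / (sum(p_i) + sum(g_i)).
    D is in [0, 1]; closer to 1 means better prediction/label overlap."""
    p, g = np.ravel(p), np.ravel(g)
    d = (2.0 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps)
    return 1.0 - d

g = np.array([1.0, 1.0, 0.0, 0.0])
print(dice_loss(g, g))        # perfect overlap -> loss near 0
print(dice_loss(1.0 - g, g))  # no overlap -> loss near 1
```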
(2-5) Minimize the Dice Loss with the Adam optimization algorithm until the network converges, obtaining the converged slice segmentation model.
In the fine-tuning stage, the advantage of fine-tuning the pretrained U-Net weights with a fully connected layer is that, once U-Net training is complete, the deep layers of the U-Net model already have the ability to extract features of unconventional cells, which greatly helps the classification task; the approach also improves the interpretability of the classification model. At prediction time, the classification result and the segmentation result can be output simultaneously and compared, to locate the region of the unconventional cells.
The method of training the segmentation-pretrained classification model in steps (2) and (3) is a two-stage method: first pre-train with segmentation labels, then fine-tune with a classifier. The equivalent single-stage alternative is to use the output of the U-Net model as an auxiliary loss function, added to the classification loss function to form the final loss to be minimized; however, the single-stage method performs worse than the two-stage method.
In step (3), the network fine-tuning method comprises the following steps:
(3-1) Replace the last convolutional layer of U-Net with a fully connected layer whose output is a two-class classification;
(3-2) Using critical regions that contain unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples, optimize the cross-entropy loss function with the Adam algorithm and update the network parameters until the classification model converges, obtaining the segmentation-pretrained classification model.
In step (4), the k general classification models are trained by k-fold cross validation. The specific steps are: first divide the training data into k equal parts by stratified sampling; each time, use one part as validation data and the remaining k-1 parts as training data, obtaining k general classification models;
The value of k is an integer between 5 and 10;
The training data consist of critical regions containing unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples.
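A minimal sketch of the stratified k-fold split follows. Round-robin assignment after a per-class shuffle is one simple way to realize stratified sampling; it is an illustrative choice, not a procedure specified by the patent.

```python
import numpy as np

def stratified_kfold(labels, k=5, seed=0):
    """Return k folds of example indices; each fold preserves the
    class ratio of the full label array (stratified sampling)."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        for i, j in enumerate(idx):       # deal indices round-robin
            folds[i % k].append(int(j))
    return [np.array(f) for f in folds]

labels = np.array([0] * 10 + [1] * 5)     # 10 negatives, 5 positives
folds = stratified_kfold(labels, k=5)
# every fold holds 2 negatives and 1 positive
print([sorted(labels[f].tolist()) for f in folds])
```

Each of the k models would then be trained with one fold held out as the validation set and the other k-1 folds as training data.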
However, a classifier that only uses U-Net-assisted feature extraction still cannot achieve fine-grained classification on some global features, so additional classifiers are needed to ensure overall classification accuracy. In the present invention we train DenseNet, which generalizes well, on the current classification task; during training, we pre-train the model with Cross Entropy Loss and then fine-tune the parameters with Focal Loss so that the model focuses more on hard samples.
In step (5), the model ensembling and fusion methods include:
(1) Voting: take the mode of the outputs of the multiple models as the final result; or,
(2) Weighted averaging: assign different weights to the multiple models, compute their weighted mean, and determine the final label from the weighted mean; or,
(3) Stacking: train a linear classifier that takes the outputs of the multiple models as input, and use it as the model that determines the label.
The preferred fusion method of the present invention is weighted averaging, in which the general classification models share the same weight, and the segmentation-pretrained classification model has k times the weight of a general classification model, k being the number of general classification models.
Weighted averaging is preferred because, compared with voting, it better balances the relative importance of the segmentation-pretrained classification model and the general classification models, and effectively highlights the contribution of the segmentation-pretrained model to the final model.
Given the segmentation-pretrained classification model and the k DenseNet classification models obtained in the above steps, we fuse the k + 1 models by weighted averaging. To highlight the contribution of the segmentation model, we use the following weighted average, where p(x) is the final prediction and x is the input image matrix:
p(x) = (λ·S(x) + Σ_{i=1}^{k} d_i(x)) / (λ + k)
where S is the segmentation-pretrained classification model function, d_i are the DenseNet classification functions, k is the number of folds, and λ is the weight assigned to the segmentation-pretrained classification model.
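The weighted average can be sketched as below. The closed form (λ·S(x) + Σ d_i(x)) / (λ + k) is reconstructed from the stated weighting rule (the segmentation-pretrained model carries k times the weight of one general model), so λ defaults to k here; treat the normalization as an assumption.

```python
def fused_probability(seg_score, dense_scores, lam=None):
    """Weighted mean of one segmentation-pretrained score S(x) and
    k DenseNet scores d_i(x): (lam*S + sum(d_i)) / (lam + k).
    lam defaults to k, so the segmentation model balances the whole
    DenseNet ensemble."""
    k = len(dense_scores)
    if lam is None:
        lam = k
    return (lam * seg_score + sum(dense_scores)) / (lam + k)

# seg model says 0.9; five DenseNets each say 0.5
print(fused_probability(0.9, [0.5] * 5))  # (5*0.9 + 2.5) / 10 = 0.7
```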
The method for identifying unconventional cells in pathological sections provided by the invention combines the feature maps produced by semantic segmentation with multiple classification networks and achieves good test performance: measured by the F1 value, the algorithm reaches an F1 of 96% or above. The F1 value is the harmonic mean of precision and recall, computed as:
F1 = 2 · precision · recall / (precision + recall)
where precision and recall are computed as:
precision = tp / (tp + fp), recall = tp / (tp + fn)
where tp, fp, and fn are the numbers of true positives, false positives, and false negatives respectively.
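The F1 computation follows directly from the formulas above; the counts in the example are made up for illustration.

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp)   # tp / (tp + fp)
    recall = tp / (tp + fn)      # tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# both precision and recall are 96/100 here, so F1 is 0.96
print(round(f1_score(96, 4, 4), 4))
```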
Compared with the prior art, the invention has the following advantages:
1) The invention greatly reduces the heavy workload of pathologists, and is especially valuable for grassroots and community hospitals that lack pathologist resources.
2) The invention helps doctors screen out unconventional cells quickly and accurately; measured by the F1 value, the algorithm reaches an F1 of 96% or above.
Description of the drawings
Fig. 1 shows the overall structure of the fully convolutional network U-Net in the specific embodiment of the present invention.
Fig. 2 shows the overall structure of the general classification model DenseNet in the specific embodiment of the present invention.
Fig. 3 is the overall schematic of pathological section region classification in the specific embodiment of the present invention.
Fig. 4 is the training flow chart of the pathological section region classification model in the specific embodiment of the present invention.
Specific embodiments
For a further understanding of the present invention, the method for identifying unconventional cells in pathological sections provided by the invention is described in detail below with reference to specific embodiments; however, the present invention is not limited thereto, and non-essential modifications and adaptations made by those skilled in the art under the core guiding idea of the present invention still fall within the protection scope of the present invention.
A method for identifying unconventional cells in a pathological section, with the following specific steps:
1) Pathological section preprocessing and effective region discrimination
The input data are pathological sections at 20× magnification, divided into regions of 2048*2048 pixel resolution and stored separately.
Each 2048*2048 region is converted to the LAB color space; regions whose A-channel mean exceeds the threshold t = 132 are taken as effective critical regions, and the rest are discarded.
2) Training the converged slice segmentation model
(2-1) Compress the effective critical regions obtained in step 1) to regions of 512x512 pixel resolution;
(2-2) First divide the pixel values by 255 to project them onto the interval [0, 1], then apply z-score normalization, subtracting the mean and dividing by the standard deviation, to obtain a standard-normal image. The z-score is computed as:
z_i = (x_i - x̄) / s
where x_i is the input data, x̄ is the mean of the feature, and s is its standard deviation;
(2-3) Apply data augmentation (Data Augmentation) to the standard-normal image obtained in step (2-2): rotation, flipping, mirroring, brightness changes, random offsets, and similar operations, so that the network can learn features at different orientations and angles while reducing overfitting of the network's predictions.
(2-4) Feed the matrix obtained in step (2-3) into the fully convolutional network U-Net, whose structure is shown in Fig. 1, and compute the Dice Loss with the following formula:
D = 2 Σ_i p_i g_i / (Σ_i p_i + Σ_i g_i)
where p_i and g_i are respectively the model's predicted score and the label score for pixel i.
The label is a binary matrix of the same size as the input image, where 1 denotes unconventional cell pixels and 0 denotes conventional cell pixels. Since the effective critical region is compressed to 512*512 pixels in (2-1), the segmentation label is likewise compressed to 512*512 pixels to match the effective critical region.
(2-5) Minimize the Dice Loss with the Adam optimization algorithm until the network converges, obtaining the converged slice segmentation model.
3) On the basis of the slice segmentation model obtained in step 2), replace the last convolutional layer of U-Net with a fully connected layer whose output is a two-class classification; using critical regions that contain unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples, optimize the cross-entropy loss function (Cross Entropy Loss) with the Adam algorithm and update the network parameters until the classification model converges, obtaining the segmentation-pretrained classification model.
To prevent the large gradients produced by the randomly initialized fully connected layer from destroying the trained U-Net weights, we fine-tune in the following steps:
a) Freeze the U-Net weights and train only the fully connected layer until convergence;
b) Lower the learning rate of the U-Net part and train the whole network (U-Net plus the fully connected layer).
4) k-fold cross validation DenseNet training
(4-1) Divide the training set into 5 parts: 4 parts as the training set and 1 part as the validation set;
(4-2) For each split, train a DenseNet model with the remaining data as the training set; the structure of the DenseNet model is shown in Fig. 2. Save each model when its performance on the validation set is optimal, obtaining 5 models.
5) Model fusion
Given the segmentation-pretrained classification model and the 5 DenseNet classification models obtained in the above steps, we fuse the 6 models by weighted averaging. To highlight the contribution of the segmentation model, we use the following weighted average:
p(x) = (λ·S(x) + Σ_{i=1}^{k} d_i(x)) / (λ + k)
where S is the segmentation-pretrained classification model function, d_i are the DenseNet classification functions, k is the number of folds, and λ is the weight assigned to the segmentation-pretrained classification model.
After fusion, the final classification model is obtained; the training flow chart of the model is shown in Fig. 4.
6) Unconventional cell recognition
For a new, unlabeled pathological section, obtain its effective critical regions by step 1) (using the threshold t = 132) and feed them into the final classification model to estimate the probability that each region contains unconventional cells; output cells with a probability value of 0.5 or above as the recognition result.
Claims (8)
1. A method for identifying unconventional cells in a pathological section, comprising:
(1) preprocessing an electronically scanned pathological section to obtain the effective critical regions in the section, wherein within each effective critical region, pixels of unconventional cells are positive samples and pixels of conventional cells are negative samples;
(2) training a fully convolutional network on the positive and negative samples obtained in step (1), adjusting the network parameters according to the degree of overlap between the model predictions and the labels, to obtain a converged slice segmentation model;
(3) on the basis of the slice segmentation model obtained in step (2), replacing its segmentation head with a classifier; using critical regions that contain unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples, fine-tuning the network parameters to adapt the network to the classification task, obtaining a segmentation-pretrained classification model;
(4) among the effective critical regions obtained in step (1), using critical regions containing unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples, training k general classification models by k-fold cross validation with a common convolutional neural network classification method;
wherein k is an integer between 5 and 10;
(5) fusing the segmentation-pretrained classification model obtained in step (3) with the k general classification models obtained in step (4) by model ensembling, to build a final classification model;
(6) for a new, unlabeled pathological section, obtaining its effective critical regions by step (1), feeding them into the final classification model, and outputting cells with a probability value of 0.5 or above as the recognition result.
2. The method according to claim 1, characterized in that in step (1) the preprocessing comprises:
(1-1) dividing the 20× magnification pathological section into equally sized blocks of 512*512 to 2048*2048 pixels and storing them separately;
(1-2) converting each block to the LAB color space, taking blocks whose A-channel mean exceeds a threshold t as effective critical regions and discarding the rest;
wherein the threshold t is 120 to 150.
3. The method according to claim 1, characterized in that in step (2) the methods for assessing the degree of overlap between model predictions and labels include Dice Loss, Cross Entropy, and Mean Squared Error.
4. The method according to claim 1 or 3, characterized in that in step (2) the training of the converged slice segmentation model comprises:
(2-1) compressing each input effective critical region with an image compression algorithm to a matrix of 256*256 to 512*512 pixels;
(2-2) normalizing the above matrix and transforming it to a standard normal distribution by redistribution and the z-score method;
(2-3) applying data augmentation to the standard-normal image obtained in step (2-2): rotation, flipping, mirroring, brightness changes, and random offset operations;
(2-4) feeding the matrix obtained in step (2-3) into the fully convolutional network and computing the Dice Loss;
(2-5) minimizing the Dice Loss with the Adam optimization algorithm until the network converges, obtaining the converged slice segmentation model.
5. The method according to claim 1, characterized in that in step (3) the network fine-tuning method comprises:
(3-1) replacing the last convolutional layer of U-Net with a fully connected layer whose output is a two-class classification;
(3-2) using critical regions that contain unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples, optimizing the cross-entropy loss function with the Adam algorithm and updating the network parameters until the classification model converges, obtaining the segmentation-pretrained classification model.
6. The method according to claim 1, characterized in that in step (4) the k general classification models are trained by k-fold cross validation, specifically: first dividing the training data into k equal parts by stratified sampling; each time, using one part as validation data and the remaining k-1 parts as training data, obtaining k general classification models;
wherein k is an integer between 5 and 10;
and the training data consist of critical regions containing unconventional cells as positive examples and critical regions entirely free of unconventional cells as negative examples.
7. The method according to claim 1, characterized in that in step (5) the model ensembling and fusion method comprises voting, weighted averaging, or stacking.
8. The method according to claim 7, characterized in that in the weighted averaging the general classification models share the same weight, and the segmentation-pretrained classification model has k times the weight of a general classification model, k being the number of general classification models.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810097641.1A CN108346145B (en) | 2018-01-31 | 2018-01-31 | Identification method of unconventional cells in pathological section |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108346145A true CN108346145A (en) | 2018-07-31 |
CN108346145B CN108346145B (en) | 2020-08-04 |
Family
ID=62961468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810097641.1A Active CN108346145B (en) | 2018-01-31 | 2018-01-31 | Identification method of unconventional cells in pathological section |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108346145B (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109191476A (en) * | 2018-09-10 | 2019-01-11 | 重庆邮电大学 | The automatic segmentation of Biomedical Image based on U-net network structure |
CN109190682A (en) * | 2018-08-13 | 2019-01-11 | 北京安德医智科技有限公司 | A kind of classification method and equipment of the brain exception based on 3D nuclear magnetic resonance image |
CN109242849A (en) * | 2018-09-26 | 2019-01-18 | 上海联影智能医疗科技有限公司 | Medical image processing method, device, system and storage medium |
CN109544563A (en) * | 2018-11-12 | 2019-03-29 | 北京航空航天大学 | A kind of passive millimeter wave image human body target dividing method towards violated object safety check |
CN109620152A (en) * | 2018-12-16 | 2019-04-16 | 北京工业大学 | A kind of electrocardiosignal classification method based on MutiFacolLoss-Densenet |
CN109685077A (en) * | 2018-12-13 | 2019-04-26 | 深圳先进技术研究院 | A kind of breast lump image-recognizing method and device |
CN109754403A (en) * | 2018-11-29 | 2019-05-14 | 中国科学院深圳先进技术研究院 | Tumour automatic division method and system in a kind of CT image |
CN109785334A (en) * | 2018-12-17 | 2019-05-21 | 深圳先进技术研究院 | Cardiac magnetic resonance images dividing method, device, terminal device and storage medium |
CN109857351A (en) * | 2019-02-22 | 2019-06-07 | 北京航天泰坦科技股份有限公司 | The Method of printing of traceable invoice |
CN110110661A (en) * | 2019-05-07 | 2019-08-09 | 西南石油大学 | A kind of rock image porosity type recognition methods based on unet segmentation |
CN110634134A (en) * | 2019-09-04 | 2019-12-31 | 杭州憶盛医疗科技有限公司 | Novel artificial intelligent automatic diagnosis method for cell morphology |
CN110853021A (en) * | 2019-11-13 | 2020-02-28 | 江苏迪赛特医疗科技有限公司 | Construction of detection classification model of pathological squamous epithelial cells |
CN110853022A (en) * | 2019-11-14 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Pathological section image processing method, device and system and storage medium |
US10579924B1 (en) | 2018-09-17 | 2020-03-03 | StradVision, Inc. | Learning method, learning device with multi-feeding layers and testing method, testing device using the same |
CN111144488A (en) * | 2019-12-27 | 2020-05-12 | 之江实验室 | Pathological section visual field classification improving method based on adjacent joint prediction |
CN111325103A (en) * | 2020-01-21 | 2020-06-23 | 华南师范大学 | Cell labeling system and method |
CN111340064A (en) * | 2020-02-10 | 2020-06-26 | 中国石油大学(华东) | Hyperspectral image classification method based on high-low order information fusion |
CN111627032A (en) * | 2020-05-14 | 2020-09-04 | 安徽慧软科技有限公司 | CT image body organ automatic segmentation method based on U-Net network |
CN112084931A (en) * | 2020-09-04 | 2020-12-15 | 厦门大学 | DenseNet-based leukemia cell microscopic image classification method and system |
CN112132166A (en) * | 2019-06-24 | 2020-12-25 | 杭州迪英加科技有限公司 | Intelligent analysis method, system and device for digital cytopathology image |
CN112435259A (en) * | 2021-01-27 | 2021-03-02 | 核工业四一六医院 | Cell distribution model construction and cell counting method based on single sample learning |
CN112446876A (en) * | 2020-12-11 | 2021-03-05 | 北京大恒普信医疗技术有限公司 | anti-VEGF indication distinguishing method and device based on image and electronic equipment |
CN113034448A (en) * | 2021-03-11 | 2021-06-25 | 电子科技大学 | Pathological image cell identification method based on multi-instance learning |
CN113192047A (en) * | 2021-05-14 | 2021-07-30 | 杭州迪英加科技有限公司 | Method for automatically interpreting KI67 pathological section based on deep learning |
US20220148189A1 (en) * | 2020-11-10 | 2022-05-12 | Nec Laboratories America, Inc. | Multi-domain semantic segmentation with label shifts |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101168067A (en) * | 2007-09-28 | 2008-04-30 | 浙江大学 | Method for decreasing immune cell surface antigenic sites immunogenicity and use |
US20080299605A1 (en) * | 2007-06-01 | 2008-12-04 | Lary Todd P | Useful specimen transport apparatus with integral capability to allow three dimensional x-ray images |
CN102289500A (en) * | 2011-08-24 | 2011-12-21 | 浙江大学 | Method and system for displaying pathological section multi-granularity medical information |
CN106097391A (en) * | 2016-06-13 | 2016-11-09 | 浙江工商大学 | A kind of multi-object tracking method identifying auxiliary based on deep neural network |
CN106250812A (en) * | 2016-07-15 | 2016-12-21 | 汤平 | A kind of model recognizing method based on quick R CNN deep neural network |
CN107145867A (en) * | 2017-05-09 | 2017-09-08 | 电子科技大学 | Face and face occluder detection method based on multitask deep learning |
US9782585B2 (en) * | 2013-08-27 | 2017-10-10 | Halo Neuro, Inc. | Method and system for providing electrical stimulation to a user |
2018
- 2018-01-31 CN CN201810097641.1A patent/CN108346145B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080299605A1 (en) * | 2007-06-01 | 2008-12-04 | Lary Todd P | Useful specimen transport apparatus with integral capability to allow three dimensional x-ray images |
CN101168067A (en) * | 2007-09-28 | 2008-04-30 | 浙江大学 | Method for decreasing immune cell surface antigenic sites immunogenicity and use |
CN102289500A (en) * | 2011-08-24 | 2011-12-21 | 浙江大学 | Method and system for displaying pathological section multi-granularity medical information |
US9782585B2 (en) * | 2013-08-27 | 2017-10-10 | Halo Neuro, Inc. | Method and system for providing electrical stimulation to a user |
CN106097391A (en) * | 2016-06-13 | 2016-11-09 | 浙江工商大学 | A kind of multi-object tracking method identifying auxiliary based on deep neural network |
CN106250812A (en) * | 2016-07-15 | 2016-12-21 | 汤平 | A kind of model recognizing method based on quick R CNN deep neural network |
CN107145867A (en) * | 2017-05-09 | 2017-09-08 | 电子科技大学 | Face and face occluder detection method based on multitask deep learning |
Non-Patent Citations (2)
Title |
---|
ILIJAS FARAH: "CONVOLUTIONS OF PATHOLOGICAL SUBMEASURES", 《MEASURE THEORY》 * |
吕力兢: "Gland Segmentation in Colon Pathology Images Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109190682A (en) * | 2018-08-13 | 2019-01-11 | 北京安德医智科技有限公司 | A kind of classification method and equipment of the brain exception based on 3D nuclear magnetic resonance image |
CN109190682B (en) * | 2018-08-13 | 2020-12-18 | 北京安德医智科技有限公司 | Method and equipment for classifying brain abnormalities based on 3D nuclear magnetic resonance image |
US11887300B2 (en) | 2018-08-13 | 2024-01-30 | Beijing Ande Yizhi Technology Co., Ltd. | Method and apparatus for classifying a brain anomaly based on a 3D MRI image |
CN109191476A (en) * | 2018-09-10 | 2019-01-11 | 重庆邮电大学 | The automatic segmentation of Biomedical Image based on U-net network structure |
CN109191476B (en) * | 2018-09-10 | 2022-03-11 | 重庆邮电大学 | Novel biomedical image automatic segmentation method based on U-net network structure |
EP3624015A1 (en) * | 2018-09-17 | 2020-03-18 | Stradvision, Inc. | Learning method, learning device with multi-feeding layers and testing method, testing device using the same |
CN110909748A (en) * | 2018-09-17 | 2020-03-24 | 斯特拉德视觉公司 | Image encoding method and apparatus using multi-feed |
KR102313604B1 (en) * | 2018-09-17 | 2021-10-19 | 주식회사 스트라드비젼 | Learning method, learning device with multi feeding layers and test method, test device using the same |
CN110909748B (en) * | 2018-09-17 | 2023-09-19 | 斯特拉德视觉公司 | Image encoding method and apparatus using multi-feed |
US10579924B1 (en) | 2018-09-17 | 2020-03-03 | StradVision, Inc. | Learning method, learning device with multi-feeding layers and testing method, testing device using the same |
KR20200031992A (en) * | 2018-09-17 | 2020-03-25 | 주식회사 스트라드비젼 | Learning method, learning device with multi feeding layers and test method, test device using the same |
CN109242849A (en) * | 2018-09-26 | 2019-01-18 | 上海联影智能医疗科技有限公司 | Medical image processing method, device, system and storage medium |
CN109544563A (en) * | 2018-11-12 | 2019-03-29 | 北京航空航天大学 | A kind of passive millimeter wave image human body target dividing method towards violated object safety check |
CN109754403A (en) * | 2018-11-29 | 2019-05-14 | 中国科学院深圳先进技术研究院 | Tumour automatic division method and system in a kind of CT image |
CN109685077A (en) * | 2018-12-13 | 2019-04-26 | 深圳先进技术研究院 | A kind of breast lump image-recognizing method and device |
CN109620152A (en) * | 2018-12-16 | 2019-04-16 | 北京工业大学 | A kind of electrocardiosignal classification method based on MutiFacolLoss-Densenet |
CN109620152B (en) * | 2018-12-16 | 2021-09-14 | 北京工业大学 | MutifacolLoss-densenert-based electrocardiosignal classification method |
CN109785334A (en) * | 2018-12-17 | 2019-05-21 | 深圳先进技术研究院 | Cardiac magnetic resonance images dividing method, device, terminal device and storage medium |
CN109857351A (en) * | 2019-02-22 | 2019-06-07 | 北京航天泰坦科技股份有限公司 | The Method of printing of traceable invoice |
CN110110661A (en) * | 2019-05-07 | 2019-08-09 | 西南石油大学 | A kind of rock image porosity type recognition methods based on unet segmentation |
CN112132166B (en) * | 2019-06-24 | 2024-04-19 | 杭州迪英加科技有限公司 | Intelligent analysis method, system and device for digital cell pathology image |
CN112132166A (en) * | 2019-06-24 | 2020-12-25 | 杭州迪英加科技有限公司 | Intelligent analysis method, system and device for digital cytopathology image |
CN110634134A (en) * | 2019-09-04 | 2019-12-31 | 杭州憶盛医疗科技有限公司 | Novel artificial intelligent automatic diagnosis method for cell morphology |
CN110853021A (en) * | 2019-11-13 | 2020-02-28 | 江苏迪赛特医疗科技有限公司 | Construction of detection classification model of pathological squamous epithelial cells |
US11967069B2 (en) | 2019-11-14 | 2024-04-23 | Tencent Technology (Shenzhen) Company Limited | Pathological section image processing method and apparatus, system, and storage medium |
CN110853022B (en) * | 2019-11-14 | 2020-11-06 | 腾讯科技(深圳)有限公司 | Pathological section image processing method, device and system and storage medium |
CN110853022A (en) * | 2019-11-14 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Pathological section image processing method, device and system and storage medium |
CN111144488B (en) * | 2019-12-27 | 2023-04-18 | 之江实验室 | Pathological section visual field classification improving method based on adjacent joint prediction |
CN111144488A (en) * | 2019-12-27 | 2020-05-12 | 之江实验室 | Pathological section visual field classification improving method based on adjacent joint prediction |
CN111325103A (en) * | 2020-01-21 | 2020-06-23 | 华南师范大学 | Cell labeling system and method |
CN111325103B (en) * | 2020-01-21 | 2020-11-03 | 华南师范大学 | Cell labeling system and method |
CN111340064A (en) * | 2020-02-10 | 2020-06-26 | 中国石油大学(华东) | Hyperspectral image classification method based on high-low order information fusion |
CN111627032A (en) * | 2020-05-14 | 2020-09-04 | 安徽慧软科技有限公司 | CT image body organ automatic segmentation method based on U-Net network |
CN112084931A (en) * | 2020-09-04 | 2020-12-15 | 厦门大学 | DenseNet-based leukemia cell microscopic image classification method and system |
US20220148189A1 (en) * | 2020-11-10 | 2022-05-12 | Nec Laboratories America, Inc. | Multi-domain semantic segmentation with label shifts |
CN112446876A (en) * | 2020-12-11 | 2021-03-05 | 北京大恒普信医疗技术有限公司 | anti-VEGF indication distinguishing method and device based on image and electronic equipment |
CN112435259A (en) * | 2021-01-27 | 2021-03-02 | 核工业四一六医院 | Cell distribution model construction and cell counting method based on single sample learning |
CN113034448A (en) * | 2021-03-11 | 2021-06-25 | 电子科技大学 | Pathological image cell identification method based on multi-instance learning |
CN113192047A (en) * | 2021-05-14 | 2021-07-30 | 杭州迪英加科技有限公司 | Method for automatically interpreting KI67 pathological section based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN108346145B (en) | 2020-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108346145A (en) | Identification method of unconventional cells in pathological section | |
Kaymak et al. | Breast cancer image classification using artificial neural networks | |
CN110399929B (en) | Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium | |
Ran et al. | Cataract detection and grading based on combination of deep convolutional neural network and random forests | |
CN109345538A (en) | A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks | |
CN109447998B (en) | Automatic segmentation method based on PCANet deep learning model | |
CN111798464A (en) | Lymphoma pathological image intelligent identification method based on deep learning | |
CN112215117A (en) | Abnormal cell identification method and system based on cervical cytology image | |
CN107679525A (en) | Image classification method, device and computer-readable recording medium | |
Guo et al. | Automated glaucoma screening method based on image segmentation and feature extraction | |
CN112132817A (en) | Retina blood vessel segmentation method for fundus image based on mixed attention mechanism | |
CN109635846A (en) | A kind of multiclass medical image judgment method and system | |
CN112215807A (en) | Cell image automatic classification method and system based on deep learning | |
CN108537751A (en) | A kind of Thyroid ultrasound image automatic segmentation method based on radial base neural net | |
CN112365471B (en) | Cervical cancer cell intelligent detection method based on deep learning | |
CN104751186A (en) | Iris image quality classification method based on BP (back propagation) network and wavelet transformation | |
CN110728312A (en) | Dry eye grading system based on regional self-adaptive attention network | |
Elsalamony | Detection of anaemia disease in human red blood cells using cell signature, neural networks and SVM | |
CN113343755A (en) | System and method for classifying red blood cells in red blood cell image | |
CN116580394A (en) | White blood cell detection method based on multi-scale fusion and deformable self-attention | |
Song et al. | Red blood cell classification based on attention residual feature pyramid network | |
CN113129281B (en) | Wheat stem section parameter detection method based on deep learning | |
Dong et al. | Supervised learning-based retinal vascular segmentation by m-unet full convolutional neural network | |
CN114300099A (en) | Allolymphocyte typing method based on YOLOv5 and microscopic hyperspectral image | |
CN110490088A (en) | DBSCAN Density Clustering method based on region growth method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |