CN109117703B - Hybrid cell type identification method based on fine-grained identification - Google Patents
- Authority
- CN
- China
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/695—Preprocessing, e.g. image segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/698—Matching; Classification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a method for identifying cell types in mixed-cell images based on fine-grained recognition, comprising the following steps: pre-establishing a fine-grained recognition convolutional neural network model and a cell image database, wherein the cell image database comprises mixed cell images, a mixed cell image being an image that contains multiple types of cells; S1, acquiring a mixed cell image; S2, inputting the mixed cell image into the fine-grained recognition convolutional neural network model to obtain a cell type heat map; S3, thresholding the mixed cell image to obtain a binary image of the cell regions; and S4, combining the cell region binary image and the cell type heat map to obtain the cell type identification result. The invention identifies cell types accurately from the specificity of cell morphological features, avoiding the long turnaround and complicated workflow of traditional cell type identification methods. The model learns fine-grained cell morphology and identifies cell types from texture and similar cues, with high identification accuracy and robustness.
Description
Technical Field
The invention relates to the fields of biomedical image processing and machine learning, and in particular to a method for identifying cell types in mixed-cell images.
Background
In biomedical experiments, cell lines are frequently misidentified or cross-contaminated. Using a misidentified or cross-contaminated cell line can have serious consequences, such as irreproducible experimental results, incorrect research conclusions, and disasters in clinical cell therapy, while wasting large amounts of labor, effort, and money. The traditional cell line authentication method compares DNA information from a cell sample against gene loci in a cell bank to determine the cell line type and whether it is cross-contaminated; this approach is costly and time-consuming.
Recently, deep convolutional neural networks have enjoyed great success in many visual tasks. Compared with traditional machine learning methods, a convolutional neural network needs no expert-designed features: it automatically extracts suitable image features and applies them to tasks such as classification, detection, and semantic segmentation, achieving strong performance. More and more researchers are applying deep convolutional neural networks to medical image processing with good results. In cell image recognition, most prior art first segments individual cells from an image and then classifies each cell by its morphological features. These methods work well when the image contains only a single cell, but when cells grow densely and the detection area contains many cells, segmentation becomes difficult and cell morphology is easily disturbed by other cell types, reducing identification accuracy.
Fine-grained recognition refers to recognition among subclasses or instances of the same class. For example, poodles, shepherds, and bulldogs all belong to the class "dog" and differ only slightly in shape; they must be distinguished by features such as coat color and texture. Fine-grained recognition methods fall roughly into two types: local models and global models. A local model first locates the most discriminative parts of the object and then extracts features at those locations to determine the object class. A global model classifies an image from features of the whole image; classical image representations such as the visual dictionary and its texture-analysis variants belong to this category. Researchers at home and abroad have used convolutional neural networks to extract fine-grained classification features, with excellent performance in tasks such as texture recognition, scene recognition, and fine-grained classification.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a mixed cell type identification method that is simple to operate and accurate in its results: cell types are identified accurately from the specificity of cell morphological features, avoiding the long turnaround and complicated workflow of traditional cell type identification methods. The method uses a fine-grained recognition convolutional neural network model to identify cell types at the pixel level in a mixed cell image; the model learns fine-grained cell morphology and identifies cell types from texture and similar cues. Compared with a generic deep convolutional neural network, the method achieves higher identification accuracy and robustness when cells are small, grow densely, and the image contains multiple cell types.
The specific scheme of the invention is as follows:
a method for identifying the types of mixed cells based on fine-grained identification comprises the following steps:
pre-establishing a fine-grained recognition convolutional neural network model and a cell image database, wherein the cell image database comprises a mixed cell image which is an image comprising multiple types of cells;
s1, collecting mixed cell images;
s2, inputting the mixed cell image into a fine-grained identification convolutional neural network model to obtain a cell type heat map;
s3, thresholding the mixed cell image to obtain a cell region binary image;
and S4, combining the cell region binary image and the cell type heat map to obtain a cell type identification result.
The invention acquires mixed cell images under a microscope and identifies cell types accurately through the specificity of cell morphological features, avoiding the long turnaround and complicated workflow of traditional cell type identification methods. Compared with a generic deep convolutional neural network model, it is more accurate and more robust when cells are small, grow densely, and the image contains multiple cell types. The thresholded binary image of the cell regions separates the background from the cells, so false detections in the background can be removed in subsequent processing, improving identification accuracy.
Further, the cell image database also comprises a single cell image labeled with a cell type label, wherein the single cell image is an image comprising a single type of cell; the step of pre-establishing a fine-grained identification convolutional neural network model comprises the following steps:
constructing a fine-grained identification convolutional neural network model;
collecting images of the individual cells;
and training a fine-grained identification convolutional neural network model through a single cell image.
The fine-grained recognition convolutional neural network used in the invention only requires single-cell images and their corresponding cell type labels as training data. This avoids the pixel-level cell type labels required for semantic segmentation of cell images and saves a great deal of manual annotation effort.
Further, data amplification is performed on the single-cell images before training. Data amplification increases the effective size of the data set and thereby improves the training of the model.
Further, the data amplification process comprises: translation, scaling, rotation, and color channel offset.
Further, images in the cell image database are preprocessed before use. Compared with the raw images, preprocessed images make it easier for the fine-grained recognition convolutional neural network to extract cell morphological features, improving both the model's identification accuracy and its training efficiency.
Further, the pretreatment process comprises: background illumination normalization, brightness normalization and contrast enhancement.
Further, before an image is input into the fine-grained recognition convolutional neural network model, it is cut into a number of image blocks. The image blocks cut from a mixed cell image are fed to the model to obtain cell type labels, and these labels are combined with the mixed cell image to form a cell type heat map. Specifically, the image is cut into image blocks with a sliding window; each cell type label is converted into a pixel value and mapped to the center position of its image block, and the blocks are then reassembled in the layout of the original mixed cell image to form the final cell type heat map.
Further, after thresholding, morphological operations are required to remove noise and holes to obtain a binary image of the cell region.
Further, step S4 is specifically: for each connected region in the binary image of the cell regions, count the cell type labels of the pixels inside the region, and assign the most frequent label to all pixels of that region, giving the cell type identification result. Here, "the cell type label of a pixel" means the label value that was mapped onto that pixel in the heat map.
Further, the fine-grained recognition convolutional neural network model comprises five convolution blocks, a bilinear (outer-product) pooling layer, and a fully connected layer.
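The bilinear pooling step of such a model can be sketched in plain NumPy. This is a minimal illustration, not the patent's implementation: the five convolution blocks and the fully connected classifier are omitted, and the (C, N) feature-map layout, the averaging over positions, and the signed-square-root/L2 normalization are assumptions drawn from the bilinear-CNN literature rather than from the patent text.

```python
import numpy as np

def bilinear_pool(features):
    """Bilinear (outer-product) pooling of a convolutional feature map.

    features: (C, N) array -- C channels over N = H*W spatial positions,
    e.g. the flattened output of the last convolution block.
    Returns a (C*C,) descriptor: channel-wise outer products averaged
    over all positions, then signed square root and L2 normalization,
    as is customary for bilinear CNNs.
    """
    C, N = features.shape
    B = features @ features.T / N            # (C, C) second-order statistics
    v = B.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))      # signed square root
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v       # L2 normalization

# toy example: 8 channels over a 4x4 spatial grid
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16))
desc = bilinear_pool(fmap)
print(desc.shape)  # (64,)
```

The (C*C,) descriptor would then feed the fully connected layer; capturing pairwise channel interactions is what lets the model pick up fine-grained texture cues.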
Compared with the prior art, the invention has the following beneficial effects:
(1) Cell images taken under a microscope serve directly as system input, so data acquisition is convenient: the user only needs to capture clear cell images under the microscope and upload them to the system to complete the identification. This avoids the cumbersome traditional workflow of sending cell samples to an authentication center and extracting the sample's genetic information.
(2) The method is robust. Preprocessing effectively removes uneven background illumination, normalizes image brightness, and enhances image contrast. Data amplification by translation, scaling, rotation, and color channel offset increases the number of training samples, avoids overfitting, and improves the robustness of the model.
(3) The method suits cell images from a variety of scenes. Most prior art only handles detection areas containing a single cell and performs poorly when cells grow densely and the detection area contains many cells. When the detection area contains many cells of several types, the proposed method effectively avoids interference from other cells and produces accurate predictions.
(4) The method produces accurate pixel-level cell type predictions. It is based on a fine-grained convolutional neural network in which a bilinear pooling layer models the interaction between image-level features and extracts fine-grained cell morphology. Even when cell position and shape vary and the detection area contains other cell types, it still produces accurate classification results.
(5) The fine-grained convolutional neural network can be trained end to end, so the training process is simple. Compared with prior multi-stage, multi-step methods, the model learning and training process is effectively simplified. Moreover, training only requires single-cell images and their type labels, so data set collection and annotation are simple.
Drawings
FIG. 1 is a principal flow diagram of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the present invention will be clearly and completely described below with reference to the accompanying drawings in the present invention, and it is obvious that the embodiments described below are only a part of embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art without making creative efforts based on the present patent, belong to the protection scope of the present patent.
The invention is further described below with reference to the accompanying drawings:
a method for identifying mixed cell types based on fine-grained recognition, as shown in fig. 1, comprising the following steps:
establishing a fine-grained recognition convolutional neural network model and a cell image database in advance, wherein the cell image database comprises a mixed cell image and a single cell image labeled with a cell type label, the mixed cell image is an image comprising multiple types of cells, and the single cell image is an image comprising single types of cells;
s1, collecting mixed cell images, and preprocessing the mixed cell images;
s2, inputting the mixed cell image into a fine-grained identification convolutional neural network model to obtain a cell type heat map;
s3, thresholding the preprocessed mixed cell image, and removing noise and holes by using morphological operation to obtain a cell region binary image, wherein the noise or the holes are objects or holes with the size smaller than 64 pixels;
and S4, combining the cell region binary image and the cell type heat map to obtain a cell type identification result.
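Step S3 above reduces the preprocessed image to a cell-region binary image by thresholding. The patent does not name a specific thresholding rule, so the sketch below uses Otsu's method as one plausible, assumption-level choice (pure NumPy):

```python
import numpy as np

def otsu_threshold(img):
    """Pick the global threshold that maximizes between-class variance
    (Otsu's method) for an 8-bit grey-scale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# toy bimodal image: dark background (40) and bright cells (200)
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)   # the cell-region binary image
```

Any global or adaptive threshold would fit here; the morphological clean-up of the resulting mask follows in S3's second half.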
The step of establishing the fine-grained recognition convolutional neural network model comprises the following steps:
constructing a fine-grained recognition convolutional neural network model comprising five convolution blocks, a bilinear pooling layer and a fully connected layer;
collecting images of the single cells, and carrying out preprocessing and data amplification on the images of the single cells;
and inputting the single cell image subjected to preprocessing and data amplification into a fine-grained identification convolutional neural network model for training.
The preprocessing process comprises: background illumination normalization, brightness normalization and contrast enhancement. Specifically: according to the cell size in the image, a Gaussian convolution kernel whose support $(W_{kernel}, H_{kernel})$ is larger than a single cell (e.g. 64x64) is selected, and the cell image is convolved with the Gaussian kernel to obtain the background illumination image of the cell image:

$$I_{bg}(x,y) = G(x,y) \otimes I_{src}(x,y), \qquad G(x,y) = \frac{1}{2\pi\sigma^{2}} \exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)$$

where $G(x,y)$ is the two-dimensional Gaussian convolution kernel, $\sigma$ is the standard deviation of the Gaussian distribution, $I_{src}$ is the original cell image, $I_{bg}$ is the background illumination image, and $\otimes$ denotes convolution.

The background illumination is then subtracted from the original cell image and the mean background level is added back, giving the cell image with homogenized background illumination:

$$I_{bg\_norm}(x,y) = I_{src}(x,y) - I_{bg}(x,y) + \overline{I_{bg}}$$

where $I_{bg\_norm}(x,y)$ is the cell image after background illumination homogenization and $\overline{I_{bg}}$ is the mean background illumination.

Finally, gray-scale normalization and contrast enhancement are performed. The border of the input image is first extended with nearest values; the mean and standard deviation of the input gray values are then computed, and the normalized gray value of each pixel is

$$I_{out}(x,y) = \frac{I_{in}(x,y) - \mu_{in}}{\sigma_{in}}\,\sigma_{out} + \mu_{out}$$

where $I_{in}(x,y)$ and $I_{out}(x,y)$ are the gray values of the input and output image pixels, $\mu_{in}$ and $\sigma_{in}$ are the gray-level mean and standard deviation of the input image, and $\mu_{out}$ and $\sigma_{out}$ are the mean and standard deviation set for the output image.
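The preprocessing steps above can be sketched in a few lines. This assumes SciPy's `gaussian_filter` as the Gaussian convolution (with nearest-value border extension) and uses illustrative target values of 128 and 32 for the output mean and standard deviation, which the patent does not fix:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(img, sigma=16.0, out_mean=128.0, out_std=32.0):
    """Background-illumination homogenization followed by grey-level
    normalization.

    sigma controls the Gaussian kernel; its support should exceed a
    single cell (the patent suggests a kernel larger than ~64x64 px).
    out_mean/out_std are the target output statistics (illustrative).
    """
    img = img.astype(np.float64)
    # I_bg = G_sigma (x) I_src : smooth estimate of the background lighting
    bg = gaussian_filter(img, sigma, mode='nearest')   # nearest-value border
    # I_bg_norm = I_src - I_bg + mean(I_bg)
    flat = img - bg + bg.mean()
    # I_out = (I_in - mu_in) / sigma_in * sigma_out + mu_out
    std = flat.std()
    out = (flat - flat.mean()) / (std if std > 0 else 1.0)
    return out * out_std + out_mean

# demo: an image with a left-to-right illumination gradient
img = 100.0 + 0.5 * np.tile(np.arange(64.0), (64, 1))
img[20:30, 20:30] -= 40        # a darker, cell-like region
out = preprocess(img)
```

The output's mean and standard deviation land exactly on the chosen targets, which is the point of the normalization: images from different sessions become directly comparable.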
The data amplification process includes translation, scaling, rotation, and color channel offset. Specifically: the original image is scaled by each of the coefficients {0.9, 1.0, 1.1} in turn, and each scaled image keeps the cell type label of the original image.
Each image from the previous step is then rotated by each of the angles {-90, 0, 90} degrees, again keeping the original label.
Each image from the previous step then has its gray values shifted by each of the color channel offsets {-10, 0, 10}, i.e. an offset coefficient is added to the brightness value of every channel, and the shifted image keeps the original label.
Through the above amplification operations, the data set grows by a factor of 3 × 3 × 3 = 27.
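The 27-fold amplification can be sketched as follows, assuming SciPy's `zoom` for the scaling step; only the scaling, rotation, and channel-offset factors enumerated in the embodiment are shown (the translation step is left out here):

```python
import numpy as np
from scipy.ndimage import zoom

def augment(img):
    """27-fold amplification of one single-cell image of shape (H, W, 3):
    scales {0.9, 1.0, 1.1} x rotations {-90, 0, 90} degrees x channel
    offsets {-10, 0, 10}. Every augmented image keeps the original
    cell type label.
    """
    out = []
    for s in (0.9, 1.0, 1.1):
        scaled = zoom(img, (s, s, 1), order=1)       # bilinear rescale
        for k in (-1, 0, 1):                         # np.rot90: k = +-1 is +-90 deg
            rot = np.rot90(scaled, k)
            for off in (-10, 0, 10):                 # brightness offset per channel
                shifted = np.clip(rot.astype(np.int16) + off, 0, 255)
                out.append(shifted.astype(np.uint8))
    return out

patches = augment(np.full((32, 32, 3), 100, dtype=np.uint8))
print(len(patches))  # 27
```

The clip to [0, 255] keeps the offset images valid 8-bit data; each of the 3 x 3 x 3 combinations reuses the original label, matching the description above.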
Before an image is input into the fine-grained recognition convolutional neural network model, it is cut into a number of image blocks. The image blocks cut from a mixed cell image are fed to the model to obtain cell type labels, and these labels are combined with the mixed cell image to form a cell type heat map. Specifically, the image is cut into image blocks with a sliding window; each cell type label is converted into a pixel value and mapped to the center position of its image block, and the blocks are then reassembled in the layout of the original mixed cell image to form the final cell type heat map.
Before the mixed cell image is cut into image blocks with the sliding window, the top and bottom of the image are each padded with a pixel block of height H_win/2 and gray value 0, and the left and right sides with a pixel block of width W_win/2 and gray value 0; a sliding window of size (W_win, H_win) is then moved over the padded image. A large sliding window shortens the recognition time but worsens the recognition result. After the image blocks are input into the fine-grained recognition convolutional neural network model, the resulting cell type labels are written to the central block of size (W_cnt_win, H_cnt_win) of each image block, and the blocks are combined back into the original mixed cell image to obtain the cell type heat map, where 1 ≤ W_cnt_win ≤ W_win and 1 ≤ H_cnt_win ≤ H_win. The window size (W_win, H_win) and the central block size (W_cnt_win, H_cnt_win) can be chosen according to actual needs; the default is (1, 1). When (W_cnt_win, H_cnt_win) is large, less time is needed but recognition is worse; when it is small, recognition is better but more time is needed.
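A minimal sketch of the padded sliding-window heat map follows. The trained fine-grained CNN is replaced by a stand-in `classify` callable, the window is square for brevity, and the `stride` argument plays the role of the central block size (W_cnt_win, H_cnt_win) — all assumptions for illustration:

```python
import numpy as np

def heatmap(img, classify, win=(32, 32), stride=(1, 1)):
    """Padded sliding-window cell type heat map.

    img: 2-D grey-scale mixed cell image.
    classify: callable patch -> integer cell type label (the trained
              fine-grained CNN in the patent; any stand-in here).
    win: (W_win, H_win) window size; stride plays the role of the
         central block (W_cnt_win, H_cnt_win) -- (1, 1) is densest
         and slowest, larger values are faster but coarser.
    """
    H, W = img.shape
    ww, wh = win
    # pad top/bottom by H_win/2 and left/right by W_win/2 with gray value 0
    padded = np.pad(img, ((wh // 2, wh // 2), (ww // 2, ww // 2)))
    heat = np.zeros((H, W), dtype=np.int32)
    for y in range(0, H, stride[1]):
        for x in range(0, W, stride[0]):
            label = classify(padded[y:y + wh, x:x + ww])
            heat[y:y + stride[1], x:x + stride[0]] = label   # central block
    return heat

# demo: left half background (0), right half bright cells (255)
img = np.zeros((16, 16)); img[:, 8:] = 255
toy = lambda p: int(p.mean() > 127)          # stand-in classifier
heat = heatmap(img, toy, win=(8, 8), stride=(4, 4))
```

Because of the padding, the window centered on each output position sees a full (W_win, H_win) context even at the image border, which is exactly why the zero-padding step precedes the sliding window.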
Step S4 is specifically: for each connected region in the binary image of the cell regions, count the cell type labels of the pixels inside the region, and assign the most frequent label to all pixels of that region, giving the cell type identification result. Here, "the cell type label of a pixel" means the label value that was mapped onto that pixel in the heat map.
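The clean-up and majority vote can be sketched with SciPy's labelling tools. The 64-pixel noise threshold follows the embodiment; `binary_fill_holes` is one assumption-level choice for the hole-removing morphological operation:

```python
import numpy as np
from scipy import ndimage

def vote_regions(binary, heat, min_size=64):
    """Step S4 sketch: clean the thresholded mask, then give every pixel
    of each connected cell region the majority label from the heat map.
    Holes are filled and objects below min_size pixels (64 px in the
    embodiment) are discarded as noise.
    """
    filled = ndimage.binary_fill_holes(binary)
    lbl, n = ndimage.label(filled)                # connected regions
    result = np.zeros_like(heat)
    for i in range(1, n + 1):
        mask = lbl == i
        if mask.sum() < min_size:                 # noise object: drop it
            continue
        labels, counts = np.unique(heat[mask], return_counts=True)
        result[mask] = labels[np.argmax(counts)]  # majority vote
    return result

# demo: one real 100-px region, one 9-px noise blob
binary = np.zeros((32, 32), dtype=bool)
binary[2:12, 2:12] = True
binary[20:23, 20:23] = True
heat = np.zeros((32, 32), dtype=int)
heat[2:12, 2:12] = 2
heat[2:4, 2:4] = 1            # a few mislabelled pixels
seg = vote_regions(binary, heat)
```

The vote is what makes the result robust: a handful of mislabelled window centers inside a region is overruled by the region's dominant label.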
Although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (8)
1. A method for identifying mixed cell types based on fine-grained identification, characterized by comprising the following steps:
pre-establishing a fine-grained identification convolutional neural network model and a cell image database, wherein the cell image database comprises a mixed cell image, the mixed cell image being an image comprising multiple types of cells; the step of establishing the fine-grained identification convolutional neural network model comprises: constructing a fine-grained identification convolutional neural network model comprising five convolution blocks, a bilinear pooling layer and a fully connected layer;
S1, collecting mixed cell images;
S2, inputting the mixed cell image into the fine-grained identification convolutional neural network model to obtain a cell type heat map; before an image is input into the fine-grained identification convolutional neural network model, the image needs to be cut into a plurality of image blocks; after the plurality of image blocks formed from the mixed cell image are input into the fine-grained identification convolutional neural network model, a plurality of cell type labels are obtained; the plurality of cell type labels are converted into pixel values and mapped to the center positions of the corresponding image blocks, and the plurality of image blocks are then rearranged and combined according to the original mixed cell image to form the final cell type heat map;
s3, thresholding the mixed cell image to obtain a cell region binary image;
s4, combining the cell region binary image and the cell type heat map to obtain a cell type identification result;
the step S4 is specifically: for each connected region in the binary image of the cell regions, counting the cell type labels of the pixels in the connected region, and taking the most frequent cell type label as the label of all pixels in the connected region, so as to obtain the cell type identification result.
2. The method for identifying the mixed cell types based on the fine-grained identification as claimed in claim 1, wherein the cell image database further comprises single cell images labeled with cell type labels, and the single cell images are images comprising single cell types; the step of pre-establishing a fine-grained identification convolutional neural network model comprises the following steps:
constructing a fine-grained identification convolutional neural network model;
collecting images of the individual cells;
and training a fine-grained identification convolutional neural network model through a single cell image.
3. The method for identifying mixed cell types based on fine-grained identification as claimed in claim 2, wherein data amplification is performed on the single cell images before training.
4. The method for identifying mixed cell types based on fine-grained identification as claimed in claim 3, wherein the data amplification process comprises: translation, scaling, rotation, and color channel offset.
5. The method for identifying the types of the mixed cells based on the fine-grained identification as claimed in any one of claims 1 to 4, wherein the images in the cell image database need to be preprocessed before being taken out for use.
6. The method for identifying the mixed cell types based on fine-grained identification as claimed in claim 5, wherein the preprocessing process comprises the following steps: background illumination normalization, brightness normalization and contrast enhancement.
7. The method for identifying the types of the mixed cells based on the fine-grained identification as claimed in any one of claims 1 to 4, wherein after thresholding, morphological operations are also used to remove noise and holes so as to obtain a binary image of the cell region.
8. The method for identifying mixed cell types based on fine-grained identification as claimed in any one of claims 1 to 4, wherein the fine-grained identification convolutional neural network model comprises five convolution blocks, a bilinear pooling layer and a fully connected layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810608329.4A CN109117703B (en) | 2018-06-13 | 2018-06-13 | Hybrid cell type identification method based on fine-grained identification |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810608329.4A CN109117703B (en) | 2018-06-13 | 2018-06-13 | Hybrid cell type identification method based on fine-grained identification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109117703A CN109117703A (en) | 2019-01-01 |
CN109117703B true CN109117703B (en) | 2022-03-22 |
Family
ID=64822925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810608329.4A Active CN109117703B (en) | 2018-06-13 | 2018-06-13 | Hybrid cell type identification method based on fine-grained identification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109117703B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886321B (en) * | 2019-01-31 | 2021-02-12 | 南京大学 | Image feature extraction method and device for fine-grained classification of icing image |
CN110675368B (en) * | 2019-08-31 | 2023-04-07 | 中山大学 | Cell image semantic segmentation method integrating image segmentation and classification |
CN111291692B (en) * | 2020-02-17 | 2023-10-20 | 咪咕文化科技有限公司 | Video scene recognition method and device, electronic equipment and storage medium |
CN112816480A (en) * | 2021-02-01 | 2021-05-18 | 奎泰斯特(上海)科技有限公司 | Water quality enzyme substrate identification method |
CN112560999B (en) * | 2021-02-18 | 2021-06-04 | 成都睿沿科技有限公司 | Target detection model training method and device, electronic equipment and storage medium |
CN113516022B (en) * | 2021-04-23 | 2023-01-10 | 黑龙江机智通智能科技有限公司 | Fine-grained classification system for cervical cells |
CN115700821B (en) * | 2022-11-24 | 2023-06-06 | 广东美赛尔细胞生物科技有限公司 | Cell identification method and system based on image processing |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102737232A (en) * | 2012-06-01 | 2012-10-17 | 天津大学 | Cleavage cell recognition method |
CN106127255A (en) * | 2016-06-29 | 2016-11-16 | 深圳先进技术研究院 | The sorting technique of a kind of cancer numeral pathological cells image and system |
CN106650796A (en) * | 2016-12-06 | 2017-05-10 | 国家纳米科学中心 | Artificial intelligence based cell fluorescence image classification method and system |
CN106795558A (en) * | 2014-05-30 | 2017-05-31 | Verinata Health, Inc. | Detecting fetal sub-chromosomal aneuploidy and copy number variation |
CN106991417A (en) * | 2017-04-25 | 2017-07-28 | 华南理工大学 | A kind of visual projection's interactive system and exchange method based on pattern-recognition |
CN106991673A (en) * | 2017-05-18 | 2017-07-28 | 深思考人工智能机器人科技(北京)有限公司 | A kind of cervical cell image rapid classification recognition methods of interpretation and system |
CN107133569A (en) * | 2017-04-06 | 2017-09-05 | 同济大学 | The many granularity mask methods of monitor video based on extensive Multi-label learning |
CN107563444A (en) * | 2017-09-05 | 2018-01-09 | 浙江大学 | A kind of zero sample image sorting technique and system |
WO2018065027A1 (en) * | 2016-10-03 | 2018-04-12 | Total E&P Uk Limited | Modelling geological faults |
CN108010021A (en) * | 2017-11-30 | 2018-05-08 | 上海联影医疗科技有限公司 | A kind of magic magiscan and method |
CN108021788A (en) * | 2017-12-06 | 2018-05-11 | 深圳市新合生物医疗科技有限公司 | The method and apparatus of deep sequencing data extraction biomarker based on cell free DNA |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011243148A (en) * | 2010-05-21 | 2011-12-01 | Sony Corp | Information processor, information processing method and program |
US9390327B2 (en) * | 2013-09-16 | 2016-07-12 | Eyeverify, Llc | Feature extraction and matching for biometric authentication |
US9852501B2 (en) * | 2016-05-23 | 2017-12-26 | General Electric Company | Textural analysis of diffused disease in the lung |
US20180124437A1 (en) * | 2016-10-31 | 2018-05-03 | Twenty Billion Neurons GmbH | System and method for video data collection |
- 2018-06-13 CN CN201810608329.4A patent/CN109117703B/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102737232A (en) * | 2012-06-01 | 2012-10-17 | Tianjin University | Dividing-cell recognition method |
CN106795558A (en) * | 2014-05-30 | 2017-05-31 | Verinata Health, Inc. | Detecting fetal sub-chromosomal aneuploidies and copy number variations |
CN106127255A (en) * | 2016-06-29 | 2016-11-16 | Shenzhen Institutes of Advanced Technology | Classification method and system for digital pathological images of cancer cells |
WO2018065027A1 (en) * | 2016-10-03 | 2018-04-12 | Total E&P Uk Limited | Modelling geological faults |
CN106650796A (en) * | 2016-12-06 | 2017-05-10 | National Center for Nanoscience and Technology | Artificial-intelligence-based cell fluorescence image classification method and system |
CN107133569A (en) * | 2017-04-06 | 2017-09-05 | Tongji University | Multi-granularity annotation method for surveillance video based on large-scale multi-label learning |
CN106991417A (en) * | 2017-04-25 | 2017-07-28 | South China University of Technology | Visual projection interactive system and interaction method based on pattern recognition |
CN106991673A (en) * | 2017-05-18 | 2017-07-28 | iDeepWise Artificial Intelligence Robot Technology (Beijing) Co., Ltd. | Interpretable rapid classification and recognition method and system for cervical cell images |
CN107563444A (en) * | 2017-09-05 | 2018-01-09 | Zhejiang University | Zero-shot image classification method and system |
CN108010021A (en) * | 2017-11-30 | 2018-05-08 | Shanghai United Imaging Healthcare Co., Ltd. | Medical image analysis system and method |
CN108021788A (en) * | 2017-12-06 | 2018-05-11 | Shenzhen Xinhe Bio-Medical Technology Co., Ltd. | Method and apparatus for extracting biomarkers from deep sequencing data based on cell-free DNA |
Non-Patent Citations (3)
Title |
---|
"Bilinear CNN Models for Fine-Grained Visual Recognition"; T. Lin et al.; 2015 IEEE International Conference on Computer Vision (ICCV); 2016-02-18; pp. 1449-1457 *
"Fast Fine-grained Image Classification via Weakly Supervised Discriminative Localization"; Xiangteng He et al.; Computer Vision and Pattern Recognition; 2017-09-30; pp. 1-13 *
"Establishment and application of a single-cell analysis method for the twitching motility of Pseudomonas aeruginosa"; Ni Lei et al.; Chinese Journal of Biotechnology; 2017-06-19; Vol. 33, No. 9, pp. 1611-1624 *
Also Published As
Publication number | Publication date |
---|---|
CN109117703A (en) | 2019-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109117703B (en) | Hybrid cell type identification method based on fine-grained identification | |
CN109034208B (en) | High-low resolution combined cervical cell slice image classification system | |
CN102651128B (en) | Image set partitioning method based on sampling | |
CN112750106B (en) | Nuclear staining cell counting method based on incomplete marker deep learning, computer equipment and storage medium | |
CN111415352B (en) | Cancer metastasis panoramic pathological section analysis method based on deep cascade network | |
CN103295013A (en) | Paired-region-based single-image shadow detection method | |
CN111798470B (en) | Crop image entity segmentation method and system applied to intelligent agriculture | |
CN116630971B (en) | Wheat scab spore segmentation method based on CRF_Resunate++ network | |
CN113222933A (en) | Image recognition system applied to renal cell carcinoma full-chain diagnosis | |
CN108776823A (en) | Cervical carcinoma lesion analysis method based on cell image recognition | |
CN111210447B (en) | Hematoxylin-eosin staining pathological image hierarchical segmentation method and terminal | |
CN104573701B (en) | Automatic detection method for corn tassels | |
CN113344933B (en) | Glandular cell segmentation method based on multi-level feature fusion network | |
CN110084820A (en) | Purple soil image adaptive division and extracting method based on improved FCM algorithm | |
CN112508860B (en) | Artificial intelligence interpretation method and system for positive check of immunohistochemical image | |
CN111489369B (en) | Helicobacter pylori positioning method and device and electronic equipment | |
CN111401434A (en) | Image classification method based on unsupervised feature learning | |
CN110930369A (en) | Pathological section identification method based on group equal variation neural network and conditional probability field | |
CN111209879B (en) | Unsupervised 3D object identification and retrieval method based on depth circle view | |
Li et al. | A novel denoising autoencoder assisted segmentation algorithm for cotton field | |
CN109086774B (en) | Color image binarization method and system based on naive Bayes | |
CN112241954A (en) | Full-view self-adaptive segmentation network configuration method based on lump differential classification | |
CN113408523A (en) | Image generation technology-based junk article image data set construction method | |
CN111815554A (en) | Cervical cell image segmentation method based on edge search MRF model | |
Li et al. | MCFF: Plant leaf detection based on multi-scale CNN feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |