CN114187277A - Deep learning-based thyroid cytology multi-type cell detection method - Google Patents

Deep learning-based thyroid cytology multi-type cell detection method

Info

Publication number
CN114187277A
Authority
CN
China
Prior art keywords
image
confidence
target
slide
classification result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111526196.4A
Other languages
Chinese (zh)
Other versions
CN114187277B (en)
Inventor
姚沁玥
汪进
陈李粮
林真
陈睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Severson Guangzhou Medical Technology Service Co ltd
Original Assignee
Severson Guangzhou Medical Technology Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Severson Guangzhou Medical Technology Service Co ltd filed Critical Severson Guangzhou Medical Technology Service Co ltd
Priority to CN202111526196.4A priority Critical patent/CN114187277B/en
Publication of CN114187277A publication Critical patent/CN114187277A/en
Application granted granted Critical
Publication of CN114187277B publication Critical patent/CN114187277B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/2431 - Multiple classes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10056 - Microscopic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30024 - Cell structures in vitro; Tissue sections in vitro
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30096 - Tumor; Lesion

Abstract

The present disclosure describes a deep learning-based method for detecting multiple types of thyroid cytology cells. The method comprises: acquiring a cytopathology slide image of thyroid cells; acquiring a plurality of target images from the slide image by sliding-window scanning with a selectable step size; processing the target images to obtain at least one feature image and, from it, confidence images whose number equals the number of thyroid lesion types; processing the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types; obtaining a classification result matching the target image based on those confidences; and processing the at least one confidence image according to the classification result to obtain the region in the target image that matches the classification result. The method improves the detection efficiency of the detection system and reduces the labeling cost when the detection system is trained.

Description

Deep learning-based thyroid cytology multi-type cell detection method
Technical Field
The disclosure particularly relates to a deep learning-based method for detecting various types of thyroid cytology cells.
Background
Liquid-based cytology, a branch of cytopathology, collects cell samples into a liquid fixative; after staining, the cells can be observed for diagnosis. A common application scenario is cervical slides used to screen for precancerous cervical lesions that may lead to cervical cancer.
At present, in a widely used diagnosis workflow for cytopathology slide images, an algorithm model first partitions the slide image into blocks, detects diseased cells within each block to obtain their positions and types (classification), and then integrates the block-level results. Object detection algorithms are commonly used to locate and classify diseased cells. However, such methods require dense annotation, which imposes a significant labeling cost on thyroid cytopathology slide images in which cells are densely scattered. CN113139931A discloses a thyroid slice image classification model training method that divides a cytopathology slide image into non-overlapping image blocks and maps the probability that a block is judged malignant to the block's position in the thyroid slice image, thereby obtaining a probability heat map of the slice image. CN111079862A discloses a deep-learning-based classification method for papillary thyroid carcinoma pathology images, which obtains an attention heat map through a VGG-F network, partitions the image into blocks, and then classifies each block as benign or malignant.
However, the methods disclosed above only estimate the probability that an image block belongs to a single lesion type and are not suitable for classifying multiple lesion types. For some cytopathology slide images (such as thyroid slides), an image block usually contains multiple cell clusters that rarely fall entirely within a single block, so integrating block-level classification results easily yields target boxes that deviate from the actual cells. Moreover, when the slide image is divided into non-overlapping blocks, a cell cluster is easily cut in two by adjacent blocks, causing feature loss and missed detections.
Disclosure of Invention
The present disclosure has been made in view of the above circumstances, and its object is to provide a deep-learning-based method for detecting multiple types of thyroid cytology cells that improves the detection efficiency of a detection system and reduces labeling cost.
To this end, a first aspect of the present disclosure provides a deep learning-based method for detecting multiple types of thyroid cytology cells, comprising: acquiring a cytopathology slide image of thyroid cells, the cytopathology slide image being a whole slide image; sequentially acquiring a plurality of target images from the slide image, adjacent target images having overlapping regions; processing the target image to obtain at least one feature image and obtaining at least one confidence image based on the at least one feature image, the number of confidence images being the same as the number of lesion types of diseased thyroid cells; processing the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types and obtaining a classification result matching the target image based on the plurality of confidences; and processing the at least one confidence image based on the classification result to obtain the region in the target image that matches the classification result.
In the present disclosure, compared with obtaining the positions of diseased cells directly through an object detection model, detection efficiency can be improved by combining a classification model with the confidence images. Meanwhile, only image-level classification labels for the target images are needed when training the detection system, so medical workers can annotate any diseased cell in isolation without finding and annotating the diseased cells around it, which reduces the labeling cost.
In addition, in the detection method according to the first aspect of the present disclosure, optionally, the ratio between the size of the overlapping region and the size of the target image is within a preset range. In this case, fewer cell clusters fail to fall completely within a single image block, which reduces deviations from the actual cells after the classification results of the target images are merged.
In addition, in the detection method according to the first aspect of the present disclosure, optionally, the at least one confidence image is pooled to obtain a plurality of confidences corresponding to different lesion types, and a classification result matching the target image is obtained based on the maximum of those confidences. In this case, the confidence with which the target image is judged to be each lesion type can be obtained.
In addition, in the detection method according to the first aspect of the present disclosure, optionally, the confidence image matching the classification result is acquired based on the classification result, and image binarization is performed on that confidence image to obtain the region in the target image matching the classification result. In this case, binarization yields a target region and a background region, so the positions of the diseased cells can be clearly expressed.
Further, in the detection method according to the first aspect of the present disclosure, optionally, the target image is processed by a deep convolutional network to obtain at least one feature image. In this case, the deep convolutional network can apply convolution and pooling to the target image to extract high-dimensional features and form the feature image.
Further, in the detection method according to the first aspect of the present disclosure, optionally, the lesion types include papillary thyroid carcinoma, medullary carcinoma, and suspected follicular tumor. In this case, cells with suspected follicular tumor, papillary thyroid carcinoma, medullary carcinoma, and other malignant lesions in the cytopathology slide image can be classified and localized.
In addition, in the detection method according to the first aspect of the present disclosure, optionally, the target image is framed based on the region matching the classification result. In this case, the diseased cells can be framed in the target image, making them easy for medical staff to find and assisting diagnosis.
A second aspect of the present disclosure provides a deep learning-based system for detecting and classifying diseased cells in a cytopathology slide image of thyroid diseased cells, including an acquisition module, a sliding window module, a feature extraction module, a classification module, and a localization module. The acquisition module is configured to acquire the cytopathology slide image, which is a whole slide image; the sliding window module is configured to sequentially acquire a plurality of target images from the cytopathology slide image, adjacent target images having overlapping regions; the feature extraction module is configured to process the target images to obtain at least one feature image and to obtain at least one confidence image based on the at least one feature image, the number of confidence images being the same as the number of lesion types; the classification module is configured to process the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types and to obtain a classification result matching the target image based on those confidences; and the localization module processes the at least one confidence image based on the classification result to obtain the region in the target image that matches the classification result.
In this case, since a classification model can be faster than an object detection model, detection efficiency can be improved by combining the classification model with the confidence images, compared with obtaining the positions of diseased cells directly through an object detection model. Meanwhile, only image-level classification labels for the target images are needed when training the detection system, so medical workers can annotate any diseased cell in isolation without finding and annotating the diseased cells around it, which reduces the labeling cost.
According to the present disclosure, a method for detecting multiple types of thyroid cytology cells based on deep learning is provided, which can improve the detection efficiency of a detection system and reduce the labeling cost.
Drawings
Embodiments of the present disclosure will now be explained in further detail, by way of example only, with reference to the accompanying drawings, in which:
fig. 1 is a schematic flow chart diagram illustrating a method for detecting a plurality of types of cells based on deep learning according to an example of the present disclosure.
Fig. 2 is a block diagram showing a structure of a detection system according to an example of the present disclosure.
Fig. 3 is a schematic diagram illustrating a cytopathology slide image according to an example of the present disclosure.
Fig. 4 is a schematic diagram illustrating acquisition of a target image according to an example of the present disclosure.
Fig. 5 is a schematic diagram illustrating a target image according to an example of the present disclosure.
Fig. 6 is a schematic diagram illustrating a feature image according to an example of the present disclosure.
Fig. 7 is a schematic diagram illustrating a partitioned image according to an example of the present disclosure.
Fig. 8 is a schematic diagram illustrating a target image with a target frame according to an example of the present disclosure.
Fig. 9 is a flow diagram illustrating a training method according to an example of the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description thereof is omitted. The drawings are schematic and the ratio of the dimensions of the components and the shapes of the components may be different from the actual ones.
It is noted that the terms "comprises", "comprising", and "having", and any variations thereof, in this disclosure are intended to cover a non-exclusive inclusion: for example, a process, method, system, article, or apparatus that comprises or has a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include or have other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. All methods described in this disclosure can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
The first aspect of the disclosure discloses a detection system for detecting multiple lesion types of lesion cells based on deep learning, which is a detection system for detecting and classifying the lesion cells in a cytopathology slide image.
In some examples, when the diseased cell is a thyroid diseased cell, the deep-learning-based multi-lesion-type detection system may also be referred to as a deep-learning-based thyroid cell multi-lesion-type detection system, a deep-learning-based thyroid cell multi-cell-type detection system, a weakly-supervised diseased cell detection system, a thyroid diseased cell detection system, a detection system, an identification system, a classification system, a system, or the like.
The second aspect of the present disclosure discloses a method for detecting multiple lesion types based on deep learning. In some examples, when the diseased cell is a thyroid cell, the method for detecting multiple types of lesions based on deep learning may also be referred to as a method for detecting multiple types of cells based on deep learning, a method for detecting multiple types of lesions of a thyroid cell based on deep learning, a method for detecting a diseased cell, an image classification method, or a detection method.
In some examples, the detection methods contemplated by the present disclosure may be implemented by the detection systems described herein. Fig. 1 is a schematic flow chart diagram illustrating a method for detecting a plurality of types of cells based on deep learning according to an example of the present disclosure. Fig. 2 is a block diagram showing the configuration of a detection system according to an embodiment of the present disclosure. Fig. 3 is a schematic diagram illustrating a cytopathology slide image according to an example of the present disclosure.
In some examples, referring to fig. 1, the detection method may include: acquiring a cytopathology slide 40 image (step S110); acquiring a plurality of target images from the images of the cytopathology slide 40 in sequence (step S120); processing the target image to obtain at least one feature image, and obtaining at least one confidence image based on the at least one feature image (step S130); processing the at least one confidence level image to obtain a plurality of confidence levels corresponding to different lesion types, and obtaining a classification result matched with the target image based on the plurality of confidence levels (step S140); the at least one confidence image is processed based on the classification result to obtain a region in the target image that matches the classification result (step S150).
The cytopathology slide 40 image can be detected by the detection method and the detection system 10 of the present disclosure to obtain the lesion type (thyroid cell type) of each diseased cell and the judgment basis (i.e., the position of the diseased cell in the cytopathology slide 40). This can subsequently assist medical staff in diagnosing the examinee: the lesion types and judgment bases are integrated and statistically analyzed into an analysis report, which medical staff can consult before diagnosing the examinee and which highlights the positions that deserve close attention, thereby improving diagnostic accuracy and speed. Meanwhile, a classification model can run faster than an object detection model, and an object detection model requires dense annotation during training, which imposes a large labeling cost on thyroid cytopathology slide 40 images with densely scattered cells. Therefore, compared with obtaining the positions of diseased cells directly through an object detection model, detection efficiency can be improved by combining the classification model with the confidence images.
In some examples, the cytopathology slide 40 image may be a cellular image acquired by scanning a cell slide made from the examinee's cells with a scanner (see fig. 3). For example, the cytopathology slide 40 image may be a Whole Slide Image (WSI). WSI images are generally very large, e.g., 600 MB to 1 GB, so conventional detection systems and methods are generally not suitable for processing them directly. The weakly supervised thyroid diseased cell detection system 10 of the present disclosure improves detection efficiency, saves computing resources, detects the lesion types of diseased cells in the cytopathology slide 40 more efficiently, and reduces the labeling cost when the detection system 10 is trained.
In some examples, the cytopathology slide 40 image may be taken from the thyroid, the cervix, or other parts such as lung tissue. For convenience, the following description uses a cytopathology slide 40 image taken from the thyroid as an example; it should be noted that the detection method of the present disclosure can also be applied to cytopathology slide 40 images of other parts such as the cervix or lung tissue.
In some examples, the lesion types in the detection system 10 of the present disclosure may include papillary thyroid carcinoma, medullary carcinoma, and suspected follicular tumor. In some examples, they may further include other positive (other malignant lesion) cells, among others. In this case, cells with suspected follicular tumor, papillary thyroid carcinoma, medullary carcinoma, and other malignant lesions in the cytopathology slide 40 image can be classified and localized.
In some examples, the detection system 10 may implement the detection methods referred to herein.
In some examples, the detection system 10 may include an acquisition module 110, a sliding window module, a feature extraction module 130, a classification module 140, and a localization module 150 (see fig. 2). In some examples, the acquisition module 110 can be configured to acquire the cytopathology slide 40 image; the sliding window module is configured to sequentially acquire a plurality of target images from the cytopathology slide 40 image, adjacent target images having overlapping regions; the feature extraction module 130 is configured to process the target images to obtain at least one feature image and to obtain at least one confidence image based on the at least one feature image, the number of confidence images being the same as the number of lesion types of the diseased cells; the classification module 140 is configured to process the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types and to obtain a classification result matching the target image based on those confidences; and the localization module 150 processes the at least one confidence image based on the classification result to obtain the region in the target image matching the classification result.
In this case, since a classification model can be faster than an object detection model, detection efficiency can be improved by combining the classification model with the confidence images, compared with obtaining the positions of diseased cells directly through an object detection model. Meanwhile, only image-level classification labels for the target images are needed when training the detection system 10, so medical workers can annotate any diseased cell in isolation without finding and annotating the diseased cells around it, which reduces the labeling cost.
In some examples, as described above, the acquisition module 110 is configured to acquire the cytopathology slide 40 image. Specifically, an acquisition device (e.g., a scanner) may perform a high-resolution scan of the slide to acquire the cytopathology slide 40 image, which can be uploaded to the detection system 10 after the scan is complete. The detection system 10 can then process and detect the cytopathology slide 40 image by executing computer program instructions, for example to detect the lesion types and positions of the diseased cells in the image.
In some examples, the acquisition module 110 may be used to acquire multiple cytopathology slide 40 images at different resolutions. In some examples, the acquisition module 110 may acquire a slice image containing multiple cytopathology slide 40 images of different resolutions. For example, the acquisition device (e.g., a scanner) may scan the entire slide at high resolution at several different magnifications to acquire multiple cytopathology slide 40 images, which may be sorted by resolution to form a pyramid-shaped slice image. In general, the bottom-most cytopathology slide 40 image of the pyramid has the greatest resolution and the top-most the smallest; for example, the top-most image may correspond to a thumbnail of the slide's cytological image. The slice image may be uploaded to the detection system 10 after the scan is completed.
In some examples, referring to fig. 3, the cytopathology slide 40 image may have an effective area containing contents, and may also have a background area distinct from the effective area. In some examples, the contents may be various types of cells (e.g., the contents in fig. 3). In some examples, the cytopathology slide 40 image may be a grayscale image; in other examples, it may be a color image.
In some examples, the acquisition module 110 may include a pre-processing unit that pre-processes the cytopathology slide 40 image to identify the effective area within it. Thereby, the effective area within the cytopathology slide 40 image can be determined. In some examples, in the pre-processing, the acquisition module 110 may acquire a plurality of cytopathology slide 40 images of different resolutions and select from them a cytopathology slide 40 image with a reference resolution as the reference image and a cytopathology slide 40 image with a target resolution as the target slice image. In some examples, the reference resolution may be less than the target resolution; the reference resolution may be the minimum resolution among the plurality of cytopathology slide 40 images, and the target resolution the maximum. For example, in the pre-processing, the pre-processing unit may select, from the slice image, the cytopathology slide 40 image with the reference resolution (which may be the thumbnail contained in the slice image) as the reference image, and the cytopathology slide 40 image with the target resolution (which may be the highest-resolution image in the slice image) as the target slice image.
In some examples, the effective area of the reference image may be identified from the reference image and then mapped onto the target slice image to determine the effective area of the target slice image. In this case, the effective area of the target slice image can be confirmed while effectively reducing the amount of computation.
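This pre-processing can be sketched in a few lines. The snippet below is a minimal illustration and not the patent's reference implementation: it assumes OpenSlide for reading the image pyramid and OpenCV for thresholding, and all function and variable names are illustrative.

```python
# A minimal sketch (an assumption, not the patent's implementation): find the
# effective area on a low-resolution thumbnail and scale it to level-0 coordinates.
import cv2
import numpy as np
import openslide

def effective_area(wsi_path):
    slide = openslide.OpenSlide(wsi_path)
    # Reference image: a small thumbnail from the top of the pyramid.
    thumb = np.array(slide.get_thumbnail((1024, 1024)).convert("RGB"))
    gray = cv2.cvtColor(thumb, cv2.COLOR_RGB2GRAY)
    # Otsu threshold separates stained contents from the bright background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
    # Map the box from thumbnail coordinates to the target (level-0) resolution.
    full_w, full_h = slide.level_dimensions[0]
    sx, sy = full_w / thumb.shape[1], full_h / thumb.shape[0]
    return int(x * sx), int(y * sy), int(w * sx), int(h * sy)
```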
In some examples, the acquisition module 110 may send the cytopathology slide 40 image to the sliding window module, which acquires the target images from it.
Fig. 4 is a schematic diagram illustrating acquisition of an acquisition target image according to an example of the present disclosure. Fig. 5 is a schematic diagram illustrating a target image according to an example of the present disclosure.
In some examples, as described above, the sliding window module may be configured to acquire multiple target images from the cytopathology slide 40 images in sequence.
In some examples, the sliding window module may acquire a plurality of target images from the cytopathology slide 40 image (see fig. 4, e.g., target image A, target image B, and target image C). In some examples, the union of the image regions corresponding to the plurality of target images may cover at least the effective area of the cytopathology slide 40 image; this enables detection over the whole effective area. In some examples, the sliding window module may apply a sliding-window method with a preset window size over the effective area of the cytopathology slide 40 image to acquire the plurality of target images; that is, each acquired target image has the preset size. In some examples, the target image may be a color image; in other examples, it may also be a grayscale image (see fig. 5).
In some examples, adjacent target images have overlapping regions. In some examples, the ratio between the size of the overlapping region and the size of the target image is within a preset range. In this case, fewer cell clusters fail to fall completely within a single image patch, which reduces the feature loss caused by a cell cluster being cut by two target images and further improves the sensitivity of the detection system 10.
In some examples, the preset range may be 10%-50%; for example, the ratio between the size of the overlapping region and the size of the target image may be 10%, 15%, 20%, 30%, 40%, or 50%. It should be noted that the preset range may also be greater than 0% and less than 10%, or greater than 50% and less than 100%. Preferably, the ratio may be 10% or 20%. The ratio can thus be selected according to the actual image; for example, it may be chosen based on the size of the cell clusters relative to the size of the target image, with a larger ratio, e.g., 40%, 50%, or 60%, selected when the cell clusters in the cytopathology slide 40 image are close in size to the target image.
In some examples, one half of the preset size may be taken as the sliding distance of the window, and the window slid along the lateral and longitudinal directions of the effective area by that distance; the image region covered by the window on the cytopathology slide 40 image is taken as a target image. For example, the window may acquire target images at a preset size of 1024 × 1024 with a sliding distance of 512 in both the lateral and longitudinal directions. Thereby, a plurality of target images can be acquired from the cytopathology slide 40 image. In some examples, the sizes of the cell clusters may be measured first and the sliding distance set accordingly, e.g., slightly less than the longest diameter of a cell cluster in the cytopathology slide 40 image.
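As a concrete illustration of this tiling with the 1024 × 1024 window and 512 stride mentioned above, a minimal sketch follows; the generator and its names are assumptions, not part of the patent.

```python
# A sketch of the sliding-window tiling with a 1024 x 1024 window and a
# stride of half the window size (50% overlap between neighbours).
def sliding_windows(region_w, region_h, size=1024, stride=512):
    """Yield (x, y) window origins covering the region, clamped at the edges."""
    xs = list(range(0, max(region_w - size, 0) + 1, stride))
    ys = list(range(0, max(region_h - size, 0) + 1, stride))
    if xs[-1] + size < region_w:   # cover the right border
        xs.append(region_w - size)
    if ys[-1] + size < region_h:   # cover the bottom border
        ys.append(region_h - size)
    for y in ys:
        for x in xs:
            yield x, y

# Each tile can then be read as one target image, e.g. with OpenSlide:
# tile = slide.read_region((x0 + x, y0 + y), 0, (1024, 1024))
```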
However, the examples of the present disclosure are not limited thereto; in other examples, the plurality of target images may be acquired directly from the cytopathology slide 40 image without first obtaining its effective area.
In some examples, the sliding window module may send the target image to the feature extraction module 130 to obtain a feature atlas and a confidence atlas.
Fig. 6 is a schematic diagram illustrating a feature image according to an example of the present disclosure.
In some examples, as described above, the feature extraction module 130 may be configured to process the target image to obtain at least one feature image, and obtain at least one confidence image based on the at least one feature image, the number of confidence images being the same as the number of types of lesions of the lesion cells.
In some examples, the feature image may be a grayscale image (see fig. 6); in other examples, it may be a color image. In some examples, the feature image may embody high-dimensional feature information of the target image.
In some examples, the feature extraction module 130 processes the target image through a deep convolutional network to obtain at least one feature image. In this case, the deep convolutional network can apply convolution and pooling to the target image to extract high-dimensional features and form the feature images.
In some examples, the feature extraction module 130 may convolve the at least one feature image with a convolution kernel to obtain at least one confidence image. In some examples, each position in a confidence image may indicate the confidence that the corresponding region of the target image belongs to the associated lesion type.
In some examples, the feature extraction module 130 may output C feature images of size H × W, where H represents the height of the feature image and W its width. The C feature images of size H × W may be referred to as a feature atlas.
In some examples, the C feature images of size H × W (the feature atlas) may be processed with a convolution kernel to obtain K confidence images of size H × W, where K is the number of lesion types of the diseased cells; these may be referred to as a confidence atlas.
In some examples, the confidence images in the confidence atlas may correspond one-to-one with the lesion types in the classification results; for example, the i-th confidence image may correspond to the i-th lesion type. In this case, the confidence image most relevant to the classification result can be identified from the classification result.
In some examples, the H × W × C feature atlas may be converted into an H × W × K confidence atlas by convolving the feature atlas with one or more layers of 1 × 1 convolution kernels. In some examples, the size of the convolution kernel may be 1 × 1 × K.
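The feature-extraction head just described can be sketched as follows. This is a hedged illustration only: the patent does not name a backbone, so the ResNet-18 here is an assumption, as is every identifier.

```python
# A sketch of the feature-extraction head: a convolutional backbone yields a
# C x H x W feature atlas, and a single 1 x 1 convolution maps it to a
# K x H x W confidence atlas (K = number of lesion types).
import torch
import torch.nn as nn
import torchvision

class ConfidenceHead(nn.Module):
    def __init__(self, num_lesion_types=4):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        # Drop the average-pooling and fully-connected layers; the remaining
        # stages output a 512-channel feature atlas.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # 1 x 1 convolution: C feature channels -> K confidence channels.
        self.to_confidence = nn.Conv2d(512, num_lesion_types, kernel_size=1)

    def forward(self, x):
        feature_atlas = self.features(x)          # (N, 512, H, W)
        return self.to_confidence(feature_atlas)  # (N, K, H, W)
```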
In some examples, the feature extraction module 130 may send the confidence atlas to the classification module 140 to obtain classification results.
In some examples, the parameters of the deep convolutional network may be adjusted through a back-propagation algorithm.
In some examples, as described above, the classification module 140 may be configured to process the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types, and obtain a classification result matching the target image based on the confidences.
In some examples, classification module 140 may include a pooling unit and a classification unit.
In some examples, the pooling unit pools the at least one confidence image (i.e., the confidence atlas) to obtain a plurality of confidences corresponding to different lesion types. In this case, the confidence with which the target image is judged to be each lesion type can be obtained.
In some examples, the pooling unit may convert the confidence atlas into a one-dimensional confidence set, typically through Global Average Pooling (GAP) or a flattening operation (Flatten). The confidence set may include a plurality of confidences corresponding to different lesion types.
In some examples, for an H × W × K confidence atlas, a confidence set of length K may be obtained.
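Continuing the sketch above (and reusing the assumed ConfidenceHead), global average pooling reduces the (K, H, W) confidence atlas to a length-K confidence set:

```python
# GAP reduces each H x W confidence image to one scalar, giving a length-K
# confidence set per target image (a sketch continuing the one above).
confidence_atlas = ConfidenceHead()(torch.randn(1, 3, 1024, 1024))  # (1, K, H, W)
confidences = confidence_atlas.mean(dim=(2, 3))                     # (1, K)
```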
In some examples, the classification unit may obtain a classification result matching the target image based on a plurality of confidences.
In some examples, the classification unit may obtain a classification result matching the target image based on the maximum of the plurality of confidences. Specifically, the classification unit may take the lesion type corresponding to the maximum confidence as the classification result of the target image. In this case, a classification result of the target image can be obtained.
In some examples, the classification unit may obtain a classification result matching the target image based on the larger confidences among the plurality of confidences. Specifically, the classification unit may sort the confidences from high to low, extract the top 2, and take the lesion types whose confidences exceed a preset value as the classification result of the target image. In some examples, the top 3, 4, or 5 confidences may also be extracted. In other words, the classification result of a single target image may include multiple lesion types.
In some examples, the classification unit may obtain a classification result matching the target image based on each confidence individually. Specifically, the classification unit may determine whether each confidence is greater than a preset value, extract every confidence exceeding that value, and take the corresponding lesion types as the classification result of the target image; in other words, the classification result of a single target image may include multiple lesion types.
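The three selection strategies just described (maximum, top-k above a preset value, and per-confidence thresholding) can be sketched as below; the lesion-type list and the 0.5 preset value are illustrative assumptions.

```python
# Sketches of the three strategies; `conf` is a length-K 1-D tensor of
# confidences from the pooling step above.
LESION_TYPES = ["papillary carcinoma", "medullary carcinoma",
                "suspected follicular tumor", "other positive"]

def classify_argmax(conf):                    # single most confident type
    return LESION_TYPES[int(conf.argmax())]

def classify_topk(conf, k=2, preset=0.5):     # top-k types above the preset value
    top = conf.argsort(descending=True)[:k]
    return [LESION_TYPES[int(i)] for i in top if conf[i] > preset]

def classify_each(conf, preset=0.5):          # every type above the preset value
    return [LESION_TYPES[i] for i, c in enumerate(conf) if c > preset]
```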
In some examples, the classification unit may also be any one or a combination of a nearest-neighbor classifier, a naive Bayes classifier, a decision tree classifier, a selection tree classifier, and logistic regression.
In some examples, the classification unit may send the classification result to the localization module 150.
Fig. 7 is a schematic diagram illustrating a partitioned image according to an example of the present disclosure.
In some examples, as described above, the localization module 150 may process the at least one confidence image based on the classification result to obtain the region in the target image that matches the classification result. Because a thyroid cytopathology slide 40 image contains many cell clusters that rarely fall entirely within a single image block, integrating block-level classification results easily yields target boxes that deviate from the actual cells; by contrast, the region obtained from the confidence image directly indicates the region matching the classification result (i.e., the positions of the diseased cells), so meaningless target boxes that deviate from the actual cells can be avoided.
In some examples, the localization module 150 may process the at least one confidence image based on the classification result to obtain a segmented image. In some examples, the segmented image may include a target region and a background region (see fig. 7), where the target region is strongly correlated with the classification result and the background region weakly correlated. In this case, since regions strongly correlated with the classification result lie near the diseased cells, the positions of the diseased cells can be obtained.
In some examples, the localization module 150 obtains the confidence image matching the classification result based on the classification result, and performs image binarization on that confidence image to obtain the region in the target image matching the classification result. In this case, the target region and the background region can be obtained by binarization, so the positions of the diseased cells can be clearly expressed.
In some examples, the segmented image may be obtained by binarization methods such as the Otsu method (OTSU algorithm), the histogram method, or the differential histogram method.
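As a sketch of this localization step, the confidence image selected by the classification result can be binarized with OpenCV's Otsu implementation; the normalization step and all names are assumptions.

```python
# A sketch: select the confidence image matching the classification result
# and binarize it with the Otsu method to separate target from background.
import cv2
import numpy as np

def segment(confidence_atlas, class_index):
    """confidence_atlas: (K, H, W) array; returns a binary H x W mask."""
    conf_img = confidence_atlas[class_index]
    img8 = cv2.normalize(conf_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask  # non-zero pixels form the target region
```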
Fig. 8 is a schematic diagram illustrating a target image with a target frame according to an example of the present disclosure.
In some examples, the detection system 10 may include a framing module that frames the target image based on the region matching the classification result (i.e., the target region). In this case, the diseased cells can be framed in the target image, making them easy for medical staff to find and assisting diagnosis.
In some examples, referring to fig. 8, the target box may frame the target region. In some examples, the target box may be rectangular, circular, elliptical, etc. In some examples, the target box may frame the diseased cells in the target image (see the papillary thyroid carcinoma framed in fig. 8).
In some examples, the framing module may frame the target region with a target box that can contain it; in some examples, with the minimum-area target box that can contain it.
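A minimal sketch of the framing module follows: each connected region of the binary mask is enclosed in its smallest upright bounding rectangle. The patent does not fix the box-fitting algorithm, so this contour-based approach is an assumption.

```python
# A sketch of the framing module: enclose each connected component of the
# binary mask in its smallest upright bounding rectangle.
def frame_targets(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]  # list of (x, y, w, h) boxes
```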
In some examples, one target image may include one target frame. In some examples, the target box may match a lesion type in the classification result.
In the present embodiment, for the acquisition and processing of the cytopathology slide 40 image, target image, feature image, confidence image, lesion type, and classification result in the detection method, reference may be made to the corresponding description of the detection system 10 above.
In step S110, a cytopathology slide 40 image may be acquired, as described above. In some examples, step S110 may be performed using the acquisition module 110 described above.
In some examples, the cytopathology slide 40 image may be a cellular image acquired by a scanner scanning a cellular slide made of cells of the subject. In some examples, the cytopathology slide 40 image may have an active area containing contents. The obtaining and processing of the effective region may refer to the above description of the effective region. In some examples, the contents may be various types of cells.
In some examples, in step S110, a plurality of different resolution images of the cytopathology slide 40, such as slice images or the like, may be acquired. The acquisition and processing of the slice images can be referred to the above description of the slice images. In some examples, step S110 may be implemented by acquisition module 110 in system 10.
In step S120, as described above, a plurality of target images may be sequentially acquired from the cytopathology slide 40 image. In some examples, the target images may be selected from the cytopathology slide 40 image using a sliding-window scan with a selectable step size.
In some examples, adjacent target images have overlapping regions. In some examples, the ratio between the size of the overlapping region and the size of the target image is within a preset range; preferably, it may be 10% or 20%. In this case, fewer cell clusters fail to fall completely within a single image block, which reduces the feature loss caused by a cell cluster being cut by two target images and improves the sensitivity of the detection method.
In some examples, in step S130, the target image may be processed to obtain at least one feature image, and at least one confidence image may be obtained based on the at least one feature image, as described above.
In some examples, the number of confidence images is the same as the number of categories of lesion types of the lesion cells.
In some examples, the target image may be processed by a deep convolutional network to obtain at least one feature image. In this case, the deep convolutional network can apply convolution and pooling to the target image to extract high-dimensional features and form the feature image.
In some examples, the at least one confidence image may be pooled to obtain a plurality of confidences corresponding to different lesion types, and a classification result matching the target image may be obtained based on the maximum of the plurality of confidences. In this case, the confidence with which the target image is judged to be each lesion type can be obtained.
In some examples, the at least one feature image may be convolved to obtain a confidence image.
In some examples, in step S140, as described above, the at least one confidence image may be processed to obtain a plurality of confidences corresponding to different lesion types, and a classification result matching the target image is obtained based on the plurality of confidences.
In some examples, a classification result matching the target image may be obtained based on a maximum of the plurality of confidences.
In some examples, in step S150, the at least one confidence image may be processed based on the classification result to obtain the region in the target image that matches the classification result, as described above. Because a thyroid cytopathology slide 40 image contains many cell clusters that rarely fall entirely within a single image block, integrating block-level classification results easily yields target boxes that deviate from the actual cells; the region obtained from the confidence image directly indicates the region matching the classification result (i.e., the positions of the diseased cells), so meaningless target boxes that deviate from the actual cells can be avoided.
In some examples, the at least one confidence image may be processed based on the classification result to obtain a segmented image. In some examples, the segmented image may be acquired by a binarization method such as the Otsu method (OTSU algorithm), the histogram method, or the differential histogram method. In other words, the confidence image matching the classification result can be acquired based on the classification result and binarized to obtain the region in the target image matching the classification result. In this case, the target region and the background region can be obtained by binarization, so the positions of the diseased cells can be clearly expressed.
In some examples, the detection method may also frame the diseased cells or the target region in the target image based on the classification result and the at least one confidence image. In this case, the diseased cells can be framed in the target image, making them easy for medical staff to find and assisting diagnosis.
Fig. 9 is a flow diagram illustrating a training method according to an example of the present disclosure.
In some examples, the present disclosure also relates to a training method, which is a training method for training the detection system 10 according to the present disclosure. In the present embodiment, the acquisition or processing of the cytopathology slide 40 image, the target image, the feature image, the confidence image, the lesion type and the classification result in the training method can be referred to the above description of the detection system 10 about the cytopathology slide 40 image, the target image, the feature image, the confidence image, the lesion type and the classification result.
In some examples, referring to fig. 9, the training method may include: acquiring a cytopathology slide 40 image (step S210); labeling the cytopathology slide 40 image (step S220); the detection system 10 is trained using the cytopathology slide 40 image (step S230).
In step S220, medical staff can annotate any diseased cell in isolation. Because the detection method of the present disclosure only needs image-level labels for the target images, annotators need not find and label every diseased cell in the vicinity; labeling typical diseased cells is sufficient.
In some examples, the medical staff may label the diseased cells with annotation boxes. Specifically, the medical staff can frame the diseased cells in the cytopathology slide 40 image using annotation boxes. In some examples, the size of the annotation box may be freely set by the medical staff. In some examples, when labeling a diseased cell, the medical staff may classify the framed cell to obtain its lesion type.
In step S230, the cytopathology slide 40 image may be input into the detection system 10 described above to obtain classification results, and the parameters of the deep convolutional network may be adjusted based on the annotation results and the classification results using a back-propagation algorithm.
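A minimal sketch of this training step follows, assuming the ConfidenceHead from the earlier sketch, GAP over the confidence atlas, an image-level cross-entropy loss, and a fifth class standing for negative target images (all assumptions, not the patent's specified loss or optimizer).

```python
# A sketch of step S230: image-level (weak) labels drive an ordinary
# cross-entropy loss on the pooled confidences, and back-propagation
# adjusts the deep convolutional network.
import torch
import torch.nn.functional as F

model = ConfidenceHead(num_lesion_types=5)  # 4 lesion types + 1 negative class
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(target_images, labels):
    """target_images: (N, 3, H, W) batch; labels: (N,) class indices."""
    confidence_atlas = model(target_images)          # (N, K, H, W)
    confidences = confidence_atlas.mean(dim=(2, 3))  # GAP -> (N, K)
    loss = F.cross_entropy(confidences, labels)
    optimizer.zero_grad()
    loss.backward()                                  # back-propagation
    optimizer.step()
    return loss.item()
```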
In some examples, when the target image does not contain an annotation box, the target image may be classified as a negative target image (i.e., a target image without diseased cells).
In some examples, the target image obtained in the negative slide (i.e., the image of the cytopathology slide 40 determined to be free of diseased cells) is a negative target image.
In some examples, when the target image contains an annotation box, the lesion type of the lesion cell in the annotation box may be used as the annotation result of the target image.
In some examples, when the target image contains part of an annotation box, the lesion type of the diseased cell in that annotation box may be used as the annotation result of the target image.
In some examples, when the target image contains a plurality of annotation boxes, the lesion type of the lesion cell in the plurality of annotation boxes may be used as the annotation result of the target image. In this case, since different kinds of positives (for example, papillary thyroid cancer and medullary thyroid cancer) do not coexist in thyroid cytology, labeling results of target images do not conflict with each other.
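How the image-level annotation results described in the last few paragraphs can be derived from sparse annotation boxes is sketched below; the overlap rule and all names are illustrative assumptions.

```python
# A sketch of deriving the image-level annotation result of a target image
# from sparse annotation boxes.
def tile_label(tile_box, annotation_boxes, negative_index=4):
    """tile_box: (x, y, w, h); annotation_boxes: list of ((x, y, w, h), type_idx)."""
    tx, ty, tw, th = tile_box
    for (bx, by, bw, bh), lesion_type in annotation_boxes:
        # Any overlap between the tile and an annotation box labels the tile.
        if bx < tx + tw and bx + bw > tx and by < ty + th and by + bh > ty:
            return lesion_type
    return negative_index  # no annotation box -> negative target image
```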
In some examples, the present disclosure also provides a computer device, which may include a memory storing a computer program and a processor implementing the detection method according to the present disclosure when the processor executes the computer program. In some examples, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the detection method to which the present disclosure relates.
While the present disclosure has been described in detail in connection with the drawings and examples, it should be understood that the above description is not intended to limit the disclosure in any way. Those skilled in the art can make modifications and variations to the present disclosure as needed without departing from the true spirit and scope of the disclosure, which fall within the scope of the disclosure.

Claims (10)

1. A method for detecting multiple types of thyroid cytology cells based on deep learning, characterized by comprising the following steps: acquiring a cytopathology slide image of thyroid cells, wherein the cytopathology slide image is a whole slide image; sequentially acquiring a plurality of target images from the cytopathology slide image, adjacent target images having overlapping regions; processing the target image to obtain at least one feature image and obtaining at least one confidence image based on the at least one feature image, the number of confidence images being the same as the number of lesion types of diseased thyroid cells; processing the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types and obtaining a classification result matching the target image based on the plurality of confidences; and processing the at least one confidence image based on the classification result to obtain a region in the target image matching the classification result.
2. The detection method according to claim 1,
wherein the ratio of the size of the overlapping region to the size of the target image is within a preset range.
3. The detection method according to claim 1,
wherein pooling is performed on the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types, and a classification result matching the target image is obtained based on the maximum of the plurality of confidences.
4. The detection method according to claim 1 or 3,
wherein a confidence image matching the classification result is acquired based on the classification result, and image binarization is performed on the confidence image to obtain a region in the target image matching the classification result.
5. The detection method according to claim 1,
wherein the target image is processed through a deep convolutional network to obtain at least one feature image.
6. The detection method according to claim 1,
the categories of the lesion types include: papillary thyroid carcinoma, medullary carcinoma, and suspected follicular tumors.
7. The detection method according to claim 1, wherein
frame selection is performed on the target image based on the region that matches the classification result.
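
The claim does not spell out frame selection at this level of detail; connected-component labelling is one straightforward way to turn the binarized region into rectangular frames, sketched here with SciPy:

```python
import numpy as np
from scipy import ndimage

def frame_regions(binary_region: np.ndarray):
    """Return (x0, y0, x1, y1) bounding boxes, one per connected component
    of the binarized region matching the classification result."""
    labelled, _ = ndimage.label(binary_region)
    boxes = []
    for rows, cols in ndimage.find_objects(labelled):
        boxes.append((cols.start, rows.start, cols.stop, rows.stop))
    return boxes
```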
8. A deep learning-based thyroid cytology multi-type cell detection system for detecting and classifying lesion cells in a cytopathology slide image of thyroid cells, characterized by comprising: an acquisition module, a sliding window module, a feature extraction module, a classification module, and a positioning module, wherein
the acquisition module is configured to acquire the cytopathology slide image, the cytopathology slide image being a whole-slide image,
the sliding window module is configured to sequentially acquire a plurality of target images from the cytopathology slide image, adjacent target images having an overlapping region,
the feature extraction module is configured to process the target image to obtain at least one feature image and to obtain at least one confidence image based on the at least one feature image, the number of confidence images being equal to the number of lesion types of the lesion cells,
the classification module is configured to process the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types and to obtain a classification result matching the target image based on the plurality of confidences, and
the positioning module is configured to process the at least one confidence image based on the classification result to obtain a region in the target image that matches the classification result.
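
The module decomposition of claim 8 could be wired together as below; each constructor argument is a hypothetical callable (for instance, the sketches given under claims 2 to 4 above), not an interface disclosed by the patent:

```python
class ThyroidDetectionSystem:
    """Illustrative wiring of the claimed modules."""

    def __init__(self, acquire, slide_window, extract, classify, localize):
        self.acquire = acquire            # acquisition module
        self.slide_window = slide_window  # sliding window module
        self.extract = extract            # feature extraction module
        self.classify = classify          # classification module
        self.localize = localize          # positioning module

    def run(self, slide_source):
        slide = self.acquire(slide_source)            # whole-slide image
        for target in self.slide_window(slide):       # overlapping target images
            conf_images = self.extract(target)        # confidence images
            result, confidences = self.classify(conf_images)
            region = self.localize(conf_images, result)
            yield target, result, region
```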
9. The system of claim 8, wherein
the system further comprises a framing module configured to perform frame selection on the target image based on the region that matches the classification result.
10. The system of claim 8, wherein
the classification module comprises a pooling unit and a classification unit, the pooling unit being configured to pool the at least one confidence image to obtain a plurality of confidences corresponding to different lesion types, and the classification unit being configured to obtain the classification result matching the target image based on the maximum value of the confidences.
CN202111526196.4A 2021-12-14 2021-12-14 Detection method for thyroid cytology multiple cell types based on deep learning Active CN114187277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111526196.4A CN114187277B (en) 2021-12-14 2021-12-14 Detection method for thyroid cytology multiple cell types based on deep learning

Publications (2)

Publication Number Publication Date
CN114187277A (en) 2022-03-15
CN114187277B (en) 2023-09-15

Family

ID=80604909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111526196.4A Active CN114187277B (en) 2021-12-14 2021-12-14 Detection method for thyroid cytology multiple cell types based on deep learning

Country Status (1)

Country Link
CN (1) CN114187277B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114743195A (en) * 2022-04-13 2022-07-12 赛维森(广州)医疗科技服务有限公司 Thyroid cell pathology digital image recognizer training method and image recognition method
CN115100474A (en) * 2022-06-30 2022-09-23 武汉兰丁智能医学股份有限公司 Thyroid gland puncture image classification method based on topological feature analysis
CN115100646A (en) * 2022-06-27 2022-09-23 武汉兰丁智能医学股份有限公司 Cell image high-definition rapid splicing identification marking method
CN115170571A (en) * 2022-09-07 2022-10-11 赛维森(广州)医疗科技服务有限公司 Method and device for identifying pathological images of hydrothorax and ascites cells and medium
CN115601749A * 2022-12-07 2023-01-13 赛维森(广州)医疗科技服务有限公司 Pathological image classification method and image classification device based on characteristic peak map
CN115661815A (en) * 2022-12-07 2023-01-31 赛维森(广州)医疗科技服务有限公司 Pathological image classification method and image classification device based on global feature mapping

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334909A (en) * 2018-03-09 2018-07-27 南京天数信息科技有限公司 Cervical carcinoma TCT digital slices data analysing methods based on ResNet
CN109034221A (en) * 2018-07-13 2018-12-18 马丁 Method and device for processing cervical cytology image features
US20210271852A1 (en) * 2020-02-27 2021-09-02 Wuhan University Automatic classification method of whole slide images of cervical tissue pathology based on confidence coefficient selection
CN112419248A (en) * 2020-11-13 2021-02-26 复旦大学 Ear sclerosis focus detection and diagnosis system based on small target detection neural network
CN112750121A (en) * 2021-01-20 2021-05-04 赛维森(广州)医疗科技服务有限公司 System and method for detecting digital image quality of pathological slide
CN113177554A (en) * 2021-05-19 2021-07-27 中山大学 Thyroid nodule identification and segmentation method, system, storage medium and equipment
CN113256634A (en) * 2021-07-13 2021-08-13 杭州医策科技有限公司 Deep-learning-based cervical carcinoma TCT slide negative-screening (triage) method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAKENA LOW et al., "Automating Vitiligo Skin Lesion Segmentation Using Convolutional Neural Networks", 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI) *
GU Tingfei et al., "Diabetic Retinopathy Grading Combined with Multi-Channel Attention", Journal of Image and Graphics *

Also Published As

Publication number Publication date
CN114187277B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN114187277B (en) Detection method for thyroid cytology multiple cell types based on deep learning
US9747687B2 (en) System and method for detecting polyps from learned boundaries
CN111985536B Gastroscopic pathology image classification method based on weakly supervised learning
US10783641B2 (en) Systems and methods for adaptive histopathology image unmixing
Soh et al. ARKTOS: An intelligent system for SAR sea ice image classification
US20120237109A1 (en) Histology analysis
CN112750121B (en) System and method for detecting digital image quality of pathological slide
US20040086161A1 (en) Automated detection of lung nodules from multi-slice CT image data
CN112633297B (en) Target object identification method and device, storage medium and electronic device
JP2017107543A (en) Method and system for automated analysis of cell images
Székely et al. A hybrid system for detecting masses in mammographic images
EP3721372A1 (en) Method of storing and retrieving digital pathology analysis results
Fuchs et al. Computational pathology analysis of tissue microarrays predicts survival of renal clear cell carcinoma patients
US11568657B2 (en) Method of storing and retrieving digital pathology analysis results
CN110766670A Mammography (molybdenum-target) image tumor localization algorithm based on a deep convolutional neural network
CN115170518A (en) Cell detection method and system based on deep learning and machine vision
Gao et al. Sea ice change detection in SAR images based on collaborative representation
Sharma et al. A comparative study of cell nuclei attributed relational graphs for knowledge description and categorization in histopathological gastric cancer whole slide images
Erener et al. A methodology for land use change detection of high resolution pan images based on texture analysis
CN116468690B (en) Subtype analysis system of invasive non-mucous lung adenocarcinoma based on deep learning
US9589360B2 (en) Biological unit segmentation with ranking based on similarity applying a geometric shape and scale model
CN114359279B (en) Image processing method, image processing device, computer equipment and storage medium
Jaimes et al. Unsupervised semantic segmentation of aerial images with application to UAV localization
Liu et al. Breast mass detection with kernelized supervised hashing
CN113222928B (en) Urine cytology artificial intelligence urothelial cancer identification system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant