CN114170224A - System and method for cellular pathology classification using generative staining normalization


Info

Publication number
CN114170224A
Authority
CN
China
Prior art keywords
image
color image
slice
trained
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210039250.0A
Other languages
Chinese (zh)
Other versions
CN114170224B (en)
Inventor
刘凯
汪进
陈李粮
常亮亮
陈睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Severson Guangzhou Medical Technology Service Co ltd
Original Assignee
Severson Guangzhou Medical Technology Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Severson Guangzhou Medical Technology Service Co ltd filed Critical Severson Guangzhou Medical Technology Service Co ltd
Publication of CN114170224A
Application granted
Publication of CN114170224B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0012: Biomedical image inspection
    • G06N 20/00: Machine learning
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/11: Region-based segmentation
    • G06T 2200/32: Indexing scheme involving image mosaicing
    • G06T 2207/10024: Color image
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30024: Cell structures in vitro; Tissue sections in vitro
    • G06T 2207/30096: Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure describes a system and method for cellular pathology classification using generative staining normalization. The system comprises an acquisition module for acquiring a cytopathology image; a preprocessing module for determining an effective region of a higher-resolution cytopathology image using a lower-resolution cytopathology image, and for partitioning the target slice color image into a plurality of block images based on that effective region; a staining normalization module for performing staining normalization on each block image of the target slice color image based on a generative adversarial network to obtain a plurality of normalized color images; a feature extraction module for obtaining block features of the normalized color images based on a feature extraction model and fusing the block features to generate slice features; and a classification module for classifying the slice features based on a slice classification model. This can improve the accuracy with which cytopathology images are classified.

Description

System and method for cellular pathology classification using generative staining normalization
Technical Field
The present disclosure relates generally to a system and method for cellular pathology classification using generative staining normalization.
Background
With the widespread use of computer-aided diagnosis and computer-aided detection (CAD), more and more image processing techniques are being applied to cell screening. Taking cervical cancer as an example: cervical cancer is a malignant tumor that seriously threatens women's health. If it is detected and treated at an early stage, its morbidity and mortality can be reduced, so regular screening for cervical cancer is important. The current screening method is cervical cell screening, in which the morphology of the nucleus and cytoplasm of diseased cells in a cervical cell slice image is analyzed to assist a doctor in diagnosing cervical cancer.
However, because slide-preparation methods differ between hospitals and different scanners produce different imaging effects, the staining styles of cytopathology images (such as cervical cell slice images) vary greatly, and collecting cytopathology images covering every staining style is difficult. As a result, a machine learning model trained on cytopathology images of one particular staining style often fails to generalize to another staining style. The classification accuracy achievable on cytopathology images therefore remains to be improved.
Disclosure of Invention
The present disclosure has been made in view of the above circumstances, and its object is to provide a system and a method for cellular pathology classification using generative staining normalization that can improve classification accuracy.
To this end, a first aspect of the present disclosure provides a system for cellular pathology classification using generative staining normalization, comprising: an acquisition module for acquiring a cytopathology image that has been stained and includes slice color images at a plurality of resolutions; a preprocessing module that acquires an effective region of the slice color image of a second resolution based on the slice color image of a first resolution, takes the slice color image of the second resolution as a target slice color image, and partitions the target slice color image into a plurality of block images based on the effective region, wherein the first resolution is smaller than the second resolution; a staining normalization module that performs staining normalization, based on a generative adversarial network, on each block image of the target slice color image to obtain a plurality of normalized color images with a consistent staining style; a feature extraction module that obtains block features of the normalized color images based on a feature extraction model, obtains statistical features based on the plurality of block features of the target slice color image, and performs feature fusion on feature information including the statistical features to generate slice features; and a classification module that classifies the slice features based on a slice classification model to obtain a classification result of the slice features, which serves as the classification result of the cytopathology image. In the disclosure, the effective region of the higher-resolution cytopathology image is determined using the lower-resolution cytopathology image, block images of the target slice color image are obtained based on that effective region, staining normalization is applied to the block images, block features are obtained with the feature extraction model, statistical features are obtained from the block features, slice features are generated from feature information including the statistical features, and the slice features are classified with the slice classification model to obtain the classification result. Determining the effective region from the low-resolution image and blocking only within it reduces redundant computation and improves classification efficiency, while staining normalization of the block images before feature extraction improves classification accuracy.
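As an orientation aid, the following is a minimal, hypothetical sketch of how the five modules of the first aspect might compose into a pipeline; all class and method names are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical composition of the five modules; names are illustrative only.
class CytopathologyClassificationSystem:
    def __init__(self, acquisition, preprocessing, stain_norm,
                 feature_extraction, classification):
        self.acquisition = acquisition                # reads the multi-resolution slide
        self.preprocessing = preprocessing            # effective region + blocking
        self.stain_norm = stain_norm                  # GAN-based staining normalization
        self.feature_extraction = feature_extraction  # block features + fusion
        self.classification = classification          # slice classification model

    def classify(self, slide_path):
        pyramid = self.acquisition.load(slide_path)            # multi-resolution image
        blocks = self.preprocessing.run(pyramid)               # block images
        normalized = [self.stain_norm.normalize(b) for b in blocks]
        slice_feature = self.feature_extraction.fuse(normalized)
        return self.classification.predict(slice_feature)      # classification result
```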
Further, in the system according to the first aspect of the present disclosure, optionally, the effective region contains contents, the block features include at least the location of the contents, the category of the contents, and the confidence of the contents, and the contents are cells.
In the system according to the first aspect of the present disclosure, optionally, in the feature fusion process, the feature information of the target slice color image is reduced in dimension and the reduced feature information is concatenated to generate the slice features; the statistical features include at least one of a distribution histogram of the confidence of each category of contents, a distribution histogram of the area of each category of contents, and a distribution histogram of the perimeter of each category of contents. Thereby, the slice features of the target slice color image can be obtained.
In addition, in the system according to the first aspect of the present disclosure, optionally, the slice color image of the first resolution is used as a reference slice color image, and the effective region of the reference slice color image is mapped to the target slice color image to determine the effective region of the target slice color image. In obtaining the effective region of the reference slice color image, the reference slice color image is converted into a reference grayscale image in grayscale mode, the reference grayscale image is adaptively threshold-segmented and color-inverted using a binarization threshold segmentation algorithm to obtain a reference binarized image, and the reference binarized image is subjected to dilation and erosion to obtain a white region, which is used as the effective region of the reference slice color image. Thereby, the effective region of the reference slice color image can be determined.
Further, in the system according to the first aspect of the present disclosure, optionally, the cytopathology images are cervical cell slice images of different staining styles. Thus, cervical cell slice images of different staining styles can be classified.
In addition, in the system according to the first aspect of the present disclosure, optionally, in the staining normalization process, each block image of the target slice color image is grayed, i.e., converted into a grayscale image, and the grayscale image is normalized by a trained generation network to obtain a normalized color image corresponding to the grayscale image. The training process of the generation network comprises: preparing a plurality of block images to be trained; graying the block images to be trained to convert them into grayscale images to be trained; constructing a normalization network based on the generative adversarial network, the normalization network comprising the generation network and a discrimination network; and training the normalization network so that the reconstructed block images output by the generation network match the block images to be trained. Thereby, staining normalization can be performed on each block image of the target slice color image.
Further, in the system according to the first aspect of the present disclosure, optionally, the generation network receives the grayscale image to be trained and generates the reconstructed block image; the discrimination network receives a first stitched image, stitched from the block image to be trained and the grayscale image to be trained, and a second stitched image, stitched from the grayscale image to be trained and the reconstructed block image, and outputs a discrimination result. During training, an adversarial loss function of the discrimination network is constructed based on the discrimination result and used to update the network parameters of the discrimination network, and a generation loss function of the generation network is constructed based on the adversarial loss function, the block image to be trained, and the reconstructed block image, and used to update the network parameters of the generation network, so that the reconstructed block images generated by the generation network match the block images to be trained. In this case, the reconstructed block images output by the generation network can be made to match the block images to be trained.
A second aspect of the present disclosure provides a method of cytopathology classification using generative staining normalization, comprising: obtaining a cytopathology image that has been stained and includes slice color images at a plurality of resolutions; acquiring an effective region of the slice color image of a second resolution based on the slice color image of a first resolution and taking the slice color image of the second resolution as a target slice color image, wherein the first resolution is smaller than the second resolution; partitioning the target slice color image into a plurality of block images based on the effective region; performing staining normalization, based on a generative adversarial network, on each block image of the target slice color image to obtain a plurality of normalized color images with a consistent staining style; obtaining block features of the normalized color images based on a feature extraction model, obtaining statistical features based on the plurality of block features of the target slice color image, and performing feature fusion on feature information including the statistical features to generate slice features; and classifying the slice features based on a slice classification model to obtain a classification result of the slice features, which serves as the classification result of the cytopathology image. As with the system, determining the effective region of the higher-resolution cytopathology image from the lower-resolution image and blocking only within it reduces redundant computation and improves classification efficiency, while staining normalization of the block images before feature extraction with the feature extraction model improves classification accuracy.
A third aspect of the present disclosure provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
A fourth aspect of the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
According to the present disclosure, a system and method for cytopathology classification using generative staining normalization are provided that can improve classification accuracy.
Drawings
The disclosure will now be explained in further detail by way of example only with reference to the accompanying drawings, in which:
fig. 1 is a schematic diagram illustrating an application scenario of a method of applying generative stain normalization for cellular pathology classification according to an example of the present disclosure.
Fig. 2 is a flow chart illustrating a training method of cellular pathology classification using generative stain normalization in accordance with an example of the present disclosure.
Fig. 3 is a schematic diagram illustrating a cervical cell slice image according to an example of the present disclosure.
Fig. 4(a) is a schematic diagram illustrating a reference slice color image according to an example of the present disclosure.
Fig. 4(b) is a schematic diagram illustrating an effective region of a reference slice color image according to an example of the present disclosure.
Fig. 5(a) is a schematic diagram showing a block image according to an example of the present disclosure.
Fig. 5(b) is a schematic diagram showing the location of contents to which examples of the present disclosure relate.
Fig. 6 is a flow chart illustrating training of a generation network according to an example of the present disclosure.
Fig. 7 is a block diagram illustrating a generation training apparatus that generates a network according to an example of the present disclosure.
Fig. 8 is a block diagram illustrating a training apparatus for cellular pathology classification using generative stain normalization in accordance with an example of the present disclosure.
Fig. 9 is a flow chart illustrating a method of cellular pathology classification using generative stain normalization in accordance with an example of the present disclosure.
Fig. 10 is a block diagram illustrating a system for cellular pathology classification using generative stain normalization in accordance with examples of the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description of them is omitted. The drawings are schematic, and the proportions and shapes of components may differ from the actual ones. It should be noted that the terms "comprises," "comprising," and "having," and any variations thereof, are used non-exclusively in this disclosure: for example, a process, method, system, article, or apparatus that comprises or has a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to it. All methods described in this disclosure can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
The pathology slide digital image to which the present disclosure relates may be a pathology image acquired by a pathology scanner, and may be a pyramid image with different resolutions (i.e., the pathology slide digital image may include images at multiple resolutions). Pathology slide digital images are typically very large, for example whole slide images (WSI), which may range from 600 MB to 10 GB, so conventional image processing methods are generally unsuitable for them and processing often takes a long time. Because slide preparation methods differ between hospitals and the imaging effects of different staining methods or pathology scanners differ, the staining styles of pathology slide digital images are often inconsistent. In general, a pathology slide digital image can reflect pathological changes in tissue, so analyzing it can assist a doctor in analyzing those changes. For example, to assist in the analysis of cervical cancer, lesion cells in a cervical cell slice image may be analyzed to classify the image, and the classification result can then serve as an intermediate result that helps the doctor analyze cervical cancer. The examples of the present disclosure are not limited thereto, however; the scheme can readily be applied to the classification of other pathology slide digital images, such as intestinal pathology, gastric cancer pathology, or lymphoma pathology slide digital images.
As described above, staining styles of pathology slide digital images are often inconsistent. In some examples, prior to training the machine learning model for feature recognition or classification recognition based on the pathology slide digital image, the pathology slide digital image may be subjected to a staining normalization process (described later) to convert a staining style of the pathology slide digital image into the same staining style. In this case, the machine learning model is trained for the digital image of the pathological slide with a single staining style, and the trained machine learning model can be generalized to feature recognition or classification recognition for the digital images of the pathological slides with different staining styles. This can improve the generalization ability of the machine learning model.
The following describes an example of the present disclosure taking a pathology slide digital image as a cytopathology image as an example, and does not represent a limitation of the present disclosure. That is, the contents of the digital image of the pathology slide may be cells.
The method of cytopathology classification using generative staining normalization disclosed herein can classify cytopathology images of different staining styles; it may sometimes simply be referred to as the method or the classification method. Fig. 1 is a schematic diagram illustrating an application scenario of the method of cellular pathology classification using generative staining normalization according to an example of the present disclosure.
In some examples, as shown in fig. 1, the method to which the present disclosure relates may be applied in an application scenario 100. In the application scenario 100, an acquisition device 110 (e.g., a pathology scanner) may perform a high-resolution scan of a slide 120 (e.g., an HE-stained slide) at different magnifications (e.g., 20× or 40×) to acquire a cytopathology image 130. After the scan is complete, the cytopathology image 130 may be uploaded to the server 140. The server 140 may implement a method of the present disclosure by executing computer program instructions, classifying the cytopathology image 130 and outputting its classification result. In some examples, an operator 150 may use the classification result of the cytopathology image 130 as an intermediate result to assist in analyzing the image.
In some examples, in the case where the cytopathology image 130 is a cervical cell slice image, the classification result may be negative for intraepithelial lesion or malignancy (NILM), atypical squamous cells of undetermined significance (ASC-US), low-grade squamous intraepithelial lesion (LSIL), atypical squamous cells, cannot exclude HSIL (ASC-H), high-grade squamous intraepithelial lesion (HSIL), or atypical glandular cells (AGC). In some examples, server 140 may include one or more processors and one or more memories, where a processor may include a central processing unit, a graphics processing unit, or any other electronic component capable of processing data and executing computer program instructions, and the memories may be used to store the computer program instructions. In some examples, server 140 may also be a cloud server. In some examples, operator 150 may be a physician with expertise in analyzing cytopathology images 130.
The feature extraction model and the slice classification model used in the method can be obtained by the training method for cytopathology classification using generative staining normalization to which the present disclosure relates. Hereinafter, this training method will be described in detail with reference to the accompanying drawings; it may sometimes simply be referred to as the training method. Fig. 2 is a flow chart illustrating a training method of cellular pathology classification using generative stain normalization in accordance with an example of the present disclosure.
In some examples, the training method may include: preparing cytopathology images to be trained, annotation images, and annotation labels (step S110); preprocessing the cytopathology images to be trained and the annotation images to obtain block images to be trained and block annotation images to be trained (step S120); performing staining normalization on each block image of a target slice color image to obtain a plurality of normalized color images to be trained of the target slice color image (step S130); training a feature extraction model for obtaining block features of the normalized color images to be trained, based on the normalized color images to be trained and the block annotation images to be trained (step S140); obtaining the block features of the normalized color images to be trained of the target slice color image based on the trained feature extraction model, and performing feature fusion on the feature information of the target slice color image to generate slice features to be trained of the target slice color image (step S150); and training a slice classification model for classifying the slice features to be trained of the target slice color image, based on the slice features to be trained of the target slice color image and the annotation labels (step S160). In this case, the effective region of the higher-resolution cytopathology image for training is determined using the lower-resolution cytopathology image, and the block images of the target slice color image are obtained based on that effective region, which reduces redundant computation and improves classification efficiency; staining normalization of the block images and use of the trained feature extraction model to obtain the block features for training the slice classification model improve classification accuracy.
In some examples, in step S110, the cytopathology image may include slice color images of multiple resolutions. In some examples, the slice color image may have an active area containing content. In some examples, the slice color image may have a background region. In some examples, the contents may be cells. In some examples, slice color images of multiple resolutions may be sorted by resolution to form images in a pyramid form. In general, the resolution of the slice color image at the bottom of the pyramid is the largest, and the resolution of the slice color image at the top of the pyramid is the smallest. In some examples, the cytopathology image may have a thumbnail.
Fig. 3 is a schematic diagram illustrating a cervical cell slice image according to an example of the present disclosure.
In some examples, the cytopathology images to be trained may be images of the same staining style; for example, cervical cell slice images of the same staining style. In this case, the slice classification model obtained by training on cervical cell slice images of one staining style can classify cervical cell slice images of different staining styles. As an example of a cervical cell slice image, fig. 3 shows a partial view of a cervical cell slice image acquired by a pathology scanner.
In some examples, the annotation image can be an annotation image of a content-level annotation corresponding to the cytopathology image to be trained (i.e., to annotate a content, such as a cell, in the annotation image). In some examples, the annotation image can include an annotation box to indicate the location of the content and a category of the content. For example, if labeling a cervical cell slice image, the labeled image may include a label box indicating the location of the cell and the cell type. In some examples, the content in the cytopathology image to be trained may be boxed and the category of the content determined by a professional annotating physician using an annotation box. In some examples, the shape of the label box may be a closed polygon. For example, the shape of the label box may be rectangular.
In some examples, the annotation label may be an annotation label of a slice level annotation corresponding to the cytopathology image to be trained (i.e., an annotation label is formed by annotating the entire cytopathology image to be trained). In some examples, the annotation tag can correspond to a classification result.
Fig. 4(a) is a schematic diagram illustrating a reference slice color image according to an example of the present disclosure. Fig. 4(b) is a schematic diagram illustrating an effective region of a reference slice color image according to an example of the present disclosure.
As mentioned above, cytopathology images are typically very large. In some examples, in step S120, the cytopathology image to be trained may be pre-processed. In some examples, in the pre-processing, a slice color image of a first resolution may be selected as a reference slice color image and a slice color image of a second resolution may be selected as a target slice color image from the cytopathology image. In some examples, the active region of the reference slice color image may be acquired based on the reference slice color image. As an example of the reference slice color image, fig. 4(a) shows a reference slice color image. In some examples, the active area of the reference slice color image may be mapped to the target slice color image to determine the active area of the target slice color image. In this case, the block image may be subsequently acquired based on the effective region of the color image of the target slice. This can reduce the amount of calculation.
In some examples, in acquiring the effective region of the reference slice color image, the reference slice color image may be converted into a reference grayscale image in grayscale mode, the reference grayscale image may be adaptively threshold-segmented and color-inverted using a binarization threshold segmentation algorithm (e.g., Otsu's method (OTSU)) to acquire a reference binarized image, and the reference binarized image may be subjected to dilation and erosion to acquire a white region, which is taken as the effective region of the reference slice color image (see fig. 4(b)). Thereby, the effective region of the reference slice color image can be determined. In some examples, performing dilation and erosion on the reference binarized image may yield a binary segmentation image containing white regions and black regions; for example, the reference binarized image may be subjected to 2 dilation operations and 2 erosion operations to obtain such a binary segmentation image. In some examples, the black region may be the background region of the reference slice color image. In some examples, the reference grayscale image may be denoised (e.g., with a median blur) before adaptive threshold segmentation.
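As an illustration, the effective-region extraction just described might look as follows with OpenCV; the blur aperture, structuring-element size, and iteration counts are assumed values, not parameters given in the patent.

```python
import cv2
import numpy as np

def effective_region_mask(reference_bgr: np.ndarray) -> np.ndarray:
    """Binary mask whose white pixels form the effective region."""
    gray = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)   # grayscale mode
    gray = cv2.medianBlur(gray, 5)                           # optional denoising
    # Otsu adaptive threshold; THRESH_BINARY_INV also performs the color inversion
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    # 2 dilation and 2 erosion operations yield the white/black segmentation
    dilated = cv2.dilate(binary, kernel, iterations=2)
    return cv2.erode(dilated, kernel, iterations=2)
```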
In some examples, the effective region of the reference slice color image may be mapped to the target slice color image to determine the effective region of the target slice color image. Specifically, the circumscribed rectangle of the effective region of the reference slice color image may be obtained, and the corresponding circumscribed rectangle on the target slice color image derived from the reduction multiple of the reference slice color image relative to the target slice color image. In some examples, the circumscribed rectangle corresponding to the target slice color image may be used as the effective region of the target slice color image. In some examples, that circumscribed rectangle may be enlarged by 5% to 10% and the enlarged rectangle used as the effective region of the target slice color image. Thereby, more contents can be obtained for subsequent training.
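A companion sketch for mapping that region onto the target resolution follows; the 8% expansion is one point inside the 5% to 10% range mentioned above, and clamping at the image origin is an added safeguard.

```python
import cv2
import numpy as np

def map_effective_region(mask: np.ndarray, reduction: float,
                         expand: float = 0.08):
    """Map the reference-image effective region onto the target image.

    `reduction` is the reduction multiple of the reference slice color image
    relative to the target slice color image.
    """
    x, y, w, h = cv2.boundingRect(mask)                # circumscribed rectangle
    x, y, w, h = [int(round(v * reduction)) for v in (x, y, w, h)]
    dx, dy = int(w * expand / 2), int(h * expand / 2)  # enlarge by ~5%-10%
    return max(0, x - dx), max(0, y - dy), w + 2 * dx, h + 2 * dy
```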
In some examples, the slice color image of the first resolution may be a thumbnail in the cytopathology image. In some examples, the first resolution may be less than the second resolution. This enables the effective region of the high-resolution slice color image to be determined based on the low-resolution slice color image.
Fig. 5(a) is a schematic diagram showing a block image according to an example of the present disclosure.
In some examples, in the preprocessing, the target slice color image may be partitioned into blocks based on its effective region to obtain a plurality of block images to be trained. In some examples, a sliding-window method may be used to partition the target slice color image into blocks of a preset size based on its effective region. As an example, fig. 5(a) shows a schematic diagram of a block image of a target slice color image.
Specifically, a preset size (for example, 1024 × 1024) may be used as the sliding distance of the window; the window is slid along the transverse and longitudinal directions of the effective region of the target slice color image by that distance, and the image under each window position on the target slice color image is taken as a block image, as in the sketch below. However, the examples of the present disclosure are not limited to this; in other examples, the target slice color image may be partitioned directly, without acquiring its effective region.
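A minimal sliding-window sketch under those assumptions; dropping partial windows at the image edge is an illustrative choice that the patent does not prescribe.

```python
import numpy as np

def sliding_window_blocks(target: np.ndarray, region, size: int = 1024):
    """Cut the effective region into size x size block images, sliding the
    window by `size` along the transverse and longitudinal directions."""
    x0, y0, w, h = region
    blocks = []
    for y in range(y0, y0 + h, size):
        for x in range(x0, x0 + w, size):
            block = target[y:y + size, x:x + size]
            if block.shape[:2] == (size, size):    # drop partial edge windows
                blocks.append(((x, y), block))
    return blocks
```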
In some examples, in the preprocessing, the same block partitioning applied to the target slice color image may be applied to the annotation image to obtain block annotation images to be trained. In some examples, where an annotation box intersects the boundary of a block annotation image to be trained and the ratio of the intersection area to the area of the annotation box is less than a preset value (e.g., 50%), that block annotation image may be discarded.
In some examples, in step S130, the staining styles of the plurality of normalized color images to be trained may be consistent. In some examples, the staining normalization may be performed on the block images of the target slice color image in parallel (i.e., the staining normalization of each block image is distributed to a different process or thread), which can effectively improve the efficiency of normalizing the block images.
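One plausible way to distribute the per-block normalization is sketched below, assuming the normalization function and block images are picklable; the worker count is an assumption.

```python
from concurrent.futures import ProcessPoolExecutor

def normalize_in_parallel(blocks, normalize_fn, workers: int = 8):
    """Distribute the staining normalization of each block image across
    separate processes and collect the normalized color images in order."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(normalize_fn, blocks))
```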
In some examples, the block images may be stain-normalized based on a generative adversarial network (GAN). A generative adversarial network is a deep learning model comprising at least a generation network (generator) and a discrimination network (discriminator). In general, by learning the features of a training set, the generation network can, under the guidance of the discrimination network, generate similar data carrying those training-set features. The discrimination network judges whether its input is real data or fake data produced by the generator and feeds the result back to the generation network. The discrimination network and the generation network are trained alternately until the data generated by the generation network can pass for real.
In some examples, the staining normalization process based on the generative adversarial network may include graying a block image to obtain a grayscale image and normalizing the grayscale image with the trained generation network to obtain the normalized color image corresponding to that grayscale image. Thereby, staining normalization can be performed on each block image of the target slice color image.
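A hedged PyTorch sketch of that graying-plus-generation inference step follows; the tensor layout (1 x 1 x H x W in, 1 x 3 x H x W out) and the [0, 1] value range of the generator output are assumptions.

```python
import cv2
import numpy as np
import torch

@torch.no_grad()
def stain_normalize(block_rgb: np.ndarray, generator: torch.nn.Module,
                    device: str = "cpu") -> np.ndarray:
    """Gray a block image, then let the trained generation network output
    the corresponding normalized color image."""
    gray = cv2.cvtColor(block_rgb, cv2.COLOR_RGB2GRAY)    # grayscale mode
    x = torch.from_numpy(gray).float().div(255.0)
    x = x[None, None].to(device)                          # 1 x 1 x H x W
    y = generator(x).clamp(0.0, 1.0)                      # 1 x 3 x H x W
    y = y.squeeze(0).permute(1, 2, 0).cpu().numpy()
    return (y * 255.0).astype(np.uint8)                   # normalized color image
```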
Fig. 6 is a flow chart illustrating training of a generation network according to an example of the present disclosure.
As shown in fig. 6, in some examples, the training process of the generation network may include preparing block images to be trained (step S131), graying the block images to be trained to obtain grayscale images to be trained (step S132), and training the normalization network based on the generative adversarial network so that the reconstructed block images output by the generation network from the grayscale images to be trained match the block images to be trained (step S133). In this case, because the block images to be trained are grayed and the grayed images are used to train the normalization network, a normalization network trained on cytopathology images of one staining style can normalize cytopathology images of different staining styles.
In some examples, in step S131, a plurality of segmented images to be trained may be prepared.
In some examples, in step S132, the block images to be trained may be grayed, i.e., converted into grayscale images to be trained in grayscale mode.
In some examples, in step S133, the normalization network may be trained on the block images to be trained and the grayscale images to be trained so that the reconstructed block images output by the generation network from the grayscale images to be trained match the block images to be trained. In some examples, the normalization network may be constructed based on the generative adversarial network. In some examples, the normalization network may include the generation network, which receives the grayscale image to be trained and generates the reconstructed block image, and the discrimination network, which receives a first stitched image, stitched from the block image to be trained and the grayscale image to be trained, and a second stitched image, stitched from the grayscale image to be trained and the reconstructed block image, and outputs a discrimination result (e.g., true or false).
In some examples, in training the normalization network, an adversarial loss function of the discrimination network may be constructed based on the discrimination result, and the network parameters of the discrimination network may be updated with the adversarial loss function. In some examples, the adversarial loss function may be determined by an adversarial loss term. In some examples, the adversarial loss term may include the expectation that the first stitched image is discriminated as true and the expectation that the second stitched image is discriminated as false. In some examples, the adversarial loss function L_D may satisfy formula (1):

L_D = E_y[log(D(y))] + E_y'[log(1 - D(y'))]   (1)

where y denotes the first stitched image, D(y) is the probability that the first stitched image is true, y' denotes the second stitched image, and D(y') is the probability that the second stitched image is true. E_y[log(D(y))] is the expectation that the first stitched image is discriminated as true, and E_y'[log(1 - D(y'))] is the expectation that the second stitched image is discriminated as false.
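The following PyTorch sketch computes formula (1) for a conditional discriminator; the channel-wise concatenation used to model the stitched images and the numerical epsilon are assumptions, since the patent does not specify how stitching is performed.

```python
import torch

def adversarial_loss(d: torch.nn.Module, x_rgb: torch.Tensor,
                     x_gray: torch.Tensor, x_rgb_fake: torch.Tensor):
    """Formula (1): L_D = E_y[log(D(y))] + E_y'[log(1 - D(y'))]."""
    eps = 1e-7                                       # numerical stability
    y = torch.cat([x_gray, x_rgb], dim=1)            # first stitched image
    y_fake = torch.cat([x_gray, x_rgb_fake], dim=1)  # second stitched image
    return (torch.log(d(y) + eps).mean()
            + torch.log(1.0 - d(y_fake) + eps).mean())
```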
However, examples of the present disclosure are not limited thereto, and in other examples, the discrimination network may receive the block images to be trained and the reconstructed block images and output the discrimination result based on the block images to be trained and the reconstructed block images. That is, the block image to be trained and the reconstructed block image may not be respectively stitched with the grayscale image to be trained.
In some examples, a generation loss function of the generation network may be constructed based on the adversarial loss function, the block image to be trained, and the reconstructed block image, and the network parameters of the generation network may be updated with the generation loss function. In this case, the reconstructed block image output by the generation network can be made to match the block image to be trained.
In some examples, the generation loss function may be determined from the adversarial loss term and a generation loss term. In some examples, the generation loss term may be obtained based on the difference between the pixel points of the reconstructed block image and the pixel points of the block image to be trained; for example, it may be obtained based on the first-order norm between the pixel points of the reconstructed block image and the pixel points of the block image to be trained. In some examples, the generation loss function L_G may satisfy formula (2):

L_G = L_D + E[||x_rgb' - x_rgb||]   (2)

where x_rgb denotes the pixel points of the block image to be trained, x_rgb' denotes the pixel points of the reconstructed block image, ||x_rgb' - x_rgb|| is the first-order norm, and E[||x_rgb' - x_rgb||] is its expectation, which drives the reconstructed block image closer to the block image to be trained.
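A matching sketch of formula (2), reusing adversarial_loss from the sketch above; reducing the first-order norm by its mean is one way to model the expectation and is an assumption.

```python
def generation_loss(d, x_rgb, x_gray, x_rgb_fake):
    """Formula (2): L_G = L_D + E[||x_rgb' - x_rgb||]."""
    l_d = adversarial_loss(d, x_rgb, x_gray, x_rgb_fake)
    l1 = (x_rgb_fake - x_rgb).abs().mean()   # E[||x_rgb' - x_rgb||], first-order norm
    return l_d + l1                          # minimized with respect to the generator
```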
In some examples, the loss function values may be computed from the results of forward propagation, and the computed gradients may then be propagated backwards to update the network parameters. In some examples, the process of updating the network parameters of the discrimination network and of the generation network may be: with the generation network fixed, update the network parameters of the discrimination network by maximizing the adversarial loss function; and with the discrimination network fixed, update the network parameters of the generation network by minimizing the generation loss function. In this case, the network parameters of the two networks are continually updated with the adversarial loss function and the generation loss function, enabling the reconstructed block images generated by the generation network to match the block images to be trained.
Specifically, the network parameters of the generation network may be updated a first preset number of times (e.g., 3 times), and the parameters of the discrimination network may then be updated a second preset number of times (e.g., 1 time). The discrimination network and the generation network are trained alternately until the data generated by the generation network can pass for real; for example, for the first stitched image and the second stitched image, the probability output by the discrimination network for a given discrimination result (e.g., true or false) is about 0.5 (i.e., between true and false).
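The alternating schedule might then be sketched as follows; the optimizers are assumed to be standard PyTorch optimizers, and negating the adversarial loss turns gradient descent into the maximization required for the discrimination network.

```python
def train_step(g, d, opt_g, opt_d, x_rgb, x_gray,
               g_updates: int = 3, d_updates: int = 1):
    """Update G a first preset number of times (e.g. 3), then D a second
    preset number of times (e.g. 1)."""
    for _ in range(g_updates):               # discrimination network fixed
        opt_g.zero_grad()
        loss_g = generation_loss(d, x_rgb, x_gray, g(x_gray))
        loss_g.backward()                    # minimize the generation loss
        opt_g.step()
    for _ in range(d_updates):               # generation network fixed
        opt_d.zero_grad()
        loss_d = -adversarial_loss(d, x_rgb, x_gray, g(x_gray).detach())
        loss_d.backward()                    # maximize the adversarial loss
        opt_d.step()
```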
Examples of the disclosure are not limited thereto; in other examples, the training process of the generation network may not apply blocking to the training images. Specifically, an image to be trained may be prepared and grayed to convert it into a grayscale image to be trained in grayscale mode, and the normalization network based on the generative adversarial network may be trained on the image to be trained and the grayscale image to be trained. The generation network may receive the grayscale image to be trained and generate a reconstructed image; the discrimination network may receive a first stitched image, stitched from the image to be trained and the grayscale image to be trained, and a second stitched image, stitched from the grayscale image to be trained and the reconstructed image, and output a discrimination result based on the first and second stitched images. For details, refer to the related description of training the normalization network based on the generative adversarial network in step S133.
In addition, the present disclosure relates to a generation training device 101 for the generation network. The generation training device 101 is configured to perform the training process of the generation network described above. Fig. 7 is a block diagram showing the generation training device 101 according to an example of the present disclosure. As shown in fig. 7, in some examples, the generation training device 101 may include a preparation module 111, which may be used to prepare a plurality of block images to be trained; a graying module 121, which may be used to gray the block images to be trained to convert them into grayscale images to be trained in grayscale mode; and a training module 131, which may be used to train the normalization network based on the block images to be trained and the grayscale images to be trained so that the reconstructed block images output by the generation network from the grayscale images to be trained match the block images to be trained. For details, refer to the related description in step S133, which is not repeated here.
Fig. 5(b) is a schematic diagram showing the location of contents to which examples of the present disclosure relate.
In some examples, in step S140 of the training method, the number of block features of each normalized color image may be one or more. In some examples, the feature extraction model may be a target detection network. In some examples, the feature extraction model may be a target detection network based on the EfficientDet architecture. In some examples, the feature extraction model may be a target detection network based on the RetinaNet architecture. In some examples, the block features may include at least the location of the contents, the category of the contents, and a confidence.
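Since the patent names RetinaNet only as an architecture, the sketch below uses torchvision's implementation as a stand-in; the class count of 6 mirrors the cervical categories listed in the next paragraph and is otherwise an assumption.

```python
import torch
import torchvision

# Stand-in target detection network; an EfficientDet implementation could
# be substituted without changing the surrounding logic.
model = torchvision.models.detection.retinanet_resnet50_fpn(
    weights=None, num_classes=6)
model.eval()

@torch.no_grad()
def block_features(normalized_block: torch.Tensor):
    """normalized_block: float tensor, 3 x H x W, values in [0, 1]."""
    out = model([normalized_block])[0]
    # each block feature: location of the contents (box), category, confidence
    return out["boxes"], out["labels"], out["scores"]
```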
In some examples, where the cytopathology image is a cervical cell slice image, the category of the contents may be negative for intraepithelial lesion or malignancy (NILM), atypical squamous cells of undetermined significance (ASC-US), low-grade squamous intraepithelial lesion (LSIL), atypical squamous cells, cannot exclude HSIL (ASC-H), high-grade squamous intraepithelial lesion (HSIL), or atypical glandular cells (AGC). As an example of the location of contents, fig. 5(b) shows the locations of contents in a block image of a cervical cell slice image, in which the category of the content at location A is low-grade squamous intraepithelial lesion (LSIL), the category at location B is high-grade squamous intraepithelial lesion (HSIL), and the category at location C is atypical squamous cells of undetermined significance (ASC-US).
In some examples, in step S150, the block features of the normalized color images to be trained of the target slice color image may be obtained with the trained feature extraction model. In some examples, the feature information of the target slice color image may be feature-fused to generate the slice features to be trained of the target slice color image. In some examples, in the feature fusion process, the feature information of the target slice color image may be reduced in dimension, and the reduced feature information concatenated to generate the slice features of the target slice color image. In some examples, Principal Component Analysis (PCA) may be used for the dimension reduction. In some examples, the reduced feature information of the target slice color image may be concatenated into one feature vector of a preset dimension (e.g., 1 × 300). Thereby, the slice features of the target slice color image can be obtained.
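A minimal sketch of the PCA reduction and concatenation, assuming the PCA has been fitted in advance on feature information gathered from training slices; the component count and the 10-piece layout yielding a 1 x 300 vector are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=30)   # fitted offline on training-slice feature information

def fuse_slice_feature(feature_info: np.ndarray) -> np.ndarray:
    """Reduce each piece of feature information in dimension, then connect
    the reduced pieces into one preset-dimension slice feature vector."""
    reduced = pca.transform(feature_info)     # dimension reduction, e.g. 10 x 30
    return reduced.reshape(1, -1)             # concatenation, e.g. 1 x 300
```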
In some examples, the feature information may include statistical features. In some examples, the statistical features may be obtained based on a plurality of block features of the target slice color image. As described above, the block features may include at least the location of the contents, the category of the contents, and a confidence. In some examples, the statistical features of the target slice color image may include at least one of a distribution histogram of the confidence of each category of contents, a distribution histogram of the area of each category of contents, and a distribution histogram of the perimeter of each category of contents. Therefore, the statistical features of the target slice color image can be acquired based on its block features.
In some examples, the block features (e.g., the sets of content locations, content categories, and confidences) of the block images of the target slice color image may be filtered, and the statistical features obtained based on the filtered block features. Specifically, a preset number (e.g., 100) of block features whose confidence is greater than a preset confidence may be selected. Target block features may then be chosen from these according to the confidence of each block feature and the location of its contents, and the statistical features obtained based on the target block features. In some examples, the target block features may be selected based on the overlap area between the content locations of block features: for example, if the overlap area between the content location of a high-confidence block feature and that of a low-confidence block feature is larger than a preset ratio (0.3 to 0.5; e.g., 0.5) of the content area of the low-confidence block feature, the low-confidence block feature is discarded. In this case, filtering the block features of the target slice color image and obtaining the statistical features from the filtered block features can effectively improve classification efficiency. In some examples, the overlap area of any two block features may be determined from the locations of their contents.
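A minimal sketch of this filtering and of the confidence histograms described above, assuming boxes in (x1, y1, x2, y2) form; the confidence threshold, class count, and bin count are illustrative values.

```python
import numpy as np

def box_area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def overlap_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def filter_block_features(boxes, labels, scores, min_conf=0.5,
                          top_k=100, ratio=0.5):
    """Keep up to top_k features above min_conf, then drop a low-confidence
    feature whose content is mostly covered by a higher-confidence one."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] > min_conf]
    kept = []
    for i in order[:top_k]:
        if all(overlap_area(boxes[j], boxes[i]) <= ratio * box_area(boxes[i])
               for j in kept):                 # j always has higher confidence
            kept.append(i)
    return kept

def confidence_histograms(labels, scores, kept, n_classes=6, bins=10):
    """Distribution histogram of confidence for each category of contents."""
    feats = []
    for c in range(n_classes):
        sel = [i for i in kept if labels[i] == c]
        hist, _ = np.histogram(scores[sel], bins=bins, range=(0.0, 1.0))
        feats.append(hist)
    return np.concatenate(feats)
```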
In some examples, in step S160, the slice classification model may be trained based on the slice features to be trained and the annotation labels of the target slice color image to obtain a trained slice classification model. The slice classification model may be used to classify the slice features to be trained of the target slice color image. In some examples, the slice classification model may be a model based on the random forest algorithm.
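A hedged sketch of the slice classification model as a random forest; the hyperparameters are assumptions, not values from the patent.

```python
from sklearn.ensemble import RandomForestClassifier

def train_slice_classifier(slice_features, annotation_labels):
    """Train the slice classification model on slice features and their
    slice-level annotation labels."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(slice_features, annotation_labels)
    return clf   # clf.predict(...) yields e.g. NILM, ASC-US, LSIL, ...
```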
Hereinafter, the training apparatus 200 for cytopathology classification using generative staining normalization according to the present disclosure will be described in detail with reference to the accompanying drawings; it may sometimes simply be referred to as the training apparatus 200. The training apparatus 200 is used to implement the training method described above. Fig. 8 is a block diagram illustrating the training apparatus 200 for cellular pathology classification using generative stain normalization in accordance with an example of the present disclosure.
In some examples, as shown in fig. 8, the training apparatus 200 may include a preparation module 210, a pre-processing module 220, a stain normalization module 230, a feature extraction model training module 240, a feature extraction module 250, and a slice classification model training module 260.
In some examples, the preparation module 210 may be used to prepare a cytopathology image to be trained, an annotation image, and an annotation label. In some examples, the cytopathology image may include slice color images of multiple resolutions. In some examples, the slice color image may have an effective area containing contents and a background region. In some examples, the contents may be cells. In some examples, the cytopathology images to be trained may be images of the same staining style, for example cervical cell slice images of the same staining style. In this case, a slice classification model trained on cervical cell slice images of one staining style can classify cervical cell slice images of different staining styles. In some examples, the annotation image may be a content-level annotation corresponding to the cytopathology image to be trained (i.e., the contents, such as cells, are annotated in the image). In some examples, the annotation image may include annotation boxes indicating the locations of the contents and the categories of the contents. In some examples, the annotation label may be a slice-level annotation corresponding to the cytopathology image to be trained. For details, refer to the related description in step S110, which is not repeated here.
In some examples, the preprocessing module 220 may be configured to preprocess the cytopathology image to be trained and the annotation image to obtain block images to be trained and block annotation images to be trained. In some examples, a slice color image of a first resolution may be selected from the cytopathology image as a reference slice color image and a slice color image of a second resolution may be selected as a target slice color image. In some examples, the effective area of the reference slice color image may be acquired based on the reference slice color image, and this effective area may be mapped to the target slice color image to determine the effective area of the target slice color image. In this case, the block images may subsequently be acquired based on the effective area of the target slice color image (i.e., the slice color image of the second resolution), which reduces the amount of computation. In some examples, in obtaining the effective area of the reference slice color image, the reference slice color image may be converted into a reference grayscale image in grayscale mode; the reference grayscale image may be adaptively threshold-segmented and color-inverted using a binarization threshold segmentation algorithm (e.g., Otsu's method (OTSU)) to obtain a reference binarized image; and the reference binarized image may be dilated and eroded to obtain a white region, which is taken as the effective area of the reference slice color image. In some examples, the slice color image of the first resolution may be a thumbnail in the cytopathology image, and the first resolution may be less than the second resolution. This enables the effective area of the high-resolution slice color image to be determined from the low-resolution slice color image. For details, refer to the related description in step S120, which is not repeated here.
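A minimal OpenCV sketch of this effective-area extraction, assuming a BGR thumbnail and an illustrative morphological kernel size:

```python
import cv2
import numpy as np

def effective_region_mask(reference_bgr):
    """Grayscale -> Otsu threshold with color inversion -> dilate/erode;
    white pixels of the returned mask mark the effective area."""
    gray = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    # THRESH_BINARY_INV turns the darker stained content white on black.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((15, 15), np.uint8)  # kernel size is an assumption
    mask = cv2.dilate(binary, kernel)
    mask = cv2.erode(mask, kernel)
    return mask

# The mask's coordinates can then be scaled up by the ratio of the
# second (target) resolution to the first (reference) resolution.
mask = effective_region_mask(cv2.imread("thumbnail.png"))
```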
In some examples, the preprocessing module 220 may perform block processing on the target slice color image based on the effective area of the target slice color image to obtain a plurality of block images to be trained. However, the examples of the present disclosure are not limited to this; in other examples, the target slice color image may be block-processed directly, without acquiring its effective area. In some examples, the preprocessing module 220 may perform the same block processing on the annotation image to obtain block annotation images to be trained. In some examples, if a block annotation image to be trained intersects an annotation box and the ratio of the intersection area to the area of the annotation box is less than a preset value (e.g., 50%), the annotation box is discarded. For details, refer to the related description in step S120, which is not repeated here.
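The tiling and the 50% annotation-box rule might be sketched as follows (tile size, foreground threshold, and helper names are illustrative assumptions; the mask is assumed to be a NumPy array already scaled to the target resolution):

```python
def tile_coordinates(mask, tile=512, min_fg_ratio=0.05):
    """Yield top-left corners of tiles that overlap the effective-area
    mask by at least min_fg_ratio."""
    h, w = mask.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            if (mask[y:y + tile, x:x + tile] > 0).mean() >= min_fg_ratio:
                yield x, y

def keep_box(box, tile_rect, min_ratio=0.5):
    """Keep an annotation box for a tile only if at least min_ratio of
    the box's area falls inside the tile (the 50% rule above)."""
    x1, y1, x2, y2 = box
    tx1, ty1, tx2, ty2 = tile_rect
    iw = max(0, min(x2, tx2) - max(x1, tx1))
    ih = max(0, min(y2, ty2) - max(y1, ty1))
    box_area = max(1, (x2 - x1) * (y2 - y1))
    return iw * ih / box_area >= min_ratio
```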
In some examples, the stain normalization module 230 may be configured to perform stain normalization on each block image of the target slice color image to obtain a plurality of normalized color images to be trained. In some examples, the staining styles of the plurality of normalized color images to be trained may be consistent. In some examples, the stain normalization processing may be performed on the block images in parallel (i.e., the normalization of individual block images is distributed over different processes or threads), which can effectively improve the efficiency of normalizing the block images. For details, refer to the related description in step S130, which is not repeated here.
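A sketch of such parallelization with Python's standard library (the pool size, file paths, and the per-tile placeholder are assumptions):

```python
from concurrent.futures import ProcessPoolExecutor

def normalize_tile(tile_path: str) -> str:
    # Placeholder: load the tile, grayscale it, run the trained
    # generation network (step S130), save and return the result path.
    return tile_path.replace(".png", "_norm.png")

if __name__ == "__main__":
    tile_paths = [f"tiles/{i:04d}.png" for i in range(256)]  # hypothetical
    with ProcessPoolExecutor(max_workers=8) as pool:
        normalized = list(pool.map(normalize_tile, tile_paths))
```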
In some examples, the feature extraction model training module 240 may train a feature extraction model for obtaining block features of the normalized color images to be trained, based on the normalized color images to be trained and the block annotation images to be trained. In some examples, the number of block features of each normalized color image may be one or more. In some examples, the feature extraction model may be an object detection network, for example one based on the EfficientDet (scalable and efficient object detection) architecture. In some examples, the block features include at least the location of the content, the category of the content, and a confidence level. For details, refer to the related description in step S140, which is not repeated here.
In some examples, the feature extraction module 250 may obtain the block features of the normalized color images to be trained of the target slice color image based on the trained feature extraction model, and perform feature fusion processing on the feature information of the target slice color image to generate the slice features to be trained. In some examples, in the feature fusion processing, the feature information of the target slice color image may be reduced in dimension, and the reduced feature information may be connected to generate the slice features. In some examples, the feature information may include statistical features obtained based on the plurality of block features of the target slice color image. As described above, each block feature may include at least the location of a content, the category of the content, and a confidence level. In some examples, the statistical features may include at least one of a distribution histogram of the confidence levels of each category of content, a distribution histogram of the areas of each category of content, and a distribution histogram of the perimeters of each category of content. The statistical features can thus be obtained based on the block features of the target slice color image. For details, refer to the related description in step S150, which is not repeated here.
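The fusion itself (dimension reduction followed by connection) might be sketched as below; PCA stands in for whichever reduction the implementation actually uses, and all shapes are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
conf_hist = rng.random((50, 30))   # 50 slides x 30-bin confidence histograms
area_hist = rng.random((50, 30))   # illustrative shapes only
perim_hist = rng.random((50, 30))

# Reduce the dimension of each kind of feature information, then
# connect (concatenate) the reduced parts into one slice feature.
reduced = [PCA(n_components=8).fit_transform(f)
           for f in (conf_hist, area_hist, perim_hist)]
slice_features = np.concatenate(reduced, axis=1)  # shape (50, 24)
```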
In some examples, the feature extraction module 250 may further screen the block features (i.e., the locations, categories, and confidence levels of the contents) of the respective block images of the target slice color image, and obtain the statistical features based on the screened block features. Specifically, up to a preset number (e.g., 100) of block features whose confidence is greater than a preset confidence may be acquired, target block features may be selected from these according to the confidence of each block feature and the location of its content (for example, according to the overlap areas between the contents of block features, as described above), and the statistical features may be obtained based on the target block features. Because the block features are screened before the statistical features are computed, classification efficiency can be effectively improved. For details, refer to the related description in step S150, which is not repeated here.
In some examples, the slice classification model training module 260 may train the slice classification model based on the slice features to be trained and the annotation labels of the target slice color images to obtain a trained slice classification model, which can then be used to classify the slice features of a target slice color image. In some examples, the slice classification model may be a model based on a random forest algorithm. For details, refer to the related description in step S160, which is not repeated here.
The training apparatus 200 according to the examples of the present disclosure determines the effective area of a higher-resolution cytopathology image using a lower-resolution cytopathology image and obtains the block images of the target slice color image based on that effective area, thereby reducing redundant computation and improving classification efficiency. It further performs stain normalization on the block images of the target slice color image and obtains their block features with the trained feature extraction model before training the slice classification model, thereby improving classification accuracy.
In some examples, the present disclosure also provides a computer device comprising a memory storing a computer program and a processor that implements the steps of the training method described above when executing the computer program. In some examples, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the training method described above.
In some examples, a trained feature extraction model may be obtained via the training method described above. In some examples, the network structure of the trained feature extraction model may be optimized to improve its computational efficiency. In some examples, optimizing the network structure may include, but is not limited to, converting the parameter type of the feature extraction model (e.g., from float32 to float16) or merging parts of the network structure. In some examples, the network structure of the trained feature extraction model may be optimized using TensorRT. In this case, inference speed can be effectively increased without losing accuracy.
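As a hedged sketch of such an optimization (this assumes the model has already been exported to ONNX under the illustrative name feature_extractor.onnx and uses the TensorRT 8.x builder API; it is not the disclosure's own procedure):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("feature_extractor.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # float32 -> float16 conversion
engine = builder.build_serialized_network(network, config)
with open("feature_extractor.engine", "wb") as f:
    f.write(engine)
```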
Hereinafter, the method for cellular pathology classification using generative staining normalization according to the present disclosure will be described in detail with reference to the accompanying drawings. The method classifies cytopathology images using the trained feature extraction model and the trained slice classification model. It should be noted that the above description of the training method applies equally to the method of the present disclosure unless otherwise specified. Fig. 9 is a flow chart illustrating a method of cellular pathology classification using generative stain normalization in accordance with an example of the present disclosure.
In some examples, as shown in fig. 9, the method of the present disclosure may include acquiring a stained cytopathology image (step S210), preprocessing the cytopathology image to acquire block images (step S220), performing stain normalization processing on each block image of the target slice color image to acquire a plurality of normalized color images (step S230), acquiring the block features of the normalized color images based on the feature extraction model and performing feature fusion processing on the feature information to generate the slice features of the target slice color image (step S240), and classifying the slice features of the target slice color image based on the slice classification model to acquire a classification result of the cytopathology image (step S250). In this case, the effective area of the higher-resolution cytopathology image is determined using the lower-resolution cytopathology image and the block images of the target slice color image are obtained based on that effective area, which reduces redundant computation and improves classification efficiency; the block images are stain-normalized and their block features are obtained with the feature extraction model, which improves classification accuracy.
In some examples, in step S210, a cytopathology image may be acquired. In some examples, the cytopathology image may be stained. In some examples, the cytopathology image may include slice color images of multiple resolutions. In some examples, the slice color image may have an active area containing content. In some examples, the cytopathology images in the methods of the present disclosure may be cervical cell slice images of different staining styles. Thus, cervical cell slice images of different staining styles can be classified. For details, refer to the related description in step S110 of the training method.
In some examples, in step S220, the cytopathology image may be preprocessed to obtain a block image. In some examples, in the pre-processing, a slice color image of a first resolution may be selected as a reference slice color image and a slice color image of a second resolution may be selected as a target slice color image from the cytopathology image. In some examples, the first resolution may be less than the second resolution. In some examples, the active region of the reference slice color image may be acquired based on the reference slice color image (i.e., the active region of the slice color image of the second resolution may be acquired based on the slice color image of the first resolution). In some examples, the active area of the reference slice color image may be mapped to the target slice color image to determine the active area of the target slice color image. For details, refer to the related description of determining the effective area of the color image of the target slice in step S120 of the training method.
In some examples, in the pre-processing, the target slice color image may be block-processed based on an effective area of the target slice color image to acquire a plurality of block images of the target slice color image. However, the examples of the present disclosure are not limited to this, and in other examples, the target slice color image may be directly subjected to the blocking process without acquiring the effective area of the target slice color image. For details, refer to the related description about the blocking process for the color image of the target slice in step S120 in the training method.
In some examples, in step S230, stain normalization processing may be performed on each block image of the target slice color image to obtain a plurality of normalized color images with consistent staining styles. In some examples, the stain normalization processing may be performed on the block images in parallel, which can effectively improve the efficiency of normalizing the block images. In some examples, the stain normalization processing may be based on a generative adversarial network. In some examples, in the stain normalization processing, each block image of the target slice color image may be grayed to convert it into a grayscale image in grayscale mode, and the grayscale image may be normalized based on a trained generation network to obtain the normalized color image corresponding to the grayscale image. For details, refer to the related description of the stain normalization processing in step S130 of the training method.
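A PyTorch sketch of the generator inference in this step, assuming a trained image-to-image generator that maps a 1-channel grayscale tile in [-1, 1] to a 3-channel color tile (the normalization range and tensor layout are assumptions):

```python
import cv2
import numpy as np
import torch

def stain_normalize(tile_bgr, generator, device="cuda"):
    """Gray the tile, then let the trained generation network recolor
    it into the uniform staining style."""
    gray = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2GRAY)
    x = torch.from_numpy(gray).float().div(127.5).sub(1.0)  # -> [-1, 1]
    x = x.unsqueeze(0).unsqueeze(0).to(device)              # NCHW (1,1,H,W)
    with torch.no_grad():
        y = generator(x)                                    # (1, 3, H, W)
    y = y.squeeze(0).permute(1, 2, 0).add(1.0).mul(127.5)   # back to [0,255]
    return y.clamp(0, 255).byte().cpu().numpy()
```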
In some examples, in step S240, the block features of the normalized color images of the target slice color image may be obtained based on the feature extraction model, and the feature information of the target slice color image may be feature-fused to generate the slice features. In some examples, in the feature fusion processing, the feature information may be reduced in dimension and the reduced feature information may be connected to generate the slice features. In some examples, the feature information may include statistical features obtained based on the plurality of block features of the target slice color image, such as at least one of a distribution histogram of the confidence levels of each category of content, a distribution histogram of the areas of each category of content, and a distribution histogram of the perimeters of each category of content. For details, refer to the related description in step S150 of the training method, which is not repeated here.
In some examples, in step S240, the plurality of block features of the target slice color image may also be screened, and the statistical features may be obtained based on the screened block features. Specifically, up to a preset number (e.g., 100) of block features whose confidence is greater than a preset confidence may be acquired, and target block features may be selected from these according to the confidence of each block feature and the location of its content, for example according to the overlap areas between the contents of block features. The statistical features may then be obtained based on the target block features. Because the block features are screened first, classification efficiency can be effectively improved. For details, refer to the related description in step S150 of the training method, which is not repeated here.
In some examples, in step S250, the slice features of the target slice color image may be classified based on the slice classification model to obtain a classification result of the cytopathology image. Specifically, the slice features may be classified based on the slice classification model to obtain a classification result of the slice features, which is taken as the classification result of the cytopathology image. In some examples, the slice classification model may be a model based on a random forest algorithm.
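Continuing the scikit-learn illustration from the training method (again with assumed names: clf is the trained random-forest model and slice_features is the fused feature vector of one slide):

```python
# Slide-level inference: the predicted class doubles as the
# classification result of the whole cytopathology image.
pred = clf.predict(slice_features.reshape(1, -1))[0]
prob = clf.predict_proba(slice_features.reshape(1, -1))[0]
```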
In some examples, the method of the present disclosure may also display the distribution of the various categories of content based on attention heatmaps.
The system 300 for cellular pathology classification using generative staining normalization according to the present disclosure is described in detail below with reference to the accompanying drawings. It may sometimes be referred to simply as the system 300. The system 300 may be used to implement the method described above. Fig. 10 is a block diagram illustrating a system 300 for cellular pathology classification using generative stain normalization in accordance with examples of the present disclosure.
In some examples, as shown in fig. 10, the system 300 may include an acquisition module 310, a preprocessing module 320, a stain normalization module 330, a feature extraction module 340, and a classification module 350. The acquisition module 310 may be used to acquire cytopathology images. The preprocessing module 320 may be used to preprocess the cytopathology image to obtain block images. The stain normalization module 330 may be configured to perform stain normalization on each block image of the target slice color image to obtain a plurality of normalized color images. The feature extraction module 340 may obtain the block features of the normalized color images based on the feature extraction model and generate the slice features of the target slice color image based on the feature information. The classification module 350 may classify the slice features based on the slice classification model to obtain a classification result of the cytopathology image. In this case, the effective area of the higher-resolution cytopathology image is determined using the lower-resolution cytopathology image and the block images of the target slice color image are obtained based on that effective area, which reduces redundant computation and improves classification efficiency; the block images are stain-normalized and their block features are obtained with the feature extraction model, which improves classification accuracy.
In some examples, the acquisition module 310 may be used to acquire cytopathology images. In some examples, the cytopathology image may be stained. In some examples, the cytopathology image may include slice color images of multiple resolutions. In some examples, the slice color image may have an active area containing content. In some examples, the cytopathology images in the methods of the present disclosure may be cervical cell slice images of different staining styles. Thus, cervical cell slice images of different staining styles can be classified. For details, refer to the related description in step S110 of the training method.
In some examples, the pre-processing module 320 may be used to pre-process the cytopathology image to obtain a block image. In some examples, in the pre-processing, a slice color image of a first resolution may be selected as a reference slice color image and a slice color image of a second resolution may be selected as a target slice color image from the cytopathology image. In some examples, the first resolution may be less than the second resolution. In some examples, the active region of the reference slice color image may be acquired based on the reference slice color image (i.e., the active region of the slice color image of the second resolution may be acquired based on the slice color image of the first resolution). In some examples, the active area of the reference slice color image may be mapped to the target slice color image to determine the active area of the target slice color image. For details, refer to the related description of determining the effective area of the color image of the target slice in step S120 of the training method.
In some examples, in the pre-processing, the target slice color image may be block-processed based on an effective area of the target slice color image to acquire a plurality of block images of the target slice color image. However, the examples of the present disclosure are not limited to this, and in other examples, the target slice color image may be directly subjected to the blocking process without acquiring the effective area of the target slice color image. For details, refer to the related description about the blocking process for the color image of the target slice in step S120 in the training method.
In some examples, the stain normalization module 330 may be configured to perform stain normalization on the individual block images of the target slice color image to obtain a plurality of normalized color images with consistent staining styles. In some examples, the stain normalization processing may be performed on the block images in parallel, which can effectively improve the efficiency of normalizing the block images. In some examples, the stain normalization processing may be based on a generative adversarial network. In some examples, in the stain normalization processing, each block image may be grayed to convert it into a grayscale image in grayscale mode, and the grayscale image may be normalized based on the trained generation network to obtain the corresponding normalized color image. For details, refer to the related description of the stain normalization processing in step S130 of the training method.
In some examples, the feature extraction module 340 may obtain the block features of the normalized color images of the target slice color image based on the feature extraction model, and feature-fuse the feature information of the target slice color image to generate the slice features. In some examples, in the feature fusion processing, the feature information may be reduced in dimension and the reduced feature information may be connected to generate the slice features. In some examples, the feature information may include statistical features obtained based on the plurality of block features of the target slice color image, such as at least one of a distribution histogram of the confidence levels of each category of content, a distribution histogram of the areas of each category of content, and a distribution histogram of the perimeters of each category of content. For details, refer to the related description in step S150 of the training method, which is not repeated here.
In some examples, the feature extraction module 340 may be further configured to screen the plurality of block features of the target slice color image and obtain the statistical features based on the screened block features. Specifically, up to a preset number (e.g., 100) of block features whose confidence is greater than a preset confidence may be acquired, and target block features may be selected from these according to the confidence of each block feature and the location of its content, for example according to the overlap areas between the contents of block features. The statistical features may then be obtained based on the target block features. Because the block features are screened first, classification efficiency can be effectively improved. For details, refer to the related description in step S150 of the training method, which is not repeated here.
In some examples, the classification module 350 may classify the slice features of the target slice color image based on the slice classification model to obtain a classification result of the cytopathology image. Specifically, the slice features may be classified based on the slice classification model to obtain a classification result of the slice features, which is taken as the classification result of the cytopathology image. In some examples, the slice classification model may be a model based on a random forest algorithm.
In some examples, the system 300 may also display the distribution of the various categories of content based on attention heatmaps.
In some examples, the present disclosure also provides a computer device comprising a memory storing a computer program and a processor that implements the steps of the above method when executing the computer program. In some examples, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above method.
While the present disclosure has been described in detail in connection with the drawings and examples, it should be understood that the above description is not intended to limit the disclosure in any way. Those skilled in the art can make modifications and variations to the present disclosure as needed without departing from the true spirit and scope of the disclosure, which fall within the scope of the disclosure.

Claims (10)

1. A system for cellular pathology classification using generative staining normalization, comprising:
an acquisition module that acquires a stained cytopathology image comprising slice color images of a plurality of resolutions;
a preprocessing module that takes a slice color image of a second resolution as a target slice color image, acquires an effective region of the target slice color image based on a slice color image of a first resolution, and performs block processing on the target slice color image based on the effective region to acquire a plurality of block images, wherein the first resolution is smaller than the second resolution;
a stain normalization module that performs stain normalization processing based on a generative adversarial network on each block image of the target slice color image to obtain a plurality of normalized color images with a consistent staining style;
a feature extraction module that acquires block features of the normalized color images based on a feature extraction model, acquires statistical features based on the plurality of block features of the target slice color image, and performs feature fusion processing on feature information comprising the statistical features to generate slice features; and
a classification module that classifies the slice features based on a slice classification model to obtain a classification result of the slice features, which serves as a classification result of the cytopathology image.
2. The system of claim 1, wherein:
the effective region contains contents, the contents being cells, and each block feature includes at least a location of a content, a category of the content, and a confidence level of the content.
3. The system of claim 2, wherein:
in the feature fusion processing, the feature information of the target slice color image is reduced in dimension and the reduced feature information is connected to generate the slice features, and the statistical features include at least one of a distribution histogram of confidence levels of each category of content, a distribution histogram of areas of each category of content, and a distribution histogram of perimeters of each category of content.
4. The system of claim 1, wherein:
the method comprises the steps of taking a slice color image of a first resolution as a reference slice color image, mapping an effective region of the reference slice color image to a target slice color image to determine an effective region of the target slice color image, wherein in the effective region of the reference slice color image, the reference slice color image is converted into a reference gray-scale image of a gray-scale mode, the reference gray-scale image is subjected to adaptive threshold segmentation and color inversion by utilizing a binarization threshold segmentation algorithm to obtain a reference binarization image, the reference binarization image is subjected to dilation and erosion processing to obtain a white region, and the white region is taken as the effective region of the reference slice color image.
5. The system of claim 1, wherein:
the cytopathology image is a cervical cell slice image, which may be of any of different staining styles.
6. The system of claim 1, wherein:
in the stain normalization processing, each block image of the target slice color image is grayed to convert it into a grayscale image in grayscale mode, and the grayscale image is normalized based on a trained generation network to obtain a normalized color image corresponding to the grayscale image, wherein the training process of the generation network comprises preparing a plurality of block images to be trained, graying the block images to be trained to convert them into grayscale images in grayscale mode, and constructing a normalization network based on a generative adversarial network, wherein the normalization network comprises the generation network and a discrimination network, and the normalization network is trained so that the reconstructed block images output by the generation network match the block images to be trained.
7. The system of claim 6, wherein:
the generation network receives the grayscale images to be trained and generates the reconstructed block images; the discrimination network receives a first stitched image stitched from a block image to be trained and the corresponding grayscale image to be trained and a second stitched image stitched from the grayscale image to be trained and the corresponding reconstructed block image, and outputs a discrimination result; and in the training, an adversarial loss function of the discrimination network is constructed based on the discrimination result and used to update the network parameters of the discrimination network, and a generation loss function of the generation network is constructed based on the adversarial loss function, the block images to be trained, and the reconstructed block images and used to update the network parameters of the generation network, such that the reconstructed block images generated by the generation network match the block images to be trained.
8. A method of cellular pathology classification using generative staining normalization, comprising:
acquiring a stained cytopathology image comprising slice color images of a plurality of resolutions;
acquiring an effective region of a slice color image of a second resolution based on a slice color image of a first resolution and taking the slice color image of the second resolution as a target slice color image, wherein the first resolution is smaller than the second resolution;
performing block processing on the target slice color image based on the effective region to obtain a plurality of block images;
performing stain normalization processing based on a generative adversarial network on each block image of the target slice color image to obtain a plurality of normalized color images with a consistent staining style;
acquiring block features of the normalized color images based on a feature extraction model, acquiring statistical features based on the plurality of block features of the target slice color image, and performing feature fusion processing on feature information comprising the statistical features to generate slice features; and
classifying the slice features based on a slice classification model to obtain a classification result of the slice features, which serves as a classification result of the cytopathology image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of claim 8 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method as claimed in claim 8.
CN202210039250.0A 2021-01-20 2022-01-13 System and method for cellular pathology classification using generative staining normalization Active CN114170224B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110078244 2021-01-20
CN202110078226 2021-01-20
CN2021100782441 2021-01-20
CN2021100782263 2021-01-20

Publications (2)

Publication Number Publication Date
CN114170224A true CN114170224A (en) 2022-03-11
CN114170224B CN114170224B (en) 2022-09-02

Family

ID=80489342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210039250.0A Active CN114170224B (en) 2021-01-20 2022-01-13 System and method for cellular pathology classification using generative staining normalization

Country Status (1)

Country Link
CN (1) CN114170224B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923640A (en) * 2010-08-04 2010-12-22 中国科学院自动化研究所 Method for distinguishing false iris images based on robust texture features and machine learning
US20170333902A1 (en) * 2016-05-19 2017-11-23 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Automated Single Cell Cytological Classification in Flow
US20170333903A1 (en) * 2016-05-20 2017-11-23 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Automated Single Cell Cytological Classification in Flow
CN110322396A (en) * 2019-06-19 2019-10-11 怀光智能科技(武汉)有限公司 A kind of pathological section color method for normalizing and system
CN110648322A (en) * 2019-09-25 2020-01-03 杭州智团信息技术有限公司 Method and system for detecting abnormal cervical cells
CN111462036A (en) * 2020-02-18 2020-07-28 腾讯科技(深圳)有限公司 Pathological image processing method based on deep learning, model training method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237952A (en) * 2023-11-15 2023-12-15 山东大学 Method and system for labeling cell distribution of dyed pathological section based on immune topographic map
CN117237952B (en) * 2023-11-15 2024-02-09 山东大学 Method and system for labeling cell distribution of dyed pathological section based on immune topographic map

Also Published As

Publication number Publication date
CN114170224B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN109886986B (en) Dermatoscope image segmentation method based on multi-branch convolutional neural network
US20210118144A1 (en) Image processing method, electronic device, and storage medium
US7983486B2 (en) Method and apparatus for automatic image categorization using image texture
CN114140465B (en) Self-adaptive learning method and system based on cervical cell slice image
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
US20230005140A1 (en) Automated detection of tumors based on image processing
CN112862808A (en) Deep learning-based interpretability identification method of breast cancer ultrasonic image
CN111028923B (en) Digital pathological image staining normalization method, electronic device and storage medium
CN112001895B (en) Thyroid calcification detection device
CN110992377A (en) Image segmentation method, device, computer-readable storage medium and equipment
CN111444844A (en) Liquid-based cell artificial intelligence detection method based on variational self-encoder
CN113011450B (en) Training method, training device, recognition method and recognition system for glaucoma recognition
CN113160185A (en) Method for guiding cervical cell segmentation by using generated boundary position
CN114170224B (en) System and method for cellular pathology classification using generative staining normalization
CN114399510B (en) Skin focus segmentation and classification method and system combining image and clinical metadata
Rahman et al. MRI brain tumor classification using deep convolutional neural network
Sharma et al. A comparative study of cell nuclei attributed relational graphs for knowledge description and categorization in histopathological gastric cancer whole slide images
CN114782948A (en) Global interpretation method and system for cervical liquid-based cytology smear
Habib et al. Brain tumor segmentation and classification using machine learning
CN115775226B (en) Medical image classification method based on transducer
Zhang et al. Artifact detection in endoscopic video with deep convolutional neural networks
CN114359279B (en) Image processing method, image processing device, computer equipment and storage medium
Kalsoom et al. An efficient liver tumor detection using machine learning
Yancey Deep Feature Fusion for Mitosis Counting
Nancy et al. Skin lesion segmentation and classification using fcn-alexnet framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant