CN113781457A - Pathological image-based cell detection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113781457A
CN113781457A (application CN202111085582.4A)
Authority
CN
China
Prior art keywords
image
pathological
central point
sub
cell
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111085582.4A
Other languages
Chinese (zh)
Inventor
高楠楠
王�华
刘昌灵
张亚军
凌少平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Genowis Beijing Gene Technology Co ltd
Original Assignee
Genowis Beijing Gene Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Genowis Beijing Gene Technology Co ltd filed Critical Genowis Beijing Gene Technology Co ltd
Priority to CN202111085582.4A priority Critical patent/CN113781457A/en
Publication of CN113781457A publication Critical patent/CN113781457A/en
Pending legal-status Critical Current

Classifications

    (All under G Physics; G06 Computing, calculating or counting; G06T Image data processing or generation, in general)
    • G06T7/0012: Biomedical image inspection
    • G06T7/11: Region-based segmentation
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/90: Determination of colour characteristics
    • G06T2207/10024: Color image
    • G06T2207/10056: Microscopic image
    • G06T2207/20021: Dividing image into blocks, subimages or windows
    • G06T2207/30024: Cell structures in vitro; Tissue sections in vitro
    • G06T2207/30242: Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

The application provides a pathological image-based cell detection method, device, equipment and storage medium. The method comprises the following steps: acquiring a plurality of labeled pathological images; training and evaluating a pre-constructed cell detection model with each pathological image to obtain a qualified cell detection model; using the qualified model to acquire a central point expansion image of a pathological image to be processed; performing spot detection on the central point expansion image to obtain the central point pixel position of each central point expansion area; and counting the central point pixel positions in each central point expansion area group to obtain the number of cells of each cell type in the pathological image to be processed. The method reduces the manual workload while improving the efficiency of locating and counting different types of cells.

Description

Pathological image-based cell detection method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for cell detection based on a pathological image.
Background
Immunohistochemistry (IHC) is widely used for pathological diagnosis and for guiding tumor treatment. It stains specific antigens in tissue cells through an antigen-antibody, enzyme-labeled chromogenic reaction; the stained tissue sections are then digitally scanned to obtain pathological images. These images reflect the cell morphology in the tissue and the expression of specific protein molecular markers, provide important information for clinical diagnosis, and guide treatment strategies for targeted therapy and immunotherapy of tumors. To help a doctor understand a pathological image accurately and quickly, the different cell types in the image, and the number of cells of each type, must first be identified.
In existing practice, obtaining the number of cells of each type in a pathological image requires manual visual identification: a reader must accurately locate every cell in the image and distinguish its type. This not only requires the reader to be a professional with extensive relevant experience, but also consumes a large amount of the reader's time.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, an apparatus, a device and a storage medium for cell detection based on pathological images, which improve the efficiency of locating and counting different types of cells while reducing the labor burden.
The main aspects are as follows:
in a first aspect, an embodiment of the present application provides a cell detection method based on a pathological image, including the following steps:
step S1: acquiring a plurality of labeled pathology images, wherein the labels comprise, for at least one cell in each pathology image, a label of the cell center position and a label of the cell type;
step S2: training and evaluating a pre-constructed cell detection model by using each pathological image to obtain a qualified cell detection model, wherein the pre-constructed cell detection model is a semantic segmentation model;
step S3: acquiring a central point expansion image of a pathological image to be processed by using the qualified cell detection model, wherein the central point expansion image comprises at least one central point expansion area group, different central point expansion area groups are used for representing different cell types, and for each central point expansion area group, the central point expansion area group comprises at least one central point expansion area which is used for representing the position of a cell in the pathological image to be processed;
step S4: performing spot detection processing on the central point expansion image to obtain a central point pixel position of each central point expansion area in the central point expansion image, wherein the central point pixel position is used for representing a cell central position of a cell in the pathological image to be processed;
step S5: and counting the number of central point pixel positions in each central point expansion area group to obtain the number of cells under each cell type in the pathological image to be processed.
Optionally, the pathological image in step S1 comprises a first full pathological image or a first designated area in the first full pathological image; the pathological image to be processed in step S3 comprises a second full pathological image or a second designated area in the second full pathological image.
Optionally, before the training of the pre-constructed cell detection model in step S2, the method further includes:
for each pathological image, acquiring a label image corresponding to the pathological image, wherein the label image includes at least one target pixel point set, different target pixel point sets are used for representing different cell types, target pixel points in different target pixel point sets have different pixel values on the label image, the target pixel points are used for representing cell center positions of cells in the pathological image, and in the label image: the pixel values of other pixel points except the target pixel points are different from the pixel values of the target pixel points;
and expanding each target pixel point in the label image into a target area with a first preset size to obtain a target label image.
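The point-to-area expansion above can be sketched in a few lines. The following is an illustrative numpy/scipy sketch, not the patent's implementation: the function name, the square area shape, and the use of grey dilation are assumptions (the patent only requires expansion to a first preset size).

```python
import numpy as np
from scipy.ndimage import grey_dilation

def expand_center_points(label_image: np.ndarray, size: int) -> np.ndarray:
    """Expand every nonzero target pixel into a size x size square area.

    Grey dilation with a square structuring element keeps each point's
    class pixel value (1, 2, ...) while growing it into a small region;
    background (0) stays 0 wherever no labeled point is nearby. Note
    that if two differently-valued points overlap after expansion, the
    larger value wins - real label images keep points far enough apart.
    """
    return grey_dilation(label_image, size=(size, size))
```

For example, a label image with a class-1 center point and a class-2 center point, expanded with `size=3`, yields two 3x3 target areas carrying pixel values 1 and 2 respectively.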
Optionally, the step S2 specifically includes the following steps:
step S201: dividing each pathological image into a training set and a test set;
step S202: segmenting each pathological image included in the training set and a target label image corresponding to each pathological image to obtain a plurality of first pathological sub-images with second preset sizes and first target label sub-images with second preset sizes corresponding to each first pathological sub-image; segmenting each pathological image included in the test set and the target label image corresponding to each pathological image to obtain a plurality of second pathological sub-images with second preset sizes and second target label sub-images with second preset sizes corresponding to the second pathological sub-images;
step S203: training the pre-constructed cell detection model by using each first pathological sub-image and a first target label sub-image corresponding to each first pathological sub-image;
step S204: evaluating the trained cell detection model by using each second pathological sub-image and a second target label sub-image corresponding to each second pathological sub-image to obtain the sensitivity and the accuracy of the trained cell detection model;
step S205: judging whether the accuracy and the sensitivity are both greater than a preset threshold; if so, taking the trained cell detection model as the qualified cell detection model; otherwise, repeating steps S203, S204 and S205 until the qualified cell detection model is obtained.
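Steps S204 and S205 evaluate sensitivity and accuracy. The patent does not specify how predicted detections are matched to labels, so the following is only one common approach, sketched under stated assumptions: predicted cell centers are greedily matched to labeled centers of the same type within an assumed pixel radius (`max_dist`); sensitivity is the fraction of labeled cells matched, precision the fraction of predictions matched.

```python
import math

def match_centers(predicted, labeled, max_dist=8.0):
    """Greedily match predicted cell centers to labeled centers.

    predicted, labeled: lists of (x, y) tuples for one cell type.
    Returns (sensitivity, precision), where sensitivity = matched /
    len(labeled) and precision = matched / len(predicted). max_dist
    is an assumed matching radius in pixels; the patent does not fix
    a concrete matching rule.
    """
    remaining = list(labeled)
    matched = 0
    for px, py in predicted:
        best, best_d = None, max_dist
        for i, (lx, ly) in enumerate(remaining):
            d = math.hypot(px - lx, py - ly)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            remaining.pop(best)  # each labeled center matches at most once
            matched += 1
    sensitivity = matched / len(labeled) if labeled else 1.0
    precision = matched / len(predicted) if predicted else 1.0
    return sensitivity, precision
```

With these two numbers in hand, the threshold check of step S205 is a simple comparison against the preset value.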
Optionally, the step S3 specifically includes the following steps:
step S301: segmenting the pathological image to be processed to obtain at least one pathological sub-image to be processed with a second preset size;
step S302: inputting each pathological sub-image to be processed into the qualified cell detection model to obtain a central point expansion sub-image corresponding to the pathological sub-image to be processed, wherein the central point expansion sub-image comprises at least one central point expansion area group used for representing cell types, the pixel values of central point expansion areas in the central point expansion sub-image, which are included in different central point expansion area groups, are different, and the central point expansion area is used for representing the positions of cells in the pathological sub-image to be processed;
step S303: and splicing the central point expansion sub-images corresponding to the pathological sub-images to be processed according to the position information of the pathological sub-images to be processed in the pathological images to be processed to obtain the central point expansion images.
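Steps S301 and S303, splitting the image to be processed into sub-images of the second preset size and stitching the model outputs back by position, can be sketched as follows. The function name and the assumption that the image dimensions are multiples of the tile size are illustrative only; real code would pad the border tiles.

```python
import numpy as np

def tile_and_stitch(image, tile, model):
    """Split a (H, W) image into tile x tile sub-images, run `model`
    on each sub-image, and stitch the outputs back into a full-size
    output using each sub-image's position in the original image.

    Assumes H and W are multiples of `tile` for brevity.
    """
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            sub = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = model(sub)  # step S302 per tile
    return out
```

Because each output tile is written back at the coordinates its input tile came from, the stitched result is a central point expansion image aligned pixel-for-pixel with the pathological image to be processed.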
In a second aspect, an embodiment of the present application provides a cell detection apparatus based on a pathological image, including:
the acquiring module is used for acquiring a plurality of labeled pathology images, wherein the labels comprise, for at least one cell in each pathology image, a label of the cell center position and a label of the cell type;
the execution module is used for training and evaluating a pre-constructed cell detection model by using each pathological image to obtain a qualified cell detection model, wherein the pre-constructed cell detection model is a semantic segmentation model;
the detection module is used for acquiring a central point expansion image of a pathological image to be processed by using the qualified cell detection model, wherein the central point expansion image comprises at least one central point expansion area group, different central point expansion area groups are used for representing different cell types, and for each central point expansion area group, the central point expansion area group comprises at least one central point expansion area which is used for representing the position of a cell in the pathological image to be processed;
the processing module is used for carrying out spot detection processing on the central point expansion image to obtain a central point pixel position of each central point expansion area in the central point expansion image, wherein the central point pixel position is used for representing the cell central position of cells in the pathological image to be processed;
and the counting module is used for counting the number of the central point pixel positions in each central point expansion area group to obtain the number of the cells under each cell type in the pathological image to be processed.
Optionally, the pathology image includes a first full-scale pathology image or a first designated area in the first full-scale pathology image; the pathology image to be processed includes a second full-scale pathology image or a second designated area in the second full-scale pathology image.
Optionally, before being configured to train the pre-constructed cell detection model, the executing module is further configured to:
for each pathological image, acquiring a label image corresponding to the pathological image, wherein the label image includes at least one target pixel point set, different target pixel point sets are used for representing different cell types, target pixel points in different target pixel point sets have different pixel values on the label image, the target pixel points are used for representing cell center positions of cells in the pathological image, and in the label image: the pixel values of other pixel points except the target pixel points are different from the pixel values of the target pixel points;
and expanding each target pixel point in the label image into a target area with a first preset size to obtain a target label image.
Optionally, the execution module specifically includes:
a dividing unit for dividing each of the pathological images into a training set and a test set;
the first segmentation unit is used for segmenting each pathological image included in the training set and the target label image corresponding to each pathological image to obtain a plurality of first pathological sub-images with second preset sizes and first target label sub-images with second preset sizes corresponding to each first pathological sub-image; segmenting each pathological image included in the test set and the target label image corresponding to each pathological image to obtain a plurality of second pathological sub-images with second preset sizes and second target label sub-images with second preset sizes corresponding to the second pathological sub-images;
the training unit is used for training the pre-constructed cell detection model by using each first pathological sub-image and the first target label sub-image corresponding to each first pathological sub-image;
the evaluation unit is used for evaluating the trained cell detection model by using each second pathological sub-image and a second target label sub-image corresponding to each second pathological sub-image to obtain the sensitivity and the accuracy of the trained cell detection model;
a judging unit, configured to judge whether the accuracy and the sensitivity are both greater than a preset threshold, and if so, to take the trained cell detection model as the qualified cell detection model; otherwise, to repeat step S203 (training the pre-constructed cell detection model with each first pathological sub-image and its corresponding first target label sub-image), step S204 (evaluating the trained cell detection model with each second pathological sub-image and its corresponding second target label sub-image to obtain the model's sensitivity and accuracy) and step S205 (the threshold judgment itself) until the qualified cell detection model is obtained.
Optionally, the detection module specifically includes:
the second segmentation unit is used for segmenting the pathological image to be processed to obtain at least one pathological subimage to be processed with a second preset size;
the detection unit is used for inputting the pathological sub-image to be processed into the qualified cell detection model aiming at each pathological sub-image to be processed to obtain a central point expansion sub-image corresponding to the pathological sub-image to be processed, wherein the central point expansion sub-image comprises at least one central point expansion area group used for representing cell types, pixel values of central point expansion areas contained in different central point expansion area groups in the central point expansion sub-image are different, and the central point expansion areas are used for representing positions of cells in the pathological sub-image to be processed;
and the splicing unit is used for splicing the central point expansion sub-images corresponding to the pathological sub-images to be processed according to the position information of the pathological sub-images to be processed in the pathological images to be processed to obtain the central point expansion images.
In a third aspect, an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the cell detection method based on a pathological image according to any one of the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the cell detection method based on pathological images according to any one of the first aspect.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the cell detection method based on the pathological image, the obtained pathological images with labels are used for training and evaluating the preset cell detection model to obtain the qualified cell detection model, and the labels comprise the labels of cell types and the labels of cell center positions of cells in the pathological image, so that after the qualified cell detection model is obtained, the qualified cell detection model can be used for positioning the cells of different types in the pathological image to be processed to obtain a central point expansion image of the pathological image to be processed, and the central point expansion image is in the central point expansion image; a central point expansion area represents the position of a cell, and central point expansion areas corresponding to different types of cells are positioned in different central point expansion area groups; after the central point expansion image is obtained, carrying out spot detection processing on the central point expansion image by using a spot detection method to obtain central point pixel positions of each central point expansion area in the central point expansion image; because different central point expansion area groups in the central point expansion image represent different cell types, in order to determine the number of cells under each cell type, the number of the cells under the cell type represented by each central point expansion area group is determined by counting the number of the detected central point pixel positions in each central point expansion area group, so that the number of the cells under each cell type in the pathological image to be processed is obtained, and a pathologist is assisted in clinical diagnosis and scientific research; compare with the discernment mode of artifical naked eye resolution among the prior art, this application is whole to realize the location and the count of different types of cell through the server, 
is favorable to improving the location efficiency and the count efficiency of different types of cell when reducing the manpower burden.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a flowchart illustrating a cell detection method based on pathological images according to an embodiment of the present application;
FIG. 2 illustrates an example diagram of a pathological image provided by an embodiment of the present application;
fig. 3 is a diagram illustrating an example of a label image of a pathology image provided in an embodiment of the present application;
FIG. 4 is a diagram illustrating an example of a target label image of a pathology image provided in an embodiment of the present application;
fig. 5 is a diagram illustrating an example of a pathology image to be processed according to an embodiment of the present application;
fig. 6 illustrates an exemplary diagram of a pathological sub-image to be processed according to an embodiment of the present application;
fig. 7 illustrates an exemplary diagram of center point expanded sub-images of a pathology sub-image to be processed according to an embodiment of the present application;
fig. 8 is a diagram illustrating an example of a center point expansion image of a pathology image to be processed according to an embodiment of the present application;
fig. 9 is a schematic structural diagram illustrating a cell detection device based on pathological image according to a second embodiment of the present application;
fig. 10 shows a schematic structural diagram of a computer device provided in the third embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the present application provide a pathological image-based cell detection method, device, equipment and storage medium, described below through the embodiments.
Example one
Fig. 1 is a flowchart illustrating a cell detection method based on pathological images according to an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S101: acquiring a plurality of pathology images added with labels, wherein the labels comprise a label of a cell center position of at least one cell in the pathology images and a label of a cell type.
Specifically, each pathological image includes at least one cell (and thus at least one cell type). To train and evaluate the pre-constructed cell detection model, a plurality of labeled pathological images must first be obtained. For each pathological image, the labels comprise, for every cell in the image, a label of the cell center position and a label of the cell type. The cell center position is the position of the cell's center point; the cell type can be set according to the actual situation, for example negative cells and positive cells, and is not specifically limited here. The position and type of every cell in a pathological image can therefore be determined from its labels.
The pathological image is a stained image: the type and center position of a cell are more easily determined from the staining characteristics of different cell types, which include the morphology of the stained cells, the staining intensity, and so on. The pathological image may be a stained image of any immunohistochemical molecular marker with any expression pattern, for example cell membrane expression (PD-L1, HER2, CD3, CD20, CD79a, etc.), cytoplasmic expression (AE1/AE3, CK5/6, CK19, etc.) or nuclear expression (ER, PR, Ki-67, TdT, CyclinD1, etc.).
It should also be noted that the acquired pathological image may be an RGB three-channel image, whose channels represent red, green and blue respectively.
It should be noted that the labeling method can be chosen according to the actual situation. For example, the pathological image can be imported into pathological-image labeling software, and frames of different colors or different shapes can be used to select the center points of cells of different categories, where different colors (or different shapes) represent different cell categories and the center of a frame indicates the center point of a cell (i.e., the cell center position). The labeling software can automatically record the center position and category of each cell and store them in a file. The labeling method is not specifically limited here.
By way of example, fig. 2 shows a pathological image provided in an embodiment of the present application. As shown in fig. 2, the image is stained for PD-L1 (Programmed Death Ligand-1), a protein marker used in immunohistochemical detection; its staining characteristic is cell membrane staining, i.e., it is a membrane-positively expressed molecular marker. After a tissue section is stained with immunohistochemical PD-L1, cell nuclei in the tissue appear blue and cell membranes appear brown. In the first embodiment of the present application, the cells in a pathological image are divided by staining characteristics into negative cells, whose cell membranes are not stained, and positive cells, whose cell membranes are stained brown.
Step S102: and training and evaluating a pre-constructed cell detection model by using each pathological image to obtain a qualified cell detection model, wherein the pre-constructed cell detection model is a semantic segmentation model.
Specifically, after the plurality of labeled pathological images are acquired, the pre-constructed cell detection model (a semantic segmentation model) is trained with each of them, the trained model is then evaluated, and a qualified cell detection model is obtained through this training-and-evaluation procedure.
It should be noted that, for the structure of the pre-constructed cell probing model, the structure may be an existing semantic segmentation model structure, such as FCN (full Convolution Network), U-Net, deep lab, etc., or a model structure improved based on the existing semantic segmentation model structure, such as: the improved U-Net model structure is added with a residual block on the basis of the original U-Net.
Step S103: and acquiring a central point expansion image of the pathological image to be processed by using the qualified cell detection model, wherein the central point expansion image comprises at least one central point expansion area group, different central point expansion area groups are used for representing different cell types, and for each central point expansion area group, the central point expansion area group comprises at least one central point expansion area which is used for representing the position of a cell in the pathological image to be processed.
Specifically, after the qualified cell detection model is obtained, it can be used to process the pathological image to be processed, that is, to acquire the central point expansion image of the pathological image to be processed, where the pathological image to be processed is an unlabeled image containing at least one cell (and thus at least one cell type).
It should be noted that central point expansion areas belonging to different central point expansion area groups have different pixel values in the central point expansion image. For example, suppose the central point expansion image includes two central point expansion area groups: the first group contains two central point expansion areas, A and B, both with pixel value 1 in the central point expansion image, and the second group contains one central point expansion area, C, with pixel value 2; the pixel value of the area outside all central point expansion area groups is 0.
It should be noted again that the representation form of the expanded center point region in the expanded center point image may be a solid dot or a solid box, and the specific representation form of the expanded center point region is not specifically limited herein.
By way of example: the pathological image to be processed includes cell A, cell B, cell C, cell D and cell E, where cells A, B and C are negative cells and cells D and E are positive cells. The central point expansion image of this image then includes two central point expansion area groups: the first group represents negative cells and contains three central point expansion areas indicating the positions of cells A, B and C; the second group represents positive cells and contains two central point expansion areas indicating the positions of cells D and E.
It should be noted that, when the pathological image to be processed contains cells of only one cell type, its central point expansion image includes only one central point expansion area group.
Step S104: and carrying out spot detection processing on the central point expansion image to obtain a central point pixel position of each central point expansion area in the central point expansion image, wherein the central point pixel position is used for representing the cell central position of cells in the pathological image to be processed.
Specifically, for the obtained center point pixel position of each center point expansion region, the center point pixel position refers to a position of a pixel point corresponding to the center point of the center point expansion region in the center point expansion image, and one center point pixel position is used for representing a cell center position of one cell in the image to be processed.
It should be noted that spot (blob) detection finds regions of an image whose pixel values differ from their surroundings. The algorithm used may be set according to the actual situation: it may, for example, be the Laplacian of Gaussian (LoG) algorithm among derivative-based differential methods, which convolves the image with a Gaussian filter and a Laplacian filter to detect blobs, or a watershed algorithm based on local extrema; the specific spot detection algorithm is not limited herein.
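The LoG variant can be sketched as follows. This is an illustrative NumPy/SciPy implementation under assumed parameters (sigma, relative peak threshold), not the patent's actual detector: bright blobs give a strongly negative LoG response, so the response is negated and its strong local maxima are taken as the central point pixel positions.

```python
import numpy as np
from scipy import ndimage

def log_blob_centers(img, sigma=2.0, rel_thresh=0.3):
    # Laplacian-of-Gaussian response, negated so bright blobs peak positively
    resp = -ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
    win = 2 * int(3 * sigma) + 1
    # keep local maxima that reach at least rel_thresh of the strongest response
    peaks = (resp == ndimage.maximum_filter(resp, size=win)) \
        & (resp >= rel_thresh * resp.max())
    return [tuple(p) for p in np.argwhere(peaks)]
```

Applied to a central point expansion image whose areas are solid dots, each dot yields one peak at its center, which is exactly the center point pixel position sought in step S104.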
Step S105: and counting the number of central point pixel positions in each central point expansion area group to obtain the number of cells under each cell type in the pathological image to be processed.
Specifically, after the central point pixel position of each central point expansion region in the central point expansion image is obtained, for each central point expansion region group, the number of the central point pixel positions of the central point expansion regions included in the central point expansion region group is counted, so that the number is used as the number of cells in the cell category represented by the central point expansion region group.
For example, the cell classes in the pathological image to be processed include positive cells and negative cells, and the central point expansion image of this image includes two central point expansion area groups, group A and group B, where group A represents positive cells and group B represents negative cells. Group A includes 2 central point expansion areas, i.e., the number of central point pixel positions in group A is 2, so the number of positive cells is 2; group B includes 3 central point expansion areas, i.e., the number of central point pixel positions in group B is 3, so the number of negative cells is 3.
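Since each connected central point expansion area corresponds to one cell and each group is encoded by a distinct pixel value, the per-class count can be sketched with connected-component labeling (an illustrative SciPy approach; the patent does not prescribe a specific implementation):

```python
import numpy as np
from scipy import ndimage

def count_cells_per_class(expansion_img, class_values=(1, 2)):
    # one connected central point expansion area = one cell, so the per-class
    # cell count is the number of connected components at that pixel value
    counts = {}
    for v in class_values:
        _, n_regions = ndimage.label(expansion_img == v)
        counts[v] = n_regions
    return counts
```

For an expansion image with three value-1 areas and two value-2 areas, this returns {1: 3, 2: 2}, matching the counting in the example above.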
To summarize: first, a pre-constructed cell detection model is trained and evaluated using a plurality of acquired labeled pathological images, where the labels mark the cell type and the cell center position of each cell in the pathological images, to obtain a qualified cell detection model. The qualified model is then used to locate cells of different types in the pathological image to be processed, yielding its central point expansion image, in which one central point expansion area represents the position of one cell and the central point expansion areas of different cell types belong to different central point expansion area groups. Spot detection is then applied to the central point expansion image to obtain the central point pixel position of each central point expansion area. Finally, because different central point expansion area groups represent different cell types, the number of detected central point pixel positions in each group is counted to obtain the number of cells of each cell type in the pathological image to be processed, thereby assisting pathologists in clinical diagnosis and scientific research. Compared with identification by the naked eye in the prior art, the present application locates and counts cells of different types entirely on a server, which improves the efficiency of locating and counting cells of different types while reducing the manual burden.
In another possible embodiment, after performing step S105, the method further includes: in the pathology image to be processed, the ratio of the number of cells under each cell class to the total number of all cells is determined.
In a possible embodiment, the pathological image in step S101 includes the first full pathological image or the first designated area in the first full pathological image; the pathology image to be processed in step S103 includes the second full-size pathology image or the second designated area in the second full-size pathology image.
Specifically, the pathology image with the added label acquired in step S101 may be a whole pathology image (i.e., a first whole pathology image), or may be a designated area in the whole pathology image (i.e., a first designated area); similarly, the pathological image to be processed in step S103 may be a whole pathological image (i.e., a second whole pathological image) or a designated area (i.e., a second designated area) in the whole pathological image.
A full pathological image is a whole-slide image (WSI, a full-field digital slice image), acquired as follows: a pathological tissue section is continuously scanned and imaged block by block by a scanning device such as a digital scanner, and the blocks are stitched into a complete whole-slide image by image compression and storage software. A designated area is an area framed within the full pathological image; it should be noted that the area may be framed according to the aggregation tendency of cells in the full pathological image or according to the interest of the relevant personnel, and is not specifically limited herein.
In a possible embodiment, before the training of the pre-constructed cell detection model in step S102 is performed, the following operations are performed:
for each pathological image, acquiring a label image corresponding to the pathological image, wherein the label image includes at least one target pixel point set, different target pixel point sets are used for representing different cell types, target pixel points in different target pixel point sets have different pixel values on the label image, the target pixel points are used for representing cell center positions of cells in the pathological image, and in the label image: the pixel values of other pixel points except the target pixel points are different from the pixel values of the target pixel points.
Specifically, after acquiring a plurality of labeled pathological images, in order to train and evaluate a pre-constructed cell detection model, a target label image corresponding to each pathological image needs to be acquired, where the target label image is obtained by performing expansion processing on target pixel points in the label image, so that the label image of each pathological image needs to be acquired first, and for each pathological image, the label image of the pathological image has the following characteristics:
First, the height and width of the label image are the same as those of the pathological image; for example, if the pathological image has height H and width W, its label image also has height H and width W. Second, the positions of pixel points in the label image correspond one-to-one to the positions of pixel points in the pathological image. Third, the label image is a single-channel image. Fourth, the label image includes at least one target pixel point set, one set representing one cell type; target pixel points in different sets have different pixel values in the label image, and the other pixel points represent the image background, so their pixel value differs from that of every target pixel point. For example, if a pathological image contains two cell types, negative cells and positive cells, its label image includes two target pixel point sets, one representing negative cells and one representing positive cells, where each target pixel point representing a positive cell has pixel value 2, each target pixel point representing a negative cell has pixel value 1, and all other (background) pixel points have pixel value 0. Fifth, each target pixel point set includes at least one target pixel point, and one target pixel point represents the cell center position of one cell, that is: the target pixel points are the pixel points of the label image corresponding to the cell center positions of the cells.
And expanding each target pixel point in the label image into a target area with a first preset size to obtain a target label image.
After the label image of each pathological image is obtained, the target pixel points in each label image may be expanded to obtain the corresponding target label image. Specifically, for the label image of each pathological image, each target pixel point is expanded into a target area of a first preset size, where the first preset size should not exceed the average cell size; for example, the first preset size may be half the cell size, and the average cell size may be set from human experience. The shape of the target area may be set according to the actual situation, for example a solid dot or a solid box, and the specific shape is not limited herein.
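The expansion of labeled center points into solid dots can be sketched as follows (an illustrative NumPy implementation; the function name, the (row, col, class_value) center format and the disk shape are assumptions for illustration):

```python
import numpy as np

def expand_centers(height, width, centers, radius):
    # centers: iterable of (row, col, class_value); each labeled center point
    # is dilated into a solid disk carrying its class value, background stays 0
    label = np.zeros((height, width), dtype=np.uint8)
    yy, xx = np.mgrid[:height, :width]
    for r, c, v in centers:
        label[(yy - r) ** 2 + (xx - c) ** 2 <= radius ** 2] = v
    return label
```

The result is a single-channel target label image in which, as in the example above, positive-cell regions carry value 2, negative-cell regions value 1, and the background value 0.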
For example, fig. 3 shows an exemplary diagram of a label image of a pathological image provided in an embodiment of the present application. As shown in fig. 3, the label image corresponds to the pathological image shown in fig. 2; in this label image, each target pixel point representing a negative cell has pixel value 1, each target pixel point representing a positive cell has pixel value 2, and all other pixel points have pixel value 0. Fig. 4 shows an exemplary diagram of a target label image of a pathological image according to the first embodiment of the present application. As shown in fig. 4, the target label image is obtained by expanding each target pixel point of the label image in fig. 3 into a solid dot of the first preset size; in the target label image, the expanded region of each target pixel point representing a negative cell has pixel value 1, the expanded region of each target pixel point representing a positive cell has pixel value 2, and all other pixel points have pixel value 0.
In a possible embodiment, when the step S102 is executed, the following steps are specifically implemented:
step S201: each of the pathology images is divided into a training set and a test set.
It should be noted that the division ratio between the training set and the test set may be set according to the actual situation; for example, the pathological images may be divided so that training set : test set = 8:2, or training set : test set = 6:4. The specific division ratio is not limited herein.
Step S202: segmenting each pathological image included in the training set and a target label image corresponding to each pathological image to obtain a plurality of first pathological sub-images with second preset sizes and first target label sub-images with second preset sizes corresponding to each first pathological sub-image; and segmenting each pathological image included in the test set and the target label image corresponding to each pathological image to obtain a plurality of second pathological sub-images with second preset sizes and second target label sub-images with second preset sizes corresponding to each second pathological sub-image.
Specifically, for each pathological image included in the training set, the pathological image and its corresponding target label image are segmented to obtain a plurality of first pathological sub-images of the second preset size, each first pathological sub-image having a unique corresponding first target label sub-image of the second preset size; similarly, for each pathological image included in the test set, the pathological image and its corresponding target label image are segmented to obtain a plurality of second pathological sub-images of the second preset size, each second pathological sub-image having a unique corresponding second target label sub-image of the second preset size.
Taking any pathological image in the training set or the test set as an example: pathological image 1 has a size of 768 × 256 pixels, its corresponding target label image is also 768 × 256 pixels, and the second preset size is 256 × 256 pixels. Cutting pathological image 1 according to the second preset size yields three pathological sub-images, arranged from left to right as pathological sub-image 1, pathological sub-image 2 and pathological sub-image 3; cutting the corresponding target label image in the same way yields three target label sub-images, arranged from left to right as target label sub-image 1, target label sub-image 2 and target label sub-image 3. The target label sub-image uniquely corresponding to pathological sub-image 1 is target label sub-image 1, that corresponding to pathological sub-image 2 is target label sub-image 2, and that corresponding to pathological sub-image 3 is target label sub-image 3.
It should be noted that the second predetermined size may be set according to practical situations, for example, the second predetermined size is 256 × 256 pixels in the first embodiment of the present application, and the second predetermined size is not specifically limited herein.
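The segmentation into fixed-size sub-images can be sketched as follows (an illustrative NumPy helper; dimensions are assumed to be exact multiples of the patch size, as in the 768 × 256 example above):

```python
import numpy as np

def tile(img, size=256):
    # split an image into non-overlapping size x size patches, row-major order
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]
```

Applying the same tiling to a pathological image and its target label image yields pairs of sub-images in the same order, preserving the one-to-one correspondence described above.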
It should be noted that, before the pre-constructed cell detection model is trained, the first target label sub-image corresponding to each first pathological sub-image is converted into one-hot encoding format.
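The one-hot conversion of a single-channel class-index label sub-image can be sketched as (an illustrative NumPy one-liner; the channel layout (H, W, C) is an assumption):

```python
import numpy as np

def to_one_hot(label_img, num_classes):
    # (H, W) integer class map -> (H, W, num_classes) one-hot tensor;
    # fancy indexing picks the identity row for each pixel's class index
    return np.eye(num_classes, dtype=np.float32)[label_img]
```

For the three classes used in the examples (background 0, negative 1, positive 2), each pixel becomes a length-3 vector with a single 1 in the channel of its class.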
Step S203: and training the pre-constructed cell detection model by using each first pathological sub-image and the first target label sub-image corresponding to each first pathological sub-image.
Specifically, each first pathological sub-image is used as an input image of the pre-constructed cell detection model, which processes it to produce a predicted image; training is achieved by gradually reducing the loss between each first target label sub-image and the predicted image of the corresponding first pathological sub-image.
In a possible embodiment, the hyper-parameters, loss function and optimization algorithm used by the pre-constructed cell detection model during training can be set, for example: the batch size is set to 32, the learning rate to 0.001, the maximum number of iterations to 200 epochs (iteration rounds), the loss function to categorical cross-entropy, and the optimization algorithm to the Adam algorithm.
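Assuming the loss is the standard categorical cross-entropy over one-hot targets and per-pixel class probabilities (the original text's loss name appears garbled in translation), it can be written out as:

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-7):
    # mean over pixels of -sum_c t_c * log(p_c); eps guards against log(0)
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=-1)))
```

A perfect prediction gives a loss of 0, and a uniform two-class prediction gives log 2 ≈ 0.693, which is the usual sanity check for this loss.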
Step S204: and evaluating the trained cell detection model by using each second pathological sub-image and the second target label sub-image corresponding to each second pathological sub-image to obtain the sensitivity and the accuracy of the trained cell detection model.
Step S205: judging whether the accuracy and the sensitivity are both greater than a preset threshold value, and if so, taking the trained cell detection model as the qualified cell detection model; otherwise, the step S203, the step S204 and the step S205 are repeatedly executed until the qualified cell probing model is obtained.
Specifically, each second pathological sub-image is input into the trained cell detection model to obtain a corresponding central point expansion sub-image, which includes at least one central point expansion area representing the position information and category information of the cells in that second pathological sub-image. From the central point expansion sub-image, the first position information and the first number of cells in the second pathological sub-image are determined; from the corresponding second target label sub-image, the second position information and the second number of cells are determined. The cells represented by the first position information are taken as first cells and the cells represented by the second position information as second cells. According to the first position information of each first cell and the second position information of each second cell, target cells are determined among the first cells, a target cell being a first cell for which there exists a second cell at less than a preset distance. The ratio of the target number of target cells to the first number of first cells gives the accuracy of the trained cell detection model, and the ratio of the target number of target cells to the second number of second cells gives its sensitivity. It is then judged whether the accuracy and the sensitivity are both greater than the preset threshold: if both exceed the threshold, the trained cell detection model is taken as the qualified cell detection model; otherwise, step S203, step S204 and step S205 are repeated in sequence until a qualified cell detection model is obtained.
It should be noted that the value of the preset threshold may be set according to actual situations, for example: in the first embodiment of the present application, both the sensitivity and the accuracy of the qualified cell detection model are greater than 85%, and the magnitude of the relevant preset threshold is not specifically limited herein.
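The distance-based matching of predicted centers to ground-truth centers can be sketched as follows. This is an illustrative greedy implementation under assumed conventions (a match radius max_dist, nearest-unmatched assignment); the patent only requires that a matched second cell lie within the preset distance:

```python
import numpy as np

def match_metrics(pred_centers, true_centers, max_dist=5.0):
    # greedily match each predicted center to the nearest unmatched
    # ground-truth center; matches within max_dist count as target cells
    remaining = list(true_centers)
    matched = 0
    for p in pred_centers:
        if not remaining:
            break
        dists = [np.hypot(p[0] - t[0], p[1] - t[1]) for t in remaining]
        i = int(np.argmin(dists))
        if dists[i] < max_dist:
            matched += 1
            remaining.pop(i)
    accuracy = matched / len(pred_centers) if pred_centers else 0.0      # matched / first number
    sensitivity = matched / len(true_centers) if true_centers else 0.0   # matched / second number
    return accuracy, sensitivity
```

With three predicted centers of which two fall near the two ground-truth centers, this yields an accuracy of 2/3 and a sensitivity of 1, mirroring the ratios defined above.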
In a possible embodiment, when the step S103 is executed, the following steps are specifically implemented:
step S301: and segmenting the pathological image to be processed to obtain at least one pathological sub-image to be processed with a second preset size.
It should be noted that the resolution value of the pathological image to be processed is the same as the resolution value of the pathological image included in the training set, and the value of the second preset size in step S301 is the same as the value of the second preset size in step S202.
Step S302: and inputting the pathological sub-image to be processed into the qualified cell detection model aiming at each pathological sub-image to be processed to obtain a central point expansion sub-image corresponding to the pathological sub-image to be processed, wherein the central point expansion sub-image comprises at least one central point expansion area group used for representing cell types, the central point expansion area groups comprise central point expansion areas with different pixel values in the central point expansion sub-image, and the central point expansion area is used for representing the position of cells in the pathological sub-image to be processed.
Specifically, each pathological sub-image to be processed is input into the qualified cell detection model, which outputs a corresponding pixel probability map. The length and width of the pixel probability map are the same as those of the pathological sub-image to be processed, and the map includes at least two image channels, different channels representing different pixel point categories; the pixel point categories include one category representing the background and one category for each cell type. Each pixel point in the pixel probability map therefore corresponds to at least two probability values, one per image channel and as many as there are channels, whose sum is 1; the value of a pixel point under a given image channel represents the probability that the corresponding pixel point of the pathological sub-image to be processed belongs to the pixel point category represented by that channel. For each pixel point, the number of the image channel with the largest probability value is taken as the pixel value of that point in the central point expansion sub-image, thereby obtaining the central point expansion sub-image corresponding to the pathological sub-image to be processed; this sub-image is a single-channel image in which pixel points of different pixel point categories have different pixel values.
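The per-pixel channel argmax described above can be sketched as (an illustrative NumPy helper; the channel-first (C, H, W) layout and channel 0 being the background are assumptions):

```python
import numpy as np

def probs_to_class_map(prob_map):
    # prob_map: (num_channels, H, W); the pixel value in the central point
    # expansion sub-image is the index of the most probable channel
    return np.argmax(prob_map, axis=0).astype(np.uint8)
```

Each pixel of the resulting single-channel map carries the index of its winning channel, which matches the pixel-value encoding of the central point expansion area groups.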
By way of example, fig. 5 shows an exemplary diagram of a pathological image to be processed provided in an embodiment of the present application; as shown in fig. 5, it is a PD-L1 stained pathological image containing two types of cells, negative cells and positive cells. Fig. 6 shows an exemplary diagram of a pathological sub-image to be processed, obtained by segmenting the pathological image to be processed in fig. 5. Fig. 7 shows an exemplary diagram of the central point expansion sub-image of that pathological sub-image according to the first embodiment of the present application. As shown in fig. 7, the central point expansion sub-image includes two central point expansion area groups: a first group composed of gray dots, representing positive cells, and a second group composed of white dots, representing negative cells. Each gray dot in the first group represents the position of a positive cell in the pathological sub-image to be processed and has pixel value 2 in the central point expansion sub-image; each white dot in the second group represents the position of a negative cell and has pixel value 1; every other pixel point, representing the background, has pixel value 0.
Step S303: and splicing the central point expansion sub-images corresponding to the pathological sub-images to be processed according to the position information of the pathological sub-images to be processed in the pathological images to be processed to obtain the central point expansion images.
For example, the pathological image to be processed corresponds to three pathological sub-images to be processed: pathological sub-image 1, pathological sub-image 2 and pathological sub-image 3, located at the left, the middle and the right of the pathological image to be processed, respectively. When the central point expansion sub-images are stitched, the sub-image corresponding to pathological sub-image 1 is placed on the left, that corresponding to pathological sub-image 2 in the middle, and that corresponding to pathological sub-image 3 on the right, thereby obtaining the central point expansion image corresponding to the pathological image to be processed. Fig. 8 shows an exemplary diagram of a central point expansion image of a pathological image to be processed provided in an embodiment of the present application; as shown in fig. 8, it is the central point expansion image of the pathological image to be processed shown in fig. 5.
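The stitching by sub-image position can be sketched as follows (an illustrative NumPy helper; the (row, col) offset convention is an assumption, and the offsets would be recorded during the tiling of step S301):

```python
import numpy as np

def stitch(sub_images, offsets, full_shape):
    # paste each central point expansion sub-image back at its (row, col)
    # offset to rebuild the full central point expansion image
    out = np.zeros(full_shape, dtype=sub_images[0].dtype)
    for img, (r, c) in zip(sub_images, offsets):
        h, w = img.shape[:2]
        out[r:r + h, c:c + w] = img
    return out
```

For the left/middle/right example above, the three sub-images are placed at column offsets 0, W and 2W of a canvas the size of the original image.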
Embodiment Two
Fig. 9 is a schematic structural diagram of a cell detection device based on a pathological image according to a second embodiment of the present application, and as shown in fig. 9, the cell detection device based on a pathological image includes:
an obtaining module 501, configured to obtain a plurality of pathology images to which labels have been added, where the labels include a label of a cell center position of at least one cell in the pathology images and a label of a cell type;
an executing module 502, configured to train and evaluate a pre-constructed cell detection model using each pathological image to obtain a qualified cell detection model, where the pre-constructed cell detection model is a semantic segmentation model;
a detection module 503, configured to obtain a central point expansion image of the pathological image to be processed by using the qualified cell detection model, where the central point expansion image includes at least one central point expansion area group, different central point expansion area groups are used to represent different cell types, and for each central point expansion area group, the central point expansion area group includes at least one central point expansion area, and the central point expansion area is used to represent a location of a cell in the pathological image to be processed;
a processing module 504, configured to perform spot detection processing on the central point expansion image to obtain a central point pixel position of each central point expansion area in the central point expansion image, where the central point pixel position is used to represent a cell center position of a cell in the pathological image to be processed;
and a counting module 505, configured to count the number of central point pixel positions in each central point expansion region group, so as to obtain the number of cells in each cell category in the pathological image to be processed.
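As an editorial sketch of what the processing module 504 and counting module 505 do (not the patent's own implementation), the spot detection and per-type counting can be approximated with connected-component labeling: each central point expansion area is reduced to its centroid pixel position, and positions are counted per cell type. The pixel values 1 and 2 for the two cell types are assumptions:

```python
import numpy as np
from scipy import ndimage

# A toy central point expansion image: pixel value 1 marks expansion areas of
# one cell type, value 2 marks a second type, 0 is background.
img = np.zeros((8, 8), dtype=np.uint8)
img[1:3, 1:3] = 1   # one expansion area of cell type 1
img[1:3, 5:7] = 1   # a second expansion area of cell type 1
img[5:7, 5:7] = 2   # one expansion area of cell type 2

counts = {}
for cell_type in (1, 2):
    mask = img == cell_type
    labeled, n_regions = ndimage.label(mask)
    # Centroids stand in for the center point pixel positions of step S4.
    centroids = ndimage.center_of_mass(mask, labeled, range(1, n_regions + 1))
    counts[cell_type] = len(centroids)

print(counts)  # {1: 2, 2: 1}
```

The patent itself only specifies "spot detection processing"; a production system might instead use a dedicated blob detector, but the per-group counting of detected center positions is the same.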
In a possible embodiment, the pathology image comprises a first full pathology image or a first designated area in the first full pathology image; the pathology image to be processed comprises a second full pathology image or a second designated area in the second full pathology image.
In a possible embodiment, the executing module 502 is further configured to, before being configured to train the pre-constructed cell detection model:
for each pathological image, acquiring a label image corresponding to the pathological image, wherein the label image includes at least one target pixel point set, different target pixel point sets are used for representing different cell types, target pixel points in different target pixel point sets have different pixel values on the label image, the target pixel points are used for representing cell center positions of cells in the pathological image, and in the label image: the pixel values of other pixel points except the target pixel points are different from the pixel values of the target pixel points;
and expanding each target pixel point in the label image into a target area with a first preset size to obtain a target label image.
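The expansion of each annotated center pixel into a target area of the first preset size can be sketched as follows. This is an editorial illustration under assumptions: a 3x3 square target area and a single-channel label image; the patent does not fix the area shape, and a real implementation might use a disk-shaped kernel with a morphological dilation instead:

```python
import numpy as np

# Label image: each nonzero pixel marks an annotated cell center, and its
# value encodes the cell type (here one center of type 1).
label_image = np.zeros((7, 7), dtype=np.uint8)
label_image[3, 3] = 1

half = 1  # (first preset size - 1) // 2, i.e. a 3x3 target area
target_label_image = np.zeros_like(label_image)
for r, c in zip(*np.nonzero(label_image)):
    # Expand the center pixel into a square area, clipped at the borders,
    # carrying the cell-type value into the whole area.
    r0, r1 = max(r - half, 0), min(r + half + 1, label_image.shape[0])
    c0, c1 = max(c - half, 0), min(c + half + 1, label_image.shape[1])
    target_label_image[r0:r1, c0:c1] = label_image[r, c]

print(int(target_label_image.sum()))  # 9
```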
In a possible implementation, the executing module 502 specifically includes:
a dividing unit for dividing each of the pathological images into a training set and a test set;
the first segmentation unit is used for segmenting each pathological image included in the training set and the target label image corresponding to each pathological image to obtain a plurality of first pathological sub-images with second preset sizes and first target label sub-images with second preset sizes corresponding to each first pathological sub-image; segmenting each pathological image included in the test set and the target label image corresponding to each pathological image to obtain a plurality of second pathological sub-images with second preset sizes and second target label sub-images with second preset sizes corresponding to the second pathological sub-images;
the training unit is used for training the pre-constructed cell detection model by using each first pathological sub-image and the first target label sub-image corresponding to each first pathological sub-image;
the evaluation unit is used for evaluating the trained cell detection model by using each second pathological sub-image and a second target label sub-image corresponding to each second pathological sub-image to obtain the sensitivity and the accuracy of the trained cell detection model;
a judging unit, configured to judge whether the accuracy and the sensitivity are both greater than a preset threshold; if both are greater than the preset threshold, the trained cell detection model is used as the qualified cell detection model; otherwise, step S203 (training the pre-constructed cell detection model by using each first pathological sub-image and the first target label sub-image corresponding to each first pathological sub-image), step S204 (evaluating the trained cell detection model by using each second pathological sub-image and the second target label sub-image corresponding to each second pathological sub-image to obtain the sensitivity and the accuracy of the trained cell detection model) and step S205 (judging whether the accuracy and the sensitivity are both greater than the preset threshold) are repeated until the qualified cell detection model is obtained.
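The train-evaluate-judge loop of steps S203 to S205 can be sketched as below. This is an editorial sketch: `train_epoch` and `evaluate` are hypothetical stand-ins for training and evaluating the semantic segmentation model, and the 0.9 threshold and `max_rounds` cap are assumptions (the patent specifies only "a preset threshold"):

```python
def train_until_qualified(train_epoch, evaluate, threshold=0.9, max_rounds=100):
    """Repeat steps S203-S205 until sensitivity and accuracy both exceed
    the preset threshold, i.e. until a qualified model is obtained."""
    for _ in range(max_rounds):
        train_epoch()                      # step S203: train the model
        sensitivity, accuracy = evaluate() # step S204: evaluate on test set
        if sensitivity > threshold and accuracy > threshold:  # step S205
            return True                    # qualified cell detection model
    return False                           # gave up after max_rounds

# Toy stand-ins in which the metrics improve with each round of training:
state = {"score": 0.5}
def train_epoch():
    state["score"] += 0.1
def evaluate():
    return state["score"], state["score"]

print(train_until_qualified(train_epoch, evaluate))  # True
```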
In a possible embodiment, the detection module 503 specifically includes:
the second segmentation unit is used for segmenting the pathological image to be processed to obtain at least one pathological subimage to be processed with a second preset size;
the detection unit is used for inputting the pathological sub-image to be processed into the qualified cell detection model aiming at each pathological sub-image to be processed to obtain a central point expansion sub-image corresponding to the pathological sub-image to be processed, wherein the central point expansion sub-image comprises at least one central point expansion area group used for representing cell types, pixel values of central point expansion areas contained in different central point expansion area groups in the central point expansion sub-image are different, and the central point expansion areas are used for representing positions of cells in the pathological sub-image to be processed;
and the splicing unit is used for splicing the central point expansion sub-images corresponding to the pathological sub-images to be processed according to the position information of the pathological sub-images to be processed in the pathological images to be processed to obtain the central point expansion images.
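The segmentation of the pathological image to be processed into sub-images of the second preset size, with positions recorded for the later splicing, can be sketched as follows. This is an editorial illustration under assumptions: a 4x4 tile size and an image whose dimensions divide evenly by the tile size (a real implementation would pad or handle partial tiles at the borders):

```python
import numpy as np

# Stand-in for the pathological image to be processed.
image = np.arange(8 * 12).reshape(8, 12)
tile = 4  # second preset size

# Cut into sub-images, keyed by their top-left position in the original
# image so the central point expansion sub-images can be spliced back.
sub_images = {}
for r in range(0, image.shape[0], tile):
    for c in range(0, image.shape[1], tile):
        sub_images[(r, c)] = image[r:r + tile, c:c + tile]

# Splicing back by position reproduces the original layout.
restored = np.zeros_like(image)
for (r, c), sub in sub_images.items():
    restored[r:r + sub.shape[0], c:c + sub.shape[1]] = sub

print(len(sub_images), bool((restored == image).all()))  # 6 True
```

In the device, each sub-image would be passed through the qualified cell detection model before splicing; here the sub-images themselves are spliced back only to show that position-keyed tiling is lossless.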
The apparatus provided in the embodiments of the present application may be specific hardware on a device, or software or firmware installed on a device, or the like. The apparatus provided in the embodiments of the present application has the same implementation principle and technical effects as the foregoing method embodiments; for the sake of brief description, for matters not mentioned in the apparatus embodiments, reference may be made to the corresponding contents in the foregoing method embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
According to the pathological image-based cell detection method provided by the embodiments of the present application, the obtained labeled pathological images are used to train and evaluate the pre-constructed cell detection model to obtain a qualified cell detection model. Because the labels include labels of cell types and labels of cell center positions of cells in the pathological images, once the qualified cell detection model is obtained, it can be used to locate cells of different types in the pathological image to be processed, yielding a central point expansion image of the pathological image to be processed. In the central point expansion image, one central point expansion area represents the location of one cell, and central point expansion areas corresponding to different types of cells belong to different central point expansion area groups. After the central point expansion image is obtained, spot detection processing is performed on it to obtain the central point pixel position of each central point expansion area. Because different central point expansion area groups represent different cell types, the number of cells under each cell type is determined by counting the detected central point pixel positions in each group, thereby obtaining the number of cells under each cell type in the pathological image to be processed and assisting pathologists in clinical diagnosis and scientific research. Compared with the manual naked-eye identification in the prior art, the present application performs the locating and counting of different types of cells entirely on a server, which helps to improve the locating efficiency and counting efficiency of different types of cells while reducing the labor burden.
EXAMPLE III
Fig. 10 is a schematic structural diagram of a computer device 600 provided in the third embodiment of the present application. As shown in Fig. 10, the device includes a memory 601, a processor 602, and a computer program stored in the memory 601 and executable on the processor 602, where the processor 602 implements the steps of the above pathological image-based cell detection method when executing the computer program.
Specifically, the memory 601 and the processor 602 can be general memories and processors, which are not limited in particular, and when the processor 602 runs a computer program stored in the memory 601, the pathological image-based cell detection method can be executed, which is beneficial to improving the positioning efficiency and counting efficiency of different types of cells while reducing the labor burden.
Example four
The embodiment of the present application also provides a computer-readable storage medium, which stores a computer program, and the computer program is executed by a processor to execute the steps of the above cell detection method based on pathological images.
Specifically, the storage medium can be a general storage medium, such as a removable disk, a hard disk, and the like; when the computer program on the storage medium is executed, the above pathological image-based cell detection method can be performed, which is beneficial to improving the locating efficiency and counting efficiency of different types of cells while reducing the labor burden.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that like reference numbers and letters refer to like items in the following figures; therefore, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another, and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field can still, within the technical scope disclosed in the present application, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are all intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A cell detection method based on pathological images, characterized in that the method comprises the following steps:
step S1: acquiring a plurality of pathology images added with labels, wherein the labels comprise a label of a cell center position and a label of a cell type of at least one cell in the pathology images;
step S2: training and evaluating a pre-constructed cell detection model by using each pathological image to obtain a qualified cell detection model, wherein the pre-constructed cell detection model is a semantic segmentation model;
step S3: acquiring a central point expansion image of a pathological image to be processed by using the qualified cell detection model, wherein the central point expansion image comprises at least one central point expansion area group, different central point expansion area groups are used for representing different cell types, and for each central point expansion area group, the central point expansion area group comprises at least one central point expansion area which is used for representing the position of a cell in the pathological image to be processed;
step S4: performing spot detection processing on the central point expansion image to obtain a central point pixel position of each central point expansion area in the central point expansion image, wherein the central point pixel position is used for representing a cell central position of a cell in the pathological image to be processed;
step S5: and counting the number of central point pixel positions in each central point expansion area group to obtain the number of cells under each cell type in the pathological image to be processed.
2. The method as claimed in claim 1, wherein the pathological image in step S1 comprises a first full pathological image or a first designated region in the first full pathological image; the pathological image to be processed in step S3 comprises a second full pathological image or a second designated region in the second full pathological image.
3. The method of claim 1, wherein prior to training the pre-constructed cell detection model in step S2, the method further comprises:
for each pathological image, acquiring a label image corresponding to the pathological image, wherein the label image includes at least one target pixel point set, different target pixel point sets are used for representing different cell types, target pixel points in different target pixel point sets have different pixel values on the label image, the target pixel points are used for representing cell center positions of cells in the pathological image, and in the label image: the pixel values of other pixel points except the target pixel points are different from the pixel values of the target pixel points;
and expanding each target pixel point in the label image into a target area with a first preset size to obtain a target label image.
4. The method according to claim 3, wherein the step S2 specifically comprises the steps of:
step S201: dividing each pathological image into a training set and a test set;
step S202: segmenting each pathological image included in the training set and a target label image corresponding to each pathological image to obtain a plurality of first pathological sub-images with second preset sizes and first target label sub-images with second preset sizes corresponding to each first pathological sub-image; segmenting each pathological image included in the test set and the target label image corresponding to each pathological image to obtain a plurality of second pathological sub-images with second preset sizes and second target label sub-images with second preset sizes corresponding to the second pathological sub-images;
step S203: training the pre-constructed cell detection model by using each first pathological sub-image and a first target label sub-image corresponding to each first pathological sub-image;
step S204: evaluating the trained cell detection model by using each second pathological sub-image and a second target label sub-image corresponding to each second pathological sub-image to obtain the sensitivity and the accuracy of the trained cell detection model;
step S205: judging whether the accuracy and the sensitivity are both greater than a preset threshold value, and if so, taking the trained cell detection model as the qualified cell detection model; otherwise, repeating the step S203, the step S204 and the step S205 until the qualified cell detection model is obtained.
5. The method according to claim 1, wherein the step S3 specifically comprises the steps of:
step S301: segmenting the pathological image to be processed to obtain at least one pathological sub-image to be processed with a second preset size;
step S302: inputting each pathological sub-image to be processed into the qualified cell detection model to obtain a central point expansion sub-image corresponding to the pathological sub-image to be processed, wherein the central point expansion sub-image comprises at least one central point expansion area group used for representing cell types, the pixel values of central point expansion areas in the central point expansion sub-image, which are included in different central point expansion area groups, are different, and the central point expansion area is used for representing the positions of cells in the pathological sub-image to be processed;
step S303: and splicing the central point expansion sub-images corresponding to the pathological sub-images to be processed according to the position information of the pathological sub-images to be processed in the pathological images to be processed to obtain the central point expansion images.
6. A cell detection device based on pathological image, comprising:
the acquiring module is used for acquiring a plurality of pathology images added with labels, wherein the labels comprise a label of a cell center position of at least one cell and a label of a cell type in the pathology images;
the execution module is used for training and evaluating a pre-constructed cell detection model by using each pathological image to obtain a qualified cell detection model, wherein the pre-constructed cell detection model is a semantic segmentation model;
the detection module is used for acquiring a central point expansion image of a pathological image to be processed by using the qualified cell detection model, wherein the central point expansion image comprises at least one central point expansion area group, different central point expansion area groups are used for representing different cell types, and for each central point expansion area group, the central point expansion area group comprises at least one central point expansion area which is used for representing the position of a cell in the pathological image to be processed;
the processing module is used for carrying out spot detection processing on the central point expansion image to obtain a central point pixel position of each central point expansion area in the central point expansion image, wherein the central point pixel position is used for representing the cell central position of cells in the pathological image to be processed;
and the counting module is used for counting the number of the central point pixel positions in each central point expansion area group to obtain the number of the cells under each cell type in the pathological image to be processed.
7. The apparatus of claim 6, wherein the execution module, prior to being configured to train the pre-constructed cell detection model, is further configured to:
for each pathological image, acquiring a label image corresponding to the pathological image, wherein the label image includes at least one target pixel point set, different target pixel point sets are used for representing different cell types, target pixel points in different target pixel point sets have different pixel values on the label image, the target pixel points are used for representing cell center positions of cells in the pathological image, and in the label image: the pixel values of other pixel points except the target pixel points are different from the pixel values of the target pixel points;
and expanding each target pixel point in the label image into a target area with a first preset size to obtain a target label image.
8. The apparatus of claim 6, wherein the execution module specifically comprises:
a dividing unit for dividing each of the pathological images into a training set and a test set;
the first segmentation unit is used for segmenting each pathological image included in the training set and the target label image corresponding to each pathological image to obtain a plurality of first pathological sub-images with second preset sizes and first target label sub-images with second preset sizes corresponding to each first pathological sub-image; segmenting each pathological image included in the test set and the target label image corresponding to each pathological image to obtain a plurality of second pathological sub-images with second preset sizes and second target label sub-images with second preset sizes corresponding to the second pathological sub-images;
the training unit is used for training the pre-constructed cell detection model by using each first pathological sub-image and the first target label sub-image corresponding to each first pathological sub-image;
the evaluation unit is used for evaluating the trained cell detection model by using each second pathological sub-image and a second target label sub-image corresponding to each second pathological sub-image to obtain the sensitivity and the accuracy of the trained cell detection model;
a judging unit, configured to judge whether the accuracy and the sensitivity are both greater than a preset threshold; if both are greater than the preset threshold, the trained cell detection model is used as the qualified cell detection model; otherwise, step S203 (training the pre-constructed cell detection model by using each first pathological sub-image and the first target label sub-image corresponding to each first pathological sub-image), step S204 (evaluating the trained cell detection model by using each second pathological sub-image and the second target label sub-image corresponding to each second pathological sub-image to obtain the sensitivity and the accuracy of the trained cell detection model) and step S205 (judging whether the accuracy and the sensitivity are both greater than the preset threshold) are repeated until the qualified cell detection model is obtained.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of the preceding claims 1-5 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of the claims 1-5.
CN202111085582.4A 2021-09-16 2021-09-16 Pathological image-based cell detection method, pathological image-based cell detection device, pathological image-based cell detection equipment and storage medium Pending CN113781457A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111085582.4A CN113781457A (en) 2021-09-16 2021-09-16 Pathological image-based cell detection method, pathological image-based cell detection device, pathological image-based cell detection equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111085582.4A CN113781457A (en) 2021-09-16 2021-09-16 Pathological image-based cell detection method, pathological image-based cell detection device, pathological image-based cell detection equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113781457A true CN113781457A (en) 2021-12-10

Family

ID=78844479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111085582.4A Pending CN113781457A (en) 2021-09-16 2021-09-16 Pathological image-based cell detection method, pathological image-based cell detection device, pathological image-based cell detection equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113781457A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661815A (en) * 2022-12-07 2023-01-31 赛维森(广州)医疗科技服务有限公司 Pathological image classification method and image classification device based on global feature mapping
CN115661815B (en) * 2022-12-07 2023-09-12 赛维森(广州)医疗科技服务有限公司 Pathological image classification method and device based on global feature mapping
CN117218139A (en) * 2023-09-12 2023-12-12 珠海横琴圣澳云智科技有限公司 Method and device for determining cell density of sample
CN117218139B (en) * 2023-09-12 2024-05-24 珠海横琴圣澳云智科技有限公司 Method and device for determining cell density of sample

Similar Documents

Publication Publication Date Title
US11842556B2 (en) Image analysis method, apparatus, program, and learned deep learning algorithm
US11436718B2 (en) Image analysis method, image analysis apparatus, program, learned deep layer learning algorithm manufacturing method and learned deep layer learning algorithm
US11593656B2 (en) Using a first stain to train a model to predict the region stained by a second stain
US10755138B2 (en) Systems and methods for finding regions of interest in hematoxylin and eosin (H and E) stained tissue images and quantifying intratumor cellular spatial heterogeneity in multiplexed/hyperplexed fluorescence tissue images
CN111242961B (en) Automatic film reading method and system for PD-L1 antibody staining section
US20140219538A1 (en) Method and software for analysing microbial growth
CN113781457A (en) Pathological image-based cell detection method, pathological image-based cell detection device, pathological image-based cell detection equipment and storage medium
CN111931751B (en) Deep learning training method, target object identification method, system and storage medium
US11538261B2 (en) Systems and methods for automated cell segmentation and labeling in immunofluorescence microscopy
CN110853005A (en) Immunohistochemical membrane staining section diagnosis method and device
CN112819821B (en) Cell nucleus image detection method
CN106780522A (en) A kind of bone marrow fluid cell segmentation method based on deep learning
CN108305253A (en) A kind of pathology full slice diagnostic method based on more multiplying power deep learnings
KR101813223B1 (en) Method and apparatus for detecting and classifying surface defect of image
CN115578560A (en) Cancer region segmentation method of IHC membrane plasma expression pathological image based on deep learning
CN111950544A (en) Method and device for determining interest region in pathological image
CN111951271A (en) Method and device for identifying cancer cells in pathological image
JP7393769B2 (en) Computer-implemented process for images of biological samples
DK2901415T3 (en) PROCEDURE FOR IDENTIFICATION OF CELLS IN A BIOLOGICAL Tissue
CN116153497A (en) Automatic scoring system for immunohistochemical images of colorectal cancer P53 protein
CN114898346A (en) Training method and device for scene text detection model and storage medium
CN115641578A (en) Method and system for screening positive mutant cells based on nucleoplasm ratio
CN118135312A (en) Method, electronic device, storage medium for determining an operating node in a cultivation of organoids
CN116343203A (en) Rapid target extraction method based on microscopic light field model
CN116596897A (en) Image post-processing method, positive cell counting method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination