CN111402267B - Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image - Google Patents


Info

Publication number
CN111402267B
CN111402267B (granted from application CN202010175593.0A)
Authority
CN
China
Prior art keywords
image
channel
cell nucleus
features
segmentation
Prior art date
Legal status
Active
Application number
CN202010175593.0A
Other languages
Chinese (zh)
Other versions
CN111402267A (en)
Inventor
赖义明
郭正辉
彭圣萌
吴宛桦
范辉阳
雷震
李嘉路
华芮
张游龙
李骁
Current Assignee
Shenzhen Huajia Biological Intelligence Technology Co ltd
Sun Yat Sen Memorial Hospital Sun Yat Sen University
Original Assignee
Shenzhen Huajia Biological Intelligence Technology Co ltd
Sun Yat Sen Memorial Hospital Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Shenzhen Huajia Biological Intelligence Technology Co ltd, Sun Yat Sen Memorial Hospital Sun Yat Sen University filed Critical Shenzhen Huajia Biological Intelligence Technology Co ltd
Priority to CN202010175593.0A priority Critical patent/CN111402267B/en
Publication of CN111402267A publication Critical patent/CN111402267A/en
Application granted granted Critical
Publication of CN111402267B publication Critical patent/CN111402267B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/45Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a method, a device, and a terminal for segmenting epithelial cell nuclei in prostate cancer pathological images. The segmentation method comprises the following steps: performing color space conversion on an acquired pathological staining image, and segmenting cell nuclei based on a single-channel image of the converted color image; performing region segmentation on the initial-size image and a scaled image of each cell nucleus in each single-channel image of the nucleus-segmented color image to obtain single-channel region images, and extracting features from each single-channel region image; inputting the resulting single-channel and multi-channel image features of each cell nucleus into a nucleus classification model to classify the nuclei, and determining the epithelial cell nuclei in the pathological staining image from the classification result. The technical scheme of the invention addresses the prior-art difficulty of accurately segmenting the epithelial cell nuclei in the prostate, thereby improving the accuracy of pathological diagnosis of prostate cancer and of judging its severity.

Description

Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image
Technical Field
The invention relates to the technical field of pathological image processing, in particular to a segmentation method, a segmentation device and a segmentation terminal of epithelial nuclei in a prostate cancer pathological image.
Background
The prostate is an important organ of the male genitourinary system. A normal prostate gland consists of a glandular cavity (lumen) and the epithelial cells surrounding it, while the regions between glands consist of stroma and stromal cells. Cancerous changes of the prostate arise mainly in the epithelial cells; their malignant expansion can shrink or even completely block the glandular cavity, severely impairing prostate function and threatening patients' health, quality of life, and even survival.
In clinical practice, pathological diagnosis is the "gold standard", and pathological images play a central role in it: a staining agent (such as hematoxylin & eosin) reacts differently with nucleic acids and proteins, so the various structural regions of a tissue section appear in different colors. By observing epithelial cell morphology, nuclear atypia, glandular structure, and arrangement patterns in the pathological image, a pathologist can grade the severity of a patient's prostate cancer and plan targeted treatment. In practical application, however, a pathological image contains so many cells that exhaustive inspection is difficult; at the same time, pathologists are in short supply, take long to train, work under heavy load, and vary in diagnostic skill, and the subjectivity of diagnosis further limits its efficiency and accuracy. An efficient, highly accurate method is therefore urgently needed.
Traditional image segmentation algorithms exploit the color difference between foreground and background and can preliminarily meet the need to segment nuclear regions in pathological images. However, because both epithelial and stromal cell nuclei contain nucleic acids, their colors after specific binding with the stain show no significant difference, and it is difficult to distinguish the two types with traditional segmentation algorithms. In addition, adjacent and overlapping nuclei leave considerable room for improving nucleus segmentation accuracy.
Disclosure of Invention
In view of the above, embodiments of the invention provide a method, a device, and a terminal for segmenting epithelial cell nuclei in prostate cancer pathological images, enabling their accurate classification.
An embodiment of the present invention provides a method for segmenting epithelial nuclei in a pathological image of prostate cancer, including: performing color space conversion on the obtained pathological staining image, and obtaining a cell nucleus segmentation color image after cell nucleus segmentation based on a single-channel image of the color image obtained by conversion;
the method comprises the steps of respectively carrying out region segmentation on an initial size image of each cell nucleus in each single-channel image of the cell nucleus segmentation color image and a scaled image scaled to a preset fixed size to obtain a corresponding single-channel region image, carrying out feature extraction on the single-channel region image to obtain single-channel image features of corresponding cell nuclei, and obtaining corresponding multi-channel image features based on the single-channel image features;
Inputting the single-channel image features and the multi-channel image features of the corresponding cell nuclei into a cell nucleus classification model to classify the cell nuclei, and determining the epithelial cell nuclei in the pathology staining image according to the classification result.
Further, in the above method for segmenting epithelial nuclei in a pathological image of prostate cancer, the step of obtaining a segmented color image of nuclei after performing segmentation of nuclei based on a single-channel image of the converted color image includes:
selecting a single-channel image which makes the difference between the cell nucleus and the background maximum from the color image obtained by conversion;
performing Gaussian smoothing on the selected single-channel image, and then detecting the edge pixels of the cell nucleus by using an edge detection algorithm to obtain the gray value of the edge pixels of the cell nucleus;
calculating a gray value threshold by using a threshold segmentation algorithm according to the gray value of the obtained cell nucleus edge pixel, and judging that the current pixel belongs to a cell nucleus if the gray value of the pixel in the selected single-channel image is larger than the gray value threshold;
and acquiring coordinates of each cell nucleus in the selected single-channel image, and mapping each coordinate into the color image to obtain a cell nucleus segmentation color image.
Further, in the above method for segmenting epithelial nuclei in a pathological image of prostate cancer, before the acquiring coordinates of each nucleus in the selected single-channel image, the method further includes:
morphological processing is carried out on the cell nuclei obtained by segmentation in the selected single-channel image;
then counting the areas of all the cell nuclei and calculating an area threshold of a single cell nucleus, wherein the area threshold is used for filtering false positive cell nuclei with the area smaller than the area threshold;
the adjacent cell nuclei and the overlapped cell nuclei are segmented based on a morphological image segmentation algorithm.
Further, in the above method for segmenting epithelial nuclei in a pathological image of prostate cancer, the "segmenting adjacent nuclei and overlapping nuclei based on a morphological image segmentation algorithm" includes:
calculating the shortest distance from each foreground pixel of the currently adjacent or overlapping nuclei to a background pixel, and setting the distance of background pixels to zero, to obtain a distance map;
selecting, on the distance map, several points at which the distance to the background pixels is a local minimum as bottom points;
growing a region from each bottom point as its own starting point, with a preset step size, until the boundary between two adjacent grown regions is obtained;
and segmenting the currently adjacent or overlapping nuclei along that boundary.
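As a rough Python/SciPy sketch of splitting touching nuclei with a distance map (not the patented implementation: here seeds are taken at local extrema of the distance map and each foreground pixel is simply assigned to its nearest seed, a stand-in for the step-wise region growing described above; function names are ours):

```python
import numpy as np
from scipy import ndimage

def split_touching_nuclei(mask):
    """Split a binary mask of adjacent/overlapping nuclei via its distance map."""
    dist = ndimage.distance_transform_edt(mask)            # distance to background
    # Seed points: local extrema of the distance map, roughly one per nucleus.
    seeds = (dist == ndimage.maximum_filter(dist, size=5)) & (dist > 0)
    labels, _ = ndimage.label(seeds)
    # Assign every foreground pixel to its nearest seed -- a simple stand-in
    # for growing each region step by step until neighbouring regions meet.
    _, (ri, ci) = ndimage.distance_transform_edt(labels == 0, return_indices=True)
    return labels[ri, ci] * mask
```

On a mask of two square blobs joined by a one-pixel bridge, this yields two distinct labels, one per blob.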
Further, in the above segmentation method of epithelial nuclei in a prostate cancer pathological image, the regions obtained for each segmented nucleus include an inner region, an inner neighborhood, an outer neighborhood, an outer region, and the minimum rectangular region containing the nucleus's outer region; the single-channel region images include three single-channel region images for each of the inner region, inner neighborhood, outer neighborhood, outer region, and minimum rectangular region of the unscaled initial-size image, and three single-channel region images for each of those regions of the scaled image;
the "extracting features of the single channel region image to obtain single channel image features of corresponding nuclei, and obtaining corresponding multi-channel image features based on the single channel image features" includes:
extracting features of the first class from the three single-channel region images of each of the unscaled inner region, inner neighborhood, outer neighborhood, and outer region and of the scaled inner region, inner neighborhood, and outer neighborhood, and extracting features of the second class from the three single-channel region images of the scaled minimum rectangular region, to obtain the single-channel image features of the nucleus;
multiplying the corresponding elements of any two single-channel image features to obtain a first type of multi-channel image feature, multiplying the corresponding elements of three single-channel image features to obtain a second type of multi-channel image feature, and combining the two types to obtain the multi-channel image features of the nucleus.
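To make the combination rule concrete, here is a small hypothetical sketch (function and variable names are ours, not the patent's) of forming multi-channel features from per-channel feature vectors by element-wise products:

```python
import numpy as np
from itertools import combinations

def multichannel_features(per_channel):
    """per_channel: equal-length 1-D feature vectors, one per color channel.

    Returns the pairwise element-wise products (first type) followed by the
    three-channel element-wise product (second type), concatenated."""
    feats = [a * b for a, b in combinations(per_channel, 2)]
    feats.append(np.prod(per_channel, axis=0))
    return np.concatenate(feats)
```

For three channels of length-m feature vectors this yields 3m + m combined features.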
Further, in the segmentation method of the epithelial cell nuclei in the prostate cancer pathological image, the first class of features include texture features, morphological features and color statistics features, and the second class of features include local binary pattern statistics histogram features and fractal dimension features;
the texture features comprise a gray-level co-occurrence matrix, a gray-level size-zone matrix, a gray-level run-length matrix, a neighborhood gray-tone difference matrix, and a gray-level dependence matrix;
the morphological features include area, perimeter-to-area ratio, and longest diameter of the subject region;
the color statistic characteristics comprise minimum value, average absolute deviation, median, variance, energy, total energy, kurtosis and skewness of the gray value of the image;
the local binary pattern statistical histogram feature is calculated over a preset radius and the surrounding pixels sampled at that radius;
the fractal dimension features are extracted based on a preset number of thresholds; for each threshold they include the area above the threshold, the mean above the threshold, the fractal dimension above the threshold, and the area, mean, and fractal dimension between adjacent thresholds.
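Among the texture features listed above, the gray-level co-occurrence matrix is the simplest to illustrate. A minimal NumPy sketch (our own, not the patent's code) for a single pixel offset, with one derived statistic:

```python
import numpy as np

def gray_cooccurrence(img, levels, dr=0, dc=1):
    """Count co-occurrences of gray-level pairs at the offset (dr, dc)."""
    m = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m

def glcm_contrast(m):
    """Contrast statistic: sum of (i - j)^2 * p(i, j) over the normalised matrix."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())
```

In practice a radiomics library would compute all the matrices named above; this sketch only shows the mechanics.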
Further, in the segmentation method of the epithelial cell nucleus in the prostate cancer pathological image, the cell nucleus classification model is constructed based on a logistic regression model, and the logistic regression model is constructed with L1 regularization as follows:

\hat{\beta} = \arg\min_{\beta}\; -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i \log p(x_i) + (1-y_i)\log\big(1-p(x_i)\big)\Big] + \lambda \sum_{j=1}^{P} |\beta_j|, \qquad p(x_i) = \frac{1}{1+e^{-x_i^{\mathrm{T}}\beta}}

where N is the total number of nuclei input into the logistic regression model and P is the number of all input features; y_i is the true class label of the i-th nucleus; x_i is the input feature vector of the i-th nucleus; β is the coefficient vector over all input features and β_j the coefficient of the j-th input feature; and λ is the L1 regularization penalty coefficient.
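A toy illustration of fitting such an L1-penalised logistic regression by plain subgradient descent (a practical system would use a library solver; the code below is only a sketch and all names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_l1_logreg(X, y, lam=0.01, lr=0.1, steps=2000):
    """Subgradient descent on the L1-regularised logistic loss."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(steps):
        # Gradient of the mean log-loss plus the L1 subgradient lam * sign(beta).
        grad = X.T @ (sigmoid(X @ beta) - y) / n + lam * np.sign(beta)
        beta -= lr * grad
    return beta
```

On a linearly separable 1-D toy set the fitted coefficient is positive, so predictions cross 0.5 at the right side.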
Another embodiment of the present invention provides a segmentation apparatus for epithelial nuclei in a pathological image of prostate cancer, including:
the cell nucleus segmentation module is used for carrying out color space conversion on the obtained pathological staining image, and obtaining a cell nucleus segmentation color image after carrying out cell nucleus segmentation on the single-channel image of the color image obtained by conversion;
The cell nucleus feature extraction module is used for respectively carrying out region segmentation on an initial size image of each cell nucleus in each single-channel image of the cell nucleus segmentation color image and a zoom image zoomed to a preset fixed size to obtain a corresponding single-channel region image, carrying out feature extraction on the single-channel region image to obtain single-channel image features of corresponding cell nuclei, and obtaining corresponding multi-channel image features based on the single-channel image features;
and the epithelial cell nucleus classification module is used for inputting the single-channel image features and the multi-channel image features of the corresponding cell nuclei into a cell nucleus classification model to classify the cell nuclei, and determining the epithelial cell nuclei in the pathological staining image according to the classification result.
A further embodiment of the present invention proposes a terminal comprising a processor and a memory; the memory stores a computer program, and the processor executes the computer program to implement the above method for segmenting epithelial cell nuclei in a prostate cancer pathological image.
Yet another embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed, implements a method for segmentation of epithelial nuclei in a pathological image of prostate cancer according to the above.
The technical scheme of the embodiment of the invention has the following beneficial effects:
according to the method provided by the embodiment of the invention, the automatic segmentation of the epithelial cell nuclei in the pathological image of the prostate cancer is realized by adopting three steps, namely, the pathological image is firstly converted into a color space which enhances the difference between the cell nuclei and the background, so that the cell nuclei can be conveniently segmented, and the cell nuclei are segmented based on a single-channel image of the converted color image; then, carrying out different area segmentation and area image feature extraction on each cell nucleus in different single-channel images of the obtained cell nucleus segmentation color image, wherein the area segmentation and area image feature extraction comprise morphological features, texture features, color statistics value features, LBP histogram statistics value features, fractal dimension features and the like; finally, the characteristics are input into a trained cell nucleus classification model, so that whether the cell nucleus is an epithelial cell nucleus or not can be accurately judged. The method can well solve the difficult problem that epithelial cell nuclei and stromal cell nuclei in the prostate are difficult to distinguish in the prior art, thereby improving the accuracy of judging the severity degree and the like of the prostate cancer and the like.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are required for the embodiments will be briefly described, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope of the present invention. Like elements are numbered alike in the various figures.
FIG. 1 is a flow chart of a method for segmenting epithelial nuclei in a prostate cancer pathology image according to an embodiment of the present invention;
FIG. 2 shows a first flow diagram of nuclear segmentation in a prostate cancer pathology image according to an embodiment of the present invention;
FIG. 3 shows a second flow diagram of nuclear segmentation in a prostate cancer pathology image according to an embodiment of the present invention;
FIG. 4 shows a schematic representation of a nuclear fraction color image according to an embodiment of the present invention;
FIG. 5 is a flow chart of nuclear feature extraction in a prostate cancer pathology image according to an embodiment of the present invention;
FIG. 6 shows ROC curves tested against a predictive model in accordance with an embodiment of the invention;
FIG. 7 shows a schematic representation of an epithelial cell nucleus segmentation in accordance with an embodiment of the present invention;
fig. 8 shows a schematic structural diagram of an epithelial cell nucleus segmentation device in a prostate cancer pathological image according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the invention.
The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
The terms "comprises," "comprising," "including," or any other variation thereof are intended to cover the features, numbers, steps, operations, elements, components, or combinations thereof that may be used in various embodiments of the present invention, and are not intended to exclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the invention belong. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein in connection with the various embodiments of the invention.
Example 1
Referring to fig. 1, the present embodiment provides a method for segmenting epithelial nuclei in a pathological image of prostate cancer, which can be applied to the processing of the pathological image of prostate cancer, and particularly includes analyzing the morphology, texture, color, etc. of the epithelial nuclei in the pathological image, so as to effectively predict the prostate cancer.
The method for segmenting the epithelial nuclei in the pathological image of prostate cancer will be described in detail.
Step S1, performing color space conversion on the obtained pathology staining image, and obtaining a cell nucleus segmentation color image after cell nucleus segmentation by utilizing a single-channel image of the converted color image.
Step S1 is mainly used to segment the nuclei in the prostate, i.e. to identify the nuclei in the pathology staining image. In this embodiment, considering that pathological images may differ in staining intensity, the pathology staining image may be obtained by applying a color normalization preprocessing to images stained with a staining agent (such as hematoxylin & eosin). This reduces the influence of differing staining conditions on the accuracy of the segmentation result. Preferably, color normalization is performed using a histogram matching method.
Illustratively, the histogram matching method-based color normalization process mainly includes the following sub-steps:
a. selecting a pathological image with good staining quality as the standard image, and calculating its cumulative histogram; "good staining quality" can be judged from the staff's practical experience, e.g. whether image details are sufficiently clear and colors saturated and rich;
b. calculating a cumulative histogram of a pathological image to be standardized;
c. for each gray level A of the pathological image to be standardized, examining each gray level B of the standard image; when the cumulative probabilities of the i-th gray level A_i and the j-th gray level B_j are closest, marking A_i and B_j as matching;
d. each gray level of the image to be normalized is mapped to the gray level of the standard image that matches it.
Through the steps, the standardized pretreatment of pathological images with different coloring conditions can be realized. The principle of the histogram matching method can be referred to in the prior related literature, and is not described in detail herein.
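A compact NumPy sketch of the matching steps above, assuming single-channel uint8 images (apply per channel for color; the function name is ours):

```python
import numpy as np

def match_histogram(src, ref):
    """Remap the gray levels of `src` so its cumulative histogram follows `ref`."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size          # cumulative histogram of src
    r_cdf = np.cumsum(r_counts) / ref.size          # cumulative histogram of ref
    # Map each source level through the reference CDF -- a continuous variant
    # of picking the gray level with the closest cumulative probability
    # (step c) -- then apply the mapping to every pixel (step d).
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[np.searchsorted(s_vals, src)]
```

After matching, a two-level source image takes on the reference image's gray levels.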
For the color space conversion described above, it means that the pathology image in the original color space is converted into another color space in which the difference between the nucleus and the background is more pronounced, and then a single channel image in which the difference between the nucleus and the background is maximized is selected as the basic image of the nucleus segmentation. For example, for hematoxylin & eosin stained images, the pathology stained image in the original RGB color space can be converted into the HEO color space, wherein the difference between the nuclei and the background in the H-channel image is significantly enhanced, so that the nuclei can be conveniently identified and segmented.
For example, as shown in fig. 2, the step in S1 of obtaining the nucleus-segmented color image after nucleus segmentation based on a single-channel image of the converted color image includes:
in the substep S11, a single-channel image that maximizes the difference between the nucleus and the background is selected from the color images obtained by conversion. In this embodiment, it is preferable to convert the acquired pathology-stained image into the HEO color space, and select an H-channel image in which the difference between the nucleus and the background is maximized as a base image of the nucleus segmentation.
Exemplarily, the pathology-stained image in RGB color space may be deconvolved into HEO color space using a color space conversion operator. For example, the color space conversion operator is selected as { [0.644211000,0.716556000,0.266844000]; [0.09278900,0.95411100,0.28311100]; [0.00000000,0.00000000,0.0000000]}.
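Using the operator values quoted above, the RGB-to-HEO deconvolution can be sketched as a least-squares unmix in NumPy (a plain illustration of the idea, not the patent's exact implementation; names are ours):

```python
import numpy as np

# Rows: optical-density vectors for hematoxylin, eosin, residual (from the text).
STAINS = np.array([
    [0.644211, 0.716556, 0.266844],
    [0.092789, 0.954111, 0.283111],
    [0.0,      0.0,      0.0],
])

def rgb_to_heo(rgb):
    """rgb: float array (..., 3), values in (0, 255]. Returns stain channels."""
    od = -np.log10(np.maximum(rgb, 1e-6) / 255.0)       # optical density
    flat = od.reshape(-1, 3) @ np.linalg.pinv(STAINS)   # least-squares unmixing
    return flat.reshape(rgb.shape)                      # channel 0 = H, 1 = E
```

A pixel synthesised from pure hematoxylin unmixes back to its H concentration with essentially zero eosin.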
And S12, performing Gaussian smoothing on the selected single-channel image, and detecting the edge pixels of the cell nucleus by using an edge detection algorithm to obtain the gray values of the edge pixels of the cell nucleus.
Gaussian smoothing filters out Gaussian noise in the image. The edge detection algorithm may include, but is not limited to, Sobel, Canny, LoG (Laplacian of Gaussian), and Laplacian edge detection. In this embodiment, the Laplacian algorithm is preferably used to detect nucleus edge pixels. Its principle is that, in the selected single-channel image, the gray values of pixels inside and outside a nucleus change little, whereas those at the nucleus edge change sharply; the edge pixels can therefore be detected by computing the second partial derivatives of the image gray values with respect to the x and y directions.
For example, in some embodiments, the Laplace operator {[1,1,1]; [1,-8,1]; [1,1,1]} may be used to convolve the image, computing each pixel's gradient in place of the second-order differential, and the pixels whose gradient absolute values fall in the top 6±4% are taken as the edge pixels of the nucleus. It should be understood that the Laplace operator and the 6±4% value can be adjusted according to actual requirements.
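The Laplacian-plus-percentile scheme can be sketched as follows (the 6% kept fraction is one point in the 6±4% range the text mentions; names are ours):

```python
import numpy as np
from scipy import ndimage

LAPLACE = np.array([[1,  1, 1],
                    [1, -8, 1],
                    [1,  1, 1]])

def edge_pixels(img, keep_frac=0.06):
    """Mark the pixels whose Laplacian magnitude is in the top `keep_frac`."""
    lap = ndimage.convolve(img.astype(float), LAPLACE)
    thresh = np.percentile(np.abs(lap), 100.0 * (1.0 - keep_frac))
    return np.abs(lap) >= thresh
```

On a flat square, only its boundary survives: the interior Laplacian is zero, so only boundary pixels exceed the percentile threshold.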
It will be appreciated that the nucleus edge pixels obtained in substep S12 often include both the true boundary pixels and some pixels around the boundary; the latter must be examined further to obtain more accurate nucleus boundary pixels.
Substep S13: calculate a gray value threshold with a threshold segmentation algorithm from the gray values of the obtained cell nucleus edge pixels; if the gray value of a pixel in the selected single-channel image is larger than the gray value threshold, the pixel is judged to belong to a cell nucleus, otherwise it is judged to belong to the background.
Illustratively, for the threshold segmentation algorithm, the gray value threshold is preferably calculated using the Otsu method (OTSU). In one embodiment, the step of calculating the gray value threshold using the Otsu method includes:
a. Selecting a series of candidate thresholds according to the gray level of the acquired cell nucleus edge pixels;
b. For each candidate threshold, calculate the corresponding inter-class variance var_class:

var_class = w0·(μ0 − μ)² + w1·(μ1 − μ)² = w0·w1·(μ0 − μ1)²

where w0 is the proportion of foreground pixels among all pixels of the whole image; w1 is the proportion of background pixels among all pixels of the whole image; μ0 is the mean gray value of the foreground pixels; μ1 is the mean gray value of the background pixels; and μ is the mean gray value of all pixels of the whole image;
c. a candidate threshold that maximizes the inter-class variance is selected as the gray value threshold.
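The three steps above can be sketched directly (a toy implementation; practical libraries such as OpenCV compute the same quantity from a histogram):

```python
import numpy as np

def otsu_threshold(gray_values):
    """Pick the candidate threshold maximizing the inter-class variance
    w0*w1*(mu0 - mu1)**2, per the formula above."""
    vals = np.asarray(gray_values, dtype=float)
    best_t, best_var = None, -1.0
    for t in np.unique(vals)[:-1]:                     # step a: candidates
        fg, bg = vals[vals > t], vals[vals <= t]
        w0, w1 = fg.size / vals.size, bg.size / vals.size
        var = w0 * w1 * (fg.mean() - bg.mean()) ** 2   # step b
        if var > best_var:                             # step c: keep the best
            best_t, best_var = t, var
    return best_t

# Toy data: background gray values around 50-60, nuclei around 200.
samples = np.concatenate([np.full(30, 50.0), np.full(30, 60.0),
                          np.full(40, 200.0)])
t = otsu_threshold(samples)
```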
Substep S14: obtain the coordinates of each cell nucleus in the selected single-channel image, and map each coordinate into the color image to obtain the cell nucleus segmentation color image.
Illustratively, after each cell nucleus in the selected single-channel image has been determined by the above substeps, the coordinates of each cell nucleus in the single-channel image are marked; when these coordinate marks are mapped into the original color image, a color image including the segmented cell nuclei is obtained.
Furthermore, in another embodiment, the above cell nucleus segmentation color image may contain some segmented areas of too small an area; these are mostly not true cell nuclei and are therefore referred to as false positive cell nuclei in this application. In addition, cell nucleus segmentation may yield several adjacent or overlapping cell nuclei. To further improve the segmentation accuracy of the cell nuclei, as shown in fig. 3, the method comprises:
Substep S15: perform morphological processing on the cell nuclei obtained by segmentation in the selected single-channel image.
Illustratively, a morphological opening operation may be used to remove burrs on the cell nucleus boundary, followed by a morphological closing operation to fill holes inside the cell nucleus, and so forth.
Substep S16: count the areas of all cell nuclei and calculate an area threshold for a single cell nucleus, which is used to filter out false positive cell nuclei whose area is smaller than said area threshold.
Since the pathology staining image may contain some noise similar in color to the cell nuclei, these too-small false positive nuclei can be filtered out by setting a cell nucleus area threshold. For example, for each pathological image, the area threshold T of the nucleus can be chosen using the following formula: T = max(Am/3, N), where Am is the average of all cell nucleus areas, and N is a preset minimum area of the nucleus, which can be determined according to the image size, the scanning magnification, the actual scaling magnification, and the like. If the area of the current nucleus is smaller than the area threshold, it is judged to be a false positive nucleus.
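A sketch of this false-positive filter on labeled connected components, assuming the (garbled in the source) threshold formula takes the form T = max(Am/3, N):

```python
import numpy as np
from scipy import ndimage as ndi

def filter_small_nuclei(mask, n_min=20):
    """Drop connected components whose area falls below T = max(Am/3, n_min),
    where Am is the mean component area and n_min stands in for the preset
    minimum nucleus area N of the text."""
    labels, n = ndi.label(mask)
    if n == 0:
        return mask
    areas = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    t = max(areas.mean() / 3.0, n_min)
    keep_ids = np.flatnonzero(areas >= t) + 1      # surviving label ids
    return np.isin(labels, keep_ids)

# Toy mask: one real nucleus (area 100) and one speck (area 4).
mask = np.zeros((30, 30), dtype=bool)
mask[2:12, 2:12] = True
mask[20:22, 20:22] = True
cleaned = filter_small_nuclei(mask)
```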
In a substep S17, adjacent nuclei and overlapping nuclei are segmented based on a morphological image segmentation algorithm.
Further, exemplarily, after the false positive nuclei are filtered out, any remaining adjacent or overlapping nuclei may be further segmented using a morphological image segmentation algorithm, preferably the watershed algorithm. In one embodiment, the step of performing a secondary segmentation of adjacent or overlapping nuclei using the watershed algorithm comprises:
a. Calculate the shortest distance from each foreground pixel of the adjacent or overlapping cell nuclei to the background pixels, and set the distance of the background pixels to 0, thereby obtaining a distance map.
b. Select a number of points on the distance map as bottom points, the shortest distance from these points to the background pixels being a local maximum. Illustratively, for the adjacent or overlapping nuclei to be secondarily divided, the positions of the foreground pixels farthest from the background pixels, i.e., near the nucleus centers, may be selected as the bottom points from which the water diffusion starts, i.e., the starting points of the region expansion.
c. Expand a region from each bottom point as its starting point, with a preset step length, until two adjacent expansion regions intersect, and take the intersection as the boundary. For example, the preset step length may be 1 pixel.
d. Divide the currently adjacent or overlapping cell nuclei according to the boundary.
The region expansion proceeds from each bottom point, and each expansion follows the marked shortest distance; that is, all pixels reached by the water in one expansion, the "dam", lie at an equal distance, until two adjacent expansion regions meet at a boundary. This intersecting boundary can be regarded as a watershed separating the pixels on the two sides of the dam. By means of these watersheds, an effective segmentation of the adjacent or overlapping nuclei is achieved.
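The watershed substeps a-d can be sketched with SciPy's image-foresting-transform watershed on two overlapping synthetic disks (the seed-detection window size and the cost scaling are illustrative choices, not from the patent):

```python
import numpy as np
from scipy import ndimage as ndi

# Two overlapping disks stand in for a pair of touching nuclei.
yy, xx = np.mgrid[0:60, 0:100]
mask = ((yy - 30) ** 2 + (xx - 30) ** 2 <= 18 ** 2) | \
       ((yy - 30) ** 2 + (xx - 65) ** 2 <= 18 ** 2)

# a. Distance map: shortest distance of each foreground pixel to background
#    (background pixels get 0).
dist = ndi.distance_transform_edt(mask)

# b. Bottom points: seeds where the distance to background is locally largest,
#    i.e. near the nucleus centers.
seeds = (dist == ndi.maximum_filter(dist, size=15)) & mask
markers, n_seeds = ndi.label(seeds)
bg_label = n_seeds + 1
markers[~mask] = bg_label                     # background as its own marker

# c./d. Flood from the seeds over an inverted-distance cost surface; the line
#       where two expanding regions meet is the watershed that splits them.
cost = ((dist.max() - dist) * 10).astype(np.uint16)
labels = ndi.watershed_ift(cost, markers.astype(np.int32))
```

After flooding, the two disk centers carry different labels, i.e. the touching pair has been split along the narrow waist between them.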
It is understood that the segmentation accuracy of each cell nucleus in the color image can be further improved by the above morphological processing and morphological image segmentation. For example, fig. 4 shows a cell nucleus segmentation color image from an actual cell nucleus segmentation operation, in which each cell nucleus is effectively segmented.
Step S2: for each cell nucleus in each single-channel image of the cell nucleus segmentation color image, perform region segmentation on both its initial size image and a scaled image scaled to a preset fixed size to obtain the corresponding single-channel region images; perform feature extraction on each single-channel region image to obtain the single-channel image features of the corresponding cell nucleus; and obtain the corresponding multi-channel image features based on the single-channel image features.
Step S2 mainly extracts the image features of each cell nucleus. Before the region segmentation, each single-channel image of the cell nucleus segmentation color image may be binarized to obtain a binarized image of each single channel, which facilitates the extraction of image features. It should be understood that the following region segmentation and feature extraction are both based on the single-channel binarized images.
In one embodiment, the image of each cell nucleus is segmented into the following 4 region types: an inner region, an inner-and-outer neighborhood, an outer region, and a minimum rectangular region containing the outer region. In this embodiment, the size of the minimum rectangular region is taken as the size of the cell nucleus image.
Taking a certain cell nucleus as an example, four different regions of the cell nucleus can be obtained through region segmentation, denoted respectively as: the nucleus inner region ori, the nucleus inner-and-outer neighborhood dia, the nucleus outer region xor, and the minimum rectangular region nor containing the outer region.
The nucleus inner-and-outer neighborhood dia is obtained by expanding the cell nucleus region with a morphological dilation algorithm, where the size of the dilation operator is k, calculated as follows:
[Formula disclosed as an image in the original: k is computed from the nucleus area S and the circumference ratio π, bounded below by N.]
where S is the area of the nucleus; π is the circumference ratio; and N is a predetermined minimum expansion value, which may be selected based on the size distribution of the nuclei; for example, N may be 10 in one embodiment.
The nucleus outer region xor can be obtained by an exclusive-or (XOR) operation between the nucleus inner-and-outer neighborhood dia and the nucleus inner region ori. The minimum rectangular region nor is the minimum external rectangular region containing the nucleus outer region xor.
The cell nucleus segmentation color image is composed of three single-channel images; exemplarily, when converted into the HEO color space, it is composed of an H-channel image, an E-channel image, and an O-channel image. Thus, for the inner region, inner-and-outer neighborhood, outer region, and minimum rectangular region of a cell nucleus, each region is composed of three corresponding single-channel region images. Taking the inner region as an example, it comprises an H-channel inner region image, an E-channel inner region image, and an O-channel inner region image.
In this embodiment, the initial image of each cell nucleus and the scaled image scaled to a preset fixed size are respectively subjected to region segmentation to obtain corresponding single-channel region images. Wherein the initial image is an image which is not scaled after the cell nucleus is segmented; the scaled image is an image obtained by scaling the initial image of the cell nucleus to a preset fixed size. It will be appreciated that the predetermined fixed size may be selected according to practical requirements.
Thus, each cell nucleus includes an initial image and a scaled image, and each image includes three corresponding single-channel images. The same cell nucleus is therefore region-segmented in each of the three single channels of both images, yielding 24 single-channel region images: three single-channel region images for each of the inner region, inner-and-outer neighborhood, outer region, and minimum rectangular region of its initial size image, and three for each of the scaled inner region, inner-and-outer neighborhood, outer region, and minimum rectangular region of its scaled image.
For example, the four unscaled regions are denoted: ori, the inner region; dia, the inner-and-outer neighborhood; xor, the outer region; and nor, the minimum rectangular region. The corresponding scaled regions are denoted: rori, the scaled nucleus inner region; rdia, the scaled nucleus inner-and-outer neighborhood; rxor, the scaled nucleus outer region; and rnor, the scaled minimum rectangular region.
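The four-region split for one nucleus mask can be illustrated as follows (the patent's dilation-size formula is only disclosed as an image, so the k used here, max(round(sqrt(S/π)/2), n_min), is a placeholder assumption):

```python
import numpy as np
from scipy import ndimage as ndi

def nucleus_regions(ori, n_min=3):
    """Split one binary nucleus mask into the four regions ori/dia/xor/nor."""
    s = ori.sum()
    # Placeholder for the patent's (image-only) formula: dilation size from
    # the equivalent radius sqrt(S/pi), bounded below by n_min.
    k = max(int(round(np.sqrt(s / np.pi) / 2)), n_min)
    dia = ndi.binary_dilation(ori, iterations=k)   # inner-and-outer neighborhood
    xor = dia ^ ori                                # outer region only
    ys, xs = np.nonzero(dia)
    nor = np.zeros_like(dia)                       # minimum bounding rectangle
    nor[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
    return ori, dia, xor, nor

mask = np.zeros((40, 40), dtype=bool)
mask[15:25, 15:25] = True                          # a toy square "nucleus"
ori, dia, xor, nor = nucleus_regions(mask)
```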
As shown in fig. 5, "performing feature extraction on each single-channel region image to obtain the single-channel image features of the corresponding cell nucleus, and obtaining the corresponding multi-channel image features based on the single-channel image features" in the above step S2 includes:
Step S21: extract the first class of features from the three single-channel region images of each of the unscaled inner region, outer region, and inner-and-outer neighborhood and of each of the scaled inner region, outer region, and inner-and-outer neighborhood, and extract the second class of features from the three single-channel region images of the scaled minimum rectangular region, thereby obtaining the single-channel image features of the cell nucleus.
In this embodiment, features of different dimensions are extracted for a cell nucleus and can be used to determine which type the cell nucleus belongs to, for example, whether it is an epithelial cell nucleus or a non-epithelial cell nucleus such as a stromal cell nucleus. Since the analysis of prostate cancer is based mainly on the morphology, texture, color, etc. of the epithelial cell nuclei, accurately identifying the epithelial cell nuclei allows a better analysis of the severity of prostate cancer and the like.
Preferably, at least 5 types of image features are extracted for each cell nucleus. In this embodiment, these may be grouped into a first class of features consisting of texture features, morphological features, color statistics features, etc., and a second class of features consisting of local binary pattern statistical histogram features, fractal dimension features, etc.
Taking a certain cell nucleus as an example, the 3 first-class feature types, namely texture features, morphological features, and color statistics features, are extracted from the single-channel region images of the 6 regions of the cell nucleus: the unscaled inner region ori, inner-and-outer neighborhood dia, and outer region xor, and the scaled inner region rori, inner-and-outer neighborhood rdia, and outer region rxor. In particular, the morphological features need only be extracted from one single-channel region image of each of the 6 regions, which reduces redundant computation. Meanwhile, the 2 second-class feature types, namely the local binary pattern statistical histogram feature and the fractal dimension feature, are extracted from the three single-channel region images of the scaled minimum rectangular region rnor.
Among the above 5 feature classes, the texture features are mainly calculated based on feature matrices such as the Gray Level Co-occurrence Matrix (GLCM), the Gray Level Size Zone Matrix (GLSZM), the Gray Level Run Length Matrix (GLRLM), the Neighbouring Gray Tone Difference Matrix (NGTDM), and the Gray Level Dependence Matrix (GLDM). The calculation formulas of these matrix features can be found in the related literature and are not repeated here. In this embodiment, the extraction of the texture features of the nucleus is performed based on these five matrices.
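Of these matrices, the GLCM is simple enough to sketch directly; `contrast` below is one standard feature derived from it (in practice, libraries such as pyradiomics or scikit-image compute the full set):

```python
import numpy as np

def glcm(gray, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    dy, dx = offset
    a = gray[:gray.shape[0] - dy, :gray.shape[1] - dx]
    b = gray[dy:, dx:]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)        # count co-occurring pairs
    return m / m.sum()

def glcm_contrast(p):
    """One classic GLCM texture feature: contrast = sum p(i,j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

flat = np.zeros((8, 8), dtype=int)                 # uniform patch
stripes = np.tile([0, 3], (8, 4))                  # alternating gray levels
```

A uniform patch has zero contrast, while alternating columns of levels 0 and 3 give a contrast of (0 − 3)² = 9 for the horizontal offset.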
The morphological features mainly include the area, the perimeter-to-area ratio, the longest diameter, etc. of the region concerned. For example, for each single-channel region image of a cell nucleus, the area, perimeter-to-area ratio, longest diameter, etc. can be calculated, thereby obtaining the morphological features of the cell nucleus.
The color statistics features mainly include the minimum, mean absolute deviation, median, variance, energy, total energy, kurtosis, and skewness of the image gray values, and may also include the maximum, range, 10th percentile, 90th percentile, interquartile range, etc., as determined by actual requirements. The calculation formulas for these parameters can be found in the related literature and are not detailed here.
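These first-order statistics can be sketched with NumPy (the exact definitions of "energy", skewness, and kurtosis below follow common radiomics conventions and are assumptions of this sketch):

```python
import numpy as np

def color_statistics(gray):
    """A few of the first-order statistics named in the text."""
    g = np.asarray(gray, dtype=float).ravel()
    return {
        "min": g.min(),
        "max": g.max(),
        "range": g.max() - g.min(),
        "mean": g.mean(),
        "mean_abs_dev": np.abs(g - g.mean()).mean(),
        "median": np.median(g),
        "variance": g.var(),
        "energy": (g ** 2).sum(),                       # sum of squared values
        "p10": np.percentile(g, 10),
        "p90": np.percentile(g, 90),
        "iqr": np.percentile(g, 75) - np.percentile(g, 25),
        "skewness": ((g - g.mean()) ** 3).mean() / g.std() ** 3,
        "kurtosis": ((g - g.mean()) ** 4).mean() / g.var() ** 2,
    }

stats = color_statistics([10, 10, 20, 20])
```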
The local binary pattern statistical histogram feature (i.e., the LBP statistical histogram feature) is calculated based mainly on a preset radius and the surrounding pixels sampled at that radius. For example, with the preset radius r taking values from 1 to 9 and 8r surrounding pixels sampled, a rotation-invariant local binary pattern is obtained, from which the statistical histogram feature is computed.
The fractal dimension features are calculated based on a preset number of gray value thresholds, preferably 8. For example, in one embodiment, the 8 gray value thresholds may be obtained as follows: all pixels of the scaled minimum rectangular region rnor in each single channel are classified into 9 classes using a Gaussian Mixture model; the 9 classes of pixels are sorted from small to large by the mean of their gray values; and the interval between the means of each pair of adjacent classes is divided in proportion to the variances of the two classes, the division point being the gray value threshold between the adjacent classes.
Each gray value threshold yields 6 features: the above-threshold area, the above-threshold mean, the above-threshold fractal dimension, the between-adjacent-thresholds area, the between-adjacent-thresholds mean, and the between-adjacent-thresholds fractal dimension. Illustratively, taking the first gray value threshold g1 as an example, the current image is divided into the part above g1 and the part not above g1, and the area, mean, and fractal dimension of the part above g1 give the 3 above-threshold features. For the first gray value threshold g1 and the second gray value threshold g2, the image can be divided into three parts; the pixels with gray values between g1 and g2 are taken, and their area, mean, and fractal dimension give the 3 between-adjacent-thresholds features. The 6 features thus obtained are combined to form the features of the first gray value threshold g1; the other gray value thresholds are handled similarly and are therefore not described in detail.
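The patent does not spell out how the fractal dimension itself is computed; a common choice is the box-counting estimator, sketched here for a binary region:

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting estimate of the fractal dimension of a binary mask."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        trimmed = mask[:n - n % s, :n - n % s]
        # Number of s-by-s boxes containing at least one foreground pixel.
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, -1, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(max(int(boxes.sum()), 1))
        s //= 2
    # The dimension is the slope of log(count) against log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

filled = np.ones((64, 64), dtype=bool)    # a filled square: dimension ~2
d = box_counting_dimension(filled)
```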
It will be appreciated that by extracting the 5 main features of the texture feature, morphological feature, color statistics feature, local binary pattern statistics histogram feature and fractal dimension feature of the cell nucleus, each single-channel image feature of the cell nucleus stored in vector form or matrix form can be obtained.
Step S22: multiply any two single-channel image features element-wise to obtain the first-type multi-channel image features, multiply the three single-channel image features element-wise to obtain the second-type multi-channel image features, and combine the first-type and second-type multi-channel image features to obtain the multi-channel image features of the cell nucleus.
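For instance, with made-up three-element feature vectors per channel (the values are purely illustrative):

```python
import numpy as np

# Hypothetical per-channel feature vectors of one nucleus (H, E, O channels).
f_h = np.array([1.0, 2.0, 3.0])
f_e = np.array([4.0, 5.0, 6.0])
f_o = np.array([7.0, 8.0, 9.0])

# First-type multi-channel features: element-wise products of channel pairs.
pairwise = [f_h * f_e, f_h * f_o, f_e * f_o]
# Second-type multi-channel features: element-wise product of all three.
triple = f_h * f_e * f_o
# Combined multi-channel feature vector of the nucleus.
multi = np.concatenate(pairwise + [triple])
```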
Then, after each single-channel image feature and multi-channel image feature of the cell nucleus are obtained, the cell nucleus classification can be performed in the next step.
And S3, inputting the single-channel image features and the multi-channel image features of the corresponding cell nuclei into a cell nucleus classification model to classify the cell nuclei, and determining the epithelial cell nuclei in the pathological staining image according to the classification result.
The step S3 is mainly used for classifying the nuclei, so as to identify the epithelial nuclei in the pathological staining image. Wherein the classification includes epithelial cell nuclei and non-epithelial cell nuclei, which may include stromal cell nuclei and other types of cell nuclei, and the like.
Preferably, the cell nucleus classification model obtained through prior training is constructed based on a logistic regression model. Further preferably, the logistic regression model employs L1 regularization, with the following optimization objective:
min over β of: −(1/N) Σ_{i=1..N} [ y_i·log σ(x_i·β) + (1 − y_i)·log(1 − σ(x_i·β)) ] + λ Σ_{j=1..P} |β_j|

where σ(·) is the sigmoid function; N represents the total number of cell nuclei input into the logistic regression model; P represents the number of all input features; y_i represents the true value of the i-th cell nucleus classification; x_i represents the input features of the i-th cell nucleus; β represents the coefficients of all input features; β_j represents the coefficient of the j-th input feature; and λ is the L1 regularization penalty coefficient.
Illustratively, L1 regularization may penalize some of the input feature coefficients to exactly 0, preserving the feature values most relevant to the prediction outcome. The logistic regression model constructed with L1 regularization can thus screen the many input image features before classification, retaining the effective image features favorable for classifying cell nuclei. This greatly reduces the computation amount and computation time and improves the classification efficiency.
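In practice such a model is usually fitted with an off-the-shelf solver (e.g. scikit-learn's LogisticRegression with an L1 penalty); the sparsity-inducing behaviour can be sketched with a small proximal-gradient (ISTA) fit on synthetic data (all names and data here are illustrative, not from the patent):

```python
import numpy as np

def l1_logistic_fit(x, y, lam=0.1, lr=0.1, steps=2000):
    """L1-regularized logistic regression via proximal gradient descent (ISTA).

    Minimizes -(1/N) * sum(y*log(s) + (1-y)*log(1-s)) + lam * sum(|beta_j|),
    matching the objective above (no intercept term, for brevity).
    """
    n, p = x.shape
    beta = np.zeros(p)
    for _ in range(steps):
        s = 1.0 / (1.0 + np.exp(-x @ beta))        # sigmoid predictions
        beta -= lr * (x.T @ (s - y) / n)           # gradient step on the loss
        # Soft-thresholding: the proximal operator of the L1 penalty; this is
        # what drives irrelevant coefficients toward exactly zero.
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
    return beta

# Synthetic data: only the first of 5 features determines the label.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 5))
y = (x[:, 0] > 0).astype(float)
beta = l1_logistic_fit(x, y)
```

The fitted coefficient vector keeps a large weight on the informative feature while the four noise features are shrunk to (near) zero, which is exactly the screening behaviour described above.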
In one embodiment, the process of constructing the logistic regression model mainly includes the steps of:
a. normalizing the input image features, and deleting the image features containing infinite values and missing values;
b. Dividing the samples into a test set and a training set by adopting a hierarchical random sampling method;
c. using a k-fold cross validation method to evaluate the prediction accuracy of the logistic regression model under different penalty coefficients, and selecting a penalty coefficient corresponding to the optimal prediction accuracy from the prediction accuracy;
d. c, constructing a logistic regression model to be trained by adopting the penalty coefficient selected in the step c, and fitting all training sets to be used as a constructed prediction model;
e. the test set is predicted using the constructed prediction model, which is evaluated using two parameters, classification accuracy and AUC value (Area under the Curve of ROC, i.e., the area under the ROC curve).
In one example, after cell nuclei were segmented from 46 pathology staining images stained with hematoxylin and eosin at the same magnification, 895 nuclei were randomly drawn from the segmented nuclei for labeling, comprising 355 epithelial cell nuclei and 540 stromal cell nuclei. The classification accuracy of the constructed prediction model for the epithelial cell nuclei in the test set reaches 90.2490±0.0016%, and the AUC value reaches 96.1103±0.0003%. Fig. 6 shows the ROC curve of the prediction model, indicating that the classification of epithelial cell nuclei is highly accurate and that the model enables effective classification of epithelial cell nuclei.
Using the prediction model constructed as described above as the cell nucleus classification model, the single-channel image features and multi-channel image features obtained in step S2 are input into the cell nucleus classification model, and the classification result of each cell nucleus, either epithelial or non-epithelial, is output. The epithelial cell nuclei are then labeled in the color image according to the classification result of each cell nucleus, yielding the schematic diagram of epithelial cell nucleus segmentation shown in fig. 7.
The method for segmenting epithelial cell nuclei in a prostate cancer pathological image of this embodiment realizes automatic segmentation in three main steps. First, the pathological image is converted into a color space that enhances the difference between the cell nuclei and the background, facilitating cell nucleus segmentation. Then, each cell nucleus in the resulting cell nucleus segmentation color image undergoes region segmentation and region image feature extraction, the extracted features mainly comprising morphological features, texture features, color statistics features, LBP histogram statistics features, fractal dimension features, and the like. Finally, these 5 main feature classes are input into the trained cell nucleus classification model to accurately judge whether a cell nucleus is an epithelial cell nucleus. The method addresses the difficulty in the prior art of accurately segmenting epithelial cell nuclei in the prostate and can greatly improve the accuracy of judging the severity of prostate cancer and the like. In addition, during cell nucleus segmentation, morphological processing can filter out noisy nuclei, and a morphological image segmentation algorithm such as the watershed can further automatically segment adjacent and overlapping nuclei, greatly improving the segmentation accuracy of cell nuclei in the prostate cancer pathological image and facilitating the subsequent segmentation of epithelial cell nuclei.
Example 2
Referring to fig. 8, based on the method of the above embodiment 1, the present embodiment provides a segmentation apparatus 10 for epithelial nuclei in a pathological image of prostate cancer, comprising:
the cell nucleus segmentation module 110 is configured to perform color space conversion on the obtained pathology staining image, and obtain a cell nucleus segmentation color image after performing cell nucleus segmentation based on a single-channel image of the color image obtained by the conversion;
the cell nucleus feature extraction module 120 is configured to perform region segmentation on an initial size image of each cell nucleus in each single-channel image of the cell nucleus segmentation color image and a scaled image scaled to a preset fixed size to obtain a corresponding single-channel region image, perform feature extraction on the single-channel region image to obtain single-channel image features of corresponding cell nuclei, and obtain corresponding multi-channel image features based on the single-channel image features;
the epithelial cell nucleus classification module 130 is configured to input the single-channel image feature and the multi-channel image feature of the corresponding cell nucleus into a cell nucleus classification model for cell nucleus classification, and determine epithelial cell nuclei in the pathology staining image according to the classification result.
It will be appreciated that the above segmentation apparatus 10 for epithelial cell nuclei in a prostate cancer pathology image corresponds to the method of embodiment 1. Any of the alternatives in embodiment 1 are also applicable to this embodiment and are not described in detail here.
The invention also provides a terminal, such as a computer, comprising a memory and a processor. The memory stores a computer program, and by running the computer program the processor causes the terminal device to execute the functions of the above method for segmenting epithelial cell nuclei in a prostate cancer pathological image, or of each module of the above apparatus for segmenting epithelial cell nuclei in a prostate cancer pathological image.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system, at least one application program required for a function; the storage data area may store data created according to the use of the terminal, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The present invention also provides a computer readable storage medium storing the computer program for use in the above terminal.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, of the flow diagrams and block diagrams in the figures, which illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules or units in various embodiments of the invention may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention.

Claims (9)

1. A method for segmenting an epithelial cell nucleus in a pathological image of prostate cancer, comprising:
performing color space conversion on the obtained pathological staining image, and obtaining a cell nucleus segmentation color image after cell nucleus segmentation based on a single-channel image of the color image obtained by conversion;
the method comprises the steps of respectively carrying out region segmentation on an initial size image of each cell nucleus in each single-channel image of the cell nucleus segmentation color image and a scaled image scaled to a preset fixed size to obtain a corresponding single-channel region image, carrying out feature extraction on the single-channel region image to obtain single-channel image features of corresponding cell nuclei, and obtaining corresponding multi-channel image features based on the single-channel image features;
inputting the single-channel image features and the multi-channel image features of the corresponding cell nuclei into a cell nucleus classification model to classify the cell nuclei, and determining the epithelial cell nuclei in the pathological staining image according to the classification result;
Each cell nucleus segmented area comprises an inner area, an inner neighborhood, an outer area and a minimum rectangular area containing the outer area of the cell nucleus, wherein the single-channel area image comprises three single-channel area images corresponding to the un-zoomed inner area, the inner neighborhood, the outer neighborhood and the minimum rectangular area of the initial size image, and three single-channel area images corresponding to the zoomed inner area, the inner neighborhood, the outer area and the minimum rectangular area of the zoomed image;
the feature extraction of the single-channel region image to obtain a single-channel image feature of a corresponding cell nucleus, and the obtaining of the corresponding multi-channel image feature based on the single-channel image feature comprise:
extracting first-type features from the three single-channel region images of each of the un-zoomed inner region, inner neighborhood, outer neighborhood, and minimum rectangular region, and of each of the zoomed inner region, inner neighborhood, and outer neighborhood, and extracting second-type features from the three single-channel region images of the zoomed minimum rectangular region, so as to obtain each single-channel image feature of the cell nucleus;
Multiplying the corresponding elements of any two single-channel image features to obtain a first-type multi-channel image feature, multiplying the corresponding elements of three single-channel image features to obtain a second-type multi-channel image feature, and combining the first-type multi-channel image feature and the second-type multi-channel image feature to obtain the multi-channel image feature of the cell nucleus.
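By way of a non-limiting illustration, the element-wise combination of single-channel image features described above can be sketched as follows (NumPy is assumed; the three feature vectors are invented placeholder values standing in for per-channel features of one nucleus):

```python
import numpy as np

# Hypothetical per-channel feature vectors (e.g. R, G, B) for one cell nucleus;
# the values are placeholders, not taken from the patent.
f_r = np.array([1.0, 2.0, 3.0])
f_g = np.array([4.0, 5.0, 6.0])
f_b = np.array([7.0, 8.0, 9.0])

# First-type multi-channel features: element-wise product of every pair of channels.
pairwise = [a * b for a, b in [(f_r, f_g), (f_r, f_b), (f_g, f_b)]]

# Second-type multi-channel feature: element-wise product of all three channels.
triple = f_r * f_g * f_b

# Combined multi-channel feature vector of the nucleus.
multi_channel = np.concatenate(pairwise + [triple])
```

Three channels with P features each thus yield 3 pairwise products plus 1 triple product, i.e. 4P combined multi-channel features per nucleus.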
2. The method of claim 1, wherein acquiring the nucleus segmented color image after the nucleus segmentation based on a single channel image of the converted color image comprises:
selecting, from the color image obtained by the conversion, the single-channel image that maximizes the contrast between the cell nuclei and the background;
performing Gaussian smoothing on the selected single-channel image, and then detecting cell nucleus edge pixels by using an edge detection algorithm to obtain the gray values of the cell nucleus edge pixels;
calculating a gray-value threshold by using a threshold segmentation algorithm according to the obtained gray values of the cell nucleus edge pixels, and judging that a pixel belongs to a cell nucleus if its gray value in the selected single-channel image is larger than the gray-value threshold; and
acquiring the coordinates of each cell nucleus in the selected single-channel image, and mapping the coordinates into the color image to obtain the cell nucleus segmentation color image.
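An illustrative sketch of the edge-guided thresholding above, on a synthetic image (the Gaussian sigma, the gradient-magnitude edge detector, and the use of the mean edge-pixel intensity as the threshold are all assumptions; the claim does not fix the particular edge-detection or threshold-segmentation algorithm):

```python
import numpy as np
from scipy import ndimage

# Synthetic single-channel image: one bright "nucleus" disc on a darker background,
# standing in for the channel with maximal nucleus/background contrast.
yy, xx = np.mgrid[:64, :64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2, 200.0, 50.0)

# Step 1: Gaussian smoothing of the selected single-channel image.
smooth = ndimage.gaussian_filter(img, sigma=2)

# Step 2: detect edge pixels; the gradient magnitude is an assumed stand-in
# for the unspecified edge-detection algorithm.
gy, gx = np.gradient(smooth)
grad = np.hypot(gx, gy)
edges = grad > 0.5 * grad.max()

# Step 3: derive the gray-value threshold from the edge pixels' intensities
# (their mean, as an assumed threshold rule) and classify brighter pixels as nucleus.
threshold = smooth[edges].mean()
mask = smooth > threshold
```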
3. The method of claim 2, wherein the acquiring coordinates of each cell nucleus in the selected single channel image is preceded by:
performing morphological processing on the cell nuclei obtained by segmentation in the selected single-channel image;
counting the areas of all cell nuclei and calculating an area threshold for a single cell nucleus, the area threshold being used to filter out false-positive cell nuclei whose area is smaller than the area threshold; and
segmenting adjacent cell nuclei and overlapping cell nuclei based on a morphological image segmentation algorithm.
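The area-threshold filtering of false-positive nuclei described above might look like the following sketch (SciPy's `ndimage` is assumed, and the fixed `area_threshold` value is an invented placeholder for the statistically derived threshold):

```python
import numpy as np
from scipy import ndimage

# Binary segmentation with one plausible nucleus and one tiny false positive.
mask = np.zeros((40, 40), dtype=bool)
mask[10:22, 10:22] = True   # 144-pixel nucleus
mask[30:32, 30:32] = True   # 4-pixel false-positive speck

# Label connected components and measure their areas.
labels, n = ndimage.label(mask)
areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))

# Keep only components at least as large as the area threshold
# (a fixed placeholder; the patent derives it from the area statistics of all nuclei).
area_threshold = 20
keep = np.isin(labels, np.flatnonzero(areas >= area_threshold) + 1)
```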
4. The method of claim 3, wherein the "segmenting adjacent nuclei and overlapping nuclei based on a morphological image segmentation algorithm" comprises:
calculating the shortest distance from each foreground pixel of the currently adjacent or overlapping cell nuclei to a background pixel, and setting the distance of each background pixel to zero to obtain a distance map;
selecting, on the distance map, a plurality of points whose distance values are local minima as bottom points;
expanding a region from each bottom point as a starting point, with a preset step length, until a boundary between two adjacent expanded regions is obtained; and
performing cell nucleus segmentation on the currently adjacent or overlapping cell nuclei according to the boundary.
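The distance-map flooding described above is essentially a marker-controlled watershed; a sketch under that reading (using SciPy's `watershed_ift` on synthetic overlapping nuclei, with an assumed local-extremum window size) is:

```python
import numpy as np
from scipy import ndimage

# Two overlapping "nuclei" as a binary foreground mask (synthetic data).
yy, xx = np.mgrid[:60, :80]
fg = ((yy - 30) ** 2 + (xx - 25) ** 2 < 14 ** 2) | ((yy - 30) ** 2 + (xx - 50) ** 2 < 14 ** 2)

# Distance map: shortest distance from each foreground pixel to the background;
# background pixels are zero by construction.
dist = ndimage.distance_transform_edt(fg)

# "Bottom points": the deepest interior points of each nucleus, i.e. local extrema
# of the distance map (an 11x11 window is an assumed neighbourhood size).
peaks = (dist == ndimage.maximum_filter(dist, size=11)) & fg
markers, _ = ndimage.label(peaks)

# Grow a region from each bottom point until neighbouring regions meet;
# watershed_ift floods an integer relief (here the inverted distance map).
relief = (dist.max() - dist).astype(np.uint16)
labels = ndimage.watershed_ift(relief, markers.astype(np.int32))
labels[~fg] = 0  # restrict the segmentation to the foreground
```

The boundary between the two expanded regions then separates the overlapping nuclei into distinct labels.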
5. The method of claim 1, wherein the first-type features include texture features, morphological features, and color statistics features, and the second-type features include local binary pattern statistical histogram features and fractal dimension features;
the texture features comprise a gray-level co-occurrence matrix, a gray-level size-zone matrix, a gray-level run-length matrix, a neighborhood gray-tone difference matrix, and a gray-level dependence matrix;
the morphological features comprise the area, the perimeter-to-area ratio, and the longest diameter of the target region;
the color statistics features comprise the minimum, mean absolute deviation, median, variance, energy, total energy, kurtosis, and skewness of the gray values of the image;
the local binary pattern statistical histogram features are calculated based on a preset radius and the surrounding pixels determined by the preset radius;
the fractal dimension features are calculated based on a preset number of thresholds, and comprise, for each threshold, the area above the threshold, the average above the threshold, and the fractal dimension above the threshold, as well as the area between adjacent thresholds, the average between adjacent thresholds, and the fractal dimension between adjacent thresholds.
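As a sketch of a subset of the first-type color statistics features listed above (the gray values are random placeholder data; total energy is omitted since it additionally depends on pixel spacing):

```python
import numpy as np
from scipy import stats

# Placeholder gray values inside one nucleus region.
rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=500).astype(float)

# A subset of the color statistics features named in the claim.
features = {
    "minimum": gray.min(),
    "mean_absolute_deviation": np.mean(np.abs(gray - gray.mean())),
    "median": np.median(gray),
    "variance": gray.var(),
    "energy": np.sum(gray ** 2),   # sum of squared gray values
    "kurtosis": stats.kurtosis(gray),
    "skewness": stats.skew(gray),
}
```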
6. The method of claim 1, wherein the cell nucleus classification model is constructed based on a logistic regression model, the optimization function of which is constructed using L1 regularization as follows:
min_β { −(1/N) · Σ_{i=1}^{N} [ y_i·log σ(x_i^T β) + (1 − y_i)·log(1 − σ(x_i^T β)) ] + λ · Σ_{j=1}^{P} |β_j| },  where σ(z) = 1/(1 + e^{−z})
wherein N represents the total number of cell nuclei input into the logistic regression model, and P represents the number of all input features; y_i represents the true value of the classification of the i-th cell nucleus; x_i represents the input features of the i-th cell nucleus; β represents the coefficients of all input features; β_j represents the coefficient of the j-th input feature; and λ is the L1 regularization penalty coefficient.
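The L1-regularized logistic regression objective above can be written directly in NumPy (a sketch; `l1_logistic_loss` and the toy data are illustrative, and a small epsilon is added for numerical stability):

```python
import numpy as np

def l1_logistic_loss(beta, X, y, lam):
    """L1-regularized logistic regression objective for nucleus classification.

    X: (N, P) input features; y: (N,) true labels in {0, 1};
    beta: (P,) feature coefficients; lam: L1 penalty coefficient.
    """
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # sigmoid of the linear predictor
    eps = 1e-12                             # guard against log(0)
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return nll + lam * np.sum(np.abs(beta))

# Toy check: with all-zero coefficients the loss is the chance-level log(2).
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 0.0])
loss0 = l1_logistic_loss(np.zeros(2), X, y, lam=0.1)
```

The L1 penalty drives the coefficients of uninformative features to exactly zero, which is why it is a common choice for selecting among many extracted image features.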
7. A segmentation apparatus for epithelial nuclei in a pathological image of prostate cancer, comprising:
the cell nucleus segmentation module is used for carrying out color space conversion on the obtained pathological staining image, and obtaining a cell nucleus segmentation color image after carrying out cell nucleus segmentation on the single-channel image of the color image obtained by conversion;
the cell nucleus feature extraction module is used for respectively carrying out region segmentation on an initial size image of each cell nucleus in each single-channel image of the cell nucleus segmentation color image and a zoom image zoomed to a preset fixed size to obtain a corresponding single-channel region image, carrying out feature extraction on the single-channel region image to obtain single-channel image features of corresponding cell nuclei, and obtaining corresponding multi-channel image features based on the single-channel image features;
The epithelial cell nucleus classification module is used for inputting the single-channel image features and the multi-channel image features of the corresponding cell nuclei into a cell nucleus classification model to classify the cell nuclei, and determining the epithelial cell nuclei in the pathological staining image according to the classification result;
wherein the segmented area of each cell nucleus comprises an inner region, an inner neighborhood, an outer neighborhood, and a minimum rectangular region containing the outer neighborhood of the cell nucleus, and the single-channel region images comprise three single-channel region images corresponding to each of the un-zoomed inner region, inner neighborhood, outer neighborhood, and minimum rectangular region of the initial size image, and three single-channel region images corresponding to each of the zoomed inner region, inner neighborhood, outer neighborhood, and minimum rectangular region of the zoomed image;
the cell nucleus feature extraction module is used for carrying out feature extraction on the single-channel area image to obtain single-channel image features of corresponding cell nuclei, and obtaining corresponding multi-channel image features based on the single-channel image features comprises:
extracting first-type features from the three single-channel region images of each of the un-zoomed inner region, inner neighborhood, outer neighborhood, and minimum rectangular region, and of each of the zoomed inner region, inner neighborhood, and outer neighborhood, and extracting second-type features from the three single-channel region images of the zoomed minimum rectangular region, so as to obtain each single-channel image feature of the cell nucleus;
Multiplying the corresponding elements of any two single-channel image features to obtain a first-type multi-channel image feature, multiplying the corresponding elements of three single-channel image features to obtain a second-type multi-channel image feature, and combining the first-type multi-channel image feature and the second-type multi-channel image feature to obtain the multi-channel image feature of the cell nucleus.
8. A terminal, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the method for segmenting epithelial cell nuclei in a prostate cancer pathological image according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that it stores a computer program which, when executed, implements the method for segmenting epithelial cell nuclei in a prostate cancer pathological image according to any one of claims 1 to 6.
CN202010175593.0A 2020-03-13 2020-03-13 Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image Active CN111402267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010175593.0A CN111402267B (en) 2020-03-13 2020-03-13 Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image


Publications (2)

Publication Number Publication Date
CN111402267A CN111402267A (en) 2020-07-10
CN111402267B true CN111402267B (en) 2023-06-16

Family

ID=71430776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010175593.0A Active CN111402267B (en) 2020-03-13 2020-03-13 Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image

Country Status (1)

Country Link
CN (1) CN111402267B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070722A (en) * 2020-08-14 2020-12-11 厦门骁科码生物科技有限公司 Fluorescence in situ hybridization cell nucleus segmentation method and system
CN112184696B (en) * 2020-10-14 2023-12-29 中国科学院近代物理研究所 Cell nucleus and organelle counting and area calculating method and system thereof
CN112446892A (en) * 2020-11-18 2021-03-05 黑龙江机智通智能科技有限公司 Cell nucleus segmentation method based on attention learning
CN113033287A (en) * 2021-01-29 2021-06-25 杭州依图医疗技术有限公司 Pathological image display method and device
CN113178228B (en) * 2021-05-25 2023-02-10 郑州中普医疗器械有限公司 Cell analysis method based on nuclear DNA analysis, computer device, and storage medium
CN113762395B (en) * 2021-09-09 2022-08-19 深圳大学 Pancreatic bile duct type ampulla carcinoma classification model generation method and image classification method
CN113763370B (en) * 2021-09-14 2024-09-06 佰诺全景生物技术(北京)有限公司 Digital pathology image processing method and device, electronic equipment and storage medium
CN116580216B (en) * 2023-07-12 2023-09-22 北京大学 Pathological image matching method, device, equipment and storage medium
CN116959712B (en) * 2023-07-28 2024-06-21 成都市第三人民医院 Lung adenocarcinoma prognosis method, system, equipment and storage medium based on pathological image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020167A (en) * 2012-11-26 2013-04-03 南京大学 Chinese text classification method for computer
CN110415255A (en) * 2019-06-14 2019-11-05 广东省人民医院(广东省医学科学院) A kind of immunohistochemistry pathological image CD3 positive nucleus dividing method and system
CN110517273A (en) * 2019-08-29 2019-11-29 麦克奥迪(厦门)医疗诊断系统有限公司 Cytology image partition method based on dynamic gradient threshold value


Also Published As

Publication number Publication date
CN111402267A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN111402267B (en) Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image
CN110110799B (en) Cell sorting method, cell sorting device, computer equipment and storage medium
CN110120040B (en) Slice image processing method, slice image processing device, computer equipment and storage medium
Bejnordi et al. Automated detection of DCIS in whole-slide H&E stained breast histopathology images
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
CN111145209B (en) Medical image segmentation method, device, equipment and storage medium
CN106780522B (en) A kind of bone marrow fluid cell segmentation method based on deep learning
CN111462042A (en) Cancer prognosis analysis method and system
Van Zon et al. Segmentation and classification of melanoma and nevus in whole slide images
Atupelage et al. Computational hepatocellular carcinoma tumor grading based on cell nuclei classification
Xu et al. Using transfer learning on whole slide images to predict tumor mutational burden in bladder cancer patients
CN112990214A (en) Medical image feature recognition prediction model
CN111784711A (en) Lung pathology image classification and segmentation method based on deep learning
CN117252893B (en) Segmentation processing method for breast cancer pathological image
Chatterjee et al. A novel method for IDC prediction in breast cancer histopathology images using deep residual neural networks
Nateghi et al. Automatic detection of mitosis cell in breast cancer histopathology images using genetic algorithm
KR102373985B1 (en) Classification method of prostate cancer using support vector machine
CN113160185A (en) Method for guiding cervical cell segmentation by using generated boundary position
KR20240012738A (en) Cluster analysis system and method of artificial intelligence classification for cell nuclei of prostate cancer tissue
Gonzalez et al. Solving the over segmentation problem in applications of Watershed Transform
Singh et al. A robust her2 neural network classification algorithm using biomarker-specific feature descriptors
CN113762395B (en) Pancreatic bile duct type ampulla carcinoma classification model generation method and image classification method
CN104933723A (en) Tongue image segmentation method based on sparse representation
Lal et al. A robust method for nuclei segmentation of H&E stained histopathology images
Teverovskiy et al. Improved prediction of prostate cancer recurrence based on an automated tissue image analysis system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 107, Yanjiang West Road, Yuexiu District, Guangzhou, Guangdong 510000

Applicant after: SUN YAT-SEN MEMORIAL HOSPITAL, SUN YAT-SEN University

Applicant after: Shenzhen Huajia Biological Intelligence Technology Co.,Ltd.

Address before: No. 107 Yanjiang West Road, Tianhe District, Guangzhou, Guangdong Province, 510000

Applicant before: SUN YAT-SEN MEMORIAL HOSPITAL, SUN YAT-SEN University

Applicant before: Shenzhen Huajia Biological Intelligence Technology Co.,Ltd.

GR01 Patent grant