CN111402267A - Segmentation method, device and terminal for epithelial cell nucleus in prostate cancer pathological image - Google Patents
- Publication number
- CN111402267A CN111402267A CN202010175593.0A CN202010175593A CN111402267A CN 111402267 A CN111402267 A CN 111402267A CN 202010175593 A CN202010175593 A CN 202010175593A CN 111402267 A CN111402267 A CN 111402267A
- Authority
- CN
- China
- Prior art keywords
- image
- cell nucleus
- channel
- segmentation
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
- G06T7/45—Analysis of texture based on statistical description of texture using co-occurrence matrix computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Geometry (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
- Investigating Or Analysing Biological Materials (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the invention discloses a method, a device and a terminal for segmenting epithelial cell nuclei in prostate cancer pathological images. The method comprises the following steps: performing color space conversion on the obtained pathological staining image, and performing cell nucleus segmentation on the basis of a single-channel image of the converted color image; performing region segmentation on the initial-size image and a scaled image of each cell nucleus in each single-channel image of the nucleus-segmented color image to obtain single-channel region images, and performing feature extraction on each single-channel region image; and inputting the obtained single-channel image features and multi-channel image features of each cell nucleus into a cell nucleus classification model for classification, and determining the epithelial cell nuclei in the pathological staining image according to the classification result. The technical scheme of the invention addresses the difficulty of accurately segmenting epithelial cell nuclei in the prostate in the prior art, thereby improving the accuracy of pathological diagnosis and severity grading of prostate cancer.
Description
Technical Field
The invention relates to the technical field of pathological image processing, in particular to a segmentation method, a segmentation device and a segmentation terminal for an epithelial cell nucleus in a prostate cancer pathological image.
Background
The prostate is an important organ of the male genitourinary system. A normal prostate gland consists of a glandular lumen surrounded by epithelial cells, with the area between glands consisting of stroma and stromal cells. Prostate canceration mainly arises in the epithelial cells; their malignant expansion can shrink or even completely block the glandular lumen, seriously impairing prostate function and threatening the patient's health, quality of life and even life itself.
Prostate cancer is a malignant tumor of epithelial origin. In clinical practice, pathological diagnosis is the "gold standard", and pathological images play a central role: because staining agents (such as hematoxylin & eosin) undergo different chemical reactions with nucleic acids and proteins, the various structural regions of a tissue section show different colors. By observing the morphology of epithelial cells, nuclear atypia, and glandular structure and arrangement in pathological images, pathologists can grade the severity of a patient's prostate cancer and treat the patient accordingly. In practice, however, a single pathological image contains an enormous number of cells that are difficult to inspect exhaustively; meanwhile, the current shortage of pathologists, their long training period, high workload, uneven diagnostic skill and the subjectivity of diagnosis all affect the efficiency and accuracy of pathological diagnosis. An efficient, highly accurate automated method is therefore urgently needed.
Traditional image segmentation algorithms separate objects by the color difference between foreground and background, which preliminarily meets the requirement of segmenting nuclear regions in pathological images. However, since epithelial nuclei and stromal nuclei both contain nucleic acids, they show no significant color difference after specific binding with the stain, making them difficult to distinguish with traditional segmentation algorithms. In addition, adjacency and overlap between nuclei leave considerable room for improving the accuracy of nucleus segmentation.
Disclosure of Invention
In view of this, the embodiment of the present invention provides a segmentation method, a segmentation device, and a segmentation terminal for epithelial cell nuclei in a pathological image of prostate cancer, which can accurately classify epithelial cell nuclei in the pathological image of prostate cancer.
An embodiment of the present invention provides a method for segmenting epithelial cell nuclei in a pathological image of prostate cancer, including: performing color space conversion on the obtained pathological staining image, and obtaining a cell nucleus segmentation color image after performing cell nucleus segmentation on a single-channel image of the color image obtained by conversion;
respectively performing region segmentation on an initial-size image of each cell nucleus in each single-channel image of the cell nucleus segmentation color image and on a scaled image resized to a preset fixed size, to obtain corresponding single-channel region images; performing feature extraction on the single-channel region images to obtain single-channel image features of the corresponding cell nucleus, and obtaining corresponding multi-channel image features based on the single-channel image features;
inputting the single-channel image features and the multi-channel image features of the corresponding cell nuclei into a cell nucleus classification model for cell nucleus classification, and determining epithelial cell nuclei in the pathological staining image according to the classification result.
Further, in the above method for segmenting epithelial cell nuclei in a pathological image of prostate cancer, the obtaining a cell nucleus segmentation color image after cell nucleus segmentation based on a single-channel image of the color image obtained by conversion includes:
selecting a single-channel image which enables the difference between the cell nucleus and the background to be maximum from the color image obtained by conversion;
after the selected single-channel image is subjected to Gaussian smoothing processing, detecting the edge pixels of the cell nucleus by using an edge detection algorithm and obtaining the gray value of the edge pixels of the cell nucleus;
calculating a gray value threshold by utilizing a threshold segmentation algorithm according to the obtained gray value of the cell nucleus edge pixel, and if the gray value of the pixel in the selected single-channel image is greater than the gray value threshold, judging that the current pixel belongs to the cell nucleus;
and acquiring coordinates of each cell nucleus in the selected single-channel image, and mapping each coordinate to the color image to obtain a cell nucleus segmentation color image.
Further, in the above method for segmenting epithelial cell nuclei in a pathological image of prostate cancer, before the obtaining coordinates of each cell nucleus in the selected single-channel image, the method further includes:
performing morphological processing on the cell nuclei obtained by segmentation in the selected single-channel image;
then counting the areas of all cell nuclei and calculating an area threshold for a single cell nucleus, so as to filter out false-positive cell nuclei whose area is smaller than the area threshold;
and segmenting adjacent cell nuclei and overlapping cell nuclei based on a morphological image segmentation algorithm.
Further, in the above method for segmenting epithelial cell nuclei in a pathological image of prostate cancer, the "segmenting adjacent cell nuclei and overlapping cell nuclei based on a morphological image segmentation algorithm" includes:
calculating the shortest distance between each foreground pixel and a background pixel in the current adjacent cell nucleus or the overlapped cell nucleus, and setting the distance of the background pixel to be zero to obtain a distance mark map;
selecting a plurality of points with local minimum shortest distances from the background pixels on the distance mark map as bottom points;
respectively expanding the areas by using each bottom point as a respective starting point and using a preset step length until a boundary of two adjacent expanded areas is obtained;
and performing cell nucleus segmentation on the current adjacent cell nucleus or the overlapped cell nucleus according to the boundary.
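The four steps above (distance mark map, "bottom points", stepwise region expansion until adjacent regions meet) describe what is commonly implemented as a marker-based watershed on the distance transform. A minimal sketch under that reading, using scipy/scikit-image; the function name and `min_distance` parameter are illustrative, not taken from the patent:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_nuclei(mask, min_distance=10):
    """Split adjacent/overlapping nuclei in a binary mask with a
    distance-transform watershed (one reading of the steps above)."""
    # Shortest distance of each foreground pixel to the background;
    # background pixels are 0 (the "distance mark map").
    dist = ndi.distance_transform_edt(mask)
    # Maxima of the distance map play the role of the "bottom points"
    # of the inverted map, from which regions are expanded.
    coords = peak_local_max(dist, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    for i, (r, c) in enumerate(coords, start=1):
        markers[r, c] = i
    # Expand a region from every marker until neighboring regions meet;
    # the meeting line is the segmentation boundary.
    return watershed(-dist, markers, mask=mask)
```

Applied to a mask of two overlapping disks, this yields two labeled nuclei separated along the line where the expanded regions meet.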
Further, in the above method for segmenting an epithelial cell nucleus in a prostate cancer pathology image, each of the segmented regions of the cell nucleus includes an inner region, an inner and outer neighborhood, an outer region, and a minimum rectangular region including the outer region, and the single-channel region image includes three single-channel region images corresponding to the respective non-scaled inner region, inner and outer neighborhood, outer region, and minimum rectangular region of the initial size image, and three single-channel region images corresponding to the respective scaled inner region, inner and outer neighborhood, outer region, and minimum rectangular region of the scaled image;
the "extracting features of the single-channel region image to obtain single-channel image features of corresponding cell nuclei, and obtaining corresponding multi-channel image features based on the single-channel image features" includes:
performing first-class feature extraction on the three single-channel region images of the non-zoomed internal region, the internal and external neighborhoods and the external region and the three single-channel region images of the zoomed internal region, the internal and external neighborhoods and the external region, and performing second-class feature extraction on the three single-channel region images of the zoomed minimum rectangular region image to obtain the single-channel image features of the cell nucleus;
multiplying any two single-channel image features element-wise to obtain a first-class multi-channel image feature, multiplying the three single-channel image features element-wise to obtain a second-class multi-channel image feature, and combining the first-class and second-class multi-channel image features to obtain the multi-channel image features of the cell nucleus.
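The element-wise feature products described above can be sketched in a few lines of numpy; the per-channel feature-vector names are hypothetical:

```python
import numpy as np
from itertools import combinations

def multichannel_features(f_a, f_b, f_c):
    """Build multi-channel features from three single-channel feature
    vectors: pairwise element-wise products (first class) plus the
    three-way element-wise product (second class), concatenated."""
    channels = [np.asarray(f_a), np.asarray(f_b), np.asarray(f_c)]
    first_class = [u * v for u, v in combinations(channels, 2)]   # 3 products
    second_class = channels[0] * channels[1] * channels[2]        # 1 product
    return np.concatenate(first_class + [second_class])
```

For feature vectors of length d this yields a 4·d-dimensional multi-channel feature vector.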
Further, in the above method for segmenting epithelial cell nuclei in a pathological image of prostate cancer, the first class of features includes texture features, morphological features and color statistics, and the second class of features includes local binary pattern statistical histogram features and fractal dimension features;
the texture features comprise a gray-level co-occurrence matrix, a gray-level size-zone matrix, a gray-level run-length matrix, a neighborhood gray-tone difference matrix and a gray-level dependence matrix;
the morphological characteristics include area, perimeter to area ratio, and longest diameter of the object region;
the color statistics comprise the minimum, mean, mean absolute deviation, median, variance, energy, total energy, kurtosis and skewness of the image gray values;
the local binary pattern statistical histogram feature is calculated based on a preset radius and the surrounding pixels sampled at that radius;
the fractal dimension features are calculated over a preset number of thresholds and, for each threshold, comprise the fractal dimension of the region above the threshold, the fractal dimension of the mean above the threshold, the region between adjacent thresholds, the mean between adjacent thresholds and the fractal dimension between adjacent thresholds.
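As an illustration of the LBP statistical-histogram feature, a sketch using scikit-image's `local_binary_pattern`; the radius, number of sampling points and the "uniform" method here are assumptions, not the patent's presets:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, radius=1, n_points=8):
    """LBP statistical-histogram feature of one single-channel region
    image: compute per-pixel LBP codes, then a normalized histogram."""
    codes = local_binary_pattern(gray, n_points, radius, method="uniform")
    # The "uniform" variant produces n_points + 2 distinct codes.
    hist, _ = np.histogram(codes, bins=np.arange(n_points + 3), density=True)
    return hist
```

The resulting fixed-length histogram can be concatenated with the other single-channel features of a nucleus.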
Further, in the above method for segmenting epithelial cell nuclei in a pathological image of prostate cancer, the cell nucleus classification model is constructed based on a logistic regression model, and the logistic regression model adopts L1 regularization, with the optimization objective (in standard form):

min_β −(1/N) Σ_{i=1..N} [ y_i·log σ(β·x_i) + (1 − y_i)·log(1 − σ(β·x_i)) ] + λ Σ_{j=1..P} |β_j|

where σ(·) is the sigmoid function; N represents the total number of cell nuclei input into the logistic regression model; P represents the number of all input features; y_i represents the true value of the i-th nucleus classification; x_i represents the input features of the i-th cell nucleus; β represents the coefficients of all input features, with β_j the coefficient of the j-th input feature; and λ is the penalty coefficient of the L1 regularization.
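A hedged sketch of such an L1-regularized nucleus classifier using scikit-learn; the solver, the value of C (which plays the role of 1/λ) and the synthetic data are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for per-nucleus feature vectors and
# epithelial (1) / non-epithelial (0) labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# L1-penalized logistic regression; liblinear supports the L1 penalty.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, y)

# The L1 penalty drives coefficients of uninformative features toward zero,
# acting as an implicit feature selector over the many extracted features.
n_zero = int(np.sum(clf.coef_ == 0))
```

In practice X would hold the single-channel and multi-channel image features of each nucleus and y the pathologist-provided labels.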
Another embodiment of the present invention provides an apparatus for segmenting epithelial cell nuclei in a pathological image of prostate cancer, including:
the cell nucleus segmentation module is used for carrying out color space conversion on the obtained pathological staining image, and obtaining a cell nucleus segmentation color image after carrying out cell nucleus segmentation on a single-channel image of the color image obtained by conversion;
the cell nucleus feature extraction module is used for respectively carrying out region segmentation on an initial size image of each cell nucleus and a zoomed image zoomed to a preset fixed size in each single-channel image of the cell nucleus segmented color image to obtain a corresponding single-channel region image, carrying out feature extraction on the single-channel region image to obtain single-channel image features of the corresponding cell nucleus, and obtaining corresponding multi-channel image features based on the single-channel image features;
and the epithelial cell nucleus classification module is used for inputting the single-channel image characteristics and the multi-channel image characteristics of the corresponding cell nucleus into a cell nucleus classification model for cell nucleus classification, and determining the epithelial cell nucleus in the pathological staining image according to the classification result.
Another embodiment of the present invention provides a terminal, including: a processor and a memory, the memory storing a computer program for execution by the processor to perform the above-described method of segmentation of epithelial nuclei in prostate cancer pathology images.
Yet another embodiment of the invention proposes a computer-readable storage medium storing a computer program which, when executed, implements a method for segmentation of epithelial nuclei in prostate cancer pathology images according to the above.
The technical scheme of the embodiment of the invention has the following beneficial effects:
the method provided by the embodiment of the invention realizes the automatic segmentation of the epithelial cell nucleus in the prostate cancer pathological image by adopting three steps, namely, the pathological image is converted into a color space which enhances the difference between the cell nucleus and the background, so that the cell nucleus segmentation can be conveniently carried out, the cell nucleus segmentation is carried out on the basis of a single-channel image of a converted color image, then different region segmentation and region image characteristic extraction are carried out on each cell nucleus in different single-channel images of the obtained cell nucleus segmented color image, such as morphological characteristics, texture characteristics, color statistic characteristics, L BP statistic histogram value characteristics, fractal dimension characteristics and the like, and finally the characteristics are input into a trained cell nucleus classification model, so that whether the cell nucleus is the epithelial cell nucleus or not is accurately judged.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
FIG. 1 is a flow chart illustrating a method for segmenting epithelial nuclei in a pathological image of prostate cancer according to an embodiment of the present invention;
FIG. 2 illustrates a first flow diagram of nuclear segmentation in prostate cancer pathology images in accordance with an embodiment of the present invention;
FIG. 3 illustrates a second flow diagram of nuclear segmentation in prostate cancer pathology images in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of a nuclear segmentation color image according to an embodiment of the present invention;
FIG. 5 is a flow chart of the extraction of the nuclear features in the pathological image of prostate cancer according to the embodiment of the present invention;
FIG. 6 illustrates ROC curves for a prediction model test of an embodiment of the present invention;
FIG. 7 is a diagram illustrating segmentation of an epithelial nucleus according to embodiments of the present invention;
fig. 8 is a schematic structural diagram of a segmentation apparatus for epithelial cell nuclei in a pathological image of prostate cancer according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having", and their derivatives, which may be used in various embodiments of the present invention, are only intended to indicate specific features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be construed as first excluding the existence of, or adding to, one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
Example 1
Referring to fig. 1, the present embodiment provides a segmentation method for epithelial cell nuclei in a pathological image of prostate cancer, which can be applied to processing the pathological image of prostate cancer, and particularly includes analyzing the morphology, texture, color, and the like of epithelial cell nuclei in the pathological image, so as to effectively predict prostate cancer.
The method for segmenting epithelial cell nuclei in the pathological image of prostate cancer will be described in detail below.
And step S1, performing color space conversion on the obtained pathological staining image, and obtaining a cell nucleus segmentation color image after performing cell nucleus segmentation by using a single-channel image of the color image obtained by conversion.
This step S1 is mainly used for segmentation of nuclei in the prostate, i.e. identification of nuclei in the pathologically stained image. In the present embodiment, in consideration of the fact that pathological images may have different degrees of staining, the pathological staining image may be obtained by performing color normalization preprocessing on the pathological image stained with a staining agent (such as hematoxylin & eosin). Therefore, the influence on the accuracy of the segmentation result caused by different coloring conditions can be reduced. Preferably, the color normalization may be performed using a histogram matching method.
Exemplarily, the color normalization process based on the histogram matching method mainly includes the following sub-steps:
a. selecting a pathological image with good staining quality as the standard image and calculating its cumulative histogram; staining quality can be judged from practitioners' experience, e.g. whether image details are sufficiently clear and colors are saturated and rich;
b. calculating a cumulative histogram of a pathological image to be standardized;
c. for each gray level A of the pathological image to be standardized and each gray level B of the standard image: when the cumulative probabilities of the i-th gray level A_i and the j-th gray level B_j are the closest, marking A_i and B_j as matching;
d. and mapping each gray level of the image to be standardized to the gray level of the standard image matched with the gray level.
The steps can realize the standardized preprocessing of pathological images with different coloring conditions. For the principle of the histogram matching method, reference may be made to the existing relevant literature, and details thereof will not be described here.
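The histogram-matching sub-steps a-d above can be sketched as follows for a uint8 single-channel image; for a color image this would be applied per channel, and the function name is illustrative:

```python
import numpy as np

def match_histogram(src, ref):
    """Map gray levels of `src` to those of `ref` by matching
    cumulative histograms (sub-steps a-d); expects uint8 images."""
    # Cumulative histograms (CDFs) of the image to be standardized
    # and of the standard image.
    src_cdf = np.cumsum(np.bincount(src.ravel(), minlength=256)) / src.size
    ref_cdf = np.cumsum(np.bincount(ref.ravel(), minlength=256)) / ref.size
    # For each source level A_i, pick the reference level B_j whose
    # cumulative probability is closest.
    lut = np.array([np.argmin(np.abs(ref_cdf - p)) for p in src_cdf],
                   dtype=np.uint8)
    # Apply the lookup table to remap every pixel.
    return lut[src]
```

This builds a 256-entry lookup table once and remaps the whole image in a single indexing operation.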
The color space conversion means that a pathological image in an original color space is converted into another color space in which the difference between the cell nucleus and the background is more obvious, and then a single-channel image in which the difference between the cell nucleus and the background is the largest is selected as a basic image for cell nucleus segmentation. For example, for the hematoxylin & eosin stained image, the pathologically stained image in the original RGB color space can be converted into the HEO color space, wherein the difference between the cell nucleus and the background in the H-channel image is significantly enhanced, which can facilitate the cell nucleus identification and segmentation.
Exemplarily, as shown in fig. 2, the step S1 of acquiring a cell nucleus segmentation color image after cell nucleus segmentation based on a single-channel image of the converted color image includes:
in sub-step S11, a single-channel image that maximizes the difference between the cell nuclei and the background is selected from the converted color image. In this embodiment, preferably, the obtained pathological staining image is converted into an HEO color space, and an H-channel image in which the difference between the cell nucleus and the background is maximized is selected as a basic image for cell nucleus segmentation.
Exemplarily, a pathologically stained image in the RGB color space can be deconvolved into the HEO color space using a color space conversion operator. For example, the color space conversion operator is selected as {[0.644211, 0.716556, 0.266844]; [0.092789, 0.954111, 0.283111]; [0.000000, 0.000000, 0.000000]}.
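A sketch of such a color deconvolution with the matrix quoted above; because its third row is all zeros the matrix is singular, so a pseudo-inverse is used here, which is an assumption rather than the patent's stated procedure:

```python
import numpy as np

# Stain (color deconvolution) matrix quoted above: rows are the H and E
# stain vectors plus an unused all-zero third row.
STAIN_MATRIX = np.array([
    [0.644211, 0.716556, 0.266844],
    [0.092789, 0.954111, 0.283111],
    [0.000000, 0.000000, 0.000000],
])

def rgb_to_heo(rgb):
    """Deconvolve an RGB image (H x W x 3, values 0-255) into stain
    concentrations. In the optical-density model od = c @ M, so
    c = od @ pinv(M); the pseudo-inverse handles the singular row."""
    rgb = np.asarray(rgb, dtype=float)
    h, w, _ = rgb.shape
    # Beer-Lambert optical density per color channel.
    od = -np.log10(np.clip(rgb / 255.0, 1e-6, 1.0))
    conc = od.reshape(-1, 3) @ np.linalg.pinv(STAIN_MATRIX)
    return conc.reshape(h, w, 3)
```

For a pixel containing pure hematoxylin this recovers a high H-channel value and a near-zero E-channel value.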
And a substep S12, after performing Gaussian smoothing processing on the selected single-channel image, performing edge pixel detection of the cell nucleus by using an edge detection algorithm and obtaining a gray value of the edge pixel of the cell nucleus.
The edge detection algorithm can exemplarily include, but is not limited to, Sobel edge detection, Canny edge detection, LoG edge detection, Laplacian edge detection, and the like.
For example, in some embodiments, the Laplacian operator {[1,1,1]; [1,-8,1]; [1,1,1]} is convolved with the image to approximate the second-order derivative at each pixel, and the pixels whose absolute responses fall in the top 6 ± 4% are taken as the edge pixels of the cell nucleus. It should be understood that both the Laplacian kernel and the 6 ± 4% value can be adjusted according to actual requirements.
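A sketch of this edge-pixel selection; the 6% kept fraction is one point inside the stated 6 ± 4% range, and the function name is illustrative:

```python
import numpy as np
from scipy import ndimage as ndi

# 3x3 Laplacian kernel quoted in the text.
LAPLACIAN = np.array([[1.0, 1.0, 1.0],
                      [1.0, -8.0, 1.0],
                      [1.0, 1.0, 1.0]])

def nucleus_edge_pixels(gray, top_fraction=0.06):
    """Mark candidate nucleus-edge pixels: convolve with the Laplacian
    kernel and keep pixels whose absolute response lies in the top
    ~6% (the fraction is tunable)."""
    resp = np.abs(ndi.convolve(np.asarray(gray, dtype=float), LAPLACIAN,
                               mode="reflect"))
    cutoff = np.quantile(resp, 1.0 - top_fraction)
    return resp >= cutoff
```

On a synthetic image with a bright square, the selected pixels concentrate along the square's boundary while flat interior regions are excluded.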
It can be understood that the cell nucleus edge pixels obtained through sub-step S12 often include both true boundary pixels and some pixels around the boundary, so the surrounding pixels need to be further discriminated to obtain more accurate nucleus boundary pixels.
And a substep S13, calculating a gray value threshold value by using a threshold segmentation algorithm according to the obtained gray value of the cell nucleus edge pixel, if the gray value of the pixel in the selected single-channel image is greater than the gray value threshold value, judging that the current pixel belongs to the cell nucleus, otherwise, judging that the current pixel belongs to the background.
Exemplarily, for the threshold segmentation algorithm, preferably, the gray value threshold is calculated using the Otsu method (OTSU). In one embodiment, the step of calculating the gray value threshold using Otsu method comprises:
a. selecting a series of candidate thresholds according to the obtained gray level of the cell nucleus edge pixels;
b. for each candidate threshold, calculating the corresponding inter-class variance var_class:

var_class = w0·(μ0 − μ)² + w1·(μ1 − μ)² = w0·w1·(μ0 − μ1)²

where w0 is the proportion of foreground pixels among all pixels of the image; w1 is the proportion of background pixels among all pixels of the image; μ0 is the mean of the foreground pixels; μ1 is the mean of the background pixels; and μ is the mean of all pixels of the image;
c. a candidate threshold that maximizes the inter-class variance is selected as the gray value threshold.
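The sub-steps a-c can be sketched directly from the inter-class variance formula; this naive loop over candidate thresholds is for illustration only:

```python
import numpy as np

def otsu_threshold(values):
    """Pick the candidate threshold maximizing the inter-class variance
    w0*w1*(mu0 - mu1)^2 over the given gray values (sub-steps a-c)."""
    values = np.asarray(values, dtype=float)
    best_t, best_var = None, -1.0
    # Every distinct gray value except the maximum is a candidate,
    # so both classes are always non-empty.
    for t in np.unique(values)[:-1]:
        fg, bg = values[values > t], values[values <= t]
        w0, w1 = fg.size / values.size, bg.size / values.size
        var = w0 * w1 * (fg.mean() - bg.mean()) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Fed with the gray values of the detected edge pixels, the returned threshold separates nucleus pixels from background as described in sub-step S13.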
And a substep S14, obtaining coordinates of each cell nucleus in the selected single-channel image, and mapping each coordinate to the color image to obtain a cell nucleus segmentation color image.
Exemplarily, after each cell nucleus in the selected single-channel image is judged by using the sub-steps, the coordinates of each cell nucleus in the single-channel image are marked, and then when the coordinate marks are mapped to the original color image, the color image containing the segmented cell nucleus can be obtained.
In addition, in another embodiment, the cell nucleus segmentation color image may contain some segmented regions whose area is too small; most of these are not true cell nuclei, and they are therefore referred to in this application as false positive cell nuclei. The segmentation may also yield adjacent or overlapping cell nuclei. To further improve the segmentation accuracy of the cell nuclei, as shown in fig. 3, the method further comprises:
and a substep S15, performing morphological processing on the cell nucleus obtained by segmentation in the selected single-channel image.
Illustratively, a morphological opening operation may be used to remove burrs at the nucleus boundary, and a morphological closing operation may then be used to fill holes inside the nucleus.
Sub-step S16: count the areas of all cell nuclei and calculate an area threshold for a single cell nucleus, so as to filter out false positive nuclei whose area is smaller than the threshold.
Since there may be some noise in the pathological stain images whose color is similar to that of the nuclei, false positive nuclei with too small an area can be filtered out by setting a nucleus area threshold. Exemplarily, the area threshold t for each pathological image can be selected using the formula t = max(A_m/3, N), where A_m is the average of all cell nucleus areas, and N is a preset minimum cell nucleus area that may be determined according to the image size, the scanning magnification, the actual zoom factor, and so on. A cell nucleus whose area is smaller than the threshold can therefore be judged to be false positive.
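A minimal sketch of this filtering step follows. Note that the threshold formula in the source text is garbled; the reading t = max(mean_area / 3, N) used here is an assumption:

```python
import numpy as np

def filter_false_positives(areas, n_min):
    """Keep only nuclei whose area reaches the threshold
    t = max(mean_area / 3, n_min).  The '/3' factor is our reading of
    the garbled formula in the source and is an assumption."""
    areas = np.asarray(areas, dtype=np.float64)
    t = max(areas.mean() / 3.0, n_min)
    keep = areas >= t          # False marks a false positive nucleus
    return keep, t
```

For example, with areas `[100, 90, 110, 5]` and `n_min=20`, the 5-pixel blob is rejected as a false positive.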
In sub-step S17, adjacent nuclei and overlapping nuclei are segmented based on a morphological image segmentation algorithm.
Further, after filtering out false positive nuclei, a morphological image segmentation algorithm may be used to further segment any adjacent or overlapping nuclei that remain. Preferably, a watershed algorithm is used for this segmentation. In one embodiment, the step of using a watershed algorithm to re-segment the adjacent or overlapping nuclei comprises:
a. and calculating the shortest distance between each foreground pixel and the background pixel in the adjacent cell nucleuses or the overlapped cell nucleuses, and setting the distance of the background pixel to be 0 so as to obtain a distance mark map.
b. Several points are selected on the distance map as bottom points, the shortest distance between these points and the background pixels being locally smallest. Exemplarily, positions of some foreground pixels with the shortest distance to background pixels at the edges of adjacent or overlapped cell nuclei to be segmented secondarily can be selected as bottom points for starting flooding, namely bottom points for expanding the area.
c. And respectively expanding the areas by taking each bottom point as a respective starting point and preset step length until a boundary of two adjacent expanded areas is obtained. For example, the preset step size may take 1 pixel, etc.
d. The current adjacent cell nucleus or the overlapped cell nucleus is subjected to cell nucleus segmentation according to the boundary.
Exemplarily, the region expansion starts from the bottom points and proceeds according to the shortest-distance labels above, i.e. each "flooding" step covers all pixels at an equal labeled distance, until two adjacent expansion regions meet at a boundary. This meeting boundary can be regarded as a watershed separating the pixels on its two sides, and these watersheds achieve effective segmentation of adjacent or overlapping nuclei.
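The flooding described above amounts to multi-seed region growing. A minimal sketch with a one-pixel step and a 4-neighborhood, assuming the bottom points (seeds) have already been chosen (this is an illustration of the idea, not the patent's exact implementation):

```python
from collections import deque
import numpy as np

def grow_regions(foreground, seeds):
    """Expand one labeled region per seed, one pixel ring per step,
    inside the binary foreground mask.  A pixel reached by two different
    labels in the same step becomes the watershed boundary (-1); 0 means
    unlabeled background."""
    labels = np.zeros(foreground.shape, dtype=int)
    frontier = deque()
    for lab, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = lab
        frontier.append((r, c))
    while frontier:
        nxt = deque()
        claimed = {}  # pixel -> label claimed during this step
        for r, c in frontier:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < labels.shape[0] and 0 <= cc < labels.shape[1] \
                        and foreground[rr, cc] and labels[rr, cc] == 0:
                    prev = claimed.get((rr, cc))
                    if prev is None:
                        claimed[(rr, cc)] = labels[r, c]
                    elif prev != labels[r, c]:
                        claimed[(rr, cc)] = -1  # two regions meet: boundary
        for (rr, cc), lab in claimed.items():
            labels[rr, cc] = lab
            if lab > 0:
                nxt.append((rr, cc))
        frontier = nxt
    return labels
```

On a 1x5 foreground strip seeded at both ends, the two regions meet in the middle and the shared pixel is marked as the watershed: `[[1, 1, -1, 2, 2]]`.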
It is understood that morphological processing and segmentation as above can further improve the segmentation accuracy of each cell nucleus in the color image. For example, fig. 4 shows a cell nucleus segmentation color image from an actual cell nucleus segmentation application in which each cell nucleus is effectively segmented.
Step S2, performing region segmentation on the initial size image of each cell nucleus in each single channel image of the cell nucleus segmented color image and the scaled image scaled to a preset fixed size to obtain a corresponding single channel region image, performing feature extraction on each single channel region image to obtain a single channel image feature of the corresponding cell nucleus, and obtaining a corresponding multi-channel image feature based on the single channel image feature.
The above step S2 mainly performs image feature extraction for each cell nucleus. Before the region segmentation, each single-channel image of the cell nucleus segmentation color image may first be binarized to obtain a binarized image of each single channel, which facilitates the extraction of image features. It should be understood that the following region segmentation and feature extraction are both performed on single-channel binarized images.
In one embodiment, the image of each cell nucleus is segmented into the following 4 region types, i.e. for each cell nucleus the segmentation yields: an inner region, an inner-and-outer neighborhood (the inner and outer regions together), an outer region, and the smallest rectangular region containing the outer region. In this embodiment, the size of the smallest rectangular region is taken as the size of the cell nucleus image.
Exemplarily, taking a certain cell nucleus as an example, four different regions of the cell nucleus can be obtained through region segmentation, which are respectively denoted as: an inner nuclear region ori, an inner and outer nuclear neighborhood dia, an outer nuclear region xor, and a minimum rectangular region nor containing the outer nuclear region.
The inner-and-outer neighborhood dia of the cell nucleus is obtained by expanding the nucleus region with a morphological dilation algorithm, where the dilation operator has size k × k and k is calculated from the nucleus area S, the circumference ratio π, and a predetermined minimum dilation value N (the formula itself is not reproduced in this text). N may be selected based on the size distribution of the nuclei; for example, N may be 10 in one embodiment.
The nucleus outer region xor can be obtained by XOR-ing the inner-and-outer neighborhood dia with the inner region ori. The smallest rectangular region nor is the smallest bounding rectangle containing the nucleus outer region xor.
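The derivation of dia and xor from a binary nucleus mask can be sketched in pure numpy (a sketch: the dilation uses a square k × k structuring element with k assumed odd, since the source's formula for k is not reproduced):

```python
import numpy as np

def nucleus_regions(mask, k):
    """From a binary nucleus mask (ori), dilate with a k x k square
    element to obtain the inner-and-outer neighborhood dia, then XOR
    dia with ori to obtain the outer region xor."""
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad)
    dia = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for dr in range(-pad, pad + 1):      # OR of all k*k shifts = dilation
        for dc in range(-pad, pad + 1):
            dia |= padded[pad + dr:pad + dr + h, pad + dc:pad + dc + w]
    xor = dia ^ mask.astype(bool)        # the ring just outside the nucleus
    return dia, xor
```

Dilating a single pixel with k = 3 gives a 3 × 3 dia block, and xor is the surrounding 8-pixel ring excluding the original pixel.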
Since the cell nucleus segmentation color image is composed of three single-channel images, when it is converted to the HEO color space it consists, exemplarily, of an H-channel image, an E-channel image, and an O-channel image. Thus, for the inner region, inner-and-outer neighborhood, outer region, and smallest rectangular region of a cell nucleus, each region is composed of three corresponding single-channel region images. For example, the inner region comprises an H-channel inner region image, an E-channel inner region image, and an O-channel inner region image.
In this embodiment, the initial image of each cell nucleus and the zoomed image zoomed to a preset fixed size are respectively subjected to region segmentation to respectively obtain corresponding single-channel region images. Wherein, the initial image is an image which is not zoomed after the cell nucleus is segmented; the zoom image is an image obtained by zooming the initial image of the cell nucleus into a preset fixed size. It is understood that the predetermined fixed size can be selected according to actual requirements.
Thus, each cell nucleus has an initial image and a scaled image, each consisting of three corresponding single-channel images. Performing region segmentation on the three single channels of these two images of the same cell nucleus then yields 24 single-channel region images: for the initial size image, three single-channel region images for each of the un-scaled inner region, inner-and-outer neighborhood, outer region, and smallest rectangular region; and for the scaled image, three single-channel region images for each of the scaled counterparts of those four regions.
For example, for the four regions that are not scaled, it is represented as: ori-inner region, dia-inner and outer neighborhoods, xor-outer region and nor-minimum rectangular region, then the scaled regions can be recorded as: rori-zoomed nuclear inner region; rdia-the scaled intra-and-outer neighborhood of the nucleus; rxor-zoomed nuclear outer region; rnor-the minimum rectangular box area after scaling.
As shown in fig. 5, the "performing feature extraction on the single-channel region image to obtain single-channel image features of corresponding cell nuclei and obtaining corresponding multi-channel image features based on the single-channel image features" in the above step S2 includes:
step S21, performing first-class feature extraction on the three single-channel region images of the inside region, the outside region, and the inside and outside neighborhoods of the cell nucleus that is not zoomed, and the three single-channel region images of the inside region, the outside region, and the inside and outside neighborhoods that are zoomed, and performing second-class feature extraction on the three single-channel region images of the minimum rectangular region image that is zoomed, to obtain the single-channel image features of the cell nucleus.
In this embodiment, feature extraction with different dimensions is performed on a cell nucleus, and can be used to determine which type of cell nucleus the cell nucleus belongs to, for example, whether the cell nucleus belongs to an epithelial cell nucleus or a non-epithelial cell nucleus such as a stromal cell nucleus and other cell nuclei. Since the prostate canceration analysis is mainly based on analysis of the morphology, texture, color, etc. of epithelial nuclei, it can be better used for analysis of the severity of prostate cancer, etc. when epithelial nuclei are accurately identified.
Preferably, at least 5 types of image features are extracted for each cell nucleus. In this embodiment, they are divided into first-class features, comprising texture features, morphological features, color statistics features, and the like, and second-class features, comprising local binary pattern statistical histogram features, fractal dimension features, and the like.
Still taking the above cell nucleus as an example, 3 kinds of features, namely texture features, morphological features, and color statistics features, can be extracted from the 18 single-channel region images of 6 regions: the un-scaled inner region ori, inner-and-outer neighborhood dia, and outer region xor, and the scaled inner region rori, inner-and-outer neighborhood rdia, and outer region rxor. In particular, the morphological features only need to be extracted from one single-channel region image of each of the 6 regions, which reduces redundant computation. Meanwhile, 2 kinds of features, namely the local binary pattern statistical histogram features and the fractal dimension features, are extracted from the three single-channel region images of the scaled smallest rectangular region rnor.
For the above 5 types of features, the texture features are mainly calculated based on feature matrices such as the Gray Level Co-occurrence Matrix (GLCM), the Gray Level Size Zone Matrix (GLSZM), the Gray Level Run Length Matrix (GLRLM), the Neighborhood Gray Tone Difference Matrix (NGTDM), and the Gray Level Dependence Matrix (GLDM).
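As an illustration of the first of these matrices, a minimal GLCM and one texture statistic derived from it might be computed as follows (a sketch: real extractors typically quantize the gray levels and average over several offsets first):

```python
import numpy as np

def glcm(img, levels, dr=0, dc=1):
    """Gray level co-occurrence matrix: counts how often gray level i
    co-occurs with gray level j at the pixel offset (dr, dc)."""
    img = np.asarray(img)
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                m[img[r, c], img[rr, cc]] += 1
    return m

def glcm_contrast(m):
    """Contrast = sum over (i, j) of (i - j)^2 * p(i, j)."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())
```

For the 2 × 2 image `[[0, 1], [0, 1]]` the horizontal GLCM is `[[0, 2], [0, 0]]` and the contrast is 1.0.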
The morphological characteristics mainly include extraction of the area, the perimeter-to-area ratio, the longest diameter, and the like of the object region. For example, taking the single-channel region images of a cell nucleus as an example, the area, the perimeter, the ratio of the perimeter to the area, the longest diameter, etc. of each single-channel region image can be calculated, so as to obtain the morphological characteristics of the cell nucleus.
The color statistics features mainly include the minimum, mean, mean absolute deviation, median, variance, energy, total energy, kurtosis, and skewness of the image gray values, and may also include the maximum, range, 10th percentile, 90th percentile, interquartile range, and so on, as determined by actual requirements. The formulas for these parameters can be found in the relevant literature and are not detailed here.
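A numpy sketch of these first-order statistics follows. The exact definitions vary across the literature (e.g. whether "energy" is normalized, or which kurtosis convention is used), so the particular conventions below are assumptions:

```python
import numpy as np

def color_statistics(gray):
    """First-order statistics of a single-channel region's gray values."""
    v = np.asarray(gray, dtype=np.float64).ravel()
    mean = v.mean()
    sd = v.std()
    return {
        "min": v.min(), "max": v.max(), "range": v.max() - v.min(),
        "mean": mean,
        "mad": np.abs(v - mean).mean(),        # mean absolute deviation
        "median": np.median(v),
        "variance": v.var(),
        "energy": (v ** 2).sum() / v.size,     # mean squared gray value
        "total_energy": (v ** 2).sum(),
        "p10": np.percentile(v, 10), "p90": np.percentile(v, 90),
        "iqr": np.percentile(v, 75) - np.percentile(v, 25),
        "skewness": ((v - mean) ** 3).mean() / sd ** 3 if sd else 0.0,
        "kurtosis": ((v - mean) ** 4).mean() / sd ** 4 if sd else 0.0,
    }
```

For the gray values `[1, 2, 3, 4]`, for instance, the variance is 1.25 and the total energy is 30.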
For the local binary pattern statistical histogram features, for example, the preset radius r takes values from 1 to 9 with 8r surrounding pixels, yielding a rotation-invariant local binary pattern, and the statistical histogram of this rotation-invariant pattern is taken as the feature.
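Only the simplest case of this scheme, r = 1 with 8 neighbors, is sketched below; the mapping of each 8-bit code to the minimum over its cyclic rotations is one common way to obtain rotation invariance, and the 256-bin histogram layout is an assumption:

```python
import numpy as np

def lbp_ri_histogram(img):
    """Rotation-invariant LBP histogram for radius 1, 8 neighbors."""
    img = np.asarray(img, dtype=np.float64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise ring, r = 1
    hist = np.zeros(256, dtype=np.int64)
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            bits = [int(img[r + dr, c + dc] >= img[r, c])
                    for dr, dc in offsets]
            code = min(                  # minimum over cyclic rotations
                sum(bits[(i + s) % 8] << i for i in range(8))
                for s in range(8))
            hist[code] += 1
    return hist
```

On a constant image every neighbor equals the center, so all counts land in the all-ones bin (code 255).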
The fractal dimension features are based on a preset number of gray value thresholds, preferably 8. For example, in one embodiment the 8 gray value thresholds are obtained by clustering all pixels of the scaled smallest rectangular region rnor in each single channel into 9 classes with a Gaussian mixture model, sorting the 9 classes by their mean pixel gray value in ascending order, and dividing the interval between the means of adjacent classes in proportion to their variances; each division point is the gray value threshold between the adjacent classes.
Each gray value threshold contributes 6 features: area above the threshold, mean above the threshold, fractal dimension above the threshold, and area, mean, and fractal dimension between adjacent thresholds. Exemplarily, taking the first gray value threshold g1: the current image is divided into the region above g1 and the region not above g1, and the area, mean, and fractal dimension of the region above g1 give the 3 above-threshold features; then, for the first threshold g1 and the second threshold g2, the pixels with gray values between g1 and g2 are taken, and their area, mean, and fractal dimension give the 3 between-adjacent-threshold features. The 6 features so obtained are combined as the features of the first threshold g1. The other gray value thresholds are computed similarly and are not detailed here.
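The patent does not specify which fractal dimension estimator is used for these per-threshold regions; a common choice is the box-counting estimate, sketched here under that assumption:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting fractal dimension of a binary mask: count the boxes
    of side s that contain any foreground, then fit the slope of
    log(count) against log(1/s)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h, w = mask.shape
        n = 0
        for r in range(0, h, s):
            for c in range(0, w, s):
                if mask[r:r + s, c:c + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(np.asarray(counts, dtype=np.float64)), 1)
    return float(slope)
```

A fully filled 8 × 8 mask gives box counts 64, 16, 4, 1, so the fitted dimension is exactly 2, as expected for a solid plane region.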
It can be understood that by extracting 5 main features, namely texture features, morphological features, color statistic features, local binary pattern statistical histogram features and fractal dimension features of the cell nucleus, single-channel image features stored in a vector form or a matrix form of the cell nucleus can be obtained.
Step S22, multiplying any two single-channel image feature vectors element-wise to obtain first-class multi-channel image features, multiplying the three single-channel image feature vectors element-wise to obtain second-class multi-channel image features, and combining the first-class and second-class multi-channel image features to obtain the multi-channel image features of the cell nucleus.
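The combination in step S22 can be sketched directly (a sketch: the three channel feature vectors are assumed to be aligned, equal-length numpy arrays):

```python
import numpy as np

def multichannel_features(f_h, f_e, f_o):
    """Element-wise products of any two single-channel feature vectors
    give the first-class multi-channel features; the product of all
    three gives the second class; both are concatenated."""
    pairs = [f_h * f_e, f_h * f_o, f_e * f_o]   # first-class features
    triple = f_h * f_e * f_o                     # second-class features
    return np.concatenate(pairs + [triple])
```

For two-element channel features this yields an eight-element multi-channel feature vector (three pairwise products plus the triple product).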
Thus, after obtaining the single-channel image features and the multi-channel image features of the cell nucleus, the next cell nucleus classification can be carried out.
And step S3, inputting the single-channel image characteristics and the multi-channel image characteristics of the corresponding cell nucleus into a cell nucleus classification model for cell nucleus classification, and determining the epithelial cell nucleus in the pathological staining image according to the classification result.
The above step S3 is mainly used to classify the cell nuclei, thereby identifying epithelial cell nuclei in the pathological stain image. The classification includes, among other things, epithelial nuclei and non-epithelial nuclei, which may include stromal nuclei and other types of nuclei.
The cell nucleus classification model, trained in advance, is preferably constructed based on a logistic regression model; further preferably, the logistic regression model adopts L1 regularization, constructing the following logistic regression optimization objective:
β̂ = argmin_β { -(1/N) Σ_{i=1}^{N} [ y_i·log p(x_i; β) + (1 - y_i)·log(1 - p(x_i; β)) ] + λ Σ_{j=1}^{P} |β_j| }, with p(x; β) = 1/(1 + e^{-x^T β});
wherein N represents the total number of cell nuclei input into the logistic regression model, and P represents the number of all input features; y_i represents the true value of the i-th nucleus classification; x_i represents the input features of the i-th nucleus; β represents the coefficients of all input features, and β_j the coefficient of the j-th input feature; λ is the penalty coefficient of the L1 regularization.
The logistic regression model constructed with L1 regularization can screen the many input image features before classification, so that the effective image features beneficial to classification are retained for the classification of cell nuclei.
In one embodiment, the logistic regression model is constructed by the following steps:
a. normalizing the input image features, and deleting the image features containing infinite values and missing values;
b. dividing a sample into a test set and a training set by adopting a layering random sampling method;
c. evaluating the prediction accuracy of the logistic regression model under different penalty coefficients with k-fold cross validation, and selecting the penalty coefficient corresponding to the best prediction accuracy;
d. constructing the logistic regression model to be trained with the penalty coefficient selected in step c, and fitting it on the entire training set to obtain the constructed prediction model;
e. using the constructed prediction model to predict the test set, and evaluating the prediction model with two metrics: classification accuracy and AUC (Area Under the ROC Curve).
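The L1-penalized fitting at the heart of these steps can be sketched in pure numpy. This is an illustration, not the patent's implementation: the proximal-gradient (ISTA) solver, learning rate, and iteration count are all assumptions:

```python
import numpy as np

def fit_l1_logistic(x, y, lam=0.01, lr=0.1, n_iter=2000):
    """L1-regularised logistic regression via proximal gradient descent:
    a gradient step on the log-loss followed by soft-thresholding of the
    coefficients, which drives uninformative features to exactly zero."""
    n, p = x.shape
    beta = np.zeros(p)
    bias = 0.0
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-(x @ beta + bias)))
        grad = x.T @ (prob - y) / n          # gradient of mean log-loss
        beta -= lr * grad
        bias -= lr * (prob - y).mean()       # bias is not penalised
        # soft-thresholding = proximal operator of the L1 penalty
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
    return beta, bias

def predict(x, beta, bias):
    return (x @ beta + bias > 0).astype(int)
```

On a small linearly separable toy set the fitted model recovers the correct labels; with a larger `lam`, coefficients of uninformative features are shrunk to zero, which is the feature-screening effect the text describes.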
In one example, the pathological stain images were 46 hematoxylin-and-eosin-stained images at the same magnification; nucleus segmentation was first performed on them, and 895 nuclei, including 355 epithelial cells and 540 stromal cells, were then randomly extracted from the segmented nuclei and labeled. The classification accuracy of the constructed prediction model for epithelial cell nuclei on the test set is 90.2490 ± 0.0016%, and the AUC value is 96.1103 ± 0.0003%. FIG. 6 shows the ROC curve of the prediction model, which indicates that the classification of epithelial nuclei is accurate and that epithelial nuclei can be effectively classified by the prediction model.
Using the prediction model constructed above as the cell nucleus classification model, the single-channel image features and multi-channel image features obtained in step S2 are input into it, and the classification result of each cell nucleus as an epithelial or non-epithelial nucleus is output. Labeling the epithelial cell nuclei in the color image according to each nucleus's classification result then yields the schematic diagram of epithelial cell nucleus segmentation shown in fig. 7.
The segmentation method for epithelial cell nuclei in prostate cancer pathological images mainly uses three steps to achieve automatic segmentation of epithelial cell nuclei. First, the pathological image is converted to a color space that enhances the difference between cell nuclei and background, facilitating nucleus segmentation. Then, each cell nucleus in the resulting cell nucleus segmentation color image undergoes region segmentation and region image feature extraction, mainly covering morphological features, texture features, color statistics features, LBP histogram statistical features, fractal dimension features, and the like. Finally, these 5 main feature types are input into the trained cell nucleus classification model, so as to accurately judge whether a cell nucleus is an epithelial cell nucleus.
Example 2
Referring to fig. 8, based on the method of embodiment 1, the present embodiment provides an apparatus 10 for segmenting epithelial cell nuclei in a pathological image of prostate cancer, including:
a cell nucleus segmentation module 110, configured to perform color space conversion on the obtained pathological staining image, and obtain a cell nucleus segmentation color image after performing cell nucleus segmentation based on a single-channel image of the color image obtained through the conversion;
a cell nucleus feature extraction module 120, configured to perform region segmentation on an initial size image of each cell nucleus in each single-channel image of the cell nucleus segmentation color image and a scaled image scaled to a preset fixed size, respectively, to obtain a corresponding single-channel region image, perform feature extraction on the single-channel region image to obtain a single-channel image feature of the corresponding cell nucleus, and obtain a corresponding multi-channel image feature based on the single-channel image feature;
an epithelial cell nucleus classification module 130, configured to input the single-channel image features and the multi-channel image features of the corresponding cell nucleus into a cell nucleus classification model for cell nucleus classification, and determine an epithelial cell nucleus in the pathological stain image according to a result of the classification.
It is understood that the above-described segmentation apparatus 10 of epithelial cell nuclei in a pathology image of prostate cancer corresponds to the method of embodiment 1. Any of the options in embodiment 1 are also applicable to this embodiment, and will not be described in detail here.
The present invention also provides a terminal, such as a computer or the like, which includes a memory storing a computer program and a processor that, by executing the computer program, causes a terminal device to execute the above-described method for segmenting an epithelial cell nucleus in a pathological image of prostate cancer or the functions of the respective modules in the above-described apparatus for segmenting an epithelial cell nucleus in a pathological image of prostate cancer.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system and the application programs required for at least one function, and the storage data area may store data created according to the use of the terminal, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The present invention also provides a computer-readable storage medium for storing the computer program used in the above-mentioned terminal.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part of the technical solution that contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.
Claims (10)
1. A method for segmenting epithelial cell nuclei in a pathological image of prostate cancer, which is characterized by comprising the following steps:
performing color space conversion on the obtained pathological staining image, and obtaining a cell nucleus segmentation color image after performing cell nucleus segmentation on a single-channel image of the color image obtained by conversion;
respectively performing area segmentation on an initial size image of each cell nucleus in each single-channel image of the cell nucleus segmentation color image and a zoomed image zoomed to a preset fixed size to obtain a corresponding single-channel area image, performing feature extraction on the single-channel area image to obtain single-channel image features of the corresponding cell nucleus, and obtaining corresponding multi-channel image features based on the single-channel image features;
inputting the single-channel image features and the multi-channel image features of the corresponding cell nuclei into a cell nucleus classification model for cell nucleus classification, and determining epithelial cell nuclei in the pathological staining image according to the classification result.
2. The method as claimed in claim 1, wherein said obtaining a cell nucleus segmented color image after cell nucleus segmentation based on a single channel image of said transformed color image comprises:
selecting a single-channel image which enables the difference between the cell nucleus and the background to be maximum from the color image obtained by conversion;
after the selected single-channel image is subjected to Gaussian smoothing processing, detecting the edge pixels of the cell nucleus by using an edge detection algorithm and obtaining the gray value of the edge pixels of the cell nucleus;
calculating a gray value threshold by utilizing a threshold segmentation algorithm according to the obtained gray value of the cell nucleus edge pixel, and if the gray value of the pixel in the selected single-channel image is greater than the gray value threshold, judging that the current pixel belongs to the cell nucleus;
and acquiring coordinates of each cell nucleus in the selected single-channel image, and mapping each coordinate to the color image to obtain a cell nucleus segmentation color image.
3. The method of claim 2, wherein said obtaining coordinates of individual nuclei in said selected single-channel image is preceded by:
performing morphological processing on cell nucleuses obtained by segmentation in the selected single-channel image;
then counting the areas of all cell nuclei and calculating the area threshold of a single cell nucleus so as to filter out false positive cell nuclei with the areas smaller than the area threshold;
and segmenting the adjacent cell nucleuses and the overlapped cell nucleuses based on a morphological image segmentation algorithm.
4. The method of claim 3, wherein the "segmenting neighboring nuclei and overlapping nuclei based on a morphological image segmentation algorithm" comprises:
calculating the shortest distance between each foreground pixel and a background pixel in the current adjacent cell nucleus or the overlapped cell nucleus, and setting the distance of the background pixel to be zero to obtain a distance mark map;
selecting a plurality of points with local minimum shortest distances from the background pixels on the distance mark map as bottom points;
respectively expanding the areas by using each bottom point as a respective starting point and using a preset step length until a boundary of two adjacent expanded areas is obtained;
and performing cell nucleus segmentation on the current adjacent cell nucleus or the overlapped cell nucleus according to the boundary.
5. The method of claim 1, wherein each cell nucleus segmentation region comprises an inner region, an inner-outer neighborhood, an outer region, and a smallest rectangular region containing the outer region of the cell nucleus; the single-channel region images comprise three single-channel region images for each of the un-scaled inner region, inner-outer neighborhood, outer region and smallest rectangular region of the initial-size image, and three single-channel region images for each of the scaled inner region, inner-outer neighborhood, outer region and smallest rectangular region of the scaled image;
said performing feature extraction on the single-channel region images to obtain the single-channel image features of the corresponding cell nucleus, and obtaining the corresponding multi-channel image features based on the single-channel image features, comprises:
performing first-class feature extraction on the three single-channel region images of each of the un-scaled inner region, inner-outer neighborhood and outer region and of each of the scaled inner region, inner-outer neighborhood and outer region, and performing second-class feature extraction on the three single-channel region images of the scaled smallest rectangular region, to obtain the single-channel image features of the cell nucleus;
multiplying any two single-channel image features element-wise to obtain first-class multi-channel image features, multiplying the three single-channel image features element-wise to obtain second-class multi-channel image features, and combining the first-class and second-class multi-channel image features to obtain the multi-channel image features of the cell nucleus.
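The element-wise products that form the multi-channel image features in claim 5 can be sketched as below; the function name and the flat-list feature representation are illustrative assumptions:

```python
def combine_multichannel(f1, f2, f3):
    """Build multi-channel image features from three single-channel
    feature vectors (one per color channel): element-wise products of
    every pair of vectors (first class) followed by the element-wise
    product of all three (second class). Vectors are plain lists."""
    first_class = ([a * b for a, b in zip(f1, f2)] +
                   [a * c for a, c in zip(f1, f3)] +
                   [b * c for b, c in zip(f2, f3)])
    second_class = [a * b * c for a, b, c in zip(f1, f2, f3)]
    return first_class + second_class
```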
6. The method of claim 5, wherein the first-class features include texture features, morphological features and color statistics features, and the second-class features include local binary pattern (LBP) statistical histogram features and fractal dimension features;
the texture features comprise a gray-level co-occurrence matrix, a gray-level size zone matrix, a gray-level run-length matrix, a neighborhood gray-tone difference matrix and a gray-level dependence matrix;
the morphological features comprise the area, the perimeter-to-area ratio, and the longest diameter of the object region;
the color statistics features comprise the minimum, mean, mean absolute deviation, median, variance, energy, total energy, kurtosis and skewness of the image gray values;
the LBP statistical histogram features are calculated based on a preset radius and the surrounding pixels sampled at that radius;
the fractal dimension features are calculated based on a preset number of thresholds, and comprise, for each threshold, the area above the threshold, the mean above the threshold, the fractal dimension above the threshold, the area between adjacent thresholds, the mean between adjacent thresholds, and the fractal dimension between adjacent thresholds.
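The first-order color statistics listed in claim 6 admit a compact sketch. The patent does not spell out the exact formulas, so the definitions below follow the usual radiomics conventions (population variance, unnormalized energy, total energy equal to energy when no pixel spacing is given) and are assumptions:

```python
import math

def color_statistics(gray_values):
    """First-order color statistics over the gray values of one
    single-channel region image (illustrative sketch)."""
    n = len(gray_values)
    mean = sum(gray_values) / n
    var = sum((v - mean) ** 2 for v in gray_values) / n  # population variance
    std = math.sqrt(var)
    energy = sum(v * v for v in gray_values)
    return {
        "minimum": min(gray_values),
        "mean": mean,
        "mean_absolute_deviation": sum(abs(v - mean) for v in gray_values) / n,
        "median": sorted(gray_values)[n // 2],  # upper median, an assumption
        "variance": var,
        "energy": energy,
        "total_energy": energy,  # equal to energy without pixel spacing
        "kurtosis": (sum((v - mean) ** 4 for v in gray_values) / n) / var ** 2
                    if var else 0.0,
        "skewness": (sum((v - mean) ** 3 for v in gray_values) / n) / std ** 3
                    if std else 0.0,
    }
```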
7. The method of claim 1, wherein the cell nucleus classification model is constructed based on a logistic regression model, and the logistic regression model uses the following logistic regression optimization function constructed with L1 regularization:

$$\min_{\beta}\; -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i\log\sigma(x_i^{\mathsf T}\beta)+(1-y_i)\log\big(1-\sigma(x_i^{\mathsf T}\beta)\big)\Big]+\lambda\sum_{j=1}^{P}\left|\beta_j\right|$$

wherein $N$ represents the total number of cell nuclei input into the logistic regression model, $P$ represents the number of all input features, $y_i$ represents the true value of the classification of the i-th cell nucleus, $x_i$ represents the input features of the i-th cell nucleus, $\beta$ represents the coefficients of all input features, $\beta_j$ represents the coefficient of the j-th input feature, $\sigma(z)=1/(1+e^{-z})$ is the sigmoid function, and $\lambda$ is the penalty coefficient of the L1 regularization.
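Claim 7's L1-regularized logistic regression objective can be evaluated as below; since the claim only defines the symbols, the negative log-likelihood form of the loss and the function name are assumptions based on the standard formulation:

```python
import math

def l1_logistic_loss(beta, X, y, lam):
    """Value of the L1-regularised logistic regression objective:
    (1/N) * negative log-likelihood + lam * sum_j |beta_j|.

    beta -- list of P coefficients
    X    -- N rows of P input features per nucleus
    y    -- N true labels in {0, 1}
    lam  -- L1 penalty coefficient (lambda in the formula)
    """
    nll = 0.0
    for xi, yi in zip(X, y):
        z = sum(b * v for b, v in zip(beta, xi))
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
        nll -= yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
    return nll / len(X) + lam * sum(abs(b) for b in beta)
```

In practice such a model would be fitted by minimizing this objective (e.g. with coordinate descent); the L1 term drives many feature coefficients to exactly zero, acting as feature selection.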
8. An apparatus for segmenting epithelial cell nuclei in a prostate cancer pathological image, comprising:
a cell nucleus segmentation module, configured to perform color space conversion on an obtained pathological staining image, and to obtain a cell-nucleus-segmented color image after performing cell nucleus segmentation on a single-channel image of the converted color image;
a cell nucleus feature extraction module, configured to perform, in each single-channel image of the cell-nucleus-segmented color image, region segmentation on the initial-size image of each cell nucleus and on its scaled image scaled to a preset fixed size to obtain the corresponding single-channel region images, to perform feature extraction on the single-channel region images to obtain the single-channel image features of the corresponding cell nucleus, and to obtain the corresponding multi-channel image features based on the single-channel image features; and
an epithelial cell nucleus classification module, configured to input the single-channel image features and the multi-channel image features of the corresponding cell nucleus into a cell nucleus classification model for cell nucleus classification, and to determine the epithelial cell nuclei in the pathological staining image according to the classification result.
9. A terminal, comprising a processor and a memory storing a computer program, the computer program being executed by the processor to implement the method for segmenting epithelial cell nuclei in a prostate cancer pathological image according to any one of claims 1 to 7.
10. A computer-readable storage medium, storing a computer program which, when executed, implements the method for segmenting epithelial cell nuclei in a prostate cancer pathological image according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010175593.0A CN111402267B (en) | 2020-03-13 | 2020-03-13 | Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111402267A true CN111402267A (en) | 2020-07-10 |
CN111402267B CN111402267B (en) | 2023-06-16 |
Family
ID=71430776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010175593.0A Active CN111402267B (en) | 2020-03-13 | 2020-03-13 | Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111402267B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070722A (en) * | 2020-08-14 | 2020-12-11 | 厦门骁科码生物科技有限公司 | Fluorescence in situ hybridization cell nucleus segmentation method and system |
CN112184696A (en) * | 2020-10-14 | 2021-01-05 | 中国科学院近代物理研究所 | Method and system for counting cell nucleus and cell organelle and calculating area of cell nucleus and cell organelle |
CN112446892A (en) * | 2020-11-18 | 2021-03-05 | 黑龙江机智通智能科技有限公司 | Cell nucleus segmentation method based on attention learning |
CN113033287A (en) * | 2021-01-29 | 2021-06-25 | 杭州依图医疗技术有限公司 | Pathological image display method and device |
CN113178228A (en) * | 2021-05-25 | 2021-07-27 | 郑州中普医疗器械有限公司 | Cell analysis method based on nuclear DNA analysis, computer device, and storage medium |
CN113762395A (en) * | 2021-09-09 | 2021-12-07 | 深圳大学 | Pancreatic bile duct type ampulla carcinoma classification model generation method and image classification method |
CN113763370A (en) * | 2021-09-14 | 2021-12-07 | 佰诺全景生物技术(北京)有限公司 | Digital pathological image processing method and device, electronic equipment and storage medium |
CN116580216A (en) * | 2023-07-12 | 2023-08-11 | 北京大学 | Pathological image matching method, device, equipment and storage medium |
CN116959712A (en) * | 2023-07-28 | 2023-10-27 | 成都市第三人民医院 | Lung adenocarcinoma prognosis method, system, equipment and storage medium based on pathological image |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103020167A (en) * | 2012-11-26 | 2013-04-03 | 南京大学 | Chinese text classification method for computer |
CN110415255A (en) * | 2019-06-14 | 2019-11-05 | 广东省人民医院(广东省医学科学院) | A kind of immunohistochemistry pathological image CD3 positive nucleus dividing method and system |
CN110517273A (en) * | 2019-08-29 | 2019-11-29 | 麦克奥迪(厦门)医疗诊断系统有限公司 | Cytology image partition method based on dynamic gradient threshold value |
Also Published As
Publication number | Publication date |
---|---|
CN111402267B (en) | 2023-06-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111402267A (en) | Segmentation method, device and terminal for epithelial cell nucleus in prostate cancer pathological image | |
CN111145209B (en) | Medical image segmentation method, device, equipment and storage medium | |
CN111091527B (en) | Method and system for automatically detecting pathological change area in pathological tissue section image | |
CN108364288B (en) | Segmentation method and device for breast cancer pathological image | |
CN107274386B (en) | artificial intelligent auxiliary cervical cell fluid-based smear reading system | |
Jiang et al. | A novel white blood cell segmentation scheme using scale-space filtering and watershed clustering | |
CN112017191A (en) | Method for establishing and segmenting liver pathology image segmentation model based on attention mechanism | |
CN110120040A (en) | Sectioning image processing method, device, computer equipment and storage medium | |
CN107256558A (en) | The cervical cell image automatic segmentation method and system of a kind of unsupervised formula | |
CN106780522B (en) | A kind of bone marrow fluid cell segmentation method based on deep learning | |
Veta et al. | Detecting mitotic figures in breast cancer histopathology images | |
Jiang et al. | A novel white blood cell segmentation scheme based on feature space clustering | |
US11538261B2 (en) | Systems and methods for automated cell segmentation and labeling in immunofluorescence microscopy | |
CN110517273B (en) | Cytology image segmentation method based on dynamic gradient threshold | |
CN110490159B (en) | Method, device, equipment and storage medium for identifying cells in microscopic image | |
CN111784711A (en) | Lung pathology image classification and segmentation method based on deep learning | |
CN112990214A (en) | Medical image feature recognition prediction model | |
CN117252893B (en) | Segmentation processing method for breast cancer pathological image | |
Chatterjee et al. | A novel method for IDC prediction in breast cancer histopathology images using deep residual neural networks | |
CN115439493A (en) | Method and device for segmenting cancerous region of breast tissue section | |
KR20240012738A (en) | Cluster analysis system and method of artificial intelligence classification for cell nuclei of prostate cancer tissue | |
CN118230052A (en) | Cervical panoramic image few-sample classification method based on visual guidance and language prompt | |
CN104933723A (en) | Tongue image segmentation method based on sparse representation | |
Lal et al. | A robust method for nuclei segmentation of H&E stained histopathology images | |
CN112508860B (en) | Artificial intelligence interpretation method and system for positive check of immunohistochemical image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | Address after: No. 107, Yanjiang West Road, Yuexiu District, Guangzhou, Guangdong 510000. Applicant after: SUN YAT-SEN MEMORIAL HOSPITAL, SUN YAT-SEN University; Shenzhen Huajia Biological Intelligence Technology Co.,Ltd. Address before: No. 107 Yanjiang West Road, Tianhe District, Guangzhou, Guangdong Province, 510000. Applicant before: SUN YAT-SEN MEMORIAL HOSPITAL, SUN YAT-SEN University; Shenzhen Huajia Biological Intelligence Technology Co.,Ltd. ||
GR01 | Patent grant | ||