CN111507992A - Low-differentiation gland segmentation method based on internal and external stresses - Google Patents


Info

Publication number
CN111507992A
Authority
CN
China
Prior art keywords
image
gland
dividing
lumen
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010317512.6A
Other languages
Chinese (zh)
Other versions
CN111507992B (en)
Inventor
张堃
付君红
朱洪堃
李子杰
吴建国
张培建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University
Priority to CN202010317512.6A
Publication of CN111507992A
Application granted
Publication of CN111507992B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/11 Region-based segmentation (Image analysis; Segmentation; Edge detection)
    • G06T 7/12 Edge-based segmentation
    • G06N 3/045 Combinations of networks (Neural networks; Architecture)
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/08 Learning methods
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a low-differentiation gland segmentation method based on internal and external stresses, comprising the following steps: 1) performing stain separation on the stained pathological tissue image based on the ResUnet architecture to obtain a hematoxylin channel image and a background channel image; 2) segmenting the glandular lumen region from the background channel image with a variational level set image segmentation algorithm based on an improved sign pressure function; 3) taking the hematoxylin channel map as the SC-CNN input feature to obtain the epithelial cell region boundary, i.e. the gland boundary formed by epithelial cell nuclei; 4) drawing the gland contour from the lumen shape features using a graphic shape description method based on the minimum inertia axis and chain codes. The invention makes the information contained in the H&E stained image more independent and easier to identify, so as to handle uneven staining intensity and inconspicuous staining differences; it develops and combines a new group of features for segmenting the gland contour, and explicitly provides a method for representing the shape features of the lumen and the outer gland contour.

Description

Low-differentiation gland segmentation method based on internal and external stresses
Technical Field
The invention relates to the technical field of image information processing, in particular to a low-differentiation gland segmentation method based on internal and external stress.
Background
Adenocarcinoma is a malignant tumor formed by glandular structures in epithelial tissue. It affects the distribution of cells and also alters the structure of the gland. A biopsy is tissue removed from a suspect organ in a minimally invasive manner and examined under a microscope by a pathologist, who must be accurate and able to review large amounts of data in order to detect minor abnormalities in the biopsy. Taking the digestive system as an example, histopathological staining images of the colon are the basis for detecting lesions. A typical histopathological image of the colon gland contains four tissue components: lumen, cytoplasm, epithelial cells, and stroma (connective tissue, blood vessels, neural tissue, etc.). The lumen area is surrounded by oval structures called epithelial cells, and the overall structure is bounded by the epithelial nuclei, which appear as a bold outline.
Traditional methods mainly study gland appearance features and contour features. Appearance features are based on the fact that glands are composed of nuclei, cytoplasm, and lumen; Sirinukunwattana, Jacobs et al. identify glandular objects using low-level features such as color and texture. Contour features are based on the fact that the gland structure is surrounded by a ring of epithelial cells, and many methods segment the gland by identifying epithelial cells. The random polygon model proposed by Sirinukunwattana and the spatial random field model of Fu et al. can segment benign gland contours well, but are not suitable for segmenting malignant, diseased glands.
Recent developments of deep learning in computer vision have made it possible to apply it to histopathological studies. The U-net proposed by Ronneberger et al. has achieved good results in medical image segmentation. A deep learning framework trains the model on original images and manually annotated segmentation masks, back-propagating the error with the goal of minimizing the loss function and updating the parameters layer by layer, so that the model segments images automatically. The deep contour-aware network proposed by Chen et al. shows that contours play an important role in gland segmentation, and the dual parallel-branch deep neural network proposed by Wang et al. combines fused contour and target features to segment glands accurately. All of the above methods require a large number of manually annotated images; however, annotating large numbers of medical images is very difficult.
Disclosure of Invention
The invention aims to remedy the defects of the prior art and provides a low-differentiation gland segmentation method based on internal and external stress. First, an improved U-net performs stain separation on H&E images to obtain a hematoxylin channel (the hematoxylin channel image contains cell nucleus information), an eosin channel, and a background channel (since the lumens resemble the background, the lumen information is contained in the background channel). The hematoxylin channel is then used as the input of the SC-CNN framework to obtain the boundary of the epithelial cell region, i.e. the gland boundary formed by epithelial cell nuclei, while the lumen is segmented based on an improved SPF method. Finally, according to the similarity of the lumen and gland boundaries, and to handle adhered glands and glands fused with stroma, a graphic feature method represented by the minimum inertia axis and chain codes is applied to segment the gland.
In order to achieve this purpose, the invention provides the following technical scheme: a low-differentiation gland segmentation method based on internal and external stresses, comprising the following steps:
1) performing stain separation on the stained pathological tissue image based on the ResUnet architecture to obtain a hematoxylin channel image and a background channel image;
2) segmenting the glandular lumen region from the background channel image with a variational level set image segmentation algorithm based on an improved sign pressure function;
3) taking the hematoxylin channel map as the SC-CNN input feature to obtain the epithelial cell region boundary, namely the gland boundary formed by epithelial cell nuclei;
4) drawing the gland contour according to the lumen shape features using a graphic shape description method based on the minimum inertia axis and chain codes.
Preferably, the stain separation based on the ResUnet architecture in step 1) specifically includes: the network consists of three parts, a contraction path, a bridging path, and an expansion path, which complete the staining intensity prediction for the hematoxylin, eosin, and background channels.
Preferably, the contraction path reduces the spatial dimension of the feature maps while increasing their number layer by layer, extracting the input image into compact features; the bridging part connects the contraction and expansion paths and realizes the stain color matrix prediction; the expansion path gradually restores the details and corresponding spatial dimensions of the target, and its output is used for the staining intensity matrix prediction.
Preferably, the contraction path and the expansion path each comprise a number of residual blocks, and in each residual block the feature map is reduced by half by convolution.
Preferably, before each of said residual blocks there is a concatenation of the upsampled feature maps from the lower level and the feature maps from the corresponding encoding path.
Preferably, a Kullback-Leibler constraint term is added to the model prediction process of the stain separation in step 1), and the model is trained by minimizing the reconstruction loss between the input image and each reconstruction.
Preferably, step 2) constructs the SPF function specifically using the statistical information of the image, so that the constructed SPF function preserves or even enhances the salient foreground object.
Preferably, the method specifically comprises the following steps:
The contour C divides the image I into an inner and an outer part, denoted Ω_1 = in(C) and Ω_2 = out(C) respectively. The SPF function is constructed using the global stain intensity distribution of the image, with P_1, P_2 denoting the staining intensity distribution functions of the regions Ω_1, Ω_2:

P_i(x) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{(x - u_i)^2}{2\sigma_i^2}\right), \quad i = 1, 2

where u_i and σ_i are the mean and standard deviation of the Gaussian distribution of the staining intensity. Following the level set method, a level set function φ is embedded, with Ω_1 = {φ > 0} and Ω_2 = {φ < 0}, and the corresponding contour C is represented by the zero level set {φ = 0};

the following SPF function is constructed from the staining intensity distribution functions above:

\mathrm{spf}(I(x)) = \frac{P_1(I(x)) - P_2(I(x))}{\max\left(\left|P_1(I(x)) - P_2(I(x))\right|\right)}

and the level set equation is obtained as:

\frac{\partial\phi}{\partial t} = \mathrm{spf}(I(x))\,\alpha\,|\nabla\phi|
preferably, in the step 3), the space-limited SC-CNN for nuclear detection and the softmaxCNN method for nuclear classification are used, and the hematoxylin intensity obtained by staining separation is used as an input feature of the CNN, so as to obtain the pixel set V representing the epithelial cell nucleus and the outline L of the area where the epithelial cell nucleus is located.
Preferably, step 4) specifically includes: establishing a coordinate system from the minimum inertia axis as the reference axis together with its perpendicular, taking the image centroid as the origin O of the coordinate system, and dividing the regions of the coordinate system into several equal parts in several directions according to the direction chain code method, so that a chain code is generated for the whole image from those directions. Step 4) further includes: finding the membership value μ_n of the nth characteristic triangle of the lumen region and the membership value μ'_n of the nth characteristic triangle of the gland region, and comparing the similarity over all characteristic values.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention provides a novel unsupervised stain separation method, which makes the information contained in an H&E stained image more independent and easier to identify, in order to handle uneven staining intensity and inconspicuous staining differences;
(2) A new group of features for segmenting the gland contour is developed and combined. The morphological characteristics of the lumen inside the gland structure are considered: during canceration the lumen shape is obviously distorted, so the epithelial cells around the gland are arranged irregularly, but in most cases they are still distributed around the periphery of the lumen. The morphological features of the lumen and the outer gland contour are therefore represented with the minimum inertia axis as reference, combined with the chain code method. Since the lumen is more independent and easier to segment than the epithelial cells, the segmentation method based on lumen shape can handle cases where glands adhere to each other and epithelial cells fuse with stroma;
(3) The invention explicitly provides a method for representing the shape features of the lumen and the outer gland contour, which can be used for gland segmentation work. In follow-up work, the method will be applied to feature extraction for benign and malignant tumors, providing an effective solution for tumor classification so that clinical decisions can be made effectively.
Drawings
FIG. 1 is a typical histopathological image of colon glands and glandular structure;
FIG. 2 is a schematic diagram of the segmentation method of the present invention;
FIG. 3 is a schematic view of a staining separation model of the present invention;
FIG. 4 is a representation of the minimum inertia axis and chain code based features of the present invention;
FIG. 5 is a graph showing the staining separation effect of the present invention;
FIG. 6 is a graph of the lumen segmentation effect of the present invention;
fig. 7 is a diagram showing the effect of gland segmentation according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1 and fig. 2, the gland segmentation method based on internal and external stresses of the invention comprises the following steps:
1) As shown in fig. 3, stain separation is performed on the pathological tissue staining images based on the ResUnet architecture to obtain hematoxylin channel and background channel images.
In step 1), a ResUnet framework is constructed for stain separation. The network consists of three parts, a contraction path, a bridge, and an expansion path, which complete the staining intensity prediction for the H (hematoxylin), E (eosin), and B (background) channels. The contraction path reduces the spatial dimensions of the feature maps while increasing their number layer by layer, extracting the input image into compact features. The bridge connects the contraction and expansion paths and realizes the stain color matrix prediction. The expansion path gradually restores the details and corresponding spatial dimensions of the target, and its output is used for the staining intensity matrix prediction.
In step 1), the contraction path has a plurality of residual blocks; in each residual block, the feature map is reduced by half by convolution. Correspondingly, the expansion path is also composed of corresponding residual blocks. Before each residual block there is a concatenation of the upsampled feature maps from the lower level and the feature maps from the corresponding encoding path.
The specific residual unit is interpreted as follows: assume the input of a neural network unit is x and the desired output is H(x), and additionally define the residual mapping F(x) = H(x) − x. If x is passed directly to the output, the target the unit needs to learn is just the residual mapping F(x) = H(x) − x. The residual learning unit consists of a series of convolution layers plus a shortcut connection, through which the input x is passed to the output of the unit, so the output is z = F(x) + x, and the derivative of z with respect to x is:
\frac{\partial z}{\partial x} = \frac{\partial F(x)}{\partial x} + 1
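The constant term contributed by the shortcut is what keeps gradients from vanishing through stacked residual units. A minimal numerical check of this relation, using a hypothetical scalar residual map F standing in for the convolution layers:

```python
import numpy as np

def F(x):
    # Hypothetical residual mapping (stand-in for the conv layers).
    return 0.5 * np.tanh(x)

def z(x):
    # Residual unit output: the shortcut adds the input back.
    return F(x) + x

def numerical_grad(f, x, h=1e-6):
    # Central finite difference.
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.3
dz = numerical_grad(z, x)
dF = numerical_grad(F, x)
print(abs(dz - (dF + 1.0)) < 1e-6)  # dz/dx = dF/dx + 1
```

The check holds for any differentiable F, since the shortcut contributes exactly the identity derivative.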
In addition, each residual block contains BN (batch normalization) and ReLU (Rectified Linear Unit), which effectively increases the convergence speed.
In step 1), a Kullback-Leibler constraint term is added to the model prediction process, and the model is trained by minimizing the reconstruction loss between the input image and each reconstruction.
The first data set included the colon tissue image challenge (GlaS) data set held by MICCAI in 2015 and 34 H&E stained tissue section images obtained at 10X magnification by an Aperio digital slide scanner, down-sampled to 128 × 128 pixels for model training and validation.
The training set consists of 22000 RGB tissue images of size 128 × 128 pixels. The model is trained with a batch size of 64 and takes approximately 30 minutes to achieve good results. Training uses the ADAM optimizer with an initial learning rate of 1e-3, gradually reduced at the end of each round. The standard mean square error loss is used as the reconstruction penalty. The results in fig. 5 show that the hematoxylin and eosin stains and the background of mixed RGB images can be successfully separated while preserving the tissue structure. During training, we sample a randomly selected point in the Gaussian distribution of the image to form an estimate of the color distribution of a region in the image. This process is repeated for each case, and the distributions are combined to form an estimated staining matrix. The mean of each distribution represents the value to which our model assigns maximum probability, while the standard deviation describes the precision of the model.
The KL divergence between two Gaussian variables is:
KL\!\left(\mathcal{N}(\mu_1,\sigma_1^2)\,\|\,\mathcal{N}(\mu_2,\sigma_2^2)\right) = \log\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}
in the formula, σ1、σ2、μ1、μ2The standard deviation and the mean of two normal distributions are shown respectively.
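The closed form of the KL divergence between two univariate Gaussians can be implemented and sanity-checked directly:

```python
import math

def kl_gauss(mu1, sigma1, mu2, sigma2):
    """KL(N(mu1, sigma1^2) || N(mu2, sigma2^2)) in closed form."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

# Zero iff the two distributions coincide; asymmetric otherwise.
print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # 0.0
print(kl_gauss(0.0, 1.0, 1.0, 1.0))  # 0.5
```

Note the asymmetry: KL(N1 ‖ N2) generally differs from KL(N2 ‖ N1), which is why the constraint is directed from the per-pixel distribution toward the reconstruction.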
If we define N as the number of pixels per image, M as the number of images per round, K as the number of stain classes, and C as the number of image channels, then the constraint term is:
\mathcal{L}_{KL} = \frac{1}{MNKC} \sum_{m,n,k,c} KL\!\left( \mathcal{N}(\mu_{m,n,k,c}, \sigma_{m,n,k,c}^2) \,\|\, \mathcal{N}(\mu'_{m,k,c}, \sigma'^2_{m,k,c}) \right)

where μ_{m,n,k,c}, σ_{m,n,k,c} and μ'_{m,k,c}, σ'_{m,k,c} are the means and standard deviations of the original image's and the predicted reconstruction's normal distributions, respectively. The subscripts m, n, k, c index the images per round, the image pixels, the stain types, and the image channels, respectively.
For the stain separation task, to examine its separation effect, the following loss function was defined:
\mathcal{L}_{rec} = \frac{1}{NM} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( x_{n,m} - x'_{n,m} \right)^2
where x_{n,m} denotes the nth pixel of the mth image, and x'_{n,m} denotes the corresponding predicted image pixel.
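The reconstruction loss is a per-pixel mean squared error, sketched here on flattened batches (folding channels into the pixel axis is an assumption for illustration):

```python
import numpy as np

def reconstruction_loss(x, x_pred):
    """Mean squared error over all pixels of all images in a batch.

    x, x_pred: arrays of shape (M, N) -- M images, N pixels each
    (channels may be folded into N).
    """
    x = np.asarray(x, dtype=float)
    x_pred = np.asarray(x_pred, dtype=float)
    return np.mean((x - x_pred) ** 2)

# One image of two pixels; only the second pixel is mispredicted by 2.
print(reconstruction_loss([[1.0, 2.0]], [[1.0, 4.0]]))  # 2.0
```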
2) The variational level set image segmentation algorithm based on the improved sign pressure function segments the glandular lumen region from the background channel image.
In step 2), the lumen region is segmented from the background image of the stain separation, considering that the glandular lumens are close in staining distribution to the non-nuclear, non-cytoplasmic areas. The spf function is constructed from the statistical information of the image so that it preserves or even enhances the salient foreground object. As in the classical C-V model, the contour C divides the image I into two parts, denoted Ω_1 = in(C) and Ω_2 = out(C), and the SPF function is constructed using the global stain intensity distribution of the image. The staining intensity distribution functions of the regions Ω_1, Ω_2 are denoted P_1, P_2:

P_i(x) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left(-\frac{(x - u_i)^2}{2\sigma_i^2}\right), \quad i = 1, 2

where u_i and σ_i are the mean and standard deviation of the Gaussian distribution of the staining intensity. Following the level set method, a level set function φ is embedded, with Ω_1 = {φ > 0} and Ω_2 = {φ < 0}; the corresponding contour C is represented by the zero level set {φ = 0}.
The following SPF function was constructed using the above stain intensity distribution function:
Figure BDA0002460001650000087
the level set equation is obtained as follows:
Figure BDA0002460001650000088
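As an illustration, a distribution-based spf term of this kind can be sketched as follows. Since the exact formula appears in the original only as an image, the SBGFRLS-style normalization used here (difference of region distributions divided by its maximum absolute value) is an assumption:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def spf(image, phi):
    """Sign pressure force built from the intensity distributions of the
    regions inside (phi > 0) and outside (phi < 0) the current contour."""
    inside, outside = image[phi > 0], image[phi < 0]
    p1 = gauss_pdf(image, inside.mean(), inside.std() + 1e-8)
    p2 = gauss_pdf(image, outside.mean(), outside.std() + 1e-8)
    diff = p1 - p2
    return diff / (np.max(np.abs(diff)) + 1e-8)

# Bright blob on a dark background; phi initialized as a rough circle.
img = np.zeros((32, 32)); img[10:22, 10:22] = 1.0
img += 0.05 * np.random.default_rng(0).normal(size=img.shape)
yy, xx = np.mgrid[:32, :32]
phi = 8.0 - np.hypot(yy - 16, xx - 16)   # positive inside radius 8
s = spf(img, phi)
print(s.min() >= -1 and s.max() <= 1)
```

The force is positive where the pixel intensity is better explained by the inside distribution and negative otherwise, so iterating φ ← φ + dt · spf · |∇φ| pushes the contour toward the foreground object.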
The traditional SPF method and the improved SPF method are compared on the lumen segmentation effect. The spf function in the traditional level set method with binary selection and Gaussian-filter regularization serves to highlight the segmentation target: in the algorithm implementation the target to be segmented becomes more prominent after each iteration. The improved spf method mainly constructs a new spf function from the statistical information of the image, while preserving or even enhancing the salient foreground object. As can be seen from fig. 6c, the spf method that takes image statistics into account can accurately identify lumen regions.
Since the improved spf method is based on image statistics, and in the background channel image obtained from the stain separation the lumens and the background have similar probability, some small background blocks in the image are also segmented. These small objects are therefore removed from the segmented image, and the final lumen segmentation effect is shown in fig. 6d.
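The small-object removal step can be sketched with a connected-component filter; the patent does not name a specific algorithm or size threshold, so both are assumptions here:

```python
import numpy as np
from collections import deque

def remove_small_objects(mask, min_size):
    """Drop 4-connected foreground components smaller than min_size pixels."""
    mask = np.asarray(mask, dtype=bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill one component.
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y, x] = True
    return out

m = np.zeros((10, 10), dtype=bool)
m[1:6, 1:6] = True   # large lumen candidate (25 px)
m[8, 8] = True       # spurious background speck
cleaned = remove_small_objects(m, min_size=5)
print(cleaned.sum())  # 25
```

In practice `scipy.ndimage.label` or `skimage.morphology.remove_small_objects` does the same job; the hand-rolled BFS above is only to keep the sketch self-contained.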
3) The hematoxylin intensity obtained by staining separation is used as the input characteristic of the CNN, and a pixel set V representing the epithelial cell nucleus and the outer contour L of the area where the epithelial cell nucleus is located are obtained.
4) And drawing the gland contour according to the lumen shape characteristics by using a graphic shape description method based on the minimum inertia axis and the chain code.
The minimum inertia axis is the line that minimizes the sum of squared distances to all points of the shape boundary; its physical meaning is the axis about which the shape's moment of inertia is smallest, and it is a unique reference line that preserves the shape's orientation. By its physical definition, it must pass through the centroid O of the shape. Mathematically, let the line be x + By + C = 0; the minimum inertia axis minimizes

I(B, C) = \sum_{(x_i, y_i) \in E} \frac{(x_i + B y_i + C)^2}{1 + B^2}

where E = {(x_i, y_i)} is the set of edge points. Using the condition that the minimum inertia axis passes through the centroid O(x_0, y_0), i.e. x_0 + B y_0 + C = 0, B and C are obtained, giving the minimum inertia axis expression. In past work, search experiments using the minimum inertia axis and feature points on the image boundary showed that this representation can use both the shape boundary contour and region information, and is invariant to shape transformations (translation, rotation, projection, scaling).
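The minimum inertia axis can be computed, for example, from the central second moments of the edge-point set, which is equivalent to the least-squares formulation; this sketch is an illustration, not the patent's exact procedure:

```python
import numpy as np

def min_inertia_axis(points):
    """Axis through the centroid about which the point set's moment of
    inertia is minimal, via central second moments."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                      # centroid O
    d = pts - c
    mu20 = np.sum(d[:, 0] ** 2)
    mu02 = np.sum(d[:, 1] ** 2)
    mu11 = np.sum(d[:, 0] * d[:, 1])
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return c, theta                           # centroid and axis angle

# Edge points scattered along the 45-degree line y = x.
pts = [(i, i) for i in range(10)]
c, theta = min_inertia_axis(pts)
print(round(np.degrees(theta), 1))  # 45.0
```

The returned angle is invariant to translation, and rotation of the point set rotates the axis by the same amount, matching the invariance properties noted above.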
Chain code representation of the shape: a chain code describes the object by a sequence of unit-length line segments in given directions. If chain codes are used for matching, the result depends on the choice of the first boundary pixel in the sequence. One way to normalize the chain code is to fix the starting edge pixel; another is to represent the boundary by the differences of successive directions along the chain code rather than by the absolute directions. A rotation-invariant chain code is obtained by the cyclic permutation that yields the minimum index. From a chosen starting point, the chain code is generated using 4-direction or 8-direction codes.
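The chain code generation and the difference-code normalization described above can be sketched as follows (the boundary input format, an ordered list of pixel coordinates, is an assumption for illustration):

```python
def chain_code(boundary):
    """8-direction Freeman chain code of a closed pixel boundary,
    given as an ordered list of (row, col) points."""
    # Direction index for each (drow, dcol) step; 0 = east, counterclockwise.
    dirs = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
            (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}
    code = []
    for (r1, c1), (r2, c2) in zip(boundary, boundary[1:] + boundary[:1]):
        code.append(dirs[(r2 - r1, c2 - c1)])
    return code

def normalize(code):
    """Starting-point-invariant form: difference code, then the cyclic
    permutation of minimum lexicographic index."""
    n = len(code)
    diff = [(code[(i + 1) % n] - code[i]) % 8 for i in range(n)]
    return min(diff[i:] + diff[:i] for i in range(n))

# Unit square traced clockwise in (row, col) image coordinates.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(chain_code(square))  # [0, 6, 4, 2]
```

Tracing the same square from a different starting pixel yields a shifted raw code but the same normalized code, which is the property the matching step relies on.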
In step 4), a coordinate system is established from the minimum inertia axis as the reference axis together with its perpendicular, with the image centroid as the origin O of the coordinate system, as in the embodiment of fig. 4. The 4 quadrants of the coordinate system are each divided into 3 equal parts in 3 directions according to the direction chain code method, so that a chain code is generated for the whole image in 12 directions. The direction perpendicular to the minimum inertia axis and closest to the lumen contour is defined as direction 0, and directions 0-11 are defined by rotating 30° counterclockwise in turn. The intersections of the 12 rays with vertex O and the lumen contour, C_0, C_1, ..., C_11, constitute the chain code representing the lumen contour; similarly, the intersections of these rays with the epithelial nucleus set V represent candidate contour chain codes. In fig. 4b, C_0 and C_1 are the intersections of rays 0 and 1 with the contour C; the triangle formed by the three points C_0, C_1, O is a characteristic triangle of the lumen region (the lumen contour point in each direction is unique). V_0 and V_1, the intersections of rays 0 and 1 with V, define characteristic triangles of the gland region (there may be multiple points per direction in the epithelial cell region). A triangular membership function is used to measure similarity. For each characteristic triangle, let θ_1, θ_2, θ_3 be the triangle's interior angles, for which:
\theta_1 \ge \theta_2 \ge \theta_3 > 0, \quad \theta_1 + \theta_2 + \theta_3 = 180^\circ
its trigonometric membership function can be found:
[membership function formulas, given in the original as images]
where d is the Euclidean distance between the vertices of the characteristic triangle. The similarity between the membership value μ_n of the nth characteristic triangle of the lumen region and the membership value μ'_n of the nth characteristic triangle of the gland region is:
[similarity formula, given in the original as an image]
the overall similarity of all eigenvalues is:
[overall similarity formula TotalSim(c, v), given in the original as an image]
The closer TotalSim(c, v) is to 1, the more similar the two contours are.
The method aims to find a more accurate gland contour based on two constraints:
1. The target contour S, based on the epithelial outer-contour shape feature L, is similar to the lumen contour C, so a feature similarity constraint term is constructed:
α≤TotalSim(l,v)≤1
β≤TotalSim(c,v)≤1
2. the target contour S is as close as possible to the epithelial cell nucleus outer contour L, so a distance minimum constraint is constructed:
\min_{j} \sum_{i=0}^{11} d\!\left(l_i, v_{i,j}\right)
where i = 0, 1, ..., 11 denotes the direction sequence and j = 0, 1, ..., J denotes the candidate contours similar to the lumen contour; l_i denotes the intersection of the epithelial nucleus outer contour with the ith direction, and v_{i,j} the intersection of the jth candidate contour with the ith direction. With reference line 0 as the starting direction, the similarity of the characteristic triangles in each direction is searched in turn counterclockwise. Taking fig. 4b as an example, the characteristic triangles △v_{0,0}v_{1,0}O, △v_{0,0}v_{1,1}O, △v_{0,0}v_{1,2}O are compared respectively with the lumen characteristic triangle △c_0c_1O; constraint 1 determines the candidate contour point in direction 1, which in turn serves as the reference starting point for determining the candidate point in direction 2 in the same way, until the 12 candidate points in all directions are determined in sequence as one candidate contour chain code. If there are J candidate points in the starting reference direction 0, J candidate contours are formed in this manner, and the optimal gland contour is determined from the candidates according to constraint 2. In fig. 4c, light yellow is the lumen contour, orange the lumen-contour characteristic triangle, and red one of the candidate contours obtained according to the similarity.
We calculated overlap (OL), sensitivity (SN), specificity (SP), and positive predictive value (PPV). For each image, the ground-truth set of pixels is denoted A(t), and A(s) is the set of pixels within the automatically segmented optimal contour. OL and PPV are defined as:
OL = \frac{|A(s) \cap A(t)|}{|A(s) \cup A(t)|}, \qquad PPV = \frac{|A(s) \cap A(t)|}{|A(s)|}
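Under the standard definitions of these metrics (assumed here, since the patent's formulas appear only as an image), they can be computed from boolean masks as:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """OL (overlap), SN, SP, PPV from boolean masks A(s)=pred, A(t)=truth."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    return {
        "OL": tp / (tp + fp + fn),   # |A(s) ∩ A(t)| / |A(s) ∪ A(t)|
        "SN": tp / (tp + fn),
        "SP": tn / (tn + fp),
        "PPV": tp / (tp + fp),       # |A(s) ∩ A(t)| / |A(s)|
    }

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True   # 16 px
pred = np.zeros((8, 8), bool); pred[2:6, 2:7] = True     # 20 px, 16 overlap
m = segmentation_metrics(pred, truth)
print(round(m["PPV"], 2))  # 0.8
```

Here the prediction covers all of the truth plus a 4-pixel over-segmentation, so SN = 1 while OL and PPV both fall to 0.8.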
Table 1 quantitatively shows the example segmentation results of the method. To illustrate its effectiveness, we overlaid the segmentation mask on the original image to represent the segmentation effect visually. Fig. 7a shows the respective example segmentation effects on the GlaS data set, where the first row is the labeled mask, the second row the mask segmented only according to the epithelial glands, and the third row the segmentation mask of the method of the present invention. We also applied the method of the invention to an additional colorectal cancer data set; the segmentation effect is shown in fig. 7b, where the first row is the labeled mask and the second row the segmentation mask of the method of the invention.
TABLE 1
[Table 1: quantitative results of the example segmentations; reproduced only as an image in the original document]
To verify the effectiveness of the method presented herein, we compared segmentation results on 60 examples between three other gland segmentation methods and our proposed method. The predicted masks of these methods were compared with the ground truth, and the associated measurement indices are shown in Table 2. From Table 2 it can be seen that our method yields the best segmentation results.
TABLE 2
[Table 2: measurement indices of the compared segmentation methods; reproduced only as an image in the original document]
As can be seen from Table 2, the proposed segmentation method based on lumen similarity improves the average pixel accuracy by at least 3%, and the Dice similarity coefficient is improved by 0.033. Meanwhile, the standard deviations of the pixel accuracy and of Dice remain low, which shows that the segmentation method is relatively stable and can effectively mitigate abnormal gland segmentation errors.
In conclusion, the invention provides a new unsupervised stain separation method, making the information contained in the H&E stained image more independent and easier to identify, and handling cases of uneven staining intensity and indistinct staining differences. A new set of features for the segmented gland contour is developed and combined. The morphological characteristics of the lumen inside the gland structure are considered: the shape of the lumen is obviously distorted during gland canceration, so the arrangement of the surrounding epithelial cells is irregular, but in most cases they are still distributed around the periphery of the lumen. Therefore, the morphological features of the lumen and the outer gland contour are represented with the minimum inertia axis as the reference, combined with a chain code method. Since the lumen is more independent and easier to segment than the epithelial cells, the segmentation method based on the lumen shape can handle cases where glands are adhered and epithelial cells are fused with stroma. The invention specifically provides a method for expressing the shape features of the lumen and the outer gland contour, which can be used for gland segmentation work. In subsequent work, the method will be applied to feature extraction of benign and malignant tumors and provides an effective solution for tumor classification, so that clinical decisions can be made effectively.
Matters not described in detail in the present invention are well known to those skilled in the art.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A method for segmenting poorly differentiated glands based on internal and external stresses, characterized by comprising the following steps:
1) performing stain separation on the stained pathological tissue image based on the ResUnet architecture to obtain a hematoxylin channel image and a background channel image;
2) segmenting the glandular lumen region from the background channel image using a variational level set image segmentation algorithm based on an improved signed pressure force (SPF) function;
3) taking the hematoxylin channel map as the SC-CNN input feature to obtain the epithelial cell region boundary, namely the gland boundary formed by epithelial cell nuclei;
4) drawing the gland contour according to the lumen shape features using a graphic shape description method based on the minimum inertia axis and chain codes.
2. The method for segmenting a poorly differentiated gland according to claim 1, wherein performing stain separation based on the ResUnet architecture in step 1) specifically comprises: the network consists of three parts, namely a contraction path, a bridging path and an expansion path, which together complete the staining intensity prediction for each of the Hematoxylin, Eosin and Background channels.
3. The method for segmenting a poorly differentiated gland according to claim 2, wherein: the contraction path reduces the spatial dimension of the feature maps while increasing their number layer by layer, extracting the input image into compact features; the bridging part connects the contraction and expansion paths and performs the stain color matrix prediction; the expansion path gradually restores the details and corresponding spatial dimensions of the target, and its output is used for the staining intensity matrix prediction.
4. The method for segmenting a poorly differentiated gland according to claim 2, wherein: the contraction path and the expansion path both comprise a plurality of residual blocks, and in each residual block the feature map is halved by convolution.
5. The method for segmenting a poorly differentiated gland according to claim 4, wherein: before each residual block, upsampled feature maps from the lower level are concatenated with the feature maps from the corresponding encoding path.
6. The method for segmenting a poorly differentiated gland based on internal and external stresses according to claim 1, wherein a Kullback-Leibler constraint term is added in the model prediction process of the stain separation in step 1), and the model is trained by minimizing the reconstruction loss between the input image and each reconstruction.
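For illustration only — claim 6 names a Kullback-Leibler constraint term without giving its form — the discrete KL divergence between two predicted staining-intensity distributions could be evaluated as below; the function name and epsilon guard are assumptions.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D_KL(P || Q).
    p, q: probability distributions (sequences summing to 1).
    eps guards against division by zero / log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0.0)
```

The divergence is zero for identical distributions and grows as the prediction drifts from the reference, which is what makes it usable as a training constraint.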
7. The method for segmenting a poorly differentiated gland according to claim 1, wherein step 2) specifically constructs the SPF function from the statistical information of the image, so that the constructed SPF function preserves or even enhances salient foreground targets.
8. The method for segmenting a poorly differentiated gland according to claim 7, wherein the method specifically comprises the following steps:
the contour C divides the image I into an inner part and an outer part, denoted Ω1 = in(C) and Ω2 = out(C) respectively; the SPF function is constructed using the global staining intensity distribution of the image, with P1 and P2 denoting the staining intensity distribution functions of the regions Ω1 and Ω2:
P1(I(x)) = 1/(√(2π)·σ1) · exp(−(I(x) − u1)² / (2σ1²))
P2(I(x)) = 1/(√(2π)·σ2) · exp(−(I(x) − u2)² / (2σ2²))
where u and σ are respectively the mean and standard deviation of the Gaussian distribution of the staining intensity; according to the level set method, a level set function Φ is embedded, with Ω1 = {Φ > 0} and Ω2 = {Φ < 0}, and the corresponding contour C is represented by the zero level set {Φ = 0};
the following SPF function is constructed using the above staining intensity distribution functions:
spf(I(x)) = (P1(I(x)) − P2(I(x))) / max(|P1(I(x)) − P2(I(x))|)
the level set equation is obtained as follows:
∂Φ/∂t = spf(I(x)) · α · |∇Φ|
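A minimal sketch of evaluating the SPF per pixel from the two Gaussian staining-intensity distributions P1 and P2 of claim 8. The normalization by the maximum absolute difference is an assumption about the construction, and the function names are hypothetical.

```python
import math

def gaussian_pdf(x, u, sigma):
    """Gaussian staining-intensity distribution P_i with mean u and
    standard deviation sigma."""
    return (math.exp(-(x - u) ** 2 / (2 * sigma ** 2))
            / (math.sqrt(2 * math.pi) * sigma))

def spf(intensities, u1, s1, u2, s2):
    """Signed pressure force per pixel, normalized to [-1, 1]:
    positive where the intensity is more likely under the inner-region
    distribution P1, negative where more likely under the outer-region
    distribution P2 (normalization is assumed)."""
    diff = [gaussian_pdf(x, u1, s1) - gaussian_pdf(x, u2, s2)
            for x in intensities]
    m = max(abs(d) for d in diff) or 1.0  # avoid division by zero
    return [d / m for d in diff]
```

A positive SPF value pushes the zero level set outward and a negative value pulls it inward, which is how the contour converges to the lumen boundary.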
9. The method for segmenting a poorly differentiated gland based on internal and external stresses according to claim 1, wherein the spatially constrained SC-CNN method is used for nucleus detection and the softmax CNN method is used for nucleus classification in step 3), and the hematoxylin intensity obtained by stain separation is used as the input feature of the CNN to obtain the pixel set V representing the epithelial cell nuclei and the contour L of the region where the epithelial cell nuclei are located.
10. The method for segmenting a poorly differentiated gland according to claim 1, wherein step 4) specifically comprises: establishing a coordinate system with the minimum inertia axis and its perpendicular as reference axes, taking the image centroid as the origin O of the coordinate system, and equally dividing the coordinate plane into several regions in multiple directions according to the directional chain code method, so that a chain code is generated for the whole image from multiple directions; step 4) further comprises: finding the membership value μn of the n-th feature triangle of the lumen region and the membership value μ'n of the n-th feature triangle of the gland region, and performing similarity comparison on all similarity values of all feature values.
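The coordinate-system construction in claim 10 can be sketched as follows: the centroid O and the orientation of the minimum inertia axis are obtained from second-order central moments, and each point is then assigned to one of several equal angular sectors measured from that axis. The 12-direction default and all function names are illustrative assumptions.

```python
import math

def minimum_inertia_axis(points):
    """Centroid O and orientation (radians) of the minimum inertia
    axis of a point set, from second-order central moments."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta

def direction_chain_code(point, centroid, theta, n_dirs=12):
    """Index (0..n_dirs-1) of the equal angular sector containing
    `point`, measured counterclockwise from the minimum inertia axis."""
    ang = math.atan2(point[1] - centroid[1],
                     point[0] - centroid[0]) - theta
    ang %= 2 * math.pi
    return int(ang / (2 * math.pi / n_dirs))
```

Sampling the lumen and gland contours with `direction_chain_code` yields one contour point per direction, from which the feature triangles of claim 10 are built.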
CN202010317512.6A 2020-04-21 2020-04-21 Low-differentiation gland segmentation method based on internal and external stresses Active CN111507992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010317512.6A CN111507992B (en) 2020-04-21 2020-04-21 Low-differentiation gland segmentation method based on internal and external stresses


Publications (2)

Publication Number Publication Date
CN111507992A true CN111507992A (en) 2020-08-07
CN111507992B CN111507992B (en) 2021-10-08

Family

ID=71876257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010317512.6A Active CN111507992B (en) 2020-04-21 2020-04-21 Low-differentiation gland segmentation method based on internal and external stresses

Country Status (1)

Country Link
CN (1) CN111507992B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113790946A (en) * 2021-11-14 2021-12-14 梅傲科技(广州)有限公司 Intercellular substance staining kit for digital pathological scanning analysis system
CN115063383A (en) * 2022-06-29 2022-09-16 北京理工大学 Bright red mole segmentation method and device based on multi-color space adaptive fusion

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1924926A (en) * 2006-09-21 2007-03-07 复旦大学 Two-dimensional blur polymer based ultrasonic image division method
US20070058836A1 (en) * 2005-09-15 2007-03-15 Honeywell International Inc. Object classification in video data
CN107798684A (en) * 2017-11-07 2018-03-13 河南师范大学 A kind of active contour image partition method and device based on symbol pressure function
US10055551B2 (en) * 2013-10-10 2018-08-21 Board Of Regents Of The University Of Texas System Systems and methods for quantitative analysis of histopathology images using multiclassifier ensemble schemes
US10244991B2 (en) * 2014-02-17 2019-04-02 Children's National Medical Center Method and system for providing recommendation for optimal execution of surgical procedures
CN110110634A (en) * 2019-04-28 2019-08-09 南通大学 Pathological image polychromatophilia color separation method based on deep learning
CN110223271A (en) * 2019-04-30 2019-09-10 深圳市阅影科技有限公司 The automatic horizontal collection dividing method and device of blood-vessel image


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KORSUK SIRINUKUNWATTANA ET.AL: "Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images", 《IEEE TRANSACTIONS ON MEDICAL IMAGING》 *
KUN ZHANG ET.AL: "Multiple Morphological Constraints-Based Complex Gland Segmentation in Colorectal Cancer Pathology Image Analysis", 《COMPLEXITY》 *
SUN LIN ET.AL: "Active Contour Image Segmentation Method Based on Dual Signed Pressure Function", 《COMPUTER ENGINEERING AND APPLICATIONS》 *
LI ZONGMIN ET.AL: "Image Shape Description Method Based on Minimum Inertia Axis and Chain Code", 《JOURNAL ON COMMUNICATIONS》 *


Also Published As

Publication number Publication date
CN111507992B (en) 2021-10-08

Similar Documents

Publication Publication Date Title
Raza et al. Micro-Net: A unified model for segmentation of various objects in microscopy images
Ding et al. Multi-scale fully convolutional network for gland segmentation using three-class classification
Manivannan et al. Structure prediction for gland segmentation with hand-crafted and deep convolutional features
Naik et al. Gland segmentation and computerized gleason grading of prostate histology by integrating low-, high-level and domain specific information
CN111091527A (en) Method and system for automatically detecting pathological change area in pathological tissue section image
Ye et al. Automatic graph cut segmentation of lesions in CT using mean shift superpixels
Ahmad et al. [Retracted] Efficient Liver Segmentation from Computed Tomography Images Using Deep Learning
CN111507992B (en) Low-differentiation gland segmentation method based on internal and external stresses
Lv et al. Nuclei R-CNN: improve mask R-CNN for nuclei segmentation
Bourigault et al. Multimodal PET/CT tumour segmentation and prediction of progression-free survival using a full-scale UNet with attention
CN117392093A (en) Breast ultrasound medical image segmentation algorithm based on global multi-scale residual U-HRNet network
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN114742758A (en) Cell nucleus classification method in full-field digital slice histopathology picture
Dinsdale et al. STAMP: Simultaneous Training and Model Pruning for low data regimes in medical image segmentation
Kitrungrotsakul et al. Interactive deep refinement network for medical image segmentation
Nawaz et al. MSeg‐Net: A Melanoma Mole Segmentation Network Using CornerNet and Fuzzy K‐Means Clustering
CN118230052A (en) Cervical panoramic image few-sample classification method based on visual guidance and language prompt
Nie et al. Spatial attention-based efficiently features fusion network for 3D-MR brain tumor segmentation
Jin et al. Automatic primary gross tumor volume segmentation for nasopharyngeal carcinoma using ResSE-UNet
Zhang et al. Multiple Morphological Constraints‐Based Complex Gland Segmentation in Colorectal Cancer Pathology Image Analysis
Tepe et al. Graph neural networks for colorectal histopathological image classification
Inamdar et al. A Novel Attention based model for Semantic Segmentation of Prostate Glands using Histopathological Images
Du et al. Semi-Supervised Skin Lesion Segmentation via Iterative Mask Optimization
Fu et al. Poorly differentiated colorectal gland segmentation approach based on internal and external stress in histology images
Guo et al. MRI Image Segmentation of Nasopharyngeal Carcinoma Using Multi-Scale Cascaded Fully Convolutional Network.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200807

Assignee: Hangzhou lanque technology partnership (L.P.)

Assignor: NANTONG University

Contract record no.: X2021980012590

Denomination of invention: A segmentation method of poorly differentiated glands based on internal and external stress

Granted publication date: 20211008

License type: Common License

Record date: 20211119

EE01 Entry into force of recordation of patent licensing contract