CN111507992B - Low-differentiation gland segmentation method based on internal and external stresses

Publication number: CN111507992B (application CN202010317512.6A)
Authority: CN (China)
Prior art keywords: contour, image, gland, lumen, epithelial cell
Legal status: Active
Application number: CN202010317512.6A
Other languages: Chinese (zh)
Other versions: CN111507992A
Inventors: 张堃, 付君红, 朱洪堃, 李子杰, 吴建国, 张培建
Current Assignee: Nantong University
Original Assignee: Nantong University
Application filed by Nantong University
Priority to CN202010317512.6A
Publication of CN111507992A
Application granted
Publication of CN111507992B

Classifications

    • G06T7/11 — Region-based segmentation (G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06T7/12 — Edge-based segmentation
    • G06N3/045 — Combinations of networks (G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/04 Architecture)
    • G06N3/047 — Probabilistic or stochastic networks
    • G06N3/08 — Learning methods
    • G06T2207/20081 — Training; Learning (indexing scheme for image analysis or image enhancement)
    • G06T2207/30004 — Biomedical image processing (indexing scheme)

Abstract

The invention discloses a low-differentiation gland segmentation method based on internal and external stresses, comprising the following steps: 1) performing staining separation on the pathological tissue staining image based on the ResUnet architecture to obtain a hematoxylin channel image and a background channel image; 2) segmenting the glandular lumen region from the background channel image with a variational level set image segmentation algorithm based on an improved sign pressure function; 3) taking the hematoxylin channel map as the SC-CNN input feature to obtain the epithelial cell region boundary, i.e. the gland boundary formed by epithelial cell nuclei; 4) drawing the gland contour according to the lumen shape features using a graphic shape description method based on the minimum inertia axis and chain codes. The invention makes the information contained in the H&E staining image more independent and easier to identify, so as to handle uneven staining intensity and indistinct staining differences; it develops and combines a new set of features for segmenting the gland contour, and specifically provides a method for representing lumen and gland outer-contour shape features.

Description

Low-differentiation gland segmentation method based on internal and external stresses
Technical Field
The invention relates to the technical field of image information processing, in particular to a low-differentiation gland segmentation method based on internal and external stress.
Background
Adenocarcinoma is a malignant tumor formed from glandular structures in epithelial tissue. It affects the distribution of cells and also alters the structure of the gland. A biopsy is tissue removed from a suspect organ in a minimally invasive manner and examined under a microscope by a pathologist, who must be accurate and able to process large amounts of data in order to detect minor abnormalities in the biopsy. Taking the digestive system as an example, histopathological staining images of the colon are the basis for detecting lesions. A typical histopathological image of the colon gland contains four tissue components: lumen, cytoplasm, epithelial cells and stroma (connective tissue, blood vessels, neural tissue, etc.). The luminal area is surrounded by oval structures called epithelial cells, and the overall structure is bounded by bold outlines formed by the epithelial nuclei.
Traditional methods mainly study gland appearance features and contour features. Appearance features rely on the fact that glands are composed of nuclei, cytoplasm and lumen; Sirinukunwattana, Jacobs et al. identify glandular objects using low-level information such as color and texture. Contour features rely on the fact that the gland structure is surrounded by a ring of epithelial cells, and many methods segment the gland by identifying epithelial cells. The random polygon model proposed by Sirinukunwattana, Fu et al. showed that a spatial random field model can segment benign gland contours well, but it is not suitable for segmenting malignant, diseased glands.
Recent developments in deep learning in the field of computer vision have made it possible to apply it to histopathological studies. The U-net proposed by Ronneberger et al. has achieved good results in medical image segmentation. Deep learning frameworks train a model on original images and manually annotated segmentation masks, back-propagating errors with the goal of minimizing the loss function and updating parameters layer by layer, so that the model segments images automatically. The deep contour-aware network proposed by Chen et al. showed that contours play an important role in gland segmentation, and the dual parallel-branch deep neural network proposed by Wang et al. combines fused contour and object features to segment glands accurately. All of the above methods require a large number of manually labeled images; however, labeling large numbers of medical images is very difficult.
Disclosure of Invention
The invention aims to remedy the defects of the prior art and provides a low-differentiation gland segmentation method based on internal and external stresses. First, an improved U-net performs staining separation on H&E images to obtain a hematoxylin channel (the hematoxylin channel image contains cell nucleus information), an eosin channel and a background channel (since the lumens resemble the background, lumen information is contained in the background channel). The hematoxylin channel is then used as the input of the SC-CNN framework to obtain the boundary of the epithelial cell region, i.e. the gland boundary formed by epithelial cell nuclei, while the lumen is segmented with an improved SPF method. The gland is then segmented by applying a graphic feature method represented by the minimum inertia axis and chain codes, according to the similarity between the lumen and the gland boundary, handling cases of adhered glands and glands fused with stroma.
In order to achieve this purpose, the invention provides the following technical scheme: a low-differentiation gland segmentation method based on internal and external stresses, comprising the following steps:
1) performing staining separation on the pathological tissue staining image based on the ResUnet architecture to obtain a hematoxylin channel image and a background channel image;
2) segmenting the glandular lumen region from the background channel image with a variational level set image segmentation algorithm based on an improved sign pressure function;
3) taking the hematoxylin channel map as the input feature of the spatially constrained SC-CNN to obtain the epithelial cell region boundary, i.e. the gland boundary formed by epithelial cell nuclei;
4) drawing the gland contour according to the lumen shape features using a graphic shape description method based on the minimum inertia axis and chain codes;
in step 3), a spatially constrained SC-CNN is used for nucleus detection and a softmax CNN for nucleus classification; the hematoxylin intensity obtained by staining separation is used as the CNN input feature to obtain a pixel set V representing epithelial cell nuclei and the outer contour L of the region where the epithelial cell nuclei are located;
step 4) specifically comprises the following: the minimum inertia axis is taken as the reference axis, a coordinate system is established from the minimum inertia axis and its perpendicular with the image centroid as the origin O, and then, according to the direction chain code method, the regions of the coordinate system are each divided into several equal parts over several directions, so that a chain code over those directions is generated for the whole image; step 4) also comprises: retrieving the membership value μ_n of the n-th feature triangle of the lumen region and the membership value μ'_n of the n-th feature triangle of the gland region, and comparing the similarity over all feature values.
Preferably, the staining separation based on the ResUnet architecture in step 1) specifically comprises: the network consists of three parts, a contraction path, a bridge and an expansion path, which together perform staining intensity prediction for each of the Hematoxylin, Eosin and Background channels.
Preferably, the contraction path reduces the spatial dimensions of the feature maps while increasing their number layer by layer, extracting the input image into compact features; the bridge part connects the contraction and expansion paths and implements the stain color matrix prediction; the expansion path gradually restores the details and corresponding spatial dimensions of the target, and its output is used for staining intensity matrix prediction.
Preferably, the contraction path and the expansion path each comprise a number of residual blocks, and in each residual block the feature map is halved by convolution.
Preferably, before each residual block there is a concatenation of the upsampled feature maps from the lower level and the feature maps from the corresponding encoding path.
Preferably, in the staining separation of step 1), a Kullback-Leibler constraint term is added to the model prediction process, and the model is trained by minimizing the reconstruction loss between the input image and each reconstruction.
Preferably, step 2) constructs the SPF function using the statistical information of the image, so that the constructed SPF function preserves or even enhances the salient foreground object.
Preferably, the method specifically comprises the following steps:
the contour C divides the image I into an inner part and an outer part, denoted respectively as Ω1 = in(C) and Ω2 = out(C); the SPF function is constructed using the global staining intensity distribution of the image, with P1, P2 denoting the staining intensity distribution functions of the regions Ω1, Ω2:

$$P_i(x) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\!\left(-\frac{(I(x)-u_i)^2}{2\sigma_i^2}\right),\quad i = 1, 2$$

where u_i and σ_i are the mean and standard deviation of the Gaussian distribution of the staining intensity; following the level set method, a level set function φ is embedded, assuming Ω1 = {φ > 0} and Ω2 = {φ < 0}, and the corresponding contour C can be represented by the zero level set {φ = 0};

the following SPF function is constructed using the above staining intensity distribution functions:

$$\mathrm{spf}(I(x)) = \frac{P_1(x) - P_2(x)}{\max_x\,|P_1(x) - P_2(x)|}$$

the level set equation is obtained as follows:

$$\frac{\partial\phi}{\partial t} = \alpha\,\mathrm{spf}(I(x))\,|\nabla\phi|$$
compared with the prior art, the invention has the beneficial effects that:
(1) the invention provides a novel unsupervised staining separation method, making the information contained in an H&E staining image more independent and easier to identify, so as to handle uneven staining intensity and indistinct staining differences;
(2) a new set of features for segmenting the gland contour is developed and combined. The morphological characteristics of the lumen inside the gland structure are considered: the lumen shape is markedly distorted during gland canceration, so the arrangement of the surrounding epithelial cells becomes irregular, yet in most cases they remain distributed around the lumen periphery; therefore, the lumen and gland outer-contour shape features are represented with the minimum inertia axis as reference combined with a chain code method, and since the lumen is more independent and easier to segment than the epithelial cells, the lumen-shape-based segmentation method can handle cases where glands are adhered and epithelial cells are fused with stroma;
(3) the invention specifically provides a method for representing lumen and gland outer-contour shape features, which can be used for gland segmentation work; in subsequent work, the method will be applied to feature extraction of benign and malignant tumors and provides an effective solution for tumor classification, so that clinical decisions can be made effectively.
Drawings
FIG. 1 is a typical histopathological image of colon glands and glandular structure;
FIG. 2 is a schematic diagram of the segmentation method of the present invention;
FIG. 3 is a schematic view of a staining separation model of the present invention;
FIG. 4 is a representation of the minimum inertia axis and chain code based features of the present invention;
FIG. 5 is a graph showing the staining separation effect of the present invention;
FIG. 6 is a graph of the lumen segmentation effect of the present invention;
fig. 7 is a diagram showing the effect of gland segmentation according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1 and fig. 2, the gland segmentation method based on internal and external stresses of the invention comprises the following steps:
1) as shown in fig. 3, the pathological tissue staining images undergo staining separation based on the ResUnet architecture to obtain hematoxylin channel and background channel images.
In step 1), a ResUnet architecture is constructed for staining separation. The network consists of three parts, a contraction path, a bridge and an expansion path, which together perform staining intensity prediction for the H (hematoxylin), E (eosin) and B (background) channels. The contraction path reduces the spatial dimensions of the feature maps while increasing their number layer by layer, extracting the input image into compact features. The bridge part connects the contraction and expansion paths and implements the stain color matrix prediction. The expansion path gradually restores the details and corresponding spatial dimensions of the target, and its output is used for staining intensity matrix prediction.
In step 1), the contraction path has a plurality of residual blocks. In each residual block, the feature mapping is reduced by half by convolution. Accordingly, the expansion path is also composed of the corresponding residual block. Before each residual block there is a concatenation of upsampling from the feature maps of lower levels and the feature maps from the corresponding coding path.
The residual unit is interpreted as follows: assuming the input of a neural network unit is x and the desired output is H(x), a residual mapping F(x) = H(x) − x is defined; if x is passed directly to the output, the target the unit needs to learn is just this residual mapping. The residual learning unit consists of a series of convolution layers and a shortcut (skip connection); the input x is passed through the shortcut to the output of the unit, giving output z = F(x) + x, and the partial derivative of z with respect to x is:

$$\frac{\partial z}{\partial x} = \frac{\partial F(x)}{\partial x} + 1$$

The constant term 1 in this partial derivative effectively avoids the vanishing-gradient problem during back propagation. In addition, each residual block contains BN (batch normalization) and ReLU (rectified linear unit), which effectively accelerates convergence.
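The effect of the identity shortcut on the gradient can be checked numerically. This minimal NumPy sketch (our own toy illustration, not the patent's network) compares the gradient through a 20-unit plain stack with a residual stack, assuming each unit's own gradient ∂F/∂x is 0.1:

```python
import numpy as np

# By the chain rule, the gradient through a stack of residual units
# z_k = F_k(z_{k-1}) + z_{k-1} is prod_k (dF_k/dx + 1).  The "+1" from
# the identity shortcut keeps the product from vanishing even when each
# dF_k/dx is small.
dF = np.full(20, 0.1)            # hypothetical per-unit gradient of F alone
plain = np.prod(dF)              # plain stack: 0.1**20, numerically ~0
residual = np.prod(dF + 1.0)     # residual stack: 1.1**20, stays useful
print(plain, residual)
```

The plain stack's gradient collapses to about 1e-20 while the residual stack's stays above 6, which is the point of the derivative identity above.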
In the step 1), a Kullback-Leibler constraint term is added in the model prediction process, and the model is trained by minimizing the reconstruction loss between the input image and each reconstruction.
The first dataset comprises the colon tissue image challenge (GlaS) dataset held by MICCAI in 2015 and 34 H&E stained tissue section images obtained at 10X magnification with an Aperio digital slide scanner, downsampled to 128x128 pixels for model training and validation. For nucleus detection and classification, nuclei were manually annotated by experienced pathologists; since the study needs to identify epithelial nuclei, the nucleus annotations are divided into epithelial nuclei and others.
The training set consists of 22000 RGB tissue images of size 128x128 pixels. Training the model with a batch size of 64 takes approximately 30 minutes to achieve good results. The model was trained with the Adam optimizer; the initial learning rate of 1e-3 was gradually reduced at the end of each epoch. The standard mean squared error loss is used as the reconstruction penalty. The results in fig. 5 show that the hematoxylin and eosin stains and the background can be successfully separated from the mixed RGB images while preserving tissue structure. During training, we sample a randomly selected point in the Gaussian distribution of the image to form an estimate of the color distribution of a region in the image. This process is repeated for each case and the distributions are combined to form an estimated staining matrix. The mean of each distribution represents the value to which our model assigns the maximum probability, while the standard deviation describes the precision of the model.
The KL divergence between two Gaussian variables is:

$$D_{KL}\big(N(\mu_1,\sigma_1^2)\,\|\,N(\mu_2,\sigma_2^2)\big) = \log\frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$$

where σ1, σ2 and μ1, μ2 are the standard deviations and means of the two normal distributions, respectively.
If we define N as the number of pixels per image, M as the number of images per round, K as the number of stain classes, and C as the number of image channels, the constraint term is:

$$L_{KL} = \sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{k=1}^{K}\sum_{c=1}^{C} D_{KL}\big(N(\mu_{m,n,k,c},\,\sigma_{m,n,k,c}^2)\,\big\|\,N(\mu'_{m,k,c},\,\sigma'^2_{m,k,c})\big)$$

where μ_{m,n,k,c}, σ_{m,n,k,c} and μ'_{m,k,c}, σ'_{m,k,c} are the means and standard deviations of the normal distributions of the original image and of the predictive reconstruction, respectively; the subscripts m, n, k, c index the image within the round, the image pixel, the stain type, and the image channel.
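The closed-form Gaussian KL divergence above is straightforward to implement and sanity-check; a small NumPy sketch (the function name `kl_gauss` is our own):

```python
import numpy as np

def kl_gauss(mu1, sigma1, mu2, sigma2):
    """KL divergence between two univariate Gaussians,
    D_KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (np.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2.0 * sigma2**2)
            - 0.5)

# Identical distributions have zero divergence...
print(kl_gauss(0.0, 1.0, 0.0, 1.0))
# ...and the divergence grows as the predicted mean or spread drifts.
print(kl_gauss(0.0, 1.0, 1.0, 1.0))
```

In the constraint term, one such divergence is computed per pixel, stain type and channel, penalizing predicted reconstruction distributions that drift from the per-image estimates.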
For the stain separation task, the following loss function is defined to examine the separation effect:

$$L_{rec} = \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\big(x_{n,m} - x'_{n,m}\big)^2$$

where x_{n,m} denotes the n-th pixel of the m-th image and x'_{n,m} the corresponding predicted image pixel.
2) The variational level set image segmentation algorithm based on the improved sign pressure function segments the glandular lumen region from the background channel image.
In step 2), the lumen region is segmented from the background image of the staining separation, given that the glandular lumens are close in staining distribution to the non-nuclear, non-cytoplasmic regions. The SPF function is constructed using the statistical information of the image, so that it preserves or even enhances the salient foreground object. As in the classical C-V model, the contour C divides the image I into two parts, denoted Ω1 = in(C) and Ω2 = out(C), and the SPF function is constructed using the global staining intensity distribution of the image. The staining intensity distribution functions of regions Ω1, Ω2 are denoted P1, P2:
$$P_i(x) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\!\left(-\frac{(I(x)-u_i)^2}{2\sigma_i^2}\right),\quad i = 1, 2$$

where u_i and σ_i are the mean and standard deviation of the Gaussian distribution of staining intensity in Ω_i. Following the level set method, a level set function φ is embedded; assuming Ω1 = {φ > 0} and Ω2 = {φ < 0}, the corresponding contour C is represented by the zero level set {φ = 0}.

The following SPF function is constructed using the above staining intensity distribution functions:

$$\mathrm{spf}(I(x)) = \frac{P_1(x) - P_2(x)}{\max_x\,|P_1(x) - P_2(x)|}$$

The level set evolution equation is obtained as follows:

$$\frac{\partial\phi}{\partial t} = \alpha\,\mathrm{spf}(I(x))\,|\nabla\phi|$$
comparing the traditional SPF method with the improved SPF method in the lumen division effect. The spf function in the level set method of the traditional binary selection and Gaussian filter regularization has the function of highlighting the segmentation target. The target to be segmented will be more emphasized after each iteration in the algorithm implementation. The improved spf method mainly utilizes the statistical information of the image to construct a new spf function, and meanwhile, the new spf function has the function of keeping or even enhancing the prominent foreground object. As can be seen from fig. 6c, the spf method taking into account the image statistics enables to accurately identify lumen regions.
Since the improved spf method is based on image statistical information, the background channel image obtained from the dyeing separation process, where the lumens and the background probability tend to be consistent, will segment some small background blocks in the image, so that small objects are removed from the segmented image, and the final lumen segmentation effect is as shown in fig. 6 d.
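As a rough illustration of how a statistics-based sign pressure function behaves, the following NumPy sketch evaluates it on a synthetic "background channel" image. The (P1 − P2)/max|P1 − P2| form is our assumption about the improved SPF, and the image, contour initialization and parameters are all illustrative:

```python
import numpy as np

def gaussian_pdf(I, mu, sigma):
    return np.exp(-(I - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def spf(I, phi, eps=1e-8):
    """Sign pressure function from the staining-intensity statistics of the
    regions inside (phi > 0) and outside (phi <= 0) the contour.
    Assumed form: (P1 - P2) / max|P1 - P2|, bounded in [-1, 1]."""
    inside, outside = I[phi > 0], I[phi <= 0]
    p1 = gaussian_pdf(I, inside.mean(), inside.std() + eps)
    p2 = gaussian_pdf(I, outside.mean(), outside.std() + eps)
    d = p1 - p2
    return d / (np.abs(d).max() + eps)

# Synthetic background channel: bright lumen block on a dark background.
I = np.zeros((32, 32)); I[8:24, 8:24] = 1.0
I += 0.05 * np.random.default_rng(0).standard_normal(I.shape)

# Rough initial contour covering only part of the lumen.
phi = -np.ones_like(I); phi[12:20, 12:20] = 1.0

s = spf(I, phi)
# Positive pressure inside the bright region, negative outside: the
# update  phi += dt * alpha * s * |grad phi|  expands the contour
# toward the full lumen while shrinking it over the background.
print(s[16, 16] > 0, s[2, 2] < 0)
```

Iterating the level-set update with this pressure field drives the zero level set toward the boundary of the bright region, which is the mechanism described above.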
3) Taking the hematoxylin channel map as an SC-CNN input characteristic to obtain the boundary of the epithelial cell region, namely the gland boundary formed by the epithelial cell nucleus. Using a spatially restricted CNN (SC-CNN) for nuclear detection and a softmax CNN method for nuclear classification, the intensity of hematoxylin obtained by staining separation is used as an input feature of the CNN to obtain a pixel set V representing epithelial nuclei and an outer contour L of a region where the epithelial nuclei are located.
4) And drawing the gland contour according to the lumen shape characteristics by using a graphic shape description method based on the minimum inertia axis and the chain code.
The minimum inertia axis is the line that minimizes the sum of squared distances to all points of the figure boundary; its physical meaning is the axis about which the figure's moment of inertia is smallest, and it is the unique reference line for anchoring the shape's position. By this physical definition it must pass through the centroid O of the figure. Mathematically, for a line x + By + C = 0, the minimum inertia axis minimizes:

$$E(B, C) = \sum_i \frac{(x_i + B y_i + C)^2}{1 + B^2}$$

where {(x_i, y_i)} is the set of edge points. Since the minimum inertia axis passes through the centroid O(x_0, y_0), the condition x_0 + B y_0 + C = 0 holds, from which B and C are obtained, giving the minimum inertia axis expression. In past work, retrieval experiments have been conducted using the minimum inertia axis and feature points on the image boundary; this representation uses both the shape boundary contour and region information and is invariant to shape transformations (translation, rotation, projection, scaling).
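The minimization above has a closed-form solution via the central second moments: the minimum inertia axis passes through the centroid along the principal eigenvector of the boundary points' covariance. A small NumPy sketch (our own illustration, not the patent's implementation):

```python
import numpy as np

def min_inertia_axis(points):
    """Minimum inertia axis of a point set: the line through the centroid
    minimizing the sum of squared perpendicular distances, i.e. the
    largest-eigenvalue eigenvector of the central second-moment matrix."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # Central second-moment (covariance) matrix of the boundary points.
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Direction of maximum variance = minimum perpendicular spread.
    direction = eigvecs[:, np.argmax(eigvals)]
    return centroid, direction

# Points along y = 2x: the recovered axis should have slope 2 and pass
# through the origin (the centroid of a symmetric sample).
xs = np.linspace(-5, 5, 50)
pts = np.stack([xs, 2 * xs], axis=1)
c, d = min_inertia_axis(pts)
print(np.round(d[1] / d[0], 6))
```

The eigenvector may come out with either sign, but the line it spans (and hence the reference axis) is the same.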
Chain-code representation of the figure: the chain code describes the object by a sequence of unit-length straight line segments in given directions. If a chain code is used directly for matching, it depends on the choice of the first boundary pixel in the sequence. One way to normalize a chain code is to fix the starting pixel of the edge sequence; another is to represent the boundary by the differences of successive directions on the chain code instead of by the absolute directions. A rotation-invariant chain code is obtained by the cyclic permutation that yields the minimum index. From a selected starting point, a chain code is generated using a 4-direction or 8-direction code.
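The two normalizations described above (first differences for rotation invariance, minimal cyclic permutation for starting-point invariance) can be sketched for an 8-direction Freeman code:

```python
# 8-direction Freeman codes: code k encodes the unit step DIRS[k].
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
CODE = {d: k for k, d in enumerate(DIRS)}

def chain_code(boundary):
    """Freeman chain code of an ordered, closed boundary with unit steps."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(CODE[(x1 - x0, y1 - y0)])
    return codes

def rotation_invariant(codes):
    """First differences, then the minimal cyclic rotation: invariant to the
    starting pixel and to rotations by multiples of 45 degrees."""
    diff = [(b - a) % 8 for a, b in zip(codes, codes[1:] + codes[:1])]
    return min(diff[i:] + diff[:i] for i in range(len(diff)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chain_code(square))                      # absolute directions
print(rotation_invariant(chain_code(square)))  # normalized form
```

Starting the same square at a different pixel changes the raw code but not the normalized one, which is exactly the matching property the text relies on.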
In step 4), a coordinate system is established taking the minimum inertia axis as the reference axis together with its perpendicular, with the image centroid as the origin O, as in the embodiment of fig. 4. According to the direction chain code method, the 4 quadrants of the coordinate system are each divided into 3 equal parts, generating a chain code over 12 directions for the whole image. The direction perpendicular to the minimum inertia axis and closest to the lumen contour is defined as direction 0, and directions 0-11 are defined by successive 30° counterclockwise rotations, so that the 12 rays with vertex O intersect the lumen contour at points C_0, C_1, ..., C_11; these 12 points constitute the chain code representing the lumen contour, and similarly the intersections of these rays with the epithelial nucleus set V represent the candidate contour chain codes. In fig. 4b, C_0 and C_1 are the intersections of rays 0 and 1 with the contour C; the triangle formed by the three points C_0, C_1, O is a feature triangle of the lumen region (the lumen contour point in each direction is unique). V_0 and V_1, the intersections of rays 0 and 1 with V, define the feature triangles of the gland region (the epithelial cell region has several points in each direction). Similarity is measured with a triangle membership function. For each feature triangle, let θ1, θ2, θ3 be its interior angles, for which the following relationship holds:

θ1 ≥ θ2 ≥ θ3 > 0, θ1 + θ2 + θ3 = 180°
Its triangle membership function can then be obtained:

[equation images: triangle membership functions defined from the interior angles θ1, θ2, θ3]

where d is the Euclidean distance between the vertices of the feature triangle. For the membership value μ_n of the n-th feature triangle of the lumen region and the membership value μ'_n of the n-th feature triangle of the gland region, the similarity between them is:

[equation image: per-triangle similarity of μ_n and μ'_n]

The overall similarity over all feature values is:

[equation image: TotalSim(c, v), aggregating the per-triangle similarities]

The closer TotalSim(c, v) is to 1, the more similar the two contours are.
The method aims to find a more accurate gland contour based on two constraints:
1. the target contour S is similar to the lumen contour C and to the epithelial outer contour shape feature L, so a feature similarity constraint term is constructed:
α≤TotalSim(l,v)≤1
β≤TotalSim(c,v)≤1
2. the target contour S should be as close as possible to the epithelial cell nucleus outer contour L, so a minimum-distance constraint is constructed:

$$S = \arg\min_{j}\sum_{i=0}^{11}\big\lVert v_{i,j} - l_i \big\rVert$$

where i = 0, 1, ..., 11 denotes the direction index and j = 0, 1, ..., J indexes the candidate contours similar to the lumen contour; l_i denotes the intersection of the epithelial nucleus outer contour with direction i, and v_{i,j} denotes the j-th candidate intersection in direction i. Taking reference ray 0 as the starting direction, the similarity of the feature triangles is retrieved counterclockwise direction by direction. Taking fig. 4b as an example, the feature triangles Δv_{0,0}v_{1,0}O, Δv_{0,0}v_{1,1}O, Δv_{0,0}v_{1,2}O are first compared for similarity with the lumen feature triangle Δc_0c_1O and with the feature triangle of the outer contour L; the candidate contour point in direction 1 is determined by constraint 1. Similarly, taking the candidate point in direction 1 as the new reference starting point, the candidate point in direction 2 is determined, and so on until candidate points in all 12 directions are determined, forming 1 candidate contour chain code. Assuming there are J candidate points in the initial reference direction 0, J candidate contours are formed by the above method. In fig. 4c, light yellow is the lumen contour, orange the lumen feature triangles, and red one of the candidate contours obtained from the similarity. The optimal gland contour is determined from the candidate contours by constraint 2, and the obtained gland contour is finally smoothed by cubic spline interpolation.
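The direction-by-direction search described above can be sketched as a greedy loop. Since the patent's membership formulas are only given as images, the similarity measure below (`leg_similarity`) is a hypothetical stand-in that compares only the edge between consecutive points as a simplified proxy for the feature-triangle comparison, and the data are illustrative:

```python
import numpy as np

def pick_candidates(lumen_pts, candidates, similarity):
    """Greedy direction-by-direction sketch: fix a start in direction 0,
    then in each direction pick the candidate whose segment from the
    previously chosen point is most similar to the corresponding lumen
    segment. `similarity` is pluggable (the patent's own membership
    formula is not reproduced here)."""
    chosen = [candidates[0][0]]
    for i in range(1, len(candidates)):
        ref = (lumen_pts[i - 1], lumen_pts[i])  # lumen segment for direction i
        best = max(candidates[i],
                   key=lambda v: similarity(ref, (chosen[-1], v)))
        chosen.append(best)
    return chosen

def leg_similarity(seg_a, seg_b):
    """Hypothetical similarity: closer segment lengths -> higher score."""
    la = np.linalg.norm(np.subtract(seg_a[1], seg_a[0]))
    lb = np.linalg.norm(np.subtract(seg_b[1], seg_b[0]))
    return 1.0 / (1.0 + abs(la - lb))

# Lumen points in 4 directions, and candidate gland points per direction.
lumen = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
cands = [[(2.0, 0.0)],
         [(0.0, 2.1), (0.0, 5.0)],
         [(-2.2, 0.0), (-6.0, 0.0)],
         [(0.0, -2.0), (0.0, -7.0)]]
print(pick_candidates(lumen, cands, leg_similarity))
```

With this stand-in similarity the loop keeps the candidates that track the lumen's scale and rejects the outliers, mirroring the constraint-1 selection; the patent then ranks whole candidate contours by the distance constraint 2.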
The method was evaluated on 10 tissue section ROIs. We calculated Overlap (OL), Sensitivity (SN), Specificity (SP) and Positive Predictive Value (PPV). For each image, the ground truth pixel set is denoted A(T), and A(S) is the set of pixels inside the automatically segmented optimal contour. OL, SN, SP and PPV are defined as:

$$OL = \frac{|A(S)\cap A(T)|}{|A(S)\cup A(T)|},\quad SN = \frac{|A(S)\cap A(T)|}{|A(T)|},\quad SP = \frac{|\overline{A(S)}\cap \overline{A(T)}|}{|\overline{A(T)}|},\quad PPV = \frac{|A(S)\cap A(T)|}{|A(S)|}$$
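For binary masks these four metrics reduce to simple pixel counts; a small NumPy sketch (our own helper, using the standard set-based definitions):

```python
import numpy as np

def seg_metrics(seg, truth):
    """Overlap (Jaccard), sensitivity, specificity and positive predictive
    value for binary segmentation masks."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    tp = np.sum(seg & truth)     # |A(S) ∩ A(T)|
    tn = np.sum(~seg & ~truth)
    fp = np.sum(seg & ~truth)
    fn = np.sum(~seg & truth)
    return {"OL": tp / (tp + fp + fn),
            "SN": tp / (tp + fn),
            "SP": tn / (tn + fp),
            "PPV": tp / (tp + fp)}

truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1
seg = np.zeros((8, 8)); seg[2:6, 2:5] = 1    # misses one column of the gland
m = seg_metrics(seg, truth)
print({k: round(float(v), 3) for k, v in m.items()})
```

Here the under-segmentation lowers OL and SN to 0.75 while SP and PPV remain 1.0, since no background pixel was labeled as gland.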
table 1 quantitatively shows the results of the example segmentation of the method. To illustrate the effectiveness of the method, we overlaid the segmentation mask on the original image to visually represent the segmentation effect. Fig. 7a shows the respective example segmentation effect for Glas datasets, where the first action is to mark the mask, the second action is to segment the mask only according to the epithelial glands, and the third action is to segment the mask according to the method of the present invention. We also apply the method of the invention to additional colorectal cancer data sets and the segmentation effect is shown in fig. 7b, where the first action is a marker mask and the second action is a segmentation mask of the method of the invention.
TABLE 1
(Table 1: quantitative segmentation results; reproduced only as an image in the original document.)
To verify the effectiveness of the proposed method, we compared segmentation results on 60 examples between three other gland segmentation methods and ours. The predicted masks of each method were compared with the ground truth, and the associated metrics are shown in Table 2, from which it can be seen that our method yields the best segmentation results.
TABLE 2
(Table 2: comparative segmentation metrics; reproduced only as an image in the original document.)
As can be seen from Table 2, the proposed lumen-similarity-based segmentation method improves the average pixel accuracy by at least 3%, and the Dice similarity coefficient improves by 0.033. At the same time, the standard deviations of the pixel accuracy and the Dice coefficient remain low, showing that the segmentation method is relatively stable and can effectively reduce abnormal gland segmentation errors.
In conclusion, the invention provides a new unsupervised stain separation method that makes the information contained in the H&E-stained image more independent and easier to identify, and handles uneven staining intensity and weak staining contrast. A new set of features for segmenting the gland contour is developed and combined. The morphological characteristics of the lumen inside the gland structure are considered: the lumen shape becomes significantly distorted during gland canceration, so the surrounding epithelial cells are arranged irregularly, yet in most cases they are still distributed around the periphery of the lumen. The lumen and gland outer-contour morphological features are therefore represented with the minimum inertia axis as reference, combined with a chain-code method. Because the lumen is more independent and easier to segment than the epithelial cells, the lumen-shape-based segmentation method can handle cases where glands are adhered and epithelial cells are fused with the stroma. The invention specifically provides a method for expressing lumen and gland outer-contour shape features that can be used for gland segmentation; in future work the method will be applied to feature extraction for benign and malignant tumors, providing an effective solution for tumor classification and thereby supporting clinical decision-making.
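As a minimal illustration of the minimum-inertia-axis reference used above, the axis orientation of a binary shape can be obtained from its second-order central moments. This is a standard moment-based sketch, not code from the patent:

```python
import numpy as np

def min_inertia_axis(mask):
    """Orientation (radians) of the minimum inertia axis of a binary
    shape, and its centroid, computed from second-order central
    moments: theta = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()          # centroid = coordinate origin O
    mu20 = np.mean((xs - cx) ** 2)
    mu02 = np.mean((ys - cy) ** 2)
    mu11 = np.mean((xs - cx) * (ys - cy))
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return theta, (cx, cy)
```

With the axis angle and centroid in hand, the 12 chain-code directions are obtained by rotating the perpendicular of this axis counterclockwise in 30-degree increments, as described in the method.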
Details of the invention not described herein are well known to those skilled in the art.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1. A poorly differentiated gland segmentation method based on internal and external stress, characterized by comprising the following steps:
1) performing stain separation on the pathological tissue stained image based on the ResUnet architecture to obtain a hematoxylin channel image and a background channel image;
2) segmenting the gland lumen region from the background channel image using a variational level set image segmentation algorithm based on an improved sign pressure function;
3) taking the hematoxylin channel map as the input feature of a spatially constrained SC-CNN to obtain the epithelial cell region boundary, namely the gland boundary formed by epithelial cell nuclei;
4) drawing the gland contour according to the lumen shape features using a shape description method based on the minimum inertia axis and chain codes;
in step 3), a spatially constrained SC-CNN is used for nucleus detection and a softmax CNN for nucleus classification, with the hematoxylin intensity obtained from stain separation serving as the input feature of the CNN, to obtain a pixel set V representing the epithelial cell nuclei and the outer contour L of the region where the epithelial cell nuclei are located;
step 4) specifically comprises: establishing a coordinate system using the minimum inertia axis as a reference axis together with its perpendicular, taking the image centroid as the origin O of the coordinate system; following the directional chain-code method, dividing the coordinate system into a plurality of equal angular sectors so that a chain code is generated for the whole image from a plurality of directions; the direction perpendicular to the minimum inertia axis and closest to the lumen contour is defined as direction 0, and rotating counterclockwise by 30-degree increments gives directions 0 to 11; the 12 rays with vertex O intersect the lumen contour C at points C_0, C_1, ..., C_11, and these 12 points form a chain code representing the lumen contour; similarly, the intersection points of the rays with the epithelial cell nucleus pixel set V represent candidate contour chain codes;
step 4) further comprises: retrieving the membership value μ_n of the n-th feature triangle of the lumen region and the membership value μ′_n of the n-th feature triangle of the gland region, and comparing the similarity over all similarity values of all feature values;
wherein determining the gland contour is based on the following two constraints:
first, the target contour S should be similar to the lumen contour C and to the epithelial cell nucleus outer contour L, so a feature similarity constraint term is constructed:
α≤TotalSim(l,v)≤1
β≤TotalSim(c,v)≤1
secondly, the target contour S should be as close as possible to the epithelial cell nucleus outer contour L, so a minimum distance constraint term is constructed:
S = argmin_j Σ_{i=0}^{11} ‖l_i − v_{i,j}‖
where i = 0, 1, ..., 11 denotes the sequence of directions and j = 0, 1, ..., J indexes the candidate contours similar to the lumen contour; l_i represents the intersection point of the epithelial cell nucleus outer contour in the i-th direction; v_{i,j} represents the intersection point of candidate contour j in the i-th direction; taking reference line 0 as the starting direction, the similarity of the feature triangles is retrieved counterclockwise in all directions in turn;
(Formula reproduced only as an image in the original document.)
2. The poorly differentiated gland segmentation method according to claim 1, wherein performing stain separation based on the ResUnet architecture in step 1) specifically comprises: the network consists of three parts, a contraction path, a bridging path and an expansion path, which together predict the staining intensity of each of the Hematoxylin, Eosin and Background channels.
3. The poorly differentiated gland segmentation method according to claim 2, wherein: the contraction path reduces the spatial dimension of the feature maps while increasing their number layer by layer, extracting the input image into compact features; the bridging part connects the contraction and expansion paths and performs staining color matrix prediction; the expansion path gradually restores the details and corresponding spatial dimensions of the target, and its output is used for staining intensity matrix prediction.
4. The poorly differentiated gland segmentation method according to claim 2, wherein: the contraction path and the expansion path each comprise a plurality of residual blocks, and in each residual block the feature map size is halved by convolution.
5. The poorly differentiated gland segmentation method according to claim 4, wherein: before each residual block, the feature maps upsampled from the lower level are concatenated with the feature maps from the corresponding encoding path.
6. The poorly differentiated gland segmentation method according to claim 1, wherein: a Kullback-Leibler constraint term is added to the model prediction process of the stain separation in step 1), and the model is trained by minimizing the reconstruction loss between the input image and each reconstruction.
7. The poorly differentiated gland segmentation method according to claim 1, wherein: in step 2), the SPF function is constructed using the statistical information of the image, so that the constructed SPF function preserves or even enhances salient foreground targets.
8. The poorly differentiated gland segmentation method according to claim 7, wherein step 2) specifically comprises the following steps:
the contour line C divides the image I into an inner part and an outer part, denoted Ω1 = in(C) and Ω2 = out(C) respectively; the SPF function is constructed using the global staining intensity distribution of the image, with P1 and P2 representing the staining intensity distribution functions of the regions Ω1 and Ω2:
P1(x) = 1/(√(2π)·σ1) · exp(−(I(x) − u1)² / (2σ1²))
P2(x) = 1/(√(2π)·σ2) · exp(−(I(x) − u2)² / (2σ2²))
where u and σ are respectively the mean and standard deviation of the Gaussian distribution of the staining intensity; according to the level set method, a level set function Φ is embedded, with Ω1 = {Φ > 0} and Ω2 = {Φ < 0}, and the corresponding contour line C can be represented by the zero level set {Φ = 0};
the following SPF function is constructed using the above staining intensity distribution functions:
spf(I(x)) = (P1(I(x)) − P2(I(x))) / max(|P1(I(x)) − P2(I(x))|)
the level set equation is obtained as follows:
∂Φ/∂t = spf(I(x)) · α · |∇Φ|
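The variational evolution of claim 8 can be sketched as one explicit update step. The exact SPF expression in the patent is given only as an image, so the Gaussian-distribution form of `spf` below (difference of the region probability densities, normalized to [−1, 1]) and the update rule `Φ ← Φ + dt·α·spf·|∇Φ|` are assumptions in the spirit of SBGFRLS-style sign-pressure models, not the claimed formula itself:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Gaussian probability density of staining intensity."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def spf_level_set_step(phi, image, alpha=1.0, dt=1.0):
    """One evolution step of a level set driven by a sign pressure
    function built from the Gaussian staining-intensity distributions
    of the regions inside (phi > 0) and outside the zero level set."""
    inside, outside = image[phi > 0], image[phi <= 0]
    p1 = gaussian_pdf(image, inside.mean(), inside.std() + 1e-8)
    p2 = gaussian_pdf(image, outside.mean(), outside.std() + 1e-8)
    spf = (p1 - p2) / (np.abs(p1 - p2).max() + 1e-8)   # normalized to [-1, 1]
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return phi + dt * alpha * spf * grad_mag
```

In practice the step is iterated until the zero level set stabilizes; the sign of `spf` pushes the contour outward where the pixel fits the inside distribution better, and inward otherwise.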
CN202010317512.6A 2020-04-21 2020-04-21 Low-differentiation gland segmentation method based on internal and external stresses Active CN111507992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010317512.6A CN111507992B (en) 2020-04-21 2020-04-21 Low-differentiation gland segmentation method based on internal and external stresses

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010317512.6A CN111507992B (en) 2020-04-21 2020-04-21 Low-differentiation gland segmentation method based on internal and external stresses

Publications (2)

Publication Number Publication Date
CN111507992A CN111507992A (en) 2020-08-07
CN111507992B true CN111507992B (en) 2021-10-08

Family

ID=71876257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010317512.6A Active CN111507992B (en) 2020-04-21 2020-04-21 Low-differentiation gland segmentation method based on internal and external stresses

Country Status (1)

Country Link
CN (1) CN111507992B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113790946A (en) * 2021-11-14 2021-12-14 梅傲科技(广州)有限公司 Intercellular substance staining kit for digital pathological scanning analysis system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN1924926A (en) * 2006-09-21 2007-03-07 复旦大学 Two-dimensional blur polymer based ultrasonic image division method
CN107798684A (en) * 2017-11-07 2018-03-13 河南师范大学 A kind of active contour image partition method and device based on symbol pressure function
US10055551B2 (en) * 2013-10-10 2018-08-21 Board Of Regents Of The University Of Texas System Systems and methods for quantitative analysis of histopathology images using multiclassifier ensemble schemes
US10244991B2 (en) * 2014-02-17 2019-04-02 Children's National Medical Center Method and system for providing recommendation for optimal execution of surgical procedures
CN110223271A (en) * 2019-04-30 2019-09-10 深圳市阅影科技有限公司 The automatic horizontal collection dividing method and device of blood-vessel image

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US20070058836A1 (en) * 2005-09-15 2007-03-15 Honeywell International Inc. Object classification in video data
CN110110634B (en) * 2019-04-28 2023-04-07 南通大学 Pathological image multi-staining separation method based on deep learning


Non-Patent Citations (4)

Title
Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images; Korsuk Sirinukunwattana et al.; IEEE Transactions on Medical Imaging; 2016-02-28; pp. 1-12 *
Multiple Morphological Constraints-Based Complex Gland Segmentation in Colorectal Cancer Pathology Image Analysis; Kun Zhang et al.; Complexity; 2020-07-28; pp. 1-16 *
Active contour image segmentation method based on dual sign pressure functions; Sun Lin et al.; Computer Engineering and Applications; 2018-12-31; vol. 54, no. 20; pp. 213-218 (in Chinese) *
Image shape description method based on minimum inertia axis and chain code; Li Zongmin et al.; Journal on Communications; 2009-04-30; vol. 30, no. 4; pp. 1-5 (in Chinese) *

Also Published As

Publication number Publication date
CN111507992A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
US20210397966A1 (en) Systems and methods for image segmentation
CN111091527B (en) Method and system for automatically detecting pathological change area in pathological tissue section image
Mesejo et al. Biomedical image segmentation using geometric deformable models and metaheuristics
Li et al. Automatic cardiothoracic ratio calculation with deep learning
Manivannan et al. Structure prediction for gland segmentation with hand-crafted and deep convolutional features
WO2021203795A1 (en) Pancreas ct automatic segmentation method based on saliency dense connection expansion convolutional network
CN109272512B (en) Method for automatically segmenting left ventricle inner and outer membranes
Ye et al. Automatic graph cut segmentation of lesions in CT using mean shift superpixels
CN112991365B (en) Coronary artery segmentation method, system and storage medium
Soleymanifard et al. Multi-stage glioma segmentation for tumour grade classification based on multiscale fuzzy C-means
Bourigault et al. Multimodal PET/CT tumour segmentation and prediction of progression-free survival using a full-scale UNet with attention
CN115546570A (en) Blood vessel image segmentation method and system based on three-dimensional depth network
Lv et al. Nuclei R-CNN: improve mask R-CNN for nuclei segmentation
CN112598613A (en) Determination method based on depth image segmentation and recognition for intelligent lung cancer diagnosis
CN111507992B (en) Low-differentiation gland segmentation method based on internal and external stresses
Tan et al. Automatic prostate segmentation based on fusion between deep network and variational methods
Kitrungrotsakul et al. Interactive deep refinement network for medical image segmentation
Dinsdale et al. Stamp: simultaneous training and model pruning for low data regimes in medical image segmentation
CN114445328A (en) Medical image brain tumor detection method and system based on improved Faster R-CNN
Zhang et al. Multiple morphological constraints-based complex gland segmentation in colorectal cancer pathology image analysis
Jin et al. Automatic primary gross tumor volume segmentation for nasopharyngeal carcinoma using ResSE-UNet
Rashid et al. Segmenting melanoma lesion using single shot detector (SSD) and level set segmentation technique
Fu et al. Poorly differentiated colorectal gland segmentation approach based on internal and external stress in histology images
Thiruvenkadam et al. Fully automatic brain tumor extraction and tissue segmentation from multimodal MRI brain images
Guo et al. MRI Image Segmentation of Nasopharyngeal Carcinoma Using Multi-Scale Cascaded Fully Convolutional Network.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200807

Assignee: Hangzhou lanque technology partnership (L.P.)

Assignor: NANTONG University

Contract record no.: X2021980012590

Denomination of invention: A segmentation method of poorly differentiated glands based on internal and external stress

Granted publication date: 20211008

License type: Common License

Record date: 20211119
