GB2466818A - Cell image segmentation using binary threshold and greyscale image processing - Google Patents

Cell image segmentation using binary threshold and greyscale image processing

Info

Publication number
GB2466818A
GB2466818A (application GB0900248A)
Authority
GB
United Kingdom
Prior art keywords
image
objects
nuclei
processing
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0900248A
Other versions
GB0900248D0 (en)
GB2466818B (en)
Inventor
John R Maddison
Havard Emil Danielsen
Birgitte Nielsen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ROOM4 GROUP Ltd
Original Assignee
ROOM4 GROUP Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ROOM4 GROUP Ltd filed Critical ROOM4 GROUP Ltd
Priority to GB0900248.6A priority Critical patent/GB2466818B/en
Publication of GB0900248D0 publication Critical patent/GB0900248D0/en
Publication of GB2466818A publication Critical patent/GB2466818A/en
Application granted granted Critical
Publication of GB2466818B publication Critical patent/GB2466818B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • G06T7/0081
    • G06T7/0083
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro

Abstract

An image segmentation method comprises: thresholding a greyscale image of plural nuclei to create a black and white image; post processing the B/W image to remove objects failing to meet criteria based on the B/W image and greyscale image (e.g. calculating the gradient in the greyscale image at boundaries identified in the B/W image and comparing this with a gradient threshold); segmenting to extract objects corresponding to objects remaining after this processing; and further using edge detection on the segmented image, to determine edges of real nuclei. Objects with areas less than a predetermined number of pixels may be removed. Grey scale images may be smoothed via Gaussian filtering, and convolved with the derivative of a Gaussian, prior to gradient calculation. A Canny edge detector and gradient vector flow snake may be used for edge detection. The invention relates to optimizing the initialization and convergence of active contours (or 'snakes') for segmentation of cell nuclei into objects and holes in histological sections.

Description

Optimizing the initialization and convergence of active contours for segmentation of cell nuclei in histological sections
Background
Image segmentation is the process of dividing a digital image into meaningful regions, and is generally the first step in automated image analysis. The objects of interest are extracted from the image for subsequent processing, such as description and recognition. In general, segmentation is one of the most important and most difficult steps in automated image analysis. The success or failure of the image analysis task is often a direct consequence of the success or failure of the segmentation.
Nuclear image analysis is one of the main application fields of automated image analysis and is a useful method to obtain quantitative information for the diagnosis and prognosis of human cancer. In segmentation of cell nuclei the complexity of the problem largely depends on the type of specimen.
Nuclei in cytological specimens may be segmented by simple automatic gray level thresholding.
Nuclei in tissue sections, however, are in general very difficult to segment by purely automatic means.
It may be necessary to use interactive techniques in order to obtain sufficient quality, as discussed in E. Bengtsson, C. Wahlby and J. Lindblad, "Robust cell image segmentation methods", Pattern Recognition and Image Analysis 14 (2004), 157-167. Bengtsson et al. discussed the relative advantages of different approaches to cell segmentation.
Active contour models or snakes are widely used in medical image segmentation. See M. Kass, A. Witkin and D. Terzopoulos, "Snakes: Active contour models", Int J Comput Vision 1 (1988), 321-331, and T. McInerney and D. Terzopoulos, "Deformable models in medical image analysis: a survey", Medical Image Analysis 1 (1996), 91-108.
Active contours were originally designed as interactive models, and the idea behind active contours for image segmentation is quite simple: 1) the user specifies an initial guess for the contour; 2) the contour is then moved by image-driven forces to the boundary of the object. However, one general problem with these algorithms is that the initial contour needs to be relatively close to the target object boundary in order to converge. This is known as the "capture range problem".
Xu and Prince suggested a method for increasing the capture range of the external gradient vector field, in C. Xu and J.L. Prince, "Gradient Vector Flow (GVF) Active Contour Toolbox", downloaded from http://iacl.ece.jhu.edu/projects/gvf, and C. Xu and J.L. Prince, "Snakes, shapes, and gradient vector flow", IEEE Trans Imag Proc 7 (1998), 359-369.
Trier and Taxt and Trier and Jain evaluated eleven locally adaptive thresholding methods on gray scale document images with low contrast, variable background intensity and noise. See Ø.D. Trier and T. Taxt, "Evaluation of binarization methods for document images", IEEE Trans on Pattern Analysis and Machine Intelligence 17 (1995), 312-315, and Ø.D. Trier and A.K. Jain, "Goal-directed evaluation of binarization methods", IEEE Trans on Pattern Analysis and Machine Intelligence 17 (1995), 1191-1201.
Summary of Invention
According to the invention, there is provided a method according to claim 1.
The inventors have developed a method which is capable, for example, of automatic segmentation of nuclei from Feulgen stained tissue sections. Automatic segmentation of nuclei from these sections is difficult because: 1) the cells may be clustered, 2) the image background varies, 3) there are intensity variations within the nuclei, and 4) the nuclear boundary may be diffuse.
The method of the invention is based on several steps in order to 1) detect the nuclei, 2) optimize the initial contours for the snakes (i.e., find a coarse segmentation of the nuclei) and 3) optimize the convergence of the snakes (i.e., perform the final segmentation of the nuclei), addressing the "capture range problem" indicated above.
The thresholding and post-processing steps may be carried out on a resized image shrunk by a predetermined factor. This speeds up these steps without greatly impacting the effects of the method.
The images may be scaled by a factor in the range 0.2 to 0.7, 0.4 to 0.6, or about 0.5.
The thresholding step may calculate for each pixel (i,j) a threshold t based on t = m(i,j) + k.s(i,j), where m and s are the mean and standard deviation of a neighborhood of pixels around the pixel (i,j) and k is a constant in the range 0.1 to 0.4.
The post-processing step may include: labelling all objects and holes in the black and white image; calculating the average gradient magnitude of the grey scale image around the boundary of each labelled object and hole; removing objects for which the average gradient magnitude does not exceed a predetermined threshold; filling holes; and removing objects with an area less than a predetermined number of pixels.
The predetermined number of pixels may be 5 to 50, preferably around 10. The predetermined threshold may be 0.05 to 0.2, or 0.08 to 0.12.
The method may further comprise smoothing the gray scale image with a Gaussian filter and convolving the grey scale image with the derivative of a Gaussian before calculating the average gradient magnitude. The standard deviation of the Gaussian filter used may be in the range 1.5 to 2.5.
Segmenting may include extracting a sub image from the grey-scale image corresponding to each object remaining after the post-processing step.
The step of applying an edge detector may apply a Canny edge detector. The standard deviation of the Gaussian filter in the edge detector may be set to 1.5 to 2.5, or around 2.0, and high and low threshold values may be set to 0.1 to 0.25 for the low threshold and 0.6 to 0.8 for the high threshold.
A gradient vector flow snake method may be applied, using a regularization coefficient of 0.1 to 0.3 and at least 10 iterations, for example 50 to 100 iterations.
The invention also relates to a computer program product adapted to cause a computer to carry out a method as set out above.
Brief Description of Drawings
For a better understanding of the invention, embodiments will be described, purely by way of example, with reference to the accompanying drawings, in which:
Figure 1 shows an original image, the corresponding shade image and the shade corrected frame image;
Figure 2 shows the image obtained after local adaptive thresholding;
Figure 3 shows the image after post-processing;
Figure 4 shows the image after filling holes and removing objects;
Figure 5 indicates seeds added on the original frame image;
Figure 6 shows examples of sub-images;
Figure 7 illustrates splitting nuclei; and
Figure 8 shows the segmentation result.
Detailed description
1 Materials and methods
1.1 Tissue preparation
Paraffin embedded tissue samples fixed in 4% buffered formalin were sectioned (5 µm). The tissue sections were first stained with H&E, then de-stained and re-stained with Feulgen.
1.2 Image acquisition
The Zeiss AxioImager.Z1 automated microscope equipped with a 63/1.4 objective lens, a 546 nm green filter and a black and white high-resolution digital camera (Axiocam MRM, Zeiss) with 1040x1388 pixels and a gray level resolution of 12 bits per pixel was used to capture each image field. The pixel resolution was 102 nm per pixel on the tissue specimen.
The images were passed to image processing apparatus arranged to take an input digital image from the digital camera and output one or more processed digital image(s) with the nuclei segmented. The image processing apparatus may be implemented in hardware or software; each of the method steps described below may be implemented in separate hardware or software or alternatively a single unit of hardware or software may implement some or all of the steps.
1.2.1 Preprocessing
Varying illumination, reflectance and optical transmission over the field of view may play a role in the success of any image segmentation algorithm based on gray level thresholding. Although a locally adaptive thresholding method is used to detect the cell nuclei within each frame image, we still want to remove such shading effects. Thus, a shade correction was performed for each image field, i.e., each frame image was divided by a background image. Figure 1 shows an example of a frame image, the corresponding shade (background) image and the shade corrected frame image. In the present study, we use this frame image to illustrate the different steps of our segmentation method. Each frame image was inverted in order to obtain "bright" objects (nuclei) on a "dark" background.
The images were then filtered with a 5x5 median filter, i.e., the gray level of each pixel in the image was replaced by the median of the gray levels in a 5x5 neighborhood of that pixel. This was done in order to reduce possible noise without too much blurring of edges.
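Purely by way of illustration (and not forming part of the disclosure), the preprocessing above might be sketched in Python with NumPy and SciPy roughly as follows; the function name and the assumption that the frame and background are floating-point arrays of equal shape are illustrative only:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_frame(frame, background):
    """Shade-correct, invert and median-filter a grey-level frame image (illustrative sketch)."""
    corrected = frame / np.maximum(background, np.finfo(float).eps)  # divide by the background image
    inverted = corrected.max() - corrected                           # bright nuclei on a dark background
    return median_filter(inverted, size=5)                           # 5x5 median filter to reduce noise
```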
1.3 Optimizing the initialization and convergence of nuclear snakes
1.3.1 Detecting the nuclei / making a binary seed image
For the object detection step, we do not need the full resolution of the frame images. Therefore, the preprocessed frame images were resized by a scale factor of 0.5 (i.e., shrunk by a factor of two) using bicubic interpolation and anti-aliasing, i.e., the output pixel value was a weighted average of the pixels in the nearest 4x4 neighborhood.
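A minimal sketch of the resizing step, using scikit-image's rescale as an illustrative stand-in for the bicubic, anti-aliased resampling described above (the variable `preprocessed` is hypothetical and stands for the shade-corrected, inverted, median-filtered frame):

```python
from skimage.transform import rescale

# Order-3 (bicubic spline) down-scaling by 0.5 with anti-aliasing.
small = rescale(preprocessed, 0.5, order=3, anti_aliasing=True)
```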
The resized frame images were then thresholded by using the Niblack method of local adaptive thresholding; see W. Niblack, An Introduction to Digital Image Processing, Prentice Hall, 1986. For each pixel (i,j) in the image a threshold is calculated. The threshold is computed from the mean and standard deviation of the pixel gray level values within a neighborhood of the pixel.
For each pixel (i,j) in the frame image f(i,j) a threshold was calculated as t(i,j) = m(i,j) + k x s(i,j), where m and s are the mean and standard deviation of the gray level values within a neighborhood of (i,j) and k is a constant. We found that a neighborhood of 75x75 pixels and a k-value of 0.2 gave good results on our images. A binary (thresholded) image b(i,j) was then obtained by performing the following transformation of the gray scale frame image f(i,j): b(i,j) = 1 if f(i,j) >= t(i,j), and b(i,j) = 0 if f(i,j) < t(i,j), where t(i,j) is the threshold; see Figure 2.
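The Niblack rule with the parameters given above (75x75 window, k = 0.2) might be sketched as below; computing the local mean and standard deviation with uniform filters is an illustrative implementation choice, not the method prescribed by the text:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(image, window=75, k=0.2):
    """Local Niblack threshold t(i,j) = m(i,j) + k*s(i,j) over a window x window neighbourhood."""
    m = uniform_filter(image, size=window)            # local mean
    m2 = uniform_filter(image * image, size=window)   # local mean of squares
    s = np.sqrt(np.maximum(m2 - m * m, 0.0))          # local standard deviation
    t = m + k * s
    return (image >= t).astype(np.uint8)              # b(i,j) = 1 where f(i,j) >= t(i,j)
```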
The post-processing step of S.D. Yanowitz and A.M. Bruckstein, "A new method for image segmentation", Computer Vision, Graphics, and Image Processing 46 (1989), 82-95, was then applied to the thresholded frame image. In this approach, after segmenting an image, the post-processing step is used to remove false objects by examining the gradient along the contour of each object. The average gradient value (computed from the gray level image) is computed for each object. If the average gradient value of a given object is lower than a defined threshold, the object is regarded as a false object and removed from the image.
In particular, all the connected components (i.e., both objects and holes) in the thresholded image were labeled. This was done by computing a label matrix L, i.e., a 2-D matrix of non-negative integers that represent contiguous regions. The k-th region includes all elements in L that have value k. The zero-value elements of L make up the background; see Image Processing Toolbox User's Guide, Version 5, The MathWorks, Inc., 2004.
The boundaries of all labeled objects and holes (see Figure 3 (a)) were then scanned and the average gradient magnitude of each object (or hole) boundary was computed from the corresponding gradient values. In order to compute the gradient magnitude of the gray level frame image, the image was first smoothed by a Gaussian filter and then convolved with the derivative of a Gaussian. The standard deviation of the Gaussian filters was set to σ = 2.0. If the average gradient magnitude of a given boundary did not exceed a certain threshold, the object was removed from the image. We used a threshold value of 0.1; see Figure 3 (b).
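An illustrative sketch of this boundary-gradient test follows; it is simplified to foreground objects only (the text above also labels and tests holes) and assumes a grey-level image normalised to [0, 1] so that the 0.1 threshold is meaningful:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude
from skimage.measure import label, regionprops
from skimage.segmentation import find_boundaries

def remove_weak_objects(binary, grey, sigma=2.0, grad_threshold=0.1):
    """Keep only objects whose average boundary gradient magnitude exceeds the threshold."""
    # Smooth with a Gaussian, then take the Gaussian-derivative gradient magnitude.
    grad = gaussian_gradient_magnitude(gaussian_filter(grey, sigma), sigma)
    labels = label(binary)
    keep = np.zeros_like(binary, dtype=bool)
    for region in regionprops(labels):
        component = labels == region.label
        boundary = find_boundaries(component, mode='inner')   # pixels on the object boundary
        if grad[boundary].mean() > grad_threshold:
            keep |= component                                  # retain objects with strong boundaries
    return keep
```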
The holes in the binary image were then filled by using an algorithm based on morphological reconstruction; see P. Soille, Morphological Image Analysis: Principles and Applications, Springer-Verlag, 1999, pp. 173-174. A hole was defined as a set of background pixels that cannot be reached by filling in the background from the edge of the image. 4-connected background neighbours were used.
Finally, small objects (with an area of less than 11 pixels) were removed from the image. All the objects in the binary image were labelled. Blob analysis was then performed on the labelled objects and a structure array of length N (containing the area of each object) was returned, where N was the number of objects in the binary image. All elements of the structure array that had an area greater than 10 pixels were identified and the linear indices of those elements were returned. The resulting binary image containing objects with area >= 11 pixels is shown in Figure 4 (a) and the shade corrected frame image with the corresponding object boundaries is shown in Figure 4 (b).
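A minimal sketch of the hole filling and small-object removal, assuming `kept_objects` is the boolean mask produced by the boundary-gradient test above:

```python
from scipy.ndimage import binary_fill_holes
from skimage.morphology import remove_small_objects

# Fill holes (background regions not reachable from the image border),
# then discard objects smaller than 11 pixels; both calls take a boolean mask.
filled = binary_fill_holes(kept_objects)
cleaned = remove_small_objects(filled, min_size=11)
```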
The binary images were then resized back to original size (by using a scale factor of 2.0). In order to produce "seeds" ("seed" is defined here as a subset of an object mask) for the nuclei, the binary image was eroded with a disk shaped structuring element with a radius of 21 pixels. Figure 5 shows our example frame with the resulting seeds added.
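Illustratively, the resizing back to full resolution and the erosion into seeds might look as follows; `frame_shape` (the full-resolution image shape) and the nearest-neighbour resize are assumptions made for the sketch:

```python
from skimage.morphology import binary_erosion, disk
from skimage.transform import resize

# Resize the binary mask back to full frame resolution (nearest-neighbour keeps it binary),
# then erode with a disk of radius 21 pixels to produce the seeds.
full_size = resize(cleaned, frame_shape, order=0, anti_aliasing=False).astype(bool)
seeds = binary_erosion(full_size, disk(21))
```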
1.3.2 A coarse segmentation of the nuclei - optimizing the initial contours for the snakes
The "seeds" in the "seed image" were used to extract sub-images of individual nuclei from the frame image. Each connected component ("seed") in the "seed image" was labeled. Seeds which included frame boundary pixels were excluded. Based on the bounding box of each "seed", i.e., the smallest rectangle containing the seed (with coordinates of the upper left corner x-min, y-min and size x-width, y-width), a sub-image (corresponding to a larger rectangle with coordinates x-min-60, y-min-60, x-width+120, y-width+120) was extracted from the frame image, see Figure 6 (a). (If the larger rectangle extended outside the frame image, the size of the rectangle was reduced.) A corresponding sub-image was extracted from the seed image, see Figure 6 (b). "Seeds" corresponding to other nuclei were removed from the "seed" sub-image, see Figure 6 (c). In order to reproduce the original object mask (which was eroded with a disk shaped structuring element with a radius of 21 pixels, see Section 1.3.1), the "seed" was dilated with the same structuring element (i.e., a disk shaped structuring element with a radius of 21 pixels), see Figure 6 (d). In order to produce a larger mask, the reproduced object mask was dilated with a disk shaped structuring element with a radius of 11 pixels. In order to produce a smaller mask, the reproduced object mask was eroded with a disk shaped structuring element with a radius of 15 pixels. The boundary of the larger mask was used as a start contour for the snake (see Figure 6 (i)).
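A hedged sketch of the sub-image and mask construction described above; the generator form, the clamping of the enlarged rectangle to the frame and the scikit-image morphology calls are illustrative choices:

```python
from skimage.measure import label, regionprops
from skimage.morphology import binary_dilation, binary_erosion, disk

def nucleus_subimages(frame, seed_image, pad=60):
    """Yield, for each seed, the grey sub-image together with the larger and smaller masks."""
    labels = label(seed_image)
    rows, cols = frame.shape
    for region in regionprops(labels):
        min_r, min_c, max_r, max_c = region.bbox
        if min_r == 0 or min_c == 0 or max_r == rows or max_c == cols:
            continue                                           # skip seeds touching the frame boundary
        r0, c0 = max(min_r - pad, 0), max(min_c - pad, 0)      # shrink the rectangle if it leaves the frame
        r1, c1 = min(max_r + pad, rows), min(max_c + pad, cols)
        sub = frame[r0:r1, c0:c1]
        seed = labels[r0:r1, c0:c1] == region.label            # seeds of other nuclei removed
        mask = binary_dilation(seed, disk(21))                 # undo the radius-21 seed erosion
        larger = binary_dilation(mask, disk(11))               # its boundary is the snake start contour
        smaller = binary_erosion(mask, disk(15))
        yield sub, larger, smaller
```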
1.3.3 Optimizing the convergence of the nuclear snakes
The Canny edge detector was applied to the gray level sub-image. This is described in J. Canny, "A computational approach to edge detection", IEEE Trans Pattern Analysis and Machine Intelligence 8 (1986), 679-714. This edge detector computes thin connected edge segments by performing the following four steps: a. The image is smoothed by a Gaussian filter (in order to remove noise).
b. The edges are detected by computing the gradient (magnitude and direction).
c. The edges are thinned. d. Weak edges that are not connected to strong edges are removed (by using a low intensity threshold and a high intensity threshold in the gradient magnitude image, together with contour following).
The standard deviation of the Gaussian filters was set to σ = 2.0 and the low and high threshold values were set to TL = 0.18 and TH = 0.7. The resulting Canny edge map (Figure 6 (e)) was added to the gradient magnitude (which was computed by first smoothing the image with a Gaussian filter and then convolving with the derivative of a Gaussian, Figure 6 (f)) to produce the image shown in Figure 6 (g). This image was then "cleaned up" based on the larger and the smaller masks produced above (Section 1.3.2). All pixel values corresponding to pixels with value 0 in the "larger mask" binary image were set to 0 and all pixel values corresponding to pixels with value 1 in the "smaller mask" binary image were also set to 0, resulting in a bounded, annular edge map, see Figure 6 (h).
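An illustrative sketch of the bounded, annular edge map; note that scikit-image's Canny thresholds are applied to its own gradient-magnitude scaling, which may differ from that assumed in the text, so the parameter values are carried over only for illustration:

```python
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude
from skimage.feature import canny

def annular_edge_map(sub, larger, smaller, sigma=2.0, t_low=0.18, t_high=0.7):
    """Canny edge map plus gradient magnitude, zeroed outside the larger and inside the smaller mask."""
    edges = canny(sub, sigma=sigma, low_threshold=t_low, high_threshold=t_high)
    grad = gaussian_gradient_magnitude(gaussian_filter(sub, sigma), sigma)
    combined = edges.astype(float) + grad
    combined[~larger] = 0.0    # pixels outside the larger mask set to 0
    combined[smaller] = 0.0    # pixels inside the smaller mask set to 0
    return combined
```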
The gradient vector flow (GVF) snake was used. This is described in C. Xu and J.L. Prince, "Gradient Vector Flow (GVF) Active Contour Toolbox", downloaded from http://iacl.ece.jhu.edu/projects/gvf, and C. Xu and J.L. Prince, "Snakes, shapes, and gradient vector flow", IEEE Trans Imag Proc 7 (1998), 359-369. In summary, a transformation of the digital gray level image f(i,j) into a binary (black and white) image b(i,j) is performed. A gray level threshold T is defined, and all pixels (i,j) with a pixel gray level value f(i,j) greater than or equal to T are transformed to 1 (i.e. pixel value 1, white) and all pixels (i,j) with a pixel gray level value f(i,j) less than T are transformed to 0 (i.e. pixel value 0, black).
In the example, the GVF of the bounded annular edge map was computed. The GVF regularization coefficient μ was set to 0.2 and the number of iterations was set to 80. The GVF was then normalized and used as an external force field. The points in the snake start contour were interpolated to have equal distances. The desired resolution between the points was set to 0.8. The snake was deformed in the given external force field. The internal force parameters were set as follows: elasticity α = 5.0, rigidity β = 10.0, viscosity γ = 1.0. The external force weight was κ = 0.6. The snake deformation was iterated 30 times. After each deformation the snake was interpolated to have equal distances (with the desired resolution between the points set to 0.8).
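The GVF field itself can be sketched as below (an explicit iteration of the Xu and Prince equations on a normalised edge map); the snake deformation with the internal force parameters listed above is not shown, and the normalisation and discretisation details are assumptions of the sketch:

```python
import numpy as np
from scipy.ndimage import laplace

def gradient_vector_flow(edge_map, mu=0.2, iterations=80):
    """Iterative gradient vector flow field (after Xu and Prince) of a 2-D edge map."""
    f = (edge_map - edge_map.min()) / (edge_map.max() - edge_map.min() + 1e-12)
    fy, fx = np.gradient(f)                 # central-difference gradients of the edge map
    u, v = fx.copy(), fy.copy()
    b = fx * fx + fy * fy                   # |grad f|^2, the data-attachment weight
    for _ in range(iterations):
        # Explicit update of the GVF equations: u_t = mu * laplacian(u) - (u - fx) * |grad f|^2
        u += mu * laplace(u) - b * (u - fx)
        v += mu * laplace(v) - b * (v - fy)
    magnitude = np.sqrt(u * u + v * v) + 1e-12
    return u / magnitude, v / magnitude     # normalised external force field
```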
1.4 Splitting nuclei in clusters
A binary mask was created for each segmented object (i.e., nucleus or cluster of nuclei), see Figure 7 (a). The convex hull of the binary mask was computed. The solidity of the binary mask, i.e. the proportion of the pixels in the convex hull that are also in the binary mask (solidity = number of pixels in the binary mask / number of pixels in the convex hull), was computed; see the "Image Processing Toolbox User's Guide" cited above. If the solidity of the mask was less than 0.96, we assumed that we had a cluster of (two) nuclei. We then computed the distance transform of the complement of the binary mask. The distance transform was opened (morphological opening) by using a disk shaped structuring element with a radius of 5 pixels. The pixel values of the result were multiplied by -1 and the pixel values of pixels corresponding to background in the binary mask image (i.e., pixels with value 0) were set to negative infinity. The watershed transform of the result was computed to split the two nuclei, see Figure 7 (c).
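A sketch of the cluster-splitting test, assuming a mask containing a single connected component; restricting the watershed to the mask is used here in place of setting the background pixels to negative infinity, and the function name is hypothetical:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import label, regionprops
from skimage.morphology import disk, opening
from skimage.segmentation import watershed

def split_if_cluster(mask, solidity_threshold=0.96):
    """Split a segmented object into nuclei with a watershed if its solidity suggests a cluster."""
    props = regionprops(label(mask.astype(np.uint8)))[0]
    if props.solidity >= solidity_threshold:
        return mask.astype(np.int32)              # a single nucleus; nothing to split
    distance = distance_transform_edt(mask)       # distance of each object pixel to the background
    distance = opening(distance, disk(5))         # greyscale opening to suppress spurious minima
    # Watershed of the negated distance transform, restricted to the object mask.
    return watershed(-distance, mask=mask)
```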
2 Results
Figure 8 shows the final segmentation result (shown as green boundaries) for our example frame.
3 Discussion
There is no theory of image segmentation. As a consequence, no single standard method has emerged and there exists a large collection of ad hoc methods. Bengtsson et al. (see above) and D.L. Pham, C. Xu and J.L. Prince, "Current methods in medical image segmentation", Annu Rev Biomed Eng, 2000, 02:315-337, have discussed the advantages and disadvantages of the most commonly used segmentation methods. In the published literature on nuclear image analysis, the segmentation of nuclei in histological sections is often performed manually or semi-manually.
In the present study, the inventors have developed an automatic segmentation method that gives very good results on a large number of histological sections. The proposed method detects most of the epithelial nuclei (which are the nuclei that are of interest for further analysis) in each frame image.
Furthermore, the segmentation result is very accurate, i.e., the automatic segmentation of each nucleus is very close to the nuclear boundary. Compared to manual methods (based on manual delineation of the nuclei), automatic methods produce results that are more reproducible and make it possible to analyze a large number of nuclei (several thousand nuclei from each frame).

Claims (10)

  1. A segmentation method for processing a gray-scale image made up of a plurality of pixels, the image representing a plurality of nuclei, the method comprising: thresholding the grey-scale image to create a black and white image; post processing to identify objects in the black and white image and remove objects failing to meet predetermined criteria based on the black and white image and the gray-scale image; segmenting to extract objects corresponding to the objects remaining after post-processing from the gray-scale image; and using an edge detector on the segmented image to identify the edges of real nuclei.
  2. A method according to claim 1 wherein the thresholding and post-processing steps are carried out on a resized image shrunk by a predetermined factor.
  3. A method according to claim 1 or 2 wherein the thresholding step calculates for each pixel (i,j) a threshold t based on t = m(i,j) + k.s(i,j) where m and s are the mean and standard deviation of a neighborhood of pixels around the pixel (i,j) and k is a constant in the range 0.1 to 0.4.
  4. A method according to any preceding claim wherein the post-processing step includes: labelling all objects and holes in the black and white image; calculating the average gradient magnitude of the grey scale image around the boundary of each labelled object and hole; removing objects and holes for which the average gradient magnitude does not exceed a predetermined threshold; filling holes; and removing objects with an area less than a predetermined number of pixels.
  5. A method according to any preceding claim further comprising smoothing the gray scale image with a Gaussian filter and convolving the grey scale image with the derivative of a Gaussian before calculating the average gradient magnitude.
  6. A method according to any preceding claim wherein segmenting includes extracting a sub-image from the grey-scale image corresponding to each object remaining after the post-processing step.
  7. A method according to any preceding claim wherein the step of applying an edge detector applies a Canny edge detector.
  8. A method according to any preceding claim wherein the step of applying an edge detector applies a gradient vector flow snake.
  9. A computer program product adapted to cause a computer to carry out a method according to any preceding claim.
  10. Image processing apparatus having an image input for acquiring an input image; and an image processing unit adapted to process the input image according to a method according to any of claims 1 to 8; and an image output for outputting one or more processed images.
GB0900248.6A 2009-01-09 2009-01-09 Optimizing the initialization and convergence of active contours for segmentation of cell nuclei in histological sections Expired - Fee Related GB2466818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0900248.6A GB2466818B (en) 2009-01-09 2009-01-09 Optimizing the initialization and convergence of active contours for segmentation of cell nuclei in histological sections

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0900248.6A GB2466818B (en) 2009-01-09 2009-01-09 Optimizing the initialization and convergence of active contours for segmentation of cell nuclei in histological sections

Publications (3)

Publication Number Publication Date
GB0900248D0 GB0900248D0 (en) 2009-02-11
GB2466818A true GB2466818A (en) 2010-07-14
GB2466818B GB2466818B (en) 2014-08-13

Family

ID=40379310

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0900248.6A Expired - Fee Related GB2466818B (en) 2009-01-09 2009-01-09 Optimizing the initialization and convergence of active contours for segmentation of cell nuclei in histological sections

Country Status (1)

Country Link
GB (1) GB2466818B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496150A (en) * 2011-12-07 2012-06-13 山东大学 Smooth local region active contour model method based on Gaussian
WO2012110324A1 (en) 2011-02-15 2012-08-23 Siemens Aktiengesellschaft Method and device for examining a hollow organ using a magnet-guided endoscope capsule
CN102982534A (en) * 2012-11-01 2013-03-20 北京理工大学 Canny edge detection dual threshold acquiring method based on chord line tangent method
CN103345748A (en) * 2013-06-26 2013-10-09 福建师范大学 Positioning and partition method for human tissue cell two-photon microscopic image
WO2017158560A1 (en) * 2016-03-18 2017-09-21 Leibniz-Institut Für Photonische Technologien E.V. Method for examining distributed objects by segmenting an overview image
US10977788B2 (en) 2017-04-27 2021-04-13 Sysmex Corporation Image analysis method, image analysis apparatus, and image analysis program for analyzing cell with deep learning algorithm

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833200B (en) * 2017-09-19 2021-03-26 浙江农林大学 Method and system for detecting independent and adhesive myocardial cell nucleus area
CN113155578A (en) * 2017-12-29 2021-07-23 乔治洛德方法研究和开发液化空气有限公司 Dyeing method of filamentous microorganisms and application thereof
CN109242845B (en) * 2018-09-05 2021-07-02 北京市商汤科技开发有限公司 Medical image processing method and device, electronic device and storage medium
CN111815660B (en) * 2020-06-16 2023-07-25 北京石油化工学院 Method and device for detecting edges of goods in dangerous chemical warehouse and terminal equipment
CN113763371B (en) * 2021-09-15 2023-08-18 上海壁仞智能科技有限公司 Pathological image cell nucleus segmentation method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0610916A2 (en) * 1993-02-09 1994-08-17 Cedars-Sinai Medical Center Method and apparatus for providing preferentially segmented digital images
EP0664038A1 (en) * 1992-02-18 1995-07-26 Neopath, Inc. Method for identifying objects using data processing techniques
EP0901665A1 (en) * 1996-05-10 1999-03-17 Oncometrics Imaging Corp. Method and apparatus for automatically detecting malignancy-associated changes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5682886A (en) * 1995-12-26 1997-11-04 Musculographics Inc Computer-assisted surgical system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0664038A1 (en) * 1992-02-18 1995-07-26 Neopath, Inc. Method for identifying objects using data processing techniques
EP0610916A2 (en) * 1993-02-09 1994-08-17 Cedars-Sinai Medical Center Method and apparatus for providing preferentially segmented digital images
EP0901665A1 (en) * 1996-05-10 1999-03-17 Oncometrics Imaging Corp. Method and apparatus for automatically detecting malignancy-associated changes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mao et al, "Supervised Learning-Based Cell Image Segmentation for P53 Immunohistochemistry", IEEE Transactions on Biomedical Engineering, Vol. 53, No. 6, June 2006. *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012110324A1 (en) 2011-02-15 2012-08-23 Siemens Aktiengesellschaft Method and device for examining a hollow organ using a magnet-guided endoscope capsule
CN102496150A (en) * 2011-12-07 2012-06-13 山东大学 Smooth local region active contour model method based on Gaussian
CN102982534A (en) * 2012-11-01 2013-03-20 北京理工大学 Canny edge detection dual threshold acquiring method based on chord line tangent method
CN103345748A (en) * 2013-06-26 2013-10-09 福建师范大学 Positioning and partition method for human tissue cell two-photon microscopic image
CN103345748B (en) * 2013-06-26 2016-08-10 福建师范大学 A kind of locating segmentation method of human tissue cell two-photon micro-image
WO2017158560A1 (en) * 2016-03-18 2017-09-21 Leibniz-Institut Für Photonische Technologien E.V. Method for examining distributed objects by segmenting an overview image
US11599738B2 (en) 2016-03-18 2023-03-07 Leibniz-Institut Für Photonische Technologien E.V. Method for examining distributed objects by segmenting an overview image
US10977788B2 (en) 2017-04-27 2021-04-13 Sysmex Corporation Image analysis method, image analysis apparatus, and image analysis program for analyzing cell with deep learning algorithm

Also Published As

Publication number Publication date
GB0900248D0 (en) 2009-02-11
GB2466818B (en) 2014-08-13

Similar Documents

Publication Publication Date Title
US8942441B2 (en) Optimizing the initialization and convergence of active contours for segmentation of cell nuclei in histological sections
GB2466818A (en) Cell image segmentation using binary threshold and greyscale image processing
US8600143B1 (en) Method and system for hierarchical tissue analysis and classification
EP1646964B1 (en) Method and arrangement for determining an object contour
CN111462076A (en) Method and system for detecting fuzzy area of full-slice digital pathological image
Bibiloni et al. A real-time fuzzy morphological algorithm for retinal vessel segmentation
CA2454091A1 (en) Chromatin segmentation
CN112348059A (en) Deep learning-based method and system for classifying multiple dyeing pathological images
Plissiti et al. Automated segmentation of cell nuclei in PAP smear images
CN108596176B (en) Method and device for identifying diatom types of extracted diatom areas
EP3510526B1 (en) Particle boundary identification
CN110517273B (en) Cytology image segmentation method based on dynamic gradient threshold
US7142732B2 (en) Unsupervised scene segmentation
WO2014066218A2 (en) Cast recognition method and device, and urine analyzer
CN113470041B (en) Immunohistochemical cell image cell nucleus segmentation and counting method and system
CN116433978A (en) Automatic generation and automatic labeling method and device for high-quality flaw image
Charles et al. Object segmentation within microscope images of palynofacies
CN113989799A (en) Cervical abnormal cell identification method and device and electronic equipment
CN110458042B (en) Method for detecting number of probes in fluorescent CTC
Gim et al. A novel framework for white blood cell segmentation based on stepwise rules and morphological features
Setiawan et al. Improved Edge Detection Based on Adaptive Gaussian Smoothing in X-Ray Image
Joshi et al. Lung Cancer Detection Using Image Processing
Yang et al. Measuring shape and motion of white blood cells from sequences of fluorescence microscopy images
Kamath et al. Robust extraction of statistics from images of material fragmentation
Mabaso Automatic approach for spot detection in microscopy imaging based on image processing and statistical analysis

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20140515 AND 20140521

PCNP Patent ceased through non-payment of renewal fee

Effective date: 20220109