WO2013148485A2 - Detection of tissue regions in microscope slide images - Google Patents

Detection of tissue regions in microscope slide images

Info

Publication number
WO2013148485A2
WO2013148485A2 (PCT/US2013/033415)
Authority
WO
WIPO (PCT)
Prior art keywords
image
tissue
regions
processor
maximum
Prior art date
Application number
PCT/US2013/033415
Other languages
French (fr)
Other versions
WO2013148485A3 (en)
Inventor
Ying Li
David L HENDERSON
Jasen DOBSON
Original Assignee
Clarient Diagnostic Services, Inc.
Priority date
Filing date
Publication date
Application filed by Clarient Diagnostic Services, Inc. filed Critical Clarient Diagnostic Services, Inc.
Publication of WO2013148485A2 publication Critical patent/WO2013148485A2/en
Publication of WO2013148485A3 publication Critical patent/WO2013148485A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/143Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image

Definitions

  • the present invention relates generally to the field of tissue / cell colony detection and imaging, and more particularly to a system and method for efficiently segmenting a low resolution microscope tissue slide image into different regions, such as background and tissue regions.
  • a tissue is an ensemble of cells, typically from the same origin, that together may carry out a specific function.
  • the cells are spatially distributed to create a signature texture for a specific tissue.
  • One of the challenges of tissue detection is the lack of distinct boundaries or edges. In a typical tissue slide image, there is a variable intensity and density of cells, as well as a lack of well-defined edges.
  • Currently, the automated microscopic systems used in high-content screening allow users to acquire images of biological samples, such as tissues and cell colonies, located on microscope slides. For an unknown slide, users have to scan the entire slide so that no samples are missed. This procedure is costly in time and storage, since a large number of images are taken of unnecessary regions.
  • a known algorithm used in microscopic images to detect tissue objects is the active contours method.
  • Active contour methods for image segmentation allow a contour to deform iteratively so as to partition an image into regions.
  • The primary drawbacks of active contour methods are that they are slow to compute and the contours are susceptible to being trapped in local minima. Accordingly, there is a need in the art for a faster, more accurate method for computing the boundaries of tissue samples.
  • Exemplary embodiments of the present invention provide a system and method that can expedite the segmentation process and avoid or reduce trapping in local minima.
  • the system and method include a segmentation process to obtain initial boundaries of the tissue regions, followed by a fast active contours method to refine the segmentation result and return the true boundaries.
  • the second step (active contours) can be omitted for the sake of performance, if desired.
  • the invention relates to a computer implemented method comprising: obtaining an image of a tissue using at least one processor; segmenting the image with the processor into a plurality of regions with either (a) a Gaussian Markov Random Field (MRF) based method, or (b) a discrete Markov Random Field (MRF) based method; and classifying the plurality of regions into a background region and a tissue region to form a binary mask.
  • the invention relates to a system comprising: a processor; and a memory comprising computer-readable instructions which when executed by the processor cause the processor to perform the steps comprising: obtaining an image of a tissue using at least one processor; segmenting the image with the processor into a plurality of regions using either (a) a Gaussian Markov Random Field (MRF) based method, or (b) a discrete Markov Random Field (MRF) based method; and classifying the plurality of regions into a background region and a tissue region to form a binary mask.
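As a concrete illustration of the claimed pipeline (segment the low-resolution image into several classes, then collapse the class labels into a tissue/background binary mask), the following minimal NumPy sketch substitutes a plain 1-D k-means intensity quantization for the MRF-based segmentation. All function and variable names are hypothetical; this is illustrative scaffolding, not the patented method.

```python
import numpy as np

def segment_intensity(image, n_classes=4, n_iter=10):
    """Toy stand-in for the MRF-based segmentation: quantize pixel
    intensities into n_classes clusters with a 1-D k-means."""
    centers = np.linspace(image.min(), image.max(), n_classes)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(image[..., None] - centers), axis=-1)
        for k in range(n_classes):
            if np.any(labels == k):
                centers[k] = image[labels == k].mean()
    return labels, centers

def to_binary_mask(labels, centers, background_is_bright=True):
    """Collapse the multi-class label field into tissue (1) vs.
    background (0); assumes a bright-field image whose background
    is the brightest class."""
    bg = np.argmax(centers) if background_is_bright else np.argmin(centers)
    return (labels != bg).astype(np.uint8)

# Synthetic slide: bright background with a darker "tissue" blob.
img = np.full((64, 64), 200.0)
img[20:40, 20:40] = 80.0
labels, centers = segment_intensity(img)
mask = to_binary_mask(labels, centers)
```

The MRF-based steps described later refine exactly this kind of label field by adding spatial constraints between neighboring pixels.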
  • Figure 1 depicts one example of an image of a tissue sample.
  • Figure 2 depicts a method according to one embodiment of the invention.
  • Figures 3A and 3B depict additional steps in the method of Figure 2 according to exemplary embodiments of the invention.
  • Figure 4 depicts the relationship of horizontal, vertical, and diagonal cliques with the current pixel site s according to an exemplary embodiment of the invention.
  • Figures 5A-5D depict an image that is processed with a method according to one embodiment of the invention.
  • Figure 6 depicts a system according to one embodiment of the invention.
  • FIGS 7A-7D depict examples of slide layout configurations for tissue samples and tissue micro arrays (TMAs).
  • a method for automatically locating tissue regions in microscopy slide images.
  • Figure 1 depicts an example of an image of a tissue sample.
  • Embodiments of the invention provide for automatic high power scanning (e.g., 10X magnification or higher) of only the tissue regions instead of the entire slide.
  • the process may employ an initial low magnification (e.g., 2X) scan of the entire slide.
  • the initial low magnification scan may use a number of fields of view (FOV) to cover the whole slide, e.g., 4 x 7 FOVs covering the slide.
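The tiling arithmetic behind the "4 x 7 FOVs" example can be sketched as follows. The millimetre dimensions below are illustrative assumptions, not values taken from the patent.

```python
import math

def fov_grid(slide_w_mm, slide_h_mm, fov_w_mm, fov_h_mm):
    """Number of (columns, rows) of fields of view needed to tile
    the scan area, rounding up at the edges."""
    cols = math.ceil(slide_w_mm / fov_w_mm)
    rows = math.ceil(slide_h_mm / fov_h_mm)
    return cols, rows

# e.g. a ~25 mm x 75 mm scan area with ~6.5 mm x 11 mm FOVs at 2X
cols, rows = fov_grid(25.0, 75.0, 6.5, 11.0)
```

With these assumed dimensions the function yields the 4 x 7 grid of fields of view mentioned above.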
  • objects of interest are detected from the low magnification scan.
  • the low power scans may be reviewed and regions on the slide containing object(s) of interest may be pre-defined.
  • Two to three regions may be defined on the slide, such as a label region, a control region, and a tissue region, as shown in Figures 7A through 7D.
  • These regions may be defined by use of a slide template that requires the control and tissue to be placed within well-defined regions of the slide (e.g., top half and bottom half), or they may be defined by the user by drawing a bounding box for each region on the screen using the low-resolution image as a guide.
  • the control region may provide known staining levels for a specified biomarker, for example, and may be used as a visual comparison or as a means to quantitatively normalize the tissue image.
  • the label region may comprise an appropriate label such as a bar code, for example.
  • the image acquisition system can store and use slide layout templates to provide this knowledge to the tissue detection component of the system.
  • a specific predefined layout may be assigned to each slide and communicated to the tissue detection module from a workflow manager, for example, or can be permanently assigned to a test type.
  • the slide layouts as shown in Figures 7A through 7D are used.
  • Figure 7A depicts a template that may be used for tissue slides that do not have on-slide controls.
  • Figure 7B depicts a template that may be used for highly-multiplexed tests in which controls must be provided for multiple biomarkers.
  • each spot in the Tissue Micro Array (TMA) in the control region would be assigned as a positive or negative control for one or more biomarkers.
  • a template similar to Figure 7C can be used where entire regions of the slide would be used to designate control tissue.
  • the template depicted in Figure 7D could be used for research applications in which a large number of tissues are examined, according to exemplary embodiments of the invention.
  • Exemplary embodiments of the invention provide a method for detecting tissue regions and/or cell pellets in both control and sample regions in a low magnification scan of a microscopy slide.
  • the method can be used to segment and track various tissues including cell colonies and similar microstructures which are composed of a finite number of components, such as red blood cells, nuclei, etc., as observed at a microscopic scale.
  • the tissue regions detected in the low magnification scan become regions of interest (ROI) for higher resolution scanning.
  • the detection results from the low magnification scan may be highlighted on a display for a user, such as by displaying colored outlines of the tissue regions and the control regions detected by the exemplary method.
  • the tissue detection method may return a label mask or boundary indicating the detected tissue within each declared region, or it may return a list of imaging locations (or fields of view) required to cover all of the detected tissue at a specified higher resolution.
  • the display may then show the detected tissue and/or imaging locations for review by the user prior to beginning the high-resolution scan.
  • the system may be totally automatic (e.g., it automatically defines the regions of interest for subsequent high resolution scanning). It may also allow user interaction by providing tools for the user to manually adjust the desired fields of view (FOV) for higher resolution scans.
  • the method for detecting boundaries of tissue regions in a microscopy image is a hybrid algorithm of a Gaussian MRF-based method and an active contours method.
  • the method may be referred to as "GMRF-MPM," because it estimates the segmentation field by estimating and maximizing the posterior marginal probability ("MPM") conditioned on the observed image, while the hidden posteriori probability field is modeled by a continuous Gaussian Markov Random Field (GMRF).
  • The GMRF-MPM process finds relatively precise boundaries of tissue regions, which are very close to the global minimum, and then the active contours method is used to refine the initial segmentation result. It should be appreciated that the active contours step is optional, depending on the desired precision of the results.
  • a number of fields of view generally corresponding to the tissue regions, can be defined for subsequent high magnification (e.g., 10X or higher) scanning.
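One way to turn the detected tissue mask into the list of high-magnification imaging locations described above is to keep only the tiles that intersect tissue. This is a hedged sketch; the function name and tile-index convention are hypothetical.

```python
import numpy as np

def fovs_covering_mask(mask, fov_h, fov_w):
    """Return the (row, col) tile indices of fov_h x fov_w tiles that
    contain at least one tissue pixel in the binary mask."""
    H, W = mask.shape
    tiles = []
    for r in range(0, H, fov_h):
        for c in range(0, W, fov_w):
            if mask[r:r + fov_h, c:c + fov_w].any():
                tiles.append((r // fov_h, c // fov_w))
    return tiles

mask = np.zeros((40, 40), dtype=np.uint8)
mask[5:15, 25:35] = 1          # one tissue blob
tiles = fovs_covering_mask(mask, 10, 10)
```

In a real system the tile indices would be converted to stage coordinates for the high-magnification scan, scaled by the ratio of the two magnifications.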
  • An additional feature of exemplary embodiments of the invention involves correctly labeling each detected region as sample or control. Moreover, it is expected that specific control regions can be included in or omitted from an imaging round based on their association with specific biomarkers. Other embodiments of the invention can be used for automatic stem cell colony (or other cellular colonies) finding and tracking to aid embryonic developmental biologists in understanding how these arrangements lead to and control the formation of structures. Still other embodiments make smart acquisition of high resolution images only on regions of interest (e.g., tissues, cell colonies) possible. For example, when used in combination with an automated microscopic system, embodiments of the invention enable the system to first fast scan slides in low resolution, then find regions of interest, and finally image only desired regions in the biological specimen under proper experimental conditions automatically.
  • the "image" used for the proposed segmentation algorithm does not have to be a grey scale image. It can be a multispectral image, or it also can be a set of features on the image lattice.
  • the original grey scale image combined with a variance image calculated from the original image, can form a 2-D vector on each pixel site, which can be an input for this algorithm according to an exemplary embodiment of the invention.
  • Figure 1 depicts a typical tissue image.
  • the variable intensity and density of cells is apparent, as well as the lack of well-defined edges, which can blend into the background of the image.
  • One problem of image segmentation is that of estimating the hidden or unobserved realization of the label field (the class) from the observed image.
  • the value of a given site in the label field indicates the class to which the corresponding pixel in the observed image belongs.
  • the Finite Gaussian Mixture model (FGM) is a known model used in segmentation.
  • the FGM model has a number of advantageous features.
  • the FGM model has an intrinsic limitation, because it neglects pixel-level spatial correlation by assuming the image pixels are statistically independent and identically distributed ("i.i.d.”).
  • Such a limitation causes FGM-based algorithms, e.g., K-Means algorithms (see, e.g., Tou, J. T. and Gonzalez, R. C., Pattern Recognition Principles), to ignore the spatial structure of the image.
  • the MAP estimation is used to estimate the unobserved class/label field $x$, given the observed image $y$, by maximizing the a posteriori probability $p(x \mid y)$.
  • The MPM estimate estimates the label $x_s$ at each pixel site $s$ in the image grid lattice, such that for each pixel site $s$, $x_s$ maximizes the posterior marginal probability $p(x_s \mid y)$:

  $\hat{x}_{s,\mathrm{MPM}} = \arg\max_{x_s} p(x_s \mid y)$
  • Exemplary embodiments of the invention separate the pixels in the image into regions based on both their intensity (and/or other features) as well as their spatial information.
  • the goal is to separate tissue regions from an even background, which is relatively smooth.
  • tissue textures can be considered to be noise, which is characterized by the covariance matrix.
  • the image can be modeled as a Mixture of Gaussians (MoG) with spatial constraints.
  • the spatial constraint is introduced by modeling the posteriori marginal conditional probability field as a Gaussian MRF. Note that the spatial constraint can also be incorporated by modeling the label field as a discrete MRF.
  • FIG. 2 depicts a flow chart of a method according to an exemplary embodiment of the invention.
  • the original image is obtained.
  • the original image may be of a tissue, such as, for example, an image as depicted in Figure 1.
  • the image is preprocessed.
  • the preprocessing can include, for example, field correction and noise removal (e.g., salt and pepper noise).
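A minimal sketch of the salt-and-pepper noise removal step, using a 3x3 median filter in pure NumPy. In practice a library routine such as `scipy.ndimage.median_filter` would be used; this hand-rolled version is only illustrative.

```python
import numpy as np

def median_filter3(image):
    """Remove salt-and-pepper noise with a 3x3 median filter
    (edges handled by padding with the border values)."""
    padded = np.pad(image, 1, mode='edge')
    # Nine shifted views of the image, one per window position.
    stacked = np.stack([padded[dr:dr + image.shape[0], dc:dc + image.shape[1]]
                        for dr in range(3) for dc in range(3)])
    return np.median(stacked, axis=0)

img = np.full((5, 5), 100.0)
img[2, 2] = 255.0              # isolated "salt" pixel
clean = median_filter3(img)
```

The isolated outlier is replaced by the median of its neighborhood, while flat regions pass through unchanged.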
  • features are extracted to form a multispectral image for better segmentation results.
  • the feature extraction can include calculation of local variance and texture features on each pixel site, for example.
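The local-variance feature mentioned above can be computed per pixel over a small window, for example as E[x^2] - E[x]^2 with box sums. This is a sketch; the window size and edge-padding policy are assumptions.

```python
import numpy as np

def local_variance(image, radius=1):
    """Per-pixel variance over a (2*radius+1)^2 window; a simple
    texture feature that is high near cell texture and low on an
    even background."""
    k = 2 * radius + 1
    padded = np.pad(image.astype(float), radius, mode='edge')
    win = np.stack([padded[dr:dr + image.shape[0], dc:dc + image.shape[1]]
                    for dr in range(k) for dc in range(k)])
    return win.var(axis=0)

img = np.full((6, 6), 50.0)
img[3, 3] = 150.0              # textured spot raises nearby variance
var = local_variance(img)
```

Stacking the intensity image and this variance image gives the 2-D per-pixel feature vector described above.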
  • the GMRF-MPM and/or MRF-MAP method is conducted to segment the image into four regions or classes.
  • the four regions are classified into a background region and a tissue region to form a binary mask.
  • morphological operations are performed on the binary mask to eliminate small isolated regions, fill in holes, and dilate the mask to ensure that the mask covers the whole tissue region and provide a good initial contour for the following active contour method.
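Of the morphological operations listed, dilation is easy to sketch in pure NumPy; hole filling and small-region removal would typically use library routines (e.g., `scipy.ndimage.binary_fill_holes` and `scipy.ndimage.label`), which are omitted here.

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 structuring element, implemented
    as an OR over the 8-neighborhood shifts (pure-NumPy sketch)."""
    out = mask.astype(bool)
    for _ in range(iterations):
        padded = np.pad(out, 1, mode='constant')
        out = np.zeros_like(out)
        for dr in range(3):
            for dc in range(3):
                out |= padded[dr:dr + mask.shape[0], dc:dc + mask.shape[1]]
    return out.astype(np.uint8)

mask = np.zeros((7, 7), dtype=np.uint8)
mask[3, 3] = 1
grown = dilate(mask)
```

Dilating the mask slightly, as described above, helps ensure it covers the whole tissue region and supplies a safe initial contour for the active contours step.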
  • an optional active contours method is executed on the image using the boundary of the binary mask as the initial contour.
  • the active contours method is used to refine the segmentation result of the GMRF-MPM or MRF-MAP process.
  • Active contour model methods can cope with texture images and high level noise.
  • A problem with the active contours method is that it is extremely slow when given an arbitrary initial contour, and it is easily trapped in local minima. For this reason, according to exemplary embodiments of the invention, it is used only to refine the segmentation results obtained by the preceding steps.
  • Figures 3A and 3B depict a flow chart that provides additional details of the method of Figure 2 with regard to the application of the GMRF-MPM and MRF-MAP algorithms, according to an exemplary embodiment of the invention.
  • an input image is prepared.
  • the image can be an original grey scale image or a multivariate image of features.
  • the image can be an image of a tissue region as depicted in Figure 1.
  • an estimate of the label field is obtained by separating the image into four regions.
  • The separation can be random, by thresholding, or by other methods such as k-means.
  • the estimated label field is used to calculate the Maximum Likelihood Estimation of the Mixture of Gaussians (MoG) parameters.
  • $X = \{x_s\}$ denotes the hidden class (label) field to be estimated, and $s$ denotes the pixel sites in the image lattice.
  • the observed image can be considered as a Mixture of Gaussians (MoG).
  • the intensity information is incorporated by adopting the Gaussian distribution for an image pixel y s on the pixel site s, given a specified classification/segmentation x s .
  • equation (2) above can be applied to a microscopic image with multiple fluorescence channels.
  • $\mu_l$ and $\Sigma_l$ are the mean and the covariance matrix (variance) of class $l$, respectively.
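Written out, the class-conditional density referenced above (the patent's equation (2), which did not survive text extraction) is the standard multivariate Gaussian; this reconstruction uses only the symbols already defined ($\mu_l$, $\Sigma_l$, and the pixel vector $y_s$), with $d$ denoting the number of channels/features per pixel:

```latex
p(y_s \mid x_s = l) \;=\; \frac{1}{(2\pi)^{d/2}\,\lvert\Sigma_l\rvert^{1/2}}
\exp\!\Bigl(-\tfrac{1}{2}\,(y_s - \mu_l)^{\mathsf{T}}\,\Sigma_l^{-1}\,(y_s - \mu_l)\Bigr)
```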
  • the mixing proportion of each class is computed as: $\pi_l = \frac{1}{N}\sum_{s} \mathbf{1}[x_s = l]$
  • the mean of each class is computed as: $\mu_l = \frac{\sum_{s} \mathbf{1}[x_s = l]\, y_s}{\sum_{s} \mathbf{1}[x_s = l]}$
  • the covariance matrix/variance of each class is computed as: $\Sigma_l = \frac{\sum_{s} \mathbf{1}[x_s = l]\,(y_s - \mu_l)(y_s - \mu_l)^{\mathsf T}}{\sum_{s} \mathbf{1}[x_s = l]}$
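The maximum-likelihood updates just described translate directly into code. This sketch assumes a scalar (grey-scale) image and hard labels, so the covariance matrix reduces to a per-class variance; the function name is hypothetical.

```python
import numpy as np

def mog_ml_estimates(y, labels, n_classes):
    """Maximum-likelihood estimates of the Mixture-of-Gaussians
    parameters (mixing proportion, mean, variance) for a scalar
    image y given a hard label field."""
    props, means, variances = [], [], []
    for l in range(n_classes):
        pix = y[labels == l]
        props.append(pix.size / y.size)
        means.append(pix.mean() if pix.size else 0.0)
        variances.append(pix.var() if pix.size else 0.0)
    return np.array(props), np.array(means), np.array(variances)

y = np.array([[10., 10., 90.], [10., 90., 90.]])
labels = (y > 50).astype(int)       # crude initial label field
props, means, variances = mog_ml_estimates(y, labels, 2)
```

In the multispectral case described earlier, `means` would become vectors and `variances` full covariance matrices.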
  • The GMRF-MPM model is applied and the marginal probability, ignoring the spatial constraint, is: $p(x_s = l \mid y_s) \propto \pi_l \, p(y_s \mid x_s = l)$
  • the marginal probability is adjusted using the Gaussian MRF model of the hidden posteriori marginal probability field to incorporate spatial constraints:
  • the MRF model achieves the spatial constraints of image pixels by characterizing mutual influences among them using conditional MRF distributions.
  • the pixel sites are related to each other via a neighborhood system.
  • the conditional distribution of a pixel site $s$ in the image field is identical to the conditional distribution of the site given only a symmetric neighborhood surrounding the site $s$, that is: $p(x_s \mid x_{S \setminus s}) = p(x_s \mid x_{N_s})$
  • $x_s$ is the value at pixel site $s$
  • $N_s$ is the neighborhood of pixel site $s$; it can be either a 4-neighborhood or an 8-neighborhood.
  • The MRF is used to model the label field $X$; the priori probability of the current pixel site $s$ being class $l$ (label $l$) can be estimated given the estimated labels of its neighbors.
  • The posteriori marginal conditional probability of pixel site $s$ being class $l$ (labeled as $l$) can be estimated given the estimated posteriori marginal conditional probabilities of its neighborhood.
  • the MPM estimate of the class label is obtained by maximizing the marginal conditional probabilities of the class labels given the observed image.
  • Conditional probabilities can be estimated by using a Gibbs sampler, such as disclosed in Marroquin, J., Mitter, S., and Poggio, T., "Probabilistic Solution of Ill-Posed Problems in Computational Vision," IEEE Trans. on Image Processing, 3:162-177 (1994). But this operation may be computationally expensive.
  • the estimate of the true value of the posterior marginal at pixel site $s$ can be chosen to be the maximum likelihood estimate given the estimates at its neighboring sites $N_s$:
  • $\alpha$ and $\beta$ are unknown parameters; these parameters are set empirically according to an embodiment of the invention.
  • an updated label field can be obtained by assigning to each pixel site $s$ in the image lattice the label that maximizes the estimated posterior marginal probability.
  • equation (4) can be rewritten as the following:
  • the computation of the MAP estimation is also computationally expensive, especially for real- world applications.
  • The same iterated conditional modes (ICM) method can be used to iteratively estimate $x_s$ by maximizing the local joint probability. Because of the local characteristics of MRFs, the local joint probability can be calculated as: $p(x_s \mid y_s, x_{N_s}) \propto p(y_s \mid x_s)\, p(x_s \mid x_{N_s})$
  • the MRF-MAP model is applied and conditional probability is calculated:
  • the prior probability is calculated using the discrete MRF model: by utilizing a discrete Markov Random Field to model the label field $X$, the priori probability $p(x_s \mid x_{N_s})$ can be calculated.
  • the posterior probability is calculated:
  • a new label field $x^{(i)}$ is obtained by assigning the label which maximizes the estimated conditional probability to each pixel site $s$ in the $i$-th iteration.
  • a convergence check is performed to see if the number of pixel labels that have changed is less than a predefined number (M), or if the iteration number is greater than a predefined number of iterations (N). If so, then the method goes to block 330. Otherwise, the method iterates as shown in element 306 in Figure 3A.
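The discrete MRF-MAP iteration with its convergence check can be sketched with an iterated-conditional-modes (ICM) update and a Potts-style prior. The smoothness weight `beta` and the 4-neighborhood are assumptions; this is an illustration of the technique, not the patent's exact energy.

```python
import numpy as np

def icm_segment(y, means, variances, beta=1.0, max_iter=10, min_changes=1):
    """ICM sketch of the discrete MRF-MAP step: each pixel takes the
    label minimizing (negative Gaussian log-likelihood + beta * number
    of disagreeing 4-neighbors), until fewer than min_changes labels
    flip or max_iter iterations pass."""
    labels = np.argmin((y[..., None] - means) ** 2 / variances, axis=-1)
    H, W = y.shape
    for _ in range(max_iter):
        changes = 0
        for r in range(H):
            for c in range(W):
                best, best_cost = labels[r, c], np.inf
                for l in range(len(means)):
                    cost = ((y[r, c] - means[l]) ** 2 / (2 * variances[l])
                            + 0.5 * np.log(variances[l]))
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < H and 0 <= cc < W and labels[rr, cc] != l:
                            cost += beta     # Potts smoothness penalty
                    if cost < best_cost:
                        best, best_cost = l, cost
                if best != labels[r, c]:
                    labels[r, c] = best
                    changes += 1
        if changes < min_changes:            # convergence check
            break
    return labels

means = np.array([50.0, 200.0])
variances = np.array([100.0, 100.0])
y = np.full((8, 8), 50.0)
y[2:6, 2:6] = 200.0
y[4, 4] = 120.0                 # ambiguous pixel inside the bright blob
labels = icm_segment(y, means, variances, beta=5.0)
```

The spatial prior pulls the ambiguous interior pixel to the label of its neighbors, which a purely intensity-based (FGM) classifier would not do.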
  • the four regions are classified into two classes: tissue and background.
  • Because of the variable intensity and density of cells, the image can be segmented into 4 classes (or any other desired number of classes) to capture all the tissue regions with different intensities. All tissue regions can then be combined to form a binary mask of foreground (tissues) and background.
  • A classification method is used. This classification method can be a simple nearest-neighbor rule or a more sophisticated method, for example, a decision tree. For different kinds of tissues, the classification method may differ. For example, for tissue micro array (TMA) spots as shown in Figures 7A-7D, the tissue spots usually have round shapes and are arranged regularly on a rectangular grid, while for a tissue sample the size and shape can vary widely.
  • Figures 5A - 5D illustrate an exemplary classification method for tissue regions.
  • Figure 5A depicts an example of an original image.
  • Figure 5B depicts an image that has been segmented into four regions according to an embodiment of the invention.
  • Figure 5C depicts a binary mask comprising a foreground (tissue) region and background region according to an embodiment of the invention.
  • Figure 5D depicts the mask after a few morphological operations, prior to running the active contours step.
  • an optional active contours step 214 may be conducted.
  • the objective of the active contour step is to evolve a curve, subject to constraints from a given image, to detect the object in that image. For example, starting with a curve around the object to be detected, the curve moves toward its interior normal and stops on the boundary of the object.
  • the strength attracting the curve to the true boundary of the object of interest comes from minimizing of two kinds of energies: inner energy and outer energy.
  • Inner energy such as curvature, curve length and so on, adds stiffness, smoothness and other desired characters to the resulting curve.
  • the outer energy comes from the image, which attracts the curve to the true boundary of the object.
  • A level set based active contour method is used to refine the contour found in step 330. This method utilizes the level set approach, finding the zero level set of an energy surface to locate the boundaries of the objects of interest.
  • The surface starts with an initialization, which is obtained from the mask of step 330, and is updated step by step; at the same time, the zero level set is updated and converges to the objects' boundaries.
  • the energy surface can be defined to have zero/minimum absolute value at the boundary of the objects.
  • the methods described herein may be executed or otherwise performed by one or a combination of various systems, components, and sub- systems, including a computer implemented system.
  • a computer implemented system may incorporate one or more computer processors, computer memory, computer storage, and software.
  • the computer implemented system may have various modules. As used herein, the term module may be understood to refer to executable software, firmware, hardware, and/or various combinations thereof.
  • the computer implemented system may have a processor and operably coupled memory that is configured to execute one or more steps of the methods described herein. The processing to execute the method steps may be distributed over multiple computer components, such as multiple processors or even multiple computer systems.
  • the computer implemented system may be stand-alone or communicatively coupled to a computer network, such as a local area network or the Internet.
  • Such communicative coupling may involve a wireless or wired connection.
  • the methods herein may be embodied in software which may be tangibly embodied in one or more non-transitory physical computer-readable media, such as, but not limited to, a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a hard drive, read only memory (ROM), random access memory (RAM), as well as other physical media capable of storing software, and/or combinations thereof.
  • FIG. 6 depicts a system according to an exemplary embodiment of the invention.
  • the system 600 may include an imaging device 602.
  • the imaging device 602 may include a microscope or a camera device, such as a Complementary Metal Oxide Semiconductor (CMOS) or a Charge Coupled Device (CCD) camera.
  • the imaging device 602 may use different imaging methods such as white light, fluorescent light, and transmitted light, including bright-field, differential interference contrast (DIC), and phase contrast.
  • Slides that include tissue samples may be loaded into the device 602.
  • Several sub-units may be attached, such as an X-Y stage and a Z stage.
  • The X-Y stage is used to move samples into the field of view, while the Z stage is typically used to bring the sample into focus.
  • the imaging device 602 may then proceed to generate images of the slides according to the methods described herein.
  • the imaging device 602 may be capable of different imaging types, including transmitted light imaging, white light imaging, and fluorescent imaging.
  • the imaging device 602 includes a processor, a memory, an input device such as a keyboard and mouse, and a display.
  • the imaging device 602 may be programmed with software or firmware to execute the exemplary methods described herein.
  • the imaging device 602 may include a graphical user interface on the display. A user may interact with the system imaging device 602 through the graphical user interface.
  • the graphical user interface may provide for viewing and manipulation of images on the imaging device 602.
  • the imaging device 602 may have one or more peripheral devices 604 communicatively coupled to it.
  • the imaging device may be equipped with robots for loading of microplates or slides so that the system can automatically image a large number of microplates/slides under the same experimental conditions.
  • a liquid handling sub-unit and an environment control sub-unit can also be attached to the imaging device.
  • the liquid handling sub-unit is usually used to add a drug to the sample during imaging, while the environment control sub-unit may be used to control the environment surrounding the sample, such as temperature, humidity, and the concentration of CO2.
  • the device may also include an attached barcode scanner to identify the slides as well as environmental controls or fluidic handling apparatus to manipulate the staining within the imaging device, according to some embodiments of the invention.
  • the imaging device and the sub-units make up the body of the platform, i.e., the hardware of the system, which is controlled by a computer through computer programs, i.e., software.
  • the network 606 may be a computer based network.
  • the network 606 may be a local area network or its equivalent.
  • the network 606 may include, or be connected to, the Internet.
  • the network 606 may include one or more wireless networks.
  • the imaging device 602 may be connected to other devices through the network 606. For example, images generated by the imaging device 602 may be transmitted across the network 606 to another processing device 608.
  • the device 608 may perform additional processing of the images. Or, the device 608 may perform methods described herein on the images according to some embodiments of the invention.
  • the device 608 may be a computing device, having one or more processors and/or memory contained therein.
  • the device 608 may provide control for and interaction with the imaging device 602.
  • the device 608 may include one or more displays and a graphical user interface, and other input devices. It should be appreciated that while a single device 608 is depicted, device 608 may consist of multiple devices.
  • the system 600 may execute the methods according to exemplary embodiments of the invention.
  • the system 600 may contain executable code embodied in a non- transitory medium to cause the imaging device 602 and/or the device 608 to execute the methods of Figures 2 and 3.
  • the method may be performed entirely on either the imaging device 602 or the device 608, or the method steps may be distributed between the imaging device 602 and the device 608 according to exemplary embodiments of the invention.
  • The figures depict various functionality and features associated with exemplary embodiments of the invention. While a single illustrative block, sub-system, device, or component may be shown, these illustrative blocks, sub-systems, devices, or components may be multiplied for various applications or different application environments. In addition, the blocks, sub-systems, devices, or components may be further combined into a consolidated unit. Further, while a particular structure or type of block, sub-system, device, or component is shown, this structure is meant to be exemplary and non-limiting, as other structures may be substituted to perform the functions described.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

According to one embodiment, the invention relates to a computer implemented method for detecting tissue samples in low resolution microscopic images comprising: obtaining an image of a microscopic tissue slide using at least one processor; segmenting the image with the processor into a plurality of regions using either (a) a maximum posterior marginal probability (MPM) process with a Gaussian Markov Random Field (GMRF), or (b) a maximum a posteriori (MAP) estimation with a discrete Markov Random Field (MRF); and classifying the plurality of regions into a background region and a tissue region to form a binary mask. The method may also comprise applying an active contour method to the binary mask to refine the tissue boundary. The invention also relates to a computer implemented system comprising a memory and a processor that is programmed to carry out the tissue detection method.

Description

DETECTION OF TISSUE REGIONS IN MICROSCOPE SLIDE IMAGES
Field of the Invention
The present invention relates generally to the field of tissue / cell colony detection and imaging, and more particularly to a system and method for efficiently segmenting a low resolution microscope tissue slide image into different regions, such as background and tissue regions.
Background of the Invention
A tissue is an ensemble of cells, typically from the same origin, that together may carry out a specific function. The cells are spatially distributed to create a signature texture for a specific tissue. One of the challenges of tissue detection is the lack of distinct boundaries or edges. In a typical tissue slide image, there is a variable intensity and density of cells, as well as a lack of well-defined edges. Currently, the automated microscopic systems used in high-content screening allow users to acquire images of biological samples, such as tissues and cell colonies, located on the microscope slides. For an unknown slide, users have to scan the entire slide so that no samples are missed. This procedure is time- and storage-consuming, since a large number of images are taken of unnecessary regions. The situation is exacerbated when high magnification (such as when a 10X, 20X, or even 40X objective is used) is required to perform imaging of the samples of interest. It is desirable for the imaging device to take high resolution images only of tissue/cell colony regions (as opposed to background regions). Moreover, it is expected that specific control regions can be included in or omitted from an imaging round based on their association with specific biomarkers. To achieve the foregoing objectives, it may be necessary to rely on a tissue detection algorithm to automatically detect tissue/cell colony regions on low magnification microscopic slide images.
A known algorithm used in microscopic images to detect tissue objects is the active contours method. Active contour methods for image segmentation allow a contour to deform iteratively so as to partition an image into regions. The primary drawbacks of active contour methods are that they are slow to compute and the contours are susceptible to being trapped in local minima. Accordingly, there is a need in the art for a faster, more accurate method for computing the boundaries of tissue samples.
Summary of the Invention
Exemplary embodiments of the present invention provide a system and method that can expedite the segmentation process and avoid or reduce trapping in local minima. According to one example, the system and method include a segmentation process to obtain initial boundaries of the tissue regions, followed by a fast active contours method to refine the segmentation result and return the true boundaries. The second step (active contours) can be omitted for the sake of performance, if desired.
According to one embodiment, the invention relates to a computer implemented method comprising: obtaining an image of a tissue using at least one processor; segmenting the image with the processor into a plurality of regions with either (a) a Gaussian Markov Random Field (MRF) based method, or (b) a discrete Markov Random Field (MRF) based method; and classifying the plurality of regions into a background region and a tissue region to form a binary mask. An active contours method can then be applied to the binary mask to further refine the boundary.
According to another embodiment, the invention relates to a system comprising: a processor; and a memory comprising computer-readable instructions which when executed by the processor cause the processor to perform the steps comprising: obtaining an image of a tissue using at least one processor; segmenting the image with the processor into a plurality of regions using either (a) a Gaussian Markov Random Field (MRF) based method, or (b) a discrete Markov Random Field (MRF) based method; and classifying the plurality of regions into a background region and a tissue region to form a binary mask. Exemplary embodiments of the invention can provide the advantage of saving time and storage space by avoiding high resolution scanning of background regions.
Brief Description of the Drawings
In order to facilitate a fuller understanding of various embodiments of the present invention, reference is now made to the attached drawings. The drawings should not be construed as limiting the present invention, but are intended only to be examples of embodiments of the invention.
Figure 1 depicts one example of an image of a tissue sample.
Figure 2 depicts a method according to one embodiment of the invention.
Figures 3A and 3B depict additional steps in the method of Figure 2 according to exemplary embodiments of the invention.
Figure 4 depicts the relationship of horizontal, vertical, and diagonal cliques with the current pixel site s according to an exemplary embodiment of the invention.
Figures 5A-5D depict an image that is processed with a method according to one embodiment of the invention.
Figure 6 depicts a system according to one embodiment of the invention.
Figures 7A-7D depict examples of slide layout configurations for tissue samples and tissue micro arrays (TMAs).
It will be understood by those persons skilled in the art that the embodiments of the invention described herein are capable of broad utility and application. Accordingly, while the invention is described in detail in relation to the exemplary embodiments, it is to be understood that this disclosure is illustrative only, and is not intended to be construed to limit the invention or otherwise to exclude any other such embodiments, adaptations, variations, modifications and equivalent arrangements.
Detailed Description of the Preferred Embodiments
According to one embodiment of the invention, a method is provided for automatically locating tissue regions in microscopy slide images. Figure 1 depicts an example of an image of a tissue sample. Embodiments of the invention provide for automatic high power scanning (e.g., 10X magnification or higher) of only the tissue regions instead of the entire slide. The process may employ an initial low magnification (e.g., 2X) scan of the entire slide. The initial low magnification scan may use a number of fields of view (FOV) to cover the whole slide, e.g., 4 x 7 FOVs covering the slide. Next, objects of interest are detected from the low magnification scan. For example, the low power scans may be reviewed and regions on the slide containing object(s) of interest may be pre-defined.
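For illustration only, the number of low magnification fields of view needed to tile a scan region can be computed from the region and FOV dimensions. This is a hedged sketch, not part of the disclosed method; the function name and the assumption of non-overlapping, axis-aligned FOVs are hypothetical.

```python
import math

def fov_grid(region_w, region_h, fov_w, fov_h):
    """Number of field-of-view columns and rows needed to cover a region.

    Dimensions may be in any consistent unit (e.g., millimetres). Assumes
    non-overlapping, axis-aligned FOVs; real scanners often add overlap.
    """
    return math.ceil(region_w / fov_w), math.ceil(region_h / fov_h)
```

For example, a 24 mm x 42 mm scan area imaged with a 6 mm square low-magnification FOV would yield a 4 x 7 grid, matching the 4 x 7 FOV example above.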
According to one example, two to three regions may be defined on the slide, such as a label region, a control region, and a tissue region as shown in Figures 7A through 7D. These regions may be defined by use of a slide template that requires the control and tissue to be placed within well-defined regions of the slide (e.g., top half and bottom half), or they may be defined by the user by drawing a bounding box for each region on the screen using the low-resolution image as a guide. The control region may provide known staining levels for a specified biomarker, for example, and may be used as a visual comparison or as a means to quantitatively normalize the tissue image. The label region may comprise an appropriate label such as a bar code, for example.
Different tests require different slide layouts, and a priori knowledge of the slide layout is needed to accurately detect and label tissue. The image acquisition system, according to exemplary embodiments of the invention, can store and use slide layout templates to provide this knowledge to the tissue detection component of the system. A specific predefined layout may be assigned to each slide and communicated to the tissue detection module from a workflow manager, for example, or can be permanently assigned to a test type. According to one example, the slide layouts shown in Figures 7A through 7D are used. Figure 7A depicts a template that may be used for tissue slides that do not have on-slide controls. Figure 7B depicts a template that may be used for highly-multiplexed tests in which controls must be provided for multiple biomarkers. In this case, each spot in the Tissue Micro Array (TMA) in the control region would be assigned as a positive or negative control for one or more biomarkers. In cases where tissue samples were used as controls, a template similar to Figure 7C can be used, where entire regions of the slide would be used to designate control tissue. Finally, the template depicted in Figure 7D could be used for research applications in which a large number of tissues are examined, according to exemplary embodiments of the invention.
Accordingly, exemplary embodiments of the invention provide a method for
automatically detecting tissue regions and/or cell pellets in both control and sample regions in a low magnification scan of a microscopy slide. The method can be used to segment and track various tissues including cell colonies and similar microstructures which are composed of a finite number of components, such as red blood cells, nuclei, etc., as observed at a microscopic scale. The tissue regions detected in the low magnification scan become regions of interest (ROI) for higher resolution scanning. This process can provide the advantage that the high resolution scan is conducted only on the regions of interest, rather than on the entire slide, which can save considerable time and storage space.
According to one aspect of the invention, the detection results from the low magnification scan may be highlighted on a display for a user, such as by displaying colored outlines of the tissue regions and the control regions detected by the exemplary method. The tissue detection method may return a label mask or boundary indicating the detected tissue within each declared region, or it may return a list of imaging locations (or fields of view) required to cover all of the detected tissue at a specified higher resolution. The display may then show the detected tissue and/or imaging locations for review by the user prior to beginning the high-resolution scan. The system may be totally automatic (e.g., it automatically defines the regions of interest for subsequent high resolution scanning). In addition, it may also allow user interactions by providing tools for the user to manually adjust the desired fields of view (FOV) for higher resolution scans.
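As an illustration of how "a list of imaging locations (or fields of view)" could be derived from a detection result, the sketch below tiles a binary tissue mask and keeps only the tiles that contain tissue. The function name, the tiling scheme, and the absence of stage-coordinate conversion are assumptions made for this example.

```python
def tissue_fovs(mask, tile_h, tile_w):
    """Return (row, col) tile indices of high-resolution fields of view that
    contain at least one tissue pixel, given a binary mask (rows of 0/1)
    produced from the low-magnification scan."""
    height, width = len(mask), len(mask[0])
    fovs = []
    for r in range(0, height, tile_h):
        for c in range(0, width, tile_w):
            tile_has_tissue = any(
                mask[y][x]
                for y in range(r, min(r + tile_h, height))
                for x in range(c, min(c + tile_w, width))
            )
            if tile_has_tissue:
                fovs.append((r // tile_h, c // tile_w))
    return fovs
```

Only the tiles returned here would then be scanned at the specified higher resolution, and a user could add or remove tiles interactively before scanning.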
According to one embodiment of the invention, the method for detecting boundaries of tissue regions in a microscopy image is a hybrid algorithm of a Gaussian MRF-based method and an active contours method. The method may be referred to as "GMRF-MPM," because it estimates the segmentation field by estimating and maximizing the posterior marginal probability ("MPM") conditioned on the observed image, while the hidden posteriori probability field is modeled by a continuous Gaussian Markov Random Field (GMRF). According to exemplary embodiments of the invention, the GMRF-MPM process finds relatively precise boundaries of tissue regions, which are very close to the global minimum, and then the active contours method is used to refine the initial segmentation result. It should be appreciated that the active contours step is optional, depending on the desired precision of the results. After the tissue boundaries have been defined as regions of interest, a number of fields of view, generally corresponding to the tissue regions, can be defined for subsequent high magnification (e.g., 10X or higher) scanning.
An additional feature of exemplary embodiments of the invention involves correctly labeling each detected region as sample or control. Moreover, it is expected that specific control regions can be included in or omitted from an imaging round based on their association with specific biomarkers. Other embodiments of the invention can be used for automatic stem cell colony (or other cellular colonies) finding and tracking to aid embryonic developmental biologists in understanding how these arrangements lead to and control the formation of structures. Still other embodiments make smart acquisition of high resolution images only on regions of interest (e.g., tissues, cell colonies) possible. For example, when used in combination with an automated microscopic system, embodiments of the invention enable the system to first fast scan slides in low resolution, then find regions of interest, and finally image only desired regions in the biological specimen under proper experimental conditions automatically.
It should be noted that the "image" used for the proposed segmentation algorithm does not have to be a grey scale image. It can be a multispectral image, or it can also be a set of features on the image lattice. For example, the original grey scale image, combined with a variance image calculated from the original image, can form a 2-D vector on each pixel site, which can be an input for this algorithm according to an exemplary embodiment of the invention. Figure 1 depicts a typical tissue image. In Figure 1, the variable intensity and density of cells is apparent, as well as the lack of well-defined edges, which can blend into the background of the image. One problem of image segmentation is that of estimating the hidden or unobserved realization of the label field (the class) from the observed image. The value of a given site in the label field indicates the class to which the corresponding pixel in the observed image belongs. The Finite Gaussian Mixture model (FGM) is a known model used in segmentation. The FGM model has a number of advantageous features. However, the FGM model has an intrinsic limitation, because it neglects pixel-level spatial correlation by assuming the image pixels are statistically independent and identically distributed ("i.i.d."). This limitation causes FGM based algorithms, e.g., K-Means algorithms (see, e.g., Tou, J. T. and Gonzalez, R. C., Pattern Recognition Principles, Reading, MA: Addison-Wesley, 1st edition (1974); and Gray, R. M. and Linde, Y., "Vector Quantizers and Predictive Quantizers for Gauss-Markov Sources," IEEE Trans. Commun., COM-30(2):380-389 (1982)) and Expectation Maximization (EM) (see, e.g., Dempster et al. (1977)), to work only on well-defined images with homogeneous regions and low levels of noise. This is seldom the case for microscopy images, especially for tissue images, which form different textural patterns for different tissue types and samples. On the other hand, these textural patterns can be described by the covariance matrix of the region if the spatial information is considered.
In order to address the spatial correlation between image pixels, embodiments of the invention use a Markov Random Field (MRF). The MRF has been used in image segmentation (see, e.g., Geman, S. and Geman, D., "Stochastic Relaxation, Gibbs Distribution, and the Bayesian Restoration of Images," IEEE Trans. Pattern Anal. Machine Intell., 6(6):721-741 (1984); Besag, J., "Spatial Interaction and Statistical Analysis of Lattice Systems," J. Roy. Stat. Soc., 36:192-236 (1974); and Cross, G. R. and Jain, A. K., "Markov Random Field Texture Models," IEEE Trans. Pattern Anal. Machine Intell., 5(3):25-39 (1983)), because it is a robust model for representing many different types of images. According to various embodiments of the invention, two kinds of estimations can be used to estimate the hidden class/label field:
maximum a posteriori estimation (MAP), and maximum posterior marginal probability estimation (MPM).
The MAP estimation is used to estimate the unobserved class/label field x, given the observed image y, by maximizing the a posteriori probability p(x|y):

$\hat{x}_{MAP} = \arg\max_{x} p(x \mid y)$
The MPM estimate estimates the label $x_s$ at each pixel site s in the image grid lattice, such that for each pixel site s, $x_s$ maximizes the posterior marginal probability $p(x_s \mid y)$:

$\hat{x}_{s,MPM} = \arg\max_{x_s} p(x_s \mid y)$
Exemplary embodiments of the invention separate the pixels in the image into regions based on both their intensity (and/or other features) and their spatial information. The goal is to separate tissue regions from an even background, which is relatively smooth. In addition, tissue textures can be considered to be noise, which is characterized by the covariance matrix. Thus, the image can be modeled as a Mixture of Gaussians (MoG) with spatial constraints. The spatial constraint is introduced by modeling the posteriori marginal conditional probability field as a Gaussian MRF. Note that the spatial constraint can also be incorporated by modeling the label field as a discrete MRF. By modeling different fields as a Markov Random Field in the algorithm, different estimations of the label field can be achieved; the former will result in an MPM estimation, while the latter yields a MAP (maximum a posteriori probability) estimation. Both GMRF-MPM and MRF-MAP can be used to segment the tissue regions. The system can be configured to allow a user to select which algorithm to implement, or it can be configured to use one or the other algorithm depending on the application.

Figure 2 depicts a flow chart of a method according to an exemplary embodiment of the invention. At block 202, the original image is obtained. The original image may be of a tissue, such as, for example, an image as depicted in Figure 1. At block 204, the image is preprocessed. The preprocessing can include, for example, field correction and noise removal (e.g., salt and pepper noise). At block 206, which is optional according to various embodiments of the invention, features are extracted to form a multispectral image for better segmentation results. The feature extraction can include calculation of local variance and texture features on each pixel site, for example. At block 208, the GMRF-MPM and/or MRF-MAP method is conducted to segment the image into four regions or classes.
At block 210, the four regions are classified into a background region and a tissue region to form a binary mask. At block 212, morphological operations are performed on the binary mask to eliminate small isolated regions, fill in holes, and dilate the mask to ensure that the mask covers the whole tissue region and provide a good initial contour for the following active contour method.
At block 214, an optional active contours method is executed on the image using the boundary of the binary mask as the initial contour. The active contours method is used to refine the segmentation result of the GMRF-MPM or MRF-MAP process. Active contour model methods can cope with texture images and high levels of noise. A drawback of the active contours method is that it is extremely slow if given an arbitrary initial contour, and it easily becomes trapped in local minima. This is the reason it is used here only to refine the segmentation results obtained by the preceding steps, according to exemplary embodiments of the invention.
At block 216, the method ends.
Figures 3A and 3B depict a flow chart that provides additional details of the method of Figure 2 with regard to the application of the GMRF-MPM and MRF-MAP algorithms, according to an exemplary embodiment of the invention. At block 302, an input image is prepared. The image can be an original grey scale image or a multivariate image of features. For example, the image can be an image of a tissue region as depicted in Figure 1.
At block 304, an estimate of the label field (or class) is obtained by separating the image into four regions. The separation can be random, by thresholding, or by other methods such as k-means.
At block 308, the estimated label field is used to calculate the Maximum Likelihood Estimation of the Mixture of Gaussians (MoG) parameters. As used herein, Θ indicates the MoG parameters.
To segment a given image is to estimate the unobserved class (label) field X given the observed image Y. Here, $X = \{x_s\}$ denotes the hidden class (label) field to be estimated, and s denotes the pixel sites in the image lattice.
The observed image can be considered as a Mixture of Gaussians (MoG). The intensity information is incorporated by adopting the Gaussian distribution for an image pixel $y_s$ on the pixel site s, given a specified classification/segmentation $x_s$. Here, $x_s$ is the label of the pixel $y_s$, with the probability

$f(y_s \mid x_s = l) = \frac{1}{\sqrt{2\pi}\,\sigma_l} \exp\left(-\frac{(y_s - \mu_l)^2}{2\sigma_l^2}\right)$ (1)

for grey scale images, or

$f(y_s \mid x_s = l) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_l|^{1/2}} \exp\left(-\frac{1}{2}(y_s - \mu_l)^T \Sigma_l^{-1} (y_s - \mu_l)\right)$ (2)

for multispectral images, which have vector-valued pixels (d being the number of channels). For example, equation (2) above can be applied to a microscopic image with multiple fluorescence channels. Here, $\mu_l$ and $\Sigma_l$ ($\sigma_l^2$) are the mean and covariance matrix (variance) of class l, respectively. At block 310, the proportion (prior probability) of each class is computed as:
$p_l = \frac{N_l}{N}$

where $N_l$ is the number of pixels currently labeled l and N is the total number of pixels.
At block 312, the mean of each class is computed as:

$\mu_l = \frac{1}{N_l} \sum_{s:\, x_s = l} y_s$
At block 314, the covariance matrix/variance of each class is computed as:
$\Sigma_l = \frac{1}{N_l} \sum_{s:\, x_s = l} (y_s - \mu_l)(y_s - \mu_l)^T$

(for grey scale images, this reduces to the variance $\sigma_l^2 = \frac{1}{N_l} \sum_{s:\, x_s = l} (y_s - \mu_l)^2$).
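The maximum-likelihood parameter updates of blocks 308 through 314 can be sketched for a grey scale image with a hard labelling. This is a simplified, hypothetical helper (the multispectral case would replace the scalar variance with a covariance matrix, and it assumes every class is non-empty):

```python
def mog_ml_estimates(pixels, labels, num_classes):
    """Per-class proportion p_l, mean mu_l, and variance sigma_l^2, estimated
    by maximum likelihood from pixel intensities and their current labels.
    Assumes each class label occurs at least once."""
    n = len(pixels)
    params = []
    for l in range(num_classes):
        members = [y for y, x in zip(pixels, labels) if x == l]
        p = len(members) / n                                      # proportion p_l
        mu = sum(members) / len(members)                          # mean mu_l
        var = sum((y - mu) ** 2 for y in members) / len(members)  # variance sigma_l^2
        params.append((p, mu, var))
    return params
```

These per-class statistics are exactly the quantities used to evaluate equations (1) and (2) in the next step.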
At block 316, the GMRF-MPM model is applied and the marginal probability, ignoring the spatial constraint, is:

$\pi_{ls}^{0} = P(x_s = l \mid y_s) = \frac{p_l\, f(y_s \mid x_s = l)}{\sum_{k} p_k\, f(y_s \mid x_s = k)}$
At block 318, the marginal probability is adjusted using the Gaussian MRF model of the hidden posteriori marginal probability field to incorporate spatial constraints:

$\hat{\pi}_{ls} = \frac{\alpha\, \pi_{ls}^{0} + \gamma \sum_{r \in N_s} \pi_{lr}}{\alpha + \gamma\, |N_s|}$

where $N_s$ denotes the neighborhood of site s.
The MRF model achieves the spatial constraints of image pixels by characterizing mutual influences among them using conditional MRF distributions. In an MRF, the pixel sites are related to each other via a neighborhood system. The conditional distribution of a pixel site s in the image field is identical to the conditional distribution of the site given only a symmetric neighborhood surrounding the site s, that is:
$p(x_s \mid x_r,\, r \neq s) = p(x_s \mid x_r,\, r \in N_s)$
Here, x is the value of pixel site s, and Ns is the neighbor or pixel size s. It could be either a 4-neighborhood or an 8-neighborhood. If the MRF is used to model the label field, X, the priori probability of the current pixel site s of being class f (label ^ ) can be estimated given the estimation of the labels of its neighbors. On the other hand, if a continuous MRF is used to model the posteriori marginal conditional probability field in favor of class 1, ¾, the posteriori marginal conditional probability of pixel site s being class 1 (labeled as 1) can be estimated given the estimated posteriori marginal conditional probability of its neighborhood. The value of Hi at location s is the marginal conditional probability of xs= given )¾¾· To maximize the posteriori marginal probability (MPM) estimation of the class (label) field x , is to find for each pixel site s, the value of class (label) 1, which maximizes the posteriori marginal conditional probability of xs given the observed image y,n = P( Ls W) at each pixel site s for label : x-,xs=i . The MPM estimate of the class label is obtained by maximizing the marginal conditional probabilities of the class labels given the observed image. The value of these conditional probabilities can be estimated by using a Gibbs sampler, such as disclosed in Marroquin, J., Mitter, S.,and Poggio, T., "Probability Solution of 111 Posed Problems in Computational Vision, IEEE Trans, on Image Processing, 3: 162-177 (1994). But this operation may be computationally expensive.
In order to achieve the MPM estimate of the label field as well as keep the computation efficient, we modeled the posteriori marginal conditional probability field by a Gaussian MRF field. The value of $\pi_{ls}$ at pixel site s is the marginal probability of $x_s = l$ given the image in the neighborhood of s, $y_{N_s}$. Thus,

$\pi_{ls} = P(x_s = l \mid y_{N_s})$
takes a continuous value in [0,1]. The Gaussian MRF property of $\Pi_l$ states that the posterior marginal probability $\pi_{ls} = P(x_s = l \mid y_{N_s})$ only depends on its neighbors, $\pi_{lr},\, r \in N_s$, and has conditional densities:

$p(\pi_{ls} \mid \pi_{lr},\, r \in N_s) \propto \exp\left(-\frac{1}{2\sigma^2}\left(\alpha (\pi_{ls} - \pi_{ls}^{0})^2 + \gamma \sum_{r \in N_s} (\pi_{ls} - \pi_{lr})^2\right)\right)$
Thus, given $\pi_{l,N_s}$ in the neighborhood, the estimate of the true value $\pi_{ls}$ can be chosen to be the maximum likelihood estimation of $\pi_{ls}$ given $\pi_{l,N_s}$:

$\hat{\pi}_{ls} = \frac{\alpha\, \pi_{ls}^{0} + \gamma \sum_{r \in N_s} \pi_{lr}}{\alpha + \gamma\, |N_s|}$
Here, $\alpha$ and $\gamma$ are unknown parameters, and these parameters are set empirically according to an embodiment of the invention. After estimating $\pi_{ls}$ for each class/label l and pixel site s, an updated label field can be achieved by assigning the label which maximizes $\pi_{ls}$ to each pixel site s in the image lattice. In summary, according to exemplary embodiments of the invention, an iterated method is used to estimate $P(x_s = l \mid y_{N_s})$ given the observed image y. That is, first calculate the conditional probability $\pi_{ls}^{0} = P(x_s = l \mid y_s)$ as the initial estimation of the marginal posteriori probability, by assuming the image pixels are i.i.d. random variables from a specific Gaussian distribution as described by Equation (1) or (2) above (which is not true, so the second step is used to adjust the estimated posteriori probability). Then, adjust the posterior marginal probability by incorporating the spatial constraint, using a continuous Gaussian Markov Random Field to model the field $\Pi_l$. Update the class (label) field, and iterate from the beginning until convergence is achieved.

An alternative method to the GMRF-MPM method for tissue detection is MRF-MAP, the maximum a posteriori estimation of the class (label) field X. Here, a discrete Markov Random Field is used to model the unobserved label field X. As before, MoG may be used to model the intensity component; that is, given the class/label of a specific observed pixel, it follows a specific Gaussian distribution.
$\hat{x}_{MAP} = \arg\max_{x} p(x \mid y) = \arg\max_{x} \frac{p_{X,Y}(x, y)}{p(y)}$ (6)

Since p(y) will be the same, equation (6) can be rewritten as the following:
$\hat{x}_{MAP} = \arg\max_{x} \frac{p(y \mid x)\, p(x)}{p(y)} = \arg\max_{x} \left(p(y \mid x)\, p(x)\right)$ (7)

For any given label $x_s = l$ at the pixel site s, $y_s$ is conditionally independent of the other pixels, so:

$\hat{x}_{MAP} = \arg\max_{x} \left(\prod_{s} p(y_s \mid x_s)\right) p(x)$ (8)
The computation of the MAP estimation is also computationally expensive, especially for real-world applications. The iterated conditional modes (ICM) method can be used to iteratively estimate $x_s$ by maximizing the local joint probability. Because of the local characteristics of MRFs, the local joint probability can be calculated as:
$p(y_s, x_s \mid x_{N_s}) = p(y_s \mid x_s)\, p(x_s \mid x_{N_s})$ (9)
Now, from the Mixture of Gaussians model for the intensity component, $p(y_s \mid x_s)$ can be calculated; from the discrete MRF model of the label field x, the prior

$p(x_s \mid x_{N_s})$

can be calculated too.
Referring again to Figure 3B, at block 320, the MRF-MAP model is applied and the conditional probability is calculated:

$p(y_s \mid x_s = l) = f(y_s \mid x_s = l)$

as given by Equation (1) or (2) above.
At block 322, the prior probability is calculated using the discrete MRF model:

$p(x_s = l \mid x_{N_s}) = \frac{1}{Z} \exp\left(-\left(t_1(x) + t_2(x)\right)\right)$

where Z is a normalizing constant.
By utilizing a discrete Markov Random Field to model the label field X, the priori probability $p(x_s \mid x_{N_s})$ can be calculated.
Here, $\beta_1$ and $\beta_2$ are set empirically, with

$t_1(x) = \beta_1 \cdot (\text{number of horizontal and vertical neighbors with a different label})$, and

$t_2(x) = \beta_2 \cdot (\text{number of diagonal neighbors with a different label})$.
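Under these clique potentials, the unnormalized prior for a candidate label at one site can be evaluated as in the following sketch. The function name and the $\beta$ values are illustrative assumptions, not values from the disclosure:

```python
import math

def mrf_prior(label, hv_neighbors, diag_neighbors, beta1=1.0, beta2=0.5):
    """Unnormalised discrete-MRF prior exp(-(t1 + t2)) for assigning `label`
    to a pixel site, where t1 and t2 count the horizontal/vertical and
    diagonal neighbours whose labels disagree, weighted by beta1 and beta2."""
    t1 = beta1 * sum(1 for n in hv_neighbors if n != label)
    t2 = beta2 * sum(1 for n in diag_neighbors if n != label)
    return math.exp(-(t1 + t2))
```

A label agreeing with all eight neighbors receives an unnormalized prior of 1; each disagreeing neighbor multiplies the prior by $e^{-\beta_1}$ or $e^{-\beta_2}$, which favors locally smooth label fields.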
At block 324, the posterior probability is calculated as the product of the conditional probability and the prior:

$p(x_s \mid y_s, x_{N_s}) \propto p(y_s \mid x_s)\, p(x_s \mid x_{N_s})$
At block 326, a new label field $x^i$ is obtained by assigning to each pixel site s the label which maximizes the estimated conditional probability in the i-th iteration.
The relationship of horizontal, vertical, and diagonal cliques with the current pixel site s is depicted in Figure 4.
At block 328, a convergence check is performed to see if the number of pixel labels that have changed is less than a predefined number (M), or if the iteration number is greater than a predefined number of iterations (N). If so, then the method goes to block 330. Otherwise, the method iterates as shown in element 306 in Figure 3A.
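For concreteness, the GMRF-MPM iteration of blocks 304 through 328 might be sketched as below. This is a minimal illustrative sketch, not the patented implementation: the quantile-based initialization, the 4-neighborhood, the $\alpha$/$\gamma$ weights, and the strict convergence test are all simplifying assumptions.

```python
import numpy as np

def gmrf_mpm_segment(img, num_classes=4, alpha=1.0, gamma=0.5, max_iter=20):
    """Illustrative GMRF-MPM loop: estimate MoG parameters from the current
    labelling, compute posterior marginals, smooth them with a Gaussian-MRF
    style neighbourhood average, relabel, and repeat until convergence."""
    img = np.asarray(img, dtype=float)
    # Block 304: initial labelling by intensity quantiles (thresholding).
    edges = np.quantile(img, np.linspace(0.0, 1.0, num_classes + 1)[1:-1])
    labels = np.digitize(img, edges)
    for _ in range(max_iter):
        # Blocks 308-314: ML estimates of the MoG parameters per class.
        pi = np.zeros((num_classes,) + img.shape)
        for l in range(num_classes):
            members = labels == l
            if not members.any():
                continue
            mu = img[members].mean()
            var = img[members].var() + 1e-6          # guard against zero variance
            prior = members.mean()
            pi[l] = prior * np.exp(-(img - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
        # Block 316: posterior marginals, ignoring the spatial constraint.
        pi /= pi.sum(axis=0, keepdims=True) + 1e-12
        # Block 318: smoothing with the 4-neighbourhood (edge-replicated borders).
        pad = np.pad(pi, ((0, 0), (1, 1), (1, 1)), mode="edge")
        nbr = (pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1]
               + pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:])
        pi = (alpha * pi + gamma * nbr) / (alpha + 4.0 * gamma)
        # Block 326: relabel; block 328: convergence check.
        new_labels = pi.argmax(axis=0)
        if (new_labels == labels).all():
            return new_labels
        labels = new_labels
    return labels
```

On a simple two-intensity image this converges in one or two iterations; a real tissue image would typically use four classes and the feature images described above.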
At block 330, the four regions are classified into two classes: tissue and background.
Because of the variable intensity and density of cells, the image can be segmented into 4 classes (or any other desired number of classes) to capture all the tissue regions with different intensities. All tissue regions can then be combined to form a binary mask of foreground (tissue) and background. To select the tissue regions and background regions from the four segmented regions, a classification method is used. This classification method can be a simple nearest neighbor method, or a more sophisticated method, for example, a decision tree. For different kinds of tissues, the classification method may differ. For example, for tissue micro array (TMA) spots as shown in Figures 7A-7D, the tissue spots usually have round shapes and are arranged regularly on a rectangular grid, while for a tissue sample, the size and shape can vary widely.
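The class-merging of block 330 and the morphological clean-up of block 212 might be sketched as follows. The 4-neighborhood dilation and the upstream choice of which classes count as tissue are assumptions made for illustration:

```python
import numpy as np

def make_tissue_mask(labels, tissue_classes, dilate_iters=1):
    """Combine the segmented classes deemed to be tissue into a binary mask,
    then dilate it (4-neighbourhood) so the mask generously covers the tissue
    and provides a good initial contour for the active contours step.
    Which classes are tissue is assumed decided upstream (e.g., by a
    nearest-neighbour classifier)."""
    mask = np.isin(labels, list(tissue_classes))
    for _ in range(dilate_iters):
        pad = np.pad(mask, 1, mode="constant")       # pad with background
        mask = (pad[1:-1, 1:-1] | pad[:-2, 1:-1] | pad[2:, 1:-1]
                | pad[1:-1, :-2] | pad[1:-1, 2:])
    return mask
```

A production implementation would also remove small isolated components and fill holes, as described for block 212.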
Figures 5A - 5D illustrate an exemplary classification method for tissue regions. Figure 5A depicts an example of an original image. Figure 5B depicts an image that has been segmented into four regions according to an embodiment of the invention. Figure 5C depicts a binary mask comprising a foreground (tissue) region and a background region according to an embodiment of the invention. Figure 5D depicts the mask after a few morphological operations, prior to running the active contours step.
Referring again to Figure 2, according to an exemplary embodiment of the invention, an optional active contours step 214 may be conducted. The objective of the active contour step is to evolve a curve, subject to constraints from a given image, to detect the object in that image. For example, starting with a curve around the object to be detected, the curve moves toward its interior normal and stops on the boundary of the object. The strength attracting the curve to the true boundary of the object of interest comes from minimizing two kinds of energies: inner energy and outer energy. Inner energy, such as curvature, curve length and so on, adds stiffness, smoothness and other desired characteristics to the resulting curve. The outer energy comes from the image, which attracts the curve to the true boundary of the object. According to exemplary embodiments of the invention, a level set based active contour method is used to refine the contour found in step 330. The method can utilize an implementation of the algorithm set forth in Chan and Vese, "Active Contours Without Edges," IEEE Trans. Image Processing, 10(2):266-277 (2001). It should be noted that this step is optional, depending on the desired precision of the results. This method utilizes the level set approach to find the zero level of a surface (energy surface) in order to find the boundary of the objects of interest. The surface starts with an initialization, which is obtained from the mask of block 330, and the surface is updated step by step; at the same time, the zero level set converges to the objects' boundaries. Thus, the energy surface can be defined to have zero/minimum absolute value at the boundary of the objects.
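The region-competition core of the Chan-Vese model can be illustrated with a deliberately simplified, curvature-free sketch: it alternates between estimating the mean intensities inside and outside the contour and reassigning each pixel to the closer mean. The full level set method of Chan and Vese (2001) additionally regularizes the contour with length/curvature terms, which this sketch omits.

```python
import numpy as np

def chan_vese_refine(img, mask, iters=10):
    """Curvature-free caricature of Chan-Vese region competition: starting
    from an initial binary mask, recompute the mean intensities c1 (inside)
    and c2 (outside) and reassign each pixel to the region whose mean it is
    closer to, until the mask stops changing."""
    img = np.asarray(img, dtype=float)
    mask = np.asarray(mask, dtype=bool)
    for _ in range(iters):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if (new_mask == mask).all():
            break
        mask = new_mask
    return mask
```

Because there is no smoothness term, this sketch only demonstrates how the fitting energy pulls an approximate mask onto a homogeneous object; the level set formulation is needed for smooth, noise-robust boundaries.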
The methods described herein may be executed or otherwise performed by one or a combination of various systems, components, and sub-systems, including a computer implemented system. Such a computer implemented system may incorporate one or more computer processors, computer memory, computer storage, and software. The computer implemented system may have various modules. As used herein, the term module may be understood to refer to executable software, firmware, hardware, and/or various combinations thereof. The computer implemented system may have a processor and operably coupled memory that is configured to execute one or more steps of the methods described herein. The processing to execute the method steps may be distributed over multiple computer components, such as multiple processors or even multiple computer systems. The computer implemented system may be stand-alone or communicatively coupled to a computer network, such as a local area network or the Internet. Such communicative coupling may involve a wireless or wired connection. The methods herein may be embodied in software which may be tangibly embodied in one or more non-transitory physical computer-readable media, such as, but not limited to, a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a hard drive, read only memory (ROM), random access memory (RAM), as well as other physical media capable of storing software, and/or combinations thereof.
Figure 6 depicts a system according to an exemplary embodiment of the invention. The system 600 may include an imaging device 602. The imaging device 602 may include a microscope or a camera device, such as a Complementary Metal Oxide Semiconductor (CMOS) or a Charge Coupled Device (CCD) camera. The imaging device 602 may use different imaging methods, such as white light, fluorescent light, and transmitted light, including bright-field, differential interference contrast (DIC), and phase contrast. Slides that include tissue samples may be loaded into the device 602. Several sub-units may be attached, such as an X-Y stage and a Z stage; the X-Y stage is used to move samples into the field of view, while the Z stage is typically used to bring the sample into focus. The imaging device 602 may then proceed to generate images of the slides according to the methods described herein. The imaging device 602, according to an exemplary embodiment of the invention, includes a processor, a memory, an input device such as a keyboard and mouse, and a display. The imaging device 602 may be programmed with software or firmware to execute the exemplary methods described herein. The imaging device 602 may include a graphical user interface on the display, through which a user may interact with the imaging device 602, for example to view and manipulate images. As shown in Figure 6, the imaging device 602 may have one or more peripheral devices 604 communicatively coupled to it. For example, the imaging device may be equipped with robots for loading microplates or slides, so that the system can automatically image a large number of microplates or slides under the same experimental conditions.
A liquid handling sub-unit and an environment control sub-unit can also be attached to the imaging device. The liquid handling sub-unit is typically used to add a drug to the sample during imaging, while the environment control sub-unit may be used to control the environment surrounding the sample, such as temperature, humidity, and the concentration of CO2. The device may also include an attached barcode scanner to identify the slides, as well as environmental controls or fluidic handling apparatus to manipulate the staining within the imaging device, according to some embodiments of the invention. The imaging device and its sub-units make up the body of the platform, which may be called the hardware of the system; this hardware is controlled by a computer through computer programs, which are called software.
Also shown in Figure 6 is a network 606. The network 606 may be a computer based network. The network 606 may be a local area network or its equivalent. The network 606 may include, or be connected to, the Internet. The network 606 may include one or more wireless networks.
The imaging device 602 may be connected to other devices through the network 606. For example, images generated by the imaging device 602 may be transmitted across the network 606 to another processing device 608. The device 608 may perform additional processing of the images, or the device 608 may perform the methods described herein on the images according to some embodiments of the invention. The device 608 may be a computing device having one or more processors and/or memory contained therein. The device 608 may provide control for and interaction with the imaging device 602. To allow a user to interact with the method, the device 608 may include one or more displays, a graphical user interface, and other input devices. It should be appreciated that while a single device 608 is depicted, device 608 may consist of multiple devices.
As described above, the system 600 may execute the methods according to exemplary embodiments of the invention. The system 600 may contain executable code embodied in a non-transitory medium to cause the imaging device 602 and/or the device 608 to execute the methods of Figures 2 and 3. The method may be performed entirely on either the imaging device 602 or the device 608, or the method steps may be distributed between the imaging device 602 and the device 608 according to exemplary embodiments of the invention.
The Figures depict various functionality and features associated with exemplary embodiments of the invention. While a single illustrative block, sub-system, device, or component may be shown, these illustrative blocks, sub-systems, devices, or components may be multiplied for various applications or different application environments. In addition, the blocks, sub-systems, devices, or components may be further combined into a consolidated unit. Further, while a particular structure or type of block, sub-system, device, or component is shown, this structure is meant to be exemplary and non-limiting, as other structure may be able to be substituted to perform the functions described.
While exemplary embodiments of the present invention have been shown and described, it will be apparent to those skilled in the art that changes and modifications may be made without departing from the teachings of the invention. The subject matter set forth in the foregoing description and accompanying drawings is offered by way of illustration only and not as a limitation. The actual scope of the invention is intended to be defined in the following claims.

Claims

What Is Claimed Is:
1. A computer implemented method comprising: obtaining a microscopic image of a tissue sample using at least one processor; segmenting the image with the processor into a plurality of regions using either (a) a maximum posterior marginal probability (MPM) process with a Gaussian Markov Random Field (GMRF), or (b) a maximum a posteriori (MAP) estimation with a discrete Markov Random Field (MRF); and classifying the plurality of regions into a background region and a tissue region to form a binary mask.
2. The method of claim 1, further comprising applying an active contour method to the binary mask.
3. The method of claim 1, further comprising: extracting features from the image to form a multispectral image.
4. The method of claim 1, further comprising: performing morphological operations on the binary mask.
5. The method of claim 4, wherein the morphological operations remove isolated regions, fill in holes, and dilate the mask to cover the entire region depicted in the image.
6. The method of claim 1, wherein the image is a grey scale microscopic image or a multichannel microscopic image.
7. The method of claim 2, wherein the active contour method is a level set based active contour method.
8. The method of claim 1, further comprising preprocessing the image, wherein the preprocessing comprises calculation of local variance and texture features on each pixel site.
9. The method of claim 2, wherein application of the active contour method to the binary mask produces a tissue boundary, and the method further comprises performing a subsequent higher resolution scan only on tissue regions.
10. The method of claim 2, further comprising providing a slide template to enable tissues at different locations on the slide to be labeled with distinct tags.
11. A method, comprising: preparing an input image; obtaining an initial estimate of a label field by separating the image into a plurality of regions; using the estimated label field to calculate a maximum likelihood estimate of a mixture of Gaussians in the input image; applying, by a computer processor, mathematical algorithms to the label field; applying, by the computer processor, an algorithm to determine spatial constraints of pixels in the input image; and classifying, by the computer processor, the plurality of regions into two classes based on the results of the algorithm.
12. The method of claim 11, wherein the input image is of one or more tissue regions.
13. The method of claim 11, wherein the two classes comprise: tissue and background.
14. The method of claim 11, wherein the algorithm comprises: a discrete Markov Random Field (MRF) - maximum a posteriori estimation (MAP) algorithm; or a Gaussian Markov Random Field (GMRF) - maximum posterior marginal probability estimation (MPM) algorithm.
15. The method of claim 11, further comprising: iterating the application of the mathematical algorithm to the label field.
16. The method of claim 15, wherein convergence is defined by at least one of a predefined number of iterations or a maximum number of changed pixels.
17. A system, comprising: a processor; and a memory comprising computer-readable instructions which, when executed by the processor, cause the processor to perform the steps comprising: obtaining an image of a tissue using the processor; segmenting the image with the processor into a plurality of regions using either (a) a maximum posterior marginal probability (MPM) process with a Gaussian Markov Random Field (GMRF), or (b) a maximum a posteriori (MAP) estimation with a discrete Markov Random Field (MRF); and classifying the plurality of regions into a background region and a tissue region to form a binary mask.
18. The system of claim 17, wherein the processor is further programmed to apply an active contour method to the binary mask.
19. The system of claim 17, wherein the processor is further programmed to: obtain an initial estimate of a label field by separating the image into the plurality of regions; and use the estimated label field to calculate a maximum likelihood estimate of a mixture of Gaussians in the image.
20. A system for detecting and scanning tissue slides, comprising: a storage device for at least temporarily storing an image of a slide under test; a processor; and a memory comprising computer-readable instructions which when executed by the processor cause the processor to perform the steps comprising: applying a tissue detection algorithm to locate tissue regions of interest.
21. The system of claim 20, further comprising: labeling the detected tissue spots to be omitted from or included in a specific scanning round based on their association with specific biomarkers.
22. The system of claim 20, further comprising: a microscope imaging device.
23. The system of claim 22, wherein the imaging device takes a plurality of images of the slides at various levels of resolution.
24. The system of claim 20, further comprising: applying a real-time tissue detection analysis to a low resolution image to initiate acquisition of one or more high resolution images, at least in part by identifying the tissue regions and taking high resolution images only of desired tissue regions.
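Claims 1, 11, and 16 recite MAP estimation with a discrete MRF, an initial label field, maximum likelihood estimation of a mixture of Gaussians, and convergence defined by an iteration limit or a maximum number of changed pixels. The claims do not mandate any particular solver; as an illustrative sketch only, those steps can be approximated with a synchronous variant of iterated conditional modes (ICM). The function name, the 4-neighbour prior, and the parameter values below are assumptions:

```python
import numpy as np

def icm_segment(image, beta=1.5, n_iter=10, max_changed=0):
    """Two-class MAP segmentation with a discrete MRF prior, solved by
    a synchronous ICM sweep.  Labels: 0 = background, 1 = tissue."""
    # (1) Initial label field from a global threshold (Otsu would also do).
    labels = (image > image.mean()).astype(int)
    for _ in range(n_iter):
        # (2) ML estimates of the two Gaussian class models.
        mu = np.array([image[labels == k].mean() for k in (0, 1)])
        var = np.array([image[labels == k].var() + 1e-8 for k in (0, 1)])
        # (3) Negative log-likelihood of each pixel under each class.
        ll = ((image[..., None] - mu) ** 2 / (2 * var)
              + 0.5 * np.log(2 * np.pi * var))
        # (4) Smoothness prior: count of 4-neighbours disagreeing with k.
        p = np.pad(labels, 1, mode='edge')
        neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                          p[1:-1, :-2], p[1:-1, 2:]])
        disagree = np.stack([(neigh != k).sum(axis=0) for k in (0, 1)],
                            axis=-1)
        # (5) MAP update: minimise data term plus weighted prior term.
        new = np.argmin(ll + beta * disagree, axis=-1)
        changed = int((new != labels).sum())
        labels = new
        if changed <= max_changed:  # convergence test as in claim 16
            break
    return labels
```

The beta parameter weights the spatial constraint against the Gaussian data term: larger values yield smoother masks at the cost of fine boundary detail.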
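Claims 4 and 5 recite morphological operations that remove isolated regions, fill in holes, and dilate the binary mask. A minimal sketch of such post-processing using SciPy follows; the min_size and dilate_iter values are illustrative choices, not taken from the patent:

```python
import numpy as np
from scipy import ndimage as ndi

def clean_mask(mask, min_size=64, dilate_iter=2):
    """Morphological post-processing of a binary tissue mask:
    remove small isolated regions, fill interior holes, then dilate
    so the mask fully covers the tissue boundary."""
    lab, _ = ndi.label(mask)              # connected components
    sizes = np.bincount(lab.ravel())      # pixel count per component
    keep = sizes >= min_size
    keep[0] = False                       # label 0 is the background
    mask = keep[lab]                      # drop isolated specks
    mask = ndi.binary_fill_holes(mask)    # fill interior holes
    return ndi.binary_dilation(mask, iterations=dilate_iter)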
PCT/US2013/033415 2012-03-30 2013-03-22 Detection of tissue regions in microscope slide images WO2013148485A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261618141P 2012-03-30 2012-03-30
US61/618,141 2012-03-30

Publications (2)

Publication Number Publication Date
WO2013148485A2 true WO2013148485A2 (en) 2013-10-03
WO2013148485A3 WO2013148485A3 (en) 2014-02-06

Family

ID=48048284

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/033415 WO2013148485A2 (en) 2012-03-30 2013-03-22 Detection of tissue regions in microscope slide images

Country Status (1)

Country Link
WO (1) WO2013148485A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824268A (en) * 2014-02-08 2014-05-28 江西赛维Ldk太阳能高科技有限公司 Crystal grain image edge connecting method and apparatus
WO2015089434A1 (en) * 2013-12-12 2015-06-18 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Automated epithelial nuclei segmentation for computational disease detection algorithms
CN105528784A (en) * 2015-12-02 2016-04-27 沈阳东软医疗系统有限公司 Method and device for segmenting foregrounds and backgrounds
CN106780376A (en) * 2016-12-07 2017-05-31 中国农业科学院农业信息研究所 The background image dividing method of partitioning algorithm is detected and combined based on conspicuousness
WO2017206615A1 (en) * 2016-05-30 2017-12-07 武汉沃亿生物有限公司 Method of automatically modifying imaging range in biological sample microscopic imaging
CN116609332A (en) * 2023-07-20 2023-08-18 佳木斯大学 Novel tissue embryo pathological section panorama scanning system

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
BESAG, J.: "Spatial Interaction and the Statistical Analysis of Lattice Systems", J. ROY. STAT. SOC., vol. 36, 1974, pages 192 - 236
CHAN; VESE: "Active Contours Without Edges", IEEE TRANS. IMAGE PROCESSING, vol. 10, no. 2, 2001, pages 266 - 277
CROSS, G. R.; JAIN, A. K.: "Markov Random Field Texture Models", IEEE TRANS. PATTERN ANAL. MACHINE INTELL., vol. 5, no. 1, 1983, pages 25 - 39
GRAY, R. M.; LINDE, Y.: "Vector Quantizers and Predictive Quantizers for Gauss-Markov Sources", IEEE TRANS. COMMUN., vol. COM-30, no. 2, 1982, pages 380 - 389
GEMAN, S.; GEMAN, D.: "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images", IEEE TRANS. PATTERN ANAL. MACHINE INTELL., vol. 6, no. 6, 1984, pages 721 - 741
MARROQUIN, J.; MITTER, S.; POGGIO, T.: "Probabilistic Solution of Ill-Posed Problems in Computational Vision", IEEE TRANS. ON IMAGE PROCESSING, vol. 3, 1994, pages 162 - 177
TOU, J. T.; GONZALEZ, R. C.: "Pattern Recognition Principles", 1974, ADDISON-WESLEY

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015089434A1 (en) * 2013-12-12 2015-06-18 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Automated epithelial nuclei segmentation for computational disease detection algorithms
US9626583B2 (en) 2013-12-12 2017-04-18 University of Pittsburg—Of the Commonwealth System of Higher Education Automated epithelial nuclei segmentation for computational disease detection algorithms
CN103824268A (en) * 2014-02-08 2014-05-28 江西赛维Ldk太阳能高科技有限公司 Crystal grain image edge connecting method and apparatus
CN105528784A (en) * 2015-12-02 2016-04-27 沈阳东软医疗系统有限公司 Method and device for segmenting foregrounds and backgrounds
CN105528784B (en) * 2015-12-02 2019-01-25 沈阳东软医疗系统有限公司 A kind of method and apparatus of prospect background segmentation
WO2017206615A1 (en) * 2016-05-30 2017-12-07 武汉沃亿生物有限公司 Method of automatically modifying imaging range in biological sample microscopic imaging
US10365207B2 (en) 2016-05-30 2019-07-30 Wuhan Oe-Bio Co., Ltd. Method of automatically modifying imaging range in biological sample microscopic imaging
CN106780376A (en) * 2016-12-07 2017-05-31 中国农业科学院农业信息研究所 The background image dividing method of partitioning algorithm is detected and combined based on conspicuousness
CN116609332A (en) * 2023-07-20 2023-08-18 佳木斯大学 Novel tissue embryo pathological section panorama scanning system
CN116609332B (en) * 2023-07-20 2023-10-13 佳木斯大学 Novel tissue embryo pathological section panorama scanning system

Also Published As

Publication number Publication date
WO2013148485A3 (en) 2014-02-06

Similar Documents

Publication Publication Date Title
JP7201681B2 (en) Systems and methods for single-channel whole-cell segmentation
US10706535B2 (en) Tissue staining quality determination
Ruusuvuori et al. Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images
Sommer et al. Learning-based mitotic cell detection in histopathological images
Veta et al. Automatic nuclei segmentation in H&E stained breast cancer histopathology images
EP2070047B1 (en) Automated segmentation of image structures
EP2109856B1 (en) System and method for cell analysis in microscopy
Lindblad et al. Image analysis for automatic segmentation of cytoplasms and classification of Rac1 activation
Moles Lopez et al. An automated blur detection method for histological whole slide imaging
CA3138959A1 (en) Image diagnostic system, and methods of operating thereof
WO2013148485A2 (en) Detection of tissue regions in microscope slide images
US20060204953A1 (en) Method and apparatus for automated analysis of biological specimen
US8200013B2 (en) Method and device for segmenting a digital cell image
Nateghi et al. Maximized inter-class weighted mean for fast and accurate mitosis cells detection in breast cancer histopathology images
US11176412B2 (en) Systems and methods for encoding image features of high-resolution digital images of biological specimens
US20130226548A1 (en) Systems and methods for analysis to build predictive models from microscopic cancer images
Kårsnäs et al. Learning histopathological patterns
US20230306606A1 (en) Methods and systems for providing training data sets for training a machine-learned segmentation algorithm for use in digital pathology
Friebel et al. Guided interactive image segmentation using machine learning and color-based image set clustering
EP2875488B1 (en) Biological unit segmentation with ranking based on similarity applying a shape and scale descriptor
EP4235599A1 (en) Annotation refinement for segmentation of whole-slide images in digital pathology
Zupanc et al. Markov random field model for segmenting large populations of lipid vesicles from micrographs
CN116580397B (en) Pathological image recognition method, device, equipment and storage medium
Restif Segmentation and evaluation of fluorescence microscopy images
Santamaria-Pang et al. Epithelial cell segmentation via shape ranking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13714485

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 13714485

Country of ref document: EP

Kind code of ref document: A2