WO2020037255A1 - Automatic identification and analysis of a tissue sample

Automatic identification and analysis of a tissue sample

Info

Publication number
WO2020037255A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
classifier
computing device
storage medium
image
Prior art date
Application number
PCT/US2019/046897
Other languages
English (en)
Inventor
Susan SHEEHAN
Ronny KORSTANJE
Original Assignee
The Jackson Laboratory
Priority date
Filing date
Publication date
Application filed by The Jackson Laboratory filed Critical The Jackson Laboratory
Publication of WO2020037255A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/69 - Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695 - Preprocessing, e.g. image segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30024 - Cell structures in vitro; Tissue sections in vitro
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30084 - Kidney; Renal

Definitions

  • Conventionally, when studying the histological structures of tissues, a histologist obtains a tissue sample and manually analyzes the sample using a microscope.
  • a glomerulus is a cluster of capillaries located in kidney tissue.
  • provided herein, in some aspects, is a system and method for automatically analyzing a region of interest within a tissue sample.
  • Some embodiments are directed to a method of automatically analyzing a region of interest within a tissue sample.
  • the method comprises obtaining an input image of the tissue sample, the input image comprising a plurality of pixels; applying a classifier to the input image to identify a plurality of candidate pixels based on the plurality of pixels of the input image; applying at least one rule to the plurality of candidate pixels to determine a region of interest; and determining at least one feature within the region of interest.
  • Some embodiments are directed to a computing device for automatically analyzing a region of interest within a tissue sample.
  • the computing device comprises a memory configured to store a classifier and an input image of the tissue sample; and a processor communicatively coupled with the memory and configured to receive the input image; apply the classifier to the input image to identify at least one candidate region of the input image; apply at least one rule to the at least one candidate region to determine a region of interest; and determine at least one feature within the region of interest.
  • Some embodiments are directed to at least one non-transitory storage medium encoded with executable instructions that, when executed by at least one processor, cause the at least one processor to carry out a method of automatically analyzing a region of interest within a tissue sample.
  • the method comprises: receiving an input image of the tissue sample; applying a classifier to the input image to identify at least one candidate region of the input image; applying at least one rule to the at least one candidate region to determine a region of interest; and determining at least one feature within the region of interest.
  • Some embodiments are directed to a method of training a classifier for automatically determining a region of interest within a tissue sample.
  • the method comprises: obtaining a plurality of training images, each of the plurality of training images comprising a plurality of pixels and one or more predetermined regions of interest; and training a machine learning classifier using the plurality of training images, wherein the classifier uses a plurality of parameters.
  • Some embodiments are directed to a computing device for training a classifier for automatically determining a region of interest within a tissue sample.
  • the computing device comprises: a memory configured to store a classifier and a plurality of training images; and a processor communicatively coupled with the memory and configured to: obtain the plurality of training images, each of the plurality of training images comprising a plurality of pixels and one or more predetermined regions of interest; and train the classifier using the plurality of training images, wherein the classifier uses a plurality of parameters.
  • Some embodiments are directed to at least one non-transitory storage medium encoded with executable instructions that, when executed by at least one processor, cause the at least one processor to carry out a method of training a classifier for automatically determining a region of interest within a tissue sample.
  • the method comprises: obtaining a plurality of training images, each of the plurality of training images comprising a plurality of pixels and one or more predetermined regions of interest; and training a machine learning classifier using the plurality of training images, wherein the classifier uses a plurality of parameters.
  • FIG. 1 is a flow chart of a process for automatically analyzing a tissue sample in accordance with some embodiments.
  • FIG. 2 is a flow chart of a process for obtaining an input image of a tissue sample in accordance with some embodiments.
  • FIG. 3 is a flow chart of a process for applying at least one rule to determine a region of interest in accordance with some embodiments.
  • FIG. 4 is a flow chart of a process for determining at least one feature within the region of interest in accordance with some embodiments.
  • FIG. 5 is a schematic diagram illustrating the flow of image processing in accordance with some embodiments.
  • FIG. 6A is a bar graph illustrating the precision rate of the automatic analysis process performed on a validation set in accordance with some embodiments.
  • FIG. 6B is a bar graph illustrating the recall rate of the automatic analysis process performed on a validation set in accordance with some embodiments.
  • FIG. 6C is a bar graph illustrating the F-measure of the automatic analysis process performed on a validation set in accordance with some embodiments.
  • FIG. 6D is a bar graph illustrating the precision rate of the automatic analysis process performed on a divergent data set in accordance with some embodiments.
  • FIG. 6E is a bar graph illustrating the recall rate of the automatic analysis process performed on a divergent data set in accordance with some embodiments.
  • FIG. 6F is a bar graph illustrating the F-measure of the automatic analysis process performed on a divergent data set in accordance with some embodiments.
  • FIG. 7 illustrates five different input images, the results of the classification, the results of the region of interest determination, and the processed images associated with mesangial matrix (MME) analysis, nuclei number analysis, and capillary openness analysis in accordance with some embodiments.
  • FIG. 8A is a scatter plot illustrating positive correlation for validation images analyzed automatically according to some embodiments in relation to glomerular tufts outlined manually.
  • FIG. 8B is a scatter plot illustrating positive correlation for divergent data set images analyzed automatically according to some embodiments in relation to glomerular tufts outlined manually.
  • FIG. 9A is a scatter plot illustrating a correlation of MME between glomeruli scored manually and glomeruli analyzed automatically in accordance with some embodiments.
  • FIG. 9B is a scatter plot illustrating a correlation between a semi-quantitative method with manual tracing of glomerular tufts and the fully automated method in the divergent data set.
  • FIG. 9C is a scatter plot illustrating that errors in automatically identifying the glomeruli area according to some embodiments do not correlate with disease score.
  • FIG. 10A is a scatter plot illustrating MME without proliferation as measured by increased cell number.
  • FIG. 10B is a scatter plot illustrating that as MME increases, the capillary openness decreases.
  • FIG. 11 is an image showing an example of a glomerulus identified and divided along with some false positives, and an example of a glomerulus that is missed, in a human data set.
  • FIG. 12 is a schematic diagram illustrating components of a computer-based system according to some embodiments.
  • the inventors have recognized and appreciated that conventional, manual techniques for analyzing tissue samples are slow, laborious, and prone to error. Moreover, manual identification and analysis may not identify all areas of interest within a tissue sample. While recent advances in automated analysis have resulted in high-throughput, accurate histological data acquisition, automated identification and analysis of kidney tissue has not been successful. In kidney research, the inventors have recognized and appreciated that large variation in the size, texture, and color of structures such as glomeruli, along with glomeruli's similarity to other renal structures, has made automatic identification and analysis of glomeruli in kidney tissue samples difficult for both computers and untrained humans.
  • classifiers that apply machine learning techniques can be trained to accurately classify kidney tissue and identify regions of interest that correspond to glomeruli. Consequently, some embodiments relate to automated digital renal image analysis through murine glomerular tuft identification and quantification of several specific phenotypes within the glomerulus. The resulting glomerular tuft classification technique is fast, accurate and includes an expandable workflow for phenotype quantification within glomeruli. Some embodiments, if accessible to people without specialized training, would allow for easier adoption by other research groups and allow the use of classical statistical analyses, ultimately improving the speed and accuracy of kidney research.
  • immunohistological stains e.g., the glomerular marker desmin
  • Techniques described herein may stain the tissue sample using Periodic Acid Schiff (PAS), which is an economical, commonly used stain for clinical assessment of multiple phenotypes.
  • a system and method for automatically analyzing a region of interest within a tissue sample is provided herein.
  • Embodiments described herein relate to automated analysis techniques for kidney tissue, but embodiments are not so limited.
  • a region of interest within an input image is automatically identified.
  • features of the input image within that region of interest are analyzed.
  • each pixel of an input image is classified as either a candidate pixel or not a candidate pixel.
  • the classification of the pixels results in a binary image where each pixel of the binary image has one of two values based on whether the pixel is a candidate pixel.
  • the binary image is an intermediate result that itself is likely not sufficient to identify the region of interest.
  • the binary image is further processed using one or more rules to create a region of interest.
  • the resulting region of interest is a continuous group of pixels of the input image.
  • the region of interest is then analyzed by evaluating properties of the pixels of the input image within the identified region of interest.
  • the classification of the pixels of the input image is performed using a classifier that is trained using multiple images that are analyzed manually by a human to identify regions of interest.
  • the classifier is a machine learning method that uses multiple parameters to perform a classification. The parameters used may be selected based on the type of tissue sample and/or the cell type being identified in the sample.
  • a process 100 for automatically analyzing a tissue sample includes at least the illustrated acts.
  • the process 100 may be, for example, performed by a computing device that executes instructions encoded on a non-transitory storage medium.
  • an input image of a tissue sample is obtained.
  • the input image may be obtained from any suitable source.
  • the input image may be captured by a microscope operated by a human operator at the same location as the computing device that is implementing the automated analysis.
  • the input image may be captured by a remotely located human, and the input image may be obtained by the computing device via a connection to a computer network or via a storage device, such as a hard drive or flash memory.
  • the act of obtaining the input image of the tissue sample includes at least the illustrated acts.
  • a tissue sample is obtained.
  • the tissue sample may be animal tissue, such as tissue from a mouse, or human tissue.
  • the tissue sample may be a kidney tissue sample.
  • the tissue sample may be obtained by a human and is not obtained by the computing device performing the automated analysis.
  • the human may stain the tissue sample.
  • the stain may be Periodic Acid Schiff (PAS).
  • a preliminary image of the tissue sample is captured using a microscope. For example, a Hamamatsu NanoZoomer 2.0HT microscope may be used to capture the preliminary image.
  • the input image is generated from the preliminary image.
  • the preliminary image is divided into multiple smaller image files. The smaller image files may have fewer pixels than the preliminary image.
  • the tissue sample may be placed in a paraformaldehyde in phosphate buffered saline for a period of time (e.g., 24 hours), embedded in paraffin, and sliced into sections (e.g., 2-4 mm thick).
  • the process 100 continues at act 112 by applying a classifier to the input image to identify candidate pixels.
  • the classifier classifies each pixel of the input image.
  • the classification may be based on properties of the pixel being classified, properties of neighboring pixels, and/or properties of any other pixel or group of pixels of the input image.
  • the classifier may use a plurality of parameters to perform the classification.
  • the parameters used by the classifier may be based on the type of tissue sample being analyzed and/or the cell type of interest.
  • the plurality of parameters include (i) a parameter based on pixel color, (ii) a parameter based on pixel intensity, (iii) a parameter based on whether a pixel belongs to an edge, (iv) a parameter based on whether a pixel is indicative of a particular texture, (v) a parameter based on the characteristics of an individual pixel, and (vi) a parameter based on the characteristics of a plurality of neighboring pixels.
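The parameter list above is abstract. As an illustrative sketch only (the function name and the specific feature choices are assumptions, not the patent's implementation), a per-pixel feature stack touching categories (i)-(vi) might be assembled with NumPy as follows:

```python
import numpy as np

def pixel_features(img):
    """Build a per-pixel feature stack from a float RGB image of shape (H, W, 3).

    Features loosely correspond to the parameter types listed above:
    color channels (i), intensity (ii, v), an edge measure (iii), and a
    neighborhood statistic as a crude texture proxy (iv, vi).
    """
    intensity = img.mean(axis=2)              # per-pixel intensity
    gy, gx = np.gradient(intensity)
    edge = np.hypot(gx, gy)                   # gradient magnitude as edge proxy
    # 3x3 box-filter mean of intensity as a neighborhood/texture feature
    padded = np.pad(intensity, 1, mode="edge")
    h, w = intensity.shape
    neigh = sum(padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)) / 9.0
    # dstack promotes 2-D arrays to single channels: result is (H, W, 6)
    return np.dstack([img, intensity, edge, neigh])
```

Such a stack would then be fed, pixel by pixel, to whatever classifier is used (e.g., a random forest, as discussed below).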
  • the classifier is a trained classifier using machine learning techniques.
  • the classifier may be a random forest classifier.
  • a random forest classifier may be implemented using Ilastik software (www.ilastik.org).
  • the number of parameters used by the classifier corresponds to the number of branches used in decision trees of the random forest classifier.
  • the classifier is trained using annotated training images.
  • a processor implementing the training algorithm may obtain multiple training images, each of the training images including a plurality of pixels and one or more predetermined regions of interest.
  • the classifier may be trained using the plurality of training images using multiple parameters, such as the above-mentioned parameters.
  • a binary image is an image for which each pixel has one of two values, e.g., zero or one.
  • the pixels identified as candidate pixels may be set to a first value and the pixels not identified as candidate pixels may be set to a second value.
  • the binary image may be a black and white image, with no grey scale values.
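A minimal sketch of producing such a binary image from per-pixel classifier output follows; the 0/255 black-and-white convention matches the description above, while the probability-map interface and the 0.5 cutoff are illustrative assumptions:

```python
import numpy as np

def to_binary(prob_map, threshold=0.5):
    """Convert per-pixel candidate probabilities into a binary image.

    Candidate pixels are set to a first value (255, white) and
    non-candidate pixels to a second value (0, black), with no
    grey-scale values in between.
    """
    return np.where(prob_map >= threshold, 255, 0).astype(np.uint8)
```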
  • At act 118 at least one rule is applied to the candidate pixels to determine a region of interest.
  • the at least one rule is applied to the binary image. Applying the at least one rule may result in a single contiguous group of pixels that is a region of interest.
  • the region of interest may be considered a region that is automatically identified as corresponding to a particular cell type. For example, if the tissue sample is a kidney tissue sample, the region of interest may correspond to a region that is identified as a glomerulus.
  • the act of applying the at least one rule includes at least the illustrated acts.
  • a blurring rule is applied.
  • the blurring rule may include blurring the binary image using, for example, a Gaussian blur filter.
  • a size constraint rule is applied. For example, contiguous groups of pixels that are associated with candidate pixels in the binary image may be excluded if the number of pixels in the group is below a threshold number of pixels. Excluding the group of pixels may include changing those pixels from a first value (e.g., the value associated with candidate pixels) in the binary image to a second value (e.g., the value associated with pixels that are not candidate pixels).
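One possible reading of the size constraint rule, sketched with a simple 4-connected component labeller (the function names, the connectivity choice, and the BFS approach are assumptions; production code would typically use a library labelling routine):

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """Label 4-connected groups of nonzero pixels in a binary mask."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                count += 1
                labels[sy, sx] = count
                q = deque([(sy, sx)])
                while q:  # breadth-first flood fill of this group
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

def apply_size_rule(binary, min_pixels):
    """Exclude contiguous candidate groups smaller than min_pixels by
    changing their pixels from the candidate value to 0 (non-candidate)."""
    labels, n = label_regions(binary)
    out = binary.copy()
    for lbl in range(1, n + 1):
        if np.count_nonzero(labels == lbl) < min_pixels:
            out[labels == lbl] = 0
    return out
```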
  • a shape constraint rule is applied. For example, contiguous groups of pixels that are associated with candidate pixels in the binary image may be excluded if the shape of the group is not similar to a reference shape.
  • a similarity measure may be used to determine how similar the group of pixels is to the reference shape.
  • a best fit technique may be used to identify an example reference shape that best fits the group of pixels.
  • the overlap between the reference shape and the group of pixels may be used as a similarity measure.
  • the reference shape may be a circle.
  • An example circle may be constructed that best fits the group of pixels.
  • the similarity measure may be a ratio of the number of pixels within the example circle to the total number of pixels in the group of pixels.
  • a group of pixels that has a similarity measure less than 0.2, for example, may be excluded. Excluding the group of pixels may include changing those pixels from a first value (e.g., the value associated with candidate pixels) in the binary image to a second value (e.g., the value associated with pixels that are not candidate pixels).
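The circle-based similarity measure above might be sketched as follows, using an equal-area circle centred on the group centroid as the best-fit circle (this particular fitting choice, and the function name, are assumptions):

```python
import numpy as np

def circle_similarity(group_ys, group_xs):
    """Fraction of a pixel group lying inside an equal-area circle
    centred on the group centroid. A compact, roughly circular group
    scores near 1; an elongated group scores low and could be excluded
    (e.g., below a 0.2 cutoff)."""
    ys = np.asarray(group_ys, dtype=float)
    xs = np.asarray(group_xs, dtype=float)
    n = len(ys)
    cy, cx = ys.mean(), xs.mean()
    r = np.sqrt(n / np.pi)                       # radius of circle with area n
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return float(np.count_nonzero(d2 <= r * r) / n)
```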
  • a single contiguous region of interest may be formed from the candidate pixels. While act 316 is illustrated as a separate step, the combination of act 310, act 312, and act 314 may result in a single contiguous group of pixels without further processing. If a separate action is needed to make the group of pixels contiguous, this may be done by, for example, changing one or more pixels from the second value (e.g., the value associated with pixels that are not candidate pixels) in the binary image to the first value (e.g., the value associated with candidate pixels). For example, non-candidate pixels may separate two groups of pixels that are not contiguous. One or more of the non-candidate pixels separating the two groups may be changed to candidate pixels to form a single, contiguous group of pixels.
  • FIG. 3 illustrates the act 118 of applying the at least one rule as including four successive actions
  • the illustrated process is only one example.
  • one, two or three of the illustrated actions may be performed.
  • the order in which the actions are performed may be different.
  • the blurring rule may be applied after the size constraint rule.
  • the rules are applied in succession such that a first rule is applied to the original binary image resulting in a processed binary image and a second rule is applied to the processed binary image, not the original binary image.
  • the process 100 continues at act 120 by determining at least one feature within the region of interest. In some embodiments this determination is made by processing the portion of the input image that corresponds to the region of interest determined at act 118.
  • the act of determining at least one feature within the region of interest 120 includes at least the illustrated acts.
  • mesangial matrix expansion (MME) is determined.
  • determination of MME may be based on the saturation of pixels of the input image within the region of interest.
  • the input image may be separated into hue, brightness and saturation channels.
  • a range of saturation values may be set to correspond to MME.
  • pixels with saturation values between 80 and 255 may correspond to MME.
  • a number of nuclei is determined.
  • determination of the number of nuclei may be based on the hue of pixels within the region of interest.
  • the input image may be separated into hue, brightness and saturation channels.
  • a range of hue values may be set to correspond to nuclei.
  • pixels with hue values between 0 and 180 may correspond to nuclei.
  • the number of independent groups of pixels that correspond with the specified hue values corresponds to the number of nuclei.
  • capillary openness is determined.
  • determination of the capillary openness may be based on the brightness of pixels within the region of interest.
  • the input image may be separated into hue, brightness and saturation channels.
  • a range of brightness values may be set to correspond to open regions.
  • pixels with brightness values between 215 and 255 may correspond to open regions. Then, the number of pixels that correspond with the specified brightness values can be used as a measure of capillary openness.
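The three channel-threshold determinations can be sketched together. The thresholds are those stated above; the function name and the mask-based interface are assumptions, and turning nuclei pixels into a nuclei count would additionally require connected-component grouping, which is omitted here:

```python
import numpy as np

def score_region(hue, sat, bright, roi_mask):
    """Apply the stated channel thresholds inside a region-of-interest mask.

    hue, sat, bright: uint8 channel arrays (0-255) from the HSB-separated
    input image; roi_mask: boolean mask of the region of interest.
    Returns per-phenotype pixel counts: saturation 80-255 for mesangial
    matrix expansion, hue 0-180 for nuclei, brightness 215-255 for open
    capillary regions.
    """
    mme_pixels = np.count_nonzero((sat >= 80) & (sat <= 255) & roi_mask)
    nuclei_pixels = np.count_nonzero((hue >= 0) & (hue <= 180) & roi_mask)
    open_pixels = np.count_nonzero((bright >= 215) & (bright <= 255) & roi_mask)
    return {"mme": mme_pixels, "nuclei": nuclei_pixels, "open": open_pixels}
```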
  • FIG. 4 illustrates the act 120 of determining at least one feature within the region of interest as making three different determinations
  • the illustrated process is only one example. In some embodiments, one or two of the illustrated features may be determined. In some embodiments, the order in which the actions are performed may be different. For example, the actions may be performed in any order or be performed simultaneously.
  • Kidneys from C57BL6/NJ, B6N(Cg)-Far2tm2a(KOMP)Wtsi/2J, MRL/MpJ, and MRL/MpJ-Far2em1Rkor Faslpr/Kkor mice were collected. Kidneys were placed in 4% paraformaldehyde in phosphate buffered saline for 24 hours prior to paraffin embedding using a Sakura VIP processor. Sections (4 µm thick) were stained using the PAS reaction, scanned at 40x using a Hamamatsu NanoZoomer 2.0HT, and exported as TIFF image files.
  • Ilastik machine learning and segmentation software was used to implement a classifier.
  • the classifier was trained using a collection of training images with glomeruli identified manually. Specifically, the classifier was trained on 108 murine images using three classes.
  • Segment files were exported as .tiff image files from Ilastik and opened for processing using ImageJ image processing software.
  • the ImageJ software was used to implement rules using the enhance contrast and Gaussian blur tools, followed by a size filter to identify regions of interest that correspond to glomerular tufts.
  • the corresponding original image was then opened and thresholds applied to obtain disease scores.
  • Mesangial matrix expansion was scored by separating the image into hue, brightness, and saturation channels and setting a threshold in the saturation channel (between 80 and 255, depending on the batch).
  • Nuclei counts and area were obtained using the hue channel between 0 and 180: area is measured for the glomerular tuft region of interest, while cell counts are obtained by using the region of interest to crop the original image, thresholding the hue channel, and applying the 3D Objects Counter tool. Similarly, capillary openness is obtained using the region of interest in the brightness channel between 215 and 255. Precision, recall, and F-measure calculations were performed at an object level. This was accomplished by first visually counting glomeruli in an image to obtain expected results per image. The result per image after classification in Ilastik and initial ImageJ processing was superimposed upon the original image. The number of glomerular tufts identified was compared to the number identified by eye, taking into account location within the image.
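The object-level precision, recall, and F-measure described here reduce to the standard formulas; a small helper (name assumed) makes the computation explicit:

```python
def object_level_scores(true_positives, false_positives, false_negatives):
    """Object-level detection scores for glomerular tuft identification.

    true_positives: automatically identified tufts matching the manual count;
    false_positives: extra detections; false_negatives: missed tufts.
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```

For example, 8 correct detections with 2 false positives and no misses yield a precision of 0.8, a recall of 1.0, and an F-measure of about 0.89.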
  • For automatic and reliable identification of murine glomerular tufts, Ilastik machine learning software was used. Ilastik adapts to user input, training a classifier using user-provided training images and using random forest models to segment samples (i.e., isolate features of interest). Combining Ilastik with ImageJ (software for scientific image analysis) for quantification provides an easy method of segmenting and measuring digital images. Both programs have the ability to process multiple images at once, allowing for batch processing once parameters are established.
  • FIG. 5 illustrates an example of the batch processing. Scanned slides, referred to as preliminary images, are processed such that each preliminary image is saved as multiple TIFF image files that are used as input images. Each input image is processed with the classifier and by applying at least one rule to create binary images (e.g., segmented TIFF images) representing the regions of interest. Each input image is then processed together with a corresponding binary image to determine at least one feature. The result of the feature determination, in this case a quantitative determination of some value, is stored in a .CSV file.
  • the F-measure was high in our validation set (see FIG. 6C, with 95.2% of the images above 80%).
  • In FIG. 7, five segmentation examples from the validation set are shown.
  • Original tiff images obtained from the scanned mouse slides are shown in the first column.
  • Binary images, referred to as segmentation images, formed using the Ilastik classifier to identify candidate pixels possibly associated with glomeruli, are shown in the second column.
  • the glomerular outlines, i.e., region of interest, are shown in the third column.
  • the region of interest is placed over images that have been converted to black and white and adjusted for measurement of the different phenotypes: mesangial matrix (fourth column), number of nuclei (fifth column), and capillary openness (sixth column).
  • the selected original images depict a range of mesangial matrix measurements and are arranged in order of increasing categorical mesangial matrix scores.
  • FIG. 8A shows positive area correlation with glomerular tufts outlined manually when using the validation set.
  • FIG. 9B shows the correlation between a semi-quantitative method with manual tracing of glomerular tufts and the fully automated process in the divergent data set. While the area measurements are variable, the automated method still reliably scores MME. Identification of glomeruli is done in an unbiased manner, and error in the glomerular area (measured as the difference from manually traced glomeruli) does not correlate with disease score (see, e.g., FIG. 9C, showing the correlation of MME and the absolute value of residuals of glomerular area between manual tracing and fully automated output, for 1085 mice with an MRL/MpJ-Faslpr/J background (889 females and 196 males)). Correlation of disease scores between the automatic, semi-automatic, and manual methods (FIGS. 9A-B) further validates the glomerular identification data and shows that the variation in identified glomerular area does not affect the quantitative phenotype. Thus, while improved precision of the glomerular area would reduce noise, it does not impact MME determination.
  • An advantage of automated digital analysis is simultaneous assessment of multiple phenotypes.
  • the number and volume of nuclei per glomerular tuft are both clinical indications of renal pathology (e.g., membranoproliferative disease).
  • the data show MME without proliferation as measured by increased cell number (see FIG. 10A, showing the correlation between MME and the number of cells in the glomerular tuft from 338 images from 12-month-old C57BL6/NJ and B6N(Cg)-Far2tm2a(KOMP)Wtsi/2J mice). Cell number does not change with genotype for these images, ruling out proliferative disease in our animals.
  • evidence of closure of glomerular capillaries enables the possible identification of diseases such as fibrillary glomerulonephritis, thrombotic microangiopathy, cryoglobulinemia, and focal segmental glomerulosclerosis (FSGS).
  • rat and human samples were obtained. Slides from ten 25-week-old male Heterogeneous Stock rats were used. The rat images have a combined recall of 98.6%, although the precision varies drastically, with a combined precision of only 52.3%. The individual precision for each animal ranges from 92% to only 8%. Since rat kidneys are larger than mouse kidneys, the size filter was increased from 4,000 µm² in mice to 5,500 µm² in rats.
  • 89% of the glomeruli were automatically identified with varying success depending on the disease (76% for FSGS, 97% for membranous nephropathy).
  • the classifier was trained using mouse images, which have some obvious differences from the human images, including the size of glomeruli and the spacing of nuclei. The results with human samples are promising, although glomeruli were sometimes divided and a larger number of false positives were detected.
  • FIG. 11 shows two examples, one of a glomerulus identified and divided along with some false positives and one that is missed in our human data set. Nonetheless, with appropriate training on a sufficient set of human renal images and configuration refinements, the method offers potential for accelerating analysis in human research and possibly human clinical settings.
  • Some embodiments include a workflow from scanned slides to glomerular identification using a machine learning classifier that is largely capable of identifying glomeruli in a wide variety of samples and disease states as shown in the divergent data set and in the rat and human data. This is a large step forward for the field of kidney histology as renal segmentation is complex and has proven elusive. After glomerular identification, further characterization of glomerular diseases is possible using the automated quantitative analysis of features found in the region of interest. Quantitative scores for phenotypes are used (e.g., MME, number of nuclei and capillary openness).
  • Some embodiments enable the field of glomerular identification to be used in glomerular disease scoring. Despite being trained exclusively on murine glomeruli, the classifier is capable of performing across species as shown in the rat and human data.
  • the workflow presented here has several practical advantages: it is accessible without extensive image analysis training, does not require expensive software, can run on a typical personal computer, and is adaptable to other tissues and substructures.
  • Image analysis represents an emerging field of untapped data, allowing for translational work across traditional disciplines. Improved methods for image analysis allow investigators to extract more information from new and existing pathology slides. Mining image data provides a method for enriching information about physiological and
  • FIG. 12 illustrates an example implementation of a computer system 1200 that may be used in connection with any of the embodiments of the disclosure provided herein.
  • the computer system 1200 includes one or more computer hardware processors 1210 and one or more articles of manufacture that comprise non-transitory computer-readable storage media (e.g., memory 1220 and one or more non-volatile storage devices 1230).
  • the processor(s) 1210 may control writing data to and reading data from the memory 1220 and the non-volatile storage device(s) 1230 in any suitable manner.
  • the processor(s) 1210 may execute one or more processor-executable instructions stored in one or more non-transitory computer-readable storage media (e.g., the memory 1220), which may serve as non-transitory computer-readable storage media storing processor-executable instructions for execution by the processor(s) 1210.
  • computer system 1200 also includes a microscope 1250 that captures preliminary images that may be used to obtain input images.
  • the microscope 1250 may be communicatively coupled to processor(s) 1210 using one or more wired or wireless communication networks.
  • processor(s) 1210 may be included with microscope 1250 in a single system.
  • computer system 1200 also includes a user interface 1240 in communication with processor(s) 1210. The user interface 1240 may be configured to provide identification and analysis to a healthcare professional.
  • “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of processor-executable instructions that can be employed to program a computer or other processor (physical or virtual) to implement various aspects of embodiments as discussed above. Additionally, according to one aspect, one or more computer programs that when executed perform methods of the disclosure provided herein need not reside on a single computer or processor, but may be distributed in a modular fashion among different computers or processors to implement various aspects of the disclosure provided herein.
  • Processor-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • functionality of the program modules may be combined or distributed.
  • data structures may be stored in one or more non-transitory computer-readable storage media in any suitable form.
  • data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a non-transitory computer-readable medium that convey relationship between the fields.
  • any suitable mechanism may be used to establish relationships among information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationships among data elements.
  • inventive concepts may be embodied as one or more processes, of which examples have been provided.
  • the acts performed as part of each process may be ordered in any suitable way.
  • embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
  • a method of automatically analyzing a region of interest within a tissue sample comprising:
  • the input image comprising a plurality of pixels; applying a classifier to the input image to identify a plurality of candidate pixels based on the plurality of pixels of the input image;
  • applying the classifier comprises applying the classifier to each pixel of a plurality of pixels one at a time.
  • the classifier is a trained classifier trained using a plurality of training images known to contain at least one target cell type.
  • the plurality of parameters include parameters selected from the group consisting of (i) a parameter based on pixel color, (ii) a parameter based on pixel intensity, (iii) a parameter based on whether a pixel belongs to an edge, (iv) a parameter based on whether a pixel is indicative of a particular texture, (v) a parameter based on the characteristics of an individual pixel, and (vi) a parameter based on the characteristics of a plurality of neighboring pixels.
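The per-pixel application of the classifier described above can be sketched as follows. The feature set and the toy intensity-based classifier are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

def pixel_features(img, y, x):
    """Illustrative feature vector for one pixel: color channels, intensity,
    and a simple neighborhood statistic (a stand-in for edge/texture cues)."""
    r, g, b = img[y, x]
    intensity = (int(r) + int(g) + int(b)) / 3.0
    # 3x3 neighborhood mean as a crude local-context feature
    y0, y1 = max(0, y - 1), min(img.shape[0], y + 2)
    x0, x1 = max(0, x - 1), min(img.shape[1], x + 2)
    neighborhood_mean = img[y0:y1, x0:x1].mean()
    return [r, g, b, intensity, neighborhood_mean]

def classify_pixels(img, classifier):
    """Apply the classifier to each pixel one at a time, producing a
    binary mask of candidate pixels (1 = candidate, 0 = background)."""
    h, w = img.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            if classifier(pixel_features(img, y, x)):
                mask[y, x] = 1
    return mask

# Toy example: a "classifier" that simply flags bright pixels.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = 200
mask = classify_pixels(img, lambda f: f[3] > 100)
```

The resulting 0/1 mask corresponds to the binary image formed from the candidate pixels in the claims that follow.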
  • obtaining the input image comprises:
  • a preliminary image comprising a plurality of preliminary image pixels, wherein a number of the plurality of preliminary image pixels is greater than a number of the plurality of pixels of the input image; and generating the input image from the preliminary image by segmenting the preliminary image.
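Segmenting a large preliminary image into smaller input images can be as simple as non-overlapping tiling; this sketch assumes tile dimensions that divide the image evenly:

```python
import numpy as np

def tile_image(preliminary, tile_h, tile_w):
    """Split a large preliminary image into smaller input images (tiles),
    each with fewer pixels than the original."""
    h, w = preliminary.shape[:2]
    tiles = []
    for y in range(0, h - tile_h + 1, tile_h):
        for x in range(0, w - tile_w + 1, tile_w):
            tiles.append(preliminary[y:y + tile_h, x:x + tile_w])
    return tiles

preliminary = np.zeros((100, 100, 3), dtype=np.uint8)
tiles = tile_image(preliminary, 50, 50)  # four 50x50 input images
```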
  • the at least one rule comprises a rule selected from the group consisting of (i) a blurring rule, (ii) a size constraint rule, and (iii) a shape constraint rule.
  • applying the at least one rule comprises applying the at least one rule to the binary image.
  • applying the at least one rule comprises blurring the binary image.
  • blurring the binary image comprises applying a Gaussian blur filter to the binary image.
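A minimal realization of the Gaussian-blur rule, assuming a scipy-based implementation (the patent does not name a library): blur the binary mask, then re-threshold it, which removes isolated candidate pixels while preserving solid regions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_and_rethreshold(mask, sigma=2.0, level=0.5):
    """Apply a Gaussian blur to a 0/1 mask, then threshold back to binary.
    Isolated single pixels are smeared below the threshold and drop out,
    while solid regions survive."""
    blurred = gaussian_filter(mask.astype(float), sigma=sigma)
    return (blurred >= level).astype(np.uint8)

mask = np.zeros((20, 20), dtype=np.uint8)
mask[5:15, 5:15] = 1   # solid block: survives the blur
mask[0, 0] = 1         # lone noise pixel: removed
cleaned = blur_and_rethreshold(mask)
```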
  • applying the at least one rule comprises applying a size constraint to the binary image.
  • applying the size constraint to the binary image comprises excluding a contiguous group of pixels having the first value when a number of pixels in the contiguous group of pixels is below a threshold number of pixels.
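The size constraint can be sketched with connected-component labeling; the 10-pixel threshold below is arbitrary (in practice it would correspond to an area cutoff such as the 4,000 μm² mouse filter, converted to pixels at the image's resolution):

```python
import numpy as np
from scipy.ndimage import label

def apply_size_constraint(mask, min_pixels):
    """Exclude every contiguous group of 1-pixels whose pixel count is
    below min_pixels; keep the remaining groups unchanged."""
    labeled, n = label(mask)
    out = np.zeros_like(mask)
    for region in range(1, n + 1):
        if (labeled == region).sum() >= min_pixels:
            out[labeled == region] = 1
    return out

mask = np.zeros((10, 10), dtype=np.uint8)
mask[1:5, 1:5] = 1   # 16-pixel region: kept
mask[8, 8] = 1       # 1-pixel region: excluded
filtered = apply_size_constraint(mask, min_pixels=10)
```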
  • applying the at least one rule comprises applying a shape constraint to the binary image.
  • applying the shape constraint to the binary image comprises excluding a contiguous group of pixels having the first value when a similarity measure of a shape of the contiguous group of pixels compared to a reference shape is less than a threshold similarity.
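One plausible similarity measure for the shape constraint is circularity, 4πA/P², which is 1 for a perfect circle (glomeruli are roughly round) and smaller for elongated regions; this particular measure is an assumption, since the claims do not name one:

```python
import numpy as np

def circularity(area, perimeter):
    """4*pi*A/P^2: 1.0 for a perfect circle, smaller for elongated shapes."""
    return 4.0 * np.pi * area / (perimeter ** 2)

# Compare a square region to a thin 1-pixel-wide line of equal area.
# Square of side 10: area 100, perimeter 40 -> circularity pi/4 ~ 0.785.
square = circularity(area=100, perimeter=40)
# Line of 100 pixels: area 100, perimeter ~202 -> circularity ~ 0.031.
line = circularity(area=100, perimeter=202)

threshold = 0.5
keep_square = square >= threshold  # roughly round: kept
keep_line = line >= threshold      # elongated: excluded by the shape rule
```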
  • applying the at least one rule comprises forming, from the binary image, a single contiguous group of pixels having the first value.
  • the region of interest is a region of the input image identified as being a cell of a predetermined cell type.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a saturation that exceeds a threshold saturation value.
  • the at least one feature comprises a number of nuclei.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a hue within a predetermined range.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a brightness within a predetermined range.
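The three HSV tests above can be sketched together. The specific thresholds and the mapping to phenotypes (e.g., saturation for stained nuclei, a hue window for mesangial matrix, a brightness window for open capillary space) are hypothetical placeholders, not values from the patent:

```python
import colorsys

def quantify_region(rgb_pixels, sat_thresh=0.5,
                    hue_range=(0.7, 0.9), bright_range=(0.85, 1.0)):
    """Count pixels in a region of interest that pass each HSV test.
    rgb_pixels: iterable of (r, g, b) tuples with channels in 0..255."""
    counts = {"high_saturation": 0, "hue_in_range": 0, "bright_in_range": 0}
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if s > sat_thresh:
            counts["high_saturation"] += 1
        if hue_range[0] <= h <= hue_range[1]:
            counts["hue_in_range"] += 1
        if bright_range[0] <= v <= bright_range[1]:
            counts["bright_in_range"] += 1
    return counts

# Toy region: one deep purple pixel (saturated, hue ~0.78, not bright)
# and one near-white pixel (unsaturated, bright).
counts = quantify_region([(128, 0, 180), (250, 250, 250)])
```

Per-pixel counts like these can then be normalized by region area to give quantitative scores per glomerulus.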
  • a computing device for automatically analyzing a region of interest within a tissue sample comprising:
  • a memory configured to store a classifier and an input image of the tissue sample; and a processor communicatively coupled with the memory and configured to:
  • applying the classifier comprises applying the classifier to each pixel of a plurality of pixels one at a time.
  • the classifier is a trained classifier trained using a plurality of training images known to contain at least one target cell type.
  • the classifier uses a plurality of parameters.
  • the plurality of parameters include parameters selected from the group consisting of (i) a parameter based on pixel color, (ii) a parameter based on pixel intensity, (iii) a parameter based on whether a pixel belongs to an edge, (iv) a parameter based on whether a pixel is indicative of a particular texture, (v) a parameter based on the characteristics of an individual pixel, and (vi) a parameter based on the characteristics of a plurality of neighboring pixels.
  • obtaining the input image comprises:
  • the computing device of 42 - 43 further comprising:
  • the computing device of 42 - 44 further comprising staining the tissue sample prior to capturing the input image.
  • the at least one rule comprises a rule selected from the group consisting of (i) a blurring rule, (ii) a size constraint rule, and (iii) a shape constraint rule.
  • the computing device of 34, further comprising forming a binary image with the plurality of candidate pixels having a first value and pixels of the plurality of pixels that are not classified as the plurality of candidate pixels having a second value.
  • applying the at least one rule comprises applying the at least one rule to the binary image.
  • blurring the binary image comprises applying a Gaussian blur filter to the binary image.
  • applying the at least one rule comprises applying a size constraint to the binary image.
  • applying the size constraint to the binary image comprises excluding a contiguous group of pixels having the first value when a number of pixels in the contiguous group of pixels is below a threshold number of pixels.
  • applying the at least one rule comprises applying a shape constraint to the binary image.
  • applying the shape constraint to the binary image comprises excluding a contiguous group of pixels having the first value when a similarity measure of a shape of the contiguous group of pixels compared to a reference shape is less than a threshold similarity.
  • applying the at least one rule comprises forming, from the binary image, a single contiguous group of pixels having the first value.
  • tissue sample is a kidney tissue sample.
  • region of interest is a region of the input image identified as being a cell of a predetermined cell type.
  • the computing device of 58 wherein the cell type is a glomerulus and the tissue sample is a kidney tissue sample.
  • determining at least one feature within the region of interest comprises processing a portion of the input image corresponding to the region of interest.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a saturation that exceeds a threshold saturation value.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a hue within a predetermined range.
  • the computing device of claim 60, wherein the at least one feature comprises capillary openness.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a brightness within a predetermined range.
  • At least one non-transitory storage medium encoded with executable instructions that, when executed by at least one processor, cause the at least one processor to carry out a method of automatically analyzing a region of interest within a tissue sample, wherein the method comprises:
  • applying the classifier comprises applying the classifier to each pixel of a plurality of pixels one at a time.
  • the classifier is a trained classifier trained using a plurality of training images known to contain at least one target cell type.
  • the plurality of parameters include parameters selected from the group consisting of (i) a parameter based on pixel color, (ii) a parameter based on pixel intensity, (iii) a parameter based on whether a pixel belongs to an edge, (iv) a parameter based on whether a pixel is indicative of a particular texture, (v) a parameter based on the characteristics of an individual pixel, and (vi) a parameter based on the characteristics of a plurality of neighboring pixels.
  • the at least one non-transitory storage medium of 67 wherein the method further comprises forming a binary image with the plurality of candidate pixels having a first value and pixels of the plurality of pixels that are not classified as the plurality of candidate pixels having a second value.
  • applying the at least one rule comprises applying the at least one rule to the binary image.
  • the at least one non-transitory storage medium of 80 - 81, wherein applying the at least one rule comprises blurring the binary image.
  • the at least one non-transitory storage medium of 80 - 82, wherein blurring the binary image comprises applying a Gaussian blur filter to the binary image.
  • applying the at least one rule comprises applying a size constraint to the binary image.
  • applying the size constraint to the binary image comprises excluding a contiguous group of pixels having the first value when a number of pixels in the contiguous group of pixels is below a threshold number of pixels.
  • applying the at least one rule comprises applying a shape constraint to the binary image.
  • applying the shape constraint to the binary image comprises excluding a contiguous group of pixels having the first value when a similarity measure of a shape of the contiguous group of pixels compared to a reference shape is less than a threshold similarity.
  • applying the at least one rule comprises forming, from the binary image, a single contiguous group of pixels having the first value.
  • tissue sample is a kidney tissue sample.
  • determining at least one feature within the region of interest comprises processing a portion of the input image corresponding to the region of interest.
  • the at least one non-transitory storage medium of 93, wherein the at least one feature comprises mesangial matrix expansion.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a saturation that exceeds a threshold saturation value.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a hue within a predetermined range.
  • the at least one non-transitory storage medium of 93, wherein the at least one feature comprises capillary openness.
  • determining the at least one feature comprises determining which pixels of the plurality of pixels within the region of interest have a brightness within a predetermined range.
  • a method of training a classifier for automatically determining a region of interest within a tissue sample comprising:
  • each of the plurality of training images comprising a plurality of pixels and one or more predetermined regions of interest;
  • each of the plurality of training images is known to contain at least one target cell type.
  • the plurality of parameters include parameters selected from the group consisting of (i) a parameter based on pixel color, (ii) a parameter based on pixel intensity, (iii) a parameter based on whether a pixel belongs to an edge, (iv) a parameter based on whether a pixel is indicative of a particular texture, (v) a parameter based on the characteristics of an individual pixel, and (vi) a parameter based on the characteristics of a plurality of neighboring pixels.
  • each of the plurality of training images is an image of a kidney obtained with a microscope.
  • each of the plurality of training images includes an image of a glomerulus.
  • a computing device for training a classifier for automatically determining a region of interest within a tissue sample comprising:
  • a memory configured to store a classifier and a plurality of training images
  • a processor communicatively coupled with the memory and configured to:
  • each of the plurality of training images comprising a plurality of pixels and one or more predetermined regions of interest; and train the classifier using the plurality of training images, wherein the classifier uses a plurality of parameters.
  • each of the plurality of training images is known to contain at least one target cell type.
  • the computing device of 109, wherein the classifier is a random forest classifier.
  • the plurality of parameters include parameters selected from the group consisting of (i) a parameter based on pixel color, (ii) a parameter based on pixel intensity, (iii) a parameter based on whether a pixel belongs to an edge, (iv) a parameter based on whether a pixel is indicative of a particular texture, (v) a parameter based on the characteristics of an individual pixel, and (vi) a parameter based on the characteristics of a plurality of neighboring pixels.
  • each of the plurality of training images is an image of a kidney obtained with a microscope.
  • each of the plurality of training images includes an image of a glomerulus.
  • At least one non-transitory storage medium encoded with executable instructions that, when executed by at least one processor, cause the at least one processor to carry out a method of training a classifier for automatically determining a region of interest within a tissue sample, wherein the method comprises: obtaining a plurality of training images, each of the plurality of training images comprising a plurality of pixels and one or more predetermined regions of interest; and
  • each of the plurality of training images is known to contain at least one target cell type.
  • the plurality of parameters include parameters selected from the group consisting of (i) a parameter based on pixel color, (ii) a parameter based on pixel intensity, (iii) a parameter based on whether a pixel belongs to an edge, (iv) a parameter based on whether a pixel is indicative of a particular texture, (v) a parameter based on the characteristics of an individual pixel, and (vi) a parameter based on the characteristics of a plurality of neighboring pixels.
  • each of the plurality of training images is an image of a kidney obtained with a microscope.
  • each of the plurality of training images includes an image of a glomerulus.
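A minimal version of the training step in these claims, sketched with scikit-learn's RandomForestClassifier on synthetic per-pixel feature vectors; in the real workflow the features would come from annotated training images known to contain glomeruli, and all data below is fabricated for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-pixel feature vectors: [R, G, B, intensity, edge_score].
# "Glomerulus" pixels (label 1) are drawn brighter than background (label 0).
n = 500
bg = rng.normal(loc=60, scale=15, size=(n, 5))
fg = rng.normal(loc=160, scale=15, size=(n, 5))
X = np.vstack([bg, fg])
y = np.array([0] * n + [1] * n)

# Train the random forest classifier on the labeled pixel features.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Classify two unseen feature vectors: one background-like, one glomerulus-like.
pred = clf.predict([[55, 58, 60, 57, 61], [165, 158, 162, 160, 159]])
```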
  • the phrase “at least one,” in reference to a list of one or more elements should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
  • “at least one of A and B” can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
  • a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising”, can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

Methods and apparatus for automatically analyzing a region of interest within a tissue sample. The method comprises: obtaining an input image of the tissue sample, the input image comprising a plurality of pixels; applying a classifier to the input image to identify a plurality of candidate pixels based on the plurality of pixels of the input image; applying at least one rule to the plurality of candidate pixels to determine a region of interest; and determining at least one feature within the region of interest.
PCT/US2019/046897 2018-08-17 2019-08-16 Automatic identification and analysis of a tissue sample WO2020037255A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862765160P 2018-08-17 2018-08-17
US62/765,160 2018-08-17

Publications (1)

Publication Number Publication Date
WO2020037255A1 true WO2020037255A1 (fr) 2020-02-20

Family

ID=69525868

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/046897 WO2020037255A1 (fr) Automatic identification and analysis of a tissue sample

Country Status (1)

Country Link
WO (1) WO2020037255A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11875893B2 (en) * 2021-08-10 2024-01-16 Lunit Inc. Method and apparatus for outputting information related to a pathological slide image

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6993170B2 (en) * 1999-06-23 2006-01-31 Icoria, Inc. Method for quantitative analysis of blood vessel structure
US20090317381A1 (en) * 2005-11-18 2009-12-24 Tufts Medical Center Clearance of abnormal iga1 in iga1 deposition diseases
US20130102885A1 (en) * 2010-10-13 2013-04-25 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus, magnetic resonance imaging method and image display apparatus
EP2440920B1 (fr) * 2009-07-20 2014-09-10 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Localisation d'une région saine d'un frottis sanguin
US20150030219A1 (en) * 2011-01-10 2015-01-29 Rutgers, The State University Of New Jersey Method and apparatus for shape based deformable segmentation of multiple overlapping objects
US20160206235A1 (en) * 2013-10-07 2016-07-21 Teresa Wu Kidney glomeruli measurement systems and methods
WO2016191567A1 (fr) * 2015-05-26 2016-12-01 Memorial Sloan-Kettering Cancer Center Système, procédé et support accessible par ordinateur pour l'analyse de texture de maladies hépatopancréatobiliaires
WO2017136892A1 (fr) * 2016-02-11 2017-08-17 La Trobe University Procédé de diagnostic
US20180204048A1 (en) * 2015-09-02 2018-07-19 Ventana Medical Systems, Inc. Automated analysis of cellular samples having intermixing of analytically distinct patterns of analyte staining

Similar Documents

Publication Publication Date Title
Burton et al. RootScan: software for high-throughput analysis of root anatomical traits
Liu et al. A vision-based robust grape berry counting algorithm for fast calibration-free bunch weight estimation in the field
Kromp et al. Evaluation of deep learning architectures for complex immunofluorescence nuclear image segmentation
CN109829882B (zh) 一种糖尿病视网膜病变分期预测方法
Sheehan et al. Automatic glomerular identification and quantification of histological phenotypes using image analysis and machine learning
Chopin et al. RootAnalyzer: a cross-section image analysis tool for automated characterization of root cells and tissues
CN112215790A (zh) 基于深度学习的ki67指数分析方法
US20150186755A1 (en) Systems and Methods for Object Identification
US20210216745A1 (en) Cell Detection Studio: a system for the development of Deep Learning Neural Networks Algorithms for cell detection and quantification from Whole Slide Images
CN113129281B (zh) 一种基于深度学习的小麦茎秆截面参数检测方法
CN110838094B (zh) 病理切片染色风格转换方法和电子设备
CN107832838A (zh) 评价细胞涂片标本满意度的方法和装置
CN107567631B (zh) 组织样品分析技术
US20240079116A1 (en) Automated segmentation of artifacts in histopathology images
CN116188423A (zh) 基于病理切片高光谱图像的超像素稀疏解混检测方法
CN114830173A (zh) 基于由病灶覆盖的人体表面积的百分比确定皮肤病的严重程度的方法
US11804029B2 (en) Hierarchical constraint (HC)-based method and system for classifying fine-grained graptolite images
CN112912923A (zh) 基于距离的组织状态确定
Kanwal et al. Quantifying the effect of color processing on blood and damaged tissue detection in whole slide images
Phillips et al. Segmentation of prognostic tissue structures in cutaneous melanoma using whole slide images
Möhle et al. Development of deep learning models for microglia analyses in brain tissue using DeePathology™ STUDIO
Rachna et al. Detection of Tuberculosis bacilli using image processing techniques
Foucart et al. Artifact identification in digital pathology from weak and noisy supervision with deep residual networks
Kloeckner et al. Multi-categorical classification using deep learning applied to the diagnosis of gastric cancer
WO2020037255A1 (fr) Identification et analyse automatiques d'un échantillon de tissu

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19849777

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19849777

Country of ref document: EP

Kind code of ref document: A1