EP2174263A1 - Malignancy diagnosis using content-based image retrieval of tissue histopathology - Google Patents
Malignancy diagnosis using content-based image retrieval of tissue histopathology
Info
- Publication number
- EP2174263A1 (application EP07836399A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- feature
- average
- regions
- gland
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/695—Preprocessing, e.g. image segmentation
Definitions
- This invention is directed to computer-aided diagnostics using content-based retrieval of histopathological image features. Specifically, the invention is directed to the extraction of image features from a histopathological tissue image based on predetermined criteria and their use in the diagnosis and prognosis of malignancy in that tissue.
- Histopathological examination, the diagnostic study of a tissue specimen, is one of the most important medical tools in the process of diagnosis.
- the histopathological diagnosis of several cancer types is based upon architectural (symmetry, circumscription, maturation, nests, arrangement, distribution), cytological (atypicality, mitosis, necrosis) and other cancer-specific criteria.
- pathologists follow a known list of criteria that has to be checked in order to diagnose the specimen.
- the pathological criteria are crucial in order to establish accurate diagnosis and prognosis.
- CBIR content-based image retrieval
- a CBIR system relies on a similarity metric to retrieve images from a database.
- the metric used in most systems is a linear distance measure, but because most systems use a large number of features or dimensions, it is common to use manifold learning (ML) methods to map the data into a low-dimensional space. Images that are similar in a high dimensional space will be mapped close together in the transformed space, preserving object similarities.
- although many ML methods have been developed over the years, most CBIR systems employ principal component analysis.
- a CBIR system was proposed for histopathology that used color histograms, texture, and Fourier coefficients to describe the content of histological images from various malignancies, using a weighted cosine measure to determine image similarity.
- quantitative evaluation of the system with different feature sets and ML methods was not done.
- the invention provides a computer-aided diagnostic method to predict the probability that a histological image contains a malignant region comprising: obtaining a histological image, wherein the histological image is a first of a series of images ordered in increasing magnification; identifying a region or regions of said histological image classified as suspect; extracting one or more image features from at least one of said identified regions; reducing a dimensionality of said extracted feature data; and classifying said extracted region or regions as either benign, malignant, or suspect based on at least one said extracted image feature, whereby if said extracted region or regions are classified as malignant, the histological image has a malignant region, otherwise; if the extracted region or regions are classified as benign, then the histological image does not have a malignant region; or if the extracted region or regions are classified as suspect, the next histological image in the series is obtained and the steps of identifying, extracting, reducing and classifying are repeated.
- the invention provides a computer-aided diagnostic method to predict the probability that a histological image of a prostate tissue contains a malignant region comprising: obtaining a prostate histological image, wherein the histological image is a first of a series of images ordered in increasing magnification; identifying a region or regions of said histological image classified as suspect; extracting one or more image features from at least one of said identified regions; reducing the dimensionality of said extracted feature data; and classifying said extracted region or regions as either benign, malignant, or suspect based on at least one said extracted image feature, whereby if said extracted region or regions are classified as malignant, the histological image has a malignant region, otherwise; if the extracted region or regions are classified as benign, then the histological image does not have a malignant region; or if the extracted region or regions are classified as suspect, the next histological image in the series is obtained and the steps of identifying, extracting, reducing and classifying are repeated.
- the invention provides a computer-aided diagnostic method to predict the probability that a histological image of a breast tissue contains a malignant region comprising: obtaining a breast histological image, wherein the histological image is a first of a series of images ordered in increasing magnification; identifying a region or regions of said histological image classified as suspect; extracting one or more image features from at least one of said identified regions; reducing the dimensionality of said extracted feature data; and classifying said extracted region or regions as either benign, malignant, or suspect based on at least one said extracted image feature, whereby if said extracted region or regions are classified as malignant, the histological image has a malignant region, otherwise; if the extracted region or regions are classified as benign, then the histological image does not have a malignant region; or if the extracted region or regions are classified as suspect, the next histological image in the series is obtained and the steps of identifying, extracting, reducing and classifying are repeated.
- the invention provides a content-based image retrieval (CBIR) system for the comparison of novel histopathological images with a database of histopathological images of known clinical significance, comprising: obtaining a histological image; extracting one or more content-based image features from said image; storing said one or more content-based image features in a computer readable media as a database image; constructing said computer readable media to hold one or more such database images; and comparing a query image not included in said database images and one or more of said database images.
- CBIR content-based image retrieval
- Figure 1 shows examples of (a) Gleason grade 3 tissue, (b) Gleason grade 4 tissue, (c) a gland from (a) magnified, (d) a gland from (b) magnified, (e) a benign gland, and (f) an illustration of the lumen and nuclei comprising the gland in (e);
- Figure 2 shows an overview and organization of the CBIR system for automated retrieval of prostate histopathology images
- Figure 3 shows examples of graphs superimposed on a patch of Gleason grade 4 tissue (a). Shown are (b) the Voronoi Diagram, (c) the Delaunay Triangulation, and (d) the Minimum Spanning Tree;
- Figure 4 shows examples of (a) Gleason grade 3 gland and (b) Gleason grade 4 gland. The lumen boundary is shown in white;
- Figure 5 shows scatter plots obtained through (a) MDS and (c) PCA, with a closeup of the boxed region.
- Figure 6 shows digitized histological tissue patches corresponding to (a) Gleason grade 3 adenocarcinoma, (b) grade 4 adenocarcinoma, (c) benign epithelium, and (d) benign stroma;
- Figure 7 shows (a) A region of benign epithelium comprising 6 glands, (b) a gland from (a) magnified, and (c) one of the nuclei surrounding the gland in (b), also magnified;
- Figure 8 shows a comparison of ((a)-(f)) Gleason grade 3 tissue, ((g)-(l)) grade 4 tissue, ((m)-(r)) benign epithelium, and ((s)-(x)) benign stroma.
- Superimposed on ((a), (g), (m), (s)) the original images are ((b), (h), (n), (t)) the Voronoi diagram, ((c), (i), (o), (u)) the Delaunay triangulation, and ((d), (j), (p), (v)) the minimum spanning tree;
- Figure 9 shows scatter plots of tissue regions represented in reduced three-dimensional space.
- Tissue regions belonging to (a) Gleason grade 3 (circles), Gleason grade 4 (squares) and benign stromal tissue (downward-pointing triangles) are distinguished by textural and graph-based features, and (b) Gleason grades 3 and 4 with benign epithelium (upward-pointing triangles) are distinguished by textural and graph-based features.
- (c) the subset of tissue regions containing glandular features, distinguished by graph-based, textural, and glandular morphological features;
- Figure 10 shows the features used in image classification.
- (b), (e) graphical maps; (c), (f) Delaunay triangulation;
- Figure 11 shows (a) Isomap embedding, and corresponding (b) Locally linear embedding scatter plots showing separation between high grade (+) and low grade breast cancer (o), obtained via use of over 100 textural image features;
- Figure 12 shows Clear Cell Carcinoma (Green) vs. Chromophobe Renal Cell Carcinoma (Blue), reduced using Laplacian Eigenmaps;
- Figure 13 shows Clear Cell Carcinoma (Green) vs. Mucinous / Spindle Cell Carcinoma (Yellow), reduced using Graph Embedding;
- Figure 14 shows Clear cell carcinoma (Green) vs. Papillary / Solid Growth Carcinoma (Red), reduced using Graph Embedding;
- Figure 15 shows Chromophobe Renal Cell Carcinoma (Yellow) vs. Mucinous / Spindle Cell Carcinoma (Blue), reduced using Graph Embedding;
- Figure 16 shows Chromophobe Renal Cell Carcinoma (Blue) vs. Papillary / Solid Growth Carcinoma (Red), reduced using Principal Component Analysis; and Figure 17 shows Mucinous / Spindle Cell Carcinoma (Yellow) vs. Papillary / Solid Growth Carcinoma (Red), reduced using Graph Embedding.
- This invention relates in one embodiment to computer-aided diagnostics using content-based retrieval of histopathological image features.
- the invention relates to the extraction of image features from a histopathological image based on predetermined criteria and their analysis for malignancy determination.
- Computer-aided detection refers to the use of computers to analyze medical images to detect anatomical abnormalities therein.
- used interchangeably with computer-aided detection are the terms computer-aided diagnosis, computer-assisted diagnosis, or computer-assisted detection in other embodiments.
- the outputs of CAD systems are sets of information sufficient to communicate the locations of anatomical abnormalities, or lesions, in a medical image, and in other embodiments, can include other information such as the type of lesion, degree of suspiciousness, and the like.
- CT computerized tomography
- MRI magnetic resonance imaging
- PET positron emission tomography
- SPECT single-photon emission computed tomography
- ultrasound in other distinct embodiments
- thermography electrical conductivity-based modalities, and the like in other applicable embodiments.
- a CBIR system using features modeled on existing or newly developed paradigms as well as those that were designed outside its purview can be constructed for the benefit of clinical pathologists.
- choice of manifold learning (ML) algorithms, or number of reduced dimensions, and feature subsets in other discrete embodiments of the methods and system described herein, are used in designing an optimal CBIR system.
- a computer-aided diagnostic (CAD) method to predict the probability that a histological image contains a malignant region comprising: obtaining a histological image, wherein the histological image is a first of a series of images ordered in increasing magnification; identifying a region or regions of said histological image classified as suspect; extracting one or more image features from at least one of said identified regions; reducing a dimensionality of said extracted feature data; and classifying said extracted region or regions as either benign, malignant, or suspect based on at least one said extracted image feature, whereby if said extracted region or regions are classified as malignant, the histological image has a malignant region, otherwise; if the extracted region or regions are classified as benign, then the histological image does not have a malignant region; or if the extracted region or regions are classified as suspect, the next histological image in the series is obtained and the steps of identifying, extracting, reducing and classifying are repeated.
- CAD computer-aided diagnostic
- the images used in the methods and systems to predict the probability that a histological image contains a malignant region and its severity, provided herein are ordered in a series of increasing magnification.
- the increase in magnification is between about 1 and 6 orders of magnitude, and the images are taken by different imaging means, such as a light microscope for the lowest magnification in one embodiment, and PET or a tunneling electron microscope in another embodiment for the higher magnification.
- magnification and imaging means can be optimized based on the tissue and purpose of the imaging used for carrying out the methods provided herein.
- the image feature extracted in the methods and systems described herein is a statistical feature.
- the image feature extracted is a Haralick cooccurrence feature.
- the image feature extracted is a Haar wavelet feature.
- the image feature extracted is a Gabor feature.
- the image feature extracted is calculated independently from each channel of a digitized image using a sliding window of 3x3 pixels, or in another embodiment, of 5x5 pixels, or in another embodiment, of 7x7 pixels.
- the Haralick co-occurrence feature, which is a statistical feature extracted in the methods and systems described herein, is an angular second moment.
- the Haralick co-occurrence feature is a contrast.
- the Haralick co-occurrence feature is a correlation. In another embodiment, the Haralick co-occurrence feature is a variance. In another embodiment, the Haralick co-occurrence feature is an entropy. In another embodiment, the Haralick co-occurrence feature is an inverse difference moment. In another embodiment, the Haralick co-occurrence feature is a sum average. In another embodiment, the Haralick co-occurrence feature is a sum variance. In another embodiment, the Haralick co-occurrence feature is a sum entropy. In another embodiment, the Haralick co-occurrence feature is a difference variance, or a difference entropy in another embodiment.
- the Haralick co-occurrence method describes texture in terms of the statistical distribution of the grayscale pixel values within an image, such as in a grayscale of a histological image used in the methods and systems described herein.
- P(i, j) = Pr( I(x, y) = i, I(x + dx, y + dy) = j ) (2.5)
- PMF joint probability mass function
- Haralick accomplishes this with the so-called co-occurrence matrices.
- an image I of size Nx × Ny with the set of distinct grayscale values G = {1, 2, . . . , Ng}
- Haralick's method creates symmetric co-occurrence matrices P(i, j; d, θ) with i, j ∈ G specifying grayscale values, θ ∈ {0°, 45°, 90°, 135°} defining an angular direction and d representing the user-defined pixel distance.
- the (i, j) entry of P(i, j; d, ⁇ ) holds the total number of pixel pairs in the image, normalized by the total number of pixels in the image, with grayscale values i and j such that the two pixels in the pairs lie d pixels apart in the angular direction ⁇ .
- the method produces four co-occurrence matrices, one for each of the four θ values specified above.
- the value of d specifies the size of the neighborhood over which it is feasible to estimate the PMF of the grayscale distribution.
- the resulting co-occurrence matrices serve as an estimate of the true grayscale distribution of the image.
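The construction described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the toy image, function names, and zero-based grayscale levels are assumptions, and the matrix is normalized to unit sum so it estimates the PMF.

```python
import numpy as np

def cooccurrence_matrix(img, d=1, theta=0, n_gray=4):
    """Symmetric co-occurrence matrix P(i, j; d, theta) as described
    above; `img` holds integer grayscale levels in [0, n_gray)."""
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dr, dc = offsets[theta]
    P = np.zeros((n_gray, n_gray))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
                P[img[r2, c2], img[r, c]] += 1   # keep the matrix symmetric
    return P / P.sum()   # normalise so the matrix estimates the PMF

def haralick_stats(P):
    """Two of the Haralick features listed above."""
    i, j = np.indices(P.shape)
    asm = float(np.sum(P ** 2))                 # angular second moment
    contrast = float(np.sum((i - j) ** 2 * P))  # contrast
    return asm, contrast

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = cooccurrence_matrix(img, d=1, theta=0, n_gray=4)
asm, contrast = haralick_stats(P)
```

The remaining Haralick statistics (entropy, sum average, and so on) are computed from the same matrix P in the same way.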
- the term "statistical feature" refers in one embodiment to preselected sub-structures which are larger than a certain threshold value chosen based on a statistic of their statistically significant sub-structures. That statistic is in one embodiment an average. In another embodiment, the statistic is a median. In another embodiment, the statistic is a standard deviation. In another embodiment, the statistic is a difference. In another embodiment, the statistic is a Sobel filter. In another embodiment, the statistic is a Kirsch filter. In another embodiment, the statistic is a horizontal derivative. In another embodiment, the statistic is a vertical derivative. In another embodiment, the statistic is a diagonal derivative of a pre-selected determinant. In another embodiment, the statistic is a combination of factors.
- the Haar wavelet feature extracted from the histological image series described herein, used in the methods and systems provided herein, is the result of a convolution of said image with a Haar wavelet kernel.
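As an illustration of this convolution, the sketch below convolves a small image with the four standard 2x2 Haar wavelet kernels. The kernel set, scaling, and example image are conventional choices assumed for illustration, not taken from the patent.

```python
import numpy as np

def convolve2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution (kernel flipped, no padding)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    flipped = kernel[::-1, ::-1]
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * flipped)
    return out

# The four standard 2x2 Haar kernels: approximation plus horizontal,
# vertical and diagonal detail.
haar_kernels = {
    "approx":     0.5 * np.array([[1.0, 1.0], [1.0, 1.0]]),
    "horizontal": 0.5 * np.array([[1.0, 1.0], [-1.0, -1.0]]),
    "vertical":   0.5 * np.array([[1.0, -1.0], [1.0, -1.0]]),
    "diagonal":   0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]]),
}

img = np.arange(16, dtype=float).reshape(4, 4)
haar_features = {name: convolve2d_valid(img, k)
                 for name, k in haar_kernels.items()}
```

On this linear ramp image the horizontal-detail response is constant, since intensity increases uniformly down the rows.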
- the Gabor feature is the result of convolving the image used in the methods and systems described herein with a bank of Gabor filters, generated by multiplying a Gaussian function with a sinusoid at a range of scale and orientation parameter values.
- the Gabor filters used in the methods and systems described herein are linear filters whose impulse response is defined by a harmonic function multiplied by a Gaussian function.
- the Fourier transform of a Gabor filter's impulse response is the convolution of the Fourier transform of the harmonic function and the Fourier transform of the Gaussian function.
- the term "convolution" refers to a mathematical operator which takes two functions f and g and produces a third function that represents the amount of overlap between f and a reversed and translated version of g.
- convolution refers to a general moving average, by making one of the functions an indicator function of an interval.
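Combining the two definitions above, a Gabor kernel can be sketched as a sinusoid multiplied by a Gaussian envelope, and a filter bank as the set of such kernels over several scales and orientations. The parameter names (`sigma`, `theta`, `wavelength`) and the specific parameter grid are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength):
    """Real Gabor kernel: a sinusoid (harmonic function) multiplied by
    a Gaussian envelope, per the definition above."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate to orientation theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    gaussian = np.exp(-(x_t ** 2 + y_t ** 2) / (2.0 * sigma ** 2))
    sinusoid = np.cos(2.0 * np.pi * x_t / wavelength)
    return gaussian * sinusoid

# A small bank over a range of scale and orientation parameter values.
bank = [gabor_kernel(size=9, sigma=s, theta=t, wavelength=4.0)
        for s in (1.0, 2.0)
        for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

Each kernel in the bank would then be convolved with the image, exactly as in the Haar wavelet case.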
- the step of reducing the dimensionality in the methods and systems for predicting the probability that a histological image contains a malignant region and its severity and grading in certain embodiments is done by using a principal component analysis.
- reducing the dimensionality is carried out by a linear dimensional analysis.
- reducing the dimensionality is carried out by a multidimensional scaling.
- reducing the dimensionality is carried out by a graph embedding.
- reducing the dimensionality is carried out by an ISOMAP.
- reducing the dimensionality is carried out by a local linear embedding.
- reducing the dimensionality is carried out by a kernel-based principal component analysis.
- reducing the dimensionality is carried out by a semidefinite embedding. In another embodiment, reducing the dimensionality is carried out by Laplacian eigenmaps. In another embodiment, reducing the dimensionality is carried out by a combination thereof.
- the methods used to reduce the dimensionality of the image features extracted from the histological images described herein are collectively referred to as "Manifold Learning" (ML). A person skilled in the art would readily recognize that the choice of ML method used in the methods described herein may be optimized for several factors, such as, without limitation, the tissue type, the image feature used, the imaging means and the like. Also considered in other embodiments are methods such as finite element and cluster analysis.
- the term "reducing dimensionality" refers to mathematical techniques used to reduce multivariable data sets to a lower number of data sets, or "dimensions", for analysis. In one embodiment, these techniques use several algorithms to explain the variability among the observed variables with 1, 2...n variables, in which n is a much lower number of dimensions than initially observed.
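As a concrete instance of the principal component analysis embodiment, the sketch below projects feature vectors onto their leading principal directions via SVD. The data here are random stand-ins for extracted texture features; the function name is illustrative.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their principal components via SVD:
    the directions of greatest variance in the feature space."""
    Xc = X - X.mean(axis=0)                      # centre the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 14))    # e.g. 14 texture features per tissue region
X_low = pca_reduce(X, n_components=3)
```

The retained components are ordered by explained variance, so the first coordinate of `X_low` is the most informative.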
- the step of classifying the extracted region or regions from the histological image as either benign, malignant, or suspect based on the extracted image feature in the methods described herein further comprises the steps of creating a likelihood scene for the selected image; creating a mask identifying regions of said selected image as benign or suspect by thresholding said likelihood scene at a value determined through system training, or in another embodiment manifold learning; and classifying the extracted region or regions as either benign, malignant, or suspect based on said mask, whereby if the region or regions are classified as suspect in one embodiment, said mask is resized by interpolation to the size of the subsequent image magnification in the ordered series to identify suspect regions on said subsequent image.
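The thresholding and mask-resizing steps just described can be sketched as follows. The threshold values would come from system training; the numbers below, the two-threshold scheme, and the nearest-neighbour resize are illustrative assumptions.

```python
import numpy as np

def classify_regions(likelihood, t_benign, t_malignant):
    """Threshold a likelihood scene into a benign/suspect/malignant mask."""
    mask = np.full(likelihood.shape, "suspect", dtype=object)
    mask[likelihood < t_benign] = "benign"       # clearly benign regions
    mask[likelihood >= t_malignant] = "malignant"
    return mask

def resize_mask(mask, factor):
    """Nearest-neighbour resize of the mask to the size of the next
    (higher-magnification) image in the ordered series."""
    return mask.repeat(factor, axis=0).repeat(factor, axis=1)

likelihood = np.array([[0.10, 0.50],
                       [0.80, 0.95]])
mask = classify_regions(likelihood, t_benign=0.3, t_malignant=0.9)
big_mask = resize_mask(mask, factor=2)   # map suspect regions onto the next image
```

Only the regions still labelled "suspect" in `big_mask` would be re-examined at the next magnification.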
- the suspect region is identified in the subsequent image, which is obtained at another time or, in another embodiment, with different imaging means.
- the term "resized" denotes the selection of the observed suspect region in a subsequent image based on the ordering of the images.
- the ordering of the images may be resolution, or in another embodiment, the application of different filtering systems.
- a subsequent image is in one embodiment a grayscale image followed by a subsequent image at the same magnification but which is a color image.
- the term “mask” refers in one embodiment to a technique for isolating the edges in an image, amplifying them, and then adding them back into the image.
- the term “scene”, or “likelihood scene” is used in the context of the recognition and localization of generic object classes in the histological images used in the methods and systems described herein.
- a likelihood scene uses a combination of image features in order to classify the extracted region according to whether it contains instances of a given object, such as a nucleus in one embodiment, or thickness of membranes, calcium deposits and the like in other embodiments.
- the likelihood scene is selected based on a decision tree algorithm, which, in another embodiment is a part of the readable computer media used to carry out the methods and systems described herein.
- the likelihood scene is selected based on a Support Vector Machine (SVM) algorithm.
- SVM Support Vector Machine
- the term "Support Vector Machine algorithm” refers in one embodiment to a set of related learning methods used for classification and regression.
- SVM algorithms simultaneously minimize the empirical classification error and maximize the geometric margin, and are referred to interchangeably as maximum margin classifiers in another embodiment.
- SVM uses extracted features to discriminate between each pair of prostate tissue types of Gleason grade 3 vs. grade 4, grade 3 vs. benign epithelium, or in another embodiment between grade 3 vs. benign stroma, or in another embodiment between grade 4 vs. benign epithelium, or in another embodiment between grade 4 vs. benign stroma, and or in another embodiment between benign epithelium vs. benign stroma.
- a third of the data from each group is used for training the classifier and the rest is used for testing. Since SVMs perform classification in a high-dimensional space, local linear embedding (LLE) is applied in one embodiment to reduce the dimensionality of the feature space prior to classification.
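A sketch of this LLE-then-SVM pipeline using scikit-learn is shown below. The synthetic Gaussian clusters stand in for the extracted high-dimensional features of two tissue classes; the neighbour count, embedding dimension, and kernel choice are assumptions for illustration.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for high-dimensional texture features of two
# tissue classes (e.g. Gleason grade 3 vs. grade 4); real feature
# vectors would come from the extraction steps described above.
X = np.vstack([rng.normal(0.0, 1.0, (30, 20)),
               rng.normal(3.0, 1.0, (30, 20))])
y = np.array([0] * 30 + [1] * 30)

# Reduce the dimensionality of the feature space with LLE, as described.
lle = LocallyLinearEmbedding(n_components=3, n_neighbors=6)
X_low = lle.fit_transform(X)

# Train on a third of the data, test on the rest.
train = np.arange(len(y)) % 3 == 0
clf = SVC(kernel="linear").fit(X_low[train], y[train])
accuracy = clf.score(X_low[~train], y[~train])
```

With well-separated classes the embedded coordinates remain separable and the SVM recovers the class boundary.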
- LLE local linear embedding
- the likelihood scene is determined by an Adaboost algorithm using Bayes Decision Theorem.
- the likelihood scene is determined at least partially by an image feature extracted from one or more training images.
- AdaBoost refers to an adaptive boosting algorithm, a class of concrete algorithms that is adaptive in that subsequent classifiers are tweaked in favor of those instances misclassified by previous classifiers.
- a distribution of weights D is updated on each iteration, indicating the importance of examples in the data set for the classification. On each iteration, the weights of each suspect-classified example are increased (or, in another embodiment, the weights of each benign- or malignant-classified example are decreased), so that the new classifier focuses more on those examples.
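One round of such a reweighting can be sketched as below. This is the standard AdaBoost update, shown as a generic illustration rather than the patent's exact scheme; labels are assumed to be in {-1, +1}.

```python
import numpy as np

def adaboost_weight_update(D, y_true, y_pred):
    """One boosting round: increase the weights of misclassified
    examples so the next classifier focuses on them."""
    eps = float(np.sum(D[y_true != y_pred]))   # weighted training error
    alpha = 0.5 * np.log((1.0 - eps) / eps)    # weight of this weak classifier
    D_new = D * np.exp(-alpha * y_true * y_pred)
    return D_new / D_new.sum(), alpha          # renormalise to a distribution

D = np.full(4, 0.25)
y_true = np.array([1, 1, -1, -1])
y_pred = np.array([1, 1, -1, 1])   # last example misclassified
D_new, alpha = adaboost_weight_update(D, y_true, y_pred)
```

After the update, the misclassified example carries half the total weight, so the next classifier is pushed toward fixing it.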
- a method for histopathological analysis of a gland tissue sample comprising obtaining a histological image, wherein the histological image is a first of a series of images ordered in increasing magnification; identifying a region or regions of said histological image classified as suspect; extracting one or more image features from at least one of said identified regions; reducing a dimensionality of said extracted feature data; and classifying said extracted region or regions as either benign, malignant, or suspect based on at least one said extracted image feature, whereby if said extracted region or regions are classified as malignant, the histological image has a malignant region, otherwise; if the extracted region or regions are classified as benign, then the histological image does not have a malignant region; or if the extracted region or regions are classified as suspect, the next histological image in the series is obtained and the steps of identifying, extracting, reducing and classifying are repeated.
- the image feature extracted in the methods described herein, for a histopathological analysis of a gland tissue is an architectural feature.
- the image feature is a nuclear density.
- the image feature is a gland morphology.
- the image feature is a global texture feature.
- the image feature is a combination of features.
- the image feature extracted for the methods and systems provided herein is gland specific and may include other features.
- the architectural feature extracted in the methods described herein is calculated from a Voronoi diagram.
- the architectural feature is calculated from a Delaunay graph.
- the architectural feature is calculated from a minimum spanning tree constructed from the nuclei centers in the image.
- the architectural feature is calculated from a co-adjacency matrix constructed from the gland centers in the image.
- the term "Voronoi diagram" refers to a decomposition of a metric space determined by distances to a specified discrete set of objects in the space.
- Voronoi diagram is used interchangeably with the terms “Voronoi tessellation", “Voronoi decomposition” or “Dirichlet tessellation”.
- the Voronoi diagram for S is the partition of the plane which associates a region V(p) with each point p from S in such a way that all points in V(p) are closer to p than to any other point from S. Applications of the Voronoi diagram are further described in the Examples hereinbelow.
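The partition just defined can be computed directly by assigning every pixel to its nearest seed point, as in the brute-force sketch below. The seed coordinates are hypothetical nuclei centres; practical systems would use a computational-geometry routine instead of this didactic grid search.

```python
import numpy as np

def voronoi_labels(points, shape):
    """Brute-force Voronoi partition of a pixel grid: every pixel is
    assigned to the region of its nearest seed point."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1)          # (H*W, 2)
    # Squared distance from every pixel to every seed point.
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(shape)   # region index per pixel

seeds = np.array([[2, 2], [2, 7], [7, 4]])    # hypothetical nuclei centres
labels = voronoi_labels(seeds, shape=(10, 10))
```

Features such as the areas or perimeters of the regions V(p) can then be read off from `labels`.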
- the term "Delaunay Graph”, refers to the depiction of Delaunay triangulation.
- Delaunay graph of a set of points P is the dual graph of the Voronoi diagram of P.
- an arc is created between Vi and Vj, the vertices located in sites Si and Sj.
- the arcs are straightened into line segments, resulting in a Delaunay graph of the point set P (DG(P)).
- the P point set is the nuclei centers in the image extracted from the histological gland images used in the methods described herein.
- the architectural feature computed from a Delaunay graph is the standard deviation, average, ratio of minimum to maximum, or disorder of the side lengths or areas of the triangles in the Delaunay graph of the histological images used in the methods described herein.
- the image feature extracted from the histological image used in the methods described herein is an architectural feature computed from a minimum spanning tree: the standard deviation, average, ratio of minimum to maximum, or disorder of the edge lengths in the minimum spanning tree of said image.
- the image feature extracted from the histological image used in the methods described herein is an architectural feature computed from the co-adjacency matrix constructed from the gland centers, which is used to calculate an angular second moment, a contrast, a correlation, a variance, an entropy, an inverse difference moment, a sum average, a sum variance, a sum entropy, a difference variance, or a difference entropy.
- the image feature extracted from the histological image used in the methods described herein is a nuclear density feature, referring in one embodiment to the number or density of nuclei in the image and, in another embodiment, to the average, standard deviation, or disorder of the distances from each of the nuclei in said image to the 3 nearest neighboring nuclei, or in another embodiment, to the 5 nearest neighboring nuclei, or in another embodiment, to the 7 nearest neighboring nuclei; or the average, standard deviation, or disorder of the number of neighboring nuclei within a circle with a radius of 10, 20, 30, 40, or 50 pixels centered on each nucleus within said image in other discrete embodiments of the methods and systems described herein.
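The nearest-neighbour and fixed-radius statistics listed above can be sketched directly from a distance matrix of nuclei centres. The example coordinates and the specific `k` and `radius` values are illustrative.

```python
import numpy as np

def nuclear_density_features(centers, k=3, radius=10.0):
    """Average/std of distances from each nucleus to its k nearest
    neighbours, and of neighbour counts within a fixed pixel radius."""
    d = np.sqrt(((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2))
    np.fill_diagonal(d, np.inf)        # a nucleus is not its own neighbour
    knn = np.sort(d, axis=1)[:, :k]    # k nearest-neighbour distances per nucleus
    counts = (d < radius).sum(axis=1)  # neighbours inside the radius
    return {"knn_avg": knn.mean(), "knn_std": knn.std(),
            "count_avg": counts.mean(), "count_std": counts.std()}

centers = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
feats = nuclear_density_features(centers, k=2, radius=1.5)
```

The same function evaluated with k = 3, 5, 7 and radii of 10-50 pixels yields the embodiment variants named above.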
- the image feature extracted from the histological image used in the methods and systems described herein is gland morphology, which is calculated from the boundary of either the gland lumen in one embodiment, or interior nuclei in another embodiment, as the ratio of the gland area to the area of the smallest circle that circumscribes the gland; or the standard deviation, variance, or ratio of the maximum to the average distance between the gland center and the boundary; the ratio of the estimated boundary length (the boundary calculated using a fraction of the boundary pixels) to the actual boundary length; ratio of the boundary length to the area enclosed by the boundary; and the sum of the difference between the distance from the center of the gland to the boundary and the average of the distances from the center to the two adjacent points in other embodiments of gland morphology used as the extracted image feature in the methods and systems described herein.
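A few of the gland-morphology ratios above can be sketched from a boundary point list. Note one simplification labelled in the code: the smallest circumscribing circle is approximated by the maximum centre-to-boundary distance, whereas an exact minimum enclosing circle would need a computational-geometry routine; the square test boundary is purely illustrative.

```python
import numpy as np

def gland_morphology(boundary):
    """Shape features from a closed gland boundary, an (N, 2) array of
    points ordered around the gland."""
    center = boundary.mean(axis=0)
    r = np.sqrt(((boundary - center) ** 2).sum(axis=1))  # centre-to-boundary distances
    x, y = boundary[:, 0], boundary[:, 1]
    # Polygon area via the shoelace formula.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.sqrt(((boundary - np.roll(boundary, -1, axis=0)) ** 2)
                        .sum(axis=1)).sum()
    return {
        # Approximation: circumscribing circle radius taken as r.max().
        "area_ratio": area / (np.pi * r.max() ** 2),
        "max_to_avg_dist": r.max() / r.mean(),
        "boundary_to_area": perimeter / area,
    }

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
morph = gland_morphology(square)
```

Irregular, infiltrative glands give lower `area_ratio` and higher `boundary_to_area` values than round benign glands.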
- the image feature extracted from the histological image used in the methods described herein is a global texture feature that is an average, or a median, a standard deviation, a range, an angular second moment, a contrast, a correlation, a variance, an entropy, a sum average, a sum variance, a sum entropy, a difference variance, a difference entropy, a difference moment, or a Gabor filter in other embodiments.
- the term "Global texture feature" refers to a small number of numerical values used to define an image or a region in an image.
- the methods described hereinabove are used to diagnose the presence of malignancy in a prostate.
- a computer-aided diagnostic method to predict the probability that a histological image of a prostate tissue contains a malignant region comprising: obtaining a prostate histological image, wherein the histological image is a first of a series of images ordered in increasing magnification; identifying a region or regions of said histological image classified as suspect; extracting one or more image features from at least one of said identified regions; reducing the dimensionality of said extracted feature data; and classifying said extracted region or regions as either benign, malignant, or suspect based on at least one said extracted image feature, whereby if said extracted region or regions are classified as malignant, the histological image has a malignant region, otherwise; if the extracted region or regions are classified as benign, then the histological image does not have a malignant region; or if the extracted region or regions are classified as suspect, the next histological image in the series is obtained and the steps of identifying, extracting, reducing and classifying are repeated.
- a computer-aided diagnostic method to predict the probability that a histological image of a renal tissue contains a malignant region comprising: obtaining a renal histological image, wherein the histological image is a first of a series of images ordered in increasing magnification; identifying a region or regions of said histological image classified as suspect; extracting one or more image features from at least one of said identified regions; reducing the dimensionality of said extracted feature data; and classifying said extracted region or regions as either benign, malignant, or suspect based on at least one said extracted image feature, whereby if said extracted region or regions are classified as malignant, the histological image has a malignant region, otherwise; if the extracted region or regions are classified as benign, then the histological image does not have a malignant region; or if the extracted region or regions are classified as suspect, the next histological image in the series is obtained and the steps of identifying, extracting and classifying are repeated.
- the histological images used in the methods and systems described herein are generated by pyramidal decomposition.
- a computer-aided diagnostic method to predict the probability that a histological image of a breast tissue contains a malignant region comprising: obtaining a breast histological image, wherein the histological image is a first of a series of images ordered in increasing magnification; identifying a region or regions of said histological image classified as suspect; extracting one or more image features from at least one of said identified regions; reducing the dimensionality of said extracted feature data; and classifying said extracted region or regions as either benign, malignant, or suspect based on at least one said extracted image feature, whereby if said extracted region or regions are classified as malignant, the histological image has a malignant region, otherwise; if the extracted region or regions are classified as benign, then the histological image does not have a malignant region; or if the extracted region or regions are classified as suspect, the next histological image in the series is obtained and the steps of identifying, extracting and classifying are repeated.
- the methods and the embodiments described hereinabove are used in the content-based image retrieval system for the comparison of novel histopathological images with a database of histopathological images of known clinical significance.
- a content-based image retrieval system for the comparison of novel histopathological images with a database of histopathological images of known clinical significance comprising: obtaining a histological image; extracting one or more content-based image features from said image; storing said one or more content-based image features in a computer readable media as a database image; constructing said computer readable media to hold one or more such database images; and comparing a query image not included in said database images and one or more of said database images.
- the systems described herein further comprise means for sorting retrieved images according to their image content similarity to the query image; and displaying said retrieved images to the user in the order of said sorting, whereby, in one embodiment, the first displayed image is most similar to said query image.
- the image feature described hereinabove is used in the systems of the invention.
- the similarity between one or more retrieved images and said query image is determined through the use of one or more distance metrics, whereby in another embodiment the distance metrics is Minkowski distance, or Mahalanobis, Hamming, Levenshtein, Chebyshev, geodesic, tangent, or earth mover's distance or their combination in other discrete embodiments.
- the retrieved images used in the systems described herein are sorted by increasing distance such that in another embodiment, the first in the set of retrieved images is most similar to said query image in view of the measured distance metric.
- Minkowski's distance of order n between two P-dimensional data points x = (x_1, ..., x_P) and y = (y_1, ..., y_P) refers to: d(x, y) = (Σ_{i=1}^{P} |x_i − y_i|^n)^{1/n}.
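The Minkowski distance above can be sketched in plain Python; the query vector and database entries below are hypothetical stand-ins for extracted image feature vectors:

```python
def minkowski_distance(x, y, n=2):
    """Minkowski distance of order n between two P-dimensional feature vectors.

    n=1 gives the Manhattan distance, n=2 the Euclidean distance.
    """
    if len(x) != len(y):
        raise ValueError("feature vectors must have equal length")
    return sum(abs(a - b) ** n for a, b in zip(x, y)) ** (1.0 / n)

# Ranking database images by increasing distance to a query (hypothetical data):
query = [0.2, 0.5, 0.1]
database = {"img1": [0.2, 0.5, 0.1],
            "img2": [0.9, 0.1, 0.4],
            "img3": [0.3, 0.4, 0.2]}
ranked = sorted(database, key=lambda k: minkowski_distance(query, database[k]))
```

Sorting by increasing distance puts the most similar database image first, matching the retrieval ordering described in the embodiments above.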
- a content-based image retrieval system for the comparison of novel histopathological images with a database of histopathological images of known clinical significance, comprising: obtaining a histological image; extracting one or more content-based image features from said image; storing said one or more content-based image features in a computer readable media as a database image; constructing said computer readable media to hold one or more such database images; and comparing a query image not included in said database images and one or more of said database images, whereby the display of said retrieved images is performed via a graphical user interface to the user in the order of said ranking of said retrieved images, as well as text output indicating the results of said distance metric comparison
- Example 1 Using Manifold Learning for Content-Based Image Retrieval of Prostate Histopathology
- Prostate cancer is the most commonly diagnosed cancer among males in the U.S., with 200,000 new cases and 27,000 deaths predicted for 2007 (source: American Cancer Society).
- manual examination of prostate biopsy samples under a microscope by an expert pathologist is the gold standard of prostate cancer diagnosis and grading.
- the most common system of numbering or "grading" prostate tissue is the Gleason scale [1], which assigns grades on a scale from 1 (well-differentiated, relatively benign tissue) to 5 (non-differentiated tissue, highly invasive cancer).
- the Gleason paradigm illustrates how cancer grades differ in terms of their architecture (spatial arrangement of nuclei and glands within the tissue with respect to their centers of mass) and morphology (shape and size of glands and nuclei). Glands and nuclei both express architectural and morphological changes as cancer progresses from benign to malignant.
- An example of tissue regions of Gleason grade 3 tissue is shown in Fig. 1 (a), grade 4 tissue in Fig. 1 (b), a single grade 3 gland in Fig. 1 (c), and a grade 4 gland in Fig. 1 (d).
- a gland from benign epithelial tissue is shown in Fig. 1 (e).
- a database of histopathological prostate images is constructed by extracting graph-based, texture, and morphological features from a series of images. These images are then reduced into a low-dimensional space using one of several manifold learning (ML) methods. The ML methods considered in this example are: principal component analysis (PCA), multidimensional scaling (MDS), graph embedding (GE), Isomaps (ISO), local linear embedding (LLE), kernel-based PCA (k-PCA), and Laplacian eigenmaps (LE).
- PCA principal component analysis
- MDS multidimensional scaling
- GE graph embedding
- ISO Isomaps
- LLE local linear embedding
- k-PCA kernel-based PCA
- LE laplacian eigenmaps
- a linear Euclidean distance metric is used to rank the database images in order of similarity to the query image.
- the returned images are then output to the user for analysis.
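The retrieval pipeline described above (feature extraction, dimensionality reduction, Euclidean ranking) can be sketched with NumPy; the feature matrix below is synthetic, and PCA via SVD stands in for any of the listed ML methods:

```python
import numpy as np

# Hypothetical feature matrix: one row of graph/texture/morphology features per image.
rng = np.random.default_rng(0)
features = rng.normal(size=(40, 100))    # 40 database images x 100 features (synthetic)
query = rng.normal(size=(100,))          # feature vector of the query image

# PCA via SVD: project onto the top-M principal components of the database
# (MDS, Isomap, LLE, k-PCA, or Laplacian eigenmaps could be substituted here).
mean = features.mean(axis=0)
U, s, Vt = np.linalg.svd(features - mean, full_matrices=False)
M = 5
components = Vt[:M]                      # top-M principal axes
db_embed = (features - mean) @ components.T
q_embed = (query - mean) @ components.T

# Rank database images by increasing Euclidean distance to the query in the
# low-dimensional embedding space.
dists = np.linalg.norm(db_embed - q_embed, axis=1)
ranking = np.argsort(dists)              # ranking[0] indexes the most similar image
```

The Euclidean metric is meaningful here precisely because the ML step has mapped the high-dimensional features into a low-dimensional space, as the example discusses.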
- a returned image is "relevant" if it is the same class as the query image (Gleason grade 3, grade 4, or benign epithelium), and "irrelevant" otherwise.
- An image region is represented as C_R = (C, f), where C is a 2D grid of image pixels c ∈ C and f is a function that assigns an intensity to c.
- C_R comprises k glands with centroids at manually labeled pixels c_g^1, c_g^2, ..., c_g^k.
- C_R also comprises m nuclei (grey ellipsoids in Fig. 1 (f)) with centroids at manually labeled pixels c_n^1, c_n^2, ..., c_n^m.
- the following 25 features were computed directly from the spatial location of the centroids of the nuclei in C R to characterize nuclear proliferation.
- S_K denotes the set of K nearest neighbors of nuclear centroid c_n^a, where K ∈ {3, 5, 7} and a ∈ {1, 2, ..., m}.
- For each c_n^a, the average distance to the centroids in S_K is computed; the overall average nuclear distance μ_d^K and standard deviation σ_d^K of these values are then calculated, along with the corresponding measure of disorder D_d^K.
- B_{c,r} denotes a ball of pixels with radius r centered on c_n^a.
- the number of pixels corresponding to nuclear centroids c_n^j, j ≠ a, j ∈ {1, 2, ..., m}, in B_{c,r} are counted and the sum denoted as Q_{c,r}.
- the mean and standard deviation of Q_{c,r} for a ∈ {1, 2, ..., m} are denoted by μ_Q^r and σ_Q^r.
- the measure of disorder D_Q^r is also calculated as described above for D_d^K. In this example, values of r ∈ {10, 20, ..., 50}, which were determined empirically, were used.
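The nearest-neighbor nuclear statistics above can be sketched with NumPy. The centroid coordinates are synthetic, and the disorder form 1 − 1/(1 + σ/μ) is an assumption (a form commonly used for such architectural features); the source does not spell out its exact definition:

```python
import numpy as np

rng = np.random.default_rng(1)
centroids = rng.uniform(0, 100, size=(30, 2))   # synthetic nuclear centroids

def knn_stats(centroids, K):
    """Average distance from each centroid to its K nearest neighbors,
    summarized by mean, std, and an (assumed) disorder measure."""
    diff = centroids[:, None, :] - centroids[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)                 # exclude self-distance
    knn_mean = np.sort(d, axis=1)[:, :K].mean(axis=1)
    mu, sigma = knn_mean.mean(), knn_mean.std()
    disorder = 1.0 - 1.0 / (1.0 + sigma / mu)   # assumed disorder form
    return mu, sigma, disorder

# Three statistics for each K in {3, 5, 7}, as in the example.
features = [v for K in (3, 5, 7) for v in knn_stats(centroids, K)]
```

The ball-counting features Q_{c,r} follow the same pattern, counting centroids within radius r instead of averaging K-nearest-neighbor distances.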
- the Voronoi diagram V partitions C R with a series of polygons.
- Polygon P_a is constructed around each c_n^a, creating a tessellation of C_R. Every pixel is assigned to a polygon and every polygon is associated with a nuclear centroid.
- Each P_a has h unique edges E_a^1, E_a^2, ..., E_a^h between all adjacent vertices, with corresponding edge lengths l_1, l_2, ..., l_h, and chord lengths H_1, H_2, ..., H_y between all nonadjacent vertices.
- the Delaunay graph D is a graph constructed so that any two unique nuclear centroids c_n^a and c_n^b, where a, b ∈ {1, 2, ..., m}, are connected by an edge E^{a,b} if their associated polygons in V share a side.
- the average, standard deviation, minimum to maximum ratio, and disorder of the areas and edge lengths are computed for all triangles in D, giving 8 features.
- Minimum Spanning Tree: A spanning tree S of D is a subgraph which connects all c_n^a, a ∈ {1, 2, ..., m}.
- a single D can have many S.
- the minimum spanning tree (MST) denoted by S has a total length less than or equal to the total length of every other spanning tree.
- the average, standard deviation, minimum to maximum ratio, and disorder of the edge lengths in S are computed to obtain an additional 4 features and a total of 24 graph-based features.
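The MST edge-length features can be sketched with a small Prim's-algorithm implementation. The points are synthetic; for points in the plane, the MST of the complete Euclidean graph coincides with the MST restricted to Delaunay edges, so this is equivalent to building S from D:

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, size=(20, 2))      # synthetic nuclear centroids
d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))

# Prim's algorithm: grow the tree one cheapest edge at a time.
n = len(pts)
in_tree = np.zeros(n, bool)
in_tree[0] = True
best = d[0].copy()                           # cheapest edge into each vertex
mst_edges = []
for _ in range(n - 1):
    j = np.argmin(np.where(in_tree, np.inf, best))
    mst_edges.append(best[j])                # length of the edge that adds j
    in_tree[j] = True
    best = np.minimum(best, d[j])

lengths = np.array(mst_edges)
features = [lengths.mean(), lengths.std(), lengths.min() / lengths.max(),
            1 - 1 / (1 + lengths.std() / lengths.mean())]   # assumed disorder form
```

An MST over n centroids always has exactly n − 1 edges, which is why the summary statistics are well defined for any tissue region.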
- c_g^1, c_g^2, ..., c_g^k denote the centroids of k glands within C_R, and a co-adjacency matrix W is constructed wherein the value at row u, column v is W(u, v) = e^{−d(c_g^u, c_g^v)}, where d denotes the distance between the gland centroids.
- This matrix describes the inter-gland spatial relationships in a manner similar to the co-occurrence matrix proposed by Haralick to describe the spatial relationships between pixel intensity values.
- the lumen area is surrounded by a boundary B obtained via a level-set algorithm, where the initial contour is initialized by the user inside the gland near the lumen area (the white region in Fig. 1 (f)) and is allowed to evolve to its final position (white line in Fig. 1 (f)).
- l_B denotes the length of the gland boundary B.
- the distance from the centroid of the gland c_g to boundary pixel c_B^γ is denoted d(c_g, c_B^γ), where c_B^γ ∈ B.
- the average and maximum of d(c_g, c_B^γ) are computed over all boundary pixels γ ∈ {1, 2, ..., |B|}.
- the fractal dimension of the gland boundary was also obtained. Intermediate points on B were sampled at intervals δ ∈ {3, 6, 9} and linearly interpolated between to obtain an approximate boundary length l'_B. The fractal dimensions are obtained as l_B / l'_B.
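The boundary-length ratio above can be sketched as follows; the gland boundary here is a synthetic noisy circle rather than a level-set segmentation result:

```python
import numpy as np

# Synthetic closed gland boundary: an irregular circle sampled densely.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 50 + 3 * np.sin(8 * t)                     # bumps mimic boundary irregularity
boundary = np.c_[r * np.cos(t), r * np.sin(t)]

def curve_length(pts):
    """Perimeter of the closed polyline through pts (in order)."""
    seg = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    return np.sqrt((seg ** 2).sum(1)).sum()

l_full = curve_length(boundary)
# Subsample every delta-th point and measure the coarser interpolated length.
fractal = [l_full / curve_length(boundary[::delta]) for delta in (3, 6, 9)]
```

By the triangle inequality the subsampled length can never exceed the full length, so each ratio is at least 1, and a more irregular boundary yields larger ratios.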
- Manifold learning (ML) methods reduce the dimensionality of a data set from N dimensions to M dimensions, where M « N, while preserving the high-dimensional relationships between data points. Since class structure is preserved, ML techniques are employed to avoid the curse of dimensionality and to enable the use of a Euclidean similarity metric in a low dimensional embedding space. Many ML techniques have been developed over the years and have been tested on a variety of data sets. Some methods employ a linear algorithm to map the data to a low-dimensional space, while others use a non-linear algorithm, assuming that the data lie on a non-linear manifold in the high- dimensional space.
- PR precision vs. recall
- a recall of 1.0 is obtained when all images are retrieved from the database, while a precision of 1.0 is obtained if all retrieved images are relevant.
- the retrieved images are sorted in order of increasing Euclidean distance from the query image, so that the first image returned is most similar to the query.
- Each image is queried against the remaining images in the database, and iteration is carried out through each of the returned images to generate a PR graph.
- the PR graphs obtained for all images of the same class are averaged together.
- the Mean Average Precision (MAP), an average of the precision over all returned images, is calculated as well.
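The precision-recall and average-precision evaluation described above can be sketched in plain Python; the retrieval order and class labels below are hypothetical:

```python
def precision_recall(ranked_labels, query_label):
    """(precision, recall) after each retrieved image; a retrieved image is
    relevant if its class matches the query's class."""
    total_relevant = sum(l == query_label for l in ranked_labels)
    pr, hits = [], 0
    for i, label in enumerate(ranked_labels, start=1):
        hits += label == query_label
        pr.append((hits / i, hits / total_relevant))
    return pr

def average_precision(ranked_labels, query_label):
    """Mean of the precision values at the ranks where relevant images occur."""
    precisions = [p for (p, _), l in zip(precision_recall(ranked_labels, query_label),
                                         ranked_labels) if l == query_label]
    return sum(precisions) / len(precisions)

ranked = ["g3", "g3", "g4", "g3", "benign"]   # hypothetical retrieval order
ap = average_precision(ranked, "g3")
```

MAP is then the mean of `average_precision` over all queries of a class, mirroring the averaging of PR graphs over all images of the same class.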
- Table 1 Mean average precision values for each queried class. Shown are the highest MAP over M ∈ {1, 2, ..., 10}. Boldface values are the highest obtained for each class.
- Each row in the table represents the MAP obtained using a particular feature set and class of the query image, and each column shows the ML method used. Because each of the ML methods was used to reduce the data to M e ⁇ 1 ; 2; ... ; 10 ⁇ , the highest MAP values over all M are shown. For each class, the highest MAP values are shown in boldface. In all three classes, the highest MAP values were obtained when using only morphological features.
- PCA yielded the highest MAP for Gleason grades 3 and 4, while LE produced the highest MAP for benign epithelium.
- MDS performed as well as PCA when Gleason grade 3 was the query image, but performance decreased when Gleason grade 4 was the query image.
- MAP is highest when the number of dimensions is low (between 1 and 2), suggesting that the majority of the discriminating information is held in only a few dimensions.
- Table 2 shows the results from a two-tailed paired Student's t-test comparing MAP values obtained using morphological features alone to those of the indicated feature subsets.
- Shown in Table 2 are the results from two of the ML methods analyzed, MDS and GE. In almost all cases, the values indicate that morphological features result in a statistically significant change in MAP values.
- PR precision vs. recall
- the current protocol for prostate cancer diagnosis involves manual analysis of prostate biopsy tissue by a pathologist to determine the presence or absence of cancer in a tissue sample, followed by Gleason grading to assign a number between 1 (early stage cancer) and 5 (highly infiltrative cancer) to grade the invasiveness of the cancer.
- Gleason grading is currently done by visual analysis of the arrangement, size, and shape of the gland structures and nuclei within the tissue sample.
- CAD Computer-aided diagnosis
- In this Example, a CAD system to detect potentially cancerous areas on digitized prostate histology is presented.
- a system is presented for classifying prostate histology into one of four categories: Gleason grade 3 adenocarcinoma, grade 4 adenocarcinoma, benign epithelium, or benign stroma.
- Over 100 features are automatically computed from within a tissue patch, including 13 features describing nuclear arrangement, 32 features describing gland size and arrangement, and 57 image texture features.
- Non-linear dimensionality reduction methods are applied to the feature set, and a support vector machine (SVM) is used to classify the different digitized tissue patches in the resulting lower-dimensional space.
- SVM support vector machine
- a Boosting algorithm is applied to find the features that contribute the most discriminating information to the classifier.
- the novelty of the work shown in this Example is that the system focuses on distinguishing between intermediate Gleason grades (3 and 4), and the efficacy of textural and morphological features in addition to tissue architecture is explored to distinguish between cancer classes.
- the use of additional image features is integrated into the Gleason grading paradigm.
- Hematoxylin and eosin stained prostate biopsy cores are scanned into a computer using a high resolution whole slide scanner at 40x optical magnification at the Hospital of the University of Pennsylvania, Department of Surgical Pathology.
- An expert pathologist labels regions of tissue within each image as Gleason grade 3 adenocarcinoma, grade 4 adenocarcinoma, benign stroma, or benign epithelial tissue.
- a total of 54 labeled tissue patches were considered for this Example, comprising 11 Gleason grade 3, 7 Gleason grade 4, 17 benign epithelial, and 12 benign stromal regions.
- the feature set includes graph-based, morphological, and global textural features to capture the architectural tissue patterns and arrangement of glands and nuclei in the sample.
- a listing of the features considered in this work are given in Table 3.
- Table 3 Summary of feature types, the number of each feature type, and representative features from each type.
- Figure 7 (a) illustrates a region of benign epithelial tissue made up of six glands, and a single gland is shown in Figure 7 (b). Each region also contains several nuclei, one of which has been magnified in Figure 7 (c).
- An image region R is made up of pixels c, containing m nuclei with centroids c_n^1, c_n^2, ..., c_n^m. In addition R may contain k glands with corresponding centroids c_g^1, c_g^2, ..., c_g^k.
- Figure 7 illustrates these graphs for Gleason grade 3 (Figure 7 (a)-(f)), grade 4 ( Figure 7 (g)-(l)), benign epithelium ( Figure 7 (m)-(r)), and benign stroma ( Figure 7 (s)-(x)) tissues.
- the following features are computed from V.
- the area N_A(P_j) of a polygon P_j is given as |P_j|, the cardinality of the pixel set P_j.
- the average area is computed as N̄_A = (1/m) Σ_{j=1}^{m} N_A(P_j), along with the corresponding disorder D_A.
- the average roundness factor R̄_F and disorder D_R are computed similarly to N̄_A and D_A.
- the Delaunay graph D is constructed such that any two unique nuclear centroids c_n^a and c_n^b, where a, b ∈ {1, 2, ..., m}, are connected by an edge E^{a,b} if P_a and P_b share a side in V.
- the following features are computed from D.
- the average edge length Ē and the maximum edge length E^max are computed over all edges in D.
- Each c_n^a is connected to B other nuclear centroids c_n^{a1}, c_n^{a2}, ..., c_n^{aB} by corresponding edges E^{a,a1}, E^{a,a2}, ..., E^{a,aB} in D, where a1, a2, ..., aB ∈ {1, 2, ..., m}.
- the average length of the edges E^{a,a1}, E^{a,a2}, ..., E^{a,aB} is calculated for each a ∈ {1, 2, ..., m}.
- a spanning tree S of that graph is a subgraph which is a tree and connects all the vertices together.
- a single graph can have many different S.
- Weights ω_E are assigned to each edge E in each S based on the length of E in S.
- the sum of all edge weights ω_E in each S is determined to give the weight Ω_S assigned to each S.
- the minimum spanning tree (MST), denoted by S^M, has a weight Ω_{S^M} less than or equal to Ω_S for every other spanning tree S.
- the features computed from S^M are (i) the average length of all edges in S^M and (ii) the edge length disorder, computed analogously to the disorder measures described above.
- This diagonally symmetric matrix describes the spatial relationships between each pair of glands in R analogous to the manner in which Haralick's co-occurrence matrix describes the relationship between pixel intensity values. 13 of Haralick's second-order features were calculated from W: angular second moment, contrast, correlation, variance, entropy, sum average, sum variance, sum entropy, difference variance, difference entropy, difference moment, and two measurements of correlation.
- N(ε, c_n^a) is the set of pixels c ∈ R contained within a circle with its center at c_n^a and radius ε.
- the number of nuclear centroids c_n^j, j ≠ a, j, a ∈ {1, 2, ..., m}, contained in the set N(ε, c_n^a) is computed.
- the disorder of this feature, D_N, is calculated as described above.
- the size and shape of the structures within a set of glands are also important in discriminating tissue type.
- First-Order Statistical Features The average, median, standard deviation, and range of the pixel values f(c), for c ∈ R, are calculated to quantify first-order statistics.
- Second-Order Statistical Features Second-order co-occurrence texture features are described by the 13 Haralick features described above, using an M × M co-occurrence matrix Z, where M is the maximum pixel intensity of all c ∈ R, in place of the co-adjacency matrix.
- the value in Z(f(c), f(d)) is given by the number of times intensities f (c), f (d) appear within a fixed displacement of each other and at a specified orientation.
- the 13 Haralick features are calculated from Z.
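The co-occurrence construction and a few of the Haralick features can be sketched with NumPy on a toy intensity image (the displacement here is one pixel to the right; the source also varies displacement and orientation):

```python
import numpy as np

def cooccurrence(img, dx=1, dy=0):
    """Grey-level co-occurrence matrix Z: Z[i, j] counts how often intensities
    i and j appear at the given displacement, normalized to probabilities."""
    M = img.max() + 1
    Z = np.zeros((M, M))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            Z[img[y, x], img[y + dy, x + dx]] += 1
    return Z / Z.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = cooccurrence(img)
asm = (P ** 2).sum()                             # angular second moment
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()              # contrast
entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()  # entropy
```

The remaining Haralick features (correlation, variance, sum/difference statistics) are computed from the same matrix P by analogous weighted sums.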
- Steerable Filters - Gabor filters provide varying responses to textural differences in an image.
- the filter kernel G is constructed based on a scale parameter and an orientation parameter.
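The source does not reproduce the kernel formula, so the sketch below uses the common textbook parameterization of a real-valued Gabor kernel (a Gaussian-modulated sinusoid at a given scale and orientation); the exact form used in the work may differ:

```python
import numpy as np

def gabor_kernel(scale, theta, size=15):
    """Real-valued Gabor kernel: a cosine at orientation theta modulated by a
    Gaussian of width 'scale'. Parameterization is an assumption, not the
    source's exact formula."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + yr ** 2) / (2 * scale ** 2))
    return gauss * np.cos(2 * np.pi * xr / (4 * scale))

G = gabor_kernel(scale=2.0, theta=np.pi / 4)
```

Convolving the image with a bank of such kernels over several scales and orientations yields the varying textural responses described above.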
- An SVM uses the above features to discriminate between each pair of tissue types used in this Example: Gleason grade 3 vs. grade 4, grade 3 vs. benign epithelium, grade 3 vs. benign stroma, grade 4 vs. benign epithelium, grade 4 vs. benign stroma, and benign epithelium vs. benign stroma.
- a third of the data from each group was used for training the classifier and the rest was used for testing. Since SVMs perform classification in a high-dimensional space, local linear embedding (LLE) was applied to reduce the dimensionality of the feature space prior to classification. Cross-validation was used to obtain optimal parameters for the classifier.
- LLE local linear embedding
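The LLE-then-SVM pipeline above can be sketched with scikit-learn; the two-class feature matrix below is synthetic stand-in data, and the SVM parameters are illustrative defaults rather than the cross-validated ones used in the Example:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the >100-dimensional tissue-patch feature vectors.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (30, 100)),      # class 0 patches
               rng.normal(2, 1, (30, 100))])     # class 1 patches
y = np.array([0] * 30 + [1] * 30)

# Reduce dimensionality with LLE, then classify with an SVM.
X_low = LocallyLinearEmbedding(n_neighbors=10, n_components=5).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(
    X_low, y, train_size=1 / 3, random_state=0, stratify=y)  # 1/3 for training
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

The one-third training split mirrors the protocol stated above; in practice each pairwise tissue comparison would get its own classifier.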
- Phenotypic signatures for 54 regions consisting of 11 Gleason grade 3 tissue, 7 Gleason grade 4, 17 benign epithelium, and 12 benign stroma were generated.
- Table 2 lists classification results for each paired comparison of the four classes.
- Table 2. SVM classification accuracy, standard deviation, and the feature assigned the highest weight by the AdaBoost algorithm.
- Figure 9 (a) Gleason grade 3 (circles), grade 4 (squares), and benign stroma (downward triangles) are shown, and Figure 9 (b) shows grade 3 and grade 4 regions along with benign epithelium (upward triangles).
- the graphs illustrate the inter-class relationships between the data using this feature set. Because not all tissue regions contain glands, only graph-based and textural features were used to generate the graphs in Figure 9 (a) and (b).
- Figure 9 (c) plots the tissue regions for which morphological gland features could be calculated. The separation between grade 3 and grade 4 tissues improves with the inclusion of these features, indicating that their addition to the classifier improves accuracy.
- Implicit feature selection via the AdaBoost algorithm revealed that texture features are weighted heavily, particularly in discriminating the adenocarcinoma tissues from benign epithelium and in discriminating epithelium from stroma.
- Figures 10(b)-(e) and 10(g)-(j) represent the corresponding texture feature representations (Gabor, Sobel, Gradient Magnitude) of the low- (10(a)) and high-grade (10(f)) breast cancers.
- Cellular architecture and morphology features, which are visible at higher magnifications, are extracted at 20x and 40x magnifications and include total normalized gland area, gland perimeter, lumen area, lumen-to-gland area ratio, gland area-to-perimeter ratio, number of nuclei surrounding the lumen, ratio of nuclei to gland perimeter, number of visible nuclei layers around the gland, radial gradient index, and fractal dimension of the boundary.
- each breast histology image is represented by a high (>500) dimensional attribute vector Vp.
- Non-linear dimensionality reduction techniques are then applied to Vp to reduce the data dimensionality and a multiple classifier system is used to distinguish between the various objects in the lower dimensional space.
- Feature selection methods via AdaBoost, and Forward and Backward selection, are used to identify the most discriminatory features for each cancer grade. Evaluation of the CAD model is done on approximately 150 independent studies in test sets S2 and S3.
- the set of features comprising Gabor filter features, Greylevel statistical features, and Haralick co-occurrence features are extracted from each of the 3 color channels and at three different window sizes, generating a set of feature scenes.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US83469706P | 2006-08-01 | 2006-08-01 | |
PCT/US2007/017181 WO2009017483A1 (en) | 2007-08-01 | 2007-08-01 | Malignancy diagnosis using content-based image retreival of tissue histopathology |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2174263A1 true EP2174263A1 (en) | 2010-04-14 |
EP2174263A4 EP2174263A4 (en) | 2013-04-03 |
Family
ID=40304582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP07836399A Withdrawn EP2174263A4 (en) | 2006-08-01 | 2007-08-01 | Malignancy diagnosis using content-based image retreival of tissue histopathology |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP2174263A4 (en) |
WO (1) | WO2009017483A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8208698B2 (en) | 2007-12-14 | 2012-06-26 | Mela Sciences, Inc. | Characterizing a texture of an image |
US9367216B2 (en) * | 2009-05-21 | 2016-06-14 | Sony Interactive Entertainment Inc. | Hand-held device with two-finger touch triggered selection and transformation of active elements |
EP2473314A1 (en) * | 2009-09-04 | 2012-07-11 | Precitec KG | Method for classifying a laser process and a laser material processing head using the same |
US20110064287A1 (en) * | 2009-09-14 | 2011-03-17 | Alexandru Bogdan | Characterizing a texture of an image |
US20110110575A1 (en) * | 2009-11-11 | 2011-05-12 | Thiagarajar College Of Engineering | Dental caries detector |
CN102708841B (en) * | 2012-04-16 | 2015-04-29 | 广东威创视讯科技股份有限公司 | Signal window-opening method and device |
US9980636B2 (en) | 2015-05-04 | 2018-05-29 | Adaptive Sensory Technology, Inc. | Methods and systems using fractional rank precision and mean average precision as test-retest reliability measures |
CN104933446B (en) * | 2015-07-15 | 2018-09-18 | 福州大学 | A method of it is verified for computer-aided diagnosis breast sonography characteristic validity |
WO2018186789A1 (en) * | 2017-04-06 | 2018-10-11 | Cadess Medical Ab | Segmentation of histological tissue images into glandular structures for prostate cancer tissue classification |
JP7381347B2 (en) * | 2017-05-31 | 2023-11-15 | コミッサリア ア レネルジー アトミーク エ オ ゼネルジ ザルタナテイヴ | Method for determining the invasive potential of tumor cells |
CN110136134A (en) * | 2019-04-03 | 2019-08-16 | 深兰科技(上海)有限公司 | A kind of deep learning method, apparatus, equipment and medium for road surface segmentation |
CN110275991B (en) * | 2019-06-03 | 2021-05-14 | 腾讯科技(深圳)有限公司 | Hash value determination method and device, storage medium and electronic device |
CN111192251B (en) * | 2019-12-30 | 2023-03-28 | 上海交通大学医学院附属国际和平妇幼保健院 | Follicle ultrasonic processing method and system based on level set image segmentation |
CN111695460B (en) * | 2020-05-29 | 2023-04-21 | 天津师范大学 | Pedestrian re-identification method based on local graph convolution network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050262031A1 (en) * | 2003-07-21 | 2005-11-24 | Olivier Saidi | Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition |
WO2006020627A1 (en) * | 2004-08-11 | 2006-02-23 | Aureon Laboratories, Inc. | Systems and methods for automated diagnosis and grading of tissue images |
US20060159367A1 (en) * | 2005-01-18 | 2006-07-20 | Trestle Corporation | System and method for creating variable quality images of a slide |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7379627B2 (en) * | 2003-10-20 | 2008-05-27 | Microsoft Corporation | Integrated solution to digital image similarity searching |
-
2007
- 2007-08-01 EP EP07836399A patent/EP2174263A4/en not_active Withdrawn
- 2007-08-01 WO PCT/US2007/017181 patent/WO2009017483A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050262031A1 (en) * | 2003-07-21 | 2005-11-24 | Olivier Saidi | Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition |
WO2006020627A1 (en) * | 2004-08-11 | 2006-02-23 | Aureon Laboratories, Inc. | Systems and methods for automated diagnosis and grading of tissue images |
US20060159367A1 (en) * | 2005-01-18 | 2006-07-20 | Trestle Corporation | System and method for creating variable quality images of a slide |
Non-Patent Citations (5)
Title |
---|
AYRES F J ET AL: "Performance Analysis of Oriented Feature Detectors", SIBGRAPI 2005 : XVIII BRAZILIAN SYMPOSIUM ON COMPUTER GRAPHICS AND IMAGE PROCESSING ; [NATAL, RIO GRANDE DO NORTE, BRAZIL, OCTOBER 9 - 12, 2005 ; CONFERENCE PROCEEDINGS], IEEE COMPUTER SOCIETY, LOS ALAMITOS, CALIF. [U.A.], 9 October 2005 (2005-10-09), pages 147-154, XP008104324, DOI: 10.1109/SIBGRAPI.2005.38 ISBN: 978-0-7695-2389-7 [retrieved on 2006-02-27] * |
KAI HUANG ET AL: "Feature reduction for improved recognition of subcellular location patterns in fluorescence microscope images", MANIPULATION AND ANALYSIS OF BIOMOLECULES, CELLS, AND TISSUES: 28-29 JANUARY 2003, SAN JOSE, CALIFORNIA, USA, SPIE, BELLINGHAM, WASH, vol. 4962, 1 June 2003 (2003-06-01), pages 307-318, XP008104334, DOI: 10.1117/12.477903 ISBN: 978-0-8194-4762-3 [retrieved on 2003-07-28] * |
MAVROFORAKIS M E ET AL: "Mammographic masses characterization based on localized texture and dataset fractal analysis using linear, neural and support vector machine classifiers", ARTIFICIAL INTELLIGENCE IN MEDICINE, ELSEVIER, NL, vol. 37, no. 2, 1 June 2006 (2006-06-01), pages 145-162, XP025138446, ISSN: 0933-3657, DOI: 10.1016/J.ARTMED.2006.03.002 [retrieved on 2006-06-01] * |
See also references of WO2009017483A1 * |
STEPHEN J KEENAN ET AL: "An automated machine vision system for the histological grading of cervical intraepithelial neoplasia (CIN)", JOURNAL OF PATHOLOGY, JOHN WILEY & SONS LTD, GB, vol. 192, no. 3, 1 November 2000 (2000-11-01), pages 351-362, XP008104335, ISSN: 0022-3417, DOI: 10.1002/1096-9896(2000)9999:9999<::AID-PAT H708>3.0.CO;2-1 [retrieved on 2000-08-15] * |
Also Published As
Publication number | Publication date |
---|---|
EP2174263A4 (en) | 2013-04-03 |
WO2009017483A1 (en) | 2009-02-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8280132B2 (en) | Malignancy diagnosis using content-based image retreival of tissue histopathology | |
EP2174263A1 (en) | Malignancy diagnosis using content-based image retreival of tissue histopathology | |
EP2391887B1 (en) | Image-based risk score-a prognostic predictor of survival and outcome from digital histopathology | |
Moghbel et al. | A review of breast boundary and pectoral muscle segmentation methods in computer-aided detection/diagnosis of breast mammography | |
Kumar et al. | Detection and classification of cancer from microscopic biopsy images using clinically significant and biologically interpretable features | |
Doyle et al. | Cascaded discrimination of normal, abnormal, and confounder classes in histopathology: Gleason grading of prostate cancer | |
Tan et al. | Optimization of breast mass classification using sequential forward floating selection (SFFS) and a support vector machine (SVM) model | |
Chekkoury et al. | Automated malignancy detection in breast histopathological images | |
Sapate et al. | Breast cancer diagnosis using abnormalities on ipsilateral views of digital mammograms | |
Atupelage et al. | Computational hepatocellular carcinoma tumor grading based on cell nuclei classification | |
Lopez et al. | Exploration of efficacy of gland morphology and architectural features in prostate cancer gleason grading | |
WO2013019856A1 (en) | Automated malignancy detection in breast histopathological images | |
Al-Thelaya et al. | Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey | |
Shirazi et al. | Automated pathology image analysis | |
Elter et al. | Contour tracing for segmentation of mammographic masses | |
Roullier et al. | Mitosis extraction in breast-cancer histopathological whole slide images | |
Aggarwal et al. | Patient-Wise Versus Nodule-Wise Classification of Annotated Pulmonary Nodules using Pathologically Confirmed Cases. | |
Sparks et al. | Content-based image retrieval utilizing explicit shape descriptors: applications to breast MRI and prostate histopathology | |
Wajeed et al. | A Breast Cancer Image Classification Algorithm with 2c Multiclass Support Vector Machine | |
Makandar et al. | Classification of mass type based on segmentation techniques with support vector machine model for diagnosis of breast cancer | |
Rajoub et al. | Segmentation of breast tissue structures in mammographic images | |
Atupelage et al. | Classification of prostate histopathology images based on multifractal analysis | |
Tsochatzidis et al. | Microcalcification oriented content-based mammogram retrieval for breast cancer diagnosis | |
Mosquera-Lopez et al. | Computer-aided prostate cancer diagnosis: principles, recent advances, and future prospective | |
Jyash et al. | EFFICIENT MACHINE LEARNING ALGORITHM FOR CANCER DETECTION USING BIOMEDICAL IMAGE |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20090302 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA HR MK RS |
|
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20130228 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06K 9/62 20060101ALI20130222BHEP Ipc: G06K 9/00 20060101AFI20130222BHEP Ipc: G06K 9/40 20060101ALI20130222BHEP Ipc: G06K 9/36 20060101ALI20130222BHEP Ipc: G06K 9/34 20060101ALI20130222BHEP |
|
17Q | First examination report despatched |
Effective date: 20131120 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20151106 |