GB2527930A - Method and apparatus - Google Patents


Info

Publication number
GB2527930A
GB2527930A
Authority
GB
United Kingdom
Prior art keywords
image
tumour
data
classification data
comparing
Prior art date
Legal status
Withdrawn
Application number
GB1509878.3A
Other versions
GB201509878D0 (en)
Inventor
Jonathon Tunstall
Peter Hamilton
Yinhai Wang
David Mccleary
James Diamond
Current Assignee
Philips DCP Belfast Ltd
Original Assignee
PathXL Ltd
Priority date
Application filed by PathXL Ltd filed Critical PathXL Ltd
Publication of GB201509878D0
Publication of GB2527930A


Classifications

    • G06T 7/0012: Biomedical image inspection
    • G06T 7/155: Segmentation; edge detection involving morphological operators
    • G06V 20/695: Microscopic objects; preprocessing, e.g. image segmentation
    • G06T 2207/10024: Color image
    • G06T 2207/10056: Microscopic image
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20081: Training; learning
    • G06T 2207/30024: Cell structures in vitro; tissue sections in vitro
    • G06T 2207/30096: Tumor; lesion
    • G06T 7/11: Region-based segmentation
    • G06T 7/45: Analysis of texture based on statistical description of texture using co-occurrence matrix computation

Abstract

A computer implemented method of identifying a tumour structure in an image by first applying a threshold to the image from which a supra threshold region can be identified. This region is then compared with stored classification data to identify a structure. If the identified structure is found to lie adjacent to a boundary of the region then an additional region adjacent to that boundary may be selected and processed in the same way. The image may be divided into tiles and each tile thresholded to identify a supra threshold region for comparison.

Description

Method and Apparatus

The present invention relates to image processing, in particular to systems, methods, apparatus, computer program products and servers for processing an image of a tissue section of a tissue sample, to analyse and identify a tumour region of the tissue sample for the purpose of macrodissection of the tissue sample.
The accurate separation of tumour tissue from non-tumour tissue is a prerequisite for laboratory molecular analysis of many types of tumour tissues.
In standard laboratory practice, tumour tissue for analysis is obtained by cutting a thin tissue section from a formalin fixed, paraffin embedded (FFPE) tissue block known to contain tumour and non-tumour tissue, and preparing the tissue section on a glass histology slide.
The tissue section will usually be cut with a thickness of 10-40 µm to allow the outline of tissue structures to be made out by viewing the slide under a standard laboratory microscope.
In order to determine a boundary of a tumour region of a tissue section, it has been proposed to prepare a histology slide with the tissue section, comprising staining the tissue section with a laboratory dye and covering with a glass cover slip according to standard laboratory practice, for viewing and analysis by a trained pathologist.
A method of marking tumour regions of the tissue section comprises the pathologist viewing the slide using a standard laboratory microscope and, based on subjective assessment, for example based on memory or on visual comparison with a look-up chart, identifying regions of the tissue section appearing to correspond to model tumour structures, and indicating boundaries of the regions via a manual annotation with a marker pen on the glass coverslip.
Following annotation, a sequential tissue section, preferably having a thickness in the range indicated above, can be cut from the tissue block for further testing.
It is often desirable to test tissue samples to identify the type of a cancerous or other pathological tissue that may be present. The sensitivity of these tests may depend upon the relative and absolute quantity of cancerous or pathological tissue in the sample, for example the number and percentage of tumour cells in the tissue. This is a complex evaluation, as there are many cell types within a given tissue sample, and even in regions of the sample which are predominantly tumour, a mixture of non-tumour and normal cells can be found.
Such is the complexity of the pathology image that this task is normally carried out subjectively by an experienced pathologist, who visually estimates the percentage of tumour cells within the sample or within a defined region of the sample. This visual estimation can be highly inaccurate. Aspects of the disclosure provide a method to generate objective, reproducible measurements of tumour cell populations based on digital images of tissues using computerised image analysis.
Embodiments of the disclosure provide multi-resolution analysis of high resolution microscopic images. Other embodiments of the disclosure provide object-based tissue segmentation for tumour region identification. Other embodiments of the disclosure provide tumour cell size averaging for large scale tumour cell number estimation. These embodiments may be employed independently, or in combination. Some aspects of the disclosure provide an automated method for tumour boundary indication which will accurately depict the tumour region for either manual dissection or for dissection using an automated instrument. Estimates of the relative quantity of tumour tissue within a boundary may be used to determine whether to dissect a tissue sample along that boundary.
In an aspect there is provided a computer implemented method of determining an amount of tumour cells in a tissue sample, the method comprising: obtaining first image data describing an image of a tissue sample at a first resolution; obtaining second image data describing the image at a second resolution, wherein the first resolution is lower than the second resolution; selecting a candidate tumour region from the second image data based on texture data determined from the first image data; identifying a tumour structure in the candidate region of the second image data; determining a number of cells in the tumour structure and a cancerous tissue type based on its area and an estimate of tumour cell area to estimate an amount of tumour cells in the tissue sample.
In an embodiment the method comprises obtaining third image data, describing the image at a third resolution, wherein the third resolution is higher than the second resolution, and identifying a tumour structure in the candidate region of the third image data.
In an embodiment the method comprises selecting a subset of image regions corresponding to the identified tumour objects, wherein the subset of image regions is smaller in area than the total area of the tumour objects, and counting cells in the subset of regions to estimate the area of a tumour cell.
In an embodiment selecting a candidate tumour region comprises determining texture data, spatial frequency data, morphological image data, or another image property, based on the first image data, and comparing that data with classification data to classify regions as candidate tumour regions or as non-tumour regions.
In an embodiment determining texture data comprises determining the grey level co-occurrence matrix.
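As a concrete illustration of the texture step, a grey level co-occurrence matrix can be computed directly from quantised grey levels. The sketch below (function names are illustrative, not from the patent) builds the matrix for a given pixel offset and derives a simple contrast statistic from it:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Grey level co-occurrence matrix of a 2D list of integer grey levels,
    counting pairs (p, q) where q lies at offset (dx, dy) from p."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
    return m

def contrast(m):
    """Texture contrast: (i - j)^2 weighted by the normalised co-occurrence."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m))) / total
```

High contrast indicates frequent transitions between dissimilar grey levels, which is one way tumour texture may differ from surrounding tissue.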
In an embodiment identifying a tumour structure comprises identifying an object in the candidate region, determining at least one property of the object, and comparing the at least one property with classification data. In an embodiment the property of the object comprises a boundary of the object, and/or the shape of the boundary.
In an embodiment the at least one property is one of, or any combination of, image properties selected from the group comprising: a statistical moment, a moment invariant feature, a feature derived from a grey level co-occurrence matrix, a spectral feature and a morphological feature.
In an embodiment the method comprises selecting a model of the object based on the classification data, and comparing the model with objects in regions of the image adjacent to the candidate region.
In an embodiment comparing the model with objects in regions of the image adjacent to the candidate region comprises identifying objects which span adjacent regions of the image.
In an embodiment the method comprises applying a threshold to the image data to generate a mask adapted to select tissue image data and to discard background data.
In an embodiment the threshold is selected to reduce the variance of at least one of the background data and the tissue data.
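Selecting a threshold that minimises within-group variance can be done with Otsu's method, which equivalently maximises the between-class variance. The plain-Python sketch below is one plausible reading of this embodiment (names are illustrative):

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level t that maximises between-class variance
    (equivalently, minimises the within-class variance of the two groups)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    w_b = sum_b = 0
    best_t, best_between = 0, -1.0
    for t in range(levels):
        w_b += hist[t]                 # background weight up to t
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight above t
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b
        mu_f = (sum_all - sum_b) / w_f
        between = w_b * w_f * (mu_b - mu_f) ** 2
        if between > best_between:
            best_between, best_t = between, t
    return best_t
```

For a bimodal histogram (e.g. dark stained tissue on a bright slide background) the chosen threshold falls between the two modes.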
In an embodiment the regions comprise tiles of the image selected from the tissue data.
In an embodiment the method comprises obtaining data identifying the tissue type, and selecting a subset of the classification data for comparison with objects in the region based on the tissue type.
In an embodiment the method comprises reconstructing an image based on the identified objects, and their locations.
In an embodiment the classification data is obtained from a knowledge file.
In some embodiments identifying a tumour structure comprises identifying a boundary of the tumour structure. In an embodiment identifying the boundary comprises: identifying at least one image property of a part of the candidate region of the image; comparing the image property with classification data to classify the part of the candidate region as a tumour region or a non-tumour region; and identifying a boundary of the part of the candidate region of the image.
The at least one image property may comprise one or any combination of image properties from the group comprising: a statistical moment, a moment invariant feature, a feature derived from a grey level co-occurrence matrix, a spectral feature and a morphological feature.
In an embodiment the at least one image property comprises at least one image property of each of a plurality of different colour components of the image concatenated together to provide a feature vector.
In an embodiment the method comprises applying a smoothing algorithm to the boundary to provide a smoothed template for cutting along the tissue boundary. In an embodiment applying the smoothing algorithm comprises applying a forward frequency-domain transform and an inverse frequency-domain transform. In an embodiment applying the smoothing algorithm comprises representing the image boundary as a sequence of transition indicators indicating the direction of a transition between pixels on the image boundary, and smoothing the sequence of transition indicators.
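One way to realise frequency-domain boundary smoothing is via Fourier descriptors: treat the boundary points as complex numbers, take a forward transform, discard the high-frequency coefficients, and transform back. The sketch below is an assumed implementation (not taken from the patent) using a direct DFT for clarity:

```python
import cmath

def smooth_boundary(points, keep):
    """Low-pass smooth a closed boundary via Fourier descriptors.
    points: list of (x, y); keep: lowest frequencies retained on each side."""
    n = len(points)
    z = [complex(x, y) for x, y in points]
    # forward DFT of the boundary sequence
    F = [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)) / n
         for k in range(n)]
    # zero out the mid-band (high) frequencies
    for k in range(n):
        if keep < k < n - keep:
            F[k] = 0
    # inverse DFT back to smoothed coordinates
    out = []
    for t in range(n):
        s = sum(F[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n))
        out.append((s.real, s.imag))
    return out
```

With all frequencies kept the boundary is reconstructed exactly; reducing `keep` progressively rounds off fine jagged detail, producing a smoother cutting template.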
In an embodiment the method comprises classifying the part of the candidate region based on a comparison between the feature vector and the classification data.
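Concatenating per-channel features into one vector and comparing it with classification data could, for example, be done with a nearest-neighbour comparison against labelled exemplar vectors. This is a hypothetical sketch; the patent does not specify the comparison rule:

```python
def colour_feature_vector(channels, features):
    """Concatenate each feature computed on each colour channel into one vector."""
    return [f(ch) for ch in channels for f in features]

def classify_region(vector, classification_data):
    """Label the region with the class of the nearest exemplar vector.
    classification_data: list of (label, vector) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(classification_data, key=lambda item: dist(vector, item[1]))[0]
```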
In an embodiment the method comprises reconstructing an image based on the identified objects, and their locations, and displaying a visual indication of the relative number of tumour cells on the image. In an embodiment the visual indication comprises a colour map in which the image colour is selected based on the relative number of tumour cells on the image.
An aspect of the disclosure provides a computer implemented method of identifying structures in an image of a tissue sample. This method may be used to provide "object guided segmentation" for use in any one of the other methods described herein. This method comprises applying a threshold to the image, and identifying a first supra threshold region in the image; comparing data describing the first supra threshold region with a plurality of items of stored classification data; selecting, based on the comparing, a first item of the stored classification data; and, based on the selected item of stored classification data, selecting at least one other item of classification data, and comparing a second supra threshold region in the image with the at least one other item of classification data.
The at least one other item of classification data may comprise a subset of items selected from the plurality of items of stored classification data. The method may comprise dividing the image into tiles, and selecting a tile of the image, wherein the first supra threshold region and the second supra threshold region are regions of the same selected tile.
The method may comprise identifying, based on the comparing, a structure in the selected tile, determining whether the structure is adjacent to a boundary of the tile, and in the event that the structure is adjacent a boundary, selecting a second tile from the image adjacent to that boundary. The method may comprise identifying a supra threshold region in the second tile, and comparing data describing that supra threshold region with a plurality of items of stored classification data. The method may also comprise identifying a structure in the image as a tumour structure based on the classification data, and determining a number of cells in the tumour structure based on its area and an estimate of tumour cell area to estimate an amount of tumour cells in the tissue sample.
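The tile-growing step above can be sketched as follows: given the pixels of a structure in local tile coordinates, determine which tile edges it touches and hence which neighbouring tiles to select next. Names are illustrative only:

```python
def neighbours_to_visit(object_pixels, tile_w, tile_h):
    """Return (dx, dy) offsets of adjacent tiles to process next, one per
    tile edge the structure touches (pixel coordinates local to the tile)."""
    offsets = set()
    for x, y in object_pixels:
        if x == 0:
            offsets.add((-1, 0))
        if x == tile_w - 1:
            offsets.add((1, 0))
        if y == 0:
            offsets.add((0, -1))
        if y == tile_h - 1:
            offsets.add((0, 1))
    return offsets
```

A structure well inside the tile yields no offsets, so processing stops; a structure touching the right edge triggers selection of the tile to the right, where segmentation and comparison repeat.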
In an aspect there is provided a computer implemented method of identifying structures in an image of a tissue sample, the method comprising: dividing the image into tiles; applying a threshold to the image, and identifying a supra threshold region in a first tile of the image; comparing data describing the supra threshold region with a plurality of items of stored classification data; identifying, based on the comparing, a structure in the first tile; determining whether the structure is adjacent to a boundary of the first tile, and in the event that the structure is adjacent to a boundary, selecting a second tile from the image adjacent to that boundary.
The method may comprise selecting, based on the comparing, a first item of the stored classification data; based on the selected item of stored classification data, selecting at least one other item of classification data, and comparing a second supra threshold region in the image with the at least one other item of classification data.
Embodiments of the disclosure will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 shows a schematic illustration of a macrodissection system for excising tumour tissue from a tissue sample;
Figure 2 shows a flow chart illustrating a method of using the macrodissection system of Figure 1;
Figure 3 shows a flow chart illustrating a method of using the macrodissection system of Figure 1;
Figure 4 shows a flow chart illustrating a method of using the macrodissection system of Figure 1;
Figure 5 illustrates a method of defining a boundary of a region such as may be used with any other method described herein; and
Figure 6 illustrates a method of identifying structures such as may be used in the methods illustrated in Figure 3, Figure 4, or Figure 5.
Embodiments of the disclosure are directed towards identifying the relative number of cells in a sample, or in a region of a sample for example within a boundary, that can be classified as tumour.
Based on this determination, a decision can be made to dissect the region from the sample, or to dissect a larger region, or not to dissect the sample at all, prior to testing the sample to identify the type of cancer cells that may be present in the sample.
Figure 1 shows a macrodissection system 60, comprising a tissue sample 2, a histology slide 10 carrying a tissue section 4 taken from the tissue sample 2, an imager 50, a computer 20, a controller 30, and a cutting device 40.
The imager 50 is arranged to obtain a digital image of the tissue section 4 on the histology slide 10 and to provide data representing the image to the computer 20. The computer 20 is configured to receive the data from the imager 50, to run an algorithm on the data to generate a result, and to provide the result to the controller 30 for controlling, guiding or initiating a cutting operation.
The tissue sample 2 comprises a formalin fixed, paraffin embedded tissue block suspected of containing at least one tumour region 6, and containing non-tumour regions 8. Tumour regions, such as the tumour schematically illustrated by region 6, are tissue regions containing abnormal patterns of growth, which may include, but are not limited to, any of dysplasia, neoplasia, carcinoma in situ and cancerous tissue, or any combination thereof. The non-tumour regions 8 may also contain tumour tissue, but in a lower concentration than is present in the tumour regions 6.
The tissue section 4 is a section cut from the tissue sample 2, having a thickness in the range of 10 to 40 µm, although the skilled practitioner will understand that another thickness could be chosen as appropriate. The histology slide 10 is a standard glass laboratory slide or any suitable equivalent providing a light transmissive surface for receiving and displaying the tissue section 4 to the imager 50.
The imager 50 is configured to generate an image of the tissue section 4 on the histology slide 10 and to provide data representing the image to computer 20. The imager comprises any suitable image generating means such as digital slide scanning systems whereby images are reconstructed following acquisition of multiple image tiles, or image lines, or an analogue or digital camera.
The computer 20 comprises memory 24 and a processor 22. The memory 24 is configured to receive and store data from the imager 50. The processor 22 is coupled to access image data stored in the memory 24, to implement an image processing algorithm on the data to classify a region of the image as a tumour region and identify the relative number of tumour cells and non-tumour cells in a sample, or a part of the sample.
Figure 2 illustrates a method comprising preparing 201 a tissue sample, and generating 202 an image of the slide. The image data is stored into memory 24 for later retrieval.
The processor 22 obtains the image data, and identifies 203 based on a low resolution image regions which may relate to tumour tissues. Using the regions identified in the low resolution data as a guide, the processor 22 segments 204 objects from higher resolution image data.
The processor 22 then compares the objects with a data model (e.g. classification data, as described below) to determine the type of the objects, for example whether they relate to known tissue structures, or relate to tumour tissue structures.
Once the objects have been classified, the processor may define boundaries of the objects for example as discussed below with reference to Figure 5. The processor then reconstructs 206 the image based on the known locations of the tissue (non-tumour) regions, and the tumour objects identified by the segmentation 204.
The processor 22 obtains an estimate of the size of a tumour cell, and an estimate of the size of a non-tumour cell, and determines the percentage number of tumour cells in the reconstructed image based on the area of the tumour objects, the area of the non-tumour regions, and the tumour and non-tumour cell areas.
An example of this method is illustrated by the flow chart in Figure 3.
The processor 22 obtains image data at a first resolution, and defines candidate regions in the image data at the first resolution which may relate to tumour tissues. The candidate regions may be defined based on their texture, or another image property such as the size and/or location of objects. The regions may be defined according to the method described with reference to Figure 5 below.
The processor 22 then obtains second image data corresponding to the defined candidate regions. The second image data has a higher resolution than the first resolution.
The processor then identifies 3004 objects in the candidate region, for example using an object based segmentation, as described below with reference to Figure 4. This identifies objects as tumour objects in the second (higher resolution) image. The processor then estimates 3006 the area of a tumour cell by selecting regions of image data at native resolution corresponding to the objects identified as tumours. The regions can be selected at random (in Monte-Carlo fashion) from image data corresponding to identified tumour objects. The cell count in these selected regions (which may be a subset, e.g. less than all, of the tumour objects) is then used by the processor to determine an estimate of the area of a tumour cell.
As an example, cell count can be estimated based on nuclear segmentation and object counting. This can provide an estimate of cell numbers per unit area which can be used (e.g. scaled based on area) to extrapolate the estimate of cell number per unit area to estimate the number of cells in a given area, e.g. an entire tumour area.
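The sampling and extrapolation arithmetic above can be sketched as follows; function names are illustrative, and the per-region cell counts are assumed to come from a separate nuclear segmentation step:

```python
import random

def sample_regions(regions, k, seed=0):
    """Pick k regions at random (Monte-Carlo fashion) from the tumour regions."""
    return random.Random(seed).sample(regions, min(k, len(regions)))

def estimate_total_cells(region_counts, region_areas, total_area):
    """Extrapolate cell counts in the sampled regions to the whole tumour area:
    density (cells per unit area) scaled by the total area."""
    density = sum(region_counts) / sum(region_areas)
    return density * total_area
```

For example, 24 cells counted over 200 units of sampled area gives a density of 0.12 cells per unit area, so a 1000-unit tumour area yields an estimate of 120 cells.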
A similar approach can be used to determine the area of a non-tumour cell.
Based on the known area of the tumour and non-tumour tissues, and the estimated cell area, the processor 22 estimates 3008 the total number of tumour cells and/or the relative quantity of tumour and non-tumour cells present in the sample.
Figure 4 illustrates one such method in more detail, and Figure 6 illustrates a related method which may be used to identify structures in an image.
As shown in Figure 4, the processor 22 obtains a first image dataset based on an image of a tissue sample, and a second image dataset based on the same sample where the second data set is at higher resolution than the first.
The processor 22 selects a threshold to apply to the image data that divides the data into two groups. The processor is configured to select a threshold that reduces, for example minimises, the variance of the data values within each group. The processor 22 then applies the threshold value to the image data to generate a mask. The processor then applies the mask to segment the image data into two groups: tissue and background.
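Applying the mask amounts to partitioning pixel values around the threshold. The sketch below assumes stained tissue is darker (lower values) than the bright slide background; that direction is an assumption, not stated in the text:

```python
def apply_threshold_mask(pixels, threshold):
    """Split greyscale pixel values into tissue and background groups.
    Assumes tissue pixels are at or below the threshold (darker than
    the bright background); swap the comparisons for the opposite case."""
    tissue = [p for p in pixels if p <= threshold]
    background = [p for p in pixels if p > threshold]
    return tissue, background
```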
The processor 22 then selects a region of tissue from the first image dataset (low resolution) and determines the grey level co-occurrence matrix of the first image dataset to provide an indication of the texture of the low resolution data. The low resolution data is subdivided into regions, and the texture data for each region is compared with classification data to classify it as a candidate tumour region or a non-tumour region.
The processor 22 then selects 2004 a candidate tumour region from the second image data based on this comparison.
The processor 22 then identifies types of tissue and types of objects in the candidate region according to steps 2006, 2007, 2008, 2009 as described below.
To identify objects in the candidate region, the processor 22 applies an object based segmentation method. Accordingly, a set of image properties of the object is determined. The set of image properties may include properties selected from the group comprising: a statistical moment, a moment invariant feature, a feature derived from a grey level co-occurrence matrix, a spectral feature and a morphological feature.
The processor 22 then compares the set of image properties with classification data to classify the object according to what type of tissue structure the object belongs to. This may be performed according to the method of object guided segmentation described with reference to Figure 6, below.
The classification data can be provided as a knowledge file, stored in the memory 24. The knowledge file stores an association between image properties, and tissue types to enable regions of an image to be classified based on image properties. In one example a knowledge file comprises a file that stores the conditions that define a tissue object, structure, tissue type or disease type. Image properties may be selected from the list comprising object size, shape, colour, density, contextual data, and textural data.
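A knowledge file of this kind could be represented as a list of rules, each mapping ranges of object properties to a tissue class. The format and class names below are hypothetical, chosen only to illustrate the idea of condition-based classification:

```python
# Hypothetical knowledge-file entries: each rule maps property ranges
# (inclusive lower/upper bounds) to a tissue class.
KNOWLEDGE = [
    {"class": "tumour gland", "size": (500, 5000), "circularity": (0.0, 0.6)},
    {"class": "normal gland", "size": (200, 2000), "circularity": (0.6, 1.0)},
]

def classify_object(props, knowledge):
    """Return the first class whose every property range contains
    the object's measured value for that property."""
    for rule in knowledge:
        ranges = {k: v for k, v in rule.items() if k != "class"}
        if all(lo <= props[k] <= hi for k, (lo, hi) in ranges.items()):
            return rule["class"]
    return "unclassified"
```

In practice the stored conditions would cover many more properties (colour, density, contextual and textural data) per the description above.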
The classification data may also comprise contextual features of the image data, for example the spacing and/or orientation of objects of an identified type relative to other objects. As another example the classification data may comprise a sequence of object types, and the processor may be configured to identify an object based on this classification data, and the types and locations of other objects in the image.
The processor then selects 2010 parts of the candidate region for further analysis, for example at a higher resolution, and selects 2012 a data model for that type of tissue structure from the classification data, and compares 2014 the model with objects in regions of the image adjacent to the candidate region. This enables similar objects in adjacent image regions to be more efficiently identified. In addition, this enables objects which span adjacent tiles of the image to be more fully represented. This localised search and classification process may be repeated at incrementally higher resolutions to refine the estimate of the size and location of tumour type tissues in the image.
The processor 22 then reconstructs 2016, a version of the original image based on the location and size of the identified objects. This provides a map of the tissue indicating the tumour and non-tumour regions from which the relative areas of these regions can be determined.
The processor then obtains an estimate of the size of tissue cells, and the size of tumour cells. This estimate may be selected from memory based on knowledge of the tissue type, or it may be input by a user, or determined based on cell segmentation of the image at native resolution. One possibility is that the estimate of tumour cell size is based on sampling selected regions of the image of the tissue, for example at a higher resolution than is used for the object classification. Counting cells in these selected regions then enables an estimate of the size of a tumour cell to be determined. These regions may be selected at random, for example in a monte-carlo type fashion from the regions of the image identified in the lower resolution data (e.g. used for object classification) as being related to tumour tissue. The estimates based on this sampling of a subset of the tumour tissue can then be applied to the entire tumour area. An analogous approach may be used to estimate the area of non-tumour cells.
The processor then determines 2018 the quantity of tumour cells in the tissue sample based on the total area of the tumour objects, and the estimated size of a tumour cell. The processor also determines the quantity of non-tumour cells in the tissue sample based on the total area of non-tumour tissue, and the estimated area of a non-tumour cell. This enables a percentage of tumour vs. non-tumour cells in the sample to be estimated.
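The percentage calculation reduces to simple arithmetic: each region's cell count is its area divided by the estimated area of one cell of that type. A minimal sketch, with illustrative names:

```python
def tumour_cell_percentage(tumour_area, non_tumour_area,
                           tumour_cell_area, non_tumour_cell_area):
    """Estimate the percentage of tumour cells from region areas and
    per-cell area estimates (all in the same spatial units)."""
    n_tumour = tumour_area / tumour_cell_area
    n_non = non_tumour_area / non_tumour_cell_area
    return 100.0 * n_tumour / (n_tumour + n_non)
```

For example, 200 units of tumour area with 2-unit tumour cells (100 cells) against 300 units of non-tumour area with 3-unit cells (100 cells) gives 50%.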
Whilst this method has been described in specific terms it will be appreciated that many variations of the approach described here may be applied.
For example, the threshold used to provide the mask can also be predefined rather than being based on the variance of the data values within each group. In some possibilities the threshold may be selected based on user input, for example the processor may be configured to determine the threshold (e.g. based on intra-group variances) and then to adjust the threshold based on input from a user.
Segmentation may be performed to provide a mask at each of a plurality of resolution levels, or segmentation may be performed at one resolution (e.g. the native resolution of the images) and then up-sampled or down-sampled (e.g. by smoothing) to provide masks at different resolutions.
The image data for each tile may comprise image data relating to that tile at at least one resolution. Where different resolutions are used these may be provided by images collected at differing levels of magnification. In some possibilities images at different resolutions may be obtained by downsampling (e.g. smoothing). This approach may be used in combination with differing magnification levels.
Image tiles of the same image region having different resolutions described above may comprise the same number of pixels, or a different number of pixels covering the same spatial region of the image. Different classification data may be applied to the image data relating to different resolutions.
The subset of the stored classification data may be selected at random, for example in a Monte-Carlo type approach. In some possibilities, selecting may comprise selecting a predefined, or user selected, subset of classification data. In one possibility the classification data comprises data (e.g. feature vectors) relating to known tissue types and/or known tumour types, and selecting the subset of classification data may comprise selecting classification data based on the tissue type of the sample from which the imaged section of tissue was derived.
The classification data may be derived from a supervised learning model in which the classification data comprises a feature vector derived from an image of a tissue sample, and an indication of whether that feature vector relates to tumour or non-tumour image data. The processor may be configured to obtain input from a user confirming whether a region of the image comprises tumour tissue and to store one or more feature vectors from that region of the image in memory with the classification data. This may enable the operation of the method to be adapted or tuned to operation in particular types of tissue.
Figure 5 shows a flow chart illustrating a computer implemented method of processing an image to obtain a boundary of an object in the image, these boundaries may be used in the identification of objects (for example based on their shape), and may also be used as a guide to dissection of a tissue sample suspected of comprising tumour.
The processor 22 obtains 1000 from memory 24 an image of a section through the tissue sample. The section can be imaged from a microscope slide stained using Haematoxylin and Eosin.
The processor obtains 1001 a first component of the image data corresponding to the Eosin stain.
The processor 22 then selects a threshold to apply to the first component of the image data that divides the eosin image data into two groups. The processor is configured to select a threshold that reduces, for example minimises, the variance of the data values within each group. The processor then applies the threshold value to the first component of the image data to generate a mask.
The processor then applies the mask generated from the first component to segment 1002 the image data into two groups. Image data in the first group is identified as tissue, and image data in the second group is identified as background.
The processor then partitions 1004 the first group of image data, relating to tissue, into tiles.
The image data for each tile comprises image data relating to that tile at a series of different resolutions. The data for the tile at each different resolution comprises a plurality of blocks each representing at least a portion of the tile at a different effective magnification. Different magnifications may be achieved by providing equivalent pixel numbers for different sized spatial regions, or a different number of pixels for equally sized spatial regions.
For each tile, at each resolution level, the processor obtains 1005 three components of the image data: a first component corresponding to the Eosin stain, a second component corresponding to the Haematoxylin stain, and a third grey scale component. The first and second components may be obtained by applying a colour deconvolution method to the image data. The grey scale image data comprises a greyscale version of the image data in the tile.
For each colour component of the tile, the processor selects at least one property to be determined based on the colour component image data in the tile. The properties to be determined are selected based on the colour component, so different properties can be determined for different colour components. The properties are selected from the list comprising: texture; statistical moments, such as centroids, averages, variances and higher order moments; moment invariants; frequency domain features; features derived from the grey level co-occurrence matrix; and morphological features, such as average nuclear size and/or shape, nuclear concentration in a spatial region, and high level spatial relationships between image objects, which may be derived from Delaunay triangulation, a Voronoi diagram and/or a minimum spanning tree algorithm which treats each cell nucleus as a vertex.
The processor then determines 1006 the selected image properties for each of the three components of the tile, and concatenates the image properties together to provide a feature vector.
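As a concrete (and much simplified) illustration of steps 1005 and 1006, the sketch below computes two statistical moments plus a grey level co-occurrence contrast feature for each component and concatenates them into a feature vector. A real feature set would be far richer; the function names and the tiny 8x8 tile are illustrative assumptions.

```python
import numpy as np

def glcm_contrast(channel, levels=8):
    """Contrast derived from a grey level co-occurrence matrix built
    from horizontal neighbour pairs, for channel values in [0, 1]."""
    q = np.floor(channel * (levels - 1e-9)).astype(int)  # quantise to levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))

def tile_feature_vector(eosin, haem, grey):
    """Concatenate per-component features into one vector (step 1006)."""
    feats = []
    for comp in (eosin, haem, grey):
        feats += [comp.mean(), comp.var(), glcm_contrast(comp)]
    return np.array(feats)

tile = np.full((8, 8), 0.5)                 # a flat dummy tile
fv = tile_feature_vector(tile, tile, tile)  # 3 features x 3 components
```

A flat tile gives zero variance and zero contrast, as expected; textured tiles produce non-zero entries.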
The processor then obtains from memory a subset of the stored classification data. The classification data comprises a first set of model image feature vectors associated with tumour tissues, and a second set of model image feature vectors associated with non-tumour tissue.
The processor selects from amongst the first plurality of feature vectors (tumour type) from the classification data, and the second plurality of feature vectors (non-tumour type) from the classification data to provide a subset (e.g. less than all of the set). This provides a subset of model feature vectors.
The processor then compares 1008 the concatenated feature vector from the tile with the selected subset of the classification data and, based on the comparison, classifies 1010 the tile as belonging to one of two states: tumour or non-tumour.
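The patent leaves the comparison at step 1008 abstract. One plausible realisation, sketched here purely as an assumption, is a k-nearest-neighbour vote of the tile's feature vector against the selected subset of model feature vectors.

```python
import numpy as np

def classify_tile(feature, tumour_models, non_tumour_models, k=3):
    """Classify a tile by a k-nearest-neighbour vote over the selected
    subset of model feature vectors (steps 1008 and 1010)."""
    models = np.vstack([tumour_models, non_tumour_models])
    labels = np.array([1] * len(tumour_models) + [0] * len(non_tumour_models))
    dists = np.linalg.norm(models - feature, axis=1)
    votes = labels[np.argsort(dists)[:k]]   # labels of the k nearest models
    return 'tumour' if 2 * votes.sum() > k else 'non-tumour'
```

Under the same assumption, a posterior probability per tile (used later for the colour map) could be taken as the fraction of tumour votes among the k neighbours.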
The processor is configured to combine 1012 the tiles to provide a two state (e.g. binary) map identifying tumour and non-tumour regions of the tissue, and to combine this map with the tissue/non-tissue mask generated by the segmentation 1002 to provide a spatial map of the image data which classifies regions of the image into one of three states, e.g. background, tumour tissue, and non-tumour tissue.
The processor is further configured to identify 1014 a boundary between regions in the three state map. The processor is configured to identify an initial boundary based on an edge detection algorithm, encode the resulting boundary, and smooth the boundary by reducing the contribution of high spatial frequency components to the boundary.
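One standard way to encode a closed boundary and reduce its high spatial frequency components is via Fourier descriptors: represent the contour as a complex sequence, transform, zero the high-frequency coefficients, and invert. The sketch below illustrates this; the `keep` parameter and the ripple test contour are illustrative assumptions.

```python
import numpy as np

def smooth_boundary(points, keep=8):
    """Smooth a closed boundary (N x 2 array of x, y points) by
    suppressing high-frequency Fourier descriptors."""
    z = points[:, 0] + 1j * points[:, 1]   # encode contour as complex samples
    coeffs = np.fft.fft(z)
    filtered = np.zeros_like(coeffs)
    filtered[:keep] = coeffs[:keep]        # DC and low positive frequencies
    filtered[-keep:] = coeffs[-keep:]      # low negative frequencies
    smoothed = np.fft.ifft(filtered)
    return np.column_stack([smoothed.real, smoothed.imag])

# A unit circle with a high-frequency ripple on its radius.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ripple = 1 + 0.1 * np.cos(16 * theta)
noisy = np.column_stack([ripple * np.cos(theta), ripple * np.sin(theta)])
smoothed = smooth_boundary(noisy, keep=4)
```

With `keep=4` the frequency-16 ripple is removed entirely and the recovered contour is the underlying circle.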
The processor then obtains a probability estimate based on comparing the feature vectors of tiles in tissue regions of the image with the selected subset of model image data to assign a probability to each tile.
The processor can then display the resulting probability estimate as a colour map, overlaid with the smoothed boundary data, to provide a user with an estimate of the location of tumour and non-tumour regions in the image.
Figure 6 illustrates a computer implemented method of object based segmentation which may be used to identify tissue structures, such as glands, in images of tissue. This method may be employed to identify structures and objects in any one of the methods described herein.
A system such as that illustrated in Figure 1 may be operated according to the method illustrated in Figure 6.
As noted above, the memory 24 of Figure 1 may store classification data comprising a knowledge file. The knowledge file may comprise stored images of known structures, and/or data describing those structures, for example the knowledge file may comprise one or more feature vectors associated with each known structure.
The knowledge file may comprise image data for each of a plurality of known structures so that, by comparing a test image with image data from the knowledge file, a structure can be identified. The knowledge file may also indicate relationships between known structures. For example, the knowledge file may indicate that, where a particular type of structure is identified in a tissue, it is likely that other associated structures may also be present in the surrounding tissue.
As shown in Figure 6, the processor obtains 4000 image data, for example by retrieving the image data from memory or receiving the image data from an imager 50. The processor divides 4002 the image data into tiles.
The processor 22 applies 4004 a threshold to the image data, and selects 4006 a supra threshold region.
The processor 22 compares 4008 the selected supra threshold region with an item of stored classification data, this comparison may generate 4010 a numerical score indicating the degree of similarity between the supra threshold region and that item of classification data.
The numerical score may be, for example, a confidence interval, or a score based on a merit function. The items of classification data may be obtained from a knowledge file stored in memory 24.
The processor then determines 4012 whether the comparisons are complete. The processor may determine that the comparisons are complete when the supra threshold region has been compared with all of the items of classification data. In some examples the processor may determine that the comparisons are complete when an item of classification data is found that matches the supra threshold region (e.g. provides a merit function value, or a confidence interval, within a selected range).
In the event that the comparisons indicate that the supra threshold region matches an item of stored classification data, the processor retrieves 4014 data describing the tissue structure associated with that item of classification data.
For example, a stored classifier image may be associated with an example of a particular tissue structure, such as a gland, and data indicating whether that example of the particular tissue structure is cancerous or normal.
The data associated with the matching item of classification data image may also include data indicating structures which can be expected to be found near to the structure identified by that item of classification data. For example, this may comprise additional images showing the shape of the parts of the tissue structure, and/or the shapes of structures expected to be found nearby, and/or feature vectors for those structures.
The processor is therefore able to select 4016 a subset of the items of classification data (e.g. less than all of the total set stored in memory), and to begin the comparisons of the next supra threshold structure by using that subset. The processor 22 then determines 4018 whether or not all of the supra threshold regions in the current image tile have been compared with the classification data. If other supra threshold regions remain to be identified in the tile, the processor 22 compares 4006 those selected supra threshold regions with at least one item of classification data of the subset identified at step 4016 before comparing the supra threshold region with items of classification data other than those in the subset. By this approach efficiency may be improved because, if a supra threshold structure matches part of a known type of structure, the image data around that supra threshold structure can be compared with data associated with that known tissue structure.
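The comparison loop of steps 4006 to 4016 can be sketched as follows, with each knowledge-file item holding a model feature vector plus identifiers of associated structures. The dictionary layout, the distance score, and the tolerance are all assumptions made for illustration, not details from the patent.

```python
import numpy as np

def match_region(region_vec, items, priority_ids=(), tol=0.25):
    """Compare a supra threshold region against stored classification
    items, trying the prioritised subset (e.g. structures associated
    with the previous match) first, and stopping at the first match."""
    ordered = [i for i in priority_ids if i in items]
    ordered += [i for i in items if i not in ordered]
    for item_id in ordered:
        entry = items[item_id]
        score = np.linalg.norm(np.asarray(entry['vector'])
                               - np.asarray(region_vec))
        if score < tol:                      # merit criterion satisfied
            return item_id, entry.get('associated', [])
    return None, []

# A toy knowledge file: a gland is expected to occur near a duct.
knowledge = {
    'gland': {'vector': [0.0, 0.0], 'associated': ['duct']},
    'duct':  {'vector': [0.1, 0.0], 'associated': []},
}
```

After a match, the returned `associated` identifiers would become the `priority_ids` for the next supra threshold region in the tile, giving the efficiency gain described above.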
If all the supra threshold regions in the current tile have been compared with the classification data, the processor then selects the next tile for analysis. If a structure has been identified in the tile, the processor determines 4020 whether that structure lies along a boundary of a tile. In the event that the structure does lie along a boundary of a tile, the processor selects 4022 the tile adjacent to that boundary and analyses that tile next (e.g. according to the thresholding, and comparing steps described above). In the event that a structure has not been identified as lying on a boundary the next tile may be selected 4024 based on a sequence, or a predefined rule.
Embodiments of the disclosure described with reference to Figure 6 may enable comparisons performed in a selected tile to be informed by the classification of the structure(s) identified in that tile, or in an adjacent tile. For example, the knowledge file for a particular structure may include images, and data, describing structures which are likely to be found around that particular structure. Therefore, when trying to classify tissue structures in adjacent tiles, the classification can begin by trying to identify structures which are known to be likely to be found in that area.
The method described with reference to Figure 6 may be performed at a number of different resolutions, and the lower resolution data may be treated before data at relatively higher resolution. Regions, for example tiles, of relatively higher resolution image data may be selected based on the identification of a structure in the lower resolution image data. In addition, the method described with reference to Figure 6 may be applied to one or more colour components of the image data, such as the Haematoxylin and Eosin colour components.
Numerous variations and alternatives to the embodiments described herein will be apparent to the skilled reader in the context of the present disclosure. For example, in Figure 1, the computer 20 is represented as a desk-top PC. It will be appreciated that any other type of computer or server could be used.
The processor 22 of Figure 1 may be a standard computer processor, but it will be understood that the processor could be implemented in hardware, software, firmware or any combination thereof as appropriate for implementing the image processing method described herein.
The memory 24 of the computer 20 of Figure 1 may be configured to store data received from the imager 50, results generated by the processor 22, and classification data for classifying tumour and non-tumour regions. Non-volatile memory may be provided for storing the classification data. Further non-volatile memory may be provided for storing the image data so that the image data for a plurality of tissue samples may be uploaded and stored in memory until such time as the processor 22 has capability or an instruction to process it. The memory 24 may comprise a buffer or an on-chip cache. Further non-volatile memory may be provided for storing results of the image processing method for later user reference and/or for updating the classification data using a learning method.
Tumour regions, such as the tumour schematically illustrated by region 6 of Figure 1, are tissue regions containing abnormal patterns of growth, which may include, but are not limited to, any of dysplasia, neoplasia, carcinoma in-situ and cancerous tissue or any combination thereof. The non-tumour regions 8 may also contain tumour tissue, but in a lower concentration than present in the tumour regions 6, as will be understood by those skilled in the art. The tissue block may be a formalin fixed, paraffin embedded tissue block, or a tissue block prepared in any other suitable way.
The imager 50 may comprise any suitable image generating means, including, but not limited to, an analogue or digital camera and a digital slide scanning system, in which an image is reconstructed following acquisition of image tiles or raster lines.
Obtaining the image data may comprise retrieving it from non-volatile memory, or from RAM, or from an on chip-cache, ADC or buffer. The image data in memory may be derived from data stored elsewhere in the apparatus, or received over a communications link such as a network, or obtained from an imager such as a microscope.
The section of tissue can be stained using Haematoxylin and Eosin, or with any other appropriate histological stain. The description above makes reference to separating the image data into components corresponding to the particular stains. As will be appreciated, other coloured stains may be used, and the image data may be separated into components corresponding to the stains used. The components may comprise colour channels, which may be separated using a colour deconvolution method. However, other types of colour component, separated by other kinds of methods, may also be used.
Obtaining 1001 the first component corresponding to the eosin stain may comprise obtaining the intensity of the eosin stain using a colour deconvolution method. The second component corresponding to the Haematoxylin stain may be similarly obtained.
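A colour deconvolution along the lines of the Ruifrok-Johnson approach can be sketched as below. The stain vectors are the commonly quoted H&E estimates and the third row is a residual channel; these values are assumptions for illustration rather than values taken from the patent.

```python
import numpy as np

# Assumed optical-density stain directions (rows: haematoxylin, eosin,
# residual), normalised to unit length.
STAINS = np.array([[0.65, 0.70, 0.29],
                   [0.07, 0.99, 0.11],
                   [0.27, 0.57, 0.78]])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)

def separate_stains(rgb):
    """Unmix an RGB image (values in (0, 1]) into per-stain intensities
    by inverting the stain matrix in optical-density space."""
    od = -np.log10(np.maximum(rgb, 1e-6))          # Beer-Lambert law
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)
    return conc.reshape(rgb.shape)                 # [..., (haem, eosin, res)]
```

For a pixel synthesised from pure haematoxylin, the first output channel recovers the stain intensity and the other two are (numerically) zero.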
The segmentation by masking may be based on a single component of the image data, such as the first (eosin) component as described above, or from one of the other components, or from the original image data, or from a combination of one or more of these. In some examples a predefined image mask may be used.
The threshold used to provide the mask can also be predefined rather than being based on the variance of the data values within each group. In some possibilities the threshold may be selected based on user input, for example the processor may be configured to determine the threshold (e.g. based on intra-group variances) and then to adjust the threshold based on input from a user.
Segmentation may be performed to provide a mask at each of a plurality of resolution levels, or segmentation may be performed at one resolution (e.g. the native resolution of the images) and then up-sampled or down-sampled (e.g. by smoothing) to provide masks at different resolutions.
The image data for each tile may comprise image data relating to that tile at at least one resolution. Where different resolutions are used these may be provided by images collected at differing levels of magnification, for example as described in relation to Figure 3. In some possibilities, images at different resolutions for a given tile may be obtained by down-sampling, e.g. smoothing, or by image interpolation. This approach may be used in combination with differing magnification levels.
Image tiles of the same image region having different resolutions described above may comprise the same number of pixels, or a different number of pixels covering the same spatial region of the image. Different classification data may be applied to the image data relating to different resolutions.
In one possibility, for each tile, at each resolution level, the processor obtains 1005 three colour components: one corresponding to an eosin colour channel and another corresponding to a Haematoxylin colour channel, as obtained using a colour deconvolution method, as well as one grey scale channel obtained directly from the original RGB coloured image. The processor then continues to step 1006 of the method as described above.
At or following the classification step 1010, tiles classified as representing tumour regions may be assigned a posterior probability of corresponding to a tumour region of the tissue sample, based on a selected threshold level. For example, when classifying the tile as tumour or non-tumour, a threshold level of 0.5 (50%) may be applied.
The probability estimate used to generate the colour map may be obtained by updating the posterior probability data.
It will be appreciated by the skilled addressee in the context of the present disclosure that the disclosure provides systems, methods, apparatus, computer program products and servers for processing an image of a tissue section of a tissue sample to analyse and identify a tumour region of a tissue sample for the purpose of macrodissection of the tissue sample. As noted above, a tumour may contain patterns of cell growth which include, but are not limited to, any of dysplasia, neoplasia, carcinoma in-situ and cancerous tissue, or any combination thereof. It will be appreciated by the skilled addressee in the context of the present disclosure that the disclosure could equally apply to other diseases which are capable of morphological identification.
In some examples the functionality of the computer and/or the processor may be provided by digital logic, such as field programmable gate arrays, FPGA, application specific integrated circuits, ASIC, a digital signal processor, DSP, or by software loaded into a programmable processor. The functionality of the processor and its programs may be provided in a single integrated unit, or it may be distributed between a number of processors, which may be arranged to communicate over a network, such as "cloud" computing. This may enable, for example, the processing steps of the method to be performed at a device (or devices) that are remote from the image capture and the tissue sample apparatus.
In some embodiments contextual information may be part of the knowledge file. The knowledge file may contain data to enable objects and/or regions and/or disease variants to be classified within a tissue sample. This may include the relationship (e.g. a spatial relationship) between objects, this may provide contextual characteristics of the objects.
A knowledge file may define object features (e.g. size, shape, texture, etc.) and contextual features (e.g. proximity to other objects, Delauney triangulation data, etc.) to allow the image to be reconstructed into morphological objects, tissue structures and tissue classifications.
This may enable "image understanding", e.g. getting the computer system to understand what is being identified and use this to better identify objects which constitute cancer.
In some embodiments estimating an amount of tumour cells comprises estimating a number of tumour cells which can then be used to determine a percentage of tissue that has tumour cells.
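The arithmetic of that estimate is simple: divide the segmented tumour area by the estimated mean tumour-cell area (itself estimated by counting cells in a small subset of regions), then express the result relative to the total cell count. The numbers below are purely illustrative.

```python
def estimate_tumour_cells(tumour_area_px, mean_cell_area_px, total_cells):
    """Estimate the tumour cell count from area, and the percentage of
    all cells in the section that are tumour cells."""
    n_tumour = tumour_area_px / mean_cell_area_px
    return n_tumour, 100.0 * n_tumour / total_cells

# e.g. 50,000 tumour pixels, ~100 px per tumour cell, 2,000 cells total
n, pct = estimate_tumour_cells(50_000, 100.0, 2_000)
# n -> 500.0 tumour cells, pct -> 25.0 per cent
```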
Other examples and variations are within the scope of the disclosure, as set out in the appended claims.

Claims (40)

CLAIMS:

1. A computer implemented method of determining an amount of tumour cells in a tissue sample, the method comprising: obtaining first image data describing an image of a tissue sample at a first resolution; obtaining second image data describing the image at a second resolution, wherein the first resolution is lower than the second resolution; selecting a candidate tumour region from the second image data based on texture data determined from the first image data; identifying a tumour structure in the candidate region of the second image data; determining a number of cells in the tumour structure based on its area and an estimate of tumour cell area to estimate an amount of tumour cells in the tissue sample.
2. The method of claim 1 further comprising obtaining third image data, describing the image at a third resolution, wherein the third resolution is higher than the second resolution, and identifying a tumour structure in the candidate region of the third image data.

3. The method of claim 1 or 2 comprising selecting a subset of image regions corresponding to the identified tumour objects, wherein the subset of image regions is smaller in area than the total area of the tumour objects, and counting cells in the subset of regions to estimate the area of a tumour cell.

4. The method of claim 1 in which selecting a candidate tumour region comprises determining texture data, spatial frequency data, or morphological image data, or another image property, based on the first image data, and comparing the texture data with classification data to identify candidate tumour regions or non-tumour regions.

5. The method of claim 4 in which determining texture data comprises determining the grey level co-occurrence matrix.

6. The method of any preceding claim in which identifying a tumour structure comprises identifying an object in the candidate region, determining at least one property of the object, and comparing the at least one property with classification data.
7. The method of claim 4, wherein the at least one property is one of, or any combination of, image properties selected from the group comprising: a statistical moment, a moment invariant feature, a feature derived from a grey level co-occurrence matrix, a spectral feature and a morphological feature.

8. The method of claim 4 or 5 comprising selecting a model of the object based on the classification data, and comparing the model with objects in regions of the image adjacent to the candidate region.

9. The method of claim 6 in which comparing the model with objects in regions of the image adjacent to the candidate region comprises identifying objects which span adjacent regions of the image.

10. The method of any preceding claim comprising applying a threshold to the image data to generate a mask adapted to select tissue image data and to discard background data.

11. The method of claim 10 in which the threshold is selected to reduce the variance of at least one of the background data and the tissue data.

12. The method of claim 10 or 11 in which the regions comprise tiles of the image selected from the tissue data.

13. The method of any preceding claim comprising obtaining data identifying the tissue type, and selecting a subset of the classification data for comparison with objects in the region based on the tissue type.

14. The method of claim 13 comprising reconstructing an image based on the identified objects, and their locations.

15. The method of any of claims 6 to 8, or any preceding claim as dependent thereon, in which the classification data is obtained from a knowledge file.

16. The method of any preceding claim in which identifying a tumour structure comprises identifying a boundary of the tumour structure.
17. The method of claim 16 in which identifying the boundary comprises: identifying at least one image property of a part of the candidate region of the image; comparing the image property with classification data to classify the part of the candidate region as a tumour region or a non-tumour region; and identifying a boundary of the part of the candidate region of the image.

18. The method of claim 17, wherein the at least one image property comprises one or any combination of image properties from the group comprising: a statistical moment, a moment invariant feature, a feature derived from a grey level co-occurrence matrix, a spectral feature or a morphological feature.

19. The method of claim 17 or 18, in which the at least one image property comprises at least one image property of each of a plurality of different colour components of the image concatenated together to provide a feature vector.

20. The method of claim 19, comprising classifying the part of the candidate region based on a comparison between the feature vector and the classification data.

21. The method of any of claims 1 to 20 comprising reconstructing an image based on the identified objects, and their locations, and displaying a visual indication of the relative number of tumour cells on the image.

22. The method of claim 21 in which the visual indication comprises a colour map in which the image colour is selected based on the relative number of tumour cells on the image.
23. A computer implemented method of identifying structures in an image of a tissue sample, the method comprising: applying a threshold to the image, and identifying a first supra threshold region in the image; comparing data describing the first supra threshold region with a plurality of items of stored classification data; selecting, based on the comparing, a first item of the stored classification data; based on the selected item of stored classification data, selecting at least one other item of classification data, and comparing a second supra threshold region in the image with the at least one other item of classification data.
24. The method of claim 23 in which the at least one other item of classification data comprises a subset selected from the plurality of items of stored classification data.

25. The method of claim 23 or 24 comprising dividing the image into tiles, and selecting a tile of the image, wherein the first supra threshold region and the second supra threshold region are regions of the selected tile.

26. The method of claim 25 comprising identifying, based on the comparing, a structure in the selected tile, determining whether the structure is adjacent to a boundary of the tile, and in the event that the structure is adjacent a boundary, selecting a second tile from the image adjacent to that boundary.

27. The method of claim 26 comprising identifying a supra threshold region in the second tile, and comparing data describing that supra threshold region with a plurality of items of stored classification data.

28. The method of any of claims 23 to 27 comprising identifying a structure in the image as a tumour structure based on the classification data, and determining a number of cells in the tumour structure based on its area and an estimate of tumour cell area to estimate an amount of tumour cells in the tissue sample.

29. The method of claim 28 comprising selecting a subset of image regions corresponding to identified tumour objects, wherein the subset of image regions is smaller in area than the total area of the tumour objects, and counting cells in the subset of regions to estimate the area of a tumour cell.

30. The method of any of claims 23 to 29 in which comparing data describing the first supra threshold region comprises determining at least one property of the supra threshold region, and comparing the at least one property with a corresponding property of the classification data.

31. The method of claim 30, wherein the at least one image property comprises one or any combination of image properties from the group comprising: shape, a statistical moment, a moment invariant feature, a feature derived from a grey level co-occurrence matrix, a spectral feature or a morphological feature.

32. The method of claim 31, in which the at least one property comprises at least one property of each of a plurality of different colour components of the image concatenated together to provide a feature vector.

33. The method of any of claims 29 to 32 comprising reconstructing an image based on the identified structures, and their locations, and displaying a visual indication of the relative number of tumour cells on the image.

34. The method of claim 33 in which the visual indication comprises a colour map in which the image colour is selected based on the relative number of tumour cells on the image.

35. A computer implemented method of identifying structures in an image of a tissue sample, the method comprising: dividing the image into tiles; applying a threshold to the image, and identifying a supra threshold region in a first tile of the image; comparing data describing the supra threshold region with a plurality of items of stored classification data; identifying, based on the comparing, a structure in the first tile; determining whether the structure is adjacent to a boundary of the first tile, and in the event that the structure is adjacent to a boundary, selecting a second tile from the image adjacent to that boundary.

36. The method of claim 35 comprising selecting, based on the comparing, a first item of the stored classification data; based on the selected item of stored classification data, selecting at least one other item of classification data, and comparing a second supra threshold region in the image with the at least one other item of classification data.

37. The method of claim 36 in which the second tile comprises the second supra threshold region.

38. The method of any of claims 35 to 37 comprising comparing data describing a supra threshold region of the second tile with a plurality of items of stored classification data.

39. A computer program product configured to program a processor to perform the method of any preceding claim.
40. Apparatus configured to carry out the method of any of claims 1 to 39.

AMENDMENTS TO THE CLAIMS HAVE BEEN FILED AS FOLLOWS:-

1. A computer implemented method of identifying structures in an image of a tissue sample, the method comprising: applying a threshold to the image, and identifying a first supra threshold region in the image; comparing data describing the first supra threshold region with a plurality of items of stored classification data; selecting, based on the comparing, a first item of the stored classification data; based on the selected item of stored classification data, selecting at least one other item of classification data; and comparing a second supra threshold region in the image with the at least one other item of classification data.

2. The method of claim 1 in which the at least one other item of classification data comprises a subset selected from the plurality of items of stored classification data.

3. The method of claim 1 or 2 comprising dividing the image into tiles, and selecting a tile of the image, wherein the first supra threshold region and the second supra threshold region are regions of the selected tile.

4. The method of claim 3 comprising identifying, based on the comparing, a structure in the selected tile, determining whether the structure is adjacent to a boundary of the tile, and in the event that the structure is adjacent to a boundary, selecting a second tile from the image adjacent to that boundary.

5. The method of claim 4 comprising identifying a supra threshold region in the second tile, and comparing data describing that supra threshold region with a plurality of items of stored classification data.

6. The method of any of claims 1 to 5 comprising identifying a structure in the image as a tumour structure based on the classification data, and determining a number of cells in the tumour structure based on its area and an estimate of tumour cell area to estimate an amount of tumour cells in the tissue sample.

7. The method of claim 6 comprising selecting a subset of image regions corresponding to identified tumour objects, wherein the subset of image regions is smaller in area than the total area of the tumour objects, and counting cells in the subset of regions to estimate the area of a tumour cell.

8. The method of any of claims 1 to 7 in which comparing data describing the first supra threshold region comprises determining at least one property of the supra threshold region, and comparing the at least one property with a corresponding property of the classification data.

9. The method of claim 8, wherein the at least one image property comprises one or any combination of image properties from the group comprising: shape, a statistical moment, a moment invariant feature, a feature derived from a grey level co-occurrence matrix, a spectral feature or a morphological feature.

10. The method of claim 9, in which the at least one property comprises at least one property of each of a plurality of different colour components of the image concatenated together to provide a feature vector.

11. The method of any of claims 7 to 10 comprising reconstructing an image based on the identified structures, and their locations, and displaying a visual indication of the relative number of tumour cells on the image.

12. The method of claim 11 in which the visual indication comprises a colour map in which the image colour is selected based on the relative number of tumour cells on the image.

13. A computer implemented method of identifying structures in an image of a tissue sample, the method comprising: dividing the image into tiles; applying a threshold to the image, and identifying a supra threshold region in a first tile of the image; comparing data describing the supra threshold region with a plurality of items of stored classification data; identifying, based on the comparing, a structure in the first tile; determining whether the structure is adjacent to a boundary of the first tile, and in the event that the structure is adjacent to a boundary, selecting a second tile from the image adjacent to that boundary.

14. The method of claim 13 comprising selecting, based on the comparing, a first item of the stored classification data; based on the selected item of stored classification data, selecting at least one other item of classification data; and comparing a second supra threshold region in the image with the at least one other item of classification data.

15. The method of claim 14 in which the second tile comprises the second supra threshold region.

16. The method of any of claims 13 to 15 comprising comparing data describing a supra threshold region of the second tile with a plurality of items of stored classification data.

17. A computer program product configured to program a processor to perform the method of any preceding claim.

18. Apparatus configured to carry out the method of any of claims 1 to 17.
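The two-stage matching of claims 1 and 2 can be sketched in code: threshold the image, extract connected supra-threshold regions, describe each region by a small feature vector, match the first region against all stored classification items, and then restrict the match for the second region to the subset nominated by the first match. This is an illustrative sketch only, not the patented implementation: the feature choice (area and bounding-box fill ratio), the nearest-neighbour comparison, and the `stored` data with its `next` subsets are all assumptions introduced for the example.

```python
def supra_threshold_regions(image, threshold):
    """Label 4-connected regions of pixels whose value exceeds the threshold."""
    rows, cols = len(image), len(image[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or image[r][c] <= threshold:
                continue
            # flood fill one supra-threshold region
            stack, pixels = [(r, c)], []
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                pixels.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and (ny, nx) not in seen
                            and image[ny][nx] > threshold):
                        seen.add((ny, nx))
                        stack.append((ny, nx))
            regions.append(pixels)
    return regions

def describe(pixels):
    """Toy feature vector for a region: (area, bounding-box fill ratio)."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    bbox = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
    return (len(pixels), len(pixels) / bbox)

def closest(features, items):
    """Return the stored item whose feature vector is nearest (squared Euclidean)."""
    return min(items, key=lambda it: sum((a - b) ** 2
                                         for a, b in zip(features, it["features"])))

# Hypothetical stored classification data: each item also nominates the subset
# of labels to be tried for the *next* region once this item has matched.
stored = [
    {"label": "epithelium", "features": (9.0, 1.0), "next": ["tumour", "stroma"]},
    {"label": "stroma",     "features": (3.0, 0.5), "next": ["stroma"]},
    {"label": "tumour",     "features": (9.0, 0.6), "next": []},
]
```

After matching the first region, `[it for it in stored if it["label"] in first["next"]]` gives the narrowed candidate set against which the second region is compared, mirroring the subset selection of claim 2.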
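The tile-boundary rule of claims 4 and 13 can likewise be sketched: when a structure detected within a tile touches one of the tile's edges, the neighbouring tile across that edge is selected for analysis, so that structures spanning tiles are not truncated. The grid layout, tile-local coordinates, and function name here are illustrative assumptions, not the patent's implementation.

```python
def boundary_neighbours(structure_pixels, tile_size, tile_row, tile_col,
                        grid_rows, grid_cols):
    """Return grid coordinates of tiles adjacent to any tile boundary the
    structure touches.

    structure_pixels are (y, x) coordinates local to the current tile;
    (tile_row, tile_col) locate that tile in a grid_rows x grid_cols grid.
    """
    neighbours = set()
    for y, x in structure_pixels:
        if y == 0 and tile_row > 0:                          # touches top edge
            neighbours.add((tile_row - 1, tile_col))
        if y == tile_size - 1 and tile_row < grid_rows - 1:  # touches bottom edge
            neighbours.add((tile_row + 1, tile_col))
        if x == 0 and tile_col > 0:                          # touches left edge
            neighbours.add((tile_row, tile_col - 1))
        if x == tile_size - 1 and tile_col < grid_cols - 1:  # touches right edge
            neighbours.add((tile_row, tile_col + 1))
    return neighbours
```

A structure whose pixels stay clear of every edge yields an empty set, so analysis stops at the current tile; otherwise each returned tile can be thresholded and compared in turn, as in claim 5.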
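The tumour-burden estimate of claims 6 and 7 reduces to simple arithmetic: cells are counted in a small subset of the identified tumour regions to calibrate a mean tumour-cell area, and the total tumour area divided by that per-cell area gives the estimated cell count. The function names, units, and figures below are illustrative assumptions, not values from the patent.

```python
def mean_cell_area(calibration_regions):
    """Estimate average tumour-cell area from a subset of regions in which
    cells have been counted (claim 7).

    Each entry is (region_area_um2, cells_counted_in_region)."""
    total_area = sum(area for area, _ in calibration_regions)
    total_cells = sum(count for _, count in calibration_regions)
    return total_area / total_cells

def estimate_tumour_cells(total_tumour_area_um2, cell_area_um2):
    """Claim 6: estimated cell count = total tumour area / single-cell area."""
    return round(total_tumour_area_um2 / cell_area_um2)
```

Because only a subset of regions needs exhaustive cell counting, the calibration step stays cheap even when the tumour objects cover a large fraction of a whole-slide image.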
GB1509878.3A 2013-05-14 2013-09-02 Method and apparatus Withdrawn GB2527930A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1308664.0A GB201308664D0 (en) 2013-05-14 2013-05-14 Method and apparatus
GB1315597.3A GB2514197A (en) 2013-05-14 2013-09-02 Method and apparatus

Publications (2)

Publication Number Publication Date
GB201509878D0 GB201509878D0 (en) 2015-07-22
GB2527930A true GB2527930A (en) 2016-01-06

Family

ID=48700770

Family Applications (3)

Application Number Title Priority Date Filing Date
GBGB1308664.0A Ceased GB201308664D0 (en) 2013-05-14 2013-05-14 Method and apparatus
GB1315597.3A Withdrawn GB2514197A (en) 2013-05-14 2013-09-02 Method and apparatus
GB1509878.3A Withdrawn GB2527930A (en) 2013-05-14 2013-09-02 Method and apparatus

Family Applications Before (2)

Application Number Title Priority Date Filing Date
GBGB1308664.0A Ceased GB201308664D0 (en) 2013-05-14 2013-05-14 Method and apparatus
GB1315597.3A Withdrawn GB2514197A (en) 2013-05-14 2013-09-02 Method and apparatus

Country Status (3)

Country Link
EP (1) EP2997541A2 (en)
GB (3) GB201308664D0 (en)
WO (1) WO2014184522A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018104254A1 (en) 2016-12-05 2018-06-14 Koninklijke Philips N.V. Device and method for identifying a region of interest (roi)
CN109815974A (en) * 2018-12-10 2019-05-28 清影医疗科技(深圳)有限公司 A kind of cell pathology slide classification method, system, equipment, storage medium


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6993169B2 (en) * 2001-01-11 2006-01-31 Trestle Corporation System and method for finding regions of interest for microscopic digital montage imaging
US8346483B2 (en) * 2002-09-13 2013-01-01 Life Technologies Corporation Interactive and automated tissue image analysis with global training database and variable-abstraction processing in cytological specimen classification and laser capture microdissection applications
US6800249B2 (en) * 2002-06-14 2004-10-05 Chromavision Medical Systems, Inc. Automated slide staining apparatus
EP1534114A2 (en) * 2002-06-18 2005-06-01 Lifespan Biosciences, Inc. Computerized image capture of structures of interest within a tissue sample
WO2010027476A1 (en) * 2008-09-03 2010-03-11 Rutgers, The State University Of New Jersey System and method for accurate and rapid identification of diseased regions on biological images with applications to disease diagnosis and prognosis
FR2964744B1 (en) * 2010-09-10 2015-04-03 Univ Versailles St Quentin En Yvelines PROGNOSTIC TEST OF THE EVOLUTION OF A SOLID TUMOR BY ANALYSIS OF IMAGES
WO2013049153A2 (en) * 2011-09-27 2013-04-04 Board Of Regents, University Of Texas System Systems and methods for automated screening and prognosis of cancer from whole-slide biopsy images

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2463141A (en) * 2008-09-05 2010-03-10 Siemens Medical Solutions Medical image segmentation

Also Published As

Publication number Publication date
WO2014184522A2 (en) 2014-11-20
EP2997541A2 (en) 2016-03-23
WO2014184522A3 (en) 2015-06-11
GB201315597D0 (en) 2013-10-16
GB201509878D0 (en) 2015-07-22
GB2514197A (en) 2014-11-19
GB201308664D0 (en) 2013-06-26

Similar Documents

Publication Publication Date Title
US9842391B2 (en) Method and apparatus for processing an image of a tissue sample
US9946953B2 (en) Apparatus and method for processing images of tissue samples
CN109791693B (en) Digital pathology system and related workflow for providing visualized whole-slice image analysis
Sommer et al. Learning-based mitotic cell detection in histopathological images
US10565706B2 (en) Method and apparatus for tissue recognition
Kulikova et al. Nuclei extraction from histopathological images using a marked point process approach
CA3196713C (en) Critical component detection using deep learning and attention
Atupelage et al. Computational hepatocellular carcinoma tumor grading based on cell nuclei classification
He et al. Local and global Gaussian mixture models for hematoxylin and eosin stained histology image segmentation
US10671832B2 (en) Method and apparatus for tissue recognition
CA3195891A1 (en) Training end-to-end weakly supervised networks at the specimen (supra-image) level
WO2017051195A1 (en) Pattern driven image processing method & apparatus for tissue recognition
WO2014006421A1 (en) Identification of mitotic cells within a tumor region
Sertel et al. An image analysis approach for detecting malignant cells in digitized H&E-stained histology images of follicular lymphoma
US11887355B2 (en) System and method for analysis of microscopic image data and for generating an annotated data set for classifier training
EP2997541A2 (en) Method and apparatus for processing an image of a tissue sample
Schäfer et al. Image database analysis of Hodgkin lymphoma
CN112840375A (en) System for analyzing microscopic data using a pattern
WO2014181123A1 (en) Apparatus and method for processing images of tissue samples
Lloyd et al. Image analysis in surgical pathology
Guatemala-Sanchez et al. Nuclei segmentation on histopathology images of breast carcinoma
Ajemba et al. Integrated segmentation of cellular structures
GB2531845A (en) Apparatus and method
GB2478133A (en) A method of Image classification based on duct and nuclei parameters
Lu et al. Efficient epidermis segmentation for whole slide skin histopathological images

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)