WO2014053520A1 - Targeting cell nuclei for the automation of raman spectroscopy in cytology - Google Patents

Targeting cell nuclei for the automation of raman spectroscopy in cytology

Info

Publication number
WO2014053520A1
WO2014053520A1 · PCT/EP2013/070495 · EP2013070495W
Authority
WO
WIPO (PCT)
Prior art keywords
sample
images
cell nuclei
image analysis
image
Prior art date
Application number
PCT/EP2013/070495
Other languages
French (fr)
Inventor
Jonathan Blackledge
Dmitriy DUBOVITSKIY
Original Assignee
Dublin Institute Of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dublin Institute Of Technology filed Critical Dublin Institute Of Technology
Publication of WO2014053520A1 publication Critical patent/WO2014053520A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698 Matching; Classification
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/65 Raman scattering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10064 Fluorescence image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10148 Varying focus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30024 Cell structures in vitro; Tissue sections in vitro

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The present application is directed to image analysis and more particularly to the identification of target sites in a biological sample, specifically, but not exclusively, cell nuclei, which is applied to the problem of automating the diagnosis of cervical cancer using Raman Spectroscopy. The application now provides a method of screening a sample to identify potential abnormalities comprising the basic steps of using image analysis to identify cell nuclei in the sample; and determining from said image analysis a likelihood value for the presence of an abnormality in the identified cell nuclei.

Description

TARGETING CELL NUCLEI FOR THE AUTOMATION OF RAMAN SPECTROSCOPY IN CYTOLOGY
Field of the application
The present application is directed to image analysis and more particularly to the identification of target sites in a biological sample, specifically, but not exclusively, cell nuclei, which is applied to the problem of automating the diagnosis of cervical cancer using Raman Spectroscopy.
Background
Approximately 471,000 women are diagnosed with invasive carcinoma of the cervix each year and 233,000 die from the disease, worldwide. Cervical cancer is among the most common female cancers in many countries of the developing world. Sexually transmitted infection by certain strains of the human papilloma virus is a major cause of cervical cancer; smoking has also been linked to the disease.
Cervical cancer is preceded by a precancerous condition called CIN (Cervical Intraepithelial Neoplasia) which can be treated relatively easily if detected at an early enough stage. It is therefore important to identify CINs via screening programmes. The screening test is called a cervical smear. A clinician removes a small sample of cells from the surface of the cervix and spreads the sample onto a glass slide, the material being 'fixed' in alcohol. The slide sample is conventionally stained and examined using an optical microscope, reports being provided on any abnormal cells or cell clusters. Staining the sample can provide valuable colour identifiers to assist a user in performing their examination. Equally, such staining may be of benefit in automatic image recognition. Unfortunately, staining can have a negative effect when alternative methods of detection are used, particularly with regard to Raman Spectroscopy, a method of diagnosis upon which this invention is focused. In this case, an image processing method may be used to identify a target region. A Raman Spectrum is then obtained for that target region with the presence/absence of abnormal cells being confirmed or otherwise from the spectrum.
The present application therefore seeks to provide a method of detecting target regions in a sample using a method which does not rely on staining.
One approach which has been described previously in US20070070343 is to obtain a first spectroscopic data set for a sample which corresponds to spectroscopic data obtained at a plurality of different positions within the sample. The sample is then treated with a contrast enhancing agent and a digital image captured from which regions of interest may be visually identified and for which previously obtained spectroscopic data may be reviewed. Using this approach, the negative effects of staining on the samples are obviated. Unfortunately, this process still requires samples to be stained and requires extensive data sets to be obtained for the spectroscopic data. Thus the process is not suitable for implementation in a mass screening process.
Another imaging approach as described in US2005/0031183 obtains a plurality of images of a sample over time and uses tracking information to assist in the identification process. A similar approach is described in US20030108231. Unfortunately, this process is time consuming and so of limited use in a mass screening process.
Summary
The present application seeks to improve the performance of existing techniques, and, in particular, provides an optical imaging method to identify target sites in a sample for further analysis through Raman Spectroscopy. This optical imaging method obviates the need for staining of the sample. The method and system of the present application are set forth in the claims which follow.
Additionally, the application should be taken to extend to the following numbered statements:
1. A method of screening a sample to identify potential abnormalities comprising the basic steps: using image analysis to identify cell nuclei in the sample; and determining from said image analysis a likelihood value for the presence of an abnormality in the identified cell nuclei.
2. A method according to statement 1, further comprising performing Raman spectroscopy on those cell nuclei whose likelihood value exceeds a predetermined threshold.
3. A method according to statement 1 or statement 2, wherein the step of performing image analysis comprises applying at least one membership function to identify whether a point in an image acquired of a region of the sample is within a cell nucleus.
4. A method according to statement 3, wherein the step of image analysis comprises obtaining a series of images of a region of the sample, wherein each image in the series is obtained using a different focal plane.
5. A method according to statement 4, wherein the membership function is applied at each of the different focal planes to determine a local maximum.
6. A system for analyzing a sample, the system comprising: a controller, a stage for receiving the sample, an imaging device for acquiring a sequence of images of the sample, a Raman spectroscopy device for obtaining a spectrogram of a point of interest in said sample, wherein the controller is adapted to identify the point of interest in said sample from the sequence of images acquired from a sample where the system is configured to allow images to be taken of the sample at different focal depths and where a plurality of images of the same scene taken at different focal depths are employed in the determination of the point of interest. The system of statement 6 may be employed to advantage with the methods of statements 1 to 5.
Brief Description Of Drawings
Figure 1 is an example of a cytological slide that may be employed in the present application;
Figure 2 is an exemplary image of cervical cells taken in a lower plane;
Figure 3 is the same view of cervical cells of Figure 2 except the image was obtained in a mid plane;
Figure 4 is the same view of cervical cells of Figures 2 and 3 except the image was obtained in the upper plane;
Figure 5 is a representation of a process of nucleus identification using a Fuzzy Logic Membership Function according to one aspect of the present application; and
Figure 6 is an exemplary graphical user interface which illustrates how the technique may be of assistance to a user.
Detailed Description
The present application seeks to employ pattern recognition techniques to identify a target region in a sample. Pattern recognition is a component of image analysis which involves the use of digital image processing methods which are designed in an attempt to provide a machine interpretation of an image, preferably, in a form that allows some decision criterion to be applied. It should be appreciated that there is no complete and unique theoretical model available for explaining and simulating the processes of human visual image comprehension and accordingly machine vision remains a subject area in which automatic inspection systems are advanced without having a fully operational theoretical framework as a guide.
Pattern recognition can be thought of as the process of linking parts of the visual object's field with stored information or 'templates' with regard to a pre-determined significance for the observer. There are a number of questions that need to be considered in the development of any machine vision system. These include: (i) what are the goals and constraints? (ii) what type of algorithm or set of algorithms is required to complete the system? (iii) what are the implications for the processes, given the types of hardware that might be available? (iv) what are the levels of representation required? The levels of representation are dependent on what type of segmentation process can and/or should be applied to an image. These are recorded as place tokens and stored in a database. Regions of pixels with similar intensity values or sets of lines are obtained by isolating the edges of an image scene and computed by locating regions where there is a significant difference in the intensity. Such sets are subject to inherent ambiguities when computed from a given input image and associated with those from which an existing database has been constructed. These ambiguities can only be overcome by the application of high-level rules based on how humans interpret images, but the nature of this interpretation is not always able to be clearly defined. Parts of an image may tend to have an association if they share size, figural similarity, continuity, shading and texture. For this reason, it is necessary to consider how best to segment an image and what form this segmentation should take. For example, optical microscopy involves the use of image processing methods that are often designed in an attempt to provide a machine interpretation of a biological image, ideally in a form that allows some decision criterion to be applied, such that a pattern of biological significance can be recognised. Compared to image processing, computer vision is more than automated image processing. It results in a conclusion, based on a machine performing an inspection of its own. The machine must be programmed to be sensitive to the same aspects of the visual field as humans find meaningful. In this context, segmentation is concerned with the process of dividing an image into meaningful regions or segments. It is used in image analysis to separate features or regions of a pre-determined type from the background and is, in most cases, the first step in automatic image analysis and pattern recognition. Segmentation is broadly based on one of two properties in an image: (i) similarity; (ii) discontinuity. The first property is used to segment an image into regions which have grey (or colour) levels within a predetermined range. The second property segments the image into regions of discontinuity where there is a more or less abrupt change in the values of the grey (or colour) levels.
This invention relates to a new method of segmenting cell nuclei from the cells using a differencing method induced by applying different depths of focus which alter the visual appearance of the cell nucleus (due to its three-dimensional profile above the plane) while maintaining the image content of the surrounding cellular material (which remains within the plane). Coupled with various image processing techniques to be described, this approach provides a solution to the problem of cell nuclei target detection for the automation of Raman Spectroscopy applied to cervical cancer screening.
Thus the present application provides a more efficient way of isolating the nucleus using different images acquired at different focal planes. The method does not analyse images of the entire sample at each focal plane. Instead, a membership function based on a few 'shots' of the slide material taken at different focal planes is employed. A membership function is then used to determine an optimal focal plane. It will be appreciated that the "optimal" plane is with reference to the other acquired focal planes and may thus be considered the preferred optimal plane. The preferred focal plane so determined may then be employed to isolate the nuclei throughout the rest of the slide for subsequent image analysis. Thus a windowed approach may be taken where different windows (images) are acquired of the sample at the previously determined optimal focal plane. Any nuclei which are identified in this subsequent image analysis may then be targeted with Raman spectroscopy. As a result of this approach, the computational load and thus the time required to analyse a sample is significantly reduced.
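By way of illustration only, the following minimal sketch shows one way the focal-plane selection and subsequent windowed acquisition described above might be organised in software. It is not the method of the claims: the use of Python with NumPy and SciPy, the variance-of-Laplacian score as a crude stand-in for a membership function, and all function names and window sizes are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def plane_score(image):
    """Crude stand-in for a membership score of a focal plane: the variance
    of the Laplacian, which is large when nuclear detail is in focus."""
    lap = ndimage.laplace(image.astype(float))
    return float(lap.var())

def select_optimal_plane(image_stack):
    """image_stack: sequence of 2-D arrays of the same scene, one per focal
    depth.  Returns the index of the plane with the highest score, i.e. the
    'preferred optimal' plane relative to the planes actually acquired."""
    scores = [plane_score(img) for img in image_stack]
    return int(np.argmax(scores))

def window_views(image, window=256):
    """Yield (origin, window) pairs over the selected plane so that nuclei
    can be isolated window by window across the rest of the slide."""
    h, w = image.shape
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            yield (y, x), image[y:y + window, x:x + window]
```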
Pattern recognition may be considered to be a form of machine understanding based on assigning a particular class to an object. The tasks of construction and application of formal operations for numerical or character representation of objects in a real or idealized world are the basis for pattern recognition. This depends on establishing equivalence relations that express a fit of evaluated objects to any class with independent semantic units. The recognition classes of equivalence can be set by the user in the construction of an algorithm, which uses selective representations or external padding information on a likeness and difference of objects in the context of a solved task. This is the basis for the phrase 'recognition with the teacher'. For a typical object recognition system, the determination of the class is only one of the aspects of the overall task. In general, pattern recognition systems receive data in the form of 'raw' measurements which collectively form a stimulus for the generation of a 'feature' vector. Uncovering relevant attributes in the elements present within the feature vector is an essential part of such systems. An ordered collection of relevant attributes which most clearly represent the underlying features of the object is assembled into the feature vector. In this context, learning amounts to the determination of the rules of associations between the features and attributes of a pattern.
Practical image recognition systems generally contain several stages in addition to the recognition engine itself, an 'engine' that represents information processing that is realised by some converter of the information having an input and output. On input, such a system establishes information on the properties of an object. On output, the information shows which class or feature of an object is to be assigned.
When a 'system' decides on the task of classification without engaging external learning information, it is called automatic classification - 'recognition without the teacher'. The majority of algorithms for pattern recognition require the engagement of a number of computational procedures which can be provided only with high-performance computer equipment. There are two principal methods for object recognition using either a parametric or non-parametric approach. Statistical voting and alphabetic proposition methods are well known. The main disadvantage with such methods is that classes have to be clearly defined so that no overlapping is allowed. Methods based on a principle of separation and potential functions are also well known but these methods require a large amount of training data or preliminary information on the 'object information' which makes the recognition process less flexible. The present application considers objects from the point of view of a superposition of global scenery and the problem is compounded in how one can evaluate an object in terms of it being part of the 'bigger picture' without losing specific details on its particular texture for precise recognition. The present application proposes an approach to object detection in an image scene that is based on a new segmentation algorithm using a Contour Tracing Algorithm and a Space Oriented Filter. Because some parts of the image need enhancement, a novel self-adjustable filter for isolated feature sharpening has been developed within the context of the application considered. This technique may be used in association with other image analysis tools, and is thus not restricted to the methods described herein. Once an object has been segmented it may then be analyzed in terms of metrics derived from both a Euclidean geometric and textural perspective, the output fields being used to train a fuzzy inference engine and the recognition structures being based on technologies for image processing, analysis and machine vision. There are numerous applications for this technique where self-calibration and learning are mandatory. In addition to the application discussed herein, example applications may include remote sensing, non-destructive evaluation and testing and many other applications which specifically require the classification of objects that are textural or semi-textural.
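As an indicative sketch of the kind of Euclidean geometric and textural metrics that might populate such a feature vector, the example below computes a handful of simple measures for one segmented object. The particular metrics, the scikit-image calls and the 32-bin entropy estimate are assumptions made for illustration, not the application's own feature set.

```python
import numpy as np
from skimage import measure

def object_features(mask, image):
    """Assemble a feature vector of geometric and textural metrics for one
    segmented object.  mask: boolean array marking the object; image: the
    grey-level image it was segmented from."""
    props = measure.regionprops(mask.astype(int), intensity_image=image)[0]
    pixels = image[mask].astype(float)
    counts, _ = np.histogram(pixels, bins=32)
    p = counts[counts > 0] / counts.sum()
    entropy = float(-np.sum(p * np.log2(p)))             # textural 'busyness'
    circularity = 4.0 * np.pi * props.area / (props.perimeter ** 2 + 1e-9)
    return np.array([props.area,                         # geometric metrics
                     props.perimeter,
                     props.eccentricity,
                     circularity,
                     pixels.mean(),                      # textural metrics
                     pixels.std(),
                     entropy])
```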
Application Specific Description
Cytological cells may be prepared, for example, using the THINPREP™ technology where the cells are 'fixed' on a slide 10 of the exemplary type shown in Figure 1.
The cell sample is fixed as a monolayer within the circle (normally coloured red) as shown in Figure 1. Each slide has a unique Identification Number 12. An OCR (Optical Character Recognition) system controls the 'order' and identification of individual slides. The slides are stored in a cartridge and loaded into an optical microscope. In the research undertaken with regard to this article, an Olympus BX51 microscope has been used. A motorised stage may be employed to load the slides in an automated/semi-automated fashion. The principal purpose of the image recognition system described herein is to return the relative coordinates that define the location of a target region, for example the location of a suspect cell or more specifically, the nucleus of that cell. For this purpose each slide needs to be calibrated so that features are linked to a reference co-ordinate system for the slide. In the case of a THINPREP™ slide, this is achieved by locating a number of standard features on the slide which may be used to align the slide and co-ordinate system. In the case of a THINPREP™ slide, three (normally blue) rings as shown in Figure 1, once identified, provide reference locations on which a reference co-ordinate system may be aligned. However, other optical and non-optical features may equally be employed to achieve the same result.
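A minimal sketch of how the located reference rings might be used to tie detected features to the slide's reference co-ordinate system is given below. The affine least-squares fit, the NumPy implementation and the function names are assumptions made for illustration rather than the calibration routine actually used.

```python
import numpy as np

def calibrate_slide(ring_pixels, ring_reference):
    """Fit an affine map from image/stage co-ordinates to the slide's
    reference co-ordinate system using the three located reference rings.
    ring_pixels and ring_reference are (3, 2) arrays of (x, y) positions."""
    src = np.asarray(ring_pixels, dtype=float)
    dst = np.asarray(ring_reference, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])          # [x, y, 1] rows
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                                          # 3x2 affine matrix

def to_slide_coords(coeffs, points):
    """Map pixel co-ordinates of detected targets into slide co-ordinates."""
    pts = np.asarray(points, dtype=float)
    A = np.hstack([pts, np.ones((len(pts), 1))])
    return A @ coeffs
```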
Image acquisition depends on the technology that is best suited for integration with a particular application. For pattern recognition in histopathology, for example, high fidelity digital images are employed for image analysis whose resolution is compatible with the image acquisition equipment used for human inspection, e.g. an optical microscope. In a conventional sense, such systems provide an arrangement in which the optical viewer may be replaced by an optical camera, for example using a 'C' mount adapter. Equally, the system may be designed using fully digital or hybrid digital technologies to provide images that are inherently digital. In general the images acquired are relatively noise free and are digitized using a standard CCD camera.
Typically, the cells obtained via a cervical smear do not cover the whole surface area within the sample region. Accordingly, to reduce the areas to be analysed a low(er) magnification lens may be used to segment regions of interest which include cell clusters where there is a high population density of cells suitable for more detailed inspection. The focus is on the identification of the edge features which is an important component of cell recognition, in general. This identification provides information on the basic topology of a feature from which an interpretative match can be achieved. Some edges can be detected only in terms of a representative view of a whole image and have no connection with local pixels. Nevertheless, the segmentation of an image into a complex of edges is a useful prerequisite for object identification and the solution requires an analysis of the whole scene.
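The following sketch illustrates one way low(er) magnification images might be screened for dense cell clusters before detailed inspection. The Otsu threshold, the assumption that cells appear darker than the background, the tile size and the density cut-off are all illustrative assumptions rather than part of the disclosed method.

```python
import numpy as np
from skimage.filters import threshold_otsu

def dense_regions(low_mag_image, tile=128, min_fraction=0.05):
    """Return (row, col) origins of tiles whose foreground (cell) fraction
    exceeds min_fraction; only these regions are revisited in detail."""
    t = threshold_otsu(low_mag_image)
    foreground = low_mag_image < t        # cells assumed darker than background
    keep = []
    h, w = foreground.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            if foreground[y:y + tile, x:x + tile].mean() > min_fraction:
                keep.append((y, x))
    return keep
```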
Although many low-level processing methods can be applied for this purpose, the problem is to decide which object boundary each pixel in an image falls within and which high-level constraints are necessary.
Consider an image which is given by a function f(x, y) which contains some object described by a set (a feature vector that may be composed of integer, floating point and strings) S = {s1, s2, ..., sn}. We define a sample which is somewhat 'close' to this object. This task can be reduced to the construction of some function determining the degree of proximity of the object to a sample - a template of the object. Recognition is the process of comparing individual features against some pre-established template subject to a set of conditions and tolerances. The process of recognition commonly involves four definable stages: (i) image acquisition and filtering (as might be required for the removal of noise, for example); (ii) object location (which may include edge detection); (iii) measurement of object parameters; (iv) object class estimation.
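The four stages listed above can be pictured as a simple processing chain. The sketch below is a generic skeleton only, with placeholder callables standing in for whichever filtering, location, measurement and classification routines are actually used; the class and attribute names are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class RecognitionPipeline:
    """Skeleton of the four definable stages described above; each stage is
    supplied as a callable so that any concrete filter, detector, measurement
    or classifier can be plugged in."""
    acquire_and_filter: Callable[[Any], Any]        # (i)  e.g. noise removal
    locate_objects: Callable[[Any], List[Any]]      # (ii) e.g. edge detection
    measure: Callable[[Any, Any], Any]              # (iii) object parameters
    classify: Callable[[Any], Any]                  # (iv) object class estimation

    def run(self, raw_image):
        image = self.acquire_and_filter(raw_image)
        objects = self.locate_objects(image)
        return [self.classify(self.measure(image, obj)) for obj in objects]
```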
For the current application, object (cell) location is undertaken via the computation of a set of weight coefficients that, for each pixel, are defined in terms of the equation

[equation given in the original application as image imgf000010_0001]

where

[equation given in the original application as image imgf000010_0002]

and * denotes the convolution integral (over both x and y), the matrix values being user defined. This result yields a local dependency between the current pixel f(m, n) and the object pixels, the global evaluation being determined by P_obj(x, y), which is the probability that the pixel could be a part of an object.
This probability may be calculated from a Fuzzy Logic Membership Function which has a feedback to the current object location. The function P_obj(x, y) is a two-dimensional matrix and recalculates local values dynamically using the object table location f(m, n). The construction of this matrix is based on the following procedures:
1. The intensity level of the object(s) is computed. This level uses only those pixels which have not been recognised as a part of the object. The object level, denoted by L_obj, is initially set to be lower than the background level L_bgr and, as the recognition process continues, when L_obj = L_bgr all objects are recognised as having been indexed according to the equation −mean[ f(x, y) − f(m, n) ].
2. In order to obtain L_obj, a probabilistic min-max equation is employed, given by

L_obj = L_x if L_x ≤ L_y, and L_obj = L_y otherwise,

where L_x and L_y are min-max statistics of f(x, y) (the full expressions are given in the original application as image imgf000011_0001 and accompanying text).
For reasons of simplicity, we do not include in this equation that component which is responsible for dividing those previously defined objects in f(m, n). For more complex images, a filter may be used to restrict a region of interest (ROI) a priori depending on the light conditions and point of 'evaporation' when the light can be so bright that one is unable to record the object because of luminance saturation.
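Since the governing equations are only available as images in the original document, the following sketch merely illustrates the general idea of the procedure: an object level kept below the background level, re-estimated from pixels not yet recognised as object, combined with positive feedback from the current object location. The specific scoring formula, the feedback weighting and the function names are assumptions and are not the application's equations.

```python
import numpy as np

def object_membership(image, recognised_mask, feedback_weight=0.5):
    """Illustrative per-pixel membership P_obj(x, y).  The object level is
    kept below the background level and both are re-estimated from pixels
    not yet recognised as object; pixels already accepted feed back into the
    score, mimicking the feedback to the current object location."""
    unrecognised = image[~recognised_mask].astype(float)
    if unrecognised.size == 0:
        return recognised_mask.astype(float)
    l_bgr = unrecognised.max()              # background level (assumed bright)
    l_obj = unrecognised.min()              # object level starts below it
    span = max(l_bgr - l_obj, 1e-9)
    closeness = np.clip((l_bgr - image.astype(float)) / span, 0.0, 1.0)
    support = recognised_mask.astype(float) # feedback from the current object
    return (1.0 - feedback_weight) * closeness + feedback_weight * support
```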
The approach presented herein is generic in that it can, in principle, be applied to any type of imaging modality and thus is not to be construed as being limited to application in the detection of abnormalities in PAP smear tests. The system developed for the application of cell location includes features that are based on the textural properties of an image which is an important theme in the field of pattern analysis for biophotonics. With PAP smear tests, it is the cell nuclei that generally need to be identified and are the principal regions of interest. This is because, in the application of Raman spectroscopy, the cell nucleus is the principal target area. In performing a test on a sample and in the process of cell nuclei recognition, an index is determined for each cell, with the index ranging from one value, indicating a high probability of abnormality, to the opposite end of the scale, where there is a low probability of abnormality. Raman spectrograms are then generated for targets with the highest probability of abnormality thereby confirming or otherwise the condition of the cell. The spectrograms may then be analysed to identify whether the cell nucleus is abnormal or otherwise. It will be appreciated by those skilled in the art that a variety of different techniques may be employed to perform this. For example, in demonstrating the effectiveness of the technique the inventors of the present application require a significant set of both abnormal and normal spectrograms. This set is employed to train an Artificial Intelligence algorithm. Once trained, the algorithm can be proven to produce an indication with a high degree of accuracy as to whether a given cell nucleus is normal or abnormal.
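A minimal sketch of this ranking step might look as follows, assuming each detected nucleus carries a slide co-ordinate and an abnormality index in [0, 1]; the data layout, the threshold value and the function name are illustrative assumptions.

```python
def raman_targets(nuclei, threshold=0.7):
    """nuclei: iterable of dicts with 'centre' (slide co-ordinates) and
    'index' (probability of abnormality in [0, 1]).  Returns the centres to
    pass to the Raman spectroscopy stage, most suspicious first."""
    candidates = [n for n in nuclei if n["index"] >= threshold]
    candidates.sort(key=lambda n: n["index"], reverse=True)
    return [n["centre"] for n in candidates]

# Example: spectra would be acquired for (10.2, 4.7) before (3.1, 8.8).
targets = raman_targets([{"centre": (10.2, 4.7), "index": 0.93},
                         {"centre": (3.1, 8.8), "index": 0.71},
                         {"centre": (6.0, 1.2), "index": 0.12}])
```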
The method used for targeting the cell nuclei is discussed in the following section. The method is used with Liquid Based Cytology (LBC) but may equally well have application with other techniques and is not to be understood as being limited to this method. In LBC, a clinician takes a sample in the same way as in a PAP test, but using a very small brush instead of a spatula. The head of the brush is then broken off and immersed in a small vessel of liquid instead of smearing the sample directly onto a slide. This approach is better at preserving the cells and so the results of the test have been shown to be generally more reliable. At present, about one in twelve PAP smears have to be repeated because the results are inconclusive due to poor readability.
There are two principal types of cervical cancer: Squamous cell cancer and Adenocarcinoma. They are named after the type of cell that becomes cancerous. Squamous cells are the flat skin-like cells that cover the surface of the cervix. This is the most common type of cervical cancer. Adenocarcinoma cells are glandular cells that produce mucus. The cervix has these glandular cells along the inside of the passageway that runs from the cervix to the womb (the endocervical canal). Adenocarcinoma is the cancer of these cell types. It is less common than squamous cell cancer, but has become more commonly recognised in recent years. Only about one in five to one in ten cases of cervical cancer are adenocarcinoma and are associated with a similar precancerous phase. It is treated in the same way as squamous cell cancer of the cervix. The present application employs a 'depth of focus' technique to assist in cell nuclei identification. In a simplistic three-dimensional sense, most cells consist of some basic generic features which may be classified. These features include: (i) The Cytoplasm which has a relatively flat textural surface whereas the Nucleus is not flat but has relatively significant depth.
(ii) The border between the Nucleus and the Cytoplasm has a distribution of textures in depth, i.e. a border pattern with changes in its textural properties as a function of depth.
The present inventors have realised that images of this 'depth dependence' may be acquired by considering the same scene at different focal depths. This observation may be understood with reference to Figures 2, 3 and 4 which show three images of the same cell(s) at different depths of focus (in a 'lower', 'mid' and 'upper' plane).
It should be appreciated that the differences in the distribution of texture that take place as the depth of focus is changed may be used to isolate the nucleus from the Cytoplasm, arising from the difference in the generic three-dimensional properties of a cell. Thus, to isolate the nucleus, two sets of Membership Functions are constructed, one each for the cell and the cell nucleus. Based on these functions, a cell nucleus can be identified from its surroundings. The method for achieving this will now be explained in greater detail.
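One simple way to picture the two membership sets is sketched below: pixels whose local high-frequency detail varies strongly across the focal-depth stack are treated as nucleus candidates, while pixels that stay flat are treated as cell/cytoplasm. The depth-variation measure, the Gaussian detail filter and the quantile threshold are assumptions made for illustration and are not the Membership Functions defined in the application.

```python
import numpy as np
from scipy import ndimage

def depth_texture_map(stack, sigma=2.0):
    """stack: 3-D array (depth, rows, cols) of the same scene at different
    focal depths.  Measures, per pixel, how much the local high-frequency
    detail changes with depth; the nucleus (which has depth) changes far more
    than the flat cytoplasm."""
    detail = np.stack([img - ndimage.gaussian_filter(img, sigma)
                       for img in stack.astype(float)])
    return detail.std(axis=0)

def nucleus_candidates(stack, quantile=0.9):
    """Crude separation of 'nucleus' membership from 'cell' membership by
    thresholding the depth-variation map."""
    variation = depth_texture_map(stack)
    return variation > np.quantile(variation, quantile)
```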
The present method considers a three-dimensional image. This three-dimensional image is obtained by changing the focal plane between images to yield a sequence of images obtained of the same scene but at different focal depths. This approach considers the three-dimensional cell structure which, in general, may not be considered to be of clinical significance, at least, in a conventional sense, but nonetheless differentiates between the three-dimensional nature of the cell nucleus relative to the flatness of the Cytoplasm as discussed below. Consider a three-dimensional array composed of a set of images (of the type given in Figures 2, 3 and 4, for example) which may be denoted by a function f(x, y, z). This set of images may be said to contain some object described by a set of features S = {s1, s2, ..., sK}. Consider the case when it is necessary to define a sample which is similar to this object in terms of a matching set. A conventional method consists of calculating some function of a point-wise coincidence between the map of the object and the image together with a search for the maximum value of this function. In terms of a 'similarity function', this method can be represented in terms of metrics that include the sum of square deviations, the sum of the modulus of deviations or as a pair of sum of multiplications of brightness values (function of the greatest transparency), for example. The first two similarity functions compute the 'smallness' of a functional pair (instead of searching for a maximum a search is launched to obtain the minimum). However, in this application, not all fragments of a nucleus' edge are equally important and hence, a broadly distributed functional evaluation matched with weighted coefficients is undertaken. The selection of weight coefficients is calculated from a given set of samples with two fuzzy logic sets for the nucleus and for the cell.
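For concreteness, the first two similarity functions mentioned above (the sum of squared deviations and the sum of the modulus of deviations), together with an optional weighting, might be written as follows; the exhaustive minimum search and the function names are illustrative assumptions.

```python
import numpy as np

def ssd(patch, template, weights=None):
    """Sum of squared deviations; smaller means more similar."""
    w = np.ones_like(template, dtype=float) if weights is None else weights
    return float(np.sum(w * (patch - template) ** 2))

def sad(patch, template, weights=None):
    """Sum of the modulus (absolute value) of deviations."""
    w = np.ones_like(template, dtype=float) if weights is None else weights
    return float(np.sum(w * np.abs(patch - template)))

def best_match(image, template, metric=ssd):
    """Exhaustive search for the position minimising the chosen metric."""
    th, tw = template.shape
    h, w = image.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            d = metric(image[y:y + th, x:x + tw].astype(float),
                       template.astype(float))
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos, best
```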
Normally, fuzzy logic systems for image analysis provide a decision using a knowledge database by describing different edges. In this application the nucleus edge is distributed only in depth and the fuzzy set is defined such that it is not necessary to use a positive feedback learning procedure for second stage object modelling. The computation of a particular value of the Membership Function P_obj(x, y, z) is obtained according to the equation

P_obj(x, y, z) = ∫∫∫_xyz ( a (L_obj − L_bgr) + edge_xyz ) dx dy dz

for a closed border of the object, a schematic diagram being given in Figure 5.
The function edge_xyz is an edge detection function. In Figure 5, and by way of example only, the maximum value of P_obj(x, y, z) corresponds to the top surface of the nucleus. However, this maximum value may change for different cells because not all cells are fixed in a single layer. Accordingly, it may be necessary to undertake a search for this local maximum. For a three-dimensional polygon, the local maximum can occur within the Cytoplasm and thus the object segmentation function is limited to the locality of a particular region of interest. Irrespective of the area allocation, the algorithm is applied recursively until nuclei fail to be detected. The shapes of these nuclei (as fixed on a slide) are not continuous and so it is not possible to develop a model based on a deterministic approach. However, Fuzzy Logic is well suited to this application.
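The sketch below illustrates, under stated assumptions, how a discrete analogue of P_obj(x, y, z) could be evaluated plane by plane within a region of interest, the plane giving the (local) maximum being taken as the top surface of the nucleus. The Sobel-based edge function, the boolean mask roi and the parameters alpha and l_bgr are assumptions introduced for this example and are not taken from the patent text.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map(img):
    """Gradient-magnitude edge detection (a stand-in for the edge_xyz function)."""
    img = img.astype(float)
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def membership_per_plane(stack, roi, alpha, l_bgr):
    """Discrete analogue of P_obj: for each focal plane, sum
    alpha * (L_obj - L_bgr) + edge over the region of interest."""
    scores = []
    for plane in stack:
        l_obj = plane[roi].astype(float)       # brightness inside the candidate region
        edges = edge_map(plane)[roi]           # edge response inside the same region
        scores.append(np.sum(alpha * (l_obj - l_bgr) + edges))
    return np.asarray(scores)

# The plane with the (local) maximum score is taken as the top surface of the
# nucleus for this region of interest; alpha and l_bgr are illustrative values.
# top_plane = int(np.argmax(membership_per_plane(stack, roi, 0.5, stack.mean())))
```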
We use the approach discussed in the previous section to allocate a Membership Function which provides a fast and reliable solution based on changing the focal depth over a number of consecutive steps generated by a stepping motor. We consider the function

T = 2π(R_{n+1} − R_n) / ω_{n+1}

where R_n is the radius at step n and R_{n+1} is the radius at step n+1, computed by taking the average length from the Centre-of-Gravity to the edge of the nucleus segmented using the edge detection function. In correspondence, ω_n is the depth of focus for step n and ω_{n+1} is the equivalent at the next consecutive step. By computing

dZ = γ T / V(L_bgr)

where γ is a correction coefficient specific to the stepping motor (obtained by calibration of the microscope), the point when dZ < Threshold is taken to represent the upper bound of the nucleus. The value of the threshold and the step lengths need to be established experimentally so that they are within the bounds on the extent of the nucleus.

The size and shape of the upper surface of the nucleus may then be mapped on to a reference image for morphological processing. This may be achieved, for example, using the 'Detour by Object Contour' and the 'Convex Hull Spider' algorithms presented in J. M. Blackledge and D. Dubovitsky, Object Detection and Classification with Applications to Skin Cancer Screening, ISAST Transactions on Intelligent Systems, Vol. 1, No. 1, pp. 34-45, 2008. These filters are used to generate a uniformly closed boundary which functions to define the edge of the cell nucleus, i.e. an 'edge detection function'. The 'Centre-of-Gravity' of the closed boundary is taken to be the centre of the nucleus. The coordinates of this determined 'Centre-of-Gravity' may be employed to target the point from which a Raman spectrum may be generated. The result is illustrated in Figure 6, which shows a screen shot from a GUI used for targeting a cell nucleus based on the approach discussed herein.
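A minimal sketch of this stepping procedure is given below, assuming the reconstructed reading of the two formulas above. The segmentation routine segment_nucleus, the correction coefficient gamma, the quantity v_lbgr (standing in for V(L_bgr)) and the threshold are all assumptions supplied by the caller; as the text notes, suitable values would need to be established experimentally.

```python
import numpy as np

def centre_of_gravity(mask):
    """Centre of the closed nucleus boundary; used as the Raman target point."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

def mean_radius(mask):
    """Average distance from the Centre-of-Gravity to the segmented nucleus edge."""
    cy, cx = centre_of_gravity(mask)
    ys, xs = np.nonzero(mask)
    return float(np.hypot(ys - cy, xs - cx).mean())

def upper_bound_of_nucleus(stack, omegas, segment_nucleus, gamma, v_lbgr, threshold):
    """Step through consecutive focal depths; when dZ falls below the threshold,
    take that step as the upper bound of the nucleus and return its index together
    with the Centre-of-Gravity coordinates to be targeted for Raman spectroscopy."""
    prev_radius = None
    for n, (plane, omega) in enumerate(zip(stack, omegas)):
        mask = segment_nucleus(plane)          # closed-boundary segmentation (assumed)
        radius = mean_radius(mask)
        if prev_radius is not None:
            t = 2.0 * np.pi * (radius - prev_radius) / omega
            dz = gamma * t / v_lbgr
            if dz < threshold:
                return n, centre_of_gravity(mask)
        prev_radius = radius
    return None                                # no upper bound found within the stack
```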

Claims

1. A method of screening a sample to identify potential abnormalities comprising the basic steps of:
(a) using image analysis to identify cell nuclei in the sample, wherein said image analysis comprises the initial step of obtaining a plurality of images at different focal planes and applying at least one membership function to determine a preferred focal plane in which the upper surfaces of cell nuclei are present;
(b) using at least one image acquired of the sample at that focal plane for subsequent morphological image analysis in which cell nuclei are identified;
(c) determining from said subsequent morphological image analysis a likelihood value for the presence of an abnormality in the identified cell nuclei, and
(d) performing Raman spectroscopy on those cell nuclei whose likelihood value exceeds a predetermined threshold.
2. A method according to claim 1, wherein the sample is fixed.
3. A method according to claim 1 or claim 2, wherein the sample is a non-stained sample.
4. A method according to any preceding claim, wherein the membership function is applied at each of the different focal planes to determine a local maximum.
5. A method according to any preceding claim, wherein a plurality of images are obtained of different regions of the sample at the preferred focal plane after it has been determined and these subsequently obtained images are those which are processed by the morphological image analysis.
6. A system for analyzing a sample using the method of any preceding claim, the system
comprising:
a controller,
a stage for receiving the sample,
an imaging device for acquiring a sequence of images of the sample,
a Raman spectroscopy device for obtaining a spectrogram of a point of interest in said sample, wherein the controller is adapted to identify the point of interest in said sample from the sequence of images acquired from the sample, where the system is configured to allow images to be taken of the sample at different focal depths and where a plurality of images of the same scene taken at different focal depths are employed in the determination of a focal depth at which to analyse for potential points of interest.
PCT/EP2013/070495 2012-10-02 2013-10-01 Targeting cell nuclei for the automation of raman spectroscopy in cytology WO2014053520A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1217633.5A GB201217633D0 (en) 2012-10-02 2012-10-02 Targetin cell nuclei for the automation of raman spectroscopy in cytology
GB1217633.5 2012-10-02

Publications (1)

Publication Number Publication Date
WO2014053520A1 true WO2014053520A1 (en) 2014-04-10

Family

ID=47225566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2013/070495 WO2014053520A1 (en) 2012-10-02 2013-10-01 Targeting cell nuclei for the automation of raman spectroscopy in cytology

Country Status (2)

Country Link
GB (1) GB201217633D0 (en)
WO (1) WO2014053520A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11439327B2 (en) 2019-06-18 2022-09-13 Samsung Electronics Co., Ltd. Apparatus and method for measuring Raman spectrum
EP3430565B1 (en) * 2016-03-18 2023-10-25 Leibniz-Institut für Photonische Technologien e.V. Method for testing distributed objects by segmenting an overview image

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JEROEN A.M. BELIEN ET AL: "Confocal DNA cytometry: A contour-based segmentation algorithm for automated three-dimensional image segmentation", CYTOMETRY, vol. 49, no. 1, 26 August 2002 (2002-08-26), pages 12 - 21, XP055091960, ISSN: 0196-4763, DOI: 10.1002/cyto.10138 *
JONATHAN BLACKLEDGE ET AL: "Dublin Institute of Technology An Optical Machine Vision System for Applications in Cytopathology", ISAST TRANSACTIONS ON COMPUTERS AND INTELLIGENT SYSTEMS, 1 January 2011 (2011-01-01), pages 95 - 109, XP055091574, Retrieved from the Internet <URL:http://arrow.dit.ie/cgi/viewcontent.cgi?article=1024&context=engscheleart2> [retrieved on 20131204] *
JONATHAN BLACKLEDGE ET AL: "Object Detection and Classification with Applications to Skin Cancer Screening", ISAST TRANSACTIONS ON INTELLIGENT SYSTEMS ISAST TRANSACTIONS ON INTELLIGENT SYSTEMS, 1 January 2008 (2008-01-01), pages 34 - 45, XP055091532, Retrieved from the Internet <URL:http://arrow.dit.ie/cgi/viewcontent.cgi?article=1038&context=engscheleart2> [retrieved on 20131204] *
JONATHAN BLACKLEDGE ET AL: "Targeting Cell Nuclei for the Automation of Raman Spectroscopy in Cytology", 4 December 2013 (2013-12-04), pages 1 - 10, XP055091489, Retrieved from the Internet <URL:https://web.archive.org/web/20131204113917/http://eleceng.dit.ie/papers/250.pdf> [retrieved on 20131204] *
SALIM J ATTIA ET AL: "Diagnosis of Breast Cancer by Optical Image Analysis", IRISH SIGNALS AND SYSTEMS CONFERENCE ISSC2012, NUI MAYNOOTH, 28 June 2012 (2012-06-28), http://arrow.dit.ie/engscheleart/192/, pages 1 - 7, XP055091485, Retrieved from the Internet <URL:http://arrow.dit.ie/cgi/viewcontent.cgi?article=1196&context=engscheleart> [retrieved on 20131204] *
UMESH ADIGA P S ET AL: "SEGMENTATION AND COUNTING OF FISH SIGNALS IN CONFOCAL MICROSCOPY IMAGES", MICRON, PERGAMON, OXFORD, GB, vol. 31, no. 1, 1 February 2000 (2000-02-01), pages 5 - 15, XP001024089, ISSN: 0968-4328, DOI: 10.1016/S0968-4328(99)00057-8 *

Also Published As

Publication number Publication date
GB201217633D0 (en) 2012-11-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13771479

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13771479

Country of ref document: EP

Kind code of ref document: A1