WO2010139697A1 - Image analysis - Google Patents

Image analysis

Info

Publication number
WO2010139697A1
Authority
WO
WIPO (PCT)
Prior art keywords
cells
classified
scoring
module
image
Prior art date
Application number
PCT/EP2010/057651
Other languages
French (fr)
Inventor
Yuriy Alexandrov
Simon Laurence John Stubbs
Original Assignee
Ge Healthcare Uk Limited
Priority date
Filing date
Publication date
Application filed by Ge Healthcare Uk Limited filed Critical Ge Healthcare Uk Limited
Priority to JP2012513601A priority Critical patent/JP2012529020A/en
Priority to EP10726031A priority patent/EP2438553A1/en
Priority to US13/375,273 priority patent/US8750592B2/en
Priority to CN2010800249300A priority patent/CN102449639A/en
Publication of WO2010139697A1 publication Critical patent/WO2010139697A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V10/945 User interactive design; Environments; Toolboxes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/40 Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695 Preprocessing, e.g. image segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

A first aspect of the invention relates to an apparatus (100) for genotoxicological screening. The apparatus (100) comprises a processor (114) for analysing images. The processor (114) is configured to provide an identifier module (115) for identifying target cells in an image and a dynamically modifiable classifier module (116) for classifying the identified cells in accordance with one or more phenotype, such as micronuclei, for example. The processor (114) is also configured to provide a scoring module (117) for assigning respective confidence measurements to the classified cells. Various aspects and embodiments of the invention may be used, for example, to provide for improved reliability and accuracy when performing automated high-throughput screening (HTS) drug assays.

Description

Image Analysis
Field
The present invention relates generally to image analysis. More particularly, the present invention relates to apparatus and methods for improved automated image analysis to identify the presence of certain phenotypes, such as, for example, micronuclei, that might be seen in various images.
Background
The combination of recent advances in fluorescent probe technologies, automation of epifluorescent microscopy and image analysis has enabled high-content screening (HCS) to become a useful tool in the assessment of compound toxicity. For example, the detection of micronuclei (MN) in vitro may be used as a test of genotoxicity for biomonitoring, mutagenicity testing and to assess the proficiency of DNA-repair [1].
Currently, pharmaceutical genotoxicity units are governed by regulatory requirements and in vitro micronuclei tests can have a significant impact upon late-stage developmental drugs where high costs have already been incurred. For example, the FDA/ICH currently require: i) a test for gene mutation in bacteria (Ames, or similar); ii) an in vitro test with cytogenetic evaluation of chromosomal damage (usually MN or chromosome aberration assays; MN may be used as a predictor of the mouse lymphoma assay) with mammalian cells or an in vitro mouse lymphoma assay; and iii) an in vivo test for chromosomal damage using rodent haematopoietic cells (in vivo bone marrow mouse MN). In vitro MN results can thus influence decisions regarding further downstream toxicity testing and entry into clinical trials, lead-modification or drug withdrawal.
Groups that are subject to regulatory approval adhere to guidelines and integrate a number of genotoxicity assays to ensure compliance and high confidence in detection sensitivity and specificity. Compounds are not progressed if there is evidence of genotoxicity, even where it may be questionable and there is no knowledge of the interaction mechanism. Accuracy and precision of MN scoring are paramount to provide a sensitive, specific solution. Whilst various conventional systems [2, 3] can help such groups with the screening processes, more current conventional systems and methods [4] may be highly dependent upon expert user input to help classify whether or not various image features are, for example, micronuclei.
Other automated systems for classifying biological specimens are known in the prior art. Lee et al. [5] describes an automated microscope system comprising a computer and a high-speed processing field processor to identify free-lying cells. Long et al. [6] relates to algorithms that automatically recognise viable cells. The document discloses a method of identifying and localizing objects belonging to one of three or more classes, including deriving vectors, each being mapped to one of the objects, where each of the vectors is an element from an N-dimensional space. The method includes training an ensemble of binary classifiers with a CISS technique, using training sets generated with an ECOC technique. Rutenberg [7] describes an automated cytological specimen classification system and method for increasing the speed and accuracy of cervical smear analysis. However, all of the above systems and methods [5, 6 and 7] are "pre-trained" in the sense that classification is based on pre-set criteria and thus the systems and methods do not improve with use.
Accordingly, there is a need to provide improved systems and methods that can more rapidly and accurately identify potential significant features of interest, for example, in an automated screening process or device. In particular, there is a need for improved systems and methods which are dynamically modifiable in the sense that they use algorithms which continuously learn, through for example human intervention or from other processors, and thus become more accurate with time.
Summary
The present invention has thus been devised whilst bearing the above-mentioned drawbacks associated with conventional devices and techniques in mind.
According to a first aspect of the present invention, there is thus provided an apparatus for genotoxicological screening. The apparatus comprises a processor that is configured to provide an identifier module for identifying target cells in an image, a classifier module for classifying the identified cells in accordance with one or more phenotype, and a scoring module for assigning respective confidence measurements to the classified cells.
According to a second aspect of the present invention, there is provided a method for classifying cells in an image according to one or more phenotype. The method comprises identifying candidate target cells in an image, classifying the identified target cells according to one or more phenotype, and scoring the classified cells.
Certain aspects and embodiments of the present invention also provide various computer program products for configuring a data processing apparatus to implement various of the functions needed to provide an apparatus or method in accordance with the aforementioned first and second aspects of the present invention.
As described in further detail below, various aspects and embodiments of the present invention are able to improve the accuracy of automated feature identification, e.g. in HCS, by substantially reducing the number of false negative identifications that are detected in an image. For example, when screening images to detect micronuclei indicative of potentially toxic drug compounds such false negative readings may mean that a possibly useful compound is ruled out from further testing by error. Reduction of the number of false negative identifications is thus highly desirable.
Brief description of the drawings
Various aspects and embodiments of the present invention will now be described in connection with the accompanying drawings, in which:
Figure 1 shows an apparatus for genotoxicological screening in accordance with an embodiment of the present invention; and
Figure 2 shows a method for classifying cells in an image according to one or more phenotypes in accordance with various aspects and embodiments of the present invention.
Detailed description
Figure 1 shows an apparatus 100 for genotoxicological screening in accordance with an embodiment of the present invention. Genotoxicological screening may, for example, be provided by identifying the presence or absence of one or more phenotypes, such as micronuclei, that may be present in images of treated cells. The apparatus 100 may also be used, for example, for automated high-content screening (HCS) and/or high-throughput screening (HTS). Micronuclei are small nuclei containing chromatin, separate from and additional to the main nuclei of cells, produced during telophase of mitosis (or meiosis) by lagging (slower motility during chromosome segregation) chromosome fragments or whole chromosomes.
The apparatus 100, which is illustrated schematically for clarity, comprises a light source 102 for producing light 120a. The light 120a is focussed by a condenser 104 onto a test plate 108. The test plate 108 may contain an array of wells or spots 109 to be imaged. The condenser 104 can focus the light 120b in a focal plane at the test plate 108. The test plate 108 may be provided as a consumable product, and the spots 109 might contain various materials that are able to interact with certain types of cells (e.g. mammalian cells).
In various embodiments, the test plate 108 may comprise at least one fiducial marker (not shown) provided to aid in aligning the test plate 108 within the apparatus 100. For example, one or more coloured dyes may be provided within the spots 109. Such coloured dyes can be identified by various imaging systems in order to derive data relating to the relative positioning of the test plate 108 within the apparatus 100. For example, the apparatus 100 may include a GE In-Cell Analyzer 1000™ that is commercially available from GE Healthcare Life Sciences, Little Chalfont, Buckinghamshire, U.K., and which can use four colour channels to image the test plate 108. One colour channel may thus be dedicated to imaging coloured fiducial markers provided in various of the spots 109 in order to obtain data relating to the positioning of the test plate 108 within the apparatus 100. The apparatus 100 also contains a detector system 112 and a translation mechanism (not shown). The translation mechanism is configured to move the focus of the light 120b relative to the test plate 108 (e.g. by moving the test plate 108 in the x-y plane). This enables a plurality of images to be acquired from respective individual spots 109. Additionally, the translation mechanism may also be operable to move the test plate 108 in the z-direction shown in Figure 1, for example, in order to bring the spots 109 into focus.
For certain embodiments, only one spot is imaged at a time. The images acquired are of sufficient magnification to resolve cells and sub-cellular morphology. With the current GE In-Cell Analyzer 1000™, this may entail use of a 20x objective, the field of view of which is slightly smaller than a single spot. However, various methods of the invention would also work for lower-power magnification imaging, e.g. on the GE In-Cell Analyzer 1000™ using a 4x objective to image 4-6 spots/image.
An aperture stop 106 is optionally provided between the light source 102 and the detector system 112, the size of which may be variable. For example, various differently sized movable apertures may be rotated into position or a continuously variable iris-type diaphragm may be provided. Image contrast can be controlled by changing the aperture setting of the aperture stop 106.
Focussed light 120b passing through the aperture stop 106 passes through the sample test plate 108 in a transmission imaging mode. Emergent light 120c modulated with image information relating to material adjacent to an individual spot 109 is collected by an objective lens 110 and focussed 120d onto the detector system 112, and is used to form an original image for that spot 109.
Various embodiments of methods of the present invention are independent of the imaging modality used, e.g. they can operate with transmission or reflection geometry. For GE In-Cell Analyzer 1000™ imaging, an epi-fluorescence mode may be used, with both the fiducial marker spots and the assay signals from the cells being imaged at different excitation and emission wavelengths. However, there is nothing in principle to prevent a mix of imaging modes being deployed, provided that they do not interfere. For example, it would be possible to use a non-fluorescent dye for fiducial marking and to detect the fiducial marks by absorbance in reflectance or transmission geometry, while detecting assay signals by epi-fluorescence.
The detector system 112 is operable to acquire a plurality of images from the test plate 108. For example, images may be obtained each representing different spots 109 or of the same spot 109 at different points in time. Differences between neighbouring spots 109 or temporal changes occurring within the same spot 109 can thus be analysed.
The detector system 112 is also operably coupled to a processor 114 that in turn is operable to process the images. Analysis of the images may be used to provide for genotoxicological screening. Of course, such images may be generated by the apparatus 100 itself or might be provided from storage and/or transmitted to the processor 114 from a remote location (not shown).
The processor 114 is configured to provide an identifier module 115 for identifying target cells in an image, a classifier module 116 for classifying the identified cells in accordance with one or more phenotype, and a scoring module 117 for assigning respective confidence measurements to the classified cells.
The identifier module 115 segments individual cell images from whole images, which might contain images of many such cells, to provide a set of target cells that have been identified in the images. For example, a pattern recognition or thresholding technique may be used [1, 8, 9]. Cells may be segmented and multi-parametric cellular data provided.
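The patent does not tie the identifier module to any particular algorithm or library, but as a concrete illustration a thresholding-and-labelling segmenter of the kind mentioned above might look like the following sketch. The use of scikit-image, and all function and parameter names, are assumptions made here for illustration only.

```python
# Minimal segmentation sketch: threshold a nuclei-channel image and label
# connected components as candidate cells. Illustrative only; the patent does
# not prescribe a specific algorithm or library.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def identify_candidate_cells(image: np.ndarray, min_area: int = 50):
    """Return a label image and per-object measurements for candidate cells."""
    mask = image > threshold_otsu(image)          # global Otsu threshold
    labelled = label(mask)                        # connected-component labelling
    cells = []
    for region in regionprops(labelled, intensity_image=image):
        if region.area < min_area:                # discard small debris
            continue
        cells.append({
            "label": region.label,
            "area": region.area,
            "eccentricity": region.eccentricity,
            "mean_intensity": region.mean_intensity,
        })
    return labelled, cells
```

The returned per-object dictionaries stand in for the "multi-parametric cellular data" referred to in the text; a real implementation would typically compute many more descriptors per cell.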
The classifier module 116 analyses the content of the individual identified cells. The content of the cell image is tested to determine the presence or absence of one or more phenotype. In this embodiment, the classifier module 116 checks the cell images for the presence or absence of micronuclei in accordance with an initial predetermined classification scheme. The initial classification scheme may, for example, be determined following the application of a training algorithm that is provided and shipped as part of a new GE In-Cell 1000™ apparatus [10]. Initial classification may be provided using a predetermined set of classification criteria, and cells may be annotated with graphical-cluster or multi-cell methods, for example.
Additionally, the classifier module 116 is dynamically modifiable. For example, the classifier module 116 may be operable to modify various criteria, such as threshold values and/or phenotypes that are analysed at run-time. Such a dynamically adaptable classifier module 116 can thereby evolve over time to re-classify the cell images in order to reduce the incidence of false negatives, which are highly undesirable.
Moreover, the classifier module 116 may further adapt its behaviour automatically in response to certain user input, e.g. where a user has determined that a false positive classification of a cell image has been made.
The scoring module 117 generates scores indicative of the degree of confidence that a particular imaged cell possesses the target phenotype(s). In this embodiment the scoring module 117 is operable to apply a canonical variate analysis (CVA) to determine the respective confidence measurements for respective classified cells [11]. However, in general, a confidence rating may be defined without CVA. For example, the K-NN method, which looks for the nearest patterns in a training set and arranges voting among them, may be used when classifying an unknown pattern. If K=5 and there are 3 classes (A, B, C), then a pattern that scores A=1, B=2, C=2 among its 5 neighbours is problematic to classify (B or C?). In this case, distance measures can be applied. In general, though, problematic patterns are located at the periphery of the corresponding clusters in the parameter space, and use of CVA is advantageous as it helps optimise the geometry of inter-cluster separation.
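As a rough illustration of the K-NN voting and distance-based tie-breaking just described, the sketch below uses scikit-learn's K-nearest-neighbours implementation; the specific tie-break rule (smallest mean neighbour distance) is one plausible choice made here for illustration, not something prescribed by the patent.

```python
# Sketch of K-NN voting with a distance-based tie-break, as an alternative
# confidence heuristic to CVA. Assumes scikit-learn; not the patented code.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_vote_with_tiebreak(train_X, train_y, pattern, k=5):
    """Return the winning class and the per-class neighbour distances."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_X, train_y)
    distances, indices = knn.kneighbors([pattern], n_neighbors=k)
    votes = {}
    for dist, idx in zip(distances[0], indices[0]):
        votes.setdefault(train_y[idx], []).append(dist)
    # Primary criterion: most votes; tie-break: smallest mean neighbour distance.
    winner = max(votes.items(), key=lambda kv: (len(kv[1]), -np.mean(kv[1])))[0]
    return winner, votes
```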
In this embodiment CVA may thus be employed to optimise the parameter choice and maximise separation of classes based upon initial training and/or any annotation of the image data. CVA analysis results may be reported in the form of a confidence rating which indicates the reliability of each individual cell belonging to the particular class into which it has been categorised by the initial process. Optionally, users may be given the option to highlight "questionable" cells, such as, for example: those that are not easily categorised according to a user defined set of descriptors; those that have a confidence rating lower than a certain user-defined threshold value; or in the case of micronuclei scoring all cells in a particular class e.g. those potentially containing micronuclei.
The system 100 may flag any questionable cells and request verification (e.g. re- annotation) from either a user or other processor. For example, images of such cells might "pop-up" on a display for visual user clarification of a class. The system 100 may also be operable to permit a complete re-annotation of identified cell images, for example, where there is a change in biological protocol or throughput needs are low. Additionally, verification may result in re-classification of a cell and/or definition of an alternate class.
Training information that is supplied subsequently may be incorporated into a classification routine to facilitate better informed decisions with improved confidence. Such a routine may be embodied in an algorithm that provides constantly improving decisions having both increased specificity and sensitivity. Such an algorithm may thus never need to flag the same cell more than once and hence overall processing speed also increases over time.
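A minimal sketch of how such an adaptive routine might avoid flagging the same cell twice, while folding verified annotations back into its training set, is given below. The class and method names are hypothetical, and the immediate refit is a simplification; in practice refitting might be batched or run periodically.

```python
# Sketch of an adaptive classifier that incorporates user verifications and
# never re-flags a cell it has already asked about. Names are illustrative.
class AdaptiveClassifier:
    def __init__(self, base_model, train_X, train_y):
        self.model = base_model
        self.train_X = list(train_X)
        self.train_y = list(train_y)
        self.already_flagged = set()          # cell identifiers asked about before
        self.model.fit(self.train_X, self.train_y)

    def needs_verification(self, cell_id, confidence, threshold=0.2):
        """Flag a cell only once, and only when its confidence is low."""
        return confidence < threshold and cell_id not in self.already_flagged

    def incorporate_verification(self, cell_id, features, verified_class):
        """Add a user-verified cell to the training set and refit the model."""
        self.already_flagged.add(cell_id)
        self.train_X.append(features)
        self.train_y.append(verified_class)
        self.model.fit(self.train_X, self.train_y)   # batched refit in practice
```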
In various aspects, visualisations of the training set data overlaid with "unknown" data can help in assessing the efficiency of the processing, as well as in identifying various drug effects from the analysis of the distribution of "under-rated" patterns around the training-data clusters corresponding to the pre-defined classes. In the case of presentation of unknown data with available manual scorings, for example, a matching matrix may be calculated on-the-fly and/or as a post-annotation step that defines the correlation between manual and supervised learning classification.
In certain variants of this embodiment, the processor 114 is further operable to exchange data with other similar processors and the scoring module 117 is operable to derive respective consensus confidence measurements for respective classified cells from corresponding confidence score ratings provided by a plurality of scoring modules of respective processors.
By using networked processors, for example, the combined experience of many different adaptive learning systems can automatically be combined to further improve the accuracy in identifying various phenotypes. Such experience might be weighted in an overall score combined from individual scores for each contributor determined, for example, according to how long a particular contributing machine has been operating, how many separate image analyses a specific machine has performed, how many modifications or reclassifications have been applied for a particular machine since initial training, etc.
Hence in certain embodiments, processor 114 may act as a server device and transmit requests and image data to remote devices that are not necessarily trained with the same training data. The processor 114 is then operable to determine a consensus score for classifying its localised image data thereby providing improved confidence scoring and accuracy when classifying various phenotypes of interest. Such analysis may be performed automatically, without the need for specific user input or instructions.
The aggregated score may be generated as a weighted average of analyses of the same image(s) by different processors. Data may be transmitted between processors, for example, anonymously over a virtual private network (VPN) or via the Internet using a secure socket layer (SSL) channel to prevent specific detailed information being accessible to remote users of the system or other network users.
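One way the weighted consensus described above could be computed is sketched below; the experience-based weighting scheme and the data structures are assumptions made here for illustration, not details taken from the patent.

```python
# Sketch of consensus scoring: combine per-cell confidence scores returned by
# several networked scoring modules, weighting each contributor by an
# "experience" factor (e.g. number of analyses it has performed). Illustrative only.
def consensus_confidence(scores_by_processor, experience_by_processor):
    """scores_by_processor: {processor_id: {cell_id: confidence}}."""
    consensus = {}
    for proc_id, scores in scores_by_processor.items():
        weight = experience_by_processor.get(proc_id, 1.0)
        for cell_id, conf in scores.items():
            total, weight_sum = consensus.get(cell_id, (0.0, 0.0))
            consensus[cell_id] = (total + weight * conf, weight_sum + weight)
    # Weighted average of the contributed confidence scores per cell.
    return {cell: total / wsum for cell, (total, wsum) in consensus.items()}
```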
In certain variants, additional processing is performed remotely by distributed processors as a relatively low-priority background task. This enables all users of the distributed processing system to benefit from the collective experience and processing power of all networked processors to provide improved classification scoring without unduly burdening individual processors operating locally. For example, distinct machines of a particular business, research institute, university, etc., may be linked either to each other institutionally or externally to form a global network.
Additionally, the processor 114 can be configured to control a translation mechanism (not shown) to move the focal position of the light source 102 relative to the test plate 108. The processor 114 may, for example, be provided as part of a conventional computer system appropriately programmed to implement one or more of the identifier module 115, the classifier module 116 and the scoring module 117, or may be provided by a digital signal processor (DSP), a dedicated application-specific integrated circuit (ASIC), appropriately configured firmware, etc.
Various embodiments of the invention may thus be used to screen cells to detect micro-nucleation events that are indicative of drug toxicity. Improved classification improves screening and identification of potentially toxic drug compounds. In turn, such improved screening can reduce the need to test various compounds, for example pharmaceutical compounds that need to be tested on human or animal models, by reducing the number of false negatives detected in an automated initial screening process.
In certain embodiments, the apparatus 100 as described above may be used to implement the following method that is described below in connection with Figure 2.
Figure 2 shows a method 200 for classifying cells in an image according to one or more phenotypes in accordance with various aspects and embodiments of the present invention. The method 200 may, for example, be applied for automatic classification and scoring of cell phenotypes depicted in the image.
Classes may initially be defined and training and/or annotation applied to refine class definitions. For example, cluster and/or visual multi-cell annotation may be used. A classifier may thus be built, e.g. using CVA and plots, to optimise the parameter choice and class differentiation.
The method 200 comprises a first step 202 of identifying candidate target cells in an image. Then, at a second step 204, the method 200 classifies the identified target cells according to one or more phenotype, such as micronuclei for example, before scoring the classified cells in the next step 206. In certain embodiments, the first step 202 entails identifying and segmenting the candidate target cells using pattern recognition. For example, cells may be segmented and analysed to produce multi-parametric cellular data [8, 10, 12, 13].
Classification may be performed at step 204 using a variety of pattern recognition techniques [9, 14]. For example, supervised techniques such as: linear and quadratic discriminant analysis; neural networks; K-nearest neighbours; support vector classifiers; tree-based methods; etc. and/or unsupervised methods like: k-means; self- organised maps; etc. may be used.
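For illustration, step 204 might be realised with one of the supervised techniques listed above; the sketch below assumes scikit-learn and uses linear discriminant analysis, but any of the other listed classifiers could be substituted. The feature matrix is assumed to be the multi-parametric cellular data produced at step 202.

```python
# Sketch of step 204 using one of the supervised techniques listed above
# (linear discriminant analysis); others could be swapped in. Assumes
# scikit-learn and per-cell feature vectors; not the patented implementation.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_phenotype_classifier(train_features, train_labels):
    """Fit a simple phenotype classifier on annotated training cells."""
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    clf.fit(train_features, train_labels)   # labels e.g. "normal", "micronucleated"
    return clf

# Usage (illustrative): predicted = build_phenotype_classifier(X_train, y_train).predict(X_cells)
```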
In certain embodiments, step 204 comprises deriving respective consensus confidence measurements for respective classified cells from corresponding confidence score ratings provided by a plurality of processors. Collaborative classification and/or scoring may thus be applied in order to further improve the overall accuracy of phenotype determination.
Scoring may also be provided at step 206 using a number of techniques. For example, scoring of the classified cells can be done by using a canonical variate analysis (CVA) to define a confidence rating for each respective classified cell. CVA is generally described by Tofallis [11].
Given a specific grouping (clustering) of data in a particular feature space, the CVA method applies a clustering dependent coordinate transformation [15]. The resulting effective coordinates have the same dimensionality as the original feature space, but provide maximal inter-cluster separation (i.e. minimising intra-cluster and maximising inter-cluster distances of the various patterns).
CVA can also be applied to optimise a choice of descriptors and maximise the separation of classes based upon an initial training/annotation data set. A confidence rating is then determined for each cell in the image to indicate the reliability of its belonging to the particular class into which it is categorised by the initial classification process. For example, one simple confidence rating for a categorised pattern P is the difference between decision function g(P) of the "winner" class and the next closest to it from among the other classes, such that:
$$\mathrm{ConfidenceRating}_{\mathrm{winner}}(P) \;=\; \min_{\forall k \neq \mathrm{winner}} \big[\, g_{\mathrm{winner}}(P) - g_k(P) \,\big] \qquad \text{Equation (1)}$$
where k is the running index of classes and winner is the index of the "winner" class.
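Because canonical variate analysis is closely related to multi-class linear discriminant analysis, Equation (1) can be sketched using scikit-learn's LDA decision functions as a stand-in for the CVA decision functions g_k. This substitution, and the helper below, are illustrative assumptions rather than the patented implementation.

```python
# Sketch of the confidence rating of Equation (1): the margin between the
# winning class's decision function and the runner-up. LDA is used here as a
# stand-in for CVA; this substitution is an assumption for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def confidence_ratings(train_X, train_y, cell_features):
    """Return one Equation-(1) style confidence value per row of cell_features."""
    cva = LinearDiscriminantAnalysis()
    cva.fit(train_X, train_y)
    scores = cva.decision_function(cell_features)   # one column per class
    if scores.ndim == 1:                            # binary case: expand to 2 columns
        scores = np.column_stack([-scores, scores])
    ordered = np.sort(scores, axis=1)
    # g_winner(P) minus the next-best g_k(P), per Equation (1)
    return ordered[:, -1] - ordered[:, -2]
```

A small margin indicates a cell sitting near a class boundary, which is exactly the kind of "questionable" cell the text suggests flagging for verification.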
In the illustrated embodiment, method 200 optionally comprises an additional step 208 of reclassifying the scored classified cells, for example, periodically and/or at event- driven times. Reclassification may be provided when significant changes occur to an initial training data set such that the method 200 enables cell phenotype classification accuracy to improve over time as, for example, part of a dynamically evolving adaptive learning system. Such a system may, for example, evolve automatically in response to event-driven user and/or automated input.
Reclassifying of the scored classified cells may comprise reclassifying the cells by modifying a learning algorithm training data set in response to user weighting of the scoring applied to the classified target image areas initially by the method 200. For example, specific cells may be selected and reclassified by providing confidence values for each cell, selecting those requiring reclassification by applying predetermined threshold criteria, and optionally visually verifying the identity of the cells. Any verified cells may then be incorporated into a learning algorithm classifier to improve its speed and accuracy.
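A hedged sketch of optional step 208 follows: cells whose confidence falls below a threshold are passed to a verification callback (standing in for the "pop-up" user confirmation described earlier), and any verified labels are folded back into the training data before refitting. All names and the callback interface are illustrative assumptions.

```python
# Sketch of optional step 208: select low-confidence cells for re-annotation,
# then refit the classifier with any user-verified labels. The verify_visually
# callback is a placeholder for the pop-up confirmation described above.
def reclassify_low_confidence(model, train_X, train_y, cells, confidences,
                              threshold, verify_visually):
    """cells: list of (cell_id, features, predicted_class) tuples."""
    for (cell_id, features, predicted), conf in zip(cells, confidences):
        if conf >= threshold:
            continue                                    # confident enough; keep as-is
        verified = verify_visually(cell_id, predicted)  # user (or remote) input
        if verified is not None:
            train_X.append(features)
            train_y.append(verified)
    model.fit(train_X, train_y)                         # incorporate verified cells
    return model
```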
The method 200 may be used to determine whether changes occur between various images. For example, the method 200 can be used to detect drug-induced effects over time. These effects might be qualifiable/quantifiable, and may include phenomena and processes such as changes in size parameters, necrosis, mitosis (cell splitting), etc.
The method 200 can be used for automated image analysis, e.g. in high-throughput screening (HTS) for drug assays or the like. The images used may comprise cellular data such as a microscope image. The method 200 may be applied to images that have been previously stored, transmitted etc., or may be acquired and processed "on-the-fly". The method 200 may be implemented using a processor comprising one or more of hardware, firmware and software. For example, the processor might use a conventional personal computer (PC), ASIC, DSP, etc., and/or the apparatus may include a GE In-Cell Analyzer 1000™ using GE's In-Cell Miner™ software package upgraded to implement the method 200. Additional functionality, as described herein, may also be provided by various embodiments that implement the method 200.
In various embodiments of the present invention, the method 200 can be implemented by an apparatus using various software components. Such software components may be provided to an apparatus in the form of an upgrade, for example, transmitted to the apparatus via the Internet.
Certain embodiments of the present invention may be provided which enable a plurality of users to connect to an apparatus in order that multiple users (e.g. humans and/or remote machines) can provide training data, for example, by scoring classified cells. Such embodiments enable an individual apparatus to be trained quickly with a peer-reviewed degree of confidence in the accuracy of the results. Moreover, such embodiments do not necessarily need to share the results analysed but merely training data, e.g. in the form of classification scores, etc., the former of which is often highly commercially sensitive but the latter of which may not be. In this manner more accurate and faster automated screening can be provided without compromising confidential assay-specific research data. Moreover, this technique is likely to produce more universally concordant classifications, which may be especially beneficial, for example, in an FDA regulated environment where, despite regulatory test guidelines, subjective interpretation, inter-scorer differences and temporal effects lead to systematic differences in results between groups and over extended time periods [16,17].
Whilst the present invention has been described in connection with various embodiments, those skilled in the art will be aware that many different embodiments and variations are possible. All such variations and embodiments are intended to fall within the scope of the present invention as defined by the appended claims.
References
1. WO 2007/063290 (GE Healthcare UK Limited)
2. US 3,824,393 (Brain)
3. US 5,989,835 (Dunlay)
4. Metafer MSearch™ available from MetaSystems™, Robert-Bosch-Strasse 6, D-68804, Altlussheim, Germany
5. US 6,134,354 (Lee et al.)
6. WO 2006/055413 (Long et al.)
7. WO 91/15826 (Rutenberg)
8. GE Healthcare, "Multi Target Analysis Module for IN Cell Analyzer 1000," Product User Manual Revision A, Chapter 6, 2006
9. K. Fukunaga, "Introduction to Statistical Pattern Recognition," Second Edition, Academic Press Inc., 1990, ISBN 0122698517
10. GE Healthcare, "Multi Target Analysis Module for IN Cell Analyzer 1000," Product User Manual Revision A, Chapter 7, 2006
11. C. Tofallis, "Model Building with Multiple Dependent Variables and Constraints," The Statistician, Vol. 48, No. 3, pp. 371-378, 1999
12. WO 2004/088574 (Amersham Biosciences UK Ltd)
13. B. Soltys, Y. Alexandrov, D. Remezov, M. Swiatek, L. Dagenais, S. Murphy and A. Yekta, "Learning Algorithms Applied to Cell Subpopulation Analysis in High Content Screening," http://www6.gelifesciences.com/aptrix/upp00919.nsf/Content/WD:Scientific+Post(285564266-B653)
14. T. Hastie, J. Friedman and R. Tibshirani, "The Elements of Statistical Learning: Data Mining, Inference, and Prediction," Edition 9, Springer, 2007, ISBN 0387952845
15. D. M. Hawkins (Ed.), "Topics in Applied Multivariate Analysis," National Research Institute for Mathematical Sciences (South Africa), Cambridge University Press, 1982, ISBN 0521243688, p. 205
16. M. Fenech, S. Bonassi, J. Turner, et al., "Intra- and Inter-Laboratory Variation in the Scoring of Micronuclei and Nucleoplasmic Bridges in Binucleated Human Lymphocytes: Results of an International Slide-scoring Exercise by the HUMN Project," Mutat. Res. (2003a), 534, 45-64
17. B. Patino-Garcia, J. Hoegel, D. Varga, M. Hoehne, I. Michel, S. Jainta, R. Kreienberg, C. Maier and W. Vogel, "Scoring Variability of Micronuclei in Binucleated Human Lymphocytes in a Case-control Study," Mutagenesis, 21(3):191-7, May 2006
Where permitted, the content of the above-mentioned references are hereby also incorporated into this application by reference in their entirety.

Claims

CLAIMS:
1. An apparatus (100) for automated genotoxicological screening, the apparatus (100) comprising a processor (114) configured to provide: an identifier module (115) for identifying target cells in an image; a dynamically modifiable classifier module (116) for classifying the identified cells in accordance with one or more phenotype; and a scoring module (117) for assigning respective confidence measurements to the classified cells.
2. The apparatus (100) of claim 1, wherein the one or more phenotype includes micronuclei that may be present in a respective identified cell.
3. The apparatus (100) of any preceding claim, wherein the scoring module (117) is operable to apply a canonical variate analysis (CVA) to determine the respective confidence measurements for respective classified cells.
4. The apparatus (100) of any preceding claim, wherein the processor (114) is further operable to exchange data with other similar processors and the scoring module (117) is operable to derive respective consensus confidence measurements for respective classified cells from corresponding confidence score ratings provided by a plurality of scoring modules of respective processors.
5. The apparatus (100) of any preceding claim, wherein the classifier module (116) is further operable to dynamically re-classify the scored classified cells.
6. The apparatus (100) of claim 5, wherein the classifier module (116) is further operable to re-classify the scored classified cells by modifying a learning algorithm training data set in response to re-weighting of the scoring applied to the classified target cells.
7. The apparatus (100) of any preceding claim, wherein the classifier module (116) is dynamically modifiable to classify and/or re-classify cells in response to input from a plurality of users.
8. A method (200) for automatically classifying cells in an image according to one or more phenotype, the method (200) comprising: identifying (202) candidate target cells in an image; dynamically classifying (204) the identified target cells according to one or more phenotype; and scoring (206) the classified cells.
9. The method (200) of claim 8, wherein the one or more phenotype includes micronuclei.
10. The method (200) of claim 8 or claim 9, wherein identifying (202) candidate target cells in an image comprises identifying target cells by pattern recognition.
11. The method (200) of any of claims 8 to 10, wherein scoring (206) of the classified cells comprises using a canonical variate analysis (CVA) to define a confidence rating for each respective classified cell.
12. The method (200) of any of claims 8 to I L wherein classifying (204) the identified target cells comprises deriving respective consensus confidence measurements for respective classified cells from corresponding confidence score ratings provided by a plurality of processors.
12. The method (200) of any of claims 8 to 11, wherein classifying (204) the identified target cells comprises deriving respective consensus confidence measurements for respective classified cells from corresponding confidence score ratings provided by a plurality of processors.
14. The method (200) of claim 13, wherein reclassifying (208) of the scored classified cells comprises reclassifying the cells by modifying a learning algorithm training data set in response to re-weighting of the scoring applied to the classified target cells.
15. The method (200) of any of claims 8 to 14, comprising modifying the classification and/or re-classification criteria for cells in response to input from a plurality of users.
16. A computer program product for configuring a data processing apparatus to implement the method (200) according to any of claims 8 to 15.
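For illustration only, and not as a definition of the claimed subject matter, the following Python sketch outlines an identify/classify/score pipeline of the general kind recited in claims 1, 8 and 11, with scikit-learn's linear discriminant analysis standing in for the canonical variate analysis (CVA) scoring step and a simple re-fit standing in for the dynamic re-classification of claims 5 and 13. The feature values, labels and choice of library are assumptions made for the sake of the example.

```python
# Illustrative sketch only (assumed names, synthetic data): an identify/
# classify/score pipeline with a discriminant model used as a stand-in for
# the canonical variate analysis (CVA) scoring step.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# "Identifier module" stand-in: per-cell feature vectors (e.g. nuclear area,
# spot count) that would normally be measured from the segmented image.
normal = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))
micronucleated = rng.normal(loc=[2.0, 2.5], scale=0.3, size=(50, 2))
features = np.vstack([normal, micronucleated])
labels = np.array([0] * 50 + [1] * 50)          # 1 = micronucleated phenotype

# "Classifier + scoring modules": fit a discriminant model, then report the
# assigned class and a per-cell confidence (posterior of the assigned class).
model = LinearDiscriminantAnalysis()
model.fit(features, labels)

new_cells = rng.normal(loc=[1.8, 2.2], scale=0.4, size=(5, 2))
classes = model.predict(new_cells)
confidence = model.predict_proba(new_cells).max(axis=1)
print(list(zip(classes.tolist(), np.round(confidence, 2).tolist())))

# Dynamic re-classification stand-in: append user-scored cells to the
# training set and re-fit, so later classifications reflect the new scores.
user_labels = np.array([1, 1, 0, 1, 1])          # hypothetical user scoring
features = np.vstack([features, new_cells])
labels = np.concatenate([labels, user_labels])
model.fit(features, labels)
```

In practice the identifier module would supply measured per-cell features from the segmented image, and the posterior probability reported here is merely one convenient proxy for a per-cell confidence measurement.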
PCT/EP2010/057651 2009-06-02 2010-06-01 Image analysis WO2010139697A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2012513601A JP2012529020A (en) 2009-06-02 2010-06-01 Image analysis
EP10726031A EP2438553A1 (en) 2009-06-02 2010-06-01 Image analysis
US13/375,273 US8750592B2 (en) 2009-06-02 2010-06-01 Image analysis
CN2010800249300A CN102449639A (en) 2009-06-02 2010-06-01 Image analysis

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0909461.6A GB0909461D0 (en) 2009-06-02 2009-06-02 Image analysis
GB0909461.6 2009-06-02

Publications (1)

Publication Number Publication Date
WO2010139697A1 true WO2010139697A1 (en) 2010-12-09

Family

ID=40902460

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/057651 WO2010139697A1 (en) 2009-06-02 2010-06-01 Image analysis

Country Status (6)

Country Link
US (1) US8750592B2 (en)
EP (1) EP2438553A1 (en)
JP (1) JP2012529020A (en)
CN (1) CN102449639A (en)
GB (1) GB0909461D0 (en)
WO (1) WO2010139697A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105378453A (en) * 2012-12-19 2016-03-02 皇家飞利浦有限公司 System and method for classification of particles in a fluid sample
EP2524337A4 (en) * 2010-01-12 2016-07-06 Rigel Pharmaceuticals Inc Mode of action screening method
WO2018005413A1 (en) 2016-06-30 2018-01-04 Konica Minolta Laboratory U.S.A., Inc. Method and system for cell annotation with adaptive incremental learning
EP3300001A3 (en) * 2016-09-27 2018-05-16 Sectra AB Viewers and related methods, systems and circuits with patch gallery user interfaces for medical microscopy

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5698208B2 (en) * 2012-11-30 2015-04-08 株式会社Screenホールディングス Image processing apparatus, image processing method, and image processing program
US9552535B2 (en) * 2013-02-11 2017-01-24 Emotient, Inc. Data acquisition for machine perception systems
CA2905637C (en) * 2013-03-13 2022-04-05 Fdna Inc. Systems, methods, and computer-readable media for identifying when a subject is likely to be affected by a medical condition
US9639743B2 (en) 2013-05-02 2017-05-02 Emotient, Inc. Anonymization of facial images
ES2921435T3 (en) * 2015-12-18 2022-08-25 Abbott Lab Methods and systems for the evaluation of histological stains
CN117457097A (en) * 2016-09-30 2024-01-26 分子装置有限公司 Computer device for detecting optimal candidate compound and method thereof
WO2018165279A1 (en) * 2017-03-07 2018-09-13 Mighty AI, Inc. Segmentation of images
US10671896B2 (en) * 2017-12-04 2020-06-02 International Business Machines Corporation Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review
US10607122B2 (en) * 2017-12-04 2020-03-31 International Business Machines Corporation Systems and user interfaces for enhancement of data utilized in machine-learning based medical image review
CN109815870B (en) * 2019-01-17 2021-02-05 华中科技大学 High-throughput functional gene screening method and system for quantitative analysis of cell phenotype image
US10902295B2 (en) * 2019-02-08 2021-01-26 Sap Se Using transformations to verify computer vision quality
EP3876193A1 (en) * 2020-03-02 2021-09-08 Euroimmun Medizinische Labordiagnostika AG Image processing method for displaying cells of multiple overall images
CN112613505B (en) * 2020-12-18 2024-08-09 合肥码鑫生物科技有限公司 Cell micronucleus identification, positioning and counting method based on deep learning
CN114974433B (en) * 2022-05-26 2024-07-23 厦门大学 Rapid annotation method of circulating tumor cells based on deep migration learning
CN116433588B (en) * 2023-02-21 2023-10-03 广东劢智医疗科技有限公司 Multi-category classification and confidence discrimination method based on cervical cells

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004088574A1 (en) * 2003-04-02 2004-10-14 Amersham Biosciences Uk Limited Method of, and computer software for, classification of cells into subpopulations

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991015826A1 (en) 1990-03-30 1991-10-17 Neuromedical Systems, Inc. Automated cytological specimen classification system and method
US5978497A (en) 1994-09-20 1999-11-02 Neopath, Inc. Apparatus for the identification of free-lying cells
JPH08254501A (en) 1995-03-16 1996-10-01 Hitachi Denshi Ltd Method and apparatus for visual inspection
US6678669B2 (en) * 1996-02-09 2004-01-13 Adeza Biomedical Corporation Method for selecting medical and biochemical diagnostic tests using neural network-related applications
JP2002142800A (en) 2000-11-13 2002-05-21 Olympus Optical Co Ltd Method for cell form analysis and storage medium
JP4749637B2 (en) * 2001-09-28 2011-08-17 オリンパス株式会社 Image analysis method, apparatus, and recording medium
JP4176041B2 (en) 2004-04-14 2008-11-05 オリンパス株式会社 Classification apparatus and classification method
US20050240357A1 (en) * 2004-04-26 2005-10-27 Minor James M Methods and systems for differential clustering
WO2006055413A2 (en) 2004-11-11 2006-05-26 The Trustees Of Columbia University In The City Of New York Methods and systems for identifying and localizing objects based on features of the objects that are mapped to a vector
JP2006293820A (en) * 2005-04-13 2006-10-26 Sharp Corp Appearance inspection device, appearance inspection method, and program for causing computer to function as appearance inspection device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004088574A1 (en) * 2003-04-02 2004-10-14 Amersham Biosciences Uk Limited Method of, and computer software for, classification of cells into subpopulations

Non-Patent Citations (14)

* Cited by examiner, † Cited by third party
Title
"PlayStation 3 gamers can aid medical research, Sony says", THE SYDNEY MORNING HERALD, 17 November 2006 (2006-11-17), XP002596482, Retrieved from the Internet <URL:http://www.smh.com.au/news/Technology/PlayStation-3-fetching-mostly-hohum-prices-on-Japan-Net-auctions/2006/11/10/1162661892292.html> [retrieved on 20100811] *
ADAM KAPELNER ET AL: "An Interactive Statistical Image Segmentation and Visualization System", MEDICAL INFORMATION VISUALISATION - BIOMEDICAL VISUALISATION, 2007. MEDIVIS 2007. INTERNATIONAL CONFERENCE ON, IEEE, PI, 1 July 2007 (2007-07-01), pages 81 - 86, XP031115803, ISBN: 978-0-7695-2904-2 *
B. PATINO-GARCIA; J. HOEGEL; D. VARGA; M. HOEHNE; I. MICHEL; S. JAINTA; R. KREIENBERG; C. MAIER; W. VOGEL: "Scoring Variability of Micronuclei in Binucleated Human Lymphocytes in a Case-control Study", MUTAGENESIS, vol. 21, no. 3, May 2006 (2006-05-01), pages 191 - 7
B. SOLTYS; Y. ALEXANDROV; D. REMEZOV; M. SWIATEK; L. DAGENAIS; S. MURPHY; A. YEKTA, LEARNING ALGORITHMS APPLIED TO CELL SUBPOPULATION ANALYSIS IN HIGH CONTENT SCREENING, Retrieved from the Internet <URL:http://www6.gelifesciences.com/aptrix/upp00919.nsf/Content/WD:Scientific+Post(285564266-B653)>
C. TOFALLIS: "Model Building with Multiple Dependent Variables and Constraints", THE STATISTICIAN, vol. 48, no. 3, 1999, pages 371 - 378
CARPENTER ANNE E ET AL: "CellProfiler: image analysis software for identifying and quantifying cell phenotypes", GENOME BIOLOGY, BIOMED CENTRAL LTD., LONDON, GB LNKD- DOI:10.1186/GB-2006-7-10-R100, vol. 7, no. 10, 31 October 2006 (2006-10-31), pages R100, XP021027289, ISSN: 1465-6906 *
DUDA, HART,STORK: "PATTERN CLASSIFICATION", 1 November 2000, WILEY-INTERSCIENCE, ISBN: 9780471056690, article "FISHER LINEAR DISCRIMINANT", pages: 117 - 121, XP002596480 *
HANEY ET AL: "High-content screening moves to the front of the line", DRUG DISCOVERY TODAY, ELSEVIER, RAHWAY, NJ, US LNKD- DOI:10.1016/J.DRUDIS.2006.08.015, vol. 11, no. 19-20, 1 October 2006 (2006-10-01), pages 889 - 894, XP005663039, ISSN: 1359-6446 *
JUN WANG ET AL: "Active microscopic cellular image annotation by superposable graph transduction with imbalanced labels", COMPUTER VISION AND PATTERN RECOGNITION, 2008. CVPR 2008. IEEE CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 23 June 2008 (2008-06-23), pages 1 - 8, XP031297304, ISBN: 978-1-4244-2242-5 *
L.I. KUNCHEVA: "COMBINING PATTERN CLASSIFIERS", 2004, WILEY-INTERSCIENCE, ISBN: 0471210781, article "WEIGHTED MAJORITY VOTE", pages: 123 - 125, XP002596481 *
M. FENECH; S. BONASSI; J. TURNER ET AL.: "Intra- and Inter-Laboratory Variation in the Scoring of Micronuclei and Nucleoplasmic Bridges in Binucleated Human Lymphocytes", RESULTS OF AN INTERNATIONAL SLIDE-SCORING EXERCISE BY THE HUMN PROJECT, MUTAT. RES., vol. 534, 2003, pages 45 - 64, XP009095854
RAUSCH ET AL: "High content cellular screening", CURRENT OPINION IN CHEMICAL BIOLOGY, CURRENT BIOLOGY LTD, LONDON, GB LNKD- DOI:10.1016/J.CBPA.2006.06.004, vol. 10, no. 4, 1 August 2006 (2006-08-01), pages 316 - 320, XP025155103, ISSN: 1367-5931, [retrieved on 20060801] *
T.R. JONES ET AL.: "Scoring diverse cellular morphologies in image-based screens with iterative feedback and machine learning", PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES, vol. 106, no. 6, 10 February 2009 (2009-02-10), pages 1826 - 1831, XP002596479 *
TIM W NATTKEMPER ET AL: "A Neural Classifier Enabling High-Throughput Topological Analysis of Lymphocytes in Tissue Sections", IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 5, no. 2, 1 June 2001 (2001-06-01), XP011028233, ISSN: 1089-7771 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2524337A4 (en) * 2010-01-12 2016-07-06 Rigel Pharmaceuticals Inc Mode of action screening method
US10311214B2 (en) 2010-01-12 2019-06-04 Rigel Pharmaceuticals, Inc. Mode of action screening method
CN105378453A (en) * 2012-12-19 2016-03-02 皇家飞利浦有限公司 System and method for classification of particles in a fluid sample
EP2936116A4 (en) * 2012-12-19 2016-08-17 Koninkl Philips Nv System and method for classification of particles in a fluid sample
US9904842B2 (en) 2012-12-19 2018-02-27 Koninklijke Philips N.V. System and method for classification of particles in a fluid sample
US10192100B2 (en) 2012-12-19 2019-01-29 Koninklijke Philips N.V. System and method for classification of particles in a fluid sample
US10430640B2 (en) 2012-12-19 2019-10-01 Koninklijke Philips N.V. System and method for classification of particles in a fluid sample
WO2018005413A1 (en) 2016-06-30 2018-01-04 Konica Minolta Laboratory U.S.A., Inc. Method and system for cell annotation with adaptive incremental learning
EP3478728A4 (en) * 2016-06-30 2019-07-17 Konica Minolta Laboratory U.S.A., Inc. Method and system for cell annotation with adaptive incremental learning
US10853695B2 (en) 2016-06-30 2020-12-01 Konica Minolta Laboratory U.S.A., Inc. Method and system for cell annotation with adaptive incremental learning
EP3300001A3 (en) * 2016-09-27 2018-05-16 Sectra AB Viewers and related methods, systems and circuits with patch gallery user interfaces for medical microscopy
US10489633B2 (en) 2016-09-27 2019-11-26 Sectra Ab Viewers and related methods, systems and circuits with patch gallery user interfaces

Also Published As

Publication number Publication date
JP2012529020A (en) 2012-11-15
US20130101199A1 (en) 2013-04-25
GB0909461D0 (en) 2009-07-15
EP2438553A1 (en) 2012-04-11
US8750592B2 (en) 2014-06-10
CN102449639A (en) 2012-05-09

Similar Documents

Publication Publication Date Title
US8750592B2 (en) Image analysis
Bray et al. Advanced assay development guidelines for image-based high content screening and analysis
Hennig et al. An open-source solution for advanced imaging flow cytometry data analysis using machine learning
Abdelaal et al. Predicting cell populations in single cell mass cytometry data
Pepperkok et al. High-throughput fluorescence microscopy for systems biology
Boutros et al. Microscopy-based high-content screening
Crane et al. Autonomous screening of C. elegans identifies genes implicated in synaptogenesis
EP2936116B1 (en) System and method for classification of particles in a fluid sample
EP1922695B1 (en) Method of, and apparatus and computer software for, performing image processing
San-Miguel et al. Deep phenotyping unveils hidden traits and genetic relations in subtle mutants
Shamir Assessing the efficacy of low‐level image content descriptors for computer‐based fluorescence microscopy image analysis
Marée The need for careful data collection for pattern recognition in digital pathology
Rohani et al. Mito Hacker: a set of tools to enable high-throughput analysis of mitochondrial network morphology
CN105043998B A method for identifying maize haploids
Dürr et al. Know when you don't know: a robust deep learning approach in the presence of unknown phenotypes
EP2422182B1 (en) Method and apparatus for multi-parameter data analysis
Bajcsy An overview of DNA microarray image requirements for automated processing
Zhang et al. Reference-based cell type matching of spatial transcriptomics data
Siegismund et al. Benchmarking feature selection methods for compressing image information in high-content screening
Qiu et al. A cell-level quality control workflow for high-throughput image analysis
Bearer Overview of image analysis, image importing, and image processing using freeware
Bickle High-content screening: a new primary screening tool?
Corbe et al. Transfer learning for versatile and training free high content screening analyses
Tomkinson et al. Toward generalizable phenotype prediction from single-cell morphology representations
Pearson et al. A statistical framework for high-content phenotypic profiling using cellular feature distributions

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase. Ref document number: 201080024930.0; Country of ref document: CN
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 10726031; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase. Ref document number: 2010726031; Country of ref document: EP
WWE Wipo information: entry into national phase. Ref document number: 8444/CHENP/2011; Country of ref document: IN
WWE Wipo information: entry into national phase. Ref document number: 2012513601; Country of ref document: JP
NENP Non-entry into the national phase. Ref country code: DE
WWE Wipo information: entry into national phase. Ref document number: 13375273; Country of ref document: US