WO2001078009A2 - Quantitative analysis of biological images

Info

Publication number
WO2001078009A2
Authority
WO
WIPO (PCT)
Prior art keywords
images
biological
project
biological images
image
Application number
PCT/GB2001/001400
Other languages
French (fr)
Other versions
WO2001078009A3 (en)
Inventor
Laurent Gabriel Bollondi
Anne Sophie Danckaert
Original Assignee
Glaxo Group Limited
Application filed by Glaxo Group Limited
Priority to AU42619/01A (AU4261901A)
Publication of WO2001078009A2
Publication of WO2001078009A3

Classifications

    • G06T 5/00: Image enhancement or restoration
    • G06T 5/60
    • G16H 30/20: ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H 40/63: ICT for the local operation of medical equipment or devices
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30024: Cell structures in vitro; Tissue sections in vitro

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

A computer user interface for biological images allows biologists and other biology professionals to organize biological images into 'projects' containing hierarchy levels that separate images based upon relevant biological criteria. In addition, the user interface allows users to select various filters and methods that can be used to (1) manipulate the display of a biological image and/or (2) extract quantitative information from the images.

Description

QUANTITATIVE ANALYSIS OF BIOLOGICAL IMAGES

BACKGROUND OF THE INVENTION
This invention relates to image processing technology. More specifically, it relates to computer user interfaces that allow users to classify, manipulate, and quantify features of biological images.
Increasingly, biologists, biochemists, and chemists investigating biological systems must analyze numerous images of such systems. Examples of such biological images include microscopic images of tissue sections, biological assay results, and fluorescent images of proteins or cell components. Traditionally, to obtain relevant information about such biological systems, the biology professional had to visually analyze the many images to determine their relevance and meaning. Such analysis typically involves a qualitative assessment of the images using the professional's knowledge and experience. Often the analysis requires the professional to measure or calculate quantitative information about these images. For example, a biologist might have to identify a cellular organelle as a nucleus and then manually measure the diameter of the nucleus in the image. As the number and complexity of images to be analyzed increases, two significant difficulties become acute. First, the amount of time required to meaningfully examine images increases, so that a scientist may have to spend most of his or her day analyzing images. Second, the ability of a human to properly analyze the image features is subject to error, particularly when the human has already spent much time analyzing other images.
In analyzing biological images, a scientist must typically perform at least the following four operations: (1) organize the images into biologically relevant classifications, (2) display the images so that he or she can study them, (3) manipulate the displays, by contrast filtering for example, to better evaluate them, and (4) perform qualitative and quantitative analysis of the images to assess their relevance. The resulting analytical data must ultimately be organized and presented in a numerical or textual fashion.
Currently available tools for this general process are incomplete, in that they do not support all of the above analytical operations, and they are not optimized for the analysis of biological images. It would therefore be desirable to have automated or user-friendly systems allowing biology professionals to organize biological images and to rapidly quantify features contained within those images.
SUMMARY OF THE INVENTION
This invention provides a computer user interface that allows biologists and other biology professionals to organize biological images into "projects" containing hierarchy levels that segregate images based upon relevant biological criteria. In addition, the user interface allows users to rapidly select various filters and methods that can be used to (1) manipulate the display of a biological image and/or (2) extract quantitative information from the images. As an example, a project may segregate images of biological specimens subjected to tests with various drugs. At a first hierarchical level in the project (one level removed from the root), the biological images are segregated such that all images obtained for a particular biological specimen that was exposed to a first drug are provided in a first drawer, all images obtained for a particular biological specimen that was exposed to a second drug are provided in a second drawer, and so on. At a next hierarchical level, the individual images are segregated according to the times when the images were taken. For example, a first set of images may be stored together in a project drawer corresponding to day 1 of testing with the first drug, a second set of images may be stored together in a second drawer corresponding to day 3 of testing with the first drug, and so on. One example of a quantitative method that can be selected via the user interface is an algorithm that identifies and classifies features on a biological image. Such features may be cellular organelles, spots generated during an in vitro assay, etc. Another class of methods compares related images (possibly those obtained at days 1 and 3) to determine what characteristics of the image have changed. Such methods may generate a third image representing the merger, subtraction, etc. of the related images.
One aspect of the invention pertains to computer systems that may be characterized by a graphical user interface running on hardware and allowing organization and quantitative analysis of biological images. Suitable hardware may include one or more processors, one or more user input devices, and a display capable of displaying the image and associated information in pre-specified formats, which are responsive to input signals from one or more of the input devices and signals from one or more of the processors. The graphical user interface may be characterized by the following features: (a) a first graphical tool allowing the biological images to be organized into projects, wherein a project includes a hierarchy of nodes, each node representing a biological classification of the images; and (b) a second graphical tool listing available methods and allowing selection and execution of a method that performs a quantitative analysis on one or more of the biological images organized into the project. The quantitative analysis calculates biological information from one or more of the biological images.
The graphical user interface may include various elements that facilitate grouping of biological images into projects. For example, the interface may include a "project display" that represents the project on the screen as a hierarchy of nodes and associated names of those nodes. The interface may also include a menu allowing a user to select new project nodes and organize them in the project hierarchy. It may further include a feature allowing selection of a "template" specifying a hierarchy of nodes for a predefined project associated with one or more methods.
In a preferred embodiment, the graphical user interface also includes a graphical tool allowing selection of one or more filters, each of which performs a specific filtering operation on selected biological images. In a specific embodiment, the graphical tool for selecting and initiating execution of a method also allows selection and execution of a filter. This tool may take the form of a menu, for example. Another tool allows users to link filters together in the form of a macro for performing a selected series of filtering operations on a biological image.
Another aspect of the invention pertains to methods of organizing and analyzing biological images. Such methods may be characterized by the following sequence: (a) associating each of a group of biological images with a project node in response to user inputs that select individual biological images and associated project nodes via a graphical user interface; and (b) initiating execution of a method in response to user selection of the method. The user selects the method from a list of methods, via the graphical user interface. The method performs a quantitative analysis on one or more of the biological images organized into the project. Regarding operation (a), associating the biological images with project nodes may require determining that the user has performed drag and drop operations with individual biological images.
The method may generate a project in response to user inputs that create new project nodes, locate such nodes within the project, and name the project nodes. This may be accomplished by simply selecting a project template as described above. The user may select a method from a list of methods displayed via a menu, for example. As mentioned, such methods may be listed together with image filters. Another aspect of the invention pertains to computer program products including a machine readable medium on which is provided program instructions for implementing one or more of the methods or computer user interfaces described herein. Any of the methods or interfaces of this invention may be represented as program instructions that can be provided on such computer readable media. These and other features and advantages of the invention will be described in more detail below with reference to the associated drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1A is a schematic diagram illustrating the high level interfaces of a visual quantization tool in accordance with an embodiment of this invention.
Figure 1B is a flow diagram illustrating a general flow during quantization using a quantitative visualization tool of this invention.
Figure 1C is a process flow diagram illustrating, at a high level, various operations executed by a visualization tool of this invention in response to a typical sequence of user inputs.
Figure 1D is a schematic illustration of the database environment in which a quantitative visualization tool of this invention may reside.
Figures 2A-2C are screen shots of a user interface that may be employed to open, create and modify projects for organizing biological images.
Figure 2D is a schematic representation of a project file structure in accordance with an embodiment of this invention.
Figure 3A is a process flow diagram depicting a sequence of actions that may be performed by a quantitative visualization tool in response to a typical sequence of user inputs, as they pertain to filtering a biological image.
Figures 3B and 3C are screen shots illustrating certain features that may be employed to facilitate selection and application of specific filters in a quantitative visualization tool of this invention.
Figures 4A and 4B are screen shots of user interface tools employed to review and create macros linking various image filters in accordance with an embodiment of this invention.
Figure 5A is a screen shot illustrating a user interface feature that may be employed to select a spot detection or spot classification method of this invention.
Figure 5B is a screen shot illustrating contours drawn around spots as generated by a melanophore spot detection method of this invention.
Figure 5C is a screen shot presenting local data and global data generated by a spheroid spot detection method of this invention.
Figures 5D and 5E are process flow diagrams illustrating exemplary spot detection methods in accordance with this invention.
Figure 6 is a process flow diagram depicting a method for classifying sub-cellular organelles in accordance with one embodiment of this invention.
Figure 7A is a screen shot of a thermal image of a seated human, which was analyzed by a method employed via the interface of this invention.
Figure 7B is a screen shot of a thermal image of a human torso, which was analyzed by a method employed via the interface of this invention.
Figure 7C is a screen shot of the thermal image of Figure 7B in which a grid has been modified, thereby affecting the values calculated by the method.
Figures 7D-7F are torso images in which many potential torso corner points are identified by Chetverikov's algorithm (Figure 7D), then reduced to six corner points representing the neck, armpits, and lower back portions of the torso (Figure 7E), and then used to apply a grid to the torso (Figure 7F).
Figure 7G is a process flow diagram depicting a method for detecting a body contour in accordance with one embodiment of this invention.
Figure 7H is a process flow diagram depicting a method for determining a temperature distribution within a body contour such as that determined via the process of Figure 7G.
Figure 8 is a block diagram of a hardware architecture of a computer system suitable for running the software tools of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
INTRODUCTION
As mentioned, this invention pertains to graphical user interfaces that facilitate organization and analysis of biological images. A biological image is generally any image of a real biological system or of an assay result providing information about a biological system. Specific examples of biological images include images of higher organisms or portions of such organisms such as human torsos, rats, and the like, tissue cross-sections (brain, trachea, etc.), individual cell images (in which sub-cellular organelles are clearly visible, for example), biological assay results in which individual spots (representing regions where distinct biological agents were applied) are visible, etc. The biological images may be provided in any number of formats such as TIFF, GIF, JPEG, and the like. The biological images may be captured by any number of image capture apparatuses. Examples include microscopes of various types, including confocal microscopes, scanning and transmission electron microscopes, atomic force microscopes, and scanning tunneling microscopes, as well as infrared cameras, visible light cameras, etc. Examples of specific image capture devices include photographic plates, CCD arrays, photodiode arrays, photogate arrays, and the like.
As mentioned, biological images may be segregated into "projects." A project is typically distinguished from other projects by (1) the type of biological image that is included within the project and (2) associated methods for quantifying features of all images within the project. Certain methods may operate on only certain associated projects. A project is typically organized as a hierarchy within which individual images are located. Such a hierarchy may be required by particular methods that operate on the images. For example, a method that compares related images may expect to find such related images in two separate but adjacent leaf nodes of a project hierarchy. It may take a first image from a node associated with time 0 and a second image from a node associated with time 1.
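The patent does not prescribe a data structure for projects, but the drawer hierarchy described above maps naturally onto a tree of nodes. The following Python sketch is purely illustrative; the ProjectNode class and its fields are hypothetical and not drawn from the patent. It shows how a comparison method might pair related images taken from adjacent leaf drawers:

    # Hypothetical project tree; class and field names are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class ProjectNode:
        name: str                                     # e.g. "Drug1 30uM" or "Day 0"
        children: list = field(default_factory=list)  # sub-drawers
        images: list = field(default_factory=list)    # image paths or bitmaps

        def add_drawer(self, name):
            node = ProjectNode(name)
            self.children.append(node)
            return node

    root = ProjectNode("Assay Project")
    drug1 = root.add_drawer("Drug1 30uM")
    day0 = drug1.add_drawer("Day 0")
    day1 = drug1.add_drawer("Day 1")
    day0.images.append("specimen_a_day0.tif")
    day1.images.append("specimen_a_day1.tif")

    # A comparison method can pair images from adjacent leaf drawers:
    pairs = list(zip(day0.images, day1.images))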
Figure 1A schematically presents the high level interfaces of a visual quantization tool 10 in accordance with an embodiment of this invention. In a preferred embodiment, tool 10 is an application providing a graphical user interface. As illustrated, visualization tool 10 takes multiple biological images - possibly in multiple different file formats - as input data 12. Such images may be any of a variety of biological images. Input data 12 is received by visualization tool 10 via a convenient interface on tool 10. Such data is preferably grouped into a larger "project" as described herein. The images are subjected to one or more types of quantization to extract relevant quantitative information about the images. That data serves as output data 14 provided by the visualization tool 10. Such data may take the form of text data, table data, spreadsheet data, and the like. It is preferably grouped with the image itself in the context of an image project.
Visualization tool 10 supports a variety of filters and methods, which may be employed to facilitate display and quantitative analysis of the images. The particular filters and methods associated with tool 10 may form separate interchangeable modules. Such modules may be upgraded, exchanged, or deleted, as appropriate given new developments and the applications for tool 10. In a typical embodiment, visualization tool 10 makes use of a variety of generic filters 16 such as smoothing filters, contrast filters, and color filters. Tool 10 also makes use of certain predefined quantization methods 18. These represent sophisticated algorithms for providing specific biological information pertaining to images. They may be grouped with one or more filters that facilitate quantitative analysis of the images. In typical examples, methods 18 compare features of similar images, classify features of images, etc. Visualization tool 10 is designed so that new, user-developed methods 20 may be created and stored for later use with tool 10.
Figure 1B schematically illustrates a general data flow during quantization using a quantitative visualization tool of this invention. A specific method 21 takes as an input one or more biological images 23 and outputs quantitative data about such image or images as output information 25. While the construction of methods in accordance with this invention may take on a variety of forms depending upon the building blocks of the methods in the particular applications employed by the user, Figure 1B shows one example. In this example, method 21 includes an input side filter 27, a specific quantization algorithm 29, and an output side filter 31. Filters 27 and 31 may be general-purpose filters that act on a bitmap. Examples include contrast filters, smoothing filters, and color filters. Algorithm 29 may be a neural network (preferably a modular neural network) or other algorithm that serves to compare or classify features from one or more images.
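As a minimal sketch, the filter-algorithm-filter composition of Figure 1B can be expressed as a simple pipeline. The functions below are invented stand-ins: the patent contemplates a neural network for algorithm 29, whereas this example uses a trivial dark-pixel counter purely for illustration.

    import numpy as np

    def contrast_stretch(img):
        # Input-side filter: spread gray levels across the full 0-255 range.
        lo, hi = int(img.min()), int(img.max())
        return ((img.astype(np.float32) - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

    def count_dark_features(img, threshold=64):
        # Stand-in quantization step; a real method 21 might classify
        # features with a (modular) neural network instead.
        return int((img < threshold).sum())

    def run_method(image, input_filters, algorithm):
        for f in input_filters:
            image = f(image)
        return algorithm(image)

    image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # placeholder image
    print(run_method(image, [contrast_stretch], count_dark_features))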
Figure 1C presents a typical process flow 100 executed by a visualization tool of this invention in response to a typical sequence of user inputs. As shown, the process flow 100 begins at 102 with the system defining a project within the visualization tool. As explained in more detail below, this may involve setting forth a tree or hierarchy of files or "drawers" representing a user-defined arrangement for the biological images.
Next, at 104, the system receives the actual images that will make up the project. The images may be received in response to a "drag and drop" operation within the user interface, a menu selection operation, or other suitable input mechanisms. At this point, the images are segregated into specific files defined by the project hierarchy. As part of this process of creating a project, the visualization tool must locate or associate the specific images with the specific files or drawers that comprise the project hierarchy. See 106. Further, before it can display and quantify a biological image, the system must know which format has been employed to store the image. As mentioned, there are a myriad of available image formats such as JPEG, TIFF, GIF, and the like. In addition, many organizations have created special purpose image formats for their own applications. In any event, in the example process flow 100, the visualization tool recognizes the image format at 108. Preferably, this recognition takes place automatically, without any user actions except those actions necessary to input the images into the visualization tool. Image formats have characteristic signatures, which may be recognized to implement block 108.
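Signature-based recognition can be as simple as comparing a file's leading bytes against known "magic numbers." The sketch below uses the standard, publicly documented signatures of a few common formats; a production tool would cover many more, including any special purpose formats:

    SIGNATURES = {
        b"\xff\xd8\xff": "JPEG",
        b"II*\x00": "TIFF (little-endian)",
        b"MM\x00*": "TIFF (big-endian)",
        b"GIF87a": "GIF",
        b"GIF89a": "GIF",
        b"\x89PNG\r\n\x1a\n": "PNG",
    }

    def sniff_format(path):
        with open(path, "rb") as fh:
            header = fh.read(8)
        for magic, name in SIGNATURES.items():
            if header.startswith(magic):
                return name
        return "unknown"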
The biological image project can remain static for any length of time. At some point, a user will wish to display and/or analyze such images. The user may select images for display by navigating through the drawers or files of a project hierarchy. Thus, as represented in process flow 100, the system may display one or more images in the context of the project at 110. Such display may be accomplished by clicking on a drawer associated with the images of interest or other suitable selection technique.
As indicated at block 112, the quantitative visualization tool allows users to "manipulate" image displays. Examples of such manipulation include filtering and annotating the images. Examples of filters that can be applied easily include smoothing filters, contrast filters, and color filters, all of various types. Examples of annotations that may be added to an image for display include a description of the system used to acquire the project images, a description of the quantitative analysis to be performed, etc.
As indicated, a central feature of visualization tools of this invention allows quantification of specific information contained in biological images. Accordingly, one feature of visualization tools in accordance with this invention is the ability to trigger or initiate a quantification method with minimal user input. This may be accomplished by presenting specific quantification methods as menu items or other easily accessible user interface representations. The underlying methods contain the necessary processing functionality to identify the relevant images within a project and perform all necessary operations to extract the desired quantitative information. In the context of process flow 100, performing image quantification in response to user inputs is generally represented in block 114. After the quantification operation is complete, the resulting data is output as illustrated at 116 by the visualization tool. The output data may be presented as a table and displayed alongside the associated biological images. In addition, or alternatively, the data may be provided in a format for export to another application such as a spreadsheet application. In one example, the quantification data is provided in an XLS format.
One important feature of the invention is the ability to provide a single portal, via the user interface, into a collection of methods and images. In other words, from the user interface, the user can specify one or more images to be input and processed, together with a collection of methods or operations to be performed on those images. The user can also specify where the processed images are to be stored or displayed. Importantly, the images can be processed in batch. Also, the data and methods could be stored at, and provided from, various locations on a computer network. These locations can be accessed in the background by the computer system after the user specifies the relevant parameters via the user interface.
Figure 1D illustrates an environment 123 in which a quantitative visualization user interface 125 of this invention can provide these benefits. As discussed above, such tool makes use of various filters and methods 127 that may be employed to manipulate the display of a biological image and/or calculate quantitative information from such image.
In one specific implementation, user interface 125 allows users to access a specific biological model database 129. This model may interpret and organize numerous biological images of a specific type. For example, the biological images may all be tissue cross-sections from a particular organ. These images may be analyzed and cataloged according to particular structures or features residing within the organ. To this end, the system provides an "atlas" of the organ via an atlas interface 131. In the depicted embodiment, the atlas may characterize and catalog specific images with the aid of certain neural networks 133 (also made available to user interface 125).
User interface 125 may give a user access to a large database of biological images 135. In a preferred embodiment, database 135 contains images from a wide variety of sources, possibly from a large organization or group of organizations. In this regard, the images may be stored in various digital image formats. In the embodiment shown, users may search for specific biological images via user interface 125 and a searching application 137. In a preferred embodiment, the search can be conducted as a component of one of the methods 127, executed via user interface 125.
Note that user interface 125 may be employed to facilitate transmission of images from database 135 to atlas 131 and model database 129. Note also that information about various imaging protocols used to produce the biological images in database 135 may be stored at database 139 and made available to users via the searching application 137.
PROJECTS
As indicated, biological images are organized in "projects" to facilitate quantification and presentation of the images. Figures 2A-2C present screen shots of a user interface that may be employed to create and modify a project for biological images. In a user interface 201 depicted in Figure 2A, a file menu 203 allows users to select various operations associated with a project of biological images. For example, one or more projects may have been previously prepared and saved. These projects can be accessed by selecting the "Open Projects" menu item. When a user opens such a saved project, the visualization tool will present a project window 205 that may include a tree diagram 208 and an image display section 210. Tree diagram 208 represents the project hierarchy. Image display section 210 displays the images associated with a particular drawer of the tree diagram that has been selected.
Within file menu 203, a user may also elect to create a new project by selecting the "New Project" menu item. Upon selection of this item, the visualization tool displays a new project dialog box 212 shown in Figure 2B. As illustrated, the user may select an empty project item 214 that creates a new project of undefined structure. In other words, when a user creates an "Empty Project" he or she must specify all structure associated with that project, i.e., the hierarchical tree arrangement. In addition, the new project dialog box 212 allows the user to select predefined project "templates." These templates provide a predefined arrangement of drawers within a project hierarchy. The nodes or drawers associated with such a project template may also be pre-named. As an example, the spheroid project template may provide a tree diagram such as tree diagram 208 of project window 205 (Figure 2A). Generally, a project template will not, in itself, contain images.
When a user creates a new project or modifies an existing project, he or she may save the tree structure associated with that project as a template. This may be accomplished by selecting the "Save Template As" menu item in menu 203. Again, this operation saves the tree structure and drawer names but does not save the image content of the project. The user may also save an entire project, including the image content and any quantitative data, by selecting the "Save Project As" menu item of menu 203. The "Close Project" menu item simply allows the user to close a currently opened project.
Another feature of interest in menu 203 is the "Open Images" menu item. This allows the user to open specific images without regard to a specific project. The user may be interested in reviewing such images alongside the images associated with a specific project. Or, the user may be interested in opening such images to determine whether they should be imported into a current project.
The "Export Result" menu item of menu 203 allows the user to export a quantitative result of project image analysis to another application (outside the quantitative visualization tool of this invention). For example, quantitative results obtained by executing a method within the quantitative visualization tool may be exported to a spreadsheet.
As mentioned, a project hierarchy may be structured in whatever manner the user deems appropriate for the particular types of biological images he or she is considering. For example, the tree diagram 208 in Figure 2A depicts a project hierarchy in which a project named "2DQV Project II.2QV" is depicted by a top level (or root level) drawer. The "-" next to the root level drawer indicates that the drawer is "opened" to display constituent drawers. In this example, there are four constituent drawers. The first contains images of specimens which were treated with or administered a first drug (designated "Drug1") at a concentration of 30 micromolar. In the second drawer, the relevant images were derived from specimens that were given a second drug (designated "Drug2") at a concentration of 30 micromolar. The third drawer contains images of specimens receiving Drug1 at a concentration of 1 micromolar. Finally, the fourth drawer contains images of specimens treated with Drug2 at a concentration of 1 micromolar. In tree diagram 208, the leaf nodes are represented as three separate drawers within each of the four intermediate level drawers just described. Each of these three leaf level drawers contains one or more images of the specimens taken at a time indicated by the name of the drawer. In this case, the first drawer contains specimens imaged at Day 0, the second drawer contains specimens imaged at Day 7, and the third drawer contains specimens imaged at Day 14. Obviously, other suitable hierarchies of drawers may be developed for other groups of biological images.
To develop or enhance a project, a user may take actions provided within a project menu 220 of user interface 201. See Figure 2C. The first menu item, "Insert Image," allows a user to insert a selected image into a selected drawer of a project. Alternatively, the user may drag and drop an image into a particular drawer represented in the tree diagram of a project window.
Project menu 220 also allows the user to create a new drawer by selecting the "New Drawer" menu item. This operation appends a new drawer in parallel with other drawers at the lowest level of currently opened drawers within the project hierarchy. Initially, such a new drawer may be created with a generic name such as "New Drawer." A user may rename this drawer by selecting the "Rename Drawer" menu item of project menu 220. Alternatively, a user can rename a drawer by simply double clicking on that name and typing in the desired name. A user may also delete an existing drawer with the "Delete Drawer" menu item as shown.
As indicated, a significant advantage of using the project or tree file arrangement for biological images is that it allows users to classify biological images in a logical manner for subsequent analysis or execution of a quantitative method. To this end, the underlying data associated with a project may include a tree file structure, the biological images contained and categorized within such file structure, and various pieces of data associated with the images so organized.
Figure 2D presents a schematic representation of a project file structure in accordance with an embodiment of this invention. In the example depicted in Figure 2D, a quantitative visualization project file structure 230 is organized in the same manner as a user views the project logically. At the root level of the file structure 230, the project itself, as represented by the project name 232, is stored. At the next lower level within the file structure, representations of various drawers within the project are stored. In the example depicted, there are two drawers, 234 and 236. Depending upon how the user has organized the project, the individual drawers may contain one or more of the following: other, lower level drawers; quantification results; and images. In the example of Figure 2D, Drawer1 (234) contains quantification results from a method executed on images, as well as the images themselves. In addition, Drawer1 includes two sub-drawers (Drawer 1.1 and Drawer 1.2).
Within the project depicted in Figure 2D, file structure 230 provides within Drawer1 (234) "local" quantification results 238, which specify quantitative values defined within contours of the individual images. A contour may define the boundaries of a cell, a spot on an assay plate, etc. Often it is desirable to store such "local" information together with the images. As shown in the example of Figure 2D, such local quantification results include perimeters, surface areas, volumes, average values (e.g., average temperature, average density, etc.), and the like. These are stored separately from the images themselves in file structure 230. The images themselves are stored as indicated at 240 of file structure 230. The image component of the file structure typically includes an actual bitmap of the image or a link to the location where such image is stored in a computer system. In addition, such image component of the file structure may include "qualitative objects" such as contours, grids, and the like. Such qualitative objects may be displayed directly on the images. As mentioned, they may be used to define the boundaries used for the local quantitative results provided at 238 of the file structure.
If projects and associated images are to be shared among multiple disparate nodes in a larger computer system, such as a network, it will generally be desirable to store the images as bitmaps within the file structure. This is because the path to a stored image will vary depending upon which computer node is trying to access that image. Thus, to ensure that the image is always immediately available to whichever node wishes to use the project, the project images should generally be stored in the file structure as a bitmap. Of course, such bitmap may be stored in a compressed format to conserve storage space.
As shown in file structure 230, individual drawers 1.1 and 1.2 (represented in the file structure at 242) contain separate quantification results associated with the images in Drawer1. Such quantification results 244 should be distinguished from local quantification results 238. As mentioned, such local quantification results include perimeters, and the like, associated with contours of individual images. These "local" results are specifically based upon contours or grids of individual images and generally do not contain higher level relevant biological information about the images. Such higher level "global" results 244 are separately stored in the file structure of this example. These results may classify component objects of an image in a manner helpful to a biologist. For example, cells may be identified within the confines of calculated contours. The contour itself provides local quantitative results. However, classification of the cell within such a contour as a goblet cell or some other type of cell represents the "global" quantification results of the type provided at 244 of file structure 230. As another example, the global quantification results 244 may provide data representing differences calculated from a comparison of two related images. For example, two images of an assay plate taken at different times may be compared. The interesting results are contained at those positions where the image has changed substantially with time. Thus, the difference information provides high level relevant biological information. Note that a project window 250 representing the project stored within file structure 230 represents the drawers of the project in the same logical arrangement indicated by file structure 230. In this example, because Drawer1 is shown as open, the contents of that drawer are depicted in an image display section 252 of project window 250. Those contents include two images and the associated local quantification results. The local quantification results can be viewed in detail by selecting a quantification results icon 254. Generally, the local data is calculated at the same time as the global data, when a method is executed. However, it should be understood that the local data could be separately calculated.
Note that the file structure 230 shown in this example is just that, an example. Numerous other file structures are possible. For example, the project could be arranged such that both the local and the global quantification results are stored in Drawers 1 and 2, rather than having the global quantification results stored in sub-drawers such as Drawer 1.1. In general, regardless of how the information is arranged, it is desirable to have a single project file structure that includes the logical arrangement of files within the project, the images contained within the project, and the various quantification results obtained for the project.
FILTERS
Figure 3A presents a process flow diagram of a sequence of actions that may be performed by a quantitative visualization tool of this invention in response to a typical sequence of user inputs, as they pertain to filtering an image. As shown, a filtering process 301 begins with the visualization tool identifying a user-selected biological image. See 303. Of interest, the user may decide to vary the setting of a particular filter used to display the image. Thus, the visualization tool may next identify which specific filter the user has selected for manipulation. See 305. As depicted in Figure 3B, a user may select such filter via a process menu of a user interface. In a preferred embodiment, the menu lists specific methods in addition to specific filters. After the visualization tool has determined which filter the user has selected, it displays a dialog specific for that filter as indicated at 307. Such dialog may contain a current view of the image together with such features as a tool for adjusting the value of the associated filter and a numerical representation of relevant filter parameters. An example filter dialog is depicted in Figure 3C.
Next, the visualization tool determines how the user has adjusted any settings associated with the selected filter. See 309. For example, a user may use a component of the filter dialog box to adjust the value of a contrast or smoothing filter. The new setting is detected by the quantitative visualization tool. The tool then allows the user to preview the image with its new setting and compare that image with a previous image at the original filter settings. See 311.
Turning now to Figures 3B and 3C, user interface 201 is depicted with a process menu 314 pulled down. Process menu 314 provides three categories of filter: contrast filters, smoothing filters, and color maps. Each of these categories is represented by a separate menu item and each of them can be expanded to view individual filters. For example, as shown, the contrast menu item provides an auto density filter, a contrast filter, and an expansion histogram filter. Selection of the Auto Density filter menu item causes interface 201 to display a filter dialog box 316. This dialog box includes a slide bar 318 which allows the user to adjust the value of the auto density filter. In addition, filter dialog box 316 includes a filter parameters section 320 which displays the numeric values of the filter parameters. Still further, filter dialog box 316 includes "before" and "after" displays of the image which correspond to the old and new filter settings, respectively.
Any number of conventional and specially designed image filters may be employed with the quantitative analysis tool of this invention. A few of these will be briefly described below to assist in understanding the operation of methods subsequently described. Various references such as "Analyse d'images: filtrage et segmentation," J. P. Cocquerez and S. Philipp, Ed. Masson (1995) and "Algorithms for Graphics and Image Processing," T. Pavlidis, Computer Science Press, Inc. (1982) describe contrast filters. References such as "Image Processing, Analysis, and Machine Vision," M. Sonka, V. Hlavac, and R. Boyle, Brooks/Cole Publishing Company (1999) and "Handbook of Image Processing Operators," R. Klette and P. Zamperoni, John Wiley & Sons (1996) describe smoothing filters. Each of the above references is incorporated herein by reference for all purposes.
In general, an image may be represented as a histogram of pixel count (density) versus gray-scale level. For example, there may be 256 levels of gray from 0 (black) to 255 (white). In a given image, each pixel has one of these gray levels. For each gray level, the total number of pixels within the image having that level is represented in the histogram. So, the histogram has pixel counts as y-values and gray levels as x-values. Color images may be represented as three such histograms, one for each of three different colors. Many filters can be understood in terms of such image histograms.
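As an illustrative sketch (not code from the patent), such a histogram can be computed in a few lines with NumPy:

    import numpy as np

    def gray_histogram(img):
        # img: 2-D uint8 array; returns 256 counts, one per gray level
        counts, _ = np.histogram(img, bins=256, range=(0, 256))
        return counts

    img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)  # placeholder
    assert gray_histogram(img).sum() == img.size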
Various contrast filters shift the color or level of gray for pixels in the image as a function of their separation (in shades of gray) from some "level" or central value of gray scale. In general, these filters tend to expand or compress the range of gray scale or color levels within an image.
An auto density filter allows the user to select minimum and maximum levels of gray. All levels of gray below the minimum are converted to black. All levels of gray above the maximum value are converted to white. The remaining portions of the histogram are expanded so that the contrast of the image is increased in the range between the minimum and the maximum values.
A "contrast" filter modifies the contrast of an image based upon the parameters
"level" and "gain." The level parameter sets a baseline value of gray scale or color that defines the left and right portions of the histogram. The parameter gain is the percentage by which the level of gray (or color) of each pixel is increased or decreased with respect to the level.
An "expansion histogram" filter acts much like an auto density filter. Values of minimum and maximum levels of gray (or color) are specified to define the interval in which the original portion of the histogram is redistributed.
Various "smoothing" filters may be employed. Generally, these filters smooth the background or an image by removing abrupt changes in pixel values across a short distance. They accomplish this by replacing a pixel's original level of gray (or color) with a value calculated as the average of pixel values in a circle of pixels of defined radius around the pixel of interest. The radius over which the average value is calculated may be varied as a user-settable parameter. In some smoothing filters, a matrix is applied to the surrounding pixels defined by the radius. This matrix includes various weights or multipliers used to scale the pixel values within the radius prior to averaging them. Various matrixes may be employed to change the property of the filters. Without such matrix, a simple "median" type filter results. One type of matrix creates an "erosion" filter, which not only smoothes features but tends to reduce the area encompassed by contours. A dilation filter employs a different type of matrix and tends to dilate or increase the area of a feature that is smoothed. An "opening" filter employs and erosion filter followed by a dilation filter. A "closing" filter does the opposite; it applies a dilation filter followed by an erosion filter.
Various types of filters may be employed to remove granularity in a particular image. Many tissue sections can be difficult to interpret because of a very granular background. Such filters can reduce or eliminate differences in pixel values between adjacent pixels. Another class of filters is known as "gray map" filters. One example of such a filter is a threshold filter that transforms an image into two levels (black and white), as specified by a parameter known as the threshold. The threshold is a gray level. All pixels having values less than the threshold are made black and all pixels having values greater than the threshold are made white.
Another gray map filter is known as a "reduce" filter. This filter reduces the number of levels of gray (or color) to a "level" specified by the user. Thus, if there were originally 256 levels of gray (or color) in the original image, the filtered image will contain only "x" gray levels, as specified by the level parameter. When the value of level is set, this filter identifies "x" density peaks in the histogram of an image. Specifically, it identifies the "x" highest density peaks in the image. Then, all pixel gray levels in the image are set to the specific gray levels identified at the density peaks.
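A minimal sketch of this reduce behavior follows. Snapping each pixel to the nearest peak value is an assumption; the text above says only that pixel levels are set to the gray levels found at the density peaks:

    import numpy as np

    def reduce_filter(img, levels):
        counts, _ = np.histogram(img, bins=256, range=(0, 256))
        peaks = np.sort(np.argsort(counts)[-levels:])  # the x busiest gray levels
        # Snap every pixel to the nearest peak value (an assumed policy).
        nearest = np.abs(img[..., None].astype(int) - peaks).argmin(axis=-1)
        return peaks[nearest].astype(np.uint8)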
Various "segmentation" filters may be employed to emphasize structure in certain features of an image. One example of such segmentation filter is a top hat transformation, Such filter is used for segmenting objects in images that differ in brightness from the background, even when the background is very uneven and gray scale. This filter is useful for extracting light objects (or conversely dark ones) on a dark (or light) but slowly changing background.
Another segmentation filter is the "skeleton" filter, which is used to find a connected skeleton inside an object or feature. Another segmentation filter can be used to "compute contours" by reducing gray levels in the original image and then detecting contours by color (or level of gray).
In some embodiments, the quantitative tools of this invention provide user interface features allowing scientists to construct macros that combine groups of filters that are available singularly via the interface. The resulting macro is given a name. When the macro having that name is selected and applied to an image, a linked sequence of filtering operations, as defined by the macro, is performed on the selected image (or images).
Figures 4A and 4B illustrate one example of a suitable macro creation tool. As illustrated in these figures, a macro window 403 displays a list 405 of all macros that have been created for a particular instance of the quantitative user interface. Associated with list 405 is a set of buttons 407 that allow scientists to create new macros, rename existing macros, and delete existing macros. A macro definitions window 409 displays the sequence of operations that will be performed upon execution of the particular macro highlighted in list 405. These operations include load, save, and close operations in addition to the various filters selected to operate on biological images. Each of these operations has an associated argument. For example, as illustrated in Figure 4A, a load command specifies the file name of a file containing biological images to be manipulated via the macro. A save command specifies a location and document name for storing images manipulated in accordance with the macro. Specific filters that comprise the macro each have their own arguments, as necessary. For example, an auto density filter must specify minimum and maximum gray scale levels. A degran filter must specify a range of gray levels to be consolidated and a size limit defining the number of contiguous pixels that can be consolidated for smoothing. In a preferred embodiment, default versions of these arguments will be provided with the filters. The user who creates a macro can then choose to use the default arguments or input his or her own arguments.
Within macro window 403, a "filters" feature 411 presents a list 413 of macro operations available for selection by a scientist. Each listed macro operation indicates the arguments, if any, that must be supplied. By selecting individual ones of these operations, the scientist can construct a desired macro.
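Conceptually, a macro of this kind is an ordered list of named filter operations and their arguments. The sketch below is a hypothetical rendering of that idea; the registry, filter implementations, and parameter values are invented for illustration:

    import numpy as np

    def auto_density(img, gmin, gmax):
        out = (img.astype(np.float32) - gmin) / max(gmax - gmin, 1) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)

    def threshold(img, level):
        return np.where(img >= level, 255, 0).astype(np.uint8)

    FILTERS = {"auto_density": auto_density, "threshold": threshold}

    # A macro is just an ordered list of (filter name, arguments):
    macro = [("auto_density", {"gmin": 5, "gmax": 99}),
             ("threshold", {"level": 92})]

    def run_macro(img, macro):
        for name, kwargs in macro:
            img = FILTERS[name](img, **kwargs)
        return img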
METHODS
As mentioned, one desirable goal of a quantitative visualization tool for biological images is the ability to provide sophisticated quantitative analysis of biological images in response to a single user action or a simple sequence of actions. As used herein, the terms "quantitative analysis" and "quantitative methods" may subsume some analysis that provides a qualitative output (e.g., classification algorithms). In one embodiment of the invention, a user may initiate various quantitative methods by selecting a menu option, clicking on a button, or similar operation.
Generally, the methods perform one or more of the following: detection of patterns (spots, cells, organelles, etc.) in biological images, quantification of these patterns, classification of these patterns, and comparative analysis of data results. In general, the methods of this invention calculate some local and/or global parameters of the type described above. Specific examples include detecting image features meeting certain geometric constraints, comparing such features in related images, classifying features, and the like. Cells may be located and classified in a tissue image, temperature variations may be detected between two related thermal images, specific proteins may be identified and classified, etc.
Figure 5A presents one example of a visualization tool interface in which a method is executed in response to selection of a menu item. In this example, user interface 201 includes four "method" menu items in process menu 314. As shown in Figure 5A, a group of specific methods deemed "melanophore" methods have been selected. Based upon the active project, two melanophore methods are available for execution: spot detection and spot classification. If a user selects one of these, that method will automatically be executed on one or more images - as specified by the method - in the currently opened project.
In one embodiment, a melanophore assay is used to look for chemicals that activate specific receptors. Frog pigment cells (melanophores) that express cloned human receptors are combined with agar and poured into a plate, forming a flat sheet. Very small amounts of liquid compounds are delivered to the surface of the frog cells in a predefined pattern. Receptor activation causes dispersion of melanin, and dark spots appear where active compounds were delivered.
In an image window 502 depicted in Figure 5A, an image resulting from a melanophore assay is presented. This image represents a "difference" between two images of the same assay taken at different times. Spots in this "difference" image identify regions of an assay substrate where the dispersion of melanin has changed over time. This generally suggests that compounds delivered at those locations are in some way active.
An image analysis method associated with spot detection can detect such spots and quantify their properties such as their dimensions. Any number of suitable algorithms may be used for this analysis. In an assay employing very small dots, a large number of compounds can be screened at one time. As their identity is positional, rapid image analysis allows quick identification of active compounds.
Sometimes in a melanophore assay, spots identified in a "difference" image may not represent significant results. To discriminate between spots of likely interest and artifactual or ambiguous spots, a spot classification method may be employed. Such a method may employ an algorithm (preferably a neural network) that distinguishes spots based on such criteria as size, shape, texture, and the like. Examples of classifications that may be accomplished by such a method include good spots, ambiguous spots, fused spots, spots with pale centers, and the like. Note that a spot with a pale center could be particularly interesting because it indicates a location where early in the assay a small dot appeared and later in the assay a much larger dot appeared. The subtraction of the two images removes the darkness in the center of the spot because the darkness in that region has not changed over time.
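The "difference" image underlying this analysis can be sketched as a pixel-wise subtraction of two time points. In this illustrative snippet (not the patent's algorithm), regions that have not changed, including the already-dark center of a growing spot, fall to zero, which is why pale centers appear:

    import numpy as np

    def difference_image(early, late):
        # Absolute pixel-wise change between two registered 8-bit images.
        diff = late.astype(np.int16) - early.astype(np.int16)
        return np.abs(diff).astype(np.uint8)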
Figure 5B presents a screen shot of user interface 201 after a melanophore method analyzed a collection of melanophore images in a project. As shown, the method creates icons 504 which, when selected by a double click for example, allow the user to visualize numerical results of the analysis. In addition, a melanophore assay image 506 presents boundary contours around the edges of the melanophore spots, as calculated by the method.
Figure 5C presents a screen shot of user interface 201 showing an image 508 having displayed therein a boundary of a spherical object as identified by a user actuated quantitative method. In addition data regarding an array of such objects is presented in a quantitative results table 510.
SPHEROID DETECTION
For many biological applications, such as the above-described melanophore analysis, it becomes important to identify features that assume a generally circular or spherical shape within an image. General examples include detection of dark or colored spots on an assay substrate, detection of cellular organelles such as nuclei, detection of interesting morphologies in a tissue section, and the like. In a specific example illustrated in Figure 5D, a spheroid detection algorithm operates as follows. Initially, an autodensity filter is applied. In a specific embodiment, it uses settings of Min = 5 and Max = 99. See 512. Next, a degranulation filter (degran) is applied with the following parameters: level = 1 and size = 128. See 514. Obviously, the filter parameters can be varied depending upon the amount of granule smoothing required for the background of the image. The goal is to reduce the spheroid and background granularity and to obtain the relevant shape.
Next, at 516, the system applies a threshold filter. As mentioned, such filters serve to convert gray scale images to entirely black and white images. In a specific embodiment, the threshold filter is set at a value of 92. This transforms all pixels having a gray level of 92 or higher to white and all pixels having a gray level of 91 or lower to black. After the image under consideration has been subjected to the threshold filter, the algorithm computes contours for any features still appearing on the image. See 518. With contours now delineated, the system determines which of the contoured features qualify as spheroids. See 520. Various parameters may be considered for this purpose. And various algorithms may make use of these parameters to give the ultimate indication of whether or not a feature qualifies as a spheroid. Examples of suitable parameters include the feature diameters, location in the image, density, etc.
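For illustration only, a minimal sketch of this pipeline appears below. The disclosed autodensity and degran filters are not publicly specified, so percentile-based contrast stretching and a median blur stand in for them here; the circularity test, minimum diameter, and all default parameter values are likewise assumptions of the example, not part of the disclosure.

```python
import cv2
import numpy as np

def detect_spheroids(gray, thresh=92, min_diameter=10, min_circularity=0.7):
    """Sketch of the Figure 5D pipeline: filter, threshold, contour,
    then qualify contoured features as spheroids. Illustrative only."""
    # Stand-in for the autodensity filter: stretch intensities
    # between the 5th and 99th percentiles to the full 0-255 range.
    lo, hi = np.percentile(gray, (5, 99))
    hi = max(hi, lo + 1)  # guard against a flat image
    stretched = np.clip((gray - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)

    # Stand-in for the degran filter: median blur smooths granules.
    smoothed = cv2.medianBlur(stretched, 5)

    # Threshold filter: gray levels >= thresh become white, rest black.
    _, binary = cv2.threshold(smoothed, thresh - 1, 255, cv2.THRESH_BINARY)

    # Compute contours of the remaining (dark) features.
    contours, _ = cv2.findContours(255 - binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    spheroids = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if perim == 0:
            continue
        # Qualify as a spheroid by equivalent diameter and roundness.
        diameter = 2.0 * np.sqrt(area / np.pi)
        circularity = 4.0 * np.pi * area / perim ** 2
        if diameter >= min_diameter and circularity >= min_circularity:
            spheroids.append(c)
    return spheroids
```

Diameter and circularity are only two of the possible qualification parameters; location in the image and density could be tested in the same loop.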
SPOT CLASSIFICATION
If the system detects a spheroid in a biological image, it may need to determine whether that spheroid can be classified in a meaningful way. As mentioned, a melanophore assay image, for example, may present various spots, some of which are meaningful and others of which are not. A specific example of a spot classification algorithm will now be described with reference to Figure 5E. As shown, the process begins with application of an auto density filter at 501. As in the case of spheroid detection in Figure 5D, a specific implementation employs auto density filter parameters set at Min = 2 and Max = 95. After application of the auto density filter, the system next deletes background by employing a background deletion filter at 503. Such a filter iteratively deletes background features until converging at an optimum level. The resulting image thus has a cleaner background. The next filter applied is a degran filter. See 505. This serves to reduce the occurrence of potentially interfering granules in the background of the image. In a specific embodiment, the degran filter parameters are set at level = 1 and size = 256. Next, the system applies a color reduction filter to reduce the number of different colors/shades of gray in the image. See 507. For example, if there are eight significant maxima in the histogram, the filter reduces the number of shades to eight.
To further improve the quality of the image, the system again applies a degran filter to the resulting image. See 509. This can remove some interfering granular structures that could not be removed with the earlier application of the degran filter (at 505). Next, the system performs threshold filtering on the image twice (in parallel): once with a low parameter setting (see 511) and once with a high parameter setting (see 513). This allows the system to determine whether an image includes a feature having a contour that remains detectable for two separate threshold settings. To accomplish this, the system calculates contours for each of the parallel images - once for the image subjected to the low threshold filtering (see 515) and once for the image subjected to the high threshold filtering (see 517). This should produce two separately filtered versions of a single image, each with its own contours. Note that any subset or all of the filtering operations shown in 501-517 could be programmed as a macro in accordance with this invention. The system then compares the two image versions to determine whether the calculated contours in one image closely correspond to the contours in the other version. See 519. This merging of contours operation may be performed by any suitable algorithm. For example, the system may determine all the x-y coordinates for each feature having a closed contour. These coordinates are compared between image versions. A feature having a particular degree of agreement between the two image versions may be classified as a "good" or "unambiguous" spot. Other features whose contours do not match so well (possibly appearing in only one filtered version of the image) may be classified as "ambiguous." The system may employ other criteria to determine whether the shape of a feature affects its classification. For example, a spot whose shape is not sufficiently round may be suspect. Also, the system may employ a special classification for spots having their perimeters fused with adjacent spots. Still further, a spot with a "pale center" may be particularly relevant as explained above in the context of the melanophore assay.
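A minimal sketch of the dual-threshold comparison is given below. Centroid matching is only one possible realization of the "merging of contours" operation described above; the threshold values, the matching tolerance, and the function names are assumptions of the example rather than the disclosed method.

```python
import cv2
import numpy as np

def classify_spots(gray, low=60, high=120, match_dist=5.0):
    """Sketch of the Figure 5E idea: a spot whose contour survives
    both a low and a high threshold is "good"; a spot appearing at
    only one setting is "ambiguous". Parameters are illustrative."""

    def centroids_at(thresh):
        # Dark spots become white under an inverse binary threshold.
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        pts = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return pts

    low_pts, high_pts = centroids_at(low), centroids_at(high)

    results = []
    for x, y in low_pts:
        # A feature is "good" if a matching contour centroid exists
        # in the version filtered at the other threshold setting.
        matched = any(np.hypot(x - u, y - v) <= match_dist
                      for u, v in high_pts)
        results.append({"x": x, "y": y,
                        "class": "good" if matched else "ambiguous"})
    return results
```

Roundness, fused-perimeter, and pale-center tests could be added as further per-feature checks in the same loop.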
After the algorithm has made the necessary classification, it may output the results in a table or other logical representation. In one format, the individual features (as identified by particular ID numerals) and their associated classifications are listed on the computer screen.
ORGANELLE CLASSIFICATION
For many applications, it can be important to identify specific organelles within a cell. Such organelles often appear as contours when cells are stained with an appropriate dye and imaged. One biological quantitative method of this invention involves classification of cellular organelles appearing in an image.
In one general approach, image files are processed to (1) remove granulosity and (2) compute the edges of the stained areas to isolate them from the background. Characteristic visual features of the isolated stained areas may then be extracted to discriminate between the different sub-cellular structures.
As illustrated in Figure 6, a specific example of such a method may operate as follows. The images of interest are collected from cells using confocal microscopy at various depths of focus. Each "slice" of data is then loaded as a separate image into a drawer of a project. See 603. From the collection of images in a given drawer, the algorithm first selects four (or another number specified by the algorithm) of these images for further analysis. See 605. In one embodiment, this selection is made by applying a simple threshold filter and then determining which four images contain the most black pixels.
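As a sketch of that selection step only, the following assumes the slices are grayscale NumPy arrays and uses an illustrative threshold value; neither the threshold nor the function name comes from the disclosure.

```python
import numpy as np

def select_slices(slices, n=4, thresh=128):
    """Sketch of step 605: threshold each confocal slice and keep
    the n slices containing the most black (below-threshold) pixels.
    thresh is an illustrative value."""
    black_counts = [int((img < thresh).sum()) for img in slices]
    # Indices of the n slices with the most black pixels.
    order = np.argsort(black_counts)[::-1][:n]
    return [slices[i] for i in sorted(order)]
```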
The four selected image slices are analyzed by a classification algorithm that first identifies contours of the features within each image. See 607. The algorithm then creates a new compressed image containing only the organelle shape. See 609. At this point, the system employs an appropriately trained neural net (or other suitable classification algorithm) to make a determination as to what type of organelle resides within the contours of the image. See 621. From this information, the system classifies the feature as Golgi, endosome, nucleus, mitotic nucleus, lysosome, or reticulum (in one example).
At the end of the classification process, the system may create a table that provides the classification for each group of images as well as its "score." A high score indicates high confidence in the classification. For example, if three of the four images are classified unambiguously but the fourth is ambiguous, the score might be 75%.
As mentioned, the various quantification methods may employ specific algorithms in conjunction with various filters. Such algorithms may be of any suitable type. In one specific embodiment, modular neural networks are employed for this purpose. Such neural networks are described in M. De Francesco, Handbook of Neural Computation, B2.9 "Modular topologies," Oxford University Press 1997, which is incorporated herein by reference for all purposes. As is well understood in the art, appropriate training sets of images must be employed to train the neural networks to accurately quantify and classify images.
In general, the automated recognition of intracellular structures as described herein can be applied in several ways in the area of gene discovery. For instance, by randomly inserting visualisable tags (e.g. GFP for in vivo labelling) into a wide variety of genes, it is possible to generate localisation patterns for a large number of proteins, some known and some unknown. Automation of the pattern analysis would speed up such an approach, allowing it to be applied to genome-wide screenings aimed at creating a localisation database for all expressed proteins in an organism and at improving knowledge of sub-cellular architecture and cell morphogenesis.
THERMAL IMAGING METHODS
Certain imaging techniques may generate images by employing very sensitive infra-red cameras (or other thermal imaging apparatus) to detect small changes in temperature at particular locations in in vitro assays or in local human body temperature (in vivo). One application involves putting cells into a 96-well plate and measuring their metabolic response to various compounds. Another application involves imaging human bodies and comparing temperature distributions over time or between individuals. This information may also be employed to determine local temperature changes over time.
Central to some of these investigations is defining a contour delineating the boundary between regions of interest and uninteresting regions. In one example, the contour may define the outline of a human body or a portion of a human body (head, neck, torso, pelvis, leg, and arm regions). For such contours, a virtual grid pattern is defined and a corresponding screen overlay generated for the user. Each grid element in the grid pattern represents a different spatial region of the human body (or portion) contour. The grid elements can be compared to determine such properties as average temperature within each grid element, and differences in average temperature between corresponding grid elements of two separate images (possibly taken at different times for the same individual or possibly taken for different individuals).
In order to define a contour image, an algorithm first determines the relative orientation of a human body image. For example, a simple algorithm using width and height of a human body image can be used to determine relative orientation R, such as: R = width of image/height of image.
In this case, when R>1 the human body image is horizontal as projected on a computer monitor for example. When R<1, the image is vertical. Determining the relative orientation of an image is important not only for comparing different images, for example from different projects, but also for choosing appropriate algorithms used to generate contours from a human body image. For example, different algorithms are used to detect the arms, head, torso, etc. depending upon the relative orientation of the image.
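A direct transcription of this orientation test follows; the treatment of the degenerate case R = 1 as vertical is an assumption, since the disclosure leaves it unspecified.

```python
def relative_orientation(width: int, height: int) -> str:
    """R = width / height; R > 1 means the body image is horizontal
    on screen, R < 1 vertical. A square image (R == 1) is treated
    as vertical here by convention -- an assumption."""
    r = width / height
    return "horizontal" if r > 1 else "vertical"
```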
Once the relative orientation of the image is determined, such algorithms define the contour(s) of interest and a virtual grid overlay is created. In Figure 7A, a visualization interface of the invention is displaying a thermal image 700 of a portion of a human body (seated male) showing a virtual grid overlay 702. In this case, grid overlay 702 maps the head and torso contours of the thermal image. Corresponding pixels for each of the grid elements are used for temperature calculations. For example, all of the pixels in grid element 704 would be used for calculations because the entire area of 704 is filled with human body contour image. In another grid element, only those pixels representing human body contour image would be considered for calculation; see 706. In the same grid element, pixels not representing human body contour image, 708, would not be considered for temperature calculations.
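The per-element rule above - count only pixels inside the body contour - can be sketched as follows. The uniform rectangular grid geometry and the function name are assumptions for the example; the disclosure aligns the grid to the detected contour rather than to the full image.

```python
import numpy as np

def grid_statistics(temps, inside_mask, rows, cols):
    """Per-grid-element temperature statistics: only pixels lying
    inside the body contour (inside_mask True) contribute to each
    element's average. Grid geometry here is illustrative."""
    h, w = temps.shape
    stats = {}
    for r in range(rows):
        for c in range(cols):
            # Pixel block covered by grid element (r, c).
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            cell = temps[ys, xs][inside_mask[ys, xs]]
            if cell.size:  # skip elements containing no body pixels
                stats[(r, c)] = {"mean": float(cell.mean()),
                                 "std": float(cell.std())}
    return stats
```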
In Figure 7B, the quantitative visualization interface of the invention is displaying an image 701 of a human torso that has been imaged with an infra-red camera. As shown, a grid has been overlaid on the torso image to define a series of regions. A data table 703 shows average temperature values for each grid element (represented by a unique row and column element). In some methods involving a comparison of two images at different times, the computer system generates a table showing, for each grid element, the change in average temperature, the average temperature, and the standard deviation. In one embodiment, such result tables for thermal imaging methods are generated in color code. In a preferred example, one column of data in a table corresponds to temperature average values. If a particular temperature average is less than a general average (or some standard), then that value is displayed (or printed out) in blue; otherwise in red. Of course, the result table color-coding feature of the invention can be used for other results such as those obtained from applying colored filters or manipulating color images.
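The blue-below/red-above rule is simple enough to show directly; the ANSI terminal codes below are purely an illustrative rendering choice, not how the disclosed interface displays its tables.

```python
def color_code(values):
    """Render averages below the overall mean in blue, others in
    red, per the color-coding rule (ANSI codes for illustration)."""
    overall = sum(values) / len(values)
    BLUE, RED, RESET = "\033[34m", "\033[31m", "\033[0m"
    for i, v in enumerate(values):
        color = BLUE if v < overall else RED
        print(f"element {i}: {color}{v:.1f}{RESET}")
```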
In Figure 7C, the user has stretched the contour confining the grid elements of the torso image (to produce a modified image 701'). The user then reapplied the method to the image. This produced a second data table 705 showing recalculated average temperature values within the new grid elements. Note that the values in tables 703 and 705 are different, reflecting the change in location of the grid elements. Methods that operate on thermal images identify relevant contours of the human body image, align the virtual grid appropriately on such contours, and ensure that there is an accurate correspondence between grid elements of separate images. As mentioned, methods of the invention can segment a human body image into different contour regions, for example head, neck, torso, pelvis, leg, and arm regions. The methods also automatically calculate the relevant grid element parameters such as average temperature, temperature difference, standard deviation, element dimensions, and the like.
To delineate relevant body image contours, methods of the invention first assign control points to a human body thermal image. One specific embodiment adapts Chetverikov's algorithm as described in "A Simple and Efficient Algorithm for Detection of High Curvature Points in Planar Curves" by Dmitry Chetverikov and Zsolt Szabo, Proc. 23rd Workshop of the Austrian Pattern Recognition Group, Steyr, pp. 175-184, 1999 (incorporated herein by reference for all purposes). This paper describes a two-pass algorithm that defines a corner as a location where a triangle of specified size and opening angle can be inscribed in a curve. In the first pass, the algorithm scans the sequence and selects candidate corner points.
In a preferred embodiment of this invention, the first pass of this algorithm is used to define the best control points (such as corners) of a body contour image. For example, in the case of an image of a human torso, these points are scrutinised to select the six best control points satisfying the following criteria: two control points for the neck base, two control points for the armpits, and two control points for the bottom of the back. Figures 7D-7F illustrate an example of this process.
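A simplified sketch of the cited first pass appears below. It inscribes triangles with arms of bounded length at each contour point and keeps points admitting a sufficiently sharp opening angle; the arm-length bounds, the angle limit, and the scoring by smallest angle are illustrative parameter choices, not values from the paper or the disclosure.

```python
import numpy as np

def candidate_corners(contour, d_min=7, d_max=9, alpha_max=150.0):
    """Simplified first pass of the Chetverikov-Szabo detector:
    a point is a candidate corner if a triangle with arm lengths
    between d_min and d_max and opening angle below alpha_max can
    be inscribed in the curve at that point. Values illustrative."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    candidates = []
    for i in range(n):
        best = None
        for d in range(d_min, d_max + 1):
            p, a, b = pts[i], pts[(i - d) % n], pts[(i + d) % n]
            u, v = a - p, b - p
            denom = np.linalg.norm(u) * np.linalg.norm(v)
            if denom == 0:
                continue
            # Opening angle of the inscribed triangle at p.
            angle = np.degrees(np.arccos(np.clip(np.dot(u, v) / denom,
                                                 -1.0, 1.0)))
            if angle <= alpha_max and (best is None or angle < best):
                best = angle
        if best is not None:
            candidates.append((i, best))  # smaller angle = sharper corner
    return candidates
```

The paper's second pass, which suppresses weaker candidates adjacent to stronger ones, is omitted here since the embodiment relies on the first pass.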
Figure 7G depicts one procedure for determining the contour of a torso so that the control points can be identified in further processing. As shown in Figure 7G, a contour finding method 712 first receives a previously generated thermogenesis image of a torso at 714. Such images may be obtained with ultra-sensitive infra-red cameras, which can distinguish slight temperature variations in a torso. The thermogenesis image is first processed by applying an autodensity filter (716). In a specific embodiment, the parameters are set as follows: Min=2 and Max=95. Thereafter, the method performs an inverse video filter at 718. This filter is necessary to obtain an image with a white background (the subsequently used compute contour filter needs an image with black patterns on a white background).
When the inverse video filtering has been performed, the method next performs threshold filtering at 720. In a preferred embodiment, the threshold parameter is dynamically set depending upon the characteristics of the image. In this case, to obtain the threshold parameter, an image histogram may be calculated and the following expression applied:

Relevant Pixels = Width * Height - White Pixels

where Width and Height are respectively the width and the height of the current image (in numbers of pixels), and White Pixels is the number of white pixels in the current image (corresponding to the image background). The threshold parameter is then the grey tone value corresponding to 50% of the Relevant Pixels. With this formula, one obtains the limit (threshold) where the image pixels are most relevant for applying the next filter.
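A direct sketch of this computation follows. Treating only pure-white pixels (gray level 255) as background is an assumption; the disclosure does not state exactly which gray levels count as "white" after the inverse video filter.

```python
import numpy as np

def dynamic_threshold(gray):
    """Relevant Pixels = Width * Height - White Pixels; the
    threshold is the grey tone at which the cumulative histogram
    (excluding the white background bin) reaches 50% of the
    relevant pixels."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    white_pixels = hist[255]          # background (assumed = 255)
    relevant = gray.size - white_pixels
    cumulative = 0
    for tone in range(255):           # exclude the white bin
        cumulative += hist[tone]
        if cumulative >= relevant * 0.5:
            return tone
    return 255
```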
Now the contour can be computed (722) using a technique such as that described above. The process of finding a body contour is thus complete. The next step is to identify only the human torso by defining 6 control points. Figure 7H presents a method 726 for determining a temperature distribution over a torso having a contour such as that determined by process 712. Initially the method identifies all control points on the body contour. See 728. Preferably, this is accomplished using Chetverikov's algorithm as mentioned above. Of course, other regions of interest (head, arms, legs, pelvis, etc.) can be identified using analogous algorithms to define corresponding control points. Then, the control points are scrutinized to select only a minimum number of control points (e.g., six) that define a coarse torso shape. See 730. To select these six control points, one may divide the body contour into four regions defined by the center of gravity of this body contour. These regions include a bottom region from the waist up to the lower chest, a top left region including the left arm and armpit, a top right region including the right arm and armpit, and a top center region including the neck and shoulders.
To define these six points, one may apply a series of rules. For the back bottom region, choose the two control points with a maximal angle in this region. For the top left and top right regions, choose the control point with the maximal angle in each region; these represent the armpits. For the neck base region, choose the two control points with the maximal angle, subject to a distance constraint between the neck and the armpits (this constraint selects the two control points with the minimum y distance between the neck and the armpits).
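One possible reading of these rules is sketched below, building on the candidate_corners() sketch given earlier. The region tests around the centre of gravity, the interpretation of "maximal angle" as the sharpest reported corner, and the neck-band width are all assumptions; image coordinates are taken with y growing downward.

```python
import numpy as np

def select_torso_points(corners, contour):
    """Select six torso control points from (index, angle) corner
    candidates, splitting the contour around its centre of gravity.
    The region tests are simplified stand-ins for the stated rules."""
    pts = np.asarray(contour, dtype=float)
    cx, cy = pts.mean(axis=0)         # centre of gravity

    def best(region, k):
        # "Maximal angle" is read here as the sharpest (smallest)
        # opening angle reported by the corner detector.
        in_region = [(i, a) for i, a in corners if region(pts[i])]
        return sorted(in_region, key=lambda t: t[1])[:k]

    bottom = best(lambda p: p[1] > cy, 2)               # back bottom (y down)
    left = best(lambda p: p[1] <= cy and p[0] < cx, 1)  # left armpit
    right = best(lambda p: p[1] <= cy and p[0] >= cx, 1)  # right armpit

    # Neck-base points: sharpest corners in a top-centre band,
    # constrained to lie above (smaller y than) the armpits.
    armpit_y = max((pts[i][1] for i, _ in left + right), default=cy)
    band = 0.2 * np.ptp(pts[:, 0])    # illustrative band width
    neck = best(lambda p: abs(p[0] - cx) < band and p[1] <= armpit_y, 2)

    return bottom + left + right + neck
```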
After the defining control points are selected, the method next applies a grid to the torso using the selected control points. See 732. Finally, based upon variations in the infra-red thermogenesis image, the method calculates temperature variations. See 734. It does this separately for each grid element generated at 732. This method provides relevant information about localised body temperature variations as a function of time, exposure to exogenous compounds, etc. As well, once a human body thermal image is segmented into regions (torso, head, arm, etc.), a unique temperature threshold value can be applied to each region for further analysis. Body regions can display unique temperature profiles in different situations. For example, if a particular drug were expected to induce generally above-normal body temperatures, then a unique temperature threshold filter value could be set for each body contour segment depending on what "above normal" means for that segment. This feature also is useful for redefining contours based on specific temperature data of interest or any arbitrary user-defined paradigm.
The methods described above use algorithms to automatically construct a contour of a human body thermal image. The contour, once formed, can be automatically segmented further into smaller, separate contours, such as head, leg, and arm contours. Another embodiment of the invention is a cutter tool. The cutter tool allows the user to cut (free hand) an existing contour into smaller contours based on subjective criteria rather than the objective criteria used in algorithms. This feature is particularly useful when an algorithm does not satisfactorily segment a particular contour into a desired segment, when a user desires analysis of segments not readily computed by algorithm, or when a user wants to remove a portion of an existing contour so that further analysis proceeds without said portion. When the cutter tool is used, the segments created are automatically numbered for ease of tracking and manipulation. Complex cuts can be made with the cutter tool, adding additional flexibility. Finally, the cutter tool can be used with any contours generated in the methods described herein; for example, generated contours representing organelles.
COMPUTER SYSTEMS AND ALTERNATIVE EMBODIMENTS
As should be apparent, embodiments of the present invention employ various process steps involving data stored in or transferred through one or more computer systems. Embodiments of the present invention also relate to an apparatus for performing these operations. This apparatus may be specially constructed for the required purposes, or it may be a general-purpose computer selectively activated or reconfigured by a computer program and/or data structure stored in the computer. The processes presented herein are not inherently related to any particular computer or other apparatus. In particular, various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description given below.
In addition, embodiments of the present invention further relate to computer readable media or computer program products that include program instructions and/or data (including data structures) for performing various computer-implemented operations. The media and program instructions may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of computer-readable media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media such as floptical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM) and random access memory (RAM). The data and program instructions of this invention may also be embodied on a carrier wave or other transport medium. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter. Figure 8 illustrates a typical computer system in accordance with an embodiment of the present invention. The computer system 800 includes any number of processors 802 (also referred to as central processing units, or CPUs) that are coupled to storage devices including primary storage 806 (typically a random access memory, or RAM) and primary storage 804 (typically a read only memory, or ROM). As is well known in the art, primary storage 804 acts to transfer data and instructions uni-directionally to the CPU and primary storage 806 is used typically to transfer data and instructions in a bi-directional manner. Both of these primary storage devices may include any suitable computer-readable media such as those described above. A mass storage device 808 is also coupled bi-directionally to CPU 802 and provides additional data storage capacity and may include any of the computer-readable media described above. Mass storage device 808 may be used to store programs, data and the like and is typically a secondary storage medium such as a hard disk that is slower than primary storage. It will be appreciated that the information retained within the mass storage device 808 may, in appropriate cases, be incorporated in standard fashion as part of primary storage 806 as virtual memory. A specific mass storage device such as a CD-ROM 814 may also pass data uni-directionally to the CPU.
CPU 802 is also coupled to an interface 810 that includes one or more input/output devices such as video monitors, track balls, mice, keyboards, microphones, touch-sensitive displays, transducer card readers, magnetic or paper tape readers, tablets, styluses, voice or handwriting recognizers, or other well-known input devices such as, of course, other computers. Finally, CPU 802 optionally may be coupled to a computer or telecommunications network using a network connection as shown generally at 812. With such a network connection, it is contemplated that the CPU might receive information from the network, or might output information to the network in the course of performing the method steps described herein. The above-described devices and materials will be familiar to those of skill in the computer hardware and software arts.
Although the foregoing invention has been described in some detail to facilitate understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. For example, the graphical user interfaces of this invention can provide access to methods and filters other than or in addition to those specifically enumerated herein. Furthermore, it should be noted that there are alternative ways of implementing both the process and apparatus of the present invention. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims

What is claimed:
1. A computer system capable of displaying and analyzing an image, the computer system comprising: one or more processors; one or more user input devices; a display capable of displaying the image and associated information in pre-specified formats, which are responsive to input signals from one or more of the input devices and signals from one or more of the processors; and a graphical user interface running on one or more of the processors and allowing organization and quantitative analysis of biological images, the graphical user interface comprising: a first graphical tool allowing the biological images to be organized into projects, wherein a project includes a hierarchy of nodes, each node representing a biological classification of the images; and a second graphical tool listing available methods and allowing selection and execution of a method that performs a quantitative analysis on one or more of the biological images organized into the project, wherein the quantitative analysis calculates biological information from one or more of the biological images.
2. The computer system of claim 1, wherein the biological images present tissue cross-sections.
3. The computer system of claim 1, wherein the biological images present results of biological assays.
4. The computer system of claim 1, wherein the biological images present sub-cellular organelles within a cell.
5. The computer system of claim 1, wherein the graphical user interface further comprises a project display that represents the project on the screen as a hierarchy of nodes.
6. The computer system of claim 1, wherein the first graphical tool allowing the biological images to be organized into projects comprises a menu allowing a user to select new project nodes and organize them in the project hierarchy.
7. The computer system of claim 1, wherein the first graphical tool allowing biological images to be organized into projects comprises a graphical feature allowing selection of a template specifying a hierarchy of nodes for a predefined project associated with one or more methods.
8. The computer system of claim 1, wherein one method available for selection using the graphical user interface automatically detects spots on the biological images.
9. The computer system of claim 8, wherein the one method classifies detected spots into biological categories.
10. The computer system of claim 1, wherein one method available for selection using the graphical user interface classifies sub-cellular organelles in the biological images.
11. The computer system of claim 1, wherein one method available for selection using the graphical user interface identifies and classifies cells in the biological images.
12. The computer system of claim 1, wherein one method available for selection using the graphical user interface compares biological information contained in related biological images.
13. The computer system of claim 1, wherein the graphical user interface further comprises a third graphical tool allowing selection of one or more filters, each of which performs a specific filtering operation on selected biological images.
14. The computer system of claim 1, wherein the second graphical tool for selecting and initiating execution of a method also allows selection and execution of a filter.
15. The computer system of claim 14, wherein the second graphical tool for selecting and initiating execution of a method comprises a menu displaying both methods and filters available for selection.
16. The computer system of claim 1, wherein the graphical user interface further comprises a third graphical tool allowing a user to create a macro linking two or more filters.
17. A computer program product including a machine readable medium on which are provided instructions for a graphical user interface allowing organization and quantitative analysis of biological images, the graphical user interface comprising: a first graphical tool allowing the biological images to be organized into projects, wherein a project includes a hierarchy of nodes, each node representing a biological classification of the images; and a second graphical tool listing available methods and allowing selection and execution of a method that performs a quantitative analysis on one or more of the biological images organized into the project, wherein the quantitative analysis calculates biological information from one or more of the biological images.
18. The computer program product of claim 17, wherein the biological images present at least one of the following: tissue cross-sections, results of biological assays, and sub-cellular organelles within a cell.
19. The computer program product of claim 17, wherein the first graphical tool allowing the biological images to be organized into projects comprises a menu allowing a user to select new project nodes and organize them in the project hierarchy.
20. The computer program product of claim 17, wherein the first graphical tool allowing biological images to be organized into projects comprises a graphical feature allowing selection of a template specifying a hierarchy of nodes for a predefined project associated with one or more methods.
21. The computer program product of claim 17, wherein the methods available for selection using the graphical user interface include at least one of a method that automatically detects spots on the biological images, a method that classifies detected spots into biological categories, a method that classifies sub-cellular organelles in the biological images, and a method that identifies and classifies cells in the biological images.
22. The computer program product of claim 17, wherein one method available for selection using the graphical user interface compares biological information contained in related biological images.
23. The computer program product of claim 17, wherein the graphical user interface further comprises a third graphical tool allowing selection of one or more filters, each of which performs a specific filtering operation on selected biological images.
24. The computer program product of claim 17, wherein the second graphical tool for selecting and initiating execution of a method also allows selection and execution of a filter.
25. The computer program product of claim 17, wherein the graphical user interface further comprises a third graphical tool allowing a user to create a macro linking two or more filters.
26. A method of organizing and performing quantitative analysis on one or more biological images using a graphical user interface, the method comprising: associating each of a group of biological images with a project node in response to user inputs that select individual biological images and associated project nodes via the graphical user interface; and initiating execution of a method in response to user selection of the method, from a list of methods, via the graphical user interface, wherein the method performs a quantitative analysis on one or more of the biological images organized into the project, and wherein the quantitative analysis calculates biological information from the one or more biological images.
27. The method of claim 26, wherein the biological images present tissue cross-sections.
28. The method of claim 26, wherein the biological images present results of biological assays.
29. The method of claim 26, wherein the biological images present sub-cellular organelles within a cell.
30. The method of claim 26, wherein associating each of the biological images with a project node comprises determining that the user has performed a drag and drop operation with individual biological images.
31. The method of claim 26, further comprising creating a project in response to user inputs that create new project nodes, locate such nodes within the project, and name the project nodes.
32. The method of claim 31, wherein the project is created in response to a user selection of a project template, which project template provides a hierarchy of nodes defining a specific saved project.
33. The method of claim 26, wherein one method available using the graphical user interface detects spots on the biological images.
34. The method of claim 33, wherein the one method classifies detected spots into biological categories.
35. The method of claim 26, wherein one method available for selection classifies sub-cellular organelles in the biological images.
36. The method of claim 26, wherein one method available for selection identifies and classifies cells in the biological images.
37. The method of claim 26, wherein one method available for selection compares biological information contained in related biological images.
38. The method of claim 26, further comprising filtering a selected biological image in response to a user selection of one or more specific filters to be applied to the selected biological image.
39. The method of claim 38, wherein the one or more filters include at least one of a contrast filter, a smoothing filter, and a color filter.
40. The method of claim 26, further comprising displaying quantitative information for multiple images, the quantitative information obtained by executing the method.
41. A method of analyzing biological images with the aid of a graphical user interface on a computer system, the method comprising: (a) organizing the images into projects separating the biological images based on biologically relevant classifications;
(b) displaying the images on a display of the computer in a manner allowing a user to study them;
(c) manipulating the displays, by filtering, to facilitate evaluation; and (d) performing qualitative or quantitative analysis of the images to assess their relevance.
42. The method of claim 41, further comprising displaying one or more tables of quantitative information resulting from performing quantitative analysis.
43. The method of claim 41, wherein the quantitative analysis provides local data describing dimensions of features contained in individual biological images and global data describing a biological classification of features in the biological images or a biological comparison of features contained in related biological images.
44. The method of claim 41, wherein the local data is displayed in association with individual biological images and wherein the global data is displayed separately as a table.
45. The method of claim 41 , further comprising obtaining the biological images from a database of images.