WO2011071505A1 - Integrated data visualization for multi-dimensional microscopy - Google Patents


Info

Publication number
WO2011071505A1
WO2011071505A1 (application PCT/US2009/067751, US2009067751W)
Authority
WO
WIPO (PCT)
Prior art keywords
images
image
dimensions
grid
dimensional
Prior art date
Application number
PCT/US2009/067751
Other languages
English (en)
Inventor
Avrum I. Cohen
Jerry M. Rubinow
Steven P. Boyd
Arad Shaiber
Original Assignee
Mds Analytical Technologies (Us) Inc.
Priority date
Filing date
Publication date
Application filed by Mds Analytical Technologies (Us) Inc. filed Critical Mds Analytical Technologies (Us) Inc.
Priority to PCT/US2009/067751 priority Critical patent/WO2011071505A1/fr
Publication of WO2011071505A1 publication Critical patent/WO2011071505A1/fr


Classifications

    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00: Microscopes
    • G02B21/36: Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365: Control or image processing arrangements for digital or video microscopes
    • G02B21/367: Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/20: Drawing from basic elements, e.g. lines or circles
    • G06T11/206: Drawing of charts or graphs

Definitions

  • The technology described herein relates to integrated visualization and manipulation of images and measurement data from multi-dimensional microscopy.
  • Integrated microscopy systems have contributed to the rapid growth of scientific breakthroughs in many disciplines, and particularly in the biosciences. Many such systems have associated image acquisition, processing, and analysis capabilities. For each microscopy experiment, a system can produce images and measurement data in multiple image-collection dimensions.
  • images can be acquired at multiple focus positions (e.g., in the Z, or Focus Position, dimension), at multiple wavelengths using different fluorochromes or microscopy techniques (e.g., in the Wavelength or Channel dimension), during a time lapse (e.g., in the Time dimension), and/or over multiple areas in a sample (e.g., in the Stage dimension), during a multidimensional experiment.
  • Numeric measurement data can also be derived from the images.
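As a purely illustrative sketch (not part of the disclosure), the multi-dimensional organization described above can be modeled as a mapping from coordinates in the four image-collection dimensions to images; all class and field names here are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each image is addressed by its values in the four
# image-collection dimensions (Time, Stage, Channel, Focus Position / Z).
@dataclass(frozen=True)
class Coord:
    time: int
    stage: int
    channel: str
    z: int

@dataclass
class Dataset:
    images: dict = field(default_factory=dict)  # Coord -> image payload

    def add(self, coord, image):
        self.images[coord] = image

    def get(self, coord):
        return self.images.get(coord)

# Build a tiny dataset: 2 time points x 2 channels at one stage and z.
ds = Dataset()
for t in (0, 1):
    for ch in ("DAPI", "GFP"):
        ds.add(Coord(time=t, stage=0, channel=ch, z=0), f"img_t{t}_{ch}")
```

With this shape, a Time series, Z-series, Channel series, or Stage series is just the subset of coordinates that varies in one field while the others are held fixed.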
  • Visualization of the images and measurement data of a multi-dimensional experiment (e.g., a set of measurements on a biological sample) allows a user to identify, assess, and compare features within the sample, under different experimental conditions, at different times, and in different areas within the sample.
  • images taken at different wavelengths tend to highlight different structures in a biological sample; these images can be aligned in a stack to produce a composite image to show the set of features and structures in the sample and their spatial relationships.
  • images taken at different times can show the evolution of the sample over time; these images can be played in sequence, as in a movie, to allow the user to quickly visualize the changes in the sample over time.
  • images taken along different spatial dimensions (e.g., the X-Y and Z dimensions)
  • Numeric measurement data can be obtained from the images for the identified objects and structures and includes, for example, cell cycle measurements, cell or nuclei scoring, colocalization and brightness measurements, and movement data for identified objects.
  • the measurement data for different images can be correlated with one another, or with values in one or more image-collection dimensions, and presented in graphs or tables along with the images.
  • the user often has to make comparisons between related images of different time points, different locations, different samples, or different experiments to identify and assess the commonality or differences in the related images.
  • While currently available systems are capable of presenting numeric measurement data in graphs and tables along with the images, the selection and mode of presentation of images and measurement data typically must be decided by the user.
  • When a user wishes to do a side-by-side comparison of two images containing a common object of interest, the user typically has to manually select the images from a database, place them side by side in an image viewer, and manipulate each image individually to locate the region of interest, in order to finally view the images side by side for proper comparison.
  • If the user needs to examine the numeric data for the same images, the user also has to locate the correct file of numeric data associated with the images. This process can become very time-consuming and tedious when the user has to compare a large number of images. Sometimes the task becomes virtually impossible, because the user may not always be able to remember, or even be aware of, which images show the same object of interest, and is therefore unable to locate the appropriate images for visual comparison.
  • This specification describes technologies relating to image visualization and manipulation for multi-dimensional microscopy datasets.
  • one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of: displaying a first plurality of images in a first two-dimensional image grid, wherein a first and a second dimension of the first two-dimensional image grid are selected from a plurality of image-collection dimensions including Time, Stage, Channel, and Focus Position, wherein the first plurality of images are selected from a first multi-dimensional dataset comprising images of a biological sample, each image in the first plurality of images being associated with a respective value in each of the first and the second dimensions, and wherein the first plurality of images are associated with a common value in a third dimension selected from the plurality of image-collection dimensions and are ordered in the first two-dimensional image grid according to the associated values of the images in the first and the second dimensions; receiving user input for visually manipulating one of the first plurality of images displayed in the first two-dimensional image grid, the user input including one or more of zooming, panning, and thresholding; and simultaneously updating views of the first plurality of images that are visible in the first two-dimensional image grid according to the user input.
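The grid-display action described in this aspect can be sketched as follows, assuming each image carries its dimension values in a plain dictionary; the function and field names are hypothetical, not from the specification:

```python
# Hypothetical sketch: pick images sharing a common value in a third
# dimension, then order them into a 2-D grid by two chosen dimensions.
def grid_slice(images, dim_a, dim_b, fixed_dim, fixed_value):
    """images: list of dicts, each with dimension values and an 'id'."""
    selected = [im for im in images if im[fixed_dim] == fixed_value]
    # Order rows by dim_a and columns by dim_b, as in a Time v. Z view.
    rows = sorted({im[dim_a] for im in selected})
    cols = sorted({im[dim_b] for im in selected})
    lookup = {(im[dim_a], im[dim_b]): im["id"] for im in selected}
    return [[lookup.get((r, c)) for c in cols] for r in rows]

images = [
    {"id": f"t{t}z{z}c{c}", "time": t, "z": z, "channel": c}
    for t in (0, 1) for z in (0, 1) for c in (0, 1)
]
grid = grid_slice(images, "time", "z", "channel", 0)  # Time v. Z at channel 0
```

Switching the view mode (e.g., from Time v. Z to Time v. Channel) only changes which dimension names are passed in; the ordering rule stays the same.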
  • the methods further include actions of: prior to receiving user input for visually manipulating one of the first plurality of images, displaying a second plurality of images from a second multi-dimensional dataset in a second two-dimensional image grid, wherein the simultaneous updating also updates views of each image that is visible in the second two-dimensional image grid.
  • each of the second plurality of images is associated with a respective value in each of the first and the second dimensions.
  • the methods further include actions of: displaying a table, the table comprising numeric measurement data derived from one or more of the first plurality of images; receiving user input selecting the numeric measurement data in the table; and updating the first two-dimensional image grid to visually emphasize the one or more images from which the selected numeric measurement data was derived.
  • the selected numeric measurement data is associated with an object in the one or more of the first plurality of images, and the methods further include the actions of visually emphasizing the object in the images that are visible in the first two-dimensional image grid.
  • the methods further include actions of: displaying a graph, the graph comprising data points representing numeric measurement data derived from one or more of the first plurality of images; receiving user input selecting one or more data points on the graph; and updating the first two-dimensional image grid to visually emphasize the one or more images corresponding to the selected data points.
  • the methods further include actions of: displaying a browsable image strip comprising a sequence of thumbnail images, the sequence of thumbnail images representing a sequence of values in one of the plurality of image-collection dimensions; receiving user input selecting one of the sequence of thumbnail images; and updating the first two-dimensional image grid to present a second plurality of images associated with a value represented by the selected thumbnail image in the one of the plurality of image-collection dimensions.
  • the methods further include actions of: visually manipulating the second plurality of images that are visible in the first two-dimensional image grid according to the visual manipulation that was applied to the first plurality of images.
  • a first and a second dimension of the two-dimensional image grid are selected from a plurality of image-collection dimensions including time, stage, channel, and focus position, wherein the plurality of images are selected from a first multi-dimensional dataset comprising images of a biological sample, each of the plurality of images being associated with a respective value in each of the first and the second dimensions, and wherein the plurality of images are associated with a common value in a third dimension selected from the plurality of image-collection dimensions and are ordered in the first two-dimensional image grid according to the associated values of the images in each of the first and the second dimensions; displaying a browsable image strip comprising a plurality of thumbnail images, the plurality of thumbnail images being derived from a first set of source images from one or more multi-dimensional datasets and representing a plurality of values in a respective image-collection dimension
  • each of the second set of source images is associated with the associated values of the selected image in both the first and the second dimensions.
  • a first plurality of images in a two-dimensional image grid, wherein a first and a second dimension of the two-dimensional image grid are selected from a plurality of image-collection dimensions including Time, Stage, Channel, and Focus Position, wherein the first plurality of images are selected from a first multi-dimensional dataset comprising images of a biological sample, each of the first plurality of images being associated with a respective value in the first and the second dimensions, and wherein the first plurality of images are associated with a common value in a third dimension selected from the plurality of image-collection dimensions and are ordered in the two-dimensional image grid according to the associated values of the images in each of the first and the second dimensions; displaying a browsable image strip comprising a plurality of thumbnail images, each of the plurality of thumbnail images representing a respective multidimensional dataset in a plurality of multi-dimensional datasets; receiving user input selecting a thumbnail image displayed in the browsable image strip
  • a first and a second dimension of the two-dimensional image grid are selected from a plurality of image-collection dimensions including Time, Stage, Channel, and Focus Position, wherein the first plurality of images are selected from a multi-dimensional dataset comprising images of a biological sample, each image in the first plurality of images being associated with a respective value in each of the first and the second dimensions, and wherein the first plurality of images are associated with a first value in a third dimension selected from the plurality of image-collection dimensions and are ordered in the two-dimensional image grid according to the associated values of the images in each of the first and the second dimensions; displaying a browsable image strip comprising a plurality of thumbnail images, each of the plurality of thumbnail images representing a respective value in the third dimension; receiving user input selecting one of the plurality of thumbnail images displayed in the browsable image strip,
  • displaying images in a two-dimensional image grid further includes the actions of: generating a plurality of composite images by overlaying multiple constituent images from multiple focus positions or multiple channels, wherein each of the plurality of composite images is associated with a respective value in each of the first and the second dimensions; and displaying the plurality of composite images in the two-dimensional image grid according to the associated values of the composite images in each of the first and the second dimensions.
  • displaying images in a two-dimensional image grid further includes the actions of: generating a plurality of composite images by overlaying multiple constituent images, wherein each of the plurality of composite images is associated with a respective value in each of the first and the second dimensions, and at least one object present in one of the plurality of composite images is invisible in at least one of the multiple constituent images of the composite image; and displaying the plurality of composite images in the two-dimensional image grid according to the associated values of the composite images in each of the first and the second dimensions.
  • displaying images in the two-dimensional image grid further includes the actions of: generating a plurality of composite images by overlaying multiple constituent images from multiple channels, wherein each of the plurality of composite images is associated with a respective value in each of the first and the second dimensions, and the multiple constituent images include at least a grayscale image and a colored fluorescent image; and displaying the plurality of composite images in the two-dimensional image grid according to the associated values of the composite images in each of the first and the second dimensions.
  • displaying images in the two-dimensional image grid further includes the actions of: generating a plurality of composite images by overlaying multiple constituent images from multiple channels or multiple focus positions, wherein each of the plurality of composite images is associated with a respective value in each of the first and the second dimensions; and displaying the plurality of composite images in the two-dimensional image grid according to the associated values of the composite images in each of the first and the second dimensions, and the methods further include the actions of: receiving user input modifying a display option for one of the multiple channels or multiple focus positions; and updating the plurality of composite images displayed in the two-dimensional image grid by independently modifying the constituent images from the one of the multiple channels or focus positions according to the modified display option.
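The per-channel display option described above can be sketched as recomposing each composite from its untouched constituent images. This is an illustrative sketch, not the claimed implementation: the on/off toggle, brightness scale, and max-blend rule are assumptions chosen for simplicity.

```python
# Hypothetical sketch: compose a grid cell by overlaying constituent
# channel images, honoring a per-channel display option (here, a simple
# visibility toggle and brightness scale), without altering the sources.
def composite(channel_images, options):
    """channel_images: {channel: 2-D list of intensities};
    options: {channel: {"visible": bool, "scale": float}}."""
    h = len(next(iter(channel_images.values())))
    w = len(next(iter(channel_images.values()))[0])
    out = [[0.0] * w for _ in range(h)]
    for ch, img in channel_images.items():
        opt = options.get(ch, {"visible": True, "scale": 1.0})
        if not opt["visible"]:
            continue  # hidden channel: its constituent image is left out
        for y in range(h):
            for x in range(w):
                # max-blend keeps the brightest contribution per pixel
                out[y][x] = max(out[y][x], img[y][x] * opt["scale"])
    return out

chans = {"DAPI": [[10, 0], [0, 10]], "GFP": [[0, 20], [20, 0]]}
opts = {"DAPI": {"visible": True, "scale": 2.0},
        "GFP": {"visible": False, "scale": 1.0}}
img = composite(chans, opts)  # only DAPI contributes, doubled in brightness
```

Because the sources are kept, changing one channel's option only requires re-running the blend, which is what lets each channel be modified independently.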
  • the techniques described in this specification allow images associated with multiple dimensions to be organized in a single integrated data visualization interface. Multiple related images (either from a single dataset or from multiple related datasets or experiments) can be viewed and manipulated in a coordinated manner.
  • Measurement data can be correlated with images and can be presented or highlighted in tables and graphs alongside the images.
  • the selection and highlighting of particular measurement data corresponding to the displayed images can be accomplished automatically by the integrated data visualization tool without user intervention.
  • the integrated data visualization tool can anticipate a user's image viewing needs based on the known relationships between the images in the different image-collection dimensions, and accurately and efficiently cache and present the images with minimal delay. The user can therefore concentrate on interpreting the images and data without bearing the burden of manually selecting and preparing each image and data item before viewing them for interpretation and comparison. Coordinated display and manipulation of images, measurement data, and graphs can be performed both within a single dataset and across multiple related datasets, depending on user preferences.
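One plausible way to realize the anticipatory caching described above is to prefetch the neighbors of the currently viewed image along each image-collection dimension, since those are the images most likely to be requested next. This is a hypothetical policy sketch, not the one claimed:

```python
# Hypothetical sketch: given the coordinate of the image the user is
# viewing, enumerate its neighbors along each image-collection dimension
# and load any that are not already cached.
def neighbor_coords(coord, extents):
    """coord: {dim: value}; extents: {dim: number of values in that dim}."""
    out = []
    for dim, v in coord.items():
        for delta in (-1, 1):
            n = v + delta
            if 0 <= n < extents[dim]:
                neighbor = dict(coord)
                neighbor[dim] = n
                out.append(neighbor)
    return out

cache = {}

def prefetch(coord, extents, load):
    for c in neighbor_coords(coord, extents):
        key = tuple(sorted(c.items()))
        if key not in cache:   # already cached: no reload needed
            cache[key] = load(c)

extents = {"time": 3, "z": 2}
prefetch({"time": 1, "z": 0}, extents, lambda c: f"img@{c['time']},{c['z']}")
```

When the user then steps one time point forward or back, or one focus position up, the image is already in memory and can be shown with minimal delay.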
  • thumbnail navigation can be enabled by using one or more browsable image strips.
  • Each image strip may represent one or more image-collection dimensions within a dataset or across multiple datasets or experiments.
  • Each cell of the image strip includes a thumbnail image representing the corresponding values of the cell in the one or more image-collection dimensions.
  • the thumbnail image can be derived from source images in the multi-dimensional dataset. The thumbnail is chosen to best represent the image(s) that are associated with the corresponding values of the cell in the one or more image-collection dimensions represented by the browsable image strip.
  • the browsable image strip allows the user to get a rough view of the images before selecting a particular image series to display in the image grid.
  • the image strip and generation of thumbnails can be updated automatically based on the user's selection or manipulation of the images in the image grid(s).
  • the images in the image grid can be updated automatically based on the user's selection of thumbnails in the image strip(s). Therefore, the navigation of the images becomes more intuitive and the user does not have to spend mental energy keeping track of the structure and organization of the multi-dimensional image database.
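The two-way coordination between strip and grid can be sketched as a small controller object; the class and method names below are hypothetical illustrations, not the patented implementation:

```python
# Hypothetical sketch: selecting a thumbnail in the strip swaps the grid
# to the image series that the thumbnail represents, so the two
# components stay in sync without the user managing files by hand.
class DatasetViewer:
    def __init__(self, series_by_value):
        # series_by_value: {dimension value: list of image ids}
        self.series_by_value = series_by_value
        self.strip = sorted(series_by_value)       # thumbnail order
        self.grid = series_by_value[self.strip[0]] # initial grid contents

    def select_thumbnail(self, value):
        # The grid updates automatically from the strip selection.
        self.grid = self.series_by_value[value]

viewer = DatasetViewer({0: ["t0_a", "t0_b"], 1: ["t1_a", "t1_b"]})
viewer.select_thumbnail(1)
```

The same pattern runs in the other direction: a grid manipulation can notify the strip so its thumbnails are regenerated, as the preceding bullets describe.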
  • FIGS. 1A-1D illustrate series of images taken in four exemplary image-collection dimensions: Time (FIG. 1A); Z (FIG. 1B); wavelength/Channel (FIG. 1C); and Stage (FIG. 1D).
  • FIG. 2 shows an exemplary layout of various components of a dataset viewer in an integrated data visualization interface for use with a multi-dimensional microscopy system.
  • FIG. 3A shows an exemplary image grid in a Time v. Z view mode.
  • FIG. 3B shows an exemplary image grid in a Time v. Channel view mode.
  • FIG. 4 shows an exemplary image strip for image navigation in a Stage dimension.
  • FIG. 5 is an exemplary dataset viewer showing a table component and a graph component presented along with an image grid and an image strip.
  • FIGS. 6A-6D are flow diagrams of exemplary processes for coordinated manipulation of related images and/or data within an image grid or multiple image grids (FIG. 6A), between an image grid and a table (FIG. 6B), between an image grid and a graph (FIG. 6C), and between an image grid and an image strip (FIG. 6D).
  • FIG. 7 is a flow diagram of an exemplary process for coordinated manipulation of an image grid and a browsable image strip in a dataset viewer.
  • FIG. 8 is an exemplary integrated multi-dimensional microscopy system, as may produce image and numerical data for processing with the technology described herein.
  • FIG. 9 is a general purpose computing device, as may be configured for use with the technology described herein.
  • FIGS. 1A-1D are schematics illustrating images collected in a number of such dimensions.
  • FIG. 1A shows, schematically, images (101a-e) of a cell sample taken consecutively in time.
  • the images (101a-e) can be consecutively captured at discrete time intervals by the imaging system (e.g., in a time lapse), or collected continuously. These images form a Time series and, when presented in order, can reveal how the cell sample evolved or changed over time.
  • each image is associated with a particular time value in a time sequence.
  • Each multidimensional experiment can include multiple Time series, each associated with a different value in at least one other image-collection dimension.
  • FIG. 1B shows images (102a-c) of a cell sample collected at a number of focus positions along the z-axis.
  • the z-axis is in the direction toward or away from the cell sample.
  • These images form a Z-series.
  • the images (102a-c) in the Z-series, when presented in order, show the cross-sections of the three-dimensional cell sample along the z-axis through the biological sample. Given sufficient granularity for the progression in the z-direction, the images in the Z-series can be used to reconstruct three-dimensional views of the objects in the sample.
  • each image is associated with a particular Z-position value along the z-axis.
  • FIG. 1C shows images (103a-c) taken under different imaging conditions, for example at different wavelengths.
  • imaging conditions include, for example, bright-field imaging, fluorescent imaging, imaging using transmitted light, imaging using reflected light, Differential Interference Contrast (DIC) imaging, phase contrast imaging, and so on.
  • In fluorescent imaging, different fluorescent markers and fluorochromes can be used to tag different structures within the biological sample.
  • Images can be collected at the different emission wavelengths. Different wavelengths can be used to excite the different fluorescent markers.
  • the different imaging conditions can be used to define an image-collection dimension, namely the Channel dimension. Each of the different imaging conditions, or wavelengths, corresponds to a particular channel in the Channel dimension. Images of different channels need not be arranged in any particular order relative to one another. There can be multiple Channel series in a multidimensional dataset, each Channel series being associated with a particular value in at least one other image-collection dimension (e.g., the Time dimension).
  • each image is associated with a particular imaging technique or imaging wavelength.
  • the different images of the same biological sample taken under different imaging conditions can visually reveal or enhance different structures within the biological sample.
  • the images in the Channel series can be stacked together to produce a composite image that provides a more complete picture of the biological sample.
  • the user can optionally select a subset of the images in the Channel series for making a composite that reveals only particular objects of interest in the biological sample.
  • images from different channels can be enhanced and subtracted from one another to isolate particular objects of interest.
  • FIG. 1D shows images (104a-c) taken at different stage locations in the biological sample.
  • the field of view for a microscopy imaging system is smaller than the size of the biological sample to be imaged. Therefore, multiple images need to be taken at different locations over the biological sample to capture the entirety of the features of interest of the biological sample.
  • the area (105) over the entire biological sample can be divided into sections, each section being associated with a particular stage identifier. In some examples, the images taken over each section are associated with the stage identifier of the section. Images from different stages form the Stage series, and can be stitched together to form a broader view of the biological sample.
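Stitching the Stage series into a broader view can be sketched as placing each section's tile at its grid position. This is a deliberately simplified illustration: real stitching must also handle tile overlap and registration, which are ignored here.

```python
# Hypothetical sketch: lay 2-D image tiles out on a stage grid to form a
# mosaic; positions maps each stage identifier to a (row, col) section.
def stitch(tiles, positions, tile_h, tile_w):
    """tiles: {stage_id: 2-D list of intensities};
    positions: {stage_id: (row, col)}."""
    rows = 1 + max(r for r, _ in positions.values())
    cols = 1 + max(c for _, c in positions.values())
    mosaic = [[0] * (cols * tile_w) for _ in range(rows * tile_h)]
    for sid, tile in tiles.items():
        r0, c0 = positions[sid]
        for y in range(tile_h):
            for x in range(tile_w):
                mosaic[r0 * tile_h + y][c0 * tile_w + x] = tile[y][x]
    return mosaic

# Two 2x2 tiles side by side form a 2x4 view of the sample area.
tiles = {"s0": [[1, 1], [1, 1]], "s1": [[2, 2], [2, 2]]}
mosaic = stitch(tiles, {"s0": (0, 0), "s1": (0, 1)}, 2, 2)
```

Each stage identifier plays the role of the section identifier described above, and the mosaic is the broader view assembled from the Stage series.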
  • FIGS. 1A-1D illustrate images for four example image-collection dimensions obtained from a multi-dimensional biological experiment.
  • Other image-collection dimensions can be defined.
  • a user can define an Experiment Condition dimension, where each value in the Experiment Condition dimension represents a particular experiment condition, or combination of conditions, that is present when images are being captured. For example, suppose in an experiment, a number of different catalysts were added consecutively to a biological sample, and an image was taken after each catalyst was introduced. The set of images collected under the different combinations of catalysts can form an Experiment Condition series, and each image is associated with a particular catalyst combination within the Experiment Condition series. In some implementations, multiple images can be taken under each experiment condition (e.g., at different time points, in different channels, at different stages, etc.), and multiple Experiment Condition series can be formed, each associated with a different value in at least one of the other image-collection dimensions.
  • Each image-collection dimension can exist with different granularities in different datasets. For example, images can be taken more frequently (e.g., every 1 ms) in one dataset, and less frequently (e.g., every 10 s) in another dataset. In some cases, an image-collection dimension can have different granularities over different value intervals. For example, images may be taken every 10 seconds during the first and last thirds of an experiment, but every 2 seconds during the middle third. The Time dimension for that dataset therefore has a higher sampling frequency in its middle third.
  • each image can be associated with particular values in multiple image-collection dimensions.
  • each time value is associated with a number of images. These images are associated with the same value in the Time dimension, but different values in at least one other dimension (e.g., different channel values, stage values, or z values).
  • a one-dimensional image series can be extracted without modification from a multi-dimensional dataset.
  • each image is associated with a different value in one particular image-collection dimension, but shares identical values in all other image-collection dimensions.
  • each image is associated with a different time value, while all images in the one-dimensional Time series are associated with the same stage, z, and channel values.
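Extracting such a one-dimensional series amounts to fixing every other dimension and sorting by the one that varies. The sketch below uses hypothetical names and a plain list-of-dicts dataset shape for illustration:

```python
# Hypothetical sketch: pull a one-dimensional Time series out of a
# multi-dimensional dataset by fixing stage, z, and channel, then
# ordering the remaining images by their time values.
def extract_series(images, varying, fixed):
    """images: list of dicts with dimension values and an 'id';
    varying: the dimension that changes; fixed: {dim: required value}."""
    hits = [im for im in images
            if all(im[d] == v for d, v in fixed.items())]
    return [im["id"] for im in sorted(hits, key=lambda im: im[varying])]

images = [
    {"id": f"t{t}c{c}", "time": t, "channel": c, "stage": 0, "z": 0}
    for t in (2, 0, 1) for c in (0, 1)
]
series = extract_series(images, "time", {"channel": 0, "stage": 0, "z": 0})
```

The same helper yields a Z-series or Channel series by swapping which dimension is `varying` and which values are held in `fixed`.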
  • a one-dimensional image series can be derived from a multi-dimensional dataset.
  • each image in the one-dimensional image series can be a composite of multiple constituent images that are associated with a common value in one image-collection dimension but different values in at least one other image-collection dimension.
  • each image represents a particular time value and is a composite of multiple source images that are associated with the particular time value.
  • Each of the multiple source images for making the composite can come from a one-dimensional Channel series associated with the particular time value, or from a one-dimensional Z-series associated with the particular time value.
  • images collected in a multi-dimensional experiment can be organized in different single or multi-dimensional series depending on the viewing needs of the user.
  • Images can be related to one another based on the associated values in each of the image-collection dimensions. For example, an image is associated with a particular channel, a particular time value, a particular focus position (i.e., Z-value), and a particular stage position. This image can therefore be related to another image that is associated with the same channel or in the same Channel series, another image with the same associated time value or in the same Time series, another image having the same focus position or in the same Z-series, or another image having the same stage position or in the same Stage series.
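Relating images by a shared series can be sketched as a simple query: two images are related along a dimension when they agree on every other dimension's value. The names below are hypothetical illustrations:

```python
# Hypothetical sketch: find the images in the same series as a given
# image along one dimension, e.g., the same field of view at other time
# points (same z, channel, and stage, any time).
def related_along(images, image, dim):
    others = [d for d in ("time", "z", "channel", "stage") if d != dim]
    return [im["id"] for im in images
            if im["id"] != image["id"]
            and all(im[d] == image[d] for d in others)]

images = [
    {"id": f"t{t}z{z}", "time": t, "z": z, "channel": 0, "stage": 0}
    for t in (0, 1) for z in (0, 1)
]
# Images related to t0z0 along the Time dimension.
rel = related_along(images, images[0], "time")
```

Queries like this are what allow the interface to locate, without user effort, the counterpart images a user would otherwise have to find by hand.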
  • Other image relationships by dimension can be defined.
  • multiple datasets can be collected during a multidimensional experiment. For example, multiple trials of an experiment can be performed on multiple samples under the same set of control conditions. Images of a particular sample taken during each trial can form a dataset. The datasets for the multiple trials in the experiment are related datasets. Similarly, one experiment can be carried out under one set of conditions, while a related experiment can be carried out under a different set of conditions. Images taken in each of the experiments can form a respective dataset, and the two datasets can be related datasets. Datasets can be related by experiment conditions, sample type, control parameters, and so on.
  • a user of the imaging system can define each of these image organization levels (e.g., experiment, trial, or dataset) as a corresponding image-collection dimension (e.g., the Experiment, Trial, or Data Set dimensions), and relate the datasets under these different organization levels.
  • When a first dataset contains images of a biological sample taken before a particular catalyst is injected and a second dataset contains images of an identical sample taken after the particular catalyst is injected, the Time series of the two datasets can be compared to see how the catalyst influenced the biological sample over time.
  • Each image in the Time series in the first dataset is related to a corresponding image in the Time series in the second dataset by a common associated time value.
  • Each image taken in a multi-dimensional experiment may contain different objects of interest at various locations, such as particular structures, substances, artificial markers, and so on. These objects of interest can be identified using various image processing techniques. The same object can be present in multiple images, and therefore identified and tagged in each image. Due to the varying sensitivity of different imaging techniques to different objects, the same object may appear in only some of the images of the same sample. Sometimes the same object may evolve and change during the course of the experiment and exhibit different appearances, change locations, or disappear in the images. For example, images from different channels can show different cell structures of the same cell, while images from a Time series can show the progress of a cell cycle. Even though the sample is shown in each image, the objects contained in the different images may not be identical.
  • objects can be identified and tracked in different image series.
  • Objects can include actual physical structures or substances in nature (e.g., a cell, a component of a cell).
  • Regions of interest can be user-defined or application-defined, and can be tracked in different image series as well. For example, a user can select a particular region in one image, and have that region tracked in multiple images in a Time series, Z series, or Channel series.
  • Markers (e.g., arrowheads) or other indicators (e.g., circling, highlighting, and so on) can be used to mark identified objects or regions of interest in the images.
  • a dataset can also include numeric measurement data.
  • measurements can be done on the images or the objects or regions of interest identified in the images.
  • Images, objects, and measurements can bear relationships to one another.
  • multiple images can be related to one another by common objects present in the images and/or similar measurements derived from the objects or images.
  • Measurements derived from a particular set of images and objects can be related to the set of images and objects, and vice versa.
  • a numeric measurement derived from an image can include, for example, a cell count.
  • the measurement (i.e., the cell count, in this example) can be associated with the image from which it was derived and with the individual cells counted in that image.
  • the individual cells may be present in multiple images, and these multiple images can be related to one another, to the individual cells, and to the numeric measurement (i.e., the cell count, in this example).
  • Numeric measurement data can be associated with multiple images in a single image-collection dimension, across multiple image-collection dimensions, in a single dataset, or even across multiple datasets.
  • Particular measurement data can be associated with a single object in a single image, multiple objects in an image, a single object in multiple images, or multiple objects in multiple images, and so on. Examples of measurement data include, size, shape, intensity, gradation, motion, speed, counts, scores, and so on.
  • a dataset can also include other derived correlations and statistics for the images and measurements (e.g., averages of the numeric measurement data across objects, images, datasets, dimensions, and so on).
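As a rough illustration of the relationships described above (images keyed by their image-collection dimensions, objects tracked across images, and measurements related back to both), the dataset organization can be sketched in Python. All class and field names here are hypothetical, not part of the disclosed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    # (time, z, channel, stage) -> image identifier (e.g., a file path)
    images: dict = field(default_factory=dict)
    # object id -> list of image keys in which the object was identified
    objects: dict = field(default_factory=dict)
    # measurement name -> {image key or object id: numeric value}
    measurements: dict = field(default_factory=dict)

    def images_for_object(self, object_id):
        """All images in which a tracked object appears."""
        return [self.images[key] for key in self.objects.get(object_id, [])]

ds = Dataset()
ds.images[(0, 0, "DAPI", 0)] = "img_t0_z0_dapi_s0.tif"
ds.images[(1, 0, "DAPI", 0)] = "img_t1_z0_dapi_s0.tif"
# the same cell identified and tagged in two time points
ds.objects["cell_7"] = [(0, 0, "DAPI", 0), (1, 0, "DAPI", 0)]
# a numeric measurement (cell count) related to a particular image
ds.measurements["cell_count"] = {(0, 0, "DAPI", 0): 42}

print(ds.images_for_object("cell_7"))
```

Such a mapping makes the relationships navigable in both directions: from a measurement to its images, and from an object to every image in which it was tagged.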
  • An integrated data visualization interface is disclosed herein.
  • the integrated data visualization interface exploits the relationships that are known due to the nature of the collection methods and dimensions, to assist in the presentation and navigation of images and data within relevant datasets. Many manual selection and manipulation steps can be automatically and synchronously accomplished without the user having to issue specific commands through the integrated multi-dimensional data visualization interface. Thus, the navigation and viewing of images in the integrated data visualization interface become much more efficient and intuitive compared to visualization interfaces available elsewhere.
  • FIG. 2 shows an exemplary layout of an integrated data visualization interface 200.
  • the integrated data visualization interface 200 includes four components: an image grid 202, a browsable image strip 204, a graph component 206, and a table component 208. Not all components need to be present in the integrated data visualization interface at the same time. Not all components need to be implemented for an integrated data visualization interface to function.
  • the integrated data visualization interface 200 includes one or more dataset viewers 210.
  • the dataset viewer 210 is a container object that can contain a group of components, such as the image grid 202, the browsable image strip 204, the graph component 206, and the table component 208.
  • the image grid, browsable image strip, graph, and table components within each dataset viewer are used to display images and data for a single dataset.
  • Different dataset viewers can be used to display data (images and numeric measurements) from the same dataset or different datasets.
  • multiple instances of a component (e.g., multiple image grids or multiple image strips, and so on) can be displayed in the integrated data visualization interface.
  • two image grids can be displayed in the integrated data visualization interface.
  • multiple instances of a component can exist within their respective dataset viewer windows.
  • two image grids can exist in two different dataset viewers, along with two browsable image strips.
  • Other components and container objects can also be implemented.
  • a frequently used component, such as the image grid 202, can be presented by default. For example, when the user first starts the integrated data visualization interface 200, only an image grid is presented in the interface 200 (e.g., within a dataset viewer window 210 in the interface 200).
  • the user can subsequently invoke the other components, such as the browsable image strip 204, the graph component 206, the table component 208, and so on, as desired.
  • the user can also subsequently invoke multiple instances of each component as needed (e.g., two instances of the image grid) within the integrated data visualization interface 200 (e.g., by opening two dataset viewers 210 each containing an image grid 202, or by opening two standalone image grids).
  • no default component needs to be present in the integrated data visualization interface initially, and the user can select a desired component after the interface is activated. For example, the user can choose to invoke only the table component to view measurement data, without opening the image grid or the image strip. This option is useful when the dataset does not contain many images, and the measurement data is the primary source of information in the dataset. In some situations, image data in a dataset is compressed or archived due to its large size, and it may be more convenient for a user to view the numeric measurement data first before deciding whether to open the image data.
  • the layout of dataset viewer windows and the layout of display components within each dataset viewer can be varied. For example, the user can move and rearrange the different components in the dataset viewer window after the components are invoked.
  • the different components can also be resized manually or automatically based on the content displayed within each component.
  • FIGS. 3 A and 3B show an example of an image grid in two different view modes (300a and 300b). Specifically, FIG. 3A shows the image grid in a Time v. Z view mode (300a), and FIG. 3B shows the image grid in a Time v. Channel (Wavelength) view mode (300b). Other view modes are also possible. Each of the image grids is shown in a respective dataset viewer.
  • the image grid can display images in a selected dataset.
  • the image grid (300a and 300b) is a two-dimensional grid and is associated with two image-collection dimensions.
  • the columns of the image grid are associated with a series of values in a first image-collection dimension and the rows of the image grid are associated with a series of values in a second image-collection dimension.
  • the image grid can also be associated with a single value in a third dimension.
  • the first and the second dimensions can be identical.
  • Each cell of the image grid can contain an image associated with a first value in the first image-collection dimension and a second value in the second image-collection dimension of the image grid.
  • the user can first select a dataset (e.g., using user interface elements 302a and 302b) for the dataset viewer. The user can then specify a view mode (not shown in FIGS. 3A and 3B) for the image grid.
  • the user can also select the total number of rows and columns for the image grid. If the number of rows and columns are not specified, the size of the image grid can be based on the number of sampling points available in the selected dataset for the first and the second image-collection dimensions.
  • Each dimension of the two-dimensional image grid can represent a corresponding image-collection dimension (e.g., Time, Z, Channel, Stage, and so on).
  • each row of the image grid is populated with images from a Time series that is associated with a particular value in the Z-dimension
  • each column of the image grid is populated with images from a Z-series that is associated with a particular value in the Time dimension.
  • Each image presented in a cell of the image grid is associated with the corresponding Z value and time value of that cell.
  • the image grid is also associated with a single value in a third image-collection dimension (e.g., Stage).
  • in this example, the third dimension is Stage, and the associated value in the third dimension is stage value 0, as indicated by element 304a.
  • the images displayed in the image grid are arranged such that the focus positions (i.e., Z values) at which the images were taken are incremented in the vertical direction of the grid, and the time points at which the images were taken are incremented in the horizontal direction of the grid.
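The cell-to-image mapping described above (columns stepping through one dimension, rows through another, remaining dimensions held at a fixed value) can be sketched as follows. The function and parameter names are illustrative, assuming a Time v. Z view mode with Stage as the fixed third dimension:

```python
def grid_cell_key(row, col, row_values, col_values, fixed_dims):
    # Map a grid cell (row, col) to the image-collection coordinates of the
    # image it displays: columns step through Time, rows step through Z,
    # and any remaining dimensions (e.g., Stage) are held at a single value.
    key = dict(fixed_dims)
    key["time"] = col_values[col]
    key["z"] = row_values[row]
    return key

# Time v. Z grid at stage 0: time points across, Z positions down.
key = grid_cell_key(row=2, col=1,
                    row_values=[0.0, 0.5, 1.0],   # Z positions (um)
                    col_values=[0, 10, 20, 30],   # time points (s)
                    fixed_dims={"stage": 0})
print(key)  # {'stage': 0, 'time': 10, 'z': 1.0}
```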
  • the user can specify a coarser resolution (or step intervals) for the time grid so that only a subset of the images in the relevant image series is presented.
  • each column of the image grid corresponds to a value in the Time dimension
  • each row of the image grid corresponds to a value in the Channel dimension.
  • Each image presented in a cell of the image grid is associated with a corresponding time value and a corresponding channel value for the cell.
  • the image grid in FIG. 3B is also associated with a single value in the third dimension (e.g., the Stage dimension).
  • the images in the image grid are arranged so that the images' associated time values are incremented along the horizontal direction.
  • the values in the Channel dimension do not have any required order, and therefore can be arranged in any predetermined order in the image grid.
  • the image grid can also be presented in a Channel v. Z mode, a Multi-Channel mode, a Multi-Time mode, or a Multi-Z mode.
  • the image grid can be populated row by row, or column by column, with images that are associated with incremental values in the image-collection dimension.
  • Other alternatives such as a Stage v. Time mode, a Stage v. Channel mode, a Stage v. Z mode, and a Multi-Stage mode can also be implemented.
  • View modes with other image-collection dimensions can also be implemented.
  • during image processing, a copy of each image in a Time series can be saved after each of a set of image processing steps, and the processing history of the images can be defined as a new image-collection dimension (e.g., the History dimension).
  • the image processing steps can include, for example, background subtraction, pixel manipulation, thresholding (i.e., removing pixels meeting one or more thresholds from an image), sharpening, blurring, binarization, edge detection, and so on.
  • a Time v. History view mode can be defined, where each column of the image grid displays the images from a corresponding History series that is associated with each time value in the Time dimension.
  • the image shown in each cell of the image grid can be a single image that is associated with the values represented by the cell in the two image-collection dimensions of the image grid. If the image grid is also associated with a value in the third image-collection dimension, the single image is also associated with that single value in the third image-collection dimension (e.g., Stage). For example, in FIG. 3B, each image is a single image associated with a single time value, a single channel value, and a single stage value.
  • each image shown in the image grid can also be a representative image or a composite image of multiple images that are associated with the same values in the first and the second dimensions of the image grid.
  • in FIG. 3A, at each time value and Z-position, images from multiple channels (e.g., the Channel series associated with the time value and Z-position) can be available in the dataset.
  • These images from the multiple channels can be combined to produce a single composite image, and presented in a cell of the image grid.
  • For example, each image shown in FIG. 3A can be a composite (e.g., a true color 24 bit or 48 bit image) of a Channel series associated with the same time value and the same Z-position.
  • images from multiple Z-positions can be available in the dataset.
  • a representative image can be selected or generated for display in the cell.
  • each cell of the image grid can display an image associated with a mid-point value in the Z dimension.
  • the criteria for selecting the representative image can be provided by a user, for example, through a menu command. The criteria can also be defined by the system based on the properties of the images in each particular image- collection dimension.
  • neither composites nor representative images need to be selected or generated for display in the image grid. Instead, all images in a dimensional series (e.g., Z series, Channel series, Time series, etc.) can be presented in a cell of the image grid. Continuing with the FIG. 3B example, images in an entire Z series that are associated with a common Time value and a common Channel value can be displayed in a corresponding cell of the image grid as a cycled movie sequence.
  • the movie sequence is automatically cycled through.
  • the movie sequence is played, for example, when the mouse pointer is hovered over the image cell or a play control is invoked in the viewer interface.
  • a horizontal slider (e.g., 306a and 306b) and a vertical slider (e.g., 308a and 308b) can be used to scroll through the columns and rows of the image grid.
  • a third dimension can also be selected for the image grid (e.g., using a dropdown menu). For example, the user can select a Channel dimension or Stage dimension as the third dimension for an image grid in the Time v. Z view mode. The user can then specify a particular value for the third dimension (e.g., from a dropdown menu).
  • Images simultaneously displayed in an image grid are related in at least one image-collection dimension. Therefore, if a user visually manipulates one image in the image grid (e.g., zooming into the image, enhancing the image, or marking an object in the image, etc.), it is likely that the user would find it useful to have the same image manipulation applied to other images in the image grid as well.
  • the desirability of coordinated image manipulation across multiple images in the image grid may depend on the nature of the visual manipulation and the associated dimensions of the image series displayed in the image grid. For example, spatial positions in one image naturally relate to spatial positions in other images that are in the same Time, Channel, and Z series. Therefore, spatial manipulations (e.g., zoom and pan operations) applied to one image can be suitably applied to other images in the same Time, Channel, or Z series. Although applying coordinated spatial manipulations to images from different stages may not be sensible in some cases, it may still be meaningful in others. For example, if the sample is machine plated, samples at different stage locations resemble one another, and coordinated spatial manipulation can still be meaningful.
  • spatial manipulations e.g., zoom and pan operations
  • all images shown in the image grid can be automatically zoomed to the same level.
  • when the cursor (e.g., controlled by a track-pad or mouse) is hovered over a particular zoomed image in the image grid, an un-zoomed version of the image can optionally be displayed, indicating the zoomed region.
  • if the user then pans the zoomed image to the left, all zoomed images in the image grid can be automatically panned to the left by the same amount.
  • the user may restore one of the images to its original size and center, and all images in the image grid are automatically restored to their original sizes and centers as well.
  • the automatic and synchronized application of visual manipulations across multiple images in the image grid allows the user to maintain a consistent view of the images in the image grid in an effortless and error-free manner.
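One way to realize the synchronized manipulation described above is to keep a single shared view state (zoom level and pan offset) that every image in the grid renders from, so that a zoom or pan applied to any one image updates all of them at once. The following sketch is illustrative, with hypothetical names:

```python
class GridView:
    # Shared view state for all images in the grid: one zoom level and one
    # pan offset, applied uniformly so manipulations stay synchronized.
    def __init__(self, n_images):
        self.zoom = 1.0
        self.pan = (0, 0)
        self.n_images = n_images

    def zoom_all(self, factor):
        self.zoom *= factor
        return [self.zoom] * self.n_images  # every image gets the same zoom

    def pan_all(self, dx, dy):
        self.pan = (self.pan[0] + dx, self.pan[1] + dy)
        return [self.pan] * self.n_images   # every image gets the same offset

    def reset(self):
        # restoring one image restores all images to original size and center
        self.zoom, self.pan = 1.0, (0, 0)

view = GridView(n_images=6)
print(view.zoom_all(2.0))    # all six images zoom to the same level
print(view.pan_all(-50, 0))  # all six images pan left by the same amount
```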
  • FIG. 3A and FIG. 3B show an example implementation of a zoom and pan control (310a and 310b).
  • the zoom and pan control 310 is a semi-transparent window showing the entirety of an image that is displayed in the image grid cell underneath the semi-transparent window.
  • the semi-transparent window includes a highlighted area indicating a section of the full image that is currently visible in the image grid cell.
  • a user can optionally resize the highlighted area (e.g., by dragging a corner or an edge of the highlighted area) to change the current zoom level of the image grid.
  • the resizing can be with a fixed aspect ratio or variable aspect ratio.
  • the user can also move the highlighted area within the window (e.g., by clicking and dragging the highlighted area within the semi-transparent window), so that the portion of the image that is visible in the image grid cell matches the highlighted section in the full image.
  • the user can select a different image in the image grid and have the zoom and pan control display a full image of the currently selected image in the image grid.
  • Some image manipulations are suitable for an image in one particular dimensional series, but may not be suitable for another image in another dimensional series, even if the two images are related in one or more image-collection dimensions.
  • for example, some visual manipulations (e.g., background subtraction, coloring based on intensity variations, etc.) may be suitable for images in one dimensional series (e.g., a time series associated with a first channel value), but not for images in another dimensional series (e.g., a time series associated with a second channel value).
  • the image viewer can intelligently determine the appropriate set of images to which to apply the visual manipulations based on the nature of the visual manipulation and the properties of the image series.
  • the same background subtraction operation can be automatically applied to all images in the second row, but not the images in the other rows.
  • the reason for this selective application of the background subtraction operation is that the operation is only suitable for the images in the same channel.
  • the background intensities for images taken from different channels are typically different because different optical conditions were used for the different channels. It would only be appropriate to apply the same background subtraction operation for images in the same channel.
  • the selective application of the background subtraction operation is used only if the user specifies the background intensity for the channel. If a generic command is used by the user for background subtraction, and the image viewer is able to determine the appropriate background level for each channel without user input, then the background subtraction operation can be applied to all images in each channel according to the appropriate background levels for each channel.
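The selective application described above can be sketched as follows: if the user supplies a background level for one channel, only that channel's images are modified; if the viewer can determine a level for each channel without user input, every channel is corrected with its own level. Function and parameter names are hypothetical:

```python
def subtract_background(grid, channel, level=None, per_channel_levels=None):
    # `grid` maps (channel, time) -> a list of pixel intensities.
    out = {}
    for (ch, t), pixels in grid.items():
        if per_channel_levels is not None:
            bg = per_channel_levels[ch]       # viewer-determined per-channel level
        elif ch == channel and level is not None:
            bg = level                        # user-specified level, one channel only
        else:
            bg = 0                            # other channels left untouched
        out[(ch, t)] = [max(p - bg, 0) for p in pixels]
    return out

grid = {("FITC", 0): [100, 120], ("DAPI", 0): [80, 90]}
# user-specified level: applied only to the FITC images
print(subtract_background(grid, "FITC", level=50))
# viewer-determined levels: applied to every channel with its own level
print(subtract_background(grid, None, per_channel_levels={"FITC": 50, "DAPI": 30}))
```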
  • the integrated data visualization interface can implement a variety of image display controls and operations which can be used to visually manipulate the appearance of the images in the image grid.
  • Examples of such display controls and operations include selection or identification of region(s) of interest (either manually selected or automatically detected), pseudocolor display (e.g., grayscale or color display to visualize intensity gradation in the image), image zoom and pan, intensity scales, spatial scales, calipers, overlaying object identifiers, markers or text, and so on. Invocation of each of these operations and display controls can be registered by the data visualization interface, and synchronously applied to all or a selected subset of related images within the image grid.
  • the integrated data visualization interface can include a toggle button or selection menu that allows the user to set up when and which operations are to be applied across which series of related images.
  • previously applied operations and visual manipulations can be automatically applied to a new set of images that subsequently occupies the image grid (e.g., by replacing the original set of images). This can be done when the new set of images is related to the previous set of images in one or more image-collection dimensions.
  • the automatic application of previous operations and visual manipulation steps to a new set of images in the image grid is a useful feature because it provides a logical continuity in the user's image viewing experience.
  • other user manipulations to the images can also be recorded and automatically applied to the new set of images that are subsequently displayed in the image grid.
  • the zoom level and the zoom location can also be recorded.
  • the new set of images, associated with the same time and Z values but with the new channel value, is automatically scrolled into view in the image grid.
  • the same manipulation (i.e., the same zoom location and zoom level) can be automatically applied to the new images in the image grid.
  • the user can selectively switch on or off such automatic application of the previous operations when images in the image grid are replaced.
  • a toggle key can be used to allow the user to selectively switch on or off this function for an extended period of time.
  • Other image processing and manipulations can be recorded and applied to the new images in the image grid as well.
  • the user has not altered the view mode of the image grid, so operations (e.g., scroll, pan, zoom, etc.) that have been applied to the previous set of images in the image grid can be easily applied to the current set of images that are just loaded into the image grid.
  • operations e.g., scroll, pan, zoom, etc.
  • the view mode of the image grid does not change if the user simply specifies a new value for a dimension other than the first and the second dimensions of the image grid, even though it does change the set of images displayed in the image grid.
  • if the view mode does not change, operations previously applied to the images in the image grid can be applied to the images subsequently occupying the image grid as well.
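The recording-and-replay behavior described above can be sketched as a simple operation log: each view operation (scroll, zoom, pan, etc.) is recorded as it is applied, and the whole log is replayed against the new set of images when the grid contents are swapped under an unchanged view mode. All names are illustrative:

```python
class OperationLog:
    # Records view operations so they can be replayed on a new image set.
    def __init__(self):
        self.ops = []

    def record(self, name, **params):
        self.ops.append((name, params))

    def replay(self, apply_fn):
        # Re-apply every recorded operation, in order, to the new images.
        for name, params in self.ops:
            apply_fn(name, **params)

log = OperationLog()
log.record("zoom", level=2.0)
log.record("scroll", dimension="time", offset=12)

applied = []
log.replay(lambda name, **p: applied.append((name, p)))
print(applied)  # the recorded operations, replayed in order
```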
  • the user may select a different view mode for the image grid, such that one of the two dimensions of the image grid is changed.
  • Some of the previous operations may still be applicable and can be automatically applied to the new set of images in the image grid under the new view mode. For example, if only one of the first and second dimensions of the image grid is changed, then automatic scrolling can still be applied to the unchanged dimension.
  • suppose the view mode of an image grid was originally Time v. Z, and the user has scrolled the image grid to reveal images associated with a particular time range. Further suppose that the user decides to change the view mode from Time v. Z to Time v. Channel.
  • the image grid can be updated with a new set of images that are associated with values in the Time and Channel dimensions, and can be automatically scrolled to the time range that was previously shown in the image grid.
  • if the view mode of the image grid is altered such that both the first and second dimensions of the image grid are changed, then the images currently shown in the image grid are not related to the images previously shown in the image grid by any dimension. In such situations, no automatic application of previous operations or visual manipulations needs to be carried out.
  • the user can also open multiple image grids in the integrated data visualization interface.
  • the user can open multiple dataset viewers in the integrated data visualization interface, and open an image grid within each of the dataset viewers.
  • the image viewers can display data (images and measurement data) from the same dataset or different datasets.
  • the view modes of the multiple image grids can be the same or different from one another.
  • the set of images loaded in each image grid can also be the same or different from one another; or some of the images may overlap with one another.
  • Images in different image grids can be related by channel, by spatial positions, by time, or by the objects identified in the images, for example.
  • the relationships can be automatically determined by the integrated data visualization interface, or be specified by the user.
  • a linking option interface can be implemented, and the user can specify, through the linking option interface, whether to link two image grids (or the dataset viewers containing the image grids), and how operations applied in one image grid are to affect other image grids.
  • the coordinated image presentation and manipulation between two image grids that are in the same view mode are similar to those applied to images within the same image grid. For example, if two image grids are both in the Time v. Z view mode but are associated with different stage values, and the user scrolls to a particular position in one image grid and visually manipulates an image in that position, then not only is the same visual manipulation automatically and synchronously applied to all images in the same image grid, the same scrolling and visual manipulation are also automatically and synchronously applied to the corresponding images in the other image grid.
  • if an operation (e.g., a background subtraction) is applied only to a subset of images (e.g., images in a particular channel) in a first image grid, the operation is only applied to that subset of images in the first image grid, and to a corresponding subset of images (e.g., images in the same particular channel) in the second image grid.
  • the multiple image grids in an integrated data visualization interface do not have to display images from the same dataset. Images from a different dataset can be displayed in each image grid.
  • the coordinated image manipulation and visualization enable comparison of related images, either within a dataset or across multiple datasets, in a fast, easy, and efficient manner.
  • the user can turn the linking between image grids on or off at will. For example, if a user finds an interesting section in the time dimension in a first image grid and then opens a second image grid, and if linking was already on, the second image grid automatically scrolls to the same section in the time dimension. The user can manipulate either image grid, and have the views in both image grids update at the same time to provide an analogous view of the data in the two image grids. If the linking is off (or is turned off) when both image grids are present in the data visualization interface, for example, in two dataset viewer windows, the user can manually scroll the second image grid to a different section in the time dimension. The user can then turn on the linking.
  • all subsequent actions applied to images in one image grid would be automatically and synchronously applied to images in the other image grid. For example, if the user modifies a channel's intensity scaling in one image grid, the same scaling will be applied to the corresponding channel in the other image grids. As another example, if the user marks an object in an image in the first image grid, the same object will be marked in other images in the same image grid, and in the images in the second image grid.
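The on/off linking behavior described above can be sketched as follows: when linking is on, an operation applied to one grid is broadcast to all linked grids; when off, each grid is manipulated independently. Names are hypothetical:

```python
class LinkedGrids:
    # Each grid keeps a log of the operations applied to it; `linked`
    # controls whether a new operation is broadcast to all grids.
    def __init__(self, n_grids):
        self.logs = [[] for _ in range(n_grids)]
        self.linked = True

    def apply(self, grid_index, operation):
        targets = range(len(self.logs)) if self.linked else [grid_index]
        for i in targets:
            self.logs[i].append(operation)

grids = LinkedGrids(2)
grids.apply(0, "scroll:time=12")  # linked: both grids scroll to the section
grids.linked = False
grids.apply(1, "zoom:2x")         # unlinked: only the second grid zooms
print(grids.logs)  # [['scroll:time=12'], ['scroll:time=12', 'zoom:2x']]
```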
  • the image grid treats each dataset as one continuous block of data. For example, if a dataset contains 100 time points, the image grid treats the data as if there are images for all 100 time points at every stage, channel, and focus position in the dataset. In reality, some of these images may not be present. For example, some channels may be collected at a higher time frequency than others. For example, if channel 1 is collected at every time point, channel 2 may be collected only once every 10 time points.
  • the image grid can compensate for the missing images, typically by displaying a nearby image from the dataset (e.g., an image acquired at the closest time point). As another example, images in some channels may be acquired for only a single focus position, while images in other channels are collected at multiple focus positions.
  • the Z dimension of the image grid may be filled by repeating images for some channels that do not have sufficient Z resolution.
  • Other techniques for representing unavailable images may be developed. For example, interpolation between two or more nearby images can be used to generate images for a particular value that is missing from the dataset.
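The nearest-neighbor compensation described above can be sketched as a simple lookup: when the image at the requested time point is missing (e.g., a channel collected only every 10 time points), the image acquired at the closest available time point is displayed instead. This is an illustrative sketch, not the disclosed implementation:

```python
import bisect

def nearest_available(requested_t, available_ts):
    # `available_ts` is the sorted list of time points at which this channel
    # was actually collected; return the closest one to the requested point.
    i = bisect.bisect_left(available_ts, requested_t)
    candidates = available_ts[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda t: abs(t - requested_t))

# channel 2 collected only once every 10 time points
available = [0, 10, 20, 30]
print(nearest_available(14, available))  # -> 10
print(nearest_available(16, available))  # -> 20
```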
  • the images displayed in the image grid are ordered and represent evenly distributed values in the two dimensions of the image grid. For example, a time point 5s (5 seconds) is displayed next to a time point 6s, which is next to a time point 7s, and so on.
  • the resolution (or sampling rate) can be specified in either the horizontal direction, the vertical direction, or both.
  • for example, in the horizontal direction the resolution can be every 1 second, and in the vertical direction the resolution can be every 100 seconds. Thus, adjacent images in each row were taken 1 second apart in time, while adjacent images in each column were taken 100 seconds apart. This arrangement is useful, for example, when the user wishes to skip a big section of image data that shows slow variations.
  • the image grid can provide easily accessible options to give the user better views of the image data.
  • a toggle button can be implemented for the user to switch between a single image view and multiple image view.
  • a control button can be implemented to maximize a selected image or the image grid, such that the selected image or the image grid fills the screen.
  • the image grid can also go into a movie mode such that images in each dimension of the image grid are played in a movie sequence.

Predictive Caching to Improve Responsiveness of the Image Grid
  • the number of images collected in a multi-dimensional experiment can be rather large, sometimes in the tens of thousands, or more. Such large numbers of images may easily exceed available RAM on a currently available computer system.
  • retrieving images from a file system is often slow and impedes the user from moving smoothly through the data.
  • the image grid alleviates these issues by anticipatorily caching some images based on a prediction on when these images will likely be accessed or manipulated.
  • the image grid does not load all images from the dataset into memory.
  • the image grid selectively pre-loads only images that are likely to be displayed soon.
  • the integrated data visualization interface determines the images that the user is likely to view next based on the current state of the image grid, and the history of the user actions. For example, if the user is scrolling to the right in the Time dimension, then images for the later time points can be preloaded. In another example, if the user is changing the value of the third dimension of the image grid incrementally, then images associated with the next value in the third dimension can be preloaded. Furthermore, if the user is viewing images within a particular range in the image grid, then other images associated with the same range can be preloaded. User behavior can be analyzed based on past action histories to improve the accuracy of the predictions.
  • a pattern can be identified from the past action histories, such as a repeated sequence of viewing options (e.g., view modes) the user has selected in the past, and/or the most frequently accessed datasets or image series; images or image series that are frequently accessed can then be cached or preloaded.
  • Various data analysis methods such as machine-learning techniques, statistical analysis, heuristics, neural-network analysis and so on, can be utilized to identify patterns in user actions to determine the user's likely viewing needs and use the determined viewing needs as the basis for the caching and preloading of images in anticipation of such viewing needs.
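A minimal form of the anticipatory preloading described above can be sketched as follows: if the user is scrolling right through the Time dimension, the next few time columns beyond the visible range are preloaded. The function and parameter names are hypothetical:

```python
def predict_preload(current_cols, scroll_direction, all_cols, lookahead=2):
    # current_cols: time points currently visible in the grid.
    # Returns the time points whose images should be preloaded next.
    if scroll_direction == "right":
        last = max(current_cols)
        return [c for c in all_cols if last < c <= last + lookahead]
    if scroll_direction == "left":
        first = min(current_cols)
        return [c for c in all_cols if first - lookahead <= c < first]
    return []  # no clear scrolling pattern: preload nothing

all_time_points = list(range(100))
print(predict_preload([5, 6, 7], "right", all_time_points))  # [8, 9]
print(predict_preload([5, 6, 7], "left", all_time_points))   # [3, 4]
```

A fuller implementation could weight these predictions by the action-history patterns mentioned above, but the scroll-direction heuristic captures the core idea.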
  • One technique is to automatically adjust the number of pixels and then sample the image based on the zoom factor and the memory available. As the user adjusts the zoom, or position of the view, new pixel data will be loaded for the display as necessary. In this way, no new data set needs to be created but full resolution images are available for viewing.
  • Another technique is to treat the large image as a stage series composed of smaller images in a grid arrangement. As data is loaded from the file the appropriate selection is loaded into RAM. Only the loaded portion is used for display and analysis. This prevents memory limitations from interfering with both display and analysis.
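The tiling technique described above amounts to computing which tiles of the large image overlap the current viewport, and loading only those into RAM. A sketch with illustrative names:

```python
def visible_tiles(view_x, view_y, view_w, view_h, tile_size):
    # Treat the large image as a grid of tile_size x tile_size tiles and
    # return the (row, col) indices of tiles overlapping the viewport.
    first_col = view_x // tile_size
    last_col = (view_x + view_w - 1) // tile_size
    first_row = view_y // tile_size
    last_row = (view_y + view_h - 1) // tile_size
    return [(r, c) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]

# a 512x512 viewport at (600, 100) over 512-pixel tiles spans four tiles
print(visible_tiles(600, 100, 512, 512, 512))
```

Only the returned tiles need to be resident in memory for display and analysis; the rest of the image stays on disk.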
  • the integrated data visualization interface can include a second component for navigating the images in a multi-dimensional experiment database.
  • This navigation component can be implemented in the form of one or more browsable image strips.
  • An image strip can be a one-dimensional strip or a two-dimensional grid.
  • a one-dimensional image strip can represent values in a single image-collection dimension.
• This image-collection dimension can include, for example, an Experiment dimension, a Data Set dimension, a Stage dimension, a Channel dimension, a Z-dimension, a Time dimension, or other user or system defined dimensions (e.g., a History dimension, an Experiment Condition dimension).
  • Each cell of the image strip represents a value in the associated image-collection dimension of the image strip.
  • the associated image-collection dimension of an image strip can be an image- collection dimension within a dataset, or an image-collection dimension going beyond a particular dataset.
  • an image strip can represent a Data Set dimension, and each cell of the image strip represents a different dataset.
  • an image strip can represent an Experiment dimension, and each cell of the image strip can represent datasets from a different Experiment.
  • each level of a hierarchical file system for storing the multi-dimensional image data can be associated with a browsable image strip, and each cell of the image strip can represent a data entity (e.g., dataset, image series, or image, etc.) on that level.
  • Each cell of the image strip can include a thumbnail image. Thumbnails are reduced-size versions of images, used to help in recognizing and organizing the images, for example.
  • the thumbnail image can be a representative or composite of multiple source images. For each thumbnail, the source images are all associated with the value represented by the cell holding the thumbnail.
  • each cell of the image strip represents a particular time value and contains a thumbnail image representing that time value.
  • the thumbnail image can be a representative image (e.g., image associated with a mid-point value) selected from, for example, a Z series associated with the time value.
  • the thumbnail image can be a composite (e.g., a full color image) of the Channel series associated with the time value.
  • each cell of the image strip represents a particular channel value and contains a thumbnail image representing that channel value.
  • the thumbnail can be a representative image selected from, for example, a Z series or a Time series that is associated with the particular channel value.
  • the thumbnail image can be a composite (e.g., a 3D reconstruction) of the Z series associated with the particular channel value.
• the thumbnail can be the Time series or Z series being displayed consecutively in the cell. The thumbnail images provide visual cues as to the images associated with a particular value in a given dimension, and therefore help the user to locate the relevant image data in a database quickly.
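The representative and composite thumbnails described above can be sketched as follows. This is a simplified illustration: `mid_z_representative` and `channel_composite` are hypothetical helpers, and images are modeled as flat intensity lists rather than real pixel buffers.

```python
def mid_z_representative(z_series):
    """Pick the image at the mid focus position as the representative."""
    return z_series[len(z_series) // 2]

def channel_composite(channel_images):
    """Overlay per-channel intensities into one composite pixel list.
    The composite keeps the per-pixel maximum across channels,
    a simple overlay rule."""
    return [max(pixels) for pixels in zip(*channel_images)]

z_series = ["z0", "z1", "z2", "z3", "z4"]
print(mid_z_representative(z_series))        # → z2
dapi, gfp = [10, 200, 30], [90, 40, 120]
print(channel_composite([dapi, gfp]))        # → [90, 200, 120]
```

In practice the composite would map each channel to a display color before merging; the maximum-intensity rule stands in for that step here.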
  • FIG. 4 shows an example image strip 408 in the integrated data visualization interface.
  • the integrated data visualization interface includes a dataset viewer 402, and the dataset viewer 402 includes an image grid 414 and the image strip 408.
  • the image strip 408 is associated with a view mode selection element 410 which allows the user to specify the associated dimensions of the image strip. In this particular example, only four options are provided: Stage, Time, Z, and Data Set.
  • the currently selected dimension for the image strip 408 is Stage, and therefore, each thumbnail shown in the image strip represents a particular stage position in the currently selected experiment and dataset.
  • each thumbnail (412a-d) is a composite image of a Channel series associated with the particular stage position represented by the thumbnail.
  • the user can select a different dimension (e.g., the Data Set dimension, or the Time dimension, etc.) for the image strip, and all the thumbnails in the image strip 408 will be replaced by the thumbnails representing values in the newly selected dimension.
  • the browsable image strip can be a two dimensional grid resembling the image grid under different view modes.
  • a Time v. Z image strip allows easy navigation within both the Time and the Z dimensions.
• a Channel v. Experiment image strip allows navigation in both the Channel dimension and the Experiment dimension. A user selecting a thumbnail in the Channel v. Experiment image strip can quickly locate all the images associated with a particular channel within a particular experiment.
• the browsable image strip can be a two-dimensional grid representing stage positions, such that the images are arranged to reflect the spatial arrangement of the stage locations. For example, if an image (A) is collected at a stage position to the right of another image (B), A will appear to the right of B in the browsable image strip. In some cases the images are collected in a grid arrangement, and so the browsable image strip can represent this arrangement.
• Two-dimensional image strips include, for example, a Data Set v. Experiment image strip, a Data Set v. Stage image strip, a Data Set v. Time image strip, a Data Set v. Z image strip, a Stage v. Time image strip, a Stage v. Z image strip, a Stage v. Channel image strip, a Time v. Z image strip, a Channel v. Time image strip, a Z v. Channel image strip, as well as others not specifically called out herein.
  • the image strip can be used to select images to display in the image grid that is shown in the same dataset viewer.
  • the effect of the selection may be different depending on the view mode and other associated image-collection dimensions of the image grid.
• if the selected thumbnail represents a value in an image-collection dimension other than the first and the second dimensions of the image grid, all images in the image grid are replaced with images associated with the selected value. For example, if the image grid is in a Channel v. Z view mode, and the image grid is associated with a particular time value in a third dimension, when a user selects a thumbnail in the Time image strip, the image grid is updated such that images from the Channel and Z series that are associated with the newly selected time value populate the image grid. In another example, for the same Channel v. Z image grid, if the user selects a thumbnail in the Data Set image strip, the image grid is updated such that images from the Channel and Z series in the newly selected dataset populate the image grid.
• the image grid is simply scrolled such that the images associated with the selected value become visible in the image grid. For example, for a Channel v. Z image grid that is associated with a particular time value in the third dimension, if the user selects a particular channel value in a Channel image strip, the image grid automatically scrolls horizontally such that the images associated with the selected channel value become visible in the image grid.
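The two selection behaviors above (repopulate the grid vs. merely scroll it) can be sketched as a single dispatch rule. This is an illustrative sketch; `apply_strip_selection` and the grid dictionary layout are assumptions, not the disclosed implementation.

```python
def apply_strip_selection(grid, strip_dimension, value):
    """If the selected value is in a dimension other than the grid's two
    axes, repopulate the grid with images for that value; if it is one of
    the grid's own axes, merely scroll so the value becomes visible."""
    if strip_dimension not in (grid["dim1"], grid["dim2"]):
        grid["fixed"][strip_dimension] = value   # new third-dimension value
        return "repopulate"
    grid["scroll_to"] = (strip_dimension, value)
    return "scroll"

# A Channel v. Z grid fixed at a particular time value in a third dimension.
grid = {"dim1": "Channel", "dim2": "Z", "fixed": {"Time": 0}, "scroll_to": None}
print(apply_strip_selection(grid, "Time", 7))      # → repopulate
print(apply_strip_selection(grid, "Channel", 2))   # → scroll
```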
  • the selection of an image in the image strip may not cause any immediate change in the image grid.
  • a user may be required to use a particular manner of selection (e.g., double click, or hold down a hot key, or invoke a menu option) to affect the image grid.
• the updating of the image grid due to the selection in the image strip may be further accompanied by the updating of the graphs and the tables displayed alongside the image grid, which are further described elsewhere herein.
• selection of thumbnails in the image strips can affect the display of images in the image grid. Similarly, selection of an image in the image grid may cause changes in the image strips as well. In order to provide image navigation that is relevant under the circumstances, the thumbnail images in the image strip can be tailored according to the images shown or selected in the image grid.
  • a dataset viewer currently displays a Z image strip and a Channel v. Z image grid.
  • the Channel v. Z image grid is associated with a particular time value in a third dimension.
  • the images currently displayed in the image grid in the dataset viewer are therefore from the Z and Channel series that are associated with the particular time value.
  • the thumbnail images in the Z image strip can be derived from source images that are associated with the same time value specified for the third dimension of the image grid. If the user decides to select a different time value for the third dimension of the image grid, a new set of thumbnail images can be derived from source images that are associated with the newly selected time value. This change in thumbnail images is helpful for a user to better navigate within the images that are of interest to him or her at the time.
  • the thumbnails in the Z image strip can be updated such that they are derived from source images that are associated with the currently selected time value.
• various methods can be used to generate thumbnails to represent a large number of images, such as a single thumbnail to represent an entire dataset in a Data Set image strip.
  • the methods used can include, for example, dimensional compression, determining a predictive representative, obtaining a calculated representative, user selection, or a combination of one or more of these methods.
• each thumbnail can be derived from multiple source images in the Channel and/or the Z series that are associated with a particular time value.
  • Each thumbnail can therefore be a full color composite image showing visual information from all channels, or a three-dimensional reconstruction showing visual information from all focus positions.
• the predictive representative method can be implemented, for example, by choosing a source image that the user is likely to find relevant under the circumstances to represent a value in a given image-collection dimension. For example, multiple images in the Z-series can be associated with a particular time value, and the image that is associated with the mid-value in the Z-dimension can be selected as a representative because that image is likely to give the most complete picture of the sample as compared to other images.
  • the user can define which image in a given dimension is likely to be the best representative. For example, the user can specify a particular time value that he or she is interested in, and have thumbnails in all image strips (other than the image strips having Time as one of its dimensions) be based on source images associated with that particular time value.
  • the predictive representative can be selected based on the associated values of the currently displayed images in the image grid. For example, if all images displayed in the image grid are from a particular channel (e.g., the image grid is associated with a particular channel in a third dimension), it is likely that the user is interested in seeing images from that particular channel at the moment when navigating through the dataset. Therefore, the thumbnails in the image strips (other than the image strips having Channel as one of its dimensions) can be derived from source images that are associated with that particular channel. If the user subsequently changes the view mode and/or the third dimension, the thumbnails in the image strips can be changed correspondingly.
• the predictive representative can be selected based on the associated values of the currently selected image in the image grid. For example, if the currently selected image in a Time v. Channel image grid is associated with a particular Z value in the third dimension, the Time image strip can be populated with thumbnails that are derived from source images associated with the selected channel value or Z value, the Channel image strip can be populated with thumbnails that are derived from source images that are associated with the selected time or Z value, the Z image strip can be populated with thumbnails that are derived from images that are associated with the selected time or channel value, and so on.
  • the calculated representative method can be implemented by, for example, using image content to automatically score the image for fitness as a representative.
  • Some example measures of image content can be the amount of contrast in the image, the amount of high frequency information in the image, the total brightness of the image, and so on.
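The calculated-representative method can be sketched with the content measures just listed. The measures below are crude stand-ins operating on images modeled as flat pixel lists, and the function names are hypothetical.

```python
def contrast(img):
    """Range of intensities as a simple contrast measure."""
    return max(img) - min(img)

def brightness(img):
    """Mean intensity of the image."""
    return sum(img) / len(img)

def high_frequency(img):
    """Sum of absolute neighbor differences as a crude stand-in for
    high-frequency content (detail)."""
    return sum(abs(a - b) for a, b in zip(img, img[1:]))

def best_representative(images, measure=contrast):
    """Score each candidate with the chosen content measure and return
    the highest-scoring image as the representative."""
    return max(images, key=measure)

blurry = [100, 101, 100, 102]   # low contrast, little detail
sharp  = [10, 240, 15, 230]     # high contrast, lots of detail
print(best_representative([blurry, sharp]) is sharp)   # → True
```

A production scorer would operate on two-dimensional pixel arrays and might combine several measures; the selection logic is the same.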
  • a user-selected representative method can also be implemented by, for example, allowing the user to tag or select a particular image in each image series or multiple series as the representative.
  • the user-selected representative method can also allow a user to select a representative for one cell of an image strip, and have the integrated data visualization interface automatically select images with similar qualities as the representative for the other cells in the image strip.
  • the representative selection methods can be combined in generating the thumbnails for the image strips.
  • different representative selection methods can be applied to different image strips depending on the particular image-collection dimensions represented by the image strip.
  • the representative in the Channel image strip can be a composite or overlay of multiple channels, and the representative in the Time image strip can be calculated based on intensity, for example.
  • multiple image strips can be present in a dataset viewer window.
  • one image strip can represent the Time dimension
  • another image strip can represent the Z dimension, in an experiment.
  • the image strips concurrently displayed can be correlated to one another.
  • a first image strip can represent a Data Set dimension
• a second image strip can represent a secondary dimension such as the Stage dimension. If the user selects a thumbnail in the Data Set image strip, the thumbnails in the Stage image strip can be updated, where the new thumbnails in the Stage image strip are derived from source images in the dataset represented by the selected thumbnail in the Data Set image strip. In addition, if the initially selected dataset has images associated with 10 different stages, 10 thumbnails can be shown in the Stage image strip. When the user selects a different dataset in the Data Set image strip, which only has images associated with 5 different stages, the Stage image strip can be updated to show only 5 thumbnails representing the 5 different stages in the currently selected dataset.
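The Data Set/Stage coupling in the example above can be sketched as follows. The names are hypothetical and each dataset is reduced to its set of stage positions.

```python
def update_stage_strip(datasets, selected_dataset):
    """When the user picks a dataset in the Data Set strip, rebuild the
    Stage strip with one thumbnail per stage in that dataset."""
    stages = datasets[selected_dataset]["stages"]
    return [f"{selected_dataset}/stage-{s}" for s in stages]

datasets = {
    "plate-A": {"stages": range(10)},   # 10 stage positions
    "plate-B": {"stages": range(5)},    # only 5 stage positions
}
print(len(update_stage_strip(datasets, "plate-A")))   # → 10
print(len(update_stage_strip(datasets, "plate-B")))   # → 5
```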
• the image strips have a reduced set of control or display options relative to the image grids. This helps avoid clutter on a computer screen.
  • the thumbnails can show the full extent of the source images.
  • a zoom control is available to allow the user to zoom into a particular thumbnail (or cause coordinated zooming into all thumbnails) when the thumbnails are too small to view.
  • the Z image strip includes a thumbnail from each Z position.
  • the user can click on each thumbnail and see the entire Time series associated with that Z position cycle in the thumbnail.
  • the thumbnail can be expanded to show each image in the Time series associated with the Z position.
  • the thumbnail can include a play button which the user can invoke to see the Time series play as a movie sequence within the thumbnail.
  • the image strips can also implement a caching mechanism to prepare or load thumbnail images in anticipation of likely user actions.
  • the integrated data visualization interface can also provide a table component and a graph component tool to display numeric measurement data in the dataset.
  • These components can be included with an image grid and/or an image strip in a dataset viewer to display data from a single dataset. These components can be selectively and independently turned on or off.
  • the table and graph components can contain many common features and controls that exist in data management applications used elsewhere, such as spreadsheet functionality.
• the graph component can provide display formats including but not limited to line plots, scatter plots, bar charts, pie charts, histograms, and so on.
  • the table components can display and manipulate tables of numeric measurement data based on user specification.
  • the integrated data visualization interface has access to the data organization schemes and understands the sources of the measurement data.
  • the data displayed by the graph and the table components can be correlated with images and thumbnails displayed or selected in the image grids and the image strips.
  • the table component can show measurement data (e.g., intensity) from all images in the Z-series associated with a particular time value, or images from all Z-series regardless of associated time value.
  • the table can also display statistics of the numeric measurement data (e.g., the distribution of the intensity values among the Z series images).
• the graph component can likewise plot correlations between different sets of numeric measurement data, or plot numeric measurement data against values in various image-collection dimensions.
  • the measurement set can be derived from a single image, a set of images, the entire dataset, a subsection of the dataset, an entire experiment, and so on.
• Each measurement set is associated with a distinct set of objects and images. Keeping the clear association between measurement set, images, and objects minimizes confusion over which data is being displayed and provides for a clear interaction model within the dataset viewer. The user is thereby assured that the information displayed in the various viewer components corresponds to one another.
  • the integrated data visualization interface provides a comprehensive interactive system of viewing and navigating data within a multi-dimensional experiment or multiple related experiments. Once measurements have been made on an image set, the regions of interest/object locations of the measurement are typically known. The source images for the measurement data are also known. This allows the numeric and the image data to respond to user input in concert with one another.
  • the numeric display components (e.g., the table and graph components), in a similar fashion as the image grid and the image strip, can be restricted to display data of the current dataset as selected in the dataset viewer containing the numeric display components.
  • the user can use a numeric display component (e.g., the table or the graph) to select or highlight a particular measurement.
  • the selected or highlighted measurement can be used to adjust the image grid and/or the image strip to display images from which the measurements were made. This can be done without changing the view mode of the image displays.
  • a table component 508 and a graph component 506 are displayed alongside an image grid 502 and an image strip 504.
  • this selection can in turn cause the image grid to display the source image(s) for the measurement or highlight the specific object or pixels used to derive the measurement.
  • the corresponding data point(s) in the graph 506 can also be highlighted (e.g., by a marker 512).
• the graph 506 can be used to select points and objects which are in turn displayed or highlighted in the image grid 502 and in the table 508.
  • the image grid can also be the input point. For example, selecting an object in an image in the image grid can cause the measurements of the object to be highlighted in the table and/or graph.
  • the highlighted items may be one or more rows and/or columns in the table, or one or more points, lines, histogram bins, or bars in the graph, for example.
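One way to realize this bidirectional highlighting is a simple publish/subscribe channel shared by the grid, table, and graph. The `SelectionBus` name and structure below are assumptions for illustration, not the disclosed implementation.

```python
class SelectionBus:
    """Broadcast a measurement selection so the grid, table, and graph
    highlight the same object without knowing about one another."""
    def __init__(self):
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def select(self, measurement_id, origin):
        # Notify every component, including the one the click came from.
        for cb in self.listeners:
            cb(measurement_id, origin)

highlights = {}
bus = SelectionBus()
for name in ("grid", "table", "graph"):
    # Each component records which measurement it should highlight.
    bus.subscribe(lambda mid, origin, n=name: highlights.update({n: mid}))

bus.select("cell-42", origin="table")   # user clicks a table row
print(highlights)                       # all three components show cell-42
```

Real components would translate the selection into their own terms (scrolling an image into view, marking a data point) inside their callbacks.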
  • the graph and the table can be configured to display data that are only calculated from the images currently displayed in the image grid.
  • the graph and the table can also be configured to display data calculated from a larger section of the dataset. These larger sections may include, for example, the entire dataset or some smaller sections of the data set.
  • selection and manipulation of image and measurement data can be synchronized and/or coordinated across multiple dataset viewers containing the multiple image grids.
  • a first dataset viewer includes a first image grid showing a Time series from a first dataset; and a second dataset viewer includes a second image grid showing a Time series from a second dataset.
  • each dataset viewer can further include an image strip, a graph component, and a table component.
  • the image grid in the first dataset viewer can be updated to show images from which the selected measurement data has been derived
  • the graph component in the first dataset viewer can be updated to highlight the selected measurement data
• the image strip in the first dataset viewer can be updated to show the thumbnails related to the images from which the selected numeric measurement data was derived
  • the image grid, the graph component, the table component, and the image strip in the second dataset viewer can be updated accordingly as well.
• the image grid can be updated to show images related to those currently shown in the image grid in the first dataset viewer, and the image strip, the graph component, and the table component can be updated accordingly to reflect the changes in the image grid in the second dataset viewer.
  • Other modes of interactions between dataset viewers are possible.
  • FIG. 6A-6D show a flowchart of an exemplary process 600 for coordinated presentation and manipulation of images and data between various components of an integrated data visualization interface as shown elsewhere herein.
  • FIG. 6A shows how the coordinated presentation and manipulation within an image grid (shown within the dashed box 603) or across multiple image grids (shown within the dashed box 609) can be accomplished.
  • FIG. 6B shows the coordination of interactions between a table and an image grid in a dataset viewer.
  • FIG. 6C shows the coordination of interactions between a graph and an image grid in a dataset viewer.
  • FIG. 6D shows the coordination of interactions between an image strip and an image grid in a dataset viewer.
  • the exemplary process 600 starts when a first plurality of images are displayed in a first two-dimensional image grid, at 602.
  • the first two-dimensional image grid is in a view mode that is defined by a first dimension and a second dimension.
  • the first dimension and the second dimension are selected from a number of image-collection dimensions, such as Time, Stage, Channel, and Focus Position (Z).
• Other image-collection dimensions may be defined as well, as described elsewhere herein.
  • the first plurality of images displayed in the image grid can be selected from a first multi-dimensional dataset, where the dataset includes images of a biological sample taken during a multi-dimensional experiment.
  • Each image in the first plurality of images is associated with a respective value in each of the first and the second dimensions.
  • the first image-collection dimension is Time
  • the second image-collection dimension is Focus Position (Z).
  • Each cell of the image grid represents a particular time value and a particular Z value.
• Each image displayed in the image grid is associated with the particular time value and the particular Z value represented by the cell in which the image is placed. All the images are ordered in the first two-dimensional image grid according to the associated values of the images in the first and the second dimensions of the image grid.
  • the image grid can also be associated with a particular value in a third dimension, and each of the first plurality of images is also associated with that particular value in the third dimension.
  • This third dimension can also be selected from the plurality of image-collection dimensions (e.g., Time, Stage, Channel, and Z).
• Next, user input is received for visually manipulating one of the first plurality of images displayed in the first two-dimensional image grid (604).
  • the user input can include one or more of zooming and panning.
  • the user input can also include one or more other visual manipulation operations, such as thresholding, enhancing, object identification and marking, background subtraction, linear scaling, and so on.
  • views of the first plurality of images that are visible in the first two-dimensional image grid are simultaneously updated by applying the user input to the first plurality of images (606).
• the simultaneous application of the visual manipulation helps maintain a consistent view of the image grid as a user continues to manipulate an image, without requiring the user to manually keep track of the changes in each image.
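The simultaneous update can be sketched as one shared view state applied to every visible cell. This is an illustrative sketch; the names are hypothetical and images are reduced to labels.

```python
def apply_view_to_grid(view, images):
    """Apply one zoom/pan state to every visible image so all cells of
    the grid stay in lockstep as the user manipulates any one of them."""
    return [
        {"name": img, "zoom": view["zoom"], "pan": view["pan"]}
        for img in images
    ]

# User zooms to 2x and pans in one cell; the same state drives all cells.
view = {"zoom": 2.0, "pan": (30, -10)}
cells = apply_view_to_grid(view, ["t0_z0", "t0_z1", "t1_z0", "t1_z1"])
print(all(c["zoom"] == 2.0 and c["pan"] == (30, -10) for c in cells))  # → True
```

Operations such as thresholding or background subtraction could be added to the shared view state and applied the same way.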
  • the coordinated and synchronous updates can be applied to images across multiple image grids as well.
  • a second plurality of images from a second multi-dimensional dataset is displayed in a second two-dimensional image grid (608).
  • the second multi-dimensional dataset can be the same as the first multidimensional dataset.
  • the second multi-dimensional dataset can also be different from the first multi-dimensional dataset.
  • the same visual manipulation can also be applied to the visible images in the second two-dimensional image grid, such that views of each image that is visible in the second two-dimensional image grid are also updated (610).
• the first two-dimensional image grid and the second two-dimensional image grid are associated with the same first and second image-collection dimensions, and each of the second plurality of images is associated with a respective value in each of the first and the second dimensions.
  • a table can be displayed along with the first two-dimensional image grid (612 in FIG. 6B).
  • the table can include numeric measurement data derived from one or more of the first plurality of images.
  • User input selecting the numeric measurement data in the table can be received (614).
  • the first two-dimensional image grid can be updated to visually emphasize the one or more images from which the selected numeric measurement data was derived (616).
  • the one or more images are scrolled into view in the image grid.
  • the selected numeric measurement data is associated with an object in the one or more of the first plurality of images, and the object can be visually emphasized in the images visible in the first two-dimensional image grid after the user input is received.
  • a graph is displayed (see 618 in FIG. 6C).
  • the graph includes data points representing numeric measurement data derived from one or more of the first plurality of images.
  • User input selecting one or more data points on the graph can be received (620).
  • the first two-dimensional image grid can be updated to visually emphasize the one or more images corresponding to the selected data points (622).
  • a browsable image strip can be displayed along with the first two-dimensional image grid in a dataset viewer (624, in FIG. 6D).
  • the image strip includes a sequence of thumbnail images.
  • the sequence of thumbnail images represents a sequence of values in one of the plurality of image-collection dimensions.
  • User input selecting one of the sequence of thumbnail images is received (626).
  • the first two-dimensional image grid is updated to present a second plurality of images associated with a value represented by the selected thumbnail image in the one of the plurality of image-collection dimensions (628).
  • the second plurality of images that are visible in the first two-dimensional image grid are visually manipulated according to the visual manipulation that was previously applied to the first plurality of images (630).
  • FIG. 7 is a flow diagram for an exemplary process 700 for implementing coordinated interaction between an image strip and an image grid in a dataset viewer.
  • the thumbnail images in an image strip are updated according to the currently selected image in the image grid.
  • the thumbnail images are derived based on source images that are associated with the selected image in one or more image-collection dimensions.
• a first plurality of images are displayed in a two-dimensional image grid (702).
  • a first and a second dimension of the two-dimensional image grid are selected from a plurality of image-collection dimensions including Time, Stage, Channel, and Focus position.
  • the first plurality of images are selected from a first multi-dimensional dataset that include images of a biological sample.
  • Each of the first plurality of images is associated with a respective value in each of the first and the second dimensions of the two-dimensional image grid.
  • the first plurality of images are associated with a common value in a third dimension selected from the plurality of image-collection dimensions and are ordered in the first two-dimensional image grid according to the associated values of the images in each of the first and the second dimensions.
  • a browsable image strip is displayed (704).
  • the browsable image strip includes a plurality of thumbnail images.
• the plurality of thumbnail images can be derived from a first set of source images from a plurality of multi-dimensional datasets and represent a plurality of values in a respective image-collection dimension.
• the respective image-collection dimension can be selected from a second plurality of image-collection dimensions including Time, Channel, Stage, Focus Position, and Data Set.
  • User input selecting one of the images displayed in the two-dimensional image grid is received (706).
  • the plurality of thumbnail images in the browsable image strip is updated (708), and the updated plurality of thumbnail images are derived from a second set of source images from a plurality of multidimensional datasets.
  • the second set of source images is different from the first set of source images.
  • the second set of source images is associated with the respective values of the selected image in one or more of the first and the second dimensions.
  • each of the second set of source images is associated with the respective values of the selected image in both the first and the second dimensions.
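The thumbnail-update rule of process 700 can be sketched as follows, assuming a dataset indexed by (Time, Z) pairs; `thumbnails_for_strip` is a hypothetical helper, not the disclosed implementation.

```python
def thumbnails_for_strip(strip_dim, selected, dataset):
    """Rebuild a strip's thumbnails from source images that share the
    selected image's values in the other dimension."""
    # dataset: dict mapping (time, z) -> image name.
    if strip_dim == "Z":
        t = selected["Time"]            # fix Time at the selected value
        z_values = sorted({k[1] for k in dataset})
        return [dataset[(t, z)] for z in z_values]
    t_values = sorted({k[0] for k in dataset})
    return [dataset[(t, selected["Z"])] for t in t_values]

dataset = {(t, z): f"img_t{t}_z{z}" for t in (0, 1) for z in (0, 1, 2)}
selected = {"Time": 1, "Z": 2}          # the image the user just selected
print(thumbnails_for_strip("Z", selected, dataset))
# → ['img_t1_z0', 'img_t1_z1', 'img_t1_z2']
```

Selecting a different image in the grid would simply re-run the helper with the new values, yielding a new set of source images for the strip.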
  • a dataset can be selected from a Data Set image strip, and the images from the newly selected dataset can be displayed in the image grid, replacing the previous set of images occupying the image grid.
  • the state of the image grid prior to the selection of the new dataset can be used for the new dataset such that the user can easily compare the new set of images with a corresponding set of images previously occupying the image grid.
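Preserving the grid state across a dataset switch can be sketched as follows (a minimal illustration; the state-dictionary keys are assumptions):

```python
def switch_dataset(grid_state, new_dataset_images):
    """Swap in the new dataset's images while preserving the grid's view
    mode, zoom, and scroll position so before/after images line up."""
    preserved = {k: grid_state[k] for k in ("view_mode", "zoom", "scroll")}
    return {**preserved, "images": new_dataset_images}

state = {"view_mode": ("Time", "Z"), "zoom": 1.5, "scroll": (2, 0),
         "images": ["old_a", "old_b"]}
new_state = switch_dataset(state, ["new_a", "new_b"])
print(new_state["zoom"], new_state["view_mode"])   # → 1.5 ('Time', 'Z')
```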
  • a first plurality of images in a two-dimensional image grid is displayed (702).
  • a first and a second dimension of the two-dimensional image grid are selected from a plurality of image-collection dimensions including Time, Stage, Channel, and Focus Position (Z).
  • the first plurality of images are selected from a first multi-dimensional dataset that include images of a biological sample.
  • Each of the first plurality of images is associated with a respective value in the first and the second dimensions.
  • the plurality of images can be associated with a common value in a third dimension selected from the plurality of image-collection dimensions and are ordered in the two-dimensional image grid according to the associated values of the images in each of the first and the second dimensions.
  • a browsable image strip can be displayed (704).
  • the browsable image strip includes a plurality of thumbnail images, and each of the plurality of thumbnail images represents a respective multi-dimensional dataset in a plurality of multi-dimensional datasets.
• User input selecting a thumbnail image displayed in the browsable image strip is received, where the selected thumbnail image represents a second multi-dimensional dataset (710).
  • the second multi-dimensional dataset is different from the first multidimensional dataset.
  • the two-dimensional image grid is updated to display a second plurality of images from the second multi-dimensional dataset (712).
  • the second plurality of images share the associated values of the first plurality of images in the first, second, and third dimensions, and are ordered in the two-dimensional image grid according to the images' associated values in each of the first and the second dimensions.
  • an image grid is associated with a value in a third dimension. If the user selects a thumbnail from an image strip that represents a different value in the third dimension, all the images in the image grid are replaced with images associated with the newly selected value in the third dimension. The state of the image grid prior to the selection of the new value can be preserved for the new set of images such that the user can easily compare the new set of images with a corresponding set of images previously occupying the image grid.
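One way to realize the state preservation described above is to keep the grid's configuration (dimension choices, pinned values, display state such as zoom) in the view object and swap only the backing dataset. The class and field names below are assumptions for illustration, not names from the patent.

```python
# A minimal sketch of preserving image-grid state across a dataset
# switch; images are modeled as dicts keyed by dimension name.
class ImageGridView:
    def __init__(self, dataset, row_dim="time", col_dim="channel",
                 fixed=None, zoom=1.0):
        self.row_dim, self.col_dim = row_dim, col_dim
        self.fixed = dict(fixed or {})   # e.g. {"z": 0}
        self.zoom = zoom                 # display state to preserve
        self.dataset = dataset

    def select_dataset(self, new_dataset):
        # Replace the images but carry over the prior grid state
        # (dimension choices, pinned values, zoom), so the new images
        # land in cells corresponding to the previous ones and can be
        # compared directly.
        self.dataset = new_dataset

    def visible_keys(self):
        # The (row, column) value pairs the grid currently shows.
        pool = [im for im in self.dataset
                if all(im[d] == v for d, v in self.fixed.items())]
        return sorted({(im[self.row_dim], im[self.col_dim])
                       for im in pool})
```

After `select_dataset`, the visible cell layout is unchanged whenever the new dataset covers the same values in the grid dimensions, which is what makes side-by-side comparison of corresponding images straightforward.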
  • a first plurality of images is displayed in a two- dimensional image grid (702).
  • a first and a second dimension of the two-dimensional image grid are selected from a plurality of image-collection dimensions including Time, Stage, Channel, and Focus Position (Z).
  • the first plurality of images can be selected from a multi-dimensional dataset including images of a biological sample. Each image in the first plurality of images is associated with a respective value in each of the first and the second dimensions.
  • the first plurality of images are associated with a first value in a third dimension selected from the plurality of image-collection dimensions and are ordered in the two-dimensional image grid according to the associated values of the images in each of the first and the second dimensions.
  • a browsable image strip can be displayed (704).
  • the browsable image strip can include a plurality of thumbnail images, each of the plurality of thumbnail images representing a respective value in the third dimension.
  • the selected thumbnail image represents a second value in the third dimension, the second value being different from the first value.
  • the two-dimensional image grid can be updated to display a second plurality of images from the multi-dimensional dataset (712).
  • the second plurality of images can be associated with the second value in the third dimension, and each of the second plurality of images corresponds to one of the first plurality of images in associated values in the first and the second dimensions.
  • a plurality of composite images can be generated by overlaying multiple constituent images from multiple focus positions or multiple channels, and each of the plurality of composite images is associated with a respective value in each of the first and the second dimensions.
  • the plurality of composite images can be displayed in the two-dimensional image grid according to the associated values of the composite images in each of the first and the second dimensions.
  • a plurality of composite images can be generated by overlaying multiple constituent images, where each of the plurality of composite images is associated with a respective value in each of the first and the second dimensions, and at least one object present in one of the plurality of composite images is invisible in at least one of the multiple constituent images of the composite image.
  • the plurality of composite images can be displayed in the two-dimensional image grid according to the associated values of the composite images in each of the first and the second dimensions.
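One plausible way to composite constituent images from multiple focus positions, consistent with the property that an object visible in the composite may be invisible in some constituents, is a per-pixel maximum-intensity projection over the Z-stack. This is a hedged sketch of one such overlay, not the patent's prescribed method.

```python
def max_project(stack):
    """Composite a Z-stack by per-pixel maximum intensity.

    `stack` is a list of same-sized 2-D grayscale images (lists of
    rows). An object in focus at only one Z plane appears in the
    composite even though it is invisible (out of focus or absent)
    in the other constituent planes.
    """
    return [[max(plane[r][c] for plane in stack)
             for c in range(len(stack[0][0]))]
            for r in range(len(stack[0]))]
```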
  • a plurality of composite images can be generated by overlaying multiple constituent images from multiple channels.
  • Each of the plurality of composite images is associated with a respective value in each of the first and the second dimensions, and the multiple constituent images include at least a grayscale image and a colored fluorescent image.
  • the plurality of composite images can be displayed in the two-dimensional image grid according to the associated values of the composite images in each of the first and the second dimensions.
  • a plurality of composite images can be generated by overlaying multiple constituent images from multiple channels or multiple focus positions, where each of the plurality of composite images is associated with a respective value in each of the first and the second dimensions.
  • the plurality of composite images can be displayed in the two- dimensional image grid according to the associated values of the composite images in each of the first and the second dimensions.
  • User input modifying a display option for one of the multiple channels or multiple focus positions can be received and the plurality of composite images displayed in the two-dimensional image grid can be updated by independently modifying the constituent images from the one of the multiple channels or focus positions according to the modified display option.
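The independent per-channel modification described above can be sketched as an additive blend in which each channel carries its own display options (here, an assumed pseudocolor and gain); changing one channel's option re-renders only that channel's contribution. The function name, option keys, and blending rule are illustrative assumptions, not details taken from the patent.

```python
def overlay_channels(channels, options):
    """Blend per-channel grayscale images into one RGB composite.

    `channels` maps a channel name to a 2-D intensity image with
    values in [0, 1]; `options` maps the same names to a display
    option of the form {"color": (r, g, b), "gain": float}. Each
    constituent is modified independently according to its own
    display option before the overlay.
    """
    h = len(next(iter(channels.values())))
    w = len(next(iter(channels.values()))[0])
    out = [[(0.0, 0.0, 0.0) for _ in range(w)] for _ in range(h)]
    for name, img in channels.items():
        cr, cg, cb = options[name]["color"]
        gain = options[name]["gain"]
        for r in range(h):
            for c in range(w):
                v = min(1.0, img[r][c] * gain)
                pr, pg, pb = out[r][c]
                # Additive blend, clamped to the displayable range.
                out[r][c] = (min(1.0, pr + v * cr),
                             min(1.0, pg + v * cg),
                             min(1.0, pb + v * cb))
    return out
```

For example, a brightfield grayscale channel rendered in white can be overlaid with a green fluorescent channel, and raising only the fluorescent channel's gain leaves the brightfield contribution untouched.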
  • FIG. 8 depicts, schematically, an exemplary integrated multi-dimensional microscopy system 800 that can be used to implement the methods and systems disclosed in this specification.
  • the exemplary integrated multi-dimensional microscopy system 800 includes a computer system 802, a microscopy imaging system 814, a display device 812, and a data storage device 810.
  • the microscopy system 814 includes an image acquisition device such as a charge-coupled device (CCD) camera 820 and can include components such as motorized stages 816, Z motors 818, motorized filter wheel shutters 822, monochromators, piezo-electric focus devices, and so on.
  • the computer system 802 includes software components, such as an acquisition engine 804, a processing engine 806, a visualization engine 808, and communication modules (not shown) for communicating with the other software and hardware components of the integrated multi-dimensional microscopy system 800, and optionally also for communicating with other computing devices not shown in FIG. 8.
  • Display 812 may be considered an integral part of computer system 802, or a part of the imaging system 814, or may be connected directly to both.
  • the integrated multi-dimensional microscopy system 800 can acquire images at a high speed in multiple dimensions, such as the Z dimension, the Stage dimension, the Time dimension, the Channel dimension, and so on.
  • the acquisition engine 804 of the computer system 802 includes software instructions for controlling the various components of the microscopy system during image acquisition.
  • the acquisition engine can control the manipulations of the motorized stages 816, the Z motors 818, the filter wheel shutters 822, the monochromators, the piezo-electric focus devices, and so on.
  • the acquisition engine 804 can allow the user to provide setup parameters for the microscope and other hardware controls, such as illumination, magnification, focus, acquisition speed, imaging location, camera settings, and so on.
  • Images acquired by multi-dimensional microscopy system 800 can be stored in temporary storage (e.g., RAM of the computer system 802) for instant replay or selection by the user. Images can subsequently be stored in the data storage 810 for permanent storage and future processing and access.
  • the processing engine 806 can include instructions for processing the acquired images and deriving measurement data from the images.
  • the processing engine 806 can create composite images based on multiple images or image series, create movie sequences, stitch images, create intensity profiles, adjust image contrast, perform binary operations (e.g., invert, erode, and dilate), apply various image filters (e.g., low pass, sharpen, median filter, haze, segmentation, etc.), reduce background, correct shading, perform Fast Fourier Transform operations, identify objects and boundaries within images, and so on.
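As an illustration of one of the binary operations listed above, a minimal pure-Python erosion with a 3x3 square structuring element might look as follows. This is a sketch of the generic morphological operation, not the processing engine's actual implementation.

```python
def binary_erode(mask):
    """Erode a binary mask with a 3x3 square structuring element.

    A pixel stays 1 only if it and all eight neighbours are 1;
    border pixels are treated as eroded away. Erosion shrinks
    foreground objects and removes isolated noise pixels.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if all(mask[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)):
                out[r][c] = 1
    return out
```

Dilation is the dual operation (a pixel becomes 1 if any neighbour is 1), and erosion followed by dilation yields the familiar morphological opening.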
  • the processing engine 806 can also perform various analyses on the images and measurements.
  • the visualization engine 808 can include software instructions for implementing a user interface for presenting the images and measurement data acquired during multidimensional microscopy experiments.
  • visualization interface disclosed in this specification can be implemented in the visualization engine 808.
  • the software instructions can be implemented in any of several programming languages known to those skilled in the art of programming digital computers. Preferred languages include those that are effective for high intensity numerical operations as well as those that are tailored towards graphical manipulations such as display and transformations of windows. Such languages include, but are not limited to: C, C++, Visual Basic, and Fortran. Other software and hardware components can be included in the example multidimensional microscopy system 800. In some implementations, not all components shown in FIG. 8 need to be included.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions (such as applets, functions, and subroutines), encoded on one or more computer storage media for execution by, or to control the operation of, a data processing apparatus.
  • the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus.
  • a computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of such devices. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal.
  • the computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CD's, disks, or other storage devices).
  • the operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer- readable storage devices or received from other sources.
  • the term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple chips, or combinations of the foregoing.
  • the apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • the apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross- platform runtime environment, a virtual machine, or a combination of one or more of them.
  • the apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
  • a computer program (also known as a program, software, software application, script, or code) that embodies the methods described herein can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
  • a computer program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output.
  • the processes and logic flows described herein can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • processors suitable for the execution of a computer program as described herein include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only memory (ROM) or a random access memory (RAM) or both.
  • the essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • a computer need not include or be coupled to any or all such devices.
  • a computer for use herein can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.
  • Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • to provide for interaction with a user, the subject matter described herein can be implemented on a computer having a display device, e.g., a cathode ray tube (CRT), organic electroluminescent display (OED), or liquid crystal display (LCD) monitor, for displaying information to the user, and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user, for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
  • Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
  • the computing system as used herein can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • a server transmits data (e.g., a hypertext markup language (HTML) page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device).
  • Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
  • FIG. 9 An example of one such type of computer for use with a microscopy system herein is shown in FIG. 9.
  • the computing system 900 shown in FIG. 9 may be used to implement the systems and methods described in this document.
  • System 900 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
  • the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
  • Computing system 900 includes a processor 902, memory 904, a storage device 906, a high-speed interface 908 connecting to memory 904 and high-speed expansion ports 910, and a low speed interface 912 connecting to low speed bus 914 and storage device 906.
  • Each of the components 902, 904, 906, 908, 910, and 912 are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 902 can process instructions for execution within the system 900, including instructions stored in the memory 904 or on the storage device 906 to display graphical information for a GUI on an external input/output device, such as display 916 coupled to high speed interface 908.
  • multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
  • multiple computing systems 900 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
  • the memory 904 stores information within the computing system 900, as shown in FIG. 9.
  • the memory 904 is a computer-readable medium.
  • the memory 904 is a volatile memory unit or units.
  • the memory 904 is a non-volatile memory unit or units.
  • the storage device 906 is capable of providing mass storage for the computing system 900.
  • the storage device 906 is a computer-readable medium.
  • the storage device 906 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
  • a computer program product is tangibly embodied in an information carrier.
  • the computer program product contains instructions that, when executed, perform one or more methods, such as those described elsewhere herein.
  • the information carrier is a computer- or machine-readable medium, such as the memory 904, the storage device 906, or memory on processor 902.
  • the high speed controller 908 manages high bandwidth-intensive operations for the computing system 900, while the low speed controller 912 manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only. In one implementation, the high-speed controller 908 is coupled to memory 904 and display 916, and the low-speed controller 912 is coupled to storage device 906 and low-speed expansion port 914.
  • the low-speed expansion port which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard 918, a pointing device 920, a scanner 922, a printer 924, or a networking device 926 such as a switch or router, e.g., through a network adapter 928.
  • the computing system 900 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 928, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 930. In addition, it may be implemented in a personal computer such as a laptop computer 932 or a mobile device (not shown). Alternatively, components from computing system 900 may be combined with other components in a mobile device (not shown). Each of such devices may contain one or more of computing systems 900, and an entire system may be made up of multiple computing systems communicating with each other. Other embodiments are within the scope of the following claims.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing an integrated data visualization interface for multi-dimensional microscopy image and measurement data. The integrated data visualization interface utilizes the known relational and dimensional relationships existing among the images to automatically provide synchronized and coordinated visualization and manipulation of multiple related images and measurement data through multiple dataset display elements (e.g., image grids, browsable image strips, tables, or graphs). Other attributes of these dataset display elements are also described.
PCT/US2009/067751 2009-12-11 2009-12-11 Visualisation de données intégrée pour microscopie multidimensionnelle WO2011071505A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2009/067751 WO2011071505A1 (fr) 2009-12-11 2009-12-11 Visualisation de données intégrée pour microscopie multidimensionnelle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2009/067751 WO2011071505A1 (fr) 2009-12-11 2009-12-11 Visualisation de données intégrée pour microscopie multidimensionnelle

Publications (1)

Publication Number Publication Date
WO2011071505A1 true WO2011071505A1 (fr) 2011-06-16

Family

ID=42537643

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/067751 WO2011071505A1 (fr) 2009-12-11 2009-12-11 Visualisation de données intégrée pour microscopie multidimensionnelle

Country Status (1)

Country Link
WO (1) WO2011071505A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2866071A4 (fr) * 2012-06-21 2015-07-15 Ave Science & Technology Co Ltd Procédé et dispositif de traitement d'image
US9208596B2 (en) 2014-01-13 2015-12-08 International Business Machines Corporation Intelligent merging of visualizations
WO2018042629A1 (fr) * 2016-09-02 2018-03-08 オリンパス株式会社 Dispositif d'observation d'image et système de microscope

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1691230A1 (fr) * 2005-02-10 2006-08-16 Olympus Corporation Dispositif photo-micrographique et procédé de contrôle
US20090226061A1 (en) * 2006-12-01 2009-09-10 Nikon Corporation Image processing device, image processing program, and observation system
US20090237502A1 (en) * 2006-11-30 2009-09-24 Nikon Corporation Microscope apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ALAIN BRIOT: "DxO Optics Pro v5 - A Review and Tutorial", February 2008 (2008-02-01), pages 1 - 20, XP002598293, Retrieved from the Internet <URL:http://www.dxo.com/var/dxo/storage/original/application/59ec5161f7ec8b53382580477f6ddb0d.pdf> [retrieved on 20100827] *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2866071A4 (fr) * 2012-06-21 2015-07-15 Ave Science & Technology Co Ltd Procédé et dispositif de traitement d'image
US9542592B2 (en) 2012-06-21 2017-01-10 Ave Science & Technology Co., Ltd. Image processing method and apparatus
US9208596B2 (en) 2014-01-13 2015-12-08 International Business Machines Corporation Intelligent merging of visualizations
WO2018042629A1 (fr) * 2016-09-02 2018-03-08 オリンパス株式会社 Dispositif d'observation d'image et système de microscope
US10690899B2 (en) 2016-09-02 2020-06-23 Olympus Corporation Image observation device and microscope system

Similar Documents

Publication Publication Date Title
US8564623B2 (en) Integrated data visualization for multi-dimensional microscopy
Zhao et al. Exploratory analysis of time-series with chronolenses
JP6348504B2 (ja) 生体試料の分割画面表示及びその記録を取り込むためのシステム及び方法
JP4122234B2 (ja) データ分析システムおよびデータ分析方法
EP2929453B1 (fr) Systèmes et procédés de sélection et d&#39;affichage d&#39;expression de biomarqueur dans un spécimen biologique et de capture d&#39;enregistrements associés
Peng et al. Extensible visualization and analysis for multidimensional images using Vaa3D
Bertini et al. Quality metrics in high-dimensional data visualization: An overview and systematization
Fekete et al. Interactive information visualization of a million items
EP1676234B1 (fr) Interface graphique pour visualisation en 3d d&#39;une collecte de donnees sur la base d&#39;un attribut des donnees
Schmidt et al. VAICo: Visual analysis for image comparison
US7849024B2 (en) Imaging system for producing recipes using an integrated human-computer interface (HCI) for image recognition, and learning algorithms
Ai-Awami et al. Neuroblocks–visual tracking of segmentation and proofreading for large connectomics projects
Ward et al. Interaction spaces in data and information visualization.
Lekschas et al. Pattern-driven navigation in 2D multiscale visualizations with scalable insets
GB2585423A (en) Utilizing context-aware sensors and multi-dimensional gesture inputs to efficiently generate enhanced digital images
Yang et al. The pattern is in the details: An evaluation of interaction techniques for locating, searching, and contextualizing details in multivariate matrix visualizations
WO2011071505A1 (fr) Visualisation de données intégrée pour microscopie multidimensionnelle
Wybrow et al. Interaction in the visualization of multivariate networks
Brivio et al. PileBars: Scalable Dynamic Thumbnail Bars.
Wang et al. Generating sub-resolution detail in images and volumes using constrained texture synthesis
CN106802929B (zh) 一种大数据三维模型的图形化分析方法及系统
Dos Santos A framework for the visualization of multidimensional and multivariate data
Lekschas Scalable Visualization Tools for Pattern-Driven Data Exploration
Berger VAST Lite Volume Annotation and Segmentation Tool User Manual, VAST Lite 1.1
Lange et al. Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09806064

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09806064

Country of ref document: EP

Kind code of ref document: A1