WO2009108179A2 - Motion-based visualization - Google Patents

Motion-based visualization

Info

Publication number
WO2009108179A2
Authority
WO
WIPO (PCT)
Prior art keywords
data
motion
graphical
layer
image
Prior art date
Application number
PCT/US2008/013884
Other languages
French (fr)
Other versions
WO2009108179A3 (en)
Inventor
Robert J. Bobrow
Aaron Mark Helsinger
Michael J. Walczak
Original Assignee
Bbn Technologies Corp.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/961,242 external-priority patent/US7629986B2/en
Priority claimed from US12/169,934 external-priority patent/US8941680B2/en
Application filed by Bbn Technologies Corp. filed Critical Bbn Technologies Corp.
Publication of WO2009108179A2 publication Critical patent/WO2009108179A2/en
Publication of WO2009108179A3 publication Critical patent/WO2009108179A3/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/20: Drawing from basic elements, e.g. lines or circles
    • G06T 13/00: Animation
    • G06T 13/80: 2D [Two Dimensional] animation, e.g. using sprites
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/50: Lighting effects
    • G06T 15/503: Blending, e.g. for anti-aliasing
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20: Indexing scheme for editing of 3D models
    • G06T 2219/2016: Rotation, translation, scaling

Definitions

  • the present invention is directed to data display. It particularly concerns effectively displaying high-dimensional and complex relational data, including volumetric data, e.g., medical image data, security screening data, or architectural drawing data.
  • Hyperspectral data are typically similar to data that result from a camera in the sense that the domain is usually a two-dimensional scene. But the value taken for each picture element ("pixel") in the scene is not a vector representing visible-color components, such as red, green, and blue or cyan, magenta, and yellow. Instead, it is a vector consisting of a relatively large number of components, each of which typically represents some aspect of the radiation received from a respective wavelength band. And the bands often fall outside the visual range. Because of the data's high dimensionality and the limited dimensionality of human visual perception, some degree of selectivity in data presentation is unavoidable, and the decisions that are involved in making the selections have a significant impact on the presentation's usefulness to the human viewer.
  • Volumetric medical images may be quite complex. In such images, a volume element is referred to as a "voxel" (analogous to a pixel in two-dimensional space). Depending on the transparency assigned to the voxels, graphical features that may be of interest to a viewer may be obscured by other voxels. Similarly, the complexity of volumetric images in some fields, for example medical imaging, results in the boundaries of various features being difficult to detect.
  • data objects may represent respective individual people, and the dimensions may be age, gender, height, weight, income, etc.
  • the displays can be on the screens of different monitors, for example, or on different parts of a single monitor's screen.
  • a user employs a mouse or other device to select a subset of the objects represented by icons in one display, and the display system highlights the other display's icons that represent the same objects.
  • Another approach is the use of stereo and rotating perspective displays in data display systems.
  • Although stereo views and rotating perspective views increase the human user's understanding of volumetric data and images, they do not by themselves make portions of the displayed image stand out from the other data presented on the display.
  • Another technique, previously proposed to assist human users in distinguishing important graphical features in two-dimensional images, is to impart motion to those graphical features.
  • Such a display technique takes advantage of the inherent ability of the human perception system to recognize patterns in data by quickly associating graphical features that are moving in the same fashion.
  • the invention relates to a security screening system.
  • the security screening system includes a security screening device for outputting graphical data about an object being screened, a display, a memory for storing the graphical data output by the security screening device, and a processor.
  • the processor is in communication with the memory and the display.
  • the processor is configured to retrieve the graphical data from the memory, display the graphical data stored in the memory in a plurality of layers overlaid over one another, and impart a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight data representing suspicious materials depicted in the first layer.
  • the suspicious materials may include, for example, metal or chemical substances.
  • the security screening device includes a plurality of image sources, wherein each image source generates graphical data corresponding to a respective layer of the plurality of displayed layers.
  • the processor may be configured to generate the plurality of displayed layers from the retrieved graphical data such that the data included in each respective layer shares a common characteristic with other data in the layer.
  • the processor is configured to impart a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of data within the first layer.
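As a rough sketch, the layer-motion highlighting described in the preceding paragraphs might be organized as follows. The class name, the point-list layer representation, and the sinusoidal offset are illustrative assumptions, not the patent's implementation:

```python
import math

class MovingLayerDisplay:
    """Overlay several layers and oscillate one of them to highlight it.

    Each layer is a list of (x, y) points; the highlighted layer is drawn
    at a small sinusoidal horizontal offset from its rest position each
    frame, while the remaining layers stay fixed.
    """

    def __init__(self, layers, highlighted_index, amplitude=3.0, frequency=1.0):
        self.layers = layers
        self.highlighted_index = highlighted_index
        self.amplitude = amplitude    # peak displacement, in pixels
        self.frequency = frequency    # oscillations per second

    def layer_positions(self, t):
        """Return every layer's point positions at time t (seconds)."""
        dx = self.amplitude * math.sin(2 * math.pi * self.frequency * t)
        out = []
        for i, layer in enumerate(self.layers):
            if i == self.highlighted_index:
                out.append([(x + dx, y) for (x, y) in layer])
            else:
                out.append(list(layer))
        return out
```

At t = 0 the offset is zero, so all layers sit at their rest positions; as t advances, only the highlighted layer moves.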
  • the invention, in another aspect, relates to a medical image analysis system.
  • the medical image analysis system includes a medical imaging device for outputting graphical data representing characteristics of a body structure being imaged, a display, a memory for storing the graphical data output by the medical imaging device, and a processor.
  • the processor is in communication with the memory and the display.
  • the processor is configured to retrieve the graphical data from the memory, display the graphical data stored in the memory in a plurality of layers overlaid over one another, and impart a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight characteristics of a body structure represented in the first layer.
  • the medical imaging device includes a plurality of image sources, wherein each image source generates graphical data corresponding to a respective layer of the plurality of displayed layers.
  • the processor may be configured to generate the plurality of displayed layers from the retrieved graphical data such that the data included in each respective layer shares a common characteristic with other data in the layer.
  • the processor is configured to impart a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of the body structure represented within the first layer.
  • the invention, in a third aspect, relates to a method for displaying graphical data.
  • This method includes displaying a plurality of layers of graphical data overlaid over one another, imparting a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight data represented in the first layer, and imparting a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of data within the first layer.
  • at least one of the plurality of displayed layers includes image data generated by a different imaging source than that used to generate image data for a second layer in the plurality of displayed layers.
  • the method may include receiving data for display, and generating the plurality of displayed layers from the received data, wherein the data in each respective layer in the plurality of displayed layers shares a common characteristic.
  • At least one layer in the plurality of displayed layers includes image data generated by a different imaging source than that used to generate image data for a second layer in the plurality of displayed layers.
  • the different imaging sources could capture the image data using different imaging techniques.
  • the first layer may comprise an image projected on an array of geometric shapes, and imparting the localized motion comprises shifting vertices of geometric shapes in the first area.
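A minimal sketch of this vertex-shifting form of localized motion, assuming the layer's image is projected on a mesh indexed by grid coordinates. The function name and the vertical sinusoidal displacement are illustrative choices:

```python
import math

def shift_vertices(vertices, region, t, amplitude=1.0, frequency=2.0):
    """Displace only the vertices inside `region` to animate a local area.

    `vertices` is a dict mapping (i, j) grid indices to (x, y) positions;
    `region` is a predicate over (i, j). Vertices inside the region bob
    vertically with a sinusoid; all other vertices keep their positions,
    so the image projected on the mesh warps only in the chosen area.
    """
    dy = amplitude * math.sin(2 * math.pi * frequency * t)
    return {
        ij: (x, y + dy) if region(ij) else (x, y)
        for ij, (x, y) in vertices.items()
    }
```

Calling this once per frame with increasing t yields a first area that visibly oscillates while the rest of the layer stays still.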
  • at least one of the displayed layers is at least partially transparent.
  • the method further includes receiving an input from a user identifying data to be highlighted, and determining the imparted motion in response to the user input.
  • the invention, in another aspect, relates to a data visualization system for displaying a volumetric image or an image having at least three dimensions.
  • This system includes a user interface, a display, a memory for storing graphical data, and a processor in communication with the memory and the display.
  • the processor is configured for retrieving graphical data from the memory and displaying a volumetric image incorporating a plurality of graphical features of the data on the display.
  • the processor is configured for processing input from the user interface to identify a first of the displayed graphical features, and to impart motion to the first identified graphical feature relative to the remainder of the volumetric image to highlight the first graphical feature.
  • the system can include a medical imaging device, a security screening device, or a device for generating architectural drawings, each of which may be used for outputting the graphical data stored in the memory.
  • the graphical data may correspond to medical image data, security screening data, or architectural drawing data.
  • the medical image data may be obtained from a plurality of medical imaging devices.
  • the medical image data may be captured from the medical imaging device using different imaging techniques.
  • the user interface of the data visualization system receives a user input.
  • graphical feature refers to a collection of one or more voxels having a logical connection to one another.
  • voxels may correspond to portions of a same physical structure.
  • the voxels may be logically connected due to their relationship to different structures in a common network (e.g., electrical networks, communication networks, social networks, etc.).
  • Voxels may also be related due to their sharing a common characteristic.
  • the voxels in a graphical feature may be related in that they correspond to tissue having a common density or to a fluid flow having a common flow rate.
  • the logical connection can be any criterion selected by a user for selecting groups of voxels, or any criterion applied by an artificial intelligence, signal processing, or pattern recognition system designed to identify relevant features in data.
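One simple way such a criterion could group voxels into graphical features is a partition by value band, e.g., by tissue density. The band names and half-open intervals below are illustrative assumptions:

```python
def group_voxels_by_density(voxels, bands):
    """Partition voxels into graphical features by density band.

    `voxels` maps (x, y, z) coordinates to a density value; `bands` is a
    list of (name, low, high) ranges. Each voxel joins the first band
    whose half-open interval [low, high) contains its density, so each
    returned feature is a set of voxels sharing a common characteristic
    in the sense used above.
    """
    features = {name: set() for name, _, _ in bands}
    for coord, density in voxels.items():
        for name, low, high in bands:
            if low <= density < high:
                features[name].add(coord)
                break
    return features
```

Each resulting voxel set can then be treated as one graphical feature and given its own motion.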
  • the volumetric image may be obtained from a plurality of image sources.
  • the graphical data may correspond to medical image data, security screening data, or architectural drawing data.
  • the medical image data may be captured using different imaging techniques.
  • the processor may obtain three-dimensional data by analyzing a set of data having at least two dimensions, and in other embodiments, at least one part of the graphical data is received from a different source than used for a remainder of the graphical data.
  • the processor also receives an input from a user.
  • This user input may comprise, among other inputs, a query, a cursor brush, or a mouse click.
  • the computer executable instructions cause the processor to identify one of the graphical features of the volumetric image by determining bounds of selected subject matter.
  • the invention, in another aspect, relates to a method for analyzing data having at least three dimensions.
  • This method includes receiving data for display, displaying a volumetric image incorporating a plurality of graphical features visually representing portions of the data, and imparting motion to one of the graphical features relative to the remainder of the volumetric image to highlight this graphical feature.
  • FIG. 1 is a block diagram of a data visualization system in which the present invention's teachings may be implemented
  • FIG. 2 is a diagram of a display of the type often employed for link analysis
  • FIG. 3 is a diagram that illustrates the result of using such a display in accordance with one of the invention's aspects
  • FIG. 4 depicts exemplary histograms in which brushing is being performed
  • FIGS. 5A, 5B, and 5C are plots of one component of the motion of a body that represents a data object in accordance with the present invention
  • FIG. 6 is a diagram that illustrates one kind of three-dimensional body in whose features an object's data can be encoded in accordance with one of the invention's aspects
  • FIG. 7 is a flow chart of the manner in which one embodiment of the invention operates.
  • FIG. 8 is a diagram that illustrates one way in which a display can be generated from three-dimensional models that represent data objects in accordance with one of the present invention's aspects
  • FIG. 9 depicts a small segment of a display generated by projecting such models
  • FIG. 10 depicts a larger segment of the display of FIG. 9;
  • FIGS. 11A-11C are illustrative outputs of a geographic information system (GIS), according to an illustrative embodiment
  • FIGS. 12A and 12B depict simulated outputs of an X-Ray screening machine incorporating the data visualization technology described herein;
  • FIGS. 13 A and 13B depict the output of a data visualization system integrated with a viewfinder, according to an illustrative embodiment
  • FIG. 14A is a simple illustration of a tooth, an example of a volumetric image of the type that may be displayed in accordance with one of the invention's aspects;
  • FIG. 14B illustrates the volumetric image from FIG. 14A in which a first graphical feature of the volumetric image moves relative to the remainder of the volumetric image;
  • FIG. 14C is a diagram of a display containing a volumetric image with a plurality of graphical features in which all graphical features are displayed
  • FIG. 14D is a diagram of a display containing a volumetric image with a plurality of graphical features, in which only one graphical feature is displayed;
  • FIG. 14E illustrates the volumetric image from FIG. 14A in which a part of the first graphical feature of the volumetric image possesses a localized motion relative to the remainder of the first graphical feature of the volumetric image;
  • FIG. 15A is a schematic that illustrates the application of a medical imaging technique in which 2-dimensional medical images or "slices" of a human skull are captured;
  • FIG. 15B is a schematic that illustrates the application of a medical imaging technique in which a composite volumetric medical image of a part of the human skull of FIG. 15A is created from a plurality of two-dimensional image slices;
  • FIG. 15C illustrates the medical image from FIG. 15B in which a first graphical feature of the slices of the image moves relative to the remainder of the image;
  • FIG. 15D is a diagram of a display containing the medical image from FIG.
  • FIG. 16A illustrates a volumetric architectural drawing in which multiple graphical features of the drawing are displayed
  • FIG. 16B illustrates the volumetric architectural drawing from FIG. 16A in which one graphical feature of the drawing moves relative to the remainder of the drawing when viewed from one perspective;
  • FIG. 16C illustrates the volumetric architectural drawing from FIG. 16A in which one graphical feature of the drawing moves relative to the remainder of the drawing when viewed from a different perspective;
  • FIG. 16D is a diagram of a display containing a volumetric architectural drawing from FIG. 16A in which some graphical features are displayed and/or moving.
  • FIG. 1 corresponds to a data visualization system 100.
  • the data visualization system 100 includes a processor 104, a memory 106, e.g., Random-Access Memory (RAM), a display 108, and a user interface 110.
  • Processor 104 operates on data 102 to form an image in accordance with computer executable instructions loaded into memory 106.
  • the instructions will ordinarily have been loaded into the memory from local persistent storage in the form of, e.g., a disc drive with which the memory communicates.
  • the instructions may additionally or instead be received by way of user interface 110.
  • processor 104 may be a general purpose processor, such as a Central Processing Unit (CPU), a special purpose processor, such as a Graphics Processing Unit (GPU), or a combination thereof.
  • data visualization system 100 of FIG. 1 may include a medical imaging device, a security screening device, or a device for generating architectural drawings, each of which can generate data 102.
  • data 102 may correspond to medical image data, security screening data, or architectural drawing data.
  • the medical image data may be obtained from a plurality of medical imaging devices, e.g., a Computer-Aided Tomography (CAT) scan machine or a Magnetic Resonance Imaging (MRI) machine.
  • the processor 104 may obtain three-dimensional data by analyzing a set of data having at least two dimensions.
  • At least one part of the data is received from a different source than used for a remainder of the data, e.g., a set of CAT scans may be received from a CAT scan machine, and a set of MRIs may be received from an MRI machine.
  • the data visualization system 100 of FIG. 1 then combines the data from the two sources to display a single volumetric image.
  • Display 108 may be any display device capable of interfacing with processor 104, e.g., an
  • system 100 could receive user input via user interface 110 from devices such as a mouse 116 and a keyboard 120.
  • the user input could include, among other inputs, a query 112, a mouse click 114, or a cursor brush 118.
  • the user input could also originate from devices connected to user interface 110 remotely, e.g., via a network connection.
  • Data visualization system 100 may receive data 102 into memory 106 in ways similar to those in which the instructions are received, e.g., a disc drive with which the memory communicates.
  • data 102 may be received from a network, e.g., a local area network, a wireless area network, or another processor.
  • Electromagnetic signals representing the instructions may take any form. They are typically conductor-guided electrical signals, but they may also be visible- or invisible-light optical signals or microwave or other radio-frequency signals.
  • the instructions indicate to the processor how it is to operate on data, typically received in ways similar to those in which the instructions are received.
  • the instructions cause the processor to present some of the data to one or more human users by driving some type of display, such as the local monitor 126.
  • Processor 104 in data visualization system 100 is configured to operate on data 102.
  • processor 104 is configured to process received input from user interface 110, to carry out operations on data 102, to identify graphical features or layers visually representing portions of the processed image, and to display the processed image or identified graphical features or layers of the processed image on display 108.
  • processor 104 can form an image and display an identified graphical feature of, or a layer in, this image on display 108, as will be described further in reference to FIGS. 11-16.
  • the present disclosure can be applied to representing a wide variety of data objects.
  • One of the invention's aspects is particularly applicable to data that specify various types of relationships between data objects that the data also represent.
  • the data may represent the results of criminal investigations: certain of the data objects may represent surveillance targets such as people, buildings, or businesses. Of particular interest in the context of link analysis, some of the objects may include references to other objects.
  • FIG. 2 illustrates in a simplified manner how the system may present the objects in a display for link analysis.
  • Each of the nodes 204, 206, 208, 210, 212, and 214 represents a different data object.
  • the drawing employs more than one style of icon to represent the nodes. This is not a necessary feature of the invention, but thus varying the icon type is one way to impart additional information.
  • the objects represent surveillance targets, for example, one of each object's fields may indicate what type of target it is, e.g., whether the target is a person, a building, a business, etc. If so, the types of icons placed at those nodes can represent that aspect of the object's contents.
  • the icons at nodes 204, 206, and 208 represent people
  • those at nodes 210 and 212 represent corporations
  • those at nodes 214 and 216 represent buildings.
  • a display feature such as icon shape can be used to represent one of the data's dimensions.
  • Another dimension, such as the priority assigned to the target's surveillance, may be represented by the icon's color.
  • Although the nodes' locations on the display are essentially arbitrary in some link-analysis applications, in others they represent some aspect of the data, such as the target's geographical location.
  • each object may include fields whose contents represent relationships to other data objects or represent pointers to arrays of such fields.
  • Such a field may include, say, a pointer or handle to the object linked by the represented relationship and may also include information about the relationship's type.
  • the display's lines represent those relationships, and, in this example, the line style conveys information, too.
  • line 218, which is relatively thin, represents the fact that the target represented by node 206 has communicated by telephone with the target that node 208 represents.
  • line 220, which is thicker, indicates that target 206 owns target 214.
  • Other types of relationships may be represented by dashed lines, arc-shaped lines, etc. For the sake of simplicity, FIG.
  • the system selectively moves icons for this purpose.
  • the user wants to see all targets that satisfy some criterion.
  • Suppose the criterion is that the target has to be within two communications links from a base target. The user may have chosen the base target by, say, "clicking" on it.
  • the display system causes their icons to move.
  • FIG. 3 illustrates this.
  • Cursor 302 represents the user's choosing node 304, and the dashed lines represent the resultant motion of nodes 306, 308, and 310, which satisfy that criterion. In most displays, the lines connected to the nodes will "rubber band," i.e., will so stretch with the node movement as to remain connected despite that motion.
  • That example uses a direct form of user input: the user employs a mouse to select one of the targets. But link analysis does not always require that type of input.
  • the criterion may be that motion is imparted to nodes representing all targets owned by high-priority targets; i.e., the selection is totally data driven.
  • This approach to representing the data is advantageous because, although the user could, by careful attention, identify the targets that are within two communications links of the chosen target, making them move causes them to "jump out" at the viewer, and it can do so without, say, changing any colors and thereby obscuring originally presented information.
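The within-two-links criterion amounts to a breadth-first search of bounded depth over the link graph. A sketch, assuming an adjacency-list representation of the communication links (the function name is illustrative):

```python
from collections import deque

def nodes_within_links(links, base, max_hops=2):
    """Return the nodes reachable from `base` in at most `max_hops` links.

    `links` is an adjacency mapping from each node to the nodes it has
    communicated with. The returned set excludes `base` itself; a display
    system would impart motion to exactly these nodes' icons.
    """
    seen = {base: 0}                 # node -> hop count from base
    queue = deque([base])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:   # depth limit reached; don't expand
            continue
        for neighbor in links.get(node, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return set(seen) - {base}
```

The same routine supports purely data-driven selection: the caller can run it from any node that satisfies a criterion, with no mouse input at all.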
  • a similar approach can be applied to what is often termed "brushing," which is a technique often employed when multidimensional data are presented in more than one display simultaneously.
  • the axes in one display may represent one pair of the data components, while those in a different display may represent a different pair.
  • at least one of the displays is an income histogram in which each of the bars is considered to be a stack of icons representing respective people whose incomes belong to the corresponding income range, while another display is an age histogram of the same people.
  • one or more of the diagrams is a cluster diagram: icons representing different objects are clustered together in accordance with some similarity metric computed as some function of the objects' data components.
  • a user in some fashion selects a subset of the object-representing icons in one of the displays, and the display system indicates which of the icons in the other display correspond to the same data objects.
  • the user may, for example, select objects by causing a cursor to touch the corresponding icons or draw an enclosure about them; in the histogram case the user may simply click on one of the bars. Or he may select the objects in some other manner, such as by entering a selection criterion.
  • some conventional display systems highlight the other display's icons that correspond to the same objects. But conventional highlighting can obscure the information provided by, for instance, color. Using motion instead avoids this effect.
  • FIG. 4 illustrates this type of brushing for a situation in which both displays are histograms of the type described above.
  • the user has selected one of the income bins, and, by moving the corresponding icons in the lower plot, the display system illustrates the user-selected income group's distribution among the various age groups.
  • But other types of motion can be used instead or in addition. Both these types of linear motion could be distinguished from diagonal linear motion, for example. Distinctions could also be made on the basis of phase or frequency: two sets of nodes vibrating linearly in the same direction could be caused to vibrate out of phase with each other, or at different frequencies.
  • the motion need not be linear; it may be elliptical, for instance, in which case another distinction can be made on the basis of whether the motion is clockwise or counterclockwise.
  • the motion is not necessarily a change in position from some rest position; it can, for instance, be a change in shape, such as rhythmic expansion and contraction of the icon that represents the data object.
  • FIGS. 5A, 5B, and 5C depict one component.
  • the plot of FIG. 5A would be the component parallel to, say, the ellipse's major axis, with which the motion component parallel to the minor axis would be 90 degrees out of phase.
  • the harmonic motion that FIG. 5A depicts is typical. But some embodiments may instead or additionally employ other types of motion, such as the stuttering motion of FIG. 5B. Another example is the repeatedly decaying harmonic motion that FIG. 5C illustrates.
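The three motion profiles of FIGS. 5A-5C can be modeled as simple functions of time. The gating period, duty cycle, and decay constant below are assumptions for illustration; the figures do not specify them numerically:

```python
import math

def harmonic(t, freq=1.0):
    """FIG. 5A: steady sinusoidal displacement."""
    return math.sin(2 * math.pi * freq * t)

def stuttering(t, freq=1.0, duty=0.5, period=2.0):
    """FIG. 5B: harmonic motion gated on and off within each period."""
    on = (t % period) < duty * period
    return harmonic(t, freq) if on else 0.0

def decaying_harmonic(t, freq=1.0, decay=1.5, period=2.0):
    """FIG. 5C: harmonic motion whose amplitude decays, then restarts."""
    phase = t % period
    return math.exp(-decay * phase) * harmonic(phase, freq)
```

Any of these can drive one displacement component of an icon or body; assigning different profiles (or phases) to different node sets keeps them visually distinguishable.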
  • Another aspect of the invention is directed to the way in which the motion is generated.
  • the motion results from depicting moving three-dimensional bodies on the display.
  • Each body represents a respective data object, and various features of the body's motion represent respective components of the data object's multi-dimensional data.
  • the particular type of body is not critical, but FIG. 6 depicts for the sake of example a simple body type that we have employed.
  • body 602 includes nothing more than an upright 604 and an arm 606 attached to the upright.
  • each pixel is usually represented by a color vector consisting of components for, say, red, green, and blue, cyan, magenta, and yellow, or some similar set of values by which a natural color can be approximated.
  • the data are often the output of a camera whose sensors measure radiation intensities within different visible-light bands.
  • Hyperspectral images are similar in the sense that each pixel is represented by a vector whose components represent radiation within different wavelength bands. The difference is that the number of wavelength bands is usually much more than three, and most bands do not fall within the visible range.
  • the values usually represent intensities; they may additionally or instead represent other quantities, such as Stokes parameters.
  • FIG. 7 is a conceptual block diagram of the overall approach.
  • the raw data will typically be in the form of a two-dimensional array of high-dimensional pixel values. That is, the object's position in the array implicitly encodes the two-dimensional location of the pixel that the (high-dimensional) object represents, although there is no reason in principle why three-dimensional location information could not be stored in a three-dimensional array.
  • In some cases, the raw data's location granularity is coarser or finer than is convenient for employing simulated three-dimensional bodies to represent the objects, so the data may be re-sampled, as block 702 indicates, typically by employing one of the standard multi-rate sampling techniques.
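A minimal stand-in for such re-sampling, using plain linear interpolation rather than a full multi-rate filter bank (the function name is illustrative):

```python
def resample_row(row, new_length):
    """Linearly resample one row of pixel values to `new_length` samples.

    Each output sample interpolates between the two nearest input pixels,
    so a coarse row can be refined (or a fine row coarsened) to whatever
    granularity suits the body-per-pixel display.
    """
    if new_length == 1:
        return [row[0]]
    out = []
    scale = (len(row) - 1) / (new_length - 1)
    for i in range(new_length):
        pos = i * scale            # fractional position in the input row
        lo = int(pos)
        hi = min(lo + 1, len(row) - 1)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out
```

Applying it along each axis in turn resamples a whole two-dimensional array.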
  • a body model is then constructed for each object, as block 704 indicates.
  • That drawing depicts two bodies 802 and 804 in a (three-dimensional) model space.
  • the original image plane is mapped to a map plane 806 or other two-dimensional map surface in model space, and the bodies 802 and 804 are assigned zero-displacement positions at the locations in the model space to which the pixels that they represent are mapped.
  • a body's zero-displacement position may be considered to be the one at which its upright is oriented perpendicular to the map plane and intersects the map plane at the upright's midpoint.
  • Each of a plurality of a given data object's components is then mapped to various aspects of the moving body's features, including size, rate and/or mode of motion, and position.
  • the value of one of the data components (e.g., intensity, another Stokes parameter, or some other radiation-indicating quantity in the hyperspectral example) may be encoded in the arm's elevation angle 810.
  • Another component, say another of the Stokes parameters for the same band, may be encoded in the arm's rate and direction of azimuthal rotation 812.
  • pitch, roll, and yaw axes may be defined with respect to the normal to the map plane, and various components may be encoded in the upright's roll, pitch, and yaw angles and in those angles' rate of change.
  • components can be encoded in the body's size. For example, some embodiments may encode certain components in the arms' and uprights' lengths or thicknesses or in ratios of those lengths or thicknesses or in the rates at which any of those change. If the upright, too, is made to move, other components can be encoded in various aspects of that motion. If the motion is simple up-and-down motion, for example, data components can be encoded in the upright's mean position (with respect to its zero-displacement position) and in the amplitude, phase, and frequency of its vertical motion. If the upright's motion is more complex, further components can be encoded in that motion's other aspects. Note also that some of these features do not require that the body move.
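The feature encoding described above can be sketched in code. The following is an illustrative sketch only, not the implementation described in the patent; the particular component-to-feature assignments and scaling constants are hypothetical.

```python
# Illustrative sketch: map a data object's components (normalized to [0, 1])
# onto features of a simulated body, an "upright" with a rotating "arm."
# Which component drives which feature is an assumption for illustration.
def encode_body_features(components):
    c = components
    return {
        "arm_elevation_deg":    c[0] * 90.0,          # e.g., intensity in elevation angle
        "arm_azimuth_rate_dps": (c[1] - 0.5) * 60.0,  # signed rate/direction of rotation
        "upright_length":       0.5 + c[2],           # size features need no motion
        "arm_length":           0.25 + 0.5 * c[3],
        "vertical_amplitude":   c[4] * 0.2,           # up-and-down motion of the upright
    }

features = encode_body_features([0.5, 0.75, 0.2, 0.4, 1.0])
```

Further components could be encoded analogously in phase, frequency, thickness, or ratios of these quantities, as the text notes.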
• the system attributes physical characteristics, such as mass and elasticity, to the bodies, and one or more components are encoded in such features.
  • the bodies are simulated as being disposed in a gravitational field and/or as being attached to a common platform that undergoes some type of motion, such as rhythmic or irregular translation or pivoting.
  • the system encodes the data indirectly in the motion: the types of motion that the bodies undergo depend on the underlying data, so, again, the display may reveal patterns in the data. Similar effects may be exhibited if the system simulates wind flowing past the bodies.
• it may be desirable for the bodies to take the forms of flexible reeds in whose features the object components are so encoded as to affect the reeds' flexibility.
  • Other forms of indirect encoding will also suggest themselves to those skilled in the art.
  • the shape parameters on which we have concentrated are the upright's height, the arm's length, the angle that the arm forms with the upright, the upright's angle with respect to the map plane, and the arm's azimuth, i.e., its position around the upright.
  • the motion parameters came in four categories: azimuthal rotation of the upright, changes in the entire body's vertical position, circular changes in its horizontal position, and changes in the upright's tilt angle.
  • the time variation of the motion in each case was a simple sinusoid, so there were three parameters, namely, amplitude, frequency, and phase, within each of the four categories.
  • a further parameter within at least the first three categories is the mean, or "rest" position about which the motion occurs.
  • a data component can be encoded in the difference between this and the zero-displacement position to which the corresponding pixel has been mapped.
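The motion parameterization above (a sinusoid per category, plus a "rest" position offset) can be sketched as follows. This is a hedged illustration under the stated assumptions; the function name and argument choices are not from the source.

```python
import math

# One motion coordinate (e.g., the body's vertical position) as a simple
# sinusoid with amplitude, frequency, and phase, about a "rest" position
# offset in which a further data component can be encoded.
def body_position(t, rest, amplitude, frequency_hz, phase):
    return rest + amplitude * math.sin(2.0 * math.pi * frequency_hz * t + phase)

# A body whose rest position is offset 0.1 from its zero-displacement
# position, oscillating with amplitude 0.05 at 2 Hz:
z = body_position(t=0.125, rest=0.1, amplitude=0.05, frequency_hz=2.0, phase=0.0)
```

With four such categories and three sinusoid parameters each, plus the rest positions, well over a dozen components can in principle be carried by a single body.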
  • FIG. 7's block 706 represents all such encoding. It is apparent that, at least theoretically, an extremely high number of different data components can thus be encoded in a body's features. As a practical matter, of course, there comes a point at which the resultant visual information becomes overwhelming to the human viewer. But we believe that a human viewer can effectively comprehend patterns resulting from up to fifteen and possibly more different components encoded in this fashion.
  • FIG. 8 depicts a perspective projection, i.e., one in which points such as point 814 in the model space are projected onto the screen plane 816 along a line such as line 818 from the model-space point to a common viewpoint 820 located a finite distance away. More typically, the projection would be orthogonal: the viewpoint would be disposed at an infinite distance. In any event, the display would then be so driven as to produce the resultant image, as FIG. 7's block 710 indicates.
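The two projections mentioned above can be sketched minimally as follows, assuming for illustration that the screen plane is z = 0 and the viewpoint lies on the z axis; these coordinate conventions are assumptions, not from the source.

```python
# Perspective projection toward a finite viewpoint (as along line 818 to
# viewpoint 820), versus orthographic projection (viewpoint at infinity).
def project_perspective(point, viewpoint_z):
    x, y, z = point
    scale = viewpoint_z / (viewpoint_z - z)  # similar triangles along the projection line
    return (x * scale, y * scale)

def project_orthographic(point):
    x, y, _ = point  # simply drop the depth coordinate
    return (x, y)

p = (2.0, 1.0, -10.0)                             # a model-space point behind the screen plane
persp = project_perspective(p, viewpoint_z=10.0)  # foreshortened toward the viewpoint
ortho = project_orthographic(p)                   # depth has no effect
```

As the text notes, the orthographic case is the more typical; the display is then driven to produce the resulting image.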
• FIG. 9 depicts a small portion of a display that can result when the map plane forms a relatively small angle with the screen plane.
  • the projections of some of the bodies are so small as to be nearly imperceptible, while other bodies' projections are quite long.
  • data components are thus encoded, the user typically would not, in that example, directly infer the values of an individual data object's components from the display. He would instead observe overall patterns, possibly of the type that FIG. 10 illustrates, from which he may be able to infer information about the scene or identify avenues for further inquiry.
  • a display system can enable a user to detect patterns readily in a presentation of highly complex data. The invention thus constitutes a significant advance in the art.
  • Another type of display that benefits from the use of motion to distinguish different sets of data is the type that employs "layers" of data.
  • a simple example is simultaneous presentation of different sets of transistor characteristic curves.
  • a bipolar transistor's characteristics are often given as a set of curves on a common graph, each curve depicting collector current as a function of collector-to-emitter voltage for a different value of base current.
  • To compare transistors it would be helpful to be able to compare their characteristic curves visually.
  • One way to do this is to plot different transistors' curve sets on the same axes.
  • This type of display may be referred to as a "layered" display because different transistors' curves can be thought of as being disposed on transparent sheets, or "layers" that lie on top of one another.
  • a first motion is imparted on the entire layer of interest relative to the remaining layers. If a user is interested in more than one type of data, additional layers may be set in motion. Each layer is imparted with a distinctive motion relative to the remaining layers. For example, a first layer may be vibrated horizontally, a second layer may be vibrated vertically, and a circular motion may be imparted on a third layer.
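The distinctive per-layer motions described above can be sketched as time-varying pixel offsets. This is an illustrative sketch; the amplitude and frequency values are assumptions chosen for the example.

```python
import math

# Give each selected layer a distinctive oscillatory displacement: horizontal
# vibration, vertical vibration, or circular motion, as in the example above.
def layer_offset(motion, t, amplitude=3.0, frequency_hz=1.0):
    """Return the (dx, dy) displacement of a layer at time t, in pixels."""
    phase = 2.0 * math.pi * frequency_hz * t
    if motion == "horizontal":
        return (amplitude * math.sin(phase), 0.0)
    if motion == "vertical":
        return (0.0, amplitude * math.sin(phase))
    if motion == "circular":
        return (amplitude * math.cos(phase), amplitude * math.sin(phase))
    return (0.0, 0.0)  # unselected layers remain at their original position
```

At each frame, a renderer would draw every layer shifted by its offset, so the vibrating layers stand out against the static ones.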
  • each transistor curve may be assigned to its own layer.
  • a user may then select two transistors for particular attention from a group of, say, ten whose data a display presents.
  • the display may make one selected transistor's curves vibrate vertically and the other's vibrate horizontally.
  • the user could then readily recognize which data belong to the chosen transistors, and the comparison could be aided by having a given curve color represent the same base-current value for all transistors.
• Graphics software known in the art, including DirectX, provided by Microsoft Corporation of Redmond, Washington, and OpenGL, an open graphics library originally made available by Silicon Graphics, Inc. of Sunnyvale, California, provides functionality for the display of layered images, as well as for imparting relative motion to layers within such layered images.
  • each layer preferably includes data sharing a common characteristic.
  • each layer may include data generated from a different imaging source.
  • An image source may be an image capture device or a data storage medium independent of an image capture device.
  • each image capture device may emit or detect electromagnetic radiation of different wavelengths or energies.
  • one image source may generate images from light in the visible spectrum.
  • a second image source may generate images from light in the infrared portions of the spectrum.
  • a third image source may generate images from light in the ultraviolet portions of the spectrum.
  • X-ray images generated from multiple emission energies may be stored as separate layers.
• Other suitable image capture devices include, without limitation, radar systems, ultrasound devices, geophones, gravitational field sensors, or any sensor that outputs data relative to spatial position.
• FIGS. 11A-11C are illustrative outputs of a geographic information system (GIS).
• GIS systems are one class of system that would benefit substantially from the layered display technique described above.
  • the layered-display technique is particularly useful for naturally graphical data such as map data.
• Maps may include depictions of roads; utilities infrastructure, including power lines, sewage pipes, water mains, gas pipes, telecommunications infrastructure, etc.; zoning information; geo-registered satellite or aerial imagery, including imagery generated from light in or out of the visible spectrum; radar information; or other visual representations of data corresponding to a mapped location, including population density, demographic data, meteorological data, intelligence data, vegetation type, etc.
  • each of these data types may be stored separately. Each data type may be stored in a single layer or in multiple layers.
  • road data may be stored as a layer of municipal roads, a layer of state roads, and a layer of federal highways.
• Zoning data may be stored so that each zoning classification is stored as a separate layer, or it may be stored as a single map layer.
• a map displayed by the GIS typically would include two or more layers overlaid on one another.
  • at least one of the layers is displayed with at least some degree of transparency such that an underlying layer is at least partially visible underneath.
• the colors of at least some pixels in the displayed image at a given point in time are combinations or mixtures of the colors associated with overlapping positions in the respective layers.
• the colors of pixels change to take into account different mixtures and combinations of pixel colors as the overlapping positions change.
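The color mixing described above can be illustrated with standard "over" alpha compositing of a partially transparent upper-layer pixel onto the underlying pixel it currently overlaps. This is a sketch of the general technique, not necessarily the blending used by any particular embodiment.

```python
# As the layers move, each screen position blends a different pair of pixels,
# so the displayed colors change over time.
def blend_over(top_rgb, top_alpha, bottom_rgb):
    """Composite a partially transparent top pixel over a bottom pixel."""
    return tuple(top_alpha * t + (1.0 - top_alpha) * b
                 for t, b in zip(top_rgb, bottom_rgb))

# A 50%-transparent red pixel over a blue pixel yields purple:
mixed = blend_over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0))
```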
  • a user of the GIS selects layers of interest using a user interface.
  • a legend identifying each of the displayable layers is presented to a user.
  • the user then can select the layers desired to be displayed by, for example, clicking a mouse on a check box to select a layer, and then selecting a desired motion from a drop down menu. Additional user interface controls may be made available to adjust the amplitude of the motion as well as the transparency of any of the layers.
  • the user may select the layers to impart motion on by entering a query. Motion is then imparted on the layers that satisfy the query.
• FIG. 11A includes four individual layers 1102a-1102d (generally layers "1102") of geographical information, corresponding to overlapping geographical space, in this example, a portion of the Boston Metropolitan area.
• Layer 1102a includes political boundaries 1104.
• Layer 1102b includes local connector roads 1106.
• Layer 1102c includes interstate highways 1108.
• Layer 1102d includes commuter rail tracks 1109.
• the four layers can be displayed overlaid on one another to form map 1110.
• Each layer 1102 is at least partially transparent such that features in the underlying layers are visible.
• FIG. 11B includes three simulated screen shots 1120a-1120c (generally screen shots "1120") of a portion 1112, outlined in phantom, of the map 1110 of FIG. 11A.
• the screen shots 1120a-1120c simulate the motion that may be imparted on one or more layers 1102 of a map, according to an illustrative embodiment, to highlight information included in the respective layers.
• features from each of the layers 1102a-1102d are visible, including political boundary 1104, local connector roads 1106, interstate highways 1108, and rail tracks 1109.
• Screen shot 1120a illustrates the portion of the map 1110 before any motion is imparted on any layers 1102.
• Screen shot 1120b illustrates the portion of the map at a first instant of time after motion has been imparted on the political boundary and interstate highway layers 1102a and 1102c, respectively. The original positions of the political boundary 1104 and highway 1108 are depicted in phantom for reference.
• Screen shot 1120c illustrates the portion of the map 1110 at a second instant in time. As can be seen by comparing screen shot 1120b to 1120c, the political boundary layer 1102a has been put into a vertical oscillatory motion and the interstate highway layer 1102c has been put into a horizontal oscillatory motion.
• more dynamic motions, including any other regular or irregular oscillatory movement, may be employed without departing from the scope of the invention.
• the movement of the political boundary layer 1102a and the interstate highway layer 1102c relative to the remaining layers 1102b and 1102d serves to highlight to a viewer the positions of the political boundaries 1104 and the highway 1108.
• FIG. 11C is discussed further below.
  • FIGS. 12A and 12B depict simulated outputs of an X-Ray screening machine incorporating the data visualization technology described herein.
  • the X-Ray machine includes dual- or multi-energy level X-Ray beams generated by one or more X-Ray sources.
  • the images generated from each respective X-Ray beam are saved as separate layers.
• the layers are then overlaid on one another for presentation to a security screener.
• the X-Ray data collected from each source is color coded with a respective color corresponding to the atomic number of the material detected in the X-Ray image.
  • the coloring is omitted in the simulated outputs to retain clarity.
• layers corresponding to such materials are automatically imparted with a predetermined motion relative to the remaining layers, such that the material can be readily observed in context, i.e., in relation to the locations of other materials in an item being examined.
  • a user of the X-Ray machine may manually select one or more layers to impart motion to, as well as the desired motion. Additional controls may be used to adjust the amplitude of the motion and the transparency of the various layers. The same motion may be imparted on each of the layers. Alternatively, a first motion may be imparted on a subset of the layers, and a second motion may be imparted on a second subset of the layers.
• FIG. 12A includes two simulated X-ray output layers 1202a and 1202b, and an overlay 1204 of the output layers 1202a and 1202b.
  • Layer 1202a includes identified inorganic materials, i.e., a suitcase 1206 and a teddy bear 1208 included therein.
  • Layer 1202b includes metal objects identified by an X-ray scan at a second energy level.
  • the layer 1202b includes a knife 1210, as well as various metal components 1212 of the suitcase 1206.
  • the overlay 1204 illustrates how the packer of the suitcase 1206 may have attempted to obscure the knife 1210 by placing it behind the teddy bear 1208. By imparting motion on the metal layer, a viewer of the overlay 1204 of the layers 1202a and 1202b is able to quickly identify the knife 1210.
• FIG. 12B includes three screenshots 1220a-1220c of a portion of the simulated X-ray output of FIG. 12A (depicted in FIG. 12A as the rectangular region enclosed by dashed lines).
  • the first screen shot 1220a depicts the layers 1202a and 1202b in their original position.
  • Screen shots 1220b and 1220c depict the overlay at two points in time.
  • the phantom lines illustrate the original position of the knife 1210, as in screen shot 1220a.
• as with the depicted movement of the map layers in FIG. 11B, the depicted movement of layer 1202b is simplified to a simple horizontal oscillation. Imparted movement may include a simple oscillation as depicted, or a more dynamic, complex oscillation.
• FIGS. 13A-13B depict the output of a data visualization system integrated with a viewfinder, for example, of a vehicle, such as a tank or an aircraft, according to an illustrative embodiment of the invention.
• a viewfinder typically displays data generated from visible light cameras as well as from infrared sensors and/or radar systems.
  • the viewfinder display may also include mission data and/or instrumentation data, including vehicle speed, location, target range, etc.
• the data visualization system integrated with the viewfinder stores visible light data, infrared data, mission data, and instrumentation data as separate layers, which are then displayed overlaid on one another. To draw attention to data in a particular layer, the data visualization system imparts motion to the particular layer relative to the remaining layers.
• FIG. 13A depicts three separate layers 1302a-1302c of simulated graphical data that may be overlaid on one another to form the output of a viewfinder.
  • Layer 1302a includes an image taken from a visible light imaging device. Visible in layer 1302a are two buildings 1304 and a fence 1306.
  • Layer 1302b includes an image taken from an infrared imaging source. In layer 1302b, people 1308 are visible crouched behind the fence 1306 and below a window in the second floor of one of the buildings 1304.
• Layer 1302c includes computer-generated mission data, including speed, current coordinates, target coordinates, and a weapon status indicator.
• Overlay 1310 depicts the results of graphically overlaying the three layers 1302a-1302c.
  • FIG. 13B illustrates how one of the layers can be oscillated relative to the other layers to highlight information included in the oscillating layer.
• FIG. 13B includes three screenshots 1320a-1320c of an overlay of the three layers 1302a-1302c.
  • the first screen shot 1320a of FIG. 13B depicts the original alignment of the layers 1302a-1302c.
  • the remaining screen shots 1320b and 1320c depict the overlay at various times while the layer including infrared data is put into oscillation.
  • the original alignment is indicated in phantom.
  • the phantom lines are included in these figures merely for reference and are not intended to suggest that a viewer would actually see such phantom lines in practice. In some implementations, in which object recognition software is employed, such phantom depiction may be utilized.
  • the data displayed in layers by the data visualization system is not received as distinct layers. Instead, the data is divided into layers having common characteristics after receipt.
  • a machine learning program may identify features in an image and store such features in a layer distinct from the remainder of the image.
  • the data visualization system detects objects of interest in an image using object-class detection algorithms known in the art.
  • the data visualization system may detect, for example, faces and/or text characters and store each in a respective layer.
• the data visualization system displays the layers overlaid on one another. To draw attention to all faces in the image, the data visualization system imparts motion to the layer in the image corresponding to faces.
  • To highlight text the data visualization system imparts motion to the layer in the image corresponding to text characters.
• Similar data analysis can be applied to other forms of image data, including sonar, radar, or infrared images, in which objects of interest, e.g., submarines, aircraft, or roads, respectively, can be detected based on known signatures. Regions of the images having such signatures are stored in respective layers for overlaid display. Similar processing may be carried out on medical images, including X-rays, CAT scans, MRIs, etc. For example, portions of images corresponding to particular tissue densities or ranges of tissue densities may be stored in separate layers. The data visualization system then, automatically in response to software instructions executing on the data visualization system, or manually in response to user input, imparts motion on one or more of the layers to highlight the data stored therein. In the medical image context, selective motion of portions of a medical image based on tissue density may reveal features otherwise difficult to identify, including tumors, nerves, or vasculature.
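The division of received data into layers by a shared characteristic, e.g., tissue-density bands, can be sketched as follows. The function and band values are hypothetical, used only to illustrate the technique.

```python
# Assign each pixel to a layer according to which value range (e.g., a
# tissue-density band) it falls in. Pixels outside a layer's band are left
# transparent (None) so the layers can be overlaid and moved independently.
def split_into_layers(image, bands):
    """image: 2-D list of scalar values; bands: list of (lo, hi) ranges."""
    layers = []
    for lo, hi in bands:
        layers.append([[v if lo <= v < hi else None for v in row]
                       for row in image])
    return layers

image = [[0.1, 0.6],
         [0.9, 0.3]]
soft, dense = split_into_layers(image, bands=[(0.0, 0.5), (0.5, 1.0)])
```

Motion could then be imparted on the `dense` layer alone to make, say, a suspected tumor stand out against the surrounding tissue.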
  • the data visualization system visually conveys additional data by imparting a local motion on a portion of a layer relative to the remainder of the layer.
• Suitable local motions include harmonic vibrations of regions of the layer similar to those described above, as well as distortions to regions of the layer. The distortions may result, for example, in the region of the layer appearing to ripple, as if a viewer were viewing the layer through water.
  • the data visualization system may impart motion upon a map layer corresponding to highways relative to a terrain image layer and an electrical grid layer, thus visually highlighting the location of roads on the map relative to the surrounding terrain and electrical infrastructure.
• the data visualization system imparts a local motion on portions of the layer surrounding the congested roads such that the roads in that region move or distort relative to the remainder of the road map layer.
• the data visualization system may impart a different local motion on portions of the electrical grid map layer corresponding to regions having increased power consumption.
• FIG. 11C illustrates one such distortion effect.
• FIG. 11C includes two screen shots 1130a and 1130b of the portion of the map 1110.
• Screen shot 1130a depicts the portion without distortion.
• In screen shot 1130b, local roads in the portion are distorted to depict high traffic volumes.
  • each layer of visual data to be displayed is first projected onto a transparent array of geometric shapes, for example triangles.
• the data visualization system displays the projections overlaid on one another.
• the data visualization system imparts a rhythmic shifting to the vertices of the geometric shapes in a particular area, stretching or shrinking the content filling the geometric shapes. Additional rippling techniques, as well as different and/or additional visual effects, may be used to impart local motion on a portion of a layer without departing from the scope of the invention.
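The rhythmic vertex shifting described above can be sketched as a radial traveling wave applied to the vertices of the triangle mesh. This is an illustrative sketch; the wave form and its parameters are assumptions.

```python
import math

# Shift a mesh vertex rhythmically about a focus point, as if the layer were
# viewed through rippling water; triangles between shifted vertices stretch
# or shrink, distorting the content that fills them.
def ripple_vertex(vertex, focus, t, amplitude=2.0, wavelength=20.0, speed=10.0):
    x, y = vertex
    fx, fy = focus
    r = math.hypot(x - fx, y - fy)  # distance from the ripple's center
    if r == 0.0:
        return (x, y)
    # Radial displacement as a traveling sine wave.
    d = amplitude * math.sin(2.0 * math.pi * (r - speed * t) / wavelength)
    return (x + d * (x - fx) / r, y + d * (y - fy) / r)

moved = ripple_vertex((15.0, 0.0), focus=(0.0, 0.0), t=0.0)
```

Evaluating this per vertex each frame and redrawing the textured triangles produces the localized rippling effect.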
  • one or more additional layers of visual effects are added to a set of overlaid data layers displayed to a user.
  • the visual effect layers include opaque or partially transparent localized visual effects that include some form of dynamic movement. Suitable visual effects include fog, mist, rippling water, smoke, etc.
• the primary difference between the visual effects in the visual effects layer and the localized movement or distortion imparted on portions of data layers is that the visual effects in the visual effects layer preferably are colored, such that the colors of portions of underlying layers change as a result of the dynamic movement of the visual effects.
• any color changes in the displayed image result from changing combinations of the colors associated with overlapping positions in the data layers, as points in each layer overlap in different ways when portions of a data layer move or distort.
• FIGS. 14A-14E are a series of 3-dimensional images of a tooth displayed on display 108 of data visualization system 100 in FIG. 1, according to an illustrative embodiment of the invention. This series of images is illustrative of volumetric images that can be processed and displayed by data visualization system 100 in FIG. 1.
  • FIGS. 14A, 14B, and 14E show displays of the tooth including the crown and root, and illustrate how data visualization system 100 of FIG. 1 can impart motion on graphical features of the tooth to highlight those graphical features. In some embodiments, if there is more than one graphical feature, the same motion is imparted on each of the graphical features.
  • FIGS. 14C and 14D show a user interface employed by the data visualization system 100 in FIG. 1 for controlling the display and/or motion of image graphical features.
  • Graphical features of displayed images depicted with dashed lines in FIGS. 14A- 14E correspond to initial positions of those graphical features relative to the displayed object to aid the reader in discerning imparted motion depicted in the figures.
  • the data visualization system 100 of FIG. 1 need not display these lines.
  • FIG. 14A shows a time series of images of a tooth, an example of a volumetric image displayed by data visualization system 100 of FIG. 1.
  • Graphical data from which the volumetric image was formed may have been captured by a medical imaging device, e.g., a panoramic X-ray machine, that is part of data visualization system 100 of FIG. 1.
  • the data visualization system displays the tooth in various orientations, 1402, 1404, 1406, and 1408, respectively, moving the tooth as illustrated by motion arrows 1414 and 1416.
• Motion 1416 is a rotation of the tooth about longitudinal axis 1401, and motion 1414 is a rotation of the tooth in a fixed plane relative to longitudinal axis 1401. In each orientation, both root 1410 and crown 1412 are displayed.
  • FIG. 14B shows a time series of images of the tooth from FIG. 14A which illustrates how the data visualization system 100 of FIG. 1 can highlight a graphical feature of a volumetric image by imparting relative motion to that graphical feature.
  • the data visualization system displays the tooth in various orientations, 1422, 1424, 1426, and 1428, respectively, moving the tooth as illustrated by motion arrows 1414 and 1416, which are similar to those shown in FIG. 14A.
• in orientations 1424, 1426, and 1428, the data visualization system 100 of FIG. 1 imparts motion 1418 on root 1410.
  • the dashed lines 1409 correspond to initial positions of root 1410 relative to the rest of the tooth.
  • root 1410, crown 1412, and the relative motion of root 1410 are displayed.
  • FIG. 14C is a diagram of a user interface window 1430 generated by the data visualization system 100 of FIG. 1 for controlling the display and/or motion of graphical features visually representing portions of volumetric images.
  • the tooth of either FIG. 14A or FIG. 14B could be displayed in user interface window 1430.
• user interface window 1430 is divided into two sub-windows: window 1432 containing a volumetric image, namely the tooth of FIG. 14B, and window 1434 containing a graphical user interface in which a human user, interacting with data visualization system 100 via user interface 110, may select various options for displaying and/or imparting motion on the graphical features of the image in window 1432.
• a human user interacting with data visualization system 100 of FIG. 1 can use graphical user interface window 1434 to choose whether to display crown 1412, root 1410, both crown 1412 and root 1410, or neither crown 1412 nor root 1410 by selecting or de-selecting the appropriate boxes 1436. Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1). Alternatively, a user could click directly on the image in window 1432 to select graphical features for display. In FIG. 14C, both display boxes 1436 are selected, or marked "x," such that both graphical features of the tooth, crown 1412 and root 1410, are displayed.
  • a human user interacting with data visualization system 100 of FIG. 1 can choose what kind of motion to impart to crown 1412 or root 1410 by selecting an option in the drop-down menus 1438.
• Examples of motion imparted by data visualization system 100 of FIG. 1 could be lateral motion, vertical motion, horizontal motion, circular motion, full- or partial-rotation motion, or no motion ("none").
• Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1).
  • a user could click or drag directly on the image in window 1432 of data visualization system 100 of FIG. 1 to select graphical features visually representing portions of the data on which the data visualization system will impart motion.
• In FIG. 14C, the selection of options in drop-down menus 1438 is such that no motion is to be imparted on crown 1412, while vertical motion is to be imparted on root 1410.
  • the data visualization system imparts motion 1418 on root 1410 relative to crown 1412.
  • the initial position of root 1410 is depicted with dashed lines 1409.
• the data visualization system 100 of FIG. 1 could be used to identify a graphical feature visually representing a portion of a volumetric image by determining bounds of selected subject matter based on user input. Such a determination could be made by a human user using a mouse click, keyboard query, or a cursor brush to select an appropriate graphical feature, e.g., root 1410 or crown 1412, after which the data visualization system automatically determines the bounds of the selected graphical feature based on the human user input. In some embodiments, the user input could also be received from an external source. Alternatively, the data visualization system 100 of FIG. 1 could determine the bounds of a selected graphical feature using, e.g., Artificial Intelligence (AI) or image processing algorithms.
  • the metes and bounds of a graphical feature could be determined, for example, by finding the parts of a volumetric image that are similar to other parts of the same volumetric image based on a similarity metric. For example, a user may select a part of a map image corresponding to bridges, and this selection would result in the data visualization system 100 in FIG. 1 identifying and displaying all other bridges on the map.
  • the metrics on which similarity is based could be, for example, entered by the user in a query, or initially estimated by an AI algorithm executed by data visualization system 100 of FIG. 1.
• all the voxels of a volumetric image within a certain user-defined range could be displayed, or all the voxels in a particular part of a volumetric image within a predetermined range of the AI algorithm could be displayed.
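Selecting the voxels that fall within a user-defined range can be sketched as a simple filter. The voxel representation below (coordinate plus scalar value) is an assumption made for illustration.

```python
# Select the voxels of a volumetric image whose values fall within a
# user-defined range, e.g., to isolate a graphical feature such as a root
# for independent display and motion. Voxels here are (x, y, z, value) tuples.
def voxels_in_range(voxels, lo, hi):
    return [v for v in voxels if lo <= v[3] <= hi]

voxels = [(0, 0, 0, 0.2), (0, 0, 1, 0.8), (1, 0, 0, 0.5)]
selected = voxels_in_range(voxels, 0.4, 0.9)
```

An AI-estimated range could be substituted for the user-defined bounds without changing the selection step.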
• FIG. 14D is a diagram of the user interface window 1430 of FIG. 14C generated by the data visualization system 100 of FIG. 1.
  • only one of the display boxes 1436 is selected such that root 1410 is displayed, and no crown is displayed.
  • the selection of options in drop-down menus 1438 is such that data visualization system 100 of FIG. 1 imparts vertical motion 1418 on root 1410.
  • FIG. 14E shows a time series of images of the tooth from FIG. 14B which illustrates how the data visualization system 100 of FIG. 1 can highlight a part of a graphical feature of a volumetric image by imparting motion to that part of the graphical feature relative to the remainder of the graphical feature.
  • the data visualization system displays the tooth in various orientations, 1440, 1442, 1444, and 1446, respectively, moving the tooth as illustrated by motion arrows 1414 and 1416, which are similar to those shown in FIG. 14B.
  • the data visualization system 100 of FIG. 1 imparts motion 1450 on portion 1454 of root 1410.
  • Portion 1454 of root 1410 is contained within the dashed box 1452 in image 1440.
  • the dashed lines 1456 correspond to initial positions of the portion 1454 of root 1410 relative to the remainder of root 1410. Furthermore, in each orientation, i.e., when viewed by the reader from multiple perspectives, root 1410, crown 1412, and the relative motion of portion 1454 of root 1410, are displayed.
• To select portion 1454 of root 1410 in FIG. 14E, a human user interacting with the data visualization system 100 of FIG. 1 could create dashed box 1452 within a user interface window of the data visualization system. Dashed box 1452 could be created by performing a click-and-drag operation with a mouse, for example.
  • data visualization system 100 of FIG. 1 identifies portion 1454 of root 1410 in FIG. 14E using a computer-executable pattern recognition or signal processing algorithm. In some embodiments, such an identification may not involve user input.
  • data visualization system 100 of FIG.1 could impart motion on a part of a graphical feature of a volumetric image relative to the remainder of the graphical feature, as well as impart motion on the part itself relative to the remainder of the image.
• Referring to FIGS. 14B and 14E, an example of such a display would be one in which data visualization system 100 of FIG. 1 imparts motion 1450 on portion 1454 of root 1410 and simultaneously imparts vertical motion 1418 on root 1410.
  • data visualization system 100 of FIG. 1 could impart a first motion on the entire displayed volumetric image, while simultaneously imparting a second motion on a graphical feature of the volumetric image relative to the remainder of the volumetric image.
  • In FIGS. 14C and 14D, an example would be one in which data visualization system 100 of FIG. 1 imparts rotational motion 1416 (of FIGS. 14A, 14B, and 14E) on the entire tooth, and simultaneously imparts vertical motion 1418 on root 1410.
  • a human user interacting with the data visualization system 100 of FIG. 1 could select a motion to impart on the entire displayed volumetric image via interactions with user interface windows 1432 and 1434 in either of these figures.
  • This approach to representing graphical data is advantageous because, although the user could, by careful attention, identify the bounds of the root on the tooth in FIGS. 14A-14E, automatically determining the bounds of the root and displaying the root in motion cause it to "jump out" at the human viewer. In some instances, e.g., assuming that portion 1454 of root 1410 is diseased, it may be desirable to have data visualization system 100 of FIG. 1 display this graphical feature of the root such that it "stands out" from the rest of the root. This approach could be important for purposes of planning a medical procedure, or for verifying that the data visualization system 100 of FIG. 1 has correctly identified the diseased portion of the root.
  • any given 2-dimensional slice of the tooth would not provide a full understanding of where the diseased portion of the root is located.
  • By using motion-based visualization for the root, one reduces the risk of misidentifying the diseased portion 1454, or the extent thereof. For instance, if one were to highlight the diseased portion 1454 using a different color or texture instead of imparting motion on it, the other displayed parts of the tooth, e.g., root 1410 and crown 1412, may be obscured.
  • In the illustrations of FIGS. 14A-14E, the relative motions imparted on graphical features of the image by data visualization system 100 of FIG. 1 may also be vibrations, harmonic or random.
  • portion 1454 of root 1410 could, for example, have vertical or lateral vibrational motion relative to the remainder of root 1410.
  • the motion is not necessarily a change in position from some rest position; it can, for instance be a small change in shape, such as a rhythmic contraction or expansion of portion 1454 of root 1410 in FIG. 14E.
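A change-of-shape motion such as the rhythmic contraction or expansion mentioned above could be realized by periodically scaling a feature's points about their centroid. The following is a hedged sketch; the function name and parameter values are hypothetical:

```python
import math

def pulse(points, t, amplitude=0.1, period=1.0):
    """Rhythmically contract/expand a set of 2-D points about their centroid.

    At time t, every point is scaled about the centroid by a factor of
    1 + amplitude * sin(2*pi*t / period), producing a small periodic change
    in shape rather than a change in position.
    """
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    s = 1.0 + amplitude * math.sin(2 * math.pi * t / period)
    return [(cx + (x - cx) * s, cy + (y - cy) * s) for x, y in points]

# A unit feature (a square) at peak expansion (t = period/4, so scale = 1.1).
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
expanded = pulse(square, t=0.25)
```

Rendering successive frames with increasing `t` would produce the pulsing effect.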
  • FIGS. 15A-15E are a series of medical images displayed on display 108 of data visualization system 100 in FIG. 1, according to an illustrative embodiment of the invention. These series of images are illustrative of volumetric images that can be processed from 2-dimensional data and displayed by data visualization system 100 in FIG. 1.
  • FIGS. 15A and 15B illustrate the application of a medical imaging technique in which 2-dimensional medical images or "slices" are received by data visualization system 100 of FIG. 1 and processed to create and display a composite volumetric medical image.
  • FIGS. 15C and 15D show displays of graphical features of the image, and illustrate how data visualization system 100 of FIG. 1 can impart motion on graphical features of the image to highlight those graphical features.
  • FIG. 15D shows a user interface employed by the data visualization system 100 in FIG. 1 for controlling the display and/or motion of image graphical features.
  • Graphical features of displayed images depicted with dashed lines in FIGS. 15C and 15D correspond to initial positions of those graphical features relative to the displayed image to aid the reader in discerning imparted motion depicted in the figures.
  • the data visualization system 100 of FIG. 1 need not display these lines.
  • FIG. 15A shows the application of a medical imaging technique in which 2- dimensional medical images or "slices" of a human skull 1500 are captured and displayed by data visualization system 100 of FIG. 1.
  • Human skull 1500 contains, among other graphical features, a tumor 1506 and brain matter 1504.
  • the slices could be captured by a medical imaging device that is part of data visualization system 100 of FIG. 1.
  • the medical imaging device could be a CAT scan machine or an MRI machine, which produce CAT scan images or MRI images, respectively.
  • the medical imaging device of data visualization system 100 of FIG. 1 captures slices 1502 of human skull 1500.
  • six slices, 1502a, 1502b, 1502c, 1502d, 1502e, and 1502f, respectively, are captured and displayed.
  • the number of slices captured may vary, i.e., in some instances as few as 1 or 2 slices may be captured, while in other instances, up to 100 slices may be captured.
  • Each slice displayed by data visualization system 100 of FIG. 1 represents a particular cross-section of human skull 1500, and contains information about tissue density across the cross section. For example, assuming that the tissue density of tumor 1506 is substantially different than brain matter 1504, slices 1502a and 1502f could represent portions of human skull 1500 vertically above and below, respectively, tumor 1506, and thus, do not contain any part of the tumor. In contrast, slices 1502b, 1502c, 1502d, and 1502e, each contain a particular portion of tumor 1506.
  • Data visualization system 100 of FIG. 1 displays each of the slices in two dimensions in FIG. 15A, but need not display the schematic of slices 1502 within human skull 1500.
  • FIG. 15B illustrates the application of a medical image processing technique in which a composite volumetric medical image of a part of the human skull of FIG. 15A is created and displayed on data visualization system 100 of FIG. 1.
  • the data visualization system 100 of FIG. 1 assigns each of the two-dimensional image slices 1502a, 1502b, 1502c, 1502d, 1502e, and 1502f, respectively, of FIG. 15A a certain depth (or height), such that the slices now have 3 dimensions (left side of FIG. 15B).
  • Each of these slices contains information about tissue density across the cross section and may contain a part of tumor 1506 and/or brain matter 1504.
  • the data visualization system 100 of FIG. 1 displays this stack of slices as a composite volumetric image (right side of FIG. 15B).
  • the human skull 1500 is depicted merely for reference and need not be displayed by the data visualization system 100 of FIG. 1.
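The compositing step, in which 2-dimensional slices are assigned depths and assembled into a volumetric image, might be sketched as follows. This is a simplified illustration; a real system would interpolate between slices and honor scanner-reported spacing:

```python
def stack_slices(slices, slice_thickness=1.0):
    """Assemble 2-D image slices into a volumetric image.

    Each slice is a 2-D list of density values; the result maps each
    (x, y, z) voxel to a density, with z derived from the slice index and
    an assumed uniform per-slice thickness.
    """
    volume = {}
    for k, sl in enumerate(slices):
        z = k * slice_thickness
        for y, row in enumerate(sl):
            for x, density in enumerate(row):
                volume[(x, y, z)] = density
    return volume

# Two tiny 2x2 slices standing in for slices such as 1502a and 1502b.
slices = [[[1, 2], [3, 4]],
          [[5, 6], [7, 8]]]
vol = stack_slices(slices, slice_thickness=2.0)
```

The resulting voxel map can then be rendered, with selected voxels set in motion relative to the rest.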
  • FIG. 15C shows a set of slices of the skull from FIG. 15B which illustrates how the data visualization system 100 of FIG. 1 can highlight a graphical feature within each slice of a volumetric image by imparting motion to that graphical feature relative to the slice.
  • the data visualization system 100 of FIG. 1 displays the slices in two orientations, 1510 and 1520, respectively, and imparts motion on tumor 1506 within each slice as illustrated by motion arrows 1530 and 1540.
  • data visualization system 100 of FIG. 1 imparts lateral motion on tumor 1506 within each slice 1502a, 1502b, 1502c, 1502d, 1502e, and 1502f, respectively.
  • the dashed lines 1507 or 1508 correspond to initial positions of tumor 1506 relative to the rest of the slice.
  • the quick determination and display of a tumor in a volumetric image by data visualization system 100 of FIG. 1 in this manner is significantly advantageous.
  • Although the user could, by careful attention, identify the bounds of tumor 1506 in FIGS. 15A-15C, having data visualization system 100 of FIG. 1 automatically determine the bounds of the tumor and display the tumor in motion in 3 dimensions causes it to "jump out" at the human viewer. It is also beneficial to see tumor 1506 displayed volumetrically in relation to the other anatomical portions in human skull 1500.
  • the data visualization system 100 of FIG. 1 can do so without obscuring originally presented information, for example, human skull 1500 or brain matter 1504.
  • FIG. 15D is a diagram of a user interface window 1550 generated by the data visualization system 100 of FIG. 1 for controlling the display and/or motion of graphical features of volumetric images.
  • the composite volumetric images, or the individual slices, of FIGS. 15A-15C could be displayed in user interface window 1550.
  • user interface window 1550 of data visualization system 100 of FIG. 1 is divided into two sub-windows: window 1560 containing the composite volumetric image of human skull 1500 from FIG. 15A, and window 1570 containing a graphical user interface in which a human user, interacting with data visualization system 100 of FIG. 1 via user interface 110, may select various options for displaying and/or imparting motion on the graphical features of the image in window 1560.
  • a human user interacting with data visualization system 100 of FIG. 1 can use graphical user interface window 1570 to choose whether to display the human skull 1500 (not displayed in FIG. 15D), brain matter 1504, or tumor 1506, by selecting or de-selecting the appropriate boxes 1584, 1582, or 1580, respectively.
  • Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1).
  • a user could click directly on the image in window 1560 to select graphical features for display.
  • display boxes 1580 and 1582 are selected or marked "x" such that data visualization system 100 of FIG. 1 displays brain matter 1504 and tumor 1506.
  • a human user interacting with data visualization system 100 of FIG. 1 can choose what kind of motion to impart to human skull 1500, brain matter 1504, or tumor 1506 by selecting an option in the drop-down menus 1590, 1592, and 1594 respectively.
  • Examples of motion imparted by data visualization system 100 of FIG. 1 could be lateral motion, vertical motion, horizontal motion, circular motion, full- or partial-rotation motion, or no motion ("none"). Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1).
  • a user could click or drag directly on the image in window 1560 of data visualization system 100 of FIG. 1 to select graphical features visually representing portions of data on which the data visualization system will impart motion.
  • the selection of options in drop-down menus 1590, 1592, and 1594 is such that no motion is to be imparted on human skull 1500 (not displayed in FIG. 15D) or brain matter 1504, while lateral motion 1540 is to be imparted on tumor 1506.
  • the data visualization system 100 of FIG. 1 imparts motion 1540 on tumor 1506 relative to brain matter 1504.
  • the initial position of tumor 1506 is depicted with dashed lines 1508.
  • data visualization system 100 of FIG. 1 could impart a first motion on the entire displayed composite volumetric image, while simultaneously imparting a second motion on a graphical feature of the volumetric image relative to the remainder of the volumetric image.
  • In FIGS. 15C-15D, an example would be one in which data visualization system 100 of FIG. 1 imparts a rotational motion on the entire composite image, and simultaneously imparts lateral motion on tumor 1506.
  • a human user interacting with the data visualization system 100 of FIG. 1 could select a motion to impart on the entire displayed volumetric image via interactions with user interface windows 1560 or 1570.
  • the data visualization system 100 of FIG. 1 could be used to identify a graphical feature visually representing a portion of a volumetric image by determining bounds of selected subject matter based on user input. Such a determination could be made by a human user using a mouse click, keyboard query, or a cursor brush to select an appropriate graphical feature, e.g., tumor 1506, after which the data visualization system automatically determines the bound of the selected graphical feature based on the human user input.
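The bound-determination step, in which the system grows outward from a user's selection to find the extent of a feature such as tumor 1506, could be sketched as a region-growing pass over one slice. This is an assumed simplification; the tolerance parameter and 4-connectivity are illustrative choices:

```python
def grow_region(volume_slice, seed, tolerance):
    """Determine the bounds of a feature from a single user selection.

    Starting from the clicked pixel, collect neighbouring pixels whose
    density is within `tolerance` of the seed density, then report the
    bounding box as ((min_row, min_col), (max_row, max_col)).
    """
    rows, cols = len(volume_slice), len(volume_slice[0])
    seed_val = volume_slice[seed[0]][seed[1]]
    region, stack = {seed}, [seed]
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region
                    and abs(volume_slice[nr][nc] - seed_val) <= tolerance):
                region.add((nr, nc))
                stack.append((nr, nc))
    rs = [r for r, _ in region]
    cs = [c for _, c in region]
    return (min(rs), min(cs)), (max(rs), max(cs))

# A dense 2x2 patch (densities ~55-58) inside uniform background tissue.
tissue = [[10, 10, 10, 10],
          [10, 55, 57, 10],
          [10, 56, 58, 10],
          [10, 10, 10, 10]]
bounds = grow_region(tissue, seed=(1, 1), tolerance=5)
```

Here a click at (1, 1) yields the bounding box of the dense patch, which the system could then set in motion.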
  • a user may interact with data visualization system 100 of FIG. 1 to enter a query (112 in FIG. 1), e.g., enter a range of tissue density corresponding to tumor tissue density, and this query would result in the data visualization system 100 of FIG. 1 identifying and displaying graphical features of the volumetric image with this range of tissue density.
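The density-range query could be as simple as filtering the voxel map by the user-supplied range, as in this hypothetical sketch:

```python
def voxels_in_density_range(volume, lo, hi):
    """Return the voxel positions whose density falls in [lo, hi].

    Sketch of the query step: the user supplies a tissue-density range and
    the system selects the matching portion of the volumetric image for
    display and motion.
    """
    return {pos for pos, density in volume.items() if lo <= density <= hi}

# Toy voxel map; densities 75-90 stand in for an assumed tumor range.
volume = {(0, 0, 0): 30, (1, 0, 0): 80, (0, 1, 0): 85, (1, 1, 0): 20}
tumor_voxels = voxels_in_density_range(volume, 75, 90)
```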
  • data visualization system 100 of FIG. 1 identifies a graphical feature representing a portion of a volumetric image using a computer-executable pattern recognition or signal processing algorithm. Such an identification may not involve user input.
  • the relative motions imparted by data visualization system 100 of FIG. 1 may also be vibrations - harmonic or random.
  • tumor 1506 could, for example, have vertical or lateral vibrational motion relative to the remainder of the image.
  • the motion is not necessarily a change in position from some rest position; it can, for instance be a small change in shape, such as a rhythmic contraction or expansion of tumor 1506 in FIGS. 15C and 15D.
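The harmonic and random vibration modes mentioned above amount to computing a per-frame displacement from a rest position. One plausible sketch, with arbitrary amplitude and frequency values:

```python
import math, random

def vibration_offset(t, mode="harmonic", amplitude=2.0, frequency=1.5, rng=None):
    """Per-frame displacement for a vibrating graphical feature.

    "harmonic" yields a smooth sinusoidal oscillation about the rest
    position; "random" yields jitter drawn uniformly from
    [-amplitude, amplitude]. Parameter names are illustrative.
    """
    if mode == "harmonic":
        return amplitude * math.sin(2 * math.pi * frequency * t)
    if mode == "random":
        return (rng or random).uniform(-amplitude, amplitude)
    raise ValueError("unknown vibration mode: %r" % mode)

# Ten frames of harmonic vibration, e.g., for tumor 1506.
offsets = [vibration_offset(t / 10.0) for t in range(10)]
```

At each frame, the feature is drawn displaced by the offset along the chosen axis while the rest of the image stays still.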
  • FIGS. 16A-16D are a set of architectural drawings displayed on display 108 of data visualization system 100 in FIG. 1, according to an illustrative embodiment of the invention. These series of images are illustrative of data that can be processed and displayed by data visualization system 100 in FIG. 1.
  • FIG. 16A is a set of architectural drawings in which multiple graphical features of a building, for example, the water supply and electrical system, are displayed.
  • FIGS. 16B and 16C show displays of the architectural drawings that illustrate how data visualization system 100 of FIG. 1 can impart motion on graphical features visually representing portions of the drawings to highlight those graphical features.
  • FIG. 16D shows a user interface employed by the data visualization system 100 in FIG. 1 for controlling the display and/or motion of image portions.
  • Graphical features of displayed images depicted with dashed lines in FIGS. 16B-16D correspond to initial positions of those graphical features relative to the displayed image to aid the reader in discerning imparted motion depicted in the figures.
  • the data visualization system 100 of FIG. 1 need not display these lines.
  • FIG. 16A shows a set of architectural drawings, examples of volumetric images displayed by data visualization system 100 of FIG. 1.
  • Graphical data from which the volumetric images were formed may have been captured by a device for generating architectural drawings that is part of data visualization system 100 of FIG. 1.
  • Data visualization system 100 of FIG. 1 displays multiple graphical features of the architectural drawings, for example, the water supply 1608, electrical system 1606, furnishings 1616, and fixtures 1610.
  • architectural drawing 1600, on the left of FIG. 16A, illustrates two floors, 1602 and 1604, of a building, while architectural drawings 1612 and 1614, on the right of FIG. 16A, show floor 1604 of drawing 1600 from a top-view and a side-view, respectively.
  • FIG. 16B shows a time series of images of architectural drawing 1600 from FIG. 16A which illustrates how the data visualization system 100 of FIG. 1 can highlight a graphical feature visually representing a portion of a volumetric image by imparting relative motion to that graphical feature.
  • the data visualization system displays drawing 1600 in various states, 1630, 1640, and 1650, respectively.
  • In states 1640 and 1650, data visualization system 100 of FIG. 1 imparts motion 1660 on water supply 1608.
  • the dashed lines 1609 and 1611 correspond to initial positions of water supply 1608 relative to the rest of the architectural drawing.
  • FIG. 16C shows a time series of images of architectural drawings 1612 and 1614 from FIG. 16A which illustrates how the data visualization system 100 of FIG. 1 can highlight a graphical feature visually representing a portion of a volumetric image by imparting relative motion to that graphical feature.
  • the data visualization system displays drawings 1612 and 1614 in various states, 1635, 1645, and 1655, respectively.
  • In states 1645 and 1655, data visualization system 100 of FIG. 1 imparts lateral motion 1680 on water supply 1608.
  • FIG. 16C is an illustration of the same motion imparted by data visualization system 100 of FIG. 1 as in FIG. 16B, but viewed from a different user point-of-view or perspective.
  • FIG. 16D is a diagram of a user interface window 1690 generated by the data visualization system 100 of FIG. 1 for controlling the display and/or motion of graphical features of volumetric images.
  • the architectural drawings of FIGS. 16A- 16C could be displayed in user interface window 1690.
  • user interface window 1690 of data visualization system 100 of FIG. 1 is divided into two sub-windows: window 1694 containing the architectural drawing in state 1655 of FIG. 16C, and window 1692 containing a graphical user interface in which a human user, interacting with data visualization system 100 of FIG. 1 via user interface 110, may select various options for displaying and/or imparting motion on the graphical features of the image in window 1694.
  • a human user interacting with data visualization system 100 of FIG. 1 can use graphical user interface window 1692 to choose whether to display, among others, water supply 1608, electrical system 1606, fixtures 1610, furnishings 1616, or any combination thereof, by selecting or deselecting the appropriate boxes 1636.
  • a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1).
  • a user could click directly on the image in window 1694 to select graphical features for display.
  • display boxes 1636 are selected or marked "x" such that data visualization system 100 of FIG. 1 displays water supply 1608, fixtures 1610, electrical system 1606, furnishings 1616, and structure 1622 of drawing 1655 in window 1694.
  • the display of a selected graphical feature may be accomplished by, for example, the data visualization system 100 of FIG. 1 determining and displaying all the parts of the architectural drawing image which are similar in appearance to the user-selected graphical feature(s), e.g., water supply 1608.
  • data visualization system 100 of FIG. 1 identifies graphical features in the architectural drawings in FIGS. 16A-16D using a computer-executable pattern recognition or signal processing algorithm. Such an identification may not involve user input.
  • a human user interacting with data visualization system 100 of FIG. 1 can choose what kind of motion to impart to water supply 1608, fixtures 1610, electrical system 1606, furnishings 1616, and structure 1622, by selecting an option in the drop-down menus 1638 respectively.
  • Examples of motion imparted by data visualization system 100 of FIG. 1 could be lateral motion, vertical motion, or circular motion as shown in expanded drop-down menu 1642. Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1).
  • a user could click or drag directly on the image in window 1694 of data visualization system 100 of FIG. 1 to select graphical features on which the data visualization system will impart motion.
  • the selection of options in drop-down menus 1638 is such that lateral motion 1680 is to be imparted on water supply 1608 and no motion is to be imparted on electrical system 1606.
  • the data visualization system 100 of FIG. 1 imparts motion 1680 on water supply 1608 relative to architectural drawing 1655.
  • the initial position of water supply 1608 is depicted with dashed lines 1613.
  • data visualization system 100 of FIG. 1 could impart a first motion on the entire volumetric image, while simultaneously imparting a second motion on a graphical feature of the volumetric image relative to the remainder of the volumetric image.
  • In FIGS. 16B-16D, an example would be one in which data visualization system 100 of FIG. 1 imparts a rotational motion on the entire architectural drawing, while simultaneously imparting lateral motion 1680 on water supply 1608.
  • In FIG. 16D, a human user interacting with the data visualization system 100 of FIG. 1 could select a motion to impart on the entire displayed volumetric image via user interface windows 1692 or 1694.
  • the relative motions imparted by data visualization system 100 of FIG. 1 may also be vibrations - harmonic or random.
  • water supply 1608 could, for example, have vertical or lateral vibrational motion relative to the remainder of the architectural drawing.
  • the motion is not necessarily a change in position from some rest position; it can, for instance be a small change in shape, such as a rhythmic contraction or expansion of water supply 1608 in FIGS. 16B-16D.
  • the data visualization system may be used to display security screening images in which particular graphical features of the image are displayed with motion imparted on them.
  • the data processed by the data visualization system may include a mechanical engineering drawing, e.g., of a bridge design; an automotive design and test evaluation drawing; or a geological model, e.g., a model of the earth in which identified natural resource deposits move relative to other underground features, such as ground water and soil strata.
  • the data visualization system may be used to display data that may not naturally lend themselves to 3-dimensional displays, e.g., epidemiological data in which certain demographic data, e.g., age, income, and weight, are mapped to x-y-z dimensions of a volumetric image and other data, such as height, blood pressure, or lung capacity, are mapped to color, brightness, or another visual parameter.
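Such a mapping from non-spatial data to a volumetric display could be sketched as follows. The field names, the 80-200 mmHg normalization range, and the gray-level encoding are all assumptions for illustration:

```python
def map_record(record, spatial=("age", "income", "weight"), color="blood_pressure"):
    """Map one epidemiological record to a point in a volumetric display.

    Three chosen fields become x-y-z coordinates, and a fourth is mapped
    to a gray level in [0, 255] after clamping to an assumed 80-200 mmHg
    range.
    """
    x, y, z = (record[k] for k in spatial)
    bp = record[color]
    shade = int(round(255 * (min(max(bp, 80), 200) - 80) / 120))
    return (x, y, z), shade

point, shade = map_record({"age": 45, "income": 52000, "weight": 70,
                           "blood_pressure": 140})
```

Records satisfying some criterion could then be rendered with imparted motion, exactly as with anatomical features.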
  • the data visualization system may be used in any domain in which one could use a 2-dimensional, 3-dimensional, or volumetric image for displaying and analyzing data, and in which one is interested in locating logical collections within the data set image relative to the whole data set. It is therefore intended that the following claims cover all such alterations and modifications as fall within the true spirit and scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

Systems and methods for displaying data using motion-based visualization techniques are provided. In one example, a data-display system employs a display in which the representations of data objects are caused to move on the display in order to convey information about the represented data objects. In another example, icons in a link-analysis display that represent data objects satisfying a selection criterion are made to execute distinctive motion. In a third example, three-dimensional models of moving bodies in whose features components of respective data objects are encoded are projected onto a screen plane, and the resultant values are used to generate the display. In other examples, systems and methods for displaying data with multiple graphical features visually representing portions of the data and for imparting motion to a first graphical feature relative to a remainder of the volumetric image to highlight the first graphical feature are described.

Description

MOTION-BASED VISUALIZATION
RELATED APPLICATIONS
This application claims priority to U.S. Patent Application No. 11/961,242 filed on December 20, 2007, and U.S. Patent Application No. 12/169,934 filed on July 9, 2008. The contents of each application are incorporated herein by reference.
STATEMENT REGARDING FEDERALLY SPONSORED
RESEARCH OR DEVELOPMENT
This invention was made with U.S. Government support under Contract No. NMA401-02-C-0019, awarded by the National Imaging and Mapping Agency. The U.S. Government has certain rights in this invention.
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention is directed to data display. It particularly concerns effectively displaying high-dimensional and complex relational data, including volumetric data, e.g., medical image data, security screening data, or architectural drawing data.
Background Information
It is now commonplace to employ computers to sift desired information from databases far too large for individual-human comprehension. Software has been developed for performing analysis of a highly sophisticated nature, and such software is often able to detect trends and patterns in the data that would, as a practical matter, be impossible for an individual human being to find.
The converse is often also true. Particularly when the question to be asked does not lend itself to easy definition, computers often have difficulty detecting patterns that are readily apparent to human beings. And this human capability is best brought to bear when the data in question are presented graphically. Data presented graphically usually are more readily understandable than the same data presented only in, say, tabular form. But the degree of the resultant understanding greatly depends on the nature of the display, and determining what the appropriate display should be can present a significant problem.
True, some data almost automatically suggest the type of presentation to which they are best suited. The speed of an airplane as a function of time, for instance, would in most cases simply be presented in a simple x-y plot. And there rarely is any question about the general form of display appropriate to the data that a camera takes. In the former case, the presentation is trivial, since speed and time are the only variables, so they are readily associated with two presentation axes. In the latter, camera case, the data suggest the mode of presentation just as readily, since the domain is a two-dimensional scene and the range is spanned by the colors conventionally employed in printing or presentation on a display screen.
But the way to represent many other types of data is significantly harder to determine. An example is hyperspectral data. Typically, such data are similar to those that result from a camera in the sense that the domain is usually a two-dimensional scene. But the value taken for each picture element ("pixel") in the scene is not a vector representing visible-color components, such as red, green, and blue or cyan, magenta, and yellow. Instead, it is a vector consisting of a relatively large number of components, each of which typically represents some aspect of the radiation received from a respective wavelength band. And the bands often fall outside the visual range. Because of the data's high dimensionality and the limited dimensionality of human visual perception, some degree of selectivity in data presentation is unavoidable, and the decisions that are involved in making the selections have a significant impact on the presentation's usefulness to the human viewer.
Another example is volumetric medical image data. Volumetric images may be quite complex. In such images, a volume element is referred to as a "voxel" (analogous to a pixel in two-dimensional space). Depending on the transparency assigned to the voxels, graphical features that may be of interest to a viewer may be obscured by other voxels. Similarly, the complexity of volumetric images in some fields, for example, medical imaging, results in the boundaries of various features being difficult to detect.
High dimensionality also occurs in other kinds of data. In large medical, forensic, and intelligence databases, for example, data objects may represent respective individual people, and the dimensions may be age, gender, height, weight, income, etc.
Presentation problems can arise even in data sets that are not necessarily high-dimensional. Consider link analysis, for example. This type of analysis is used to study subjects as disparate as communications networks and criminal enterprises. Its purpose is to find helpful patterns in the connections between studied entities. To help the user detect such patterns, nodes on a display represent various entities, and lines connecting the nodes represent various relationships between them. In the case of communications networks, for example, the nodes may be, say, Internet Protocol ("IP") routers, and the lines would represent the interconnecting communication links. In the case of a criminal enterprise, the nodes may represent people, organizations, buildings, or other entities under surveillance, while the lines may represent known communications between the entities or represent other relationships, such as ownership, legal control, etc. If the amount of data being presented is large, the resulting diagram can be hard to comprehend even if the underlying data dimensionality is low.
To help human users employ such data and images effectively, there is a need for presentation or data display systems which make important features (e.g., patterns, structures, etc.) "stand out" from the other data presented on the display. For example, some link-analysis systems employ color, thickness, etc. to highlight the nodes and/or relationships that meet criteria of particular interest. A similar approach is commonly used in "brushing," which is sometimes used when representations of the same data objects are displayed simultaneously in different relative locations in different displays. (The displays can be on the screens of different monitors, for example, or on different parts of a single monitor's screen.) In brushing, a user employs a mouse or other device to select a subset of the objects represented by icons in one display, and the display system highlights the other displays' icons that represent the same objects. Another approach is the use of stereo and rotating perspective displays in data display systems. However, while stereo views and rotating perspective views increase the human user's understanding of volumetric data and images, they do not by themselves make portions of the displayed image stand out from the other data presented on the display.
Another technique, previously proposed to assist human users in distinguishing important graphical features in two-dimensional images, is to impart motion to these graphical features. Such a display technique takes advantage of the inherent ability of the human perception system to recognize patterns in data by quickly associating graphical features that are moving in the same fashion.
SUMMARY OF THE INVENTION
We provide systems and methods for processing graphical data which may be displayed as 2-dimensional, 3-dimensional or volumetric images. The display of this data uses motion-based visualization. Such motion-based visualization provides significant data comprehension benefits with single perspective displays, rotating perspective displays and stereo displays.
In one aspect, the invention relates to a security screening system. The security screening system includes a security screening device for outputting graphical data about an object being screened, a display, a memory for storing the graphical data output by the security screening device, and a processor. The processor is in communication with the memory and the display.
In one embodiment, the processor is configured to retrieve the graphical data from the memory, display the graphical data stored in the memory in a plurality of layers overlaid over one another, and impart a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight data representing suspicious materials depicted in the first layer. The suspicious materials may include, for example, metal or chemical substances. In one embodiment, the security screening device includes a plurality of image sources, wherein each image source generates graphical data corresponding to a respective layer of the plurality of displayed layers. In another embodiment, the processor may be configured to generate the plurality of displayed layers from the retrieved graphical data such that the data included in each respective layer shares a common characteristic with other data in the layer. In other embodiments, the processor is configured to impart a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of data within the first layer.
In another aspect, the invention relates to a medical image analysis system. The medical image analysis system includes a medical imaging device for outputting graphical data representing characteristics of a body structure being imaged, a display, a memory for storing the graphical data output by the medical imaging device, and a processor. The processor is in communication with the memory and the display.
In one embodiment, the processor is configured to retrieve the graphical data from the memory, display the graphical data stored in the memory in a plurality of layers overlaid over one another, and impart a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight characteristics of a body structure represented in the first layer. In one embodiment, the medical imaging device includes a plurality of image sources, wherein each image source generates graphical data corresponding to a respective layer of the plurality of displayed layers.
In another embodiment, the processor may be configured to generate the plurality of displayed layers from the retrieved graphical data such that the data included in each respective layer shares a common characteristic with other data in the layer. In other embodiments, the processor is configured to impart a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of the body structure represented within the first layer.
In a third aspect, the invention relates to a method for displaying graphical data. This method includes displaying a plurality of layers of graphical data overlaid over one another, imparting a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight data represented in the first layer, and imparting a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of data within the first layer. In some embodiments, at least one of the plurality of displayed layers includes image data generated by a different imaging source than that used to generate image data for a second layer in the plurality of displayed layers. Optionally, the method may include receiving data for display, and generating the plurality of displayed layers from the received data, wherein the data in each respective layer in the plurality of displayed layers shares a common characteristic.
In some embodiments, at least one layer in the plurality of displayed layers includes image data generated by a different imaging source than used to generate image data for a second layer in the plurality of displayed layers. Optionally, the different imaging sources could capture the image data using different imaging techniques.
In one embodiment, the first layer may comprise an image projected on an array of geometric shapes, and imparting the localized motion comprises shifting vertices of geometric shapes in the first area. In other embodiments, at least one of the displayed layers is at least partially transparent. In some embodiments, the method further includes receiving an input from a user identifying data to be highlighted, and determining the imparted motion in response to the user input.
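As a rough illustration of the vertex-shifting embodiment above, the following sketch (in Python, which the disclosure itself does not use) displaces only the mesh vertices that fall inside a selected rectangular area, leaving the rest of the layer static. The region bounds, amplitude, and frequency are illustrative parameters, not values taken from the disclosure.

```python
import math

def shift_vertices(vertices, region, t, amplitude=2.0, frequency=1.0):
    """Return a copy of `vertices` in which every vertex inside `region`
    (x0, y0, x1, y1) is displaced vertically by a sinusoid at time t.
    Vertices outside the region are left untouched, so only the chosen
    area of the layer appears to move."""
    x0, y0, x1, y1 = region
    offset = amplitude * math.sin(2 * math.pi * frequency * t)
    shifted = []
    for (x, y) in vertices:
        if x0 <= x <= x1 and y0 <= y <= y1:
            shifted.append((x, y + offset))
        else:
            shifted.append((x, y))
    return shifted
```

Calling this once per animation frame with an advancing `t` yields the localized motion; vertices shared between the moving and static areas would stretch the connecting geometry much as the "rubber band" lines described later.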
In another aspect, the invention relates to a data visualization system for displaying a volumetric image or an image having at least three dimensions. This system includes a user interface, a display, a memory for storing graphical data, and a processor in communication with the memory and the display. The processor is configured for retrieving graphical data from the memory and displaying a volumetric image incorporating a plurality of graphical features of the data on the display. Additionally, the processor is configured for processing input from the user interface to identify a first of the displayed graphical features, and to impart motion to the first identified graphical feature relative to the remainder of the volumetric image to highlight the first graphical feature. Optionally, the system can include a medical imaging device, a security screening device, or a device for generating architectural drawings, each of which may be used for outputting the graphical data stored in the memory. To this end, the graphical data may correspond to medical image data, security screening data, or architectural drawing data. In a further embodiment, the medical image data may be obtained from a plurality of medical imaging devices. Optionally, the medical image data may be captured from the medical imaging device using different imaging techniques. In some embodiments, the user interface of the data visualization system receives a user input.
The term graphical feature, as used herein, refers to a collection of one or more voxels having a logical connection to one another. For example, such voxels may correspond to portions of a same physical structure. The voxels may be logically connected due to their relationship to different structures in a common network (e.g., electrical networks, communication networks, social networks, etc.). Voxels may also be related because they share a common characteristic. For example, in medical imaging, the voxels in a graphical feature may be related in that they correspond to tissue having a common density or to a fluid flow having a common flow rate. Generally, the logical connection can be any criterion selected by a user for selecting groups of voxels or any criterion applied by an artificial intelligence, signal processing, or pattern recognition system designed for identifying relevant features in data.
In some embodiments, the volumetric image may be obtained from a plurality of image sources. Specifically, the graphical data may correspond to medical image data, security screening data, or architectural drawing data. In other embodiments, the medical image data may be captured using different imaging techniques. In certain embodiments, the processor may obtain three-dimensional data by analyzing a set of data having at least two dimensions, and in other embodiments, at least one part of the graphical data is received from a different source than used for a remainder of the graphical data.
In a further embodiment, the processor also receives an input from a user. This user input may comprise, among other inputs, a query, a cursor brush, or a mouse click. In some embodiments, based on the user input, the computer executable instructions cause the processor to identify one of the graphical features of the volumetric image by determining bounds of selected subject matter.
In another aspect, the invention relates to a method for analyzing data having at least three dimensions. This method includes receiving data for display, displaying a volumetric image incorporating a plurality of graphical features visually representing portions of the data, and imparting motion to one of the graphical features relative to the remainder of the volumetric image to highlight this graphical feature.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention description below refers to the accompanying drawings, of which:
FIG. 1 is a block diagram of a data visualization system in which the present invention's teachings may be implemented;
FIG. 2 is diagram of a display of the type often employed for link analysis;
FIG. 3 is a diagram that illustrates the result of using such a display in accordance with one of the invention's aspects;
FIG. 4 depicts exemplary histograms in which brushing is being performed;
FIGS. 5A, 5B, and 5C are plots of one component of the motion of a body that represents a data object in accordance with the present invention;

FIG. 6 is a diagram that illustrates one kind of three-dimensional body in whose features an object's data can be encoded in accordance with one of the invention's aspects;
FIG. 7 is a flow chart of the manner in which one embodiment of the invention operates;
FIG. 8 is a diagram that illustrates one way in which a display can be generated from three-dimensional models that represent data objects in accordance with one of the present invention's aspects;
FIG. 9 depicts a small segment of a display generated by projecting such models;
FIG. 10 depicts a larger segment of the display of FIG. 9;
FIGS. 11A-11C are illustrative outputs of a geographic information system (GIS), according to an illustrative embodiment;
FIGS. 12A and 12B depict simulated outputs of an X-Ray screening machine incorporating the data visualization technology described herein;
FIGS. 13A and 13B depict the output of a data visualization system integrated with a viewfinder, according to an illustrative embodiment;
FIG. 14A is a simple illustration of a tooth, an example of a volumetric image of the type that may be displayed in accordance with one of the invention's aspects;
FIG. 14B illustrates the volumetric image from FIG. 14A in which a first graphical feature of the volumetric image moves relative to the remainder of the volumetric image;
FIG. 14C is a diagram of a display containing a volumetric image with a plurality of graphical features in which all graphical features are displayed;

FIG. 14D is a diagram of a display containing a volumetric image with a plurality of graphical features, in which only one graphical feature is displayed;
FIG. 14E illustrates the volumetric image from FIG. 14A in which a part of the first graphical feature of the volumetric image possesses a localized motion relative to the remainder of the first graphical feature of the volumetric image;
FIG. 15A is a schematic that illustrates the application of a medical imaging technique in which 2-dimensional medical images or "slices" of a human skull are captured;
FIG. 15B is a schematic that illustrates the application of a medical imaging technique in which a composite volumetric medical image of a part of the human skull of FIG. 15A is created from a plurality of two-dimensional image slices;
FIG. 15C illustrates the medical image from FIG. 15B in which a first graphical feature of the slices of the image moves relative to the remainder of the image;
FIG. 15D is a diagram of a display containing the medical image from FIG. 15B in which some graphical features are displayed and/or moving;
FIG. 16A illustrates a volumetric architectural drawing in which multiple graphical features of the drawing are displayed;
FIG. 16B illustrates the volumetric architectural drawing from FIG. 16A in which one graphical feature of the drawing moves relative to the remainder of the drawing when viewed from one perspective;
FIG. 16C illustrates the volumetric architectural drawing from FIG. 16A in which one graphical feature of the drawing moves relative to the remainder of the drawing when viewed from a different perspective; and
FIG. 16D is a diagram of a display containing a volumetric architectural drawing from FIG. 16A in which some graphical features are displayed and/or moving.

DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT
In one aspect, the invention can be implemented on a wide range of hardware and/or software, of which FIG. 1 is an example. FIG. 1 corresponds to a data visualization system 100. The data visualization system 100 includes a processor 104, a memory 106, e.g., Random-Access Memory (RAM), a display 108, and a user interface 110. Processor 104 operates on data 102 to form an image in accordance with computer executable instructions loaded into memory 106. The instructions will ordinarily have been loaded into the memory from local persistent storage in the form of, e.g., a disc drive with which the memory communicates. The instructions may additionally or instead be received by way of user interface 110. In some embodiments, processor 104 may be a general purpose processor, such as a Central Processing Unit (CPU), a special purpose processor, such as a Graphics Processing Unit (GPU), or a combination thereof.
In some embodiments, data visualization system 100 of FIG. 1 may include a medical imaging device, a security screening device, or a device for generating architectural drawings, each of which can generate data 102. Thus, data 102 may correspond to medical image data, security screening data, or architectural drawing data. The medical image data may be obtained from a plurality of medical imaging devices, e.g., a Computer-Aided Tomography (CAT) scan machine or a Magnetic Resonance Imaging (MRI) machine. In the case of CAT scans or MRIs, for example, the processor 104 may obtain three-dimensional data by analyzing a set of data having at least two dimensions. In some embodiments, at least one part of the data is received from a different source than used for a remainder of the data, e.g., a set of CAT scans may be received from a CAT scan machine, and a set of MRIs may be received from an MRI machine. The data visualization system 100 of FIG. 1 then combines the data from the two sources to display a single volumetric image.
Data visualization system 100 displays the image on display 108. Display 108 may be any display device capable of interfacing with processor 104, e.g., an LCD display, a projector, a stereo display, a CRT monitor, or a combination thereof. One or more human users may interact with display 108 via user interface 110. For instance, system 100 could receive user input via user interface 110 from devices such as a mouse 116 and a keyboard 120. The user input could include, among other inputs, a query 112, a mouse click 114, or a cursor brush 118. The user input could also originate from devices connected to user interface 110 remotely, e.g., via a network connection.
Data visualization system 100 may receive data 102 into memory 106 in ways similar to those in which the instructions are received, e.g., from a disc drive with which the memory communicates. In addition, data 102 may be received from a network, e.g., a local area network, a wireless area network, or another processor. Electromagnetic signals representing the instructions may take any form. They are typically conductor-guided electrical signals, but they may also be visible- or invisible-light optical signals or microwave or other radio-frequency signals. The instructions indicate to the processor how it is to operate on data typically received in ways similar to those in which the instructions are. In accordance with some of those data operations, the instructions cause the processor to present some of the data to one or more human users by driving some type of display, such as the local monitor 126.
Processor 104 in data visualization system 100 is configured to operate on data 102. In particular, processor 104 is configured to process received input from user interface 110, to carry out operations on data 102, to identify graphical features or layers visually representing portions of the processed image, and to display the processed image or identified graphical features or layers of the processed image on display 108. For example, processor 104 can form an image and display an identified graphical feature of, or a layer in, this image on display 108, as will be described further in reference to FIGS. 11-16.
The present disclosure can be applied to representing a wide variety of data objects. One of the invention's aspects is particularly applicable to data that specify various types of relationships between data objects that the data also represent. For example, the data may represent the results of criminal investigations: certain of the data objects may represent surveillance targets such as people, buildings, or businesses. Of particular interest in the context of link analysis, some of the objects may include references to other objects.
FIG. 2 illustrates in a simplified manner how the system may present the objects in a display for link analysis. Each of the nodes 204, 206, 208, 210, 212, and 214 represents a different data object. For purposes of illustration, the drawing employs more than one style of icon to represent the nodes. This is not a necessary feature of the invention, but thus varying the icon type is one way to impart additional information. If the objects represent surveillance targets, for example, one of each object's fields may indicate what type of target it is, e.g., whether the target is a person, a building, a business, etc. If so, the types of icons placed at those nodes can represent that aspect of the object's contents. In the illustrated example, the icons at nodes 204, 206, and 208 represent people, those at nodes 210 and 212 represent corporations, and those at nodes 214 and 216 represent buildings.
So a display feature such as icon shape can be used to represent one of the data's dimensions. Another dimension, such as the priority assigned to the target's surveillance, may be represented by the icon's color. Also, although the nodes' locations on the display are essentially arbitrary in some link-analysis applications, they represent some aspect of the data, such as the target's geographical location, in others.
In some fashion, the data also specify relationships among the objects. For example, each object may include fields whose contents represent relationships to other data objects or represent pointers to arrays of such fields. Such a field may include, say, a pointer or handle to the object linked by the represented relationship and may also include information about the relationship's type. The display's lines represent those relationships, and, in this example, the line style conveys information, too. For example, line 218, which is relatively thin, represents the fact that the target represented by node 206 has communicated by telephone with the target that node 208 represents. And line 220, which is thicker, indicates that target 206 owns target 214. Other types of relationships may be represented by dashed lines, arc-shaped lines, etc.

For the sake of simplicity, FIG. 2 shows only a few nodes and lines. In most situations to which graphical link analysis is applied, though, the number of nodes and lines is much greater, so the display is often difficult to comprehend. One of the present invention's aspects serves to aid comprehension. According to this aspect, the system selectively moves icons for this purpose. Suppose, for example, that the user wants to see all targets that satisfy some criterion. For the sake of simplicity, let us assume the criterion that the target has to be within two communications links from a base target. The user may have chosen the base target by, say, "clicking" on it. To identify the targets that meet this criterion, the display system causes their icons to move. FIG. 3 illustrates this. Cursor 302 represents the user's choosing node 304, and the dashed lines represent the resultant motion of nodes 306, 308, and 310, which satisfy that criterion. In most displays, the lines connected to the nodes will "rubber band," i.e., will so stretch with the node movement as to remain connected despite that motion.
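The two-links criterion described above amounts to a bounded breadth-first search over the relationship graph. The following is a minimal sketch; the adjacency-list representation and the function name are our own illustrative choices, not part of the disclosure.

```python
from collections import deque

def nodes_within_links(adjacency, base, max_links=2):
    """Breadth-first search returning the set of nodes reachable from
    `base` in at most `max_links` relationship hops. A display system
    could impart motion to exactly these icons."""
    seen = {base: 0}            # node -> hop count from the base target
    queue = deque([base])
    while queue:
        node = queue.popleft()
        if seen[node] == max_links:
            continue            # do not expand beyond the hop limit
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    seen.pop(base)              # the base target itself is not highlighted
    return set(seen)
```

With the chain A-B-C-D, for example, choosing A as the base target selects B and C but not D, matching the "within two communications links" behavior of FIG. 3.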
That example uses a direct form of user input: the user employs a mouse to select one of the targets. But link analysis does not always require that type of input. For example, the criterion may be that motion is imparted to nodes representing all targets owned by high-priority targets; i.e., the selection is totally data driven.
This approach to representing the data is advantageous because, although the user could, by careful attention, identify the targets that are within two communications links of the chosen target, making them move causes them to "jump out" at the viewer, and it can do so without, say, changing any colors and thereby obscuring originally presented information.
A similar approach can be applied to what is often termed "brushing," which is a technique often employed when multidimensional data are presented in more than one display simultaneously. For example, the axes in one display may represent one pair of the data components, while those in a different display may represent a different pair. As another example, consider a situation in which at least one of the displays is an income histogram in which each of the bars is considered to be a stack of icons representing respective people whose incomes belong to the corresponding income range, while another display is an age histogram of the same people. In yet another example, one or more of the diagrams is a cluster diagram: icons representing different objects are clustered together in accordance with some similarity metric computed as some function of the objects' data components.
In brushing, a user in some fashion selects a subset of the object-representing icons in one of the displays, and the display system indicates which of the icons in the other display correspond to the same data objects. The user may, for example, select objects by causing a cursor to touch the corresponding icons or draw an enclosure about them; in the histogram case the user may simply click on one of the bars. Or he may select the objects in some other manner, such as by entering a selection criterion. To identify the corresponding icons in the other display, some conventional display systems highlight the other display's icons that correspond to the same objects. But conventional highlighting can obscure the information provided by, for instance, color. Using motion instead avoids this effect.
FIG. 4 illustrates this type of brushing for a situation in which both displays are histograms of the type described above. In that drawing's upper plot, the user has selected one of the income bins, and, by moving the corresponding icons in the lower plot, the display system illustrates the user-selected income group's distribution among the various age groups.
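Brushing of this kind reduces to computing, for a selected income bin, which icons in the age histogram represent the same people. The following is a hedged sketch; the data layout (a list of income/age pairs and half-open bin ranges) is assumed for illustration only.

```python
def brushed_indices(people, income_range, age_bins):
    """Given a selected income bin, return for each age bin the indices of
    the people (icons) that should be set in motion in the age histogram.
    `people` is a list of (income, age) pairs; `age_bins` is a list of
    half-open (low, high) ranges."""
    lo, hi = income_range
    # People whose icons fall in the user-selected income bin.
    selected = [i for i, (income, _) in enumerate(people) if lo <= income < hi]
    result = {b: [] for b in range(len(age_bins))}
    for i in selected:
        age = people[i][1]
        for b, (a_lo, a_hi) in enumerate(age_bins):
            if a_lo <= age < a_hi:
                result[b].append(i)
                break
    return result
```

The display system would then animate only the returned icons in the age histogram, leaving the rest of the bars static, so color and other attributes remain unobscured.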
The use of different types of motion can be used in link displays and brushing, too. In those types of displays, the icons meeting a given criterion need not all move in the same way or in synchronism with each other. But consider an embodiment that operates as follows. The user first clicks on one target to cause the system to emphasize the relationships with that target, and the system responds by causing the criterion-satisfying nodes to vibrate vertically. If the user then clicks on another target while, say, holding down the shift key, he thereby indicates that the system should point out the targets linked to the newly chosen target while continuing the previous vibrations, and the system causes the targets linked to the newly selected target to vibrate horizontally instead of vertically. In that simple example, the distinction is between two directions of linear motion. But other types of motion can be used instead or in addition. Both these types of linear motion could be distinguished from diagonal linear motion, for example. Distinctions could also be made on the basis of phase or frequency: two sets of nodes vibrating linearly in the same direction could be caused to vibrate out of phase with each other, or at different frequencies. Also, the motion need not be linear; it may be elliptical, for instance, in which case another distinction can be made on the basis of whether the motion is clockwise or counterclockwise. And the motion is not necessarily a change in position from some rest position; it can, for instance, be a change in shape, such as rhythmic expansion and contraction of the icon that represents the data object.
Nor does the motion have to be harmonic vibration. Among the many motion patterns that may be employed are those of which FIGS. 5A, 5B, and 5C depict one component. (In the case of elliptical motion, for example, the plot of FIG. 5A would be the component parallel to, say, the ellipse's major axis, with which the motion component parallel to the minor axis would be 90° out of phase.) The harmonic motion that FIG. 5A depicts is typical. But some embodiments may instead or additionally employ other types of motion, such as the stuttering motion of FIG. 5B. Another example is the repeatedly decaying harmonic motion that FIG. 5C illustrates. Moreover, distinctions can be made and additional information imparted not only by the selection of the general type of motion pattern but also by the particular parameters of that motion. When the repeatedly decaying motion of FIG. 5C is employed, for example, some of the bases for distinguishing among data sets or conveying information about individual data objects can be the rate of decay, the repetition rate, etc.
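The three motion patterns of FIGS. 5A-5C can be sketched as displacement-versus-time functions. The parameter names and the particular stutter and decay shapes below are illustrative assumptions rather than formulas taken from the disclosure.

```python
import math

def harmonic(t, amp=1.0, freq=1.0, phase=0.0):
    """Simple sinusoidal displacement (the FIG. 5A style motion)."""
    return amp * math.sin(2 * math.pi * freq * t + phase)

def stuttering(t, amp=1.0, freq=1.0, duty=0.5):
    """Motion that runs for a `duty` fraction of each cycle and pauses
    for the remainder (a FIG. 5B style stutter)."""
    cycle_pos = (t * freq) % 1.0
    if cycle_pos < duty:
        return amp * math.sin(2 * math.pi * cycle_pos / duty)
    return 0.0

def decaying(t, amp=1.0, freq=4.0, period=1.0, decay=3.0):
    """Harmonic motion whose envelope decays and then restarts each
    `period` (a FIG. 5C style repeatedly decaying pattern)."""
    local_t = t % period
    return amp * math.exp(-decay * local_t) * math.sin(2 * math.pi * freq * local_t)
```

Evaluating any of these per frame and adding the result to an icon's rest position yields the corresponding animation; distinct data sets can be assigned distinct functions or distinct parameter values (amplitude, frequency, phase, decay rate, repetition rate).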
In any event, thus using motion for graphical link analysis, layer-type displays, and similar data-presentation techniques can significantly enhance the user's comprehension.
Another aspect of the invention is directed to the way in which the motion is generated. According to this aspect of the invention, the motion results from depicting moving three-dimensional bodies on the display. Each body represents a respective data object, and various features of the body's motion represent respective components of the data object's multi-dimensional data. The particular type of body is not critical, but FIG. 6 depicts for the sake of example a simple body type that we have employed. In that drawing, body 602 includes nothing more than an upright 604 and an arm 606 attached to the upright.
The benefits that this type of motion generation affords extend beyond data-presentation techniques of the type described so far. For example, consider a system in which the data objects are pixel data for a "hyperspectral" image. In natural-vision images, each pixel is usually represented by a color vector consisting of components for, say, red, green, and blue, cyan, magenta, and yellow, or some similar set of values by which a natural color can be approximated. The data are often the output of a camera whose sensors measure radiation intensities within different visible-light bands. Hyperspectral images are similar in the sense that each pixel is represented by a vector whose components represent radiation within different wavelength bands. The difference is that the number of wavelength bands is usually much more than three, and most bands do not fall within the visible range. Also, although the values usually represent intensities, they may additionally or instead represent other quantities, such as Stokes parameters.
Some of such data's dimensionality can be encoded in the colors of a false-color image, but it will enhance a user's ability to detect patterns if some components are encoded in aspects of a three-dimensional body's motion. As will become apparent, this technique's applicability is not limited to hyperspectral imaging; it can be used on a wide range of data types, independently of their dimensionality. But its advantages will be most apparent in scene-type data, such as hyperspectral-sensor data, magnetic-resonance-imaging data and other data whose objects tend to be organized in arrays.
FIG. 7 is a conceptual block diagram of the overall approach. The raw data will typically be in the form of a two-dimensional array of high-dimensional pixel values. That is, the object's position in the array implicitly encodes the two-dimensional location of the pixel that the (high-dimensional) object represents, although there is no reason in principle why three-dimensional-location information could not be stored in a three-dimensional array. In some cases, the raw data's location granularity is coarser or finer than is convenient for employing simulated three-dimensional bodies to represent the objects, so the data may be re-sampled, as block 702 indicates, typically by employing one of the standard multi-rate sampling techniques.
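The re-sampling step of block 702 can be illustrated with a deliberately simple nearest-neighbor scheme; an actual implementation would, as the text notes, use standard multi-rate sampling techniques with proper filtering. This sketch is dependency-free and merely changes the grid's granularity.

```python
def resample(pixels, new_rows, new_cols):
    """Nearest-neighbor resampling of a 2-D grid of (possibly
    high-dimensional) pixel values to a new granularity. Each output
    cell simply copies the nearest source cell."""
    rows, cols = len(pixels), len(pixels[0])
    out = []
    for r in range(new_rows):
        src_r = min(rows - 1, r * rows // new_rows)
        row = []
        for c in range(new_cols):
            src_c = min(cols - 1, c * cols // new_cols)
            row.append(pixels[src_r][src_c])
        out.append(row)
    return out
```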
A body model is then constructed for each object, as block 704 indicates. As an example of how this may be done, consider FIG. 8. That drawing depicts two bodies 802 and 804 in a (three-dimensional) model space. The original image plane is mapped to a map plane 806 or other two-dimensional map surface in model space, and the bodies 802 and 804 are assigned zero-displacement positions at the locations in the model space to which the pixels that they represent are mapped. For example, a body's zero-displacement position may be considered to be the one at which its upright is oriented perpendicular to the map plane and intersects the map plane at the upright's midpoint.
Each of a plurality of a given data object's components is then mapped to various aspects of the moving body's features, including size, rate and/or mode of motion, and position. For example, the value of one of the data components— e.g., intensity, another Stokes parameter, or some other radiation-indicating quantity in the hyperspectral example— may be encoded in the arm's elevation angle 810. Another component— say, another of the Stokes parameters for the same band— may be encoded in the arm's rate and direction of azimuthal rotation 812. Also, pitch, roll, and yaw axes may be defined with respect to the normal to the map plane, and various components may be encoded in the upright's roll, pitch, and yaw angles and in those angles' rate of change. And components can be encoded in the body's size. For example, some embodiments may encode certain components in the arms' and uprights' lengths or thicknesses or in ratios of those lengths or thicknesses or in the rates at which any of those change. If the upright, too, is made to move, other components can be encoded in various aspects of that motion. If the motion is simple up-and-down motion, for example, data components can be encoded in the upright's mean position (with respect to its zero-displacement position) and in the amplitude, phase, and frequency of its vertical motion. If the upright's motion is more complex, further components can be encoded in that motion's other aspects. Note also that some of these features do not require that the body move.
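A minimal sketch of this encoding step follows, assuming the data components have been normalized to [0, 1]; which component drives which body feature is an arbitrary illustrative assignment, not one prescribed by the disclosure.

```python
def encode_body(components):
    """Map the leading components of a high-dimensional data object onto
    features of a simple upright-and-arm body (FIG. 6 style). The
    assignments below (intensity -> arm elevation, etc.) are purely
    illustrative choices."""
    return {
        "arm_elevation_deg": 90.0 * components[0],   # e.g. intensity
        "arm_azimuth_rate": 360.0 * components[1],   # deg/s of rotation
        "upright_height": 1.0 + components[2],       # a size feature
        "vertical_amp": components[3],               # up-and-down amplitude
        "vertical_freq": 0.5 + components[4],        # up-and-down frequency
    }
```

One body is built per (re-sampled) pixel, so spatial patterns in any encoded component appear as coherent regions of similarly shaped or similarly moving bodies.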
Also, there may be an element of indirectness in the motion coding. Suppose, for example, that the system attributes physical characteristics such as mass, elasticity, etc. to the bodies and that one or more components are encoded into such features. Suppose further that the bodies are simulated as being disposed in a gravitational field and/or as being attached to a common platform that undergoes some type of motion, such as rhythmic or irregular translation or pivoting. By encoding the data components directly into those features, the system encodes the data indirectly in the motion: the types of motion that the bodies undergo depend on the underlying data, so, again, the display may reveal patterns in the data. Similar effects may be exhibited if the system simulates wind flowing past the bodies. For such a system, it may be desirable for the bodies to take the forms of flexible reeds in whose features the object components are so encoded as to affect the reed's flexibility. Other forms of indirect encoding will also suggest themselves to those skilled in the art.
Our experiments so far have concentrated on a simple body of the type that FIG. 6 illustrates, with five shape parameters and twelve motion parameters. The shape parameters are the upright's height, the arm's length, the angle that the arm forms with the upright, the upright's angle with respect to the map plane, and the arm's azimuth, i.e., its position around the upright. The motion parameters come in four categories: azimuthal rotation of the upright, changes in the entire body's vertical position, circular changes in its horizontal position, and changes in the upright's tilt angle. The time variation of the motion in each case was a simple sinusoid, so there were three parameters, namely, amplitude, frequency, and phase, within each of the four categories. A further parameter within at least the first three categories is the mean, or "rest," position about which the motion occurs. A data component can be encoded in the difference between this and the zero-displacement position to which the corresponding pixel has been mapped. These mean-position parameters can be considered akin to shape parameters, since they do not themselves require motion.
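To make the parameterization concrete, the sinusoidal motion categories just described can be sketched as follows. This is a minimal illustration rather than the patented implementation; the category names, units, and numeric values are assumptions chosen for the example.

```python
import math

def sinusoid(t, amplitude, frequency, phase, rest=0.0):
    """Displacement of one motion category at time t: the rest
    position plus a simple sinusoid, as described above."""
    return rest + amplitude * math.sin(2.0 * math.pi * frequency * t + phase)

def body_pose(t, params):
    """Evaluate each motion category for one body at time t.
    `params` maps a category name to its (amplitude, frequency,
    phase, rest) tuple; the names below are illustrative."""
    return {name: sinusoid(t, *p) for name, p in params.items()}

# Hypothetical encoding of one data object's components into the
# four categories (azimuthal rotation, vertical and horizontal
# position, tilt); units are illustrative.
params = {
    "azimuth":    (30.0, 0.5, 0.0, 90.0),          # degrees
    "vertical":   (2.0, 1.0, 0.0, 0.0),            # map-plane units
    "horizontal": (1.5, 0.25, math.pi / 2, 0.0),
    "tilt":       (10.0, 0.75, 0.0, 5.0),          # degrees from normal
}
pose = body_pose(0.0, params)
```

Evaluating `body_pose` over successive time steps would drive one body's animation; a data object's components determine the amplitude, frequency, phase, and rest entries.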
FIG. 7's block 706 represents all such encoding. It is apparent that, at least theoretically, an extremely high number of different data components can thus be encoded in a body's features. As a practical matter, of course, there comes a point at which the resultant visual information becomes overwhelming to the human viewer. But we believe that a human viewer can effectively comprehend patterns resulting from up to fifteen and possibly more different components encoded in this fashion.
With the information thus encoded, the system generates the display by mathematically projecting the three-dimensional models onto a screen plane, as FIG. 7's block 708 indicates. The map and screen planes may be parallel, but the invention's advantages are most apparent when there is some angle between those planes. FIG. 8 depicts a perspective projection, i.e., one in which points such as point 814 in the model space are projected onto the screen plane 816 along a line such as line 818 from the model-space point to a common viewpoint 820 located a finite distance away. More typically, the projection would be orthogonal: the viewpoint would be disposed at an infinite distance. In any event, the display would then be so driven as to produce the resultant image, as FIG. 7's block 710 indicates.
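The projection step can be sketched as follows: a perspective projection along the line from a model-space point to a finite viewpoint (as in FIG. 8), degenerating to an orthogonal projection when the viewpoint is at infinite distance. The coordinate convention (screen plane at z = 0, viewpoint on the +z axis) is an assumption made for this example.

```python
def project(point, viewpoint_distance=None):
    """Project a model-space point (x, y, z) onto the screen plane z = 0.
    With a finite viewpoint_distance d, points are projected along the
    line to a viewpoint at (0, 0, d); with viewpoint_distance=None the
    projection is orthogonal, i.e. the viewpoint is at infinity."""
    x, y, z = point
    if viewpoint_distance is None:   # orthogonal: simply drop z
        return (x, y)
    d = viewpoint_distance           # assumes z < d (point in front of viewpoint)
    scale = d / (d - z)              # similar-triangles magnification factor
    return (x * scale, y * scale)
```

A point already in the screen plane projects to itself, while points closer to the viewpoint are magnified, which is why bodies tilted toward the viewer yield longer projections.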
FIG. 9 depicts a small portion of a display that can result when the map plane forms a relatively small angle with the screen plane. The projections of some of the bodies are so small as to be nearly imperceptible, while other bodies' projections are quite long. Although data components are thus encoded, the user typically would not, in that example, directly infer the values of an individual data object's components from the display. He would instead observe overall patterns, possibly of the type that FIG. 10 illustrates, from which he may be able to infer information about the scene or identify avenues for further inquiry. By employing the present invention's teachings, a display system can enable a user to detect patterns readily in a presentation of highly complex data. The invention thus constitutes a significant advance in the art.
Another type of display that benefits from the use of motion to distinguish different sets of data is the type that employs "layers" of data. A simple example is simultaneous presentation of different sets of transistor characteristic curves. A bipolar transistor's characteristics are often given as a set of curves on a common graph, each curve depicting collector current as a function of collector-to-emitter voltage for a different value of base current. To compare transistors, it would be helpful to be able to compare their characteristic curves visually. One way to do this is to plot different transistors' curve sets on the same axes.
Although different transistors' data can be distinguished from each other by assigning different colors to different transistors' curves, the results rapidly become hard to read as the number of transistors grows; even three transistors' data can present a challenge to comprehension. Moreover, using color to distinguish one transistor's data from another's prevents another use of color, namely, to indicate which curves for the different transistors correspond to the same base current. True, the display system can employ different line styles (solid, dashed, etc.) to help the viewer distinguish the data, but the display still rapidly becomes confusing as data are added for more transistors.
This type of display may be referred to as a "layered" display because different transistors' curves can be thought of as being disposed on transparent sheets, or "layers," that lie on top of one another. To highlight data found in a layer of interest to a user, a first motion is imparted on the entire layer of interest relative to the remaining layers. If a user is interested in more than one type of data, additional layers may be set in motion, each imparted with a distinctive motion relative to the remaining layers. For example, a first layer may be vibrated horizontally, a second layer may be vibrated vertically, and a circular motion may be imparted on a third layer. In the example above, each transistor's set of curves may be assigned to its own layer. A user may then select two transistors for particular attention from a group of, say, ten whose data a display presents. In response, the display may make one selected transistor's curves vibrate vertically and the other's vibrate horizontally. The user could then readily recognize which data belong to the chosen transistors, and the comparison could be aided by having a given curve color represent the same base-current value for all transistors. Graphics software known in the art, including DirectX, provided by Microsoft Corporation of Redmond, Washington, and OpenGL, an open source graphics library originally made available by Silicon Graphics, Inc. of Sunnyvale, California, provides functionality for displaying layered images and for imparting relative motion to layers within such layered images.
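The per-layer motions just described (horizontal vibration, vertical vibration, circular motion) amount to computing a small time-varying screen offset for each layer before it is composited. A minimal sketch, independent of any particular graphics library such as DirectX or OpenGL; the motion names and amplitude are illustrative assumptions:

```python
import math

# Hypothetical catalog of distinctive layer motions; each maps a
# time t (in periods) to a unit (dx, dy) displacement.
MOTIONS = {
    "horizontal": lambda t: (math.sin(2 * math.pi * t), 0.0),
    "vertical":   lambda t: (0.0, math.sin(2 * math.pi * t)),
    "circular":   lambda t: (math.cos(2 * math.pi * t),
                             math.sin(2 * math.pi * t)),
    "none":       lambda t: (0.0, 0.0),
}

def layer_offsets(assignments, t, amplitude=3.0):
    """Return the (dx, dy) screen offset of each layer at time t.
    `assignments` maps a layer name to one of the motion styles above;
    unselected layers are assigned "none" and stay put."""
    return {layer: tuple(amplitude * c for c in MOTIONS[style](t))
            for layer, style in assignments.items()}

# E.g., two selected transistors' curve layers vibrating on
# perpendicular axes while the rest remain stationary:
offsets = layer_offsets(
    {"Q1_curves": "horizontal", "Q2_curves": "vertical",
     "axes": "none"}, t=0.25)
```

Redrawing each layer at its current offset every frame produces the relative vibration that lets the viewer pick out the selected layers.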
In layered displays, each layer preferably includes data sharing a common characteristic. For example, each layer may include data generated from a different imaging source. An image source may be an image capture device or a data storage medium independent of an image capture device. For images formed from layers generated by multiple image capture sources, each image capture device may emit or detect electromagnetic radiation of different wavelengths or energies. For example, one image source may generate images from light in the visible spectrum. A second image source may generate images from light in the infrared portion of the spectrum. A third image source may generate images from light in the ultraviolet portion of the spectrum. Similarly, X-ray images generated from multiple emission energies may be stored as separate layers. Other suitable image capture devices include, without limitation, radar systems, ultrasound devices, geophones, gravitational field sensors, or any sensor that outputs data relative to spatial position.
FIGS. 11A-11C are illustrative outputs of a geographic information system (GIS), according to an illustrative embodiment of the invention. GIS systems are one class of system that would benefit substantially from the layered display technique described above. The layered-display technique is particularly useful for naturally graphical data such as map data. Maps may include depictions of roads; utilities infrastructure, including power lines, sewage pipes, water mains, gas pipes, telecommunications infrastructure, etc.; zoning information; geo-registered satellite or aerial imagery, including imagery generated from light in or out of the visible spectrum; radar information; or other visual representations of data corresponding to a mapped location, including population density, demographic data, meteorological data, intelligence data, vegetation type, etc. In a GIS, each of these data types may be stored separately. Each data type may be stored in a single layer or in multiple layers. For example, road data may be stored as a layer of municipal roads, a layer of state roads, and a layer of federal highways. Zoning data may be stored so that each zoning classification is stored as a separate layer, or it may be stored as a single map layer.
When viewed by a user, a map displayed by the GIS typically would include two or more layers overlaid one another. Preferably, at least one of the layers is displayed with at least some degree of transparency such that an underlying layer is at least partially visible underneath. As a result of the transparency, the colors of at least some pixels in the displayed image at a given point in time are combinations or mixtures of the colors associated with overlapping positions in the respective layers. As the layers are moved relative to one another, the colors of pixels change to reflect the different mixtures and combinations of pixel colors that result from changes in which positions overlap.
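The color mixing that results from overlaying partially transparent layers can be sketched with the standard "over" compositing operator, applied per pixel from the bottom layer up. This is one common way to realize the blending described above, not necessarily the one any given GIS uses:

```python
def composite(layers):
    """Blend a stack of partially transparent layers at one pixel.
    `layers` is ordered bottom-to-top; each entry is ((r, g, b), alpha)
    with all components in [0, 1]. Each layer's color is mixed over
    the accumulated color beneath it ("over" operator)."""
    r, g, b = 0.0, 0.0, 0.0
    for (lr, lg, lb), a in layers:
        r = lr * a + r * (1.0 - a)
        g = lg * a + g * (1.0 - a)
        b = lb * a + b * (1.0 - a)
    return (r, g, b)

# An opaque red base layer under a half-transparent blue layer
# yields a purple mixture at positions where the two overlap:
pixel = composite([((1.0, 0.0, 0.0), 1.0),   # bottom: opaque red
                   ((0.0, 0.0, 1.0), 0.5)])  # top: 50% blue
```

As a layer moves, different layer positions come to overlap each screen pixel, so re-running this blend each frame produces the changing pixel colors described above.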
A user of the GIS selects layers of interest using a user interface. In one implementation, a legend identifying each of the displayable layers is presented to the user. The user can then select the layers to be displayed by, for example, clicking a mouse on a check box to select a layer, and then selecting a desired motion from a drop-down menu. Additional user interface controls may be made available to adjust the amplitude of the motion as well as the transparency of any of the layers. In an alternative embodiment, the user may select the layers on which to impart motion by entering a query. Motion is then imparted on the layers that satisfy the query.
Referring specifically to FIG. 11A, FIG. 11A includes four individual layers 1102a-1102d (generally layers "1102") of geographical information corresponding to overlapping geographical space, in this example, a portion of the Boston metropolitan area. Layer 1102a includes political boundaries 1104. Layer 1102b includes local connector roads 1106. Layer 1102c includes interstate highways 1108. Layer 1102d includes commuter rail tracks 1109. The four layers can be displayed overlaid one another to form map 1110. Each layer 1102 is at least partially transparent such that features in the underlying layers are visible.
FIG. 11B includes three simulated screen shots 1120a-1120c (generally screen shots "1120") of a portion 1112, outlined in phantom, of the map 1110 of FIG. 11A. The screen shots 1120a-1120c simulate the motion that may be imparted on one or more layers 1102 of a map, according to an illustrative embodiment, to highlight information included in the respective layers. In each screen shot 1120a-1120c, features from each of the layers 1102a-1102d are visible, including political boundary 1104, local connector roads 1106, interstate highways 1108, and rail tracks 1109.
Screen shot 1120a illustrates the portion of the map 1110 before any motion is imparted on any layers 1102. Screen shot 1120b illustrates the portion of the map at a first instant of time after motion has been imparted on the political boundary and interstate highway layers 1102a and 1102c, respectively. The original positions of the political boundary 1104 and highway 1108 are depicted in phantom for reference. Screen shot 1120c illustrates the portion of the map 1110 at a second instant in time. As can be seen by comparing screen shot 1120b to 1120c, the political boundary layer 1102a has been put into a vertical oscillatory motion and the interstate highway layer 1102c has been put into a horizontal oscillatory motion. In alternative implementations, more dynamic oscillatory motions, including any other regular or irregular oscillatory movement, may be employed without departing from the scope of the invention. The movement of the political boundary layer 1102a and the interstate highway layer 1102c relative to the remaining layers 1102b and 1102d serves to highlight to a viewer the positions of the political boundaries 1104 and the highway 1108. FIG. 11C is discussed further below.
FIGS. 12A and 12B depict simulated outputs of an X-ray screening machine incorporating the data visualization technology described herein. The X-ray machine includes dual- or multi-energy-level X-ray beams generated by one or more X-ray sources. The images generated from each respective X-ray beam are saved as separate layers. The layers are then overlaid one another for presentation to a security screener. The X-ray data collected from each source is color coded with a respective color corresponding to the atomic number of the material detected in the X-ray image. The coloring is omitted in the simulated outputs to retain clarity. To highlight items having high levels of a suspicious material (e.g., metal or nitrogen), layers corresponding to such materials are automatically imparted with a predetermined motion relative to the remaining layers, such that the material can be readily observed in context, i.e., in relation to the locations of other materials in the item being examined. Alternatively, a user of the X-ray machine may manually select one or more layers to impart motion to, as well as the desired motion. Additional controls may be used to adjust the amplitude of the motion and the transparency of the various layers. The same motion may be imparted on each of the layers. Alternatively, a first motion may be imparted on a subset of the layers, and a second motion may be imparted on a second subset of the layers.
Specifically with regard to FIG. 12A, FIG. 12A includes two simulated X-ray output layers 1202a and 1202b, and an overlay 1204 of the output layers 1202a and 1202b. Layer 1202a includes identified inorganic materials, i.e., a suitcase 1206 and a teddy bear 1208 included therein. Layer 1202b includes metal objects identified by an X-ray scan at a second energy level. The layer 1202b includes a knife 1210, as well as various metal components 1212 of the suitcase 1206. The overlay 1204 illustrates how the packer of the suitcase 1206 may have attempted to obscure the knife 1210 by placing it behind the teddy bear 1208. By imparting motion on the metal layer, a viewer of the overlay 1204 of the layers 1202a and 1202b is able to quickly identify the knife 1210.
FIG. 12B includes three screen shots 1220a-1220c of a portion of the simulated X-ray output of FIG. 12A (depicted in FIG. 12A as the rectangular region enclosed by dashed lines). The first screen shot 1220a depicts the layers 1202a and 1202b in their original position. Screen shots 1220b and 1220c depict the overlay at two points in time. In each of screen shots 1220b and 1220c, the phantom lines illustrate the original position of the knife 1210, as in screen shot 1220a. As with the depicted movement of the map layers in FIG. 11B, the depicted movement of layer 1202b is simplified to a simple horizontal oscillation. Imparted movement may include a simple oscillation as depicted, or a more dynamic, complex oscillation.
FIGS. 13A-13B depict the output of a data visualization system integrated with a viewfinder, for example, of a vehicle such as a tank or an aircraft, according to an illustrative embodiment of the invention. Such viewfinders typically display data generated from visible light cameras as well as infrared sensors and/or radar systems. The viewfinder display may also include mission data and/or instrumentation data, including vehicle speed, location, target range, etc. In one illustrative implementation, the data visualization system integrated with the viewfinder stores visible light data, infrared data, mission data, and instrumentation data as separate layers, which are then displayed overlaid one another. To draw attention to data in a particular layer, the data visualization system imparts motion to the particular layer relative to the remaining layers.
FIG. 13A depicts three separate layers 1302a-1302c of simulated graphical data that may be overlaid one another to form the output of a viewfinder. Layer 1302a includes an image taken from a visible light imaging device. Visible in layer 1302a are two buildings 1304 and a fence 1306. Layer 1302b includes an image taken from an infrared imaging source. In layer 1302b, people 1308 are visible crouched behind the fence 1306 and below a window on the second floor of one of the buildings 1304. Layer 1302c includes computer-generated mission data, including speed, current coordinates, target coordinates, and a weapon status indicator.
Overlay 1310 depicts the result of graphically overlaying the three layers 1302a-1302c, with each layer being at least partially transparent such that features of underlying layers are visible. FIG. 13B illustrates how one of the layers can be oscillated relative to the other layers to highlight information included in the oscillating layer. FIG. 13B includes three screen shots 1320a-1320c of an overlay of the three layers 1302a-1302c. As with FIGS. 11B and 12B, the first screen shot 1320a of FIG. 13B depicts the original alignment of the layers 1302a-1302c. The remaining screen shots 1320b and 1320c depict the overlay at various times while the layer including infrared data is put into oscillation. The original alignment is indicated in phantom. The phantom lines are included in these figures merely for reference and are not intended to suggest that a viewer would actually see such phantom lines in practice, although in some implementations in which object recognition software is employed, such phantom depiction may be utilized.
In alternative implementations, the data displayed in layers by the data visualization system is not received as distinct layers. Instead, the data is divided into layers having common characteristics after receipt. In one implementation, a machine learning program may identify features in an image and store such features in a layer distinct from the remainder of the image. For example, the data visualization system detects objects of interest in an image using object-class detection algorithms known in the art. In processing a photograph, the data visualization system may detect, for example, faces and/or text characters and store each in a respective layer. The data visualization system then displays the layers overlaid one another. To draw attention to all faces in the image, the data visualization system imparts motion to the layer in the image corresponding to faces. To highlight text, the data visualization system imparts motion to the layer in the image corresponding to text characters.
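Splitting received data into feature layers can be sketched as follows. The detector itself (face, text, or signature recognition) is out of scope here, so the example assumes the detection step has already produced a boolean mask per feature class; the function name and the NaN-as-transparent convention are illustrative assumptions.

```python
import numpy as np

def split_into_layers(image, masks):
    """Split a grayscale image into overlayable layers, one per feature
    mask plus a remainder layer. `masks` maps a layer name to a boolean
    array the same shape as `image`; pixels outside a mask are set to
    NaN to mark them as transparent when the layers are composited."""
    layers = {}
    remainder = image.astype(float).copy()
    for name, mask in masks.items():
        layer = np.full(image.shape, np.nan)
        layer[mask] = image[mask]
        remainder[mask] = np.nan   # feature pixels leave the base layer
        layers[name] = layer
    layers["remainder"] = remainder
    return layers

# Toy 2x2 image with one "face" pixel detected at the top-left:
img = np.arange(4.0).reshape(2, 2)
face_mask = np.array([[True, False], [False, False]])
layers = split_into_layers(img, {"faces": face_mask})
```

Each resulting layer can then be displayed overlaid on the others, with motion imparted on the "faces" layer to draw attention to it.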
Similar data analysis can be applied to other forms of image data, including sonar, radar, or infrared images, in which objects of interest, e.g., submarines, aircraft, or roads, respectively, can be detected based on known signatures. Regions of the images having such signatures are stored in respective layers for overlaid display. Similar processing may be carried out on medical images, including X-rays, CAT scans, MRIs, etc. For example, portions of images corresponding to particular tissue densities or ranges of tissue densities may be stored in separate layers. The data visualization system then imparts motion on one or more of the layers to highlight the data stored therein, either automatically in response to software instructions executing on the data visualization system or manually in response to user input. In the medical image context, selective motion of portions of a medical image based on tissue density may reveal features otherwise difficult to identify, including tumors, nerves, or vasculature.
In another implementation, in addition to highlighting data by imparting motion on a layer of data relative to the remaining layers, the data visualization system visually conveys additional data by imparting a local motion on a portion of a layer relative to the remainder of that layer. Suitable local motions include harmonic vibrations of regions of the layer similar to those described above, as well as distortions to regions of the layer. The distortions may result, for example, in the region of the layer appearing to ripple, as if a viewer were viewing the layer through water.
In the context of a map, for example, the data visualization system may impart motion upon a map layer corresponding to highways relative to a terrain image layer and an electrical grid layer, thus visually highlighting the location of roads on the map relative to the surrounding terrain and electrical infrastructure. To simultaneously highlight which roads are experiencing high levels of congestion, the data visualization system imparts a local motion on portions of the layer surrounding the congested roads such that the roads in that region move or distort relative to the remainder of the road map layer. At the same time, even though the layer corresponding to the electrical grid is not moving relative to the other layers, the data visualization system may impart a different local motion on portions of the electrical grid map layer corresponding to regions having increased power consumption.
FIG. 11C illustrates one such distortion effect. FIG. 11C includes two screen shots 1130a and 1130b of the portion of the map 1110. Screen shot 1130a depicts the portion without distortion. In screen shot 1130b, local roads in the portion are distorted to depict high traffic volumes.
Several techniques for implementing localized layer distortion are known in the art, for example, in the context of computer gaming. Software supporting such visual effects includes DirectX and OpenGL. In one particular implementation, in order to allow for computationally efficient methods of imparting local distortions to regions of layers, each layer of visual data to be displayed is first projected onto a transparent array of geometric shapes, for example triangles. The data visualization system displays the projections overlaid one another. To generate the local distortions, the data visualization system imparts a rhythmic shifting to the vertices of the geometric shapes in a particular area, stretching or shrinking the content filling the geometric shapes. Additional rippling techniques, as well as different and/or additional visual effects, may be used to impart local motion on a portion of a layer without departing from the scope of the invention.
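The vertex-shifting approach described above can be sketched in two dimensions: each mesh vertex is displaced radially by a traveling sine wave about a center of interest, and the triangle contents are stretched or shrunk accordingly when the displaced mesh is rasterized. Parameter names and values are illustrative assumptions, not taken from the source.

```python
import math

def ripple_vertices(vertices, center, t,
                    amplitude=0.5, wavelength=4.0, speed=2.0):
    """Displace 2-D mesh vertices radially with a traveling sine wave
    about `center`, approximating the water-ripple distortion described
    above. Texture content filling the triangles is then stretched or
    shrunk when the displaced mesh is drawn."""
    out = []
    for (x, y) in vertices:
        dx, dy = x - center[0], y - center[1]
        r = math.hypot(dx, dy)
        if r == 0.0:
            out.append((x, y))   # the center vertex does not move
            continue
        shift = amplitude * math.sin(
            2 * math.pi * (r / wavelength - speed * t))
        out.append((x + shift * dx / r, y + shift * dy / r))
    return out

# At t = 0, a vertex one unit from the center is pushed outward
# by the wave crest, while the center vertex stays fixed:
moved = ripple_vertices([(1.0, 0.0), (0.0, 0.0)], (0.0, 0.0), t=0.0)
```

Advancing `t` each frame and redrawing the layer on the displaced mesh yields the rhythmic, localized rippling; only vertices near the region of interest need be displaced.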
In still another implementation, instead of or in addition to imparting a local motion on a region of a layer (referred to as a data layer), one or more additional layers of visual effects are added to a set of overlaid data layers displayed to a user. The visual effect layers include opaque or partially transparent localized visual effects that include some form of dynamic movement. Suitable visual effects include fog, mist, rippling water, smoke, etc. The primary difference between the visual effects in the visual effects layer and the localized movement or distortion imparted on portions of data layers is that the visual effects preferably are colored such that the colors of portions of underlying layers change as a result of the dynamic movement of the visual effects. In contrast, the localized movement imparted on data layers does not directly affect the color of the image data in the data layer. Instead, any color changes in the displayed image result from changing combinations of the colors associated with overlapping positions in the data layers as points in each layer overlap in different ways while portions of the data layer move or distort.
The data visualization system 100 of FIG. 1 may be used to display 2-dimensional, 3-dimensional, or volumetric images. For instance, FIGS. 14A-14E are a series of 3-dimensional images of a tooth displayed on display 108 of data visualization system 100 in FIG. 1, according to an illustrative embodiment of the invention. This series of images is illustrative of volumetric images that can be processed and displayed by data visualization system 100 in FIG. 1. FIGS. 14A, 14B, and 14E show displays of the tooth including the crown and root, and illustrate how data visualization system 100 of FIG. 1 can impart motion on graphical features of the tooth to highlight those graphical features. In some embodiments, if there is more than one graphical feature, the same motion is imparted on each of the graphical features. In other embodiments, a first motion may be imparted on a subset of the graphical features, and a second motion may be imparted on a second subset of the graphical features. FIGS. 14C and 14D show a user interface employed by the data visualization system 100 in FIG. 1 for controlling the display and/or motion of graphical features of an image. Graphical features of displayed images depicted with dashed lines in FIGS. 14A-14E correspond to initial positions of those graphical features relative to the displayed object, to aid the reader in discerning imparted motion depicted in the figures. The data visualization system 100 of FIG. 1 need not display these lines.
FIG. 14A shows a time series of images of a tooth, an example of a volumetric image displayed by data visualization system 100 of FIG. 1. Graphical data from which the volumetric image was formed may have been captured by a medical imaging device, e.g., a panoramic X-ray machine, that is part of data visualization system 100 of FIG. 1. The data visualization system displays the tooth in various orientations, 1402, 1404, 1406, and 1408, respectively, moving the tooth as illustrated by motion arrows 1414 and 1416. Motion 1416 is a rotation of the tooth about longitudinal axis 1401, while motion 1414 is a rotation of the tooth in a fixed plane relative to longitudinal axis 1401. In each orientation, both root 1410 and crown 1412 are displayed.
FIG. 14B shows a time series of images of the tooth from FIG. 14A which illustrates how the data visualization system 100 of FIG. 1 can highlight a graphical feature of a volumetric image by imparting relative motion to that graphical feature. In this series of images, the data visualization system displays the tooth in various orientations, 1422, 1424, 1426, and 1428, respectively, moving the tooth as illustrated by motion arrows 1414 and 1416, which are similar to those shown in FIG. 14A. In addition, in orientations 1424, 1426, and 1428, respectively, the data visualization system 100 of FIG. 1 imparts motion 1418 on root 1410. In each of these orientations, the dashed lines 1409 correspond to initial positions of root 1410 relative to the rest of the tooth. Furthermore, in each orientation, i.e., when viewed by the reader from multiple perspectives, root 1410, crown 1412, and the relative motion of root 1410, are displayed.
FIG. 14C is a diagram of a user interface window 1430 generated by the data visualization system 100 of FIG. 1 for controlling the display and/or motion of graphical features visually representing portions of volumetric images. The tooth of either FIG. 14A or FIG. 14B could be displayed in user interface window 1430. In FIG. 14C, user interface window 1430 is divided into two sub-windows: window 1432 containing a volumetric image, namely the tooth of FIG. 14B, and window 1434 containing a graphical user interface in which a human user, interacting with data visualization system 100 via user interface 110, may select various options for displaying and/or imparting motion on the graphical features of the image in window 1432.
A human user interacting with data visualization system 100 of FIG. 1 can use graphical user interface window 1434 to choose whether to display crown 1412, root 1410, both crown 1412 and root 1410, or neither crown 1412 nor root 1410 by selecting or de-selecting the appropriate boxes 1436. Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1). Alternatively, a user could click directly on the image in window 1432 to select graphical features for display. In FIG. 14C, both display boxes 1436 are selected, or marked "x," such that both graphical features of the tooth, crown 1412 and root 1410, are displayed.
In addition, a human user interacting with data visualization system 100 of FIG. 1 can choose what kind of motion to impart to crown 1412 or root 1410 by selecting an option in the drop-down menus 1438. Examples of motion imparted by data visualization system 100 of FIG. 1 could be lateral motion, vertical motion, horizontal motion, circular motion, full- or partial-rotation motion, or no motion ("none"). Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1). Alternatively, a user could click or drag directly on the image in window 1432 of data visualization system 100 of FIG. 1 to select graphical features visually representing portions of the data on which the data visualization system will impart motion. In FIG. 14C, for example, the selection of options in drop-down menus 1438 is such that no motion is to be imparted on crown 1412, while vertical motion is to be imparted on root 1410. Thus, the data visualization system imparts motion 1418 on root 1410 relative to crown 1412. The initial position of root 1410 is depicted with dashed lines 1409.
The data visualization system 100 of FIG. 1 could be used to identify a graphical feature visually representing a portion of a volumetric image by determining the bounds of selected subject matter based on user input. Such a determination could be made by a human user using a mouse click, keyboard query, or a cursor brush to select an appropriate graphical feature, e.g., root 1410 or crown 1412, after which the data visualization system automatically determines the bounds of the selected graphical feature based on the human user input. In some embodiments, the user input could also be received from an external source. Alternatively, the data visualization system 100 of FIG. 1 could determine the bounds of a selected graphical feature using, e.g., artificial intelligence (AI) or image processing algorithms. Using such algorithms, the metes and bounds of a graphical feature could be determined, for example, by finding the parts of a volumetric image that are similar to other parts of the same volumetric image based on a similarity metric. For example, a user may select a part of a map image corresponding to bridges, and this selection would result in the data visualization system 100 in FIG. 1 identifying and displaying all other bridges on the map. In the data visualization system 100 in FIG. 1, the metrics on which similarity is based could be, for example, entered by the user in a query, or initially estimated by an AI algorithm executed by data visualization system 100 of FIG. 1. For example, all the voxels of a volumetric image within a certain user-defined range could be displayed, or all the voxels in a particular part of a volumetric image within a predetermined range of the AI algorithm could be displayed.
FIG. 14D is a diagram of the user interface window 1430 of FIG. 14C generated by the data visualization system 100 of FIG. 1. In contrast to FIG. 14C, only one of the display boxes 1436 is selected, such that root 1410 is displayed and no crown is displayed. In addition, the selection of options in drop-down menus 1438 is such that data visualization system 100 of FIG. 1 imparts vertical motion 1418 on root 1410.
FIG. 14E shows a time series of images of the tooth from FIG. 14B which illustrates how the data visualization system 100 of FIG. 1 can highlight a part of a graphical feature of a volumetric image by imparting motion to that part of the graphical feature relative to the remainder of the graphical feature. In this series of images, the data visualization system displays the tooth in various orientations, 1440, 1442, 1444, and 1446, respectively, moving the tooth as illustrated by motion arrows 1414 and 1416, which are similar to those shown in FIG. 14B. In addition, in orientations 1442, 1444, and 1446, respectively, the data visualization system 100 of FIG. 1 imparts motion 1450 on portion 1454 of root 1410. Portion 1454 of root 1410 is contained within the dashed box 1452 in image 1440. In each of these orientations, the dashed lines 1456 correspond to initial positions of the portion 1454 of root 1410 relative to the remainder of root 1410. Furthermore, in each orientation, i.e., when viewed by the reader from multiple perspectives, root 1410, crown 1412, and the relative motion of portion 1454 of root 1410, are displayed.
In the case of portion 1454 of root 1410 in FIG. 14E, a human user interacting with the data visualization system 100 of FIG. 1 could select the portion 1454 of root 1410 by creating dashed box 1452 within a user interface window of the data visualization system 100 of FIG. 1. Dashed box 1452 could be created by performing a click-and-drag operation with a mouse, for example. Alternatively, data visualization system 100 of FIG. 1 identifies portion 1454 of root 1410 in FIG. 14E using a computer-executable pattern recognition or signal processing algorithm. In some embodiments, such an identification may not involve user input.
In an alternative embodiment, data visualization system 100 of FIG. 1 could impart motion on a part of a graphical feature of a volumetric image relative to the remainder of the graphical feature, as well as impart motion on the part itself relative to the remainder of the image. With respect to FIGS. 14B and 14E, an example of such a display would be one in which data visualization system 100 of FIG. 1 imparts motion 1450 on portion 1454 of root 1410 and simultaneously imparts vertical motion 1418 on root 1410.
In a further embodiment, data visualization system 100 of FIG. 1 could impart a first motion on the entire displayed volumetric image, while simultaneously imparting a second motion on a graphical feature of the volumetric image relative to the remainder of the volumetric image. With respect to FIGS. 14C and 14D, an example would be one in which data visualization system 100 of FIG. 1 imparts rotational motion 1416 (of FIGS. 14A, 14B, and 14E) on the entire tooth, and simultaneously imparts vertical motion 1418 on root 1410. Although not shown in FIGS. 14C and 14D, a human user interacting with the data visualization system 100 of FIG. 1 could select a motion to impart on the entire displayed volumetric image via interactions with user interface windows 1432 and 1434 in either of these figures.
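The combination of a global motion on the entire image with a relative motion on one graphical feature can be modeled as a composition of per-frame transforms: the feature's relative offset is applied in the image's own frame before the global transform. The sketch below uses a 2-D point for simplicity; the function and parameter names are illustrative assumptions, not terms from the described system.

```python
import math

def feature_position(p, t, omega_global, amp, freq):
    """Position of point p = (x, y) at time t when the whole image rotates
    at angular rate omega_global and the feature additionally oscillates
    vertically with amplitude amp and frequency freq."""
    # Relative vertical oscillation, applied in the image's own frame.
    x, y = p[0], p[1] + amp * math.sin(2 * math.pi * freq * t)
    # Then the global rotation of the entire image.
    theta = omega_global * t
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

print(feature_position((1.0, 0.0), t=0.0, omega_global=0.5, amp=0.2, freq=1.0))
# At t = 0 both motions are at their rest phase, so the point is unchanged.
```

Ordering matters here: applying the relative oscillation before the global rotation keeps the feature's motion attached to the (rotating) image, which matches the described behavior of a feature moving relative to the remainder of the image while the whole image also moves.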
This approach to representing graphical data is advantageous because, although the user could, by careful attention, identify the bounds of the root on the tooth in FIGS. 14A-14E, automatically determining the bounds of the root and displaying the root in motion cause it to "jump out" at the human viewer. In some instances, e.g., assuming that portion 1454 of root 1410 is diseased, it may be desirable to have data visualization system 100 of FIG. 1 display this graphical feature of the root such that it "stands out" from the rest of the root. This approach could be important for purposes of planning a medical procedure, or for verifying that the data visualization system 100 of FIG. 1 has correctly identified the diseased portion of the root. In addition, it would be beneficial to display this part of the root in a manner such that the originally presented information, i.e. root 1410 and crown 1412, for example, is not obscured. In particular, any given 2-dimensional slice of the tooth would not provide a full understanding of where the diseased portion of the root is located. Furthermore, by using motion-based visualization for the root, one reduces the risk of misidentifying the diseased portion 1454, or the extent thereof. For instance, if one were to highlight the diseased portion 1454 using a different color or texture instead of imparting motion on the diseased portion 1454, the other displayed parts, e.g., root 1410 and crown 1412, of the tooth may be obscured. In the illustrations of FIGS. 14A-14E, the relative motions imparted on graphical features of the image by data visualization system 100 of FIG. 1 may also be vibrations - harmonic or random. For example in FIG. 14E, portion 1454 of root 1410 could, for example, have vertical or lateral vibrational motion relative to the remainder of root 1410.
And, the motion is not necessarily a change in position from some rest position; it can, for instance, be a small change in shape, such as a rhythmic contraction or expansion of portion 1454 of root 1410 in FIG. 14E.
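A rhythmic contraction and expansion of this kind can be sketched as a periodic scaling of a feature's vertices about its centroid, so the feature changes shape without changing its rest position. The function name, amplitude, and frequency below are illustrative assumptions, not part of the described system.

```python
import math

def pulsate(vertices, centroid, t, amp=0.05, freq=1.0):
    """Rhythmically contract/expand a feature by scaling its vertices
    about the feature centroid - motion as a change of shape rather
    than a displacement from a rest position."""
    scale = 1.0 + amp * math.sin(2 * math.pi * freq * t)
    cx, cy = centroid
    return [((x - cx) * scale + cx, (y - cy) * scale + cy)
            for x, y in vertices]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(pulsate(square, centroid=(1, 1), t=0.25))  # peak expansion at t = 0.25
```

Because the centroid is a fixed point of the scaling, the feature stays anchored in place while it "breathes", which is what distinguishes this shape-change motion from the positional motions discussed above.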
FIGS. 15A-15E are a series of medical images displayed on display 108 of data visualization system 100 in FIG. 1, according to an illustrative embodiment of the invention. This series of images is illustrative of volumetric images that can be processed from 2-dimensional data and displayed by data visualization system 100 in FIG. 1. FIGS. 15A and 15B illustrate the application of a medical imaging technique in which 2-dimensional medical images or "slices" are received by data visualization system 100 of FIG. 1 and processed to create and display a composite volumetric medical image. FIGS. 15C and 15D show displays of graphical features of the image, and illustrate how data visualization system 100 of FIG. 1 can impart motion on graphical features of the image to highlight those graphical features. In addition, FIG. 15D shows a user interface employed by the data visualization system 100 in FIG. 1 for controlling the display and/or motion of image graphical features. Graphical features of displayed images depicted with dashed lines in FIGS. 15C and 15D correspond to initial positions of those graphical features relative to the displayed image to aid the reader in discerning imparted motion depicted in the figures. The data visualization system 100 of FIG. 1 need not display these lines.
FIG. 15A shows the application of a medical imaging technique in which 2-dimensional medical images or "slices" of a human skull 1500 are captured and displayed by data visualization system 100 of FIG. 1. Human skull 1500 contains, among other graphical features, a tumor 1506 and brain matter 1504. The slices could be captured by a medical imaging device that is part of data visualization system 100 of FIG. 1. For instance, the medical imaging device could be a CAT scan machine or an MRI machine, which produce CAT scan images or MRI images, respectively. The medical imaging device of data visualization system 100 of FIG. 1 captures slices 1502 of human skull 1500. In FIG. 15A, six slices, 1502a, 1502b, 1502c, 1502d, 1502e, and 1502f, respectively, are captured and displayed. The number of slices captured may vary, i.e., in some instances as few as 1 or 2 slices may be captured, while in other instances, up to 100 slices may be captured. Each slice displayed by data visualization system 100 of FIG. 1 represents a particular cross-section of human skull 1500, and contains information about tissue density across the cross section. For example, assuming that the tissue density of tumor 1506 is substantially different than brain matter 1504, slices 1502a and 1502f could represent portions of human skull 1500 vertically above and below, respectively, tumor 1506, and thus, do not contain any part of the tumor. In contrast, slices 1502b, 1502c, 1502d, and 1502e, each contain a particular portion of tumor 1506. Data visualization system 100 of FIG. 1 displays each of the slices in 2 dimensions in FIG. 15A, but need not display the schematic of slices 1502 within human skull 1500.
FIG. 15B illustrates the application of a medical image processing technique in which a composite volumetric medical image of a part of the human skull of FIG. 15A is created and displayed on data visualization system 100 of FIG. 1. The data visualization system 100 of FIG. 1 assigns each of the two-dimensional image slices 1502a, 1502b, 1502c, 1502d, 1502e, and 1502f, respectively, of FIG. 15A a certain depth (or height), such that the slices now have 3 dimensions (left side of FIG. 15B). Each of these slices contains information about tissue density across the cross section and may contain a part of tumor 1506 and/or brain matter 1504. The data visualization system 100 of FIG. 1 then displays this stack of slices as a composite volumetric image (right side of FIG. 15B). The human skull 1500 is depicted merely for reference and need not be displayed by the data visualization system 100 of FIG. 1.
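The stacking of 2-dimensional slices into a composite volumetric image can be sketched with NumPy, assuming each slice arrives as a 2-D array of tissue-density values; the slice size and count here are arbitrary illustrative choices.

```python
import numpy as np

# Six 2-D "slices" (e.g. CT cross-sections), each 8x8 pixels of
# tissue-density values. Stacking them along a new depth axis yields
# a composite volumetric image: each slice's assigned depth becomes
# its index along axis 0.
slices = [np.random.rand(8, 8) for _ in range(6)]
volume = np.stack(slices, axis=0)   # shape: (depth, rows, cols)

print(volume.shape)  # (6, 8, 8)
```

In practice the inter-slice spacing reported by the scanner would set the physical depth assigned to each slice, and interpolation between slices could fill the volume more densely than the raw acquisition.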
FIG. 15C shows a set of slices of the skull from FIG. 15B which illustrate how the data visualization system 100 of FIG. 1 can highlight a graphical feature within each slice of a volumetric image by imparting motion to that graphical feature relative to the slice. In this series of images, the data visualization system 100 of FIG. 1 displays the slices in two orientations, 1510 and 1520, respectively, and imparts motion on tumor 1506 within each slice as illustrated by motion arrows 1530 and 1540. Specifically, data visualization system 100 of FIG. 1 imparts lateral motion on tumor 1506 within each slice 1502a, 1502b, 1502c, 1502d, 1502e, and 1502f, respectively. In each of the orientations 1510 and 1520, the dashed lines 1507 or 1508 correspond to initial positions of tumor 1506 relative to the rest of the slice.
The quick determination and display of a tumor in a volumetric image by data visualization system 100 of FIG. 1 in this manner is significantly advantageous. Although the user could, by careful attention, identify the bounds of tumor 1506 in FIGS. 15A-15C, having data visualization system 100 of FIG. 1 automatically determine the bounds of the tumor and display the tumor in motion in 3 dimensions causes it to "jump out" at the human viewer. It is also beneficial to see tumor 1506 displayed volumetrically in relation to the other anatomical portions in human skull 1500. Furthermore, the data visualization system 100 of FIG. 1 can do so without obscuring originally presented information, for example, human skull 1500 or brain matter 1504.
FIG. 15D is a diagram of a user interface window 1550 generated by the data visualization system 100 of FIG. 1 for controlling the display and/or motion of graphical features of volumetric images. The composite volumetric images, or the individual slices, of FIGS. 15A-15C could be displayed in user interface window 1550. In FIG. 15D, user interface window 1550 of data visualization system 100 of FIG. 1 is divided into two sub-windows: window 1560 containing the composite volumetric image of human skull 1500 from FIG. 15A, and window 1570 containing a graphical user interface in which a human user, interacting with data visualization system 100 of FIG. 1 via user interface 110, may select various options for displaying and/or imparting motion on the graphical features of the image in window 1560.
A human user interacting with data visualization system 100 of FIG. 1 can use graphical user interface window 1570 to choose whether to display the human skull 1500 (not displayed in FIG. 15D), brain matter 1504, or tumor 1506, by selecting or de-selecting the appropriate boxes 1584, 1582, or 1580, respectively. Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1). Alternatively, a user could click directly on the image in window 1560 to select graphical features for display. In FIG. 15D, display boxes 1580 and 1582 are selected or marked "x" such that data visualization system 100 of FIG. 1 displays brain matter 1504 and tumor 1506.
In addition, a human user interacting with data visualization system 100 of FIG. 1 can choose what kind of motion to impart to human skull 1500, brain matter 1504, or tumor 1506 by selecting an option in the drop-down menus 1590, 1592, and 1594 respectively. Examples of motion imparted by data visualization system 100 of FIG. 1 could be lateral motion, vertical motion, horizontal motion, circular motion, full- or partial-rotation motion, or no motion ("none"). Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1). Alternatively, a user could click or drag directly on the image in window 1560 of data visualization system 100 of FIG. 1 to select graphical features visually representing portions of data on which the data visualization system will impart motion. In FIG. 15D, for example, the selection of options in drop-down menus 1590, 1592, and 1594 is such that no motion is to be imparted on human skull 1500 (not displayed in FIG. 15D) or brain matter 1504, while lateral motion 1540 is to be imparted on tumor 1506. Thus, the data visualization system 100 of FIG. 1 imparts motion 1540 on tumor 1506 relative to brain matter 1504. The initial position of tumor 1506 is depicted with dashed lines 1508.
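One plausible way to realize such per-feature motion selection is a table mapping each graphical feature to a time-dependent display offset; features assigned "none" stay fixed while the selected feature oscillates. All names, amplitudes, and frequencies below are illustrative assumptions, not part of the described system.

```python
import math

# Candidate motions a drop-down menu might offer, each a function of
# time t (in periods) returning an (dx, dy) display offset in pixels.
MOTIONS = {
    "none":     lambda t: (0.0, 0.0),
    "lateral":  lambda t: (3.0 * math.sin(2 * math.pi * t), 0.0),
    "vertical": lambda t: (0.0, 3.0 * math.sin(2 * math.pi * t)),
}

# The user's selections, mirroring FIG. 15D: no motion for the skull
# and brain matter, lateral motion for the tumor.
selection = {"skull": "none", "brain": "none", "tumor": "lateral"}

def offsets_at(t):
    """Per-feature (dx, dy) display offsets at time t."""
    return {name: MOTIONS[motion](t) for name, motion in selection.items()}

print(offsets_at(0.25))  # only the tumor is displaced
```

On each animation frame the renderer would draw every feature at its rest position plus its current offset, so the tumor oscillates laterally relative to the static brain matter.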
In an alternative embodiment, data visualization system 100 of FIG. 1 could impart a first motion on the entire displayed composite volumetric image, while simultaneously imparting a second motion on a graphical feature of the volumetric image relative to the remainder of the volumetric image. With respect to FIGS. 15C-15D, an example would be one in which data visualization system 100 of FIG. 1 imparts a rotational motion on the entire composite image, and simultaneously imparts lateral motion on tumor 1506. Although not shown in FIG. 15D, a human user interacting with the data visualization system 100 of FIG. 1 could select a motion to impart on the entire displayed volumetric image via interactions with user interface windows 1560 or 1570.
The data visualization system 100 of FIG. 1 could be used to identify a graphical feature visually representing a portion of a volumetric image by determining bounds of selected subject matter based on user input. Such a determination could be made by a human user using a mouse click, keyboard query, or a cursor brush to select an appropriate graphical feature, e.g., tumor 1506, after which the data visualization system automatically determines the bound of the selected graphical feature based on the human user input. Alternatively, a user may interact with data visualization system 100 of FIG. 1 to enter a query (112 in FIG. 1), e.g., enter a range of tissue density corresponding to tumor tissue density, and this query would result in the data visualization system 100 in FIG. 1 identifying and displaying graphical features of the volumetric image with this range of tissue density. Alternatively, data visualization system 100 of FIG. 1 identifies a graphical feature representing a portion of a volumetric image using a computer-executable pattern recognition or signal processing algorithm. Such an identification may not involve user input.
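The query-driven identification described above, e.g., entering a range of tissue density characteristic of tumor tissue, can be sketched as a simple range threshold over the voxel values. The density values below are arbitrary illustrative numbers, not clinical data.

```python
import numpy as np

def select_by_density(volume, lo, hi):
    """Mask of voxels whose density lies in the user-queried range
    [lo, hi] - e.g. a range characteristic of tumor tissue."""
    return (volume >= lo) & (volume <= hi)

# A tiny 2x2x2 volume of made-up tissue densities; suppose densities
# in [0.9, 1.0] correspond to the tumor in the query.
volume = np.array([[[0.20, 0.90], [0.50, 0.95]],
                   [[0.92, 0.10], [0.30, 0.97]]])
mask = select_by_density(volume, 0.9, 1.0)
print(int(mask.sum()))  # 4 voxels fall in the queried density range
```

The resulting mask identifies the graphical feature: the system could then render the masked voxels as a separate feature and impart motion on it relative to the remainder of the volumetric image.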
In the illustrations of FIGS. 15C and 15D, the relative motions imparted by data visualization system 100 of FIG. 1 may also be vibrations - harmonic or random. For example in FIGS. 15C and 15D, tumor 1506 could, for example, have vertical or lateral vibrational motion relative to the remainder of the image. And, the motion is not necessarily a change in position from some rest position; it can, for instance, be a small change in shape, such as a rhythmic contraction or expansion of tumor 1506 in FIGS. 15C and 15D.
FIGS. 16A-16D are a set of architectural drawings displayed on display 108 of data visualization system 100 in FIG. 1, according to an illustrative embodiment of the invention. This series of images is illustrative of data that can be processed and displayed by data visualization system 100 in FIG. 1. FIG. 16A is a set of architectural drawings in which multiple graphical features of a building, for example, the water supply and electrical system, are displayed. FIGS. 16B and 16C show displays of the architectural drawings that illustrate how data visualization system 100 of FIG. 1 can impart motion on graphical features visually representing portions of the drawings to highlight those graphical features. FIG. 16D shows a user interface employed by the data visualization system 100 in FIG. 1 for controlling the display and/or motion of image portions. Graphical features of displayed images depicted with dashed lines in FIGS. 16B-16D correspond to initial positions of those graphical features relative to the displayed image to aid the reader in discerning imparted motion depicted in the figures. The data visualization system 100 of FIG. 1 need not display these lines.
FIG. 16A shows a set of architectural drawings, examples of volumetric images displayed by data visualization system 100 of FIG. 1. Graphical data from which the volumetric images were formed may have been captured by a device for generating architectural drawings that is part of data visualization system 100 of FIG. 1. Data visualization system 100 of FIG. 1 displays multiple graphical features of the architectural drawings, for example, the water supply 1608, electrical system 1606, furnishings 1616, and fixtures 1610. In particular, architectural drawing 1600, on the left of FIG. 16A, illustrates two floors, 1602 and 1604, of a building, while architectural drawings 1612 and 1614, on the right of FIG. 16A, show a floor 1604 of drawing 1600 from a top-view and a side-view, respectively.
FIG. 16B shows a time series of images of architectural drawing 1600 from FIG. 16A which illustrates how the data visualization system 100 of FIG. 1 can highlight a graphical feature visually representing a portion of a volumetric image by imparting relative motion to that graphical feature. In this series of images, the data visualization system displays drawing 1600 in various states, 1630, 1640, and 1650, respectively. In states 1640 and 1650, data visualization system 100 of FIG. 1 imparts motion 1660 on water supply 1608. In each of states 1640 and 1650, the dashed lines 1609 and 1611 correspond to initial positions of water supply 1608 relative to the rest of the architectural drawing.
FIG. 16C shows a time series of images of architectural drawings 1612 and 1614 from FIG. 16A which illustrates how the data visualization system 100 of FIG. 1 can highlight a graphical feature visually representing a portion of a volumetric image by imparting relative motion to that graphical feature. In this series of images, the data visualization system displays drawings 1612 and 1614 in various states, 1635, 1645, and 1655, respectively. In states 1645 and 1655, data visualization system 100 of FIG. 1 imparts lateral motion 1680 on water supply 1608. In each of states 1645 and 1655, the dashed lines 1619, 1613, 1615, and 1617, correspond to initial positions of water supply 1608 relative to the rest of the architectural drawing. Note that FIG. 16C is an illustration of the same motion imparted by data visualization system 100 of FIG. 1 as in FIG. 16B, but viewed from a different user point-of-view or perspective.
The quick determination and display of a particular graphical feature in an architectural drawing in this manner is advantageous. Although the user could, by careful attention, identify the water supply 1608 in FIG. 16B or 16C, having data visualization system 100 of FIG. 1 automatically determine and display water supply 1608 in motion causes it to "jump out" at the human viewer. Furthermore, the system can do so without obscuring originally presented information, for example, electrical system 1606 or fixtures 1610.
FIG. 16D is a diagram of a user interface window 1690 generated by the data visualization system 100 of FIG. 1 for controlling the display and/or motion of graphical features of volumetric images. The architectural drawings of FIGS. 16A-16C could be displayed in user interface window 1690. In FIG. 16D, user interface window 1690 of data visualization system 100 of FIG. 1 is divided into two sub-windows: window 1694 containing the architectural drawing in state 1655 of FIG. 16C, and window 1692 containing a graphical user interface in which a human user, interacting with data visualization system 100 of FIG. 1 via user interface 110, may select various options for displaying and/or imparting motion on the graphical features of the image in window 1694.
With respect to FIG. 16D, a human user interacting with data visualization system 100 of FIG. 1 can use graphical user interface window 1692 to choose whether to display, among others, water supply 1608, electrical system 1606, fixtures 1610, furnishings 1616, or any combination thereof, by selecting or de-selecting the appropriate boxes 1636. Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1). Alternatively, a user could click directly on the image in window 1694 to select graphical features for display. In FIG. 16D, display boxes 1636 are selected or marked "x" such that data visualization system 100 of FIG. 1 displays water supply 1608, fixtures 1610, electrical system 1606, furnishings 1616, and structure 1622 of drawing 1655 in window 1694. The display of a selected graphical feature may be accomplished by, for example, the data visualization system 100 of FIG. 1 determining and displaying all the parts of the architectural drawing image which are similar in appearance to the user-selected graphical feature(s), e.g., water supply 1608. In some embodiments, data visualization system 100 of FIG. 1 identifies graphical features in the architectural drawings in FIGS. 16A-16D using a computer-executable pattern recognition or signal processing algorithm. Such an identification may not involve user input.
In addition, a human user interacting with data visualization system 100 of FIG. 1 can choose what kind of motion to impart to water supply 1608, fixtures 1610, electrical system 1606, furnishings 1616, and structure 1622, by selecting an option in the drop-down menus 1638, respectively. Examples of motion imparted by data visualization system 100 of FIG. 1 could be lateral motion, vertical motion, or circular motion as shown in expanded drop-down menu 1642. Such a selection could be carried out using a mouse (114 in FIG. 1) or a keyboard query (112 in FIG. 1). Alternatively, a user could click or drag directly on the image in window 1694 of data visualization system 100 of FIG. 1 to select graphical features on which the data visualization system will impart motion. In FIG. 16D, for example, the selection of options in drop-down menus 1638 is such that lateral motion 1680 is to be imparted on water supply 1608 and no motion is to be imparted on electrical system 1606. Thus, the data visualization system 100 of FIG. 1 imparts motion 1680 on water supply 1608 relative to architectural drawing 1655. The initial position of water supply 1608 is depicted with dashed lines 1613. In a further embodiment, data visualization system 100 of FIG. 1 could impart a first motion on the entire volumetric image, while simultaneously imparting a second motion on a graphical feature of the volumetric image relative to the remainder of the volumetric image. With respect to FIGS. 16B-16D, an example would be one in which data visualization system 100 of FIG. 1 imparts a rotational motion on the entire volumetric image, and simultaneously imparts lateral motion on water supply 1608. Although not shown in FIG. 16D, a human user interacting with the data visualization system 100 of FIG. 1 could select a motion to impart on the entire displayed volumetric image via user interface windows 1692 or 1694.
In the illustrations of FIGS. 16B-16D, the relative motions imparted by data visualization system 100 of FIG. 1 may also be vibrations - harmonic or random. For example, water supply 1608 could, for example, have vertical or lateral vibrational motion relative to the remainder of the architectural drawing. And, the motion is not necessarily a change in position from some rest position; it can, for instance, be a small change in shape, such as a rhythmic contraction or expansion of water supply 1608 in FIGS. 16B-16D.
Although the invention has been particularly shown and described above with reference to illustrative embodiments, alterations and modifications thereof may become apparent to those skilled in the art. For example, the data visualization system may be used to display security screening images in which particular graphical features of the image are displayed with motion imparted on them. The data processed by the data visualization system may include a mechanical engineering drawing, e.g., of a bridge design, an automotive design and test evaluation drawing, a geological model, e.g., a model of the earth in which identified natural resource deposits move relative to other underground features, such as ground water, soil strata, etc. Furthermore, the data visualization system may be used to display data that may not naturally lend themselves to 3-dimensional displays, e.g., epidemiological data in which certain demographic data, e.g., age, income, and weight, are mapped to x-y-z dimensions of a volumetric image and other data, such as height, blood pressure, or lung capacity, are mapped to color, brightness, or another visual parameter. In general, the data visualization system may be used in any domain in which one could use a 2-dimensional, 3-dimensional, or volumetric image for displaying and analyzing data, and in which one is interested in locating logical collections within the data set image relative to the whole data set. It is therefore intended that the following claims cover all such alterations and modifications as fall within the true spirit and scope of the present invention.

CLAIMS:
1. A security screening system comprising: a security screening device for outputting graphical data about an object being screened; a display; a memory for storing the graphical data output by the screening device; and a processor in communication with the memory and the display configured for: retrieving the graphical data from the memory; displaying the graphical data stored in the memory in a plurality of layers overlaid over one another on the display; and imparting a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight data representing suspicious materials present in the object depicted in the first layer.
2. The system of claim 1, wherein the security screening device includes a plurality of image sources, wherein each image source generates graphical data corresponding to a respective layer of the plurality of displayed layers.
3. The system of claim 1, wherein the processor is configured to generate the plurality of displayed layers from the retrieved graphical data such that the data included in each respective layer shares a common characteristic with other data in the layer.
4. The system of claim 1, wherein the processor is configured to impart a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of data within the first layer.
5. A medical image analysis system comprising: a medical imaging device for outputting graphical data representing characteristics of a body structure being imaged; a display; a memory for storing the graphical data output by the medical imaging device; and a processor configured for: retrieving the graphical data from the memory; displaying the graphical data stored in the memory in a plurality of layers overlaid over one another; and imparting a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight characteristics of a body structure represented in the first layer.
6. The system of claim 5, wherein the medical imaging device includes a plurality of image sources, wherein each image source generates graphical data corresponding to a respective layer of the plurality of displayed layers.
7. The system of claim 5, wherein the processor is configured to generate the plurality of displayed layers from the graphical data retrieved from the memory such that the data included in each respective layer shares a common characteristic with other data in the layer.
8. The system of claim 5, wherein the processor is configured to impart a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of the body structure represented within the first layer.
9. A method for displaying graphical data comprising: receiving graphical data; displaying a plurality of layers of graphical data overlaid over one another; imparting a first motion to a first of the displayed layers relative to a remainder of the displayed layers to highlight data represented in the first layer; and imparting a localized motion on a first area of the first layer that is different than a localized motion imparted on a second area of the first layer to visually distinguish characteristics of data within the first layer.
10. The method of claim 9, wherein at least one of the plurality of displayed layers is at least partially transparent.
11. The method of claim 9, wherein at least one layer in the plurality of displayed layers includes image data generated by a different imaging source than used to generate image data for a second layer in the plurality of displayed layers.
12. The method of claim 11, wherein the different imaging sources capture image data using different imaging techniques.
13. The method of claim 9, comprising: receiving data for display, and generating the plurality of displayed layers from the received data, wherein the data in each respective layer in the plurality of displayed layers shares a common characteristic.
14. The method of claim 9, wherein the first layer comprises an image projected on an array of geometric shapes, and imparting the localized motion comprises shifting vertices of geometric shapes in the first area.
15. The method of claim 9, comprising receiving an input from a user identifying data to be highlighted, and determining the imparted motion in response to the user input.
16. A method for analyzing data having at least three dimensions comprising: receiving data for display; displaying a volumetric image incorporating a plurality of graphical features visually representing portions of the data; imparting a first motion to a first of the graphical features relative to a remainder of the volumetric image to highlight the first graphical feature.
17. The method of claim 16, wherein the data corresponds to medical image data.
18. The method of claim 16, wherein the medical image data is captured using different imaging techniques.
19. The method of claim 16, wherein the data corresponds to security screening data.
20. The method of claim 16, wherein the data corresponds to architectural drawing data.
21. The method of claim 16, wherein the three-dimensional data is obtained by analyzing a set of data having at least two dimensions.
22. The method of claim 16, further comprising imparting a second motion on a second graphical feature of the volumetric image that is different than the first motion to visually distinguish the two graphical features from one another.
23. The method of claim 16, further comprising imparting a second motion on the entire volumetric image that is different than the first motion to visually distinguish the first graphical feature of the volumetric image from a remainder of the volumetric image when viewed from multiple perspectives.
24. The method of claim 16, further comprising imparting a localized motion to a first part of the first graphical feature of the volumetric image to visually distinguish the first part of the first graphical feature of the volumetric image from a remainder of the volumetric image.
25. The method of claim 16, further comprising receiving an input from a user.
26. The method of claim 25, further comprising identifying the first graphical feature by determining bounds of selected subject matter based on the user input.
27. A system for displaying a volumetric image comprising: a user interface; a display; a memory for storing graphical data; a processor in communication with the memory and the display configured for: retrieving graphical data from the memory; displaying a volumetric image incorporating a plurality of graphical features visually representing portions of the data on the display; processing input from the user interface to identify a first of the displayed graphical features; and imparting a first motion to the first identified graphical feature relative to a remainder of the volumetric image to highlight the first graphical feature.
28. The system of claim 27, further comprising a medical imaging device for outputting the graphical data stored in the memory.
29. The system of claim 27, further comprising a security screening device for outputting the graphical data stored in the memory.
30. The system of claim 27, further comprising a device for generating architectural drawings and outputting the graphical data stored in the memory.
31. The system of claim 27, wherein the processor is configured for obtaining three-dimensional data by analyzing a set of data having at least two dimensions.
32. The system of claim 27, wherein at least one part of the graphical data is received from a different source than used for a remainder of the graphical data.
33. The system of claim 27, wherein the processor is configured for imparting a second motion on a second graphical feature of the volumetric image that is different than the first motion to visually distinguish the two graphical features from one another.
34. The system of claim 27, wherein the processor is configured for imparting a second motion on the entire volumetric image that is different than the first motion to visually distinguish the first graphical feature of the volumetric image from a remainder of the volumetric image when viewed from multiple perspectives.
35. The system of claim 27, wherein the processor is configured for imparting a localized motion to a first part of the first graphical feature of the volumetric image to visually distinguish the first part of the first graphical feature of the volumetric image from a remainder of the volumetric image.
36. The system of claim 27, wherein the user interface receives an input from a user.
37. The system of claim 36, wherein the user input comprises one of a query, a mouse click, and a cursor brush.
38. The system of claim 36, wherein the processor is configured for identifying the first graphical feature by determining bounds of selected subject matter based on the user input.
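Claims 16 and 27 recite imparting a motion to an identified graphical feature relative to the remainder of the volumetric image so as to highlight it. As a rough illustration of that idea only — not the patented implementation; the function name, the point-cloud representation of the volume, and every parameter below are invented for this sketch — a per-frame displacement of the selected feature might be computed as follows:

```python
import numpy as np

def impart_motion(points, feature_mask, t, amplitude=0.5, freq=2.0):
    """Return a copy of `points` (an N x 3 array standing in for a
    volumetric image) in which the points selected by `feature_mask`
    are displaced along the x-axis by a small sinusoidal offset at
    time `t`, while the remainder of the volume stays stationary."""
    moved = points.copy()
    moved[feature_mask, 0] += amplitude * np.sin(2.0 * np.pi * freq * t)
    return moved

# A toy 3-D point cloud; the mask marks the "first graphical feature".
points = np.zeros((4, 3))
feature = np.array([True, False, True, False])

# One animation frame: at t = 0.125 with freq = 2.0 the phase is pi/2,
# so the selected points are offset by the full amplitude (0.5).
frame = impart_motion(points, feature, t=0.125)
```

Rendering this loop over increasing `t` would make the selected feature oscillate against a static background, which is the visual-highlighting effect the claims describe; a second feature could be given a different `freq` or displacement axis to distinguish two features from one another, as in claims 22 and 33.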
PCT/US2008/013884 2007-12-20 2008-12-19 Motion-based visualization WO2009108179A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11/961,242 US7629986B2 (en) 2003-11-05 2007-12-20 Motion-based visualization
US11/961,242 2007-12-20
US12/169,934 US8941680B2 (en) 2008-07-09 2008-07-09 Volumetric image motion-based visualization
US12/169,934 2008-07-09

Publications (2)

Publication Number Publication Date
WO2009108179A2 true WO2009108179A2 (en) 2009-09-03
WO2009108179A3 WO2009108179A3 (en) 2009-10-22

Family

ID=40935544

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/013884 WO2009108179A2 (en) 2007-12-20 2008-12-19 Motion-based visualization

Country Status (1)

Country Link
WO (1) WO2009108179A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2656241A4 (en) * 2010-12-24 2017-06-07 FEI Company Reconstruction of dynamic multi-dimensional image data

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6509906B1 (en) * 1999-04-29 2003-01-21 Autodesk, Inc. Display representations and streams for objects having authorable and dynamic behaviors and appearances
WO2004095378A1 (en) * 2003-04-24 2004-11-04 Koninklijke Philips Electronics N.V. Combined 3d and 2d views
US6856329B1 (en) * 1999-11-12 2005-02-15 Creative Technology Ltd. Automated acquisition of video textures acquired from a digital camera for mapping to audio-driven deformable objects
US20050093867A1 (en) * 2003-11-05 2005-05-05 Bobrow Robert J. Motion-based visualization
US20050147283A1 (en) * 2003-11-10 2005-07-07 Jeff Dwyer Anatomical visualization and measurement system
US7116749B2 (en) * 2003-06-25 2006-10-03 Besson Guy M Methods for acquiring multi spectral data of an object
US20070257912A1 (en) * 2006-05-08 2007-11-08 Dmitriy Repin Method for locating underground deposits of hydrocarbon including a method for highlighting an object in a three dimensional scene

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A.C. ROBINSON: "Highlighting techniques to support geovisualization", ICA Workshop on Geovisualization and Visual Analytics, Portland, OR, 2006, pages 1-18, XP002542115 *
BARTRAM: "Filtering and brushing with motion", Information Visualization, Palgrave Macmillan Ltd, vol. 1, no. 1, 1 January 2002, pages 66-79, XP009121472, ISSN: 1473-8716 *
E.B. LUM, A. STOMPEL, KWAN-LIU MA: "Using motion to illustrate static 3D shape - Kinetic visualization", IEEE Trans. on Visualization and Computer Graphics, vol. 9, no. 2, 2003, pages 115-126, XP002542114 *
G. RAMOS, G. ROBERTSON, M. CZERWINSKI, D. TAN, P. BAUDISCH, K. HINCKLEY, MANEESH AGRAWALA: "Tumble! Splat! Helping users access and manipulate occluded content in 2D drawings", Proceedings of the Working Conference on Advanced Visual Interfaces, 2006, pages 428-435, XP002542116 *
KRUECKER ET AL: "Fusion of real-time trans-rectal ultrasound with pre-acquired MRI for multi-modality prostate imaging", Proceedings of the SPIE - The International Society for Optical Engineering, SPIE, Bellingham, WA, USA, vol. 6509, 21 March 2007, pages 650912/1-650912/12, XP009113917, ISSN: 0277-786X *

Also Published As

Publication number Publication date
WO2009108179A3 (en) 2009-10-22

Similar Documents

Publication Publication Date Title
US7629986B2 (en) Motion-based visualization
US7280122B2 (en) Motion-based visualization
US7280105B2 (en) Occlusion reducing transformations for three-dimensional detail-in-context viewing
Wong et al. Multiresolution multidimensional wavelet brushing
JPH08305525A (en) Apparatus and method for display of information
US20100118049A1 (en) Motion-based visualization
Vehlow et al. Visualizing dynamic hierarchies in graph sequences
Andrienko et al. Visual exploration of the spatial distribution of temporal behaviors
Delort Vizualizing large spatial datasets in interactive maps
US6967653B2 (en) Apparatus and method for semi-automatic classification of volume data
US8941680B2 (en) Volumetric image motion-based visualization
US20100042938A1 (en) Interactive Navigation of a Dataflow Process Image
Kreylos et al. Point-based computing on scanned terrain with LidarViewer
WO2009108179A2 (en) Motion-based visualization
Anderson et al. Voyager: an interactive software for visualizing large, geospatial data sets
Aoyama et al. TimeLine and visualization of multiple-data sets and the visualization querying challenge
Jubair et al. Icosahedral Maps for a Multiresolution Representation of Earth Data.
Hsieh et al. Visual analytics of terrestrial lidar data for cliff erosion assessment on large displays
Gerstner et al. A case study on multiresolution visualization of local rainfall from weather radar measurements
Liu et al. Visualizing acoustic imaging of hydrothermal plumes on the seafloor
Stoppel et al. Graxels: Information Rich Primitives for the Visualization of Time-Dependent Spatial Data.
Palma et al. Enhanced visualization of detected 3d geometric differences
Ghanbari Visualization Overview
Healey et al. Vistre: A visualization tool to evaluate errors in terrain representation
Thomas et al. Topological Visualisation Techniques for Volume Multifield Data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08872747

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08872747

Country of ref document: EP

Kind code of ref document: A2