JP2018533099A - Data visualization system and method using three-dimensional display - Google Patents

Data visualization system and method using three-dimensional display

Info

Publication number
JP2018533099A
JP2018533099A (Application JP2018502734A)
Authority
JP
Japan
Prior art keywords
3d
data
visualization
3d object
data visualization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2018502734A
Other languages
Japanese (ja)
Inventor
Stanislav G. Djorgovski
Ciro Donalek
Scott Davidoff
Vicente Estrada
Original Assignee
California Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US62/232,119 priority Critical
Priority to US62/365,837 priority
Application filed by California Institute of Technology
Priority to PCT/US2016/053842 priority patent/WO2017054004A1/en
Publication of JP2018533099A publication Critical patent/JP2018533099A/en
Application status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/36Image preprocessing, i.e. processing the image information without deciding about the identity of the image
    • G06K9/46Extraction of features or characteristics of the image
    • G06K9/4671Extracting features based on salient regional features, e.g. Scale Invariant Feature Transform [SIFT] keypoints
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62Methods or arrangements for recognition using electronic means
    • G06K9/6267Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • G06T11/206Drawing of charts or graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/40Hidden part removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/225Television cameras; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
    • H04N5/2256Television cameras or cameras comprising an electronic image sensor provided with illuminating means
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling

Abstract

A data visualization system and method for generating a 3D visualization of a multidimensional data space is described. In one preferred embodiment, a 3D data visualization application instructs the processing system to: load a set of multidimensional data points into a visualization table; create a representation of a set of 3D objects corresponding to the set of data points; receive a mapping that maps data dimensions to attributes of the 3D objects; determine visualization attributes for the set of 3D objects based on the selected mapping; update, for each of the plurality of 3D objects, the visibility dimension in the visualization table to reflect the visibility of each 3D object based on the selected mapping that maps data dimensions to visualization attributes; and interactively render a 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input.

Description

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH This invention was made with government support under grant number HCC0917814 awarded by the National Science Foundation. The US government has certain rights in this invention. The invention described herein was also made in the performance of work under NASA (National Aeronautics and Space Administration) contract No. NNN12AA01C, and is subject to the provisions of Public Law 96-517 (35 USC 202), under which the contractor has elected to retain title to the invention.

The present invention relates generally to data visualization, and more specifically to the visualization of complex, multidimensional data using 3D display technologies including (but not limited to) virtual reality (VR) displays, mixed reality (MR) displays, and augmented reality (AR) displays.

  Data visualization generally refers to techniques used to communicate data or information by encoding it as visual objects that can be displayed by a computer. Visualization is an essential component of any data analysis and/or data mining process. In many instances, graphical representations of the geometry and distribution of data can yield further insight and guide the selection of appropriate analysis tools for interpreting the results. In the "big data" era, the main bottleneck in extracting actionable knowledge from high-dimensional data sets is, in many cases, the inability of users to visualize patterns in four or more dimensions.

Computer displays typically display information in two dimensions (2D). However, various three-dimensional (3D) display technologies are emerging that simulate depth through a number of different visual effects, including stereoscopic viewing, in which images rendered from different perspectives are displayed separately to the left and right eyes. These two images are then combined in the brain to give a perception of 3D depth. A number of head-mounted 3D display technologies are currently available. The paper titled "A Taxonomy of Mixed Reality Visual Displays" by Paul Milgram and Fumio Kishino, published in December 1994 in IEICE Transactions on Information Systems, Vol. E77-D, No. 12 (Non-patent Document 1), incorporated herein by reference in its entirety, proposes the concept of a "virtuality continuum" relating to the mixture of classes of objects presented in any particular display context, in which the real environment constitutes one end of the continuum and the virtual environment constitutes the other end. In this paper, Milgram and Kishino explain the difference between virtual and mixed reality environments as follows:
"The conventionally held view of a Virtual Reality (VR) environment is one in which the participant-observer is totally immersed in, and able to interact with, a completely synthetic world. Such a world may mimic the properties of some real-world environments, either existing or fictional; however, it can also exceed the bounds of physical reality by creating a world in which the physical laws ordinarily governing space, time, mechanics, material properties, etc. no longer hold. What may be overlooked in this view, however, is that the VR label is also frequently used in association with a variety of other environments, to which total immersion and complete synthesis do not necessarily pertain, but which fall somewhere along a virtuality continuum. In this paper we focus on a particular subclass of VR related technologies that involve the merging of real and virtual worlds, which we refer to collectively as Mixed Reality (MR)." (Paul Milgram and Fumio Kishino, "A taxonomy of mixed reality visual displays", IEICE Transactions on Information Systems, 77.12 (1994), p. 1321)

  Within the mixed reality domain, a further distinction can be made between augmented reality (AR) and mixed reality (MR). Both AR and MR displays can be implemented using transparent display technology and/or by capturing images of a scene and using the captured images to render a display that combines the real-world scene with virtual objects. AR is commonly used to describe 3D display techniques that display virtual objects providing contextual information about a real-world scene; that is, AR often refers to experiences in which real-world objects are augmented or supplemented by computer-generated sensory input. MR generally involves the merging of the real and virtual worlds to create new environments and visualizations in which real and virtual objects coexist and interact in real time.

  AR, MR, and VR displays all share the similar goal of immersing the user, whether partially or wholly, in a virtual environment. AR and MR allow users to interact with the virtual objects around them while remaining in touch with the real world. In VR, users are immersed in a completely synthetic world, isolated from the real world.

Paul Milgram and Fumio Kishino, "A Taxonomy of Mixed Reality Visual Displays", IEICE Transactions on Information Systems, Vol. E77-D, No. 12, December 1994

  Humans have an excellent pattern recognition system and obtain more information through vision than through all other senses combined. Visualization provides the ability to comprehend large amounts of data by mapping abstract information onto more easily understood visual elements. Systems and methods according to various embodiments of the present invention increase the ability of an observer to explore higher dimensional data and observe complex patterns in the data. Humans are biologically optimized for viewing the surrounding world, and the patterns within it, in three dimensions. Therefore, presenting data on a 3D display as a visualization of multidimensional data (i.e., data in which the number of displayed dimensions is three or more) can reveal meaningful structures in the data (e.g., clusters, correlations, outliers); such structures can contain usable knowledge, often exist in higher dimensional spaces, and are not easily observable with traditional 2D data visualization techniques. In addition, immersive AR, MR, and VR environments naturally support collaborative data visualization and exploration, helping scientists interact with data together with colleagues who share the virtual space.

  In discussing data visualization, a distinction should be made between the dimensionality of the graphic display (e.g., printed paper or a flat screen is typically a 2D device, whereas a VR/AR headset is typically a 3D display) and the dimensionality of the data. The dimensionality of the data is the number of features/quantities/parameters associated with each data item (e.g., a row in a spreadsheet is a single data item, and the number of columns is its dimensionality). As a further illustration, a data set whose entries have 3 columns is three dimensional, and a data set with 20 columns is 20 dimensional. Either data set can be represented on a 3D display device. An additional distinction is the dimensionality of the data space in which the data is rendered or visualized. Up to three dimensions (three axes) of such a data visualization space can be (truly) spatial; additional dimensions can be encoded by the color, transparency, shape, and size of the data points. In this way, four or more data dimensions can be visualized in a multidimensional data space on a 3D display device. If the data set has N dimensions, a subset of k of those dimensions can be visualized at any given time, where k ≦ N. If k > 3, up to three dimensions can be encoded as spatial positions (XYZ) in the data visualization space, and the rest can be represented by data point features such as color, size, and shape. In a scatter plot, each data item (data point) is represented at some spatial coordinates (XYZ) as a separate geometric object, e.g., a point, square, or sphere, and other visible features (e.g., color, size, etc.) encode additional data dimensions. The challenge is to maximize the number of simultaneously visible data dimensions k that can be readily understood by humans.
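The encoding scheme described above can be sketched in code. The following is a hypothetical illustration only (the function and names are not part of the claimed system): of k selected data dimensions, up to three become spatial XYZ coordinates and the remainder become per-point visual features such as size and color, normalized for rendering.

```python
def encode_dimensions(rows, selected_dims):
    """Map k selected columns (k <= N) of a data set to visual attributes:
    up to three dimensions become spatial XYZ coordinates, the rest become
    per-point features such as size, color, and shape."""
    attrs = {}
    # first up to three selected dimensions -> spatial coordinates
    for axis, dim in zip("xyz", selected_dims[:3]):
        attrs[axis] = [row[dim] for row in rows]
    # remaining selected dimensions -> non-spatial visual features
    for feat, dim in zip(["size", "color", "shape"], selected_dims[3:]):
        col = [row[dim] for row in rows]
        lo, hi = min(col), max(col)
        # normalize to [0, 1] so the renderer can scale the feature
        attrs[feat] = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col]
    return attrs
```

For example, selecting four of eight available dimensions yields XYZ positions plus a size channel, matching the k ≦ N scheme described above.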

  In many preferred embodiments, a three-dimensional data visualization system can provide data visualizations in a variety of display contexts. In certain preferred embodiments, multidimensional data is rendered into a 3D visualization space that can be viewed, navigated, and manipulated using a conventional 2D display device (e.g., a flat screen). In many preferred embodiments, optimized rendering of 10 or more data dimensions is used to generate a 3D data visualization of a multidimensional data space. In some preferred embodiments, an enhanced intuitive understanding of the multidimensional data space can be provided when the 3D data visualization system displays the multidimensional data space using a 3D display device (e.g., a VR/AR headset). Immersion in a multidimensional data space using an immersive 3D display can enhance the human ability to comprehend the geometric shapes that may exist in the data and their relationships (clusters, correlations, outliers, deviations, gaps, etc.) compared to conventional data visualization methods involving the use of 2D displays.

  One preferred embodiment of the present invention includes: a display device; and a computer system that includes a memory containing a 3D data visualization application and a processing system. The 3D data visualization application instructs the processing system to: load a set of data points into a visualization table in memory, where each data point contains values in multiple data dimensions, and assign a visibility value to each data point in the visualization table in an added visibility dimension; generate a representation of a set of 3D objects corresponding to the set of data points, where each 3D object has a set of visualization attributes that define how the 3D object is rendered, and these visualization attributes include the position of the 3D object in a virtual space having three spatial dimensions; receive a mapping that maps data dimensions to attributes of the 3D objects; determine visualization attributes for the set of 3D objects based on the selected mapping that maps data dimensions to attributes of the 3D objects, where the selected mapping determines the positions of the 3D objects in the virtual space; update, for each of the plurality of 3D objects, the visibility dimension in the visualization table to reflect the visibility of each 3D object based on the selected mapping that maps data dimensions to visualization attributes; and interactively render a 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input.
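A minimal sketch of the visualization table described in this embodiment, assuming a simple in-memory representation (the class and method names are illustrative, not part of the claimed system): each entry carries the data point's dimension values plus an added visibility dimension that a mapping step can update.

```python
class VisualizationTable:
    """Toy visualization table: data points plus an added visibility dimension."""

    def __init__(self, points):
        # each row: the raw data-dimension values plus a visibility value
        self.rows = [{"data": p, "visible": True} for p in points]

    def apply_mapping(self, dim, predicate):
        """Update the visibility dimension from a filter on one data dimension
        (e.g. a mask mapped to that dimension)."""
        for row in self.rows:
            row["visible"] = predicate(row["data"][dim])

    def visible_points(self):
        """Data points whose 3D objects should currently be rendered."""
        return [row["data"] for row in self.rows if row["visible"]]


table = VisualizationTable([(0.1, 5.0), (0.9, 2.0), (0.4, 7.0)])
table.apply_mapping(0, lambda v: v > 0.3)  # hide points with dimension 0 <= 0.3
```

The renderer would then draw 3D objects only for `visible_points()`, consistent with the visibility-update step described above.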

  In another preferred embodiment, the display device is a 3D display device, and interactively rendering the 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input includes rendering a stereo image displayed by the display device.

  In another preferred embodiment, the 3D data visualization application is implemented using a 3D rendering engine.

  In yet another preferred embodiment, the implementation of the 3D data visualization application further relies on a script executed by the 3D rendering engine.

  In still another preferred embodiment, the visualization attributes include at least one attribute selected from the group including: X coordinate, Y coordinate, Z coordinate, shape, size, color palette, color map, color scale, transparency, ID (identification), URL (uniform resource locator), mask, displayability, 3D object motion, sound generation, haptic feedback, and vibrotactile feedback.

  In yet another preferred embodiment, receiving a mapping that maps a data dimension to a visualization attribute further comprises receiving a user selection of a mapping that maps the data dimension to a visualization attribute.

  In yet another preferred embodiment, receiving the mapping that maps the data dimension to the visualization attribute further includes retrieving a stored set of mappings that map the data dimension to the visualization attribute.

  In another preferred embodiment, interactively rendering a 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input includes: generating at least one group of 3D objects based on the visualization attributes of a plurality of visible 3D objects; and interactively rendering a 3D data visualization of the at least one group of 3D objects in virtual space from a viewpoint determined based on received user input.

  In another preferred embodiment, interactively rendering a 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input includes: in response to user input, modifying a 3D object that forms part of the virtual environment in the virtual space while the 3D objects corresponding to the set of data points remain stationary in the virtual space, so that the modification produces a change in the appearance of the virtual environment within the 3D data visualization; and rendering the visible 3D objects corresponding to the set of data points together with the 3D objects forming part of the virtual environment.

  In yet a further preferred embodiment, modifying, in response to user input, a 3D object that forms part of the virtual environment in the virtual space includes at least one change selected from the group including: changing, in response to a user command, the size of a 3D object that forms part of the virtual environment, thereby generating the impression that the 3D objects corresponding to the set of data points are changing size relative to the virtual environment; moving, in response to a user command, the position of a 3D object that forms part of the virtual environment, thereby generating the impression that the 3D objects corresponding to the set of data points are moving relative to the virtual environment; and moving, in response to a user command, the position of a 3D object that forms part of the virtual environment, thereby generating the impression that the 3D objects corresponding to the set of data points are rotating relative to the virtual environment.
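The "modify the environment, not the data" technique above can be illustrated with a small sketch, under the assumption that the desired apparent transform of the stationary data is achieved by applying the inverse transform to the environment (the function and parameter names are illustrative):

```python
def environment_update(env_scale, env_pos, data_scale=1.0,
                       data_offset=(0.0, 0.0, 0.0)):
    """Return new (scale, position) for the virtual environment so that the
    stationary data objects appear scaled by data_scale and shifted by
    data_offset relative to the environment."""
    # scaling the environment by 1/s makes stationary data appear s times larger
    new_scale = env_scale / data_scale
    # moving the environment by -d makes stationary data appear to move by +d
    new_pos = tuple(p - o for p, o in zip(env_pos, data_offset))
    return new_scale, new_pos
```

For example, to give the impression that the data has doubled in size, the environment is shrunk to half scale while the data objects themselves never move.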

  In yet another preferred embodiment, interactively rendering a 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input includes: illuminating at least a portion of the 3D objects with a directional illumination source that emits light from the user's viewpoint; and rendering the illuminated 3D objects based on the user's viewpoint.

  In yet another preferred embodiment, illuminating at least a portion of the 3D objects includes: determining a field of view; illuminating the 3D objects within the user's field of view with a directional illumination source that emits light from the user's viewpoint; and rendering the illuminated 3D objects within the user's field of view.

  In yet another preferred embodiment, interactively rendering a 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input includes: rotating at least a portion of the 3D objects based on the user's viewpoint so that the appearance of each rotated 3D object is unchanged with respect to the user's viewpoint; and rendering the rotated 3D objects based on the user's viewpoint.
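This behavior resembles the billboarding technique common in 3D graphics. A minimal sketch, assuming a yaw-only rotation in the XZ plane toward the camera (the function is illustrative, not the claimed implementation):

```python
import math

def billboard_yaw(obj_pos, cam_pos):
    """Yaw angle (radians) that rotates an object about the Y axis so it
    faces the camera in the XZ plane, keeping its apparent orientation
    fixed with respect to the viewpoint."""
    dx = cam_pos[0] - obj_pos[0]
    dz = cam_pos[2] - obj_pos[2]
    return math.atan2(dx, dz)
```

Each frame, the renderer would re-apply this yaw per object so the object's visible face always points at the user, regardless of how the user moves.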

  In yet another preferred embodiment, interactively rendering a 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input includes: determining the position of at least one interaction primitive in the virtual space based on the user's viewpoint; and rendering the at least one interaction primitive based on the user's viewpoint.

  Yet another additional preferred embodiment includes determining the transparency of at least one interaction primitive based on a user's viewpoint.

  In yet another additional preferred embodiment, the 3D object includes a 3D object having a shape that preserves depth perception.

  In yet another preferred embodiment, the depth-preserving shape is characterized by a first dimension that is invariant and a second dimension that is a visualization attribute that changes based on the mapped data dimension.

  In still another preferred embodiment, at least one of the depth-perception-preserving shapes is a bullet shape.

  In another additional preferred embodiment, receiving a mapping that maps data dimensions to visualization attributes includes: receiving a selection of a target feature; determining the importance of at least a subset of the plurality of data dimensions with respect to the target feature; and generating a mapping that maps data dimensions of high importance to specific visualization attributes.

  In another additional preferred embodiment, determining the importance of at least a subset of the plurality of data dimensions with respect to the target feature includes: identifying data dimensions that are numerical and data dimensions that are categorical; generating a mapping that maps numerical data dimensions of high importance to a first set of visualization attributes; and generating a mapping that maps categorical data dimensions of high importance to a second set of visualization attributes.
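A hypothetical sketch of this importance-driven mapping (the importance scores are assumed to come from some external estimator, which the patent does not prescribe): numerical dimensions of high importance are assigned to the first set of attributes (X, Y, Z, size), and the highest-importance categorical dimension to the second set (shape).

```python
def build_mapping(importance, is_categorical):
    """Assign data dimensions to visualization attributes by importance.

    importance: dict mapping dimension name -> importance score (assumed input)
    is_categorical: dict mapping dimension name -> bool
    """
    numeric = sorted((d for d in importance if not is_categorical[d]),
                     key=importance.get, reverse=True)
    categorical = sorted((d for d in importance if is_categorical[d]),
                         key=importance.get, reverse=True)
    mapping = {}
    # first set of visualization attributes: spatial position and size
    for attr, dim in zip(["x", "y", "z", "size"], numeric):
        mapping[attr] = dim
    # second set of visualization attributes: shape
    if categorical:
        mapping["shape"] = categorical[0]
    return mapping
```

With five dimensions, the four most important numerical ones land on X, Y, Z, and size, and the most important categorical one on shape.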

  In yet another preferred embodiment, the first set of visualization attributes includes an X position, a Y position, a Z position, and a size.

  In yet another additional preferred embodiment, the second set of visualization attributes includes a shape.

  In yet another additional preferred embodiment, the 3D data visualization application further instructs the processing system to: receive at least one updated mapping that maps a data dimension to a visualization attribute; determine updated visualization attributes for the set of 3D objects based on the updated mapping, where the updated mapping determines the updated position of each visible 3D object in the virtual space; generate, for the set of visible 3D objects, a trajectory from each 3D object's position in the virtual space to its updated position in the virtual space; and interactively render, from a viewpoint determined based on received user input, an animation of the 3D objects moving along the generated trajectories from their positions in the virtual space to their updated positions in the virtual space.
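A minimal sketch of the trajectory generation described above, assuming simple linear interpolation between old and new positions (the patent does not limit trajectories to linear paths; the function name is illustrative):

```python
def trajectory(start, end, n_frames):
    """Linearly interpolated positions from start to end (inclusive),
    giving the per-frame path a 3D object follows when a mapping update
    moves it to a new position."""
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # interpolation parameter in [0, 1]
        frames.append(tuple(s + t * (e - s) for s, e in zip(start, end)))
    return frames
```

The animation step would then render one frame per interpolated position; staggering the start frame or frame count per group of objects yields the varied start times and speeds described in the following embodiments.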

  In yet another preferred embodiment, the 3D data visualization application further instructs the processing system to determine, for each of the plurality of 3D objects, an updated visibility value reflecting the visibility of each 3D object based on the updated mapping.

  In yet another preferred embodiment, interactively rendering the animation of the movement of the 3D objects along the generated trajectories further includes varying the times at which different sets of 3D objects begin to move along their trajectories in the rendered animation.

  In yet another additional preferred embodiment, the times at which different sets of 3D objects begin to move along their trajectories in the rendered animation are determined based on user input.

  In yet another additional preferred embodiment, interactively rendering the animation of the movement of the 3D objects along the generated trajectories includes varying the speeds at which different sets of 3D objects move along their trajectories in the rendered animation.

  In yet another additional preferred embodiment, interactively rendering a 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input includes: determining the position of at least one affordance in the virtual space, where user input indicating movement of the 3D data visualization onto the at least one affordance initiates a modification of the 3D data visualization; detecting movement of the 3D data visualization onto one of the at least one affordance; modifying the 3D data visualization based on that affordance; and rendering the modified 3D data visualization based on the user's viewpoint.

  In yet another additional preferred embodiment, changing the 3D data visualization based on one of the at least one affordance includes changing the size of the 3D data visualization.

  In yet another additional preferred embodiment, modifying the 3D data visualization based on one of the at least one affordance includes: applying a data analysis process to the set of data points in the visualization table corresponding to the 3D objects visualized in the 3D data visualization; modifying visualization attributes of the visualized 3D objects based on at least one result of the data analysis process; and rendering a modified 3D data visualization, including the modified visualization attributes of the 3D objects, based on the user's viewpoint.

  In yet another additional preferred embodiment, the data analysis process is a clustering process.

  In yet another additional preferred embodiment, modifying the 3D data visualization based on one of the at least one affordance includes rendering a new 3D data visualization of the set of data points represented by at least one 3D object that has been selected in the 3D data visualization and moved onto one of the at least one affordance.

  Yet another additional preferred embodiment further includes an input device having an elongated handle and an input button. The 3D data visualization application further instructs the processing system to: obtain pose input and button state input from the input device; modify the 3D data visualization in a manner dependent on the state of the user interface based on the received pose input and button state; and render the modified 3D data visualization based on the user's viewpoint.

  In yet another additional preferred embodiment, modifying the 3D data visualization in a manner dependent on the state of the user interface based on the received pose input and button state includes: determining the position of the 3D data visualization in the virtual space based on the button state indicating that the button is not pressed; and rotating the 3D data visualization in the virtual space based on the button state indicating that the button is pressed.
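A hedged sketch of this interaction (all class and method names are illustrative, not the claimed implementation): when the button is not pressed, the controller pose sets the visualization's position; while pressed, changes in the controller's yaw are accumulated as rotation of the visualization.

```python
class GrabInteraction:
    """Toy controller interaction: reposition when the button is up,
    grab-and-rotate while the button is held."""

    def __init__(self):
        self.rotation = 0.0
        self.position = (0.0, 0.0, 0.0)
        self._last_yaw = None  # yaw at the previous pressed-button update

    def update(self, pose_position, pose_yaw, button_pressed):
        if not button_pressed:
            # button up: the pose determines where the visualization sits
            self.position = pose_position
            self._last_yaw = None
        else:
            # button held: accumulate yaw deltas as rotation of the visualization
            if self._last_yaw is not None:
                self.rotation += pose_yaw - self._last_yaw
            self._last_yaw = pose_yaw
```

Resetting `_last_yaw` on release prevents a jump in rotation when the user grabs the visualization again from a different controller orientation.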

  In another preferred embodiment, the memory further includes avatar metadata, the avatar metadata including a set of visualization attributes that define how the avatar is rendered, these visualization attributes including a viewpoint in the virtual space; and interactively rendering the 3D data visualization of the 3D objects in virtual space from a viewpoint determined based on received user input includes rendering the avatar within the 3D data visualization based on the viewpoint and the avatar metadata.

  In another preferred embodiment, the avatar metadata further includes pose information, and rendering the avatar within the 3D data visualization based on the viewpoint and the avatar metadata includes rendering the avatar's pose within the 3D data visualization based on the pose information included in the avatar metadata.

FIGS. 1A to 1E are diagrams illustrating a set of eight clusters whose data centers are laid out at the corners of a virtual cube.
FIGS. 2A to 2D are diagrams illustrating, in accordance with various embodiments of the invention, the different perspectives a user can obtain by moving through a 3D visualization of a multidimensional data space, and additional visualization attributes for visualizing additional data dimensions.
FIGS. 3A to 3E are diagrams illustrating visualizations of a 3D graph from multiple viewpoints in accordance with embodiments of the invention, in which the data is visualized as a 3D line plot.
FIGS. 4A to 4E are diagrams illustrating visualizations of a 3D graph from multiple viewpoints in accordance with embodiments of the invention, in which the data is visualized as a 3D surface.
FIGS. 5A to 5D are diagrams illustrating the rendering of a 3D graph from the different viewpoints of different users in accordance with an embodiment of the invention.
FIG. 5E is a diagram illustrating a 3D data visualization showing the avatars of several users in a virtual space in accordance with an embodiment of the invention.
FIGS. 6A and 6B conceptually illustrate systems for generating a 3D visualization of a multidimensional data space in accordance with embodiments of the invention.
FIG. 7 conceptually illustrates a multidimensional data visualization computer system implemented on a single computing device in accordance with an embodiment of the invention.
A flowchart illustrating a process for generating a visualization of multidimensional data in accordance with an embodiment of the invention.
A flowchart illustrating a process for rendering a 3D data visualization using a group of 3D objects in accordance with an embodiment of the invention.
Diagrams illustrating 3D visualizations of a multidimensional data space in accordance with various embodiments of the invention, in which data dimensions are mapped to the shape and size attributes of 3D objects.
A diagram showing a small set of 3D object shapes designed to remain recognizable even in high-density plots.
FIG. 11A is a diagram showing the change in appearance of a 3D object having a spherical shape when viewed from different directions under fixed illumination by three stationary point light sources.
A diagram showing the same 3D object viewed from the same viewpoint as in FIG. 11A, where the 3D object is illuminated by a directional light source emitted from the user's viewpoint (or its vicinity).
Diagrams illustrating, in accordance with various embodiments of the invention, the consistent appearance of similar 3D objects achieved by changing object illumination with the user's pose as the user moves through the virtual space.
A flowchart illustrating the updating of the illumination of 3D objects (or the vertices or surfaces of 3D objects) when the user's field of view in the virtual space changes, in accordance with an embodiment of the invention.
A diagram conceptually illustrating the directional illumination of the vertices or surfaces of a number of 3D objects and/or groups of 3D objects in a virtual space.
A diagram illustrating a 3D graph including interaction primitives in the form of grids, axes, and axis labels generated by a 3D data visualization system in accordance with an embodiment of the invention.
A diagram illustrating a user interface in accordance with an embodiment of the invention, showing recommendations for mapping specific data dimensions to specific attributes of the 3D objects visible in a 3D data visualization.
FIGS. 16A to 16D are a series of data visualizations in accordance with an embodiment of the invention, in which the X attribute of the 3D data objects is changed from a first data dimension ("age") to a second data dimension ("years of service").
FIGS. 17A to 17F are diagrams illustrating affordances in a VR user interface that allow a user to control the size of a 3D data visualization within a virtual world generated by a 3D data visualization system in accordance with various embodiments of the invention.

DETAILED DESCRIPTION
Data visualization systems capable of visualizing multidimensional data as 3D graphs (i.e., 3D data visualization systems) and methods for generating visualizations of a multidimensional data space in accordance with a number of embodiments of the invention use 3D display technology to meet many of the challenges of effective interactive high-dimensional data visualization. Here, the term 3D graph is used in a general sense to refer to a 3D object, or a group of 3D objects, that collectively describes a set of data. The 3D objects that make up a 3D graph can be distinguished from the other 3D objects used within a 3D visualization of multidimensional data to represent a virtual environment containing the 3D graph. Current data visualization techniques largely require the user to perceive an environment displayed on a flat screen by looking at it from the outside. Systems and methods in accordance with some embodiments of the invention allow the visualization of more complex data spaces by using 3D display techniques to place the user inside the visualization, making data visualization a first-person experience and expanding the user's ability to interpret additional dimensions. This approach engages human senses such as proprioception (how people perceive the relative position of their body parts) and kinesthesia (how people perceive their own body movements) that characterize the experience of the human body in an external environment.

  Presenting high-dimensional data in a three-dimensional visualization is complex and may involve representing structure in the data using subtle variations in 3D object features such as size, shape, and/or texture. Movement and depth perception can confound some of these visual stimuli, and this confounding can be compounded in environments in which the manner of rendering 3D objects for 3D display introduces perceived changes in color, shading, and/or apparent size that are independent of the underlying data dimensions. 3D data visualization systems in accordance with a number of embodiments of the invention utilize techniques that preserve the similar appearance of similar 3D objects within the user's field of view and that enhance the user's ability to distinguish size changes caused by changes in a 3D object's size attribute from size changes caused by differences in distance to the 3D object; such techniques include (but are not limited to) shape selection and illumination models. In many embodiments, the user's ability to perceive structure in the data is further enhanced by utilizing animations that allow the user to observe changes in the attributes of the 3D objects corresponding to particular data points as the visualization transitions from a 3D visualization of one multidimensional data space to a 3D visualization of a different multidimensional data space.

  The ease of use of 3D data visualization systems in accordance with many embodiments of the invention can be enhanced by providing affordances in the 3D user interface that the user can employ to automatically change the rendering of the high-dimensional data within the 3D data visualization. In some embodiments, the user simply drags the 3D visualization of the multidimensional data space onto an affordance to perform a particular action (e.g., changing the size of the 3D data visualization, or performing k-means clustering of the data points). Data is visualized as distinct objects (data points) in the 3D data visualization, and four or more data dimensions can be emphasized through the properties of the individual data points (e.g., color, size, shape, etc.).
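The drag-onto-affordance interaction described above can be pictured as a handler that, when the visualization is dropped onto a clustering affordance, runs k-means over the point coordinates and writes the resulting cluster label into a color attribute. This is a minimal sketch, not the patented implementation: the affordance names, the dictionary-based visualization record, and the plain-NumPy Lloyd's iteration below are all assumptions made for illustration.

```python
import numpy as np

def kmeans(points, k, iterations=20, seed=0):
    """Plain Lloyd's algorithm: returns one cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def on_drop(visualization, affordance, points):
    """Hypothetical affordance handler: dropping the visualization onto a
    clustering affordance recolors the data points by cluster label."""
    if affordance == "kmeans":
        labels = kmeans(points, k=8)
        visualization["color"] = labels  # color channel now encodes cluster
    elif affordance == "resize":
        visualization["scale"] *= 2.0
    return visualization

viz = {"color": None, "scale": 1.0}
pts = np.random.default_rng(1).normal(size=(100, 3))
viz = on_drop(viz, "kmeans", pts)
```

The point of routing the action through the affordance rather than a menu is that the same drag gesture the user already employs to move the graph also triggers the analysis.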

  As will be readily appreciated, the ability to visualize data from a multidimensional data space in 3D opens up myriad possibilities for analyzing complex data, and 3D data visualization systems in accordance with a number of embodiments of the invention make it possible to perform data exploration collaboratively. In some embodiments, multiple users can independently explore the same shared virtual multidimensional data space, whether or not they are at the same physical location. In certain embodiments, a single user can lead a "broadcast" interactive session in which all users view the 3D data visualization space from a viewpoint controlled by the leading user. Multidimensional data visualization systems and processes (methods) for exploring complex data using multidimensional data visualizations in accordance with various embodiments of the invention are described further below.

3D Data Visualization
When dealing with complex data, 2D mappings often fail to reveal characteristic structure in the data. FIGS. 1A to 1E illustrate a set of eight clusters whose data centers are laid out at the corners of a virtual cube. The simple 2D projections shown in FIGS. 1A-1C cannot easily reveal all of the structure in the data. Visualizing the data in three spatial dimensions, as shown in FIGS. 1D and 1E, makes the cluster pattern much easier to identify. 3D data visualization systems in accordance with many embodiments of the invention provide users with the ability to interact directly with 3D visualizations, providing motion and parallax cues that allow the user to more clearly identify structure that may not be readily apparent from the viewpoint from which a particular 3D visualization is initially rendered. In other words, the ability of a user to easily shift viewpoint within a 3D visualization in real time can reveal visual cues that allow the user to explore the data space from different viewpoints, yielding additional insight into the data. The ability to visually observe structure can be particularly useful in settings where the data contains "outliers" that a human user can easily identify by visual inspection from one or more viewpoints but that defeat machine learning algorithms trained to identify structure in the data (e.g., k-means clustering).
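The cube-corner example of FIGS. 1A-1E can be reproduced numerically: eight tight Gaussian clusters centered on the corners of a unit cube are cleanly separable in 3D, but an axis-aligned 2D projection collapses pairs of corners onto each other, leaving only four apparent cluster centers. The cluster spread and point count below are assumptions chosen purely for illustration.

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(0)
corners = np.array(list(product([0.0, 1.0], repeat=3)))  # the 8 cube corners
points = np.vstack([c + rng.normal(scale=0.05, size=(50, 3)) for c in corners])

# In 3D, assigning each point to its nearest corner recovers the true clusters.
labels_3d = np.linalg.norm(
    points[:, None, :] - corners[None, :, :], axis=2
).argmin(axis=1)

# Projecting onto the XY plane discards Z, so corners differing only in Z
# (e.g., (x, y, 0) and (x, y, 1)) land on the same 2D location: only 4
# distinct cluster centers survive in the projection.
proj_corners = np.unique(corners[:, :2], axis=0)
```

With the corners one unit apart and a spread of 0.05, essentially every point sits nearest its own corner in 3D, while the 2D projection is structurally ambiguous no matter how the points are colored.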

  FIGS. 2A and 2B illustrate the different perspective views a user can obtain by navigating within a 3D visualization of a multidimensional data space (as opposed to being constrained to viewing data visualized in three spatial dimensions from outside the data space). As the user's viewpoint moves from the viewpoint shown in FIG. 2A toward the data of interest to reach the viewpoint shown in FIG. 2B, the structure within a specific subset of the data appears in greater detail. As described further below, 3D data visualization systems in accordance with various embodiments of the invention can support any of a number of different input modalities by which the user can provide instructions to control the zoom, relative position, and/or orientation of the 3D data visualization. The 3D data visualizations shown in FIGS. 2A and 2B are rendered by mapping data dimensions to features of 3D objects, and these features can include the visibility of a 3D object (some data points may not be shown based on filtering criteria), the position of the 3D object within the 3D data visualization, the size of the rendered 3D object, and/or the color of the 3D object. In certain embodiments, higher-dimensional visualizations can be generated by using data dimension mappings to define additional features of the 3D objects, which can include (but are not limited to) the shape used to render a 3D object, the texture of a 3D object, and/or the transparency of a 3D object. FIG. 2C illustrates a 3D visualization of the data set shown in FIG. 2B in which transparency is used to represent an additional data dimension. FIG. 2D illustrates a 3D visualization of the data set shown in FIG. 2B in which both transparency and texture are used to represent additional data dimensions. The representation of additional dimensions by selecting among different 3D shapes in accordance with various embodiments of the invention, including the use of shapes that remain recognizable at depth, is described below. In other embodiments, data dimensions can be mapped to non-visual aspects of the immersive experience, which can include (but are not limited to) movement, sound generation, haptic feedback, and/or vibrotactile feedback.
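A dimension-to-feature mapping of the kind just described can be pictured as a table that pairs each visual channel with a data dimension and a normalizing transfer function, producing one attribute record per data point. The channel names, the min-max normalization, and the dictionary interface below are illustrative assumptions, not the system's actual mapping scheme.

```python
import numpy as np

def normalize(values):
    """Map a data dimension onto [0, 1] for use as a visual attribute."""
    values = np.asarray(values, dtype=float)
    span = values.max() - values.min()
    return (values - values.min()) / span if span else np.zeros_like(values)

def apply_mappings(records, mappings):
    """records: dict of dimension name -> list of values.
    mappings: dict of attribute name ('x', 'size', 'alpha', ...) -> dimension name.
    Returns one attribute dict per data point."""
    channels = {attr: normalize(records[dim]) for attr, dim in mappings.items()}
    n = len(next(iter(channels.values())))
    return [{attr: channels[attr][i] for attr in channels} for i in range(n)]

records = {"age": [25, 40, 55], "salary": [30_000, 60_000, 90_000], "tenure": [1, 5, 9]}
mappings = {"x": "age", "size": "salary", "alpha": "tenure"}  # illustrative channels
objects = apply_mappings(records, mappings)
```

Remapping a dimension to a different channel (e.g., moving "tenure" from alpha to size) only requires editing the `mappings` dictionary and re-running the same function; the data itself is untouched.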

A limitation inherent in illustrating 3D data visualizations on 2D paper is that the 3D data visualizations shown in FIGS. 2A-2D are two-dimensional projections of the underlying 3D data visualizations. 3D data visualization systems in accordance with many embodiments of the invention provide interactive 3D visualizations that permit interaction and motion parallax, both of which are lost in the projections used to generate FIGS. 2A-2D. Accordingly, video sequences illustrating interactive data exploration within a 3D data visualization generated by a 3D data visualization system in accordance with an embodiment of the invention are available at:
http://www.virtualitics.com/patent/Virtualitics1.mp4, and
http://www.virtualitics.com/patent/Virtualitics2.mp4
and a 3D video sequence of the same interactive session is available at:
http://www.virtualitics.com/patent/Virtualitics3.mp4
Comparing the 2D video sequences with the 3D video sequence provides a sense of the effect of motion parallax on interpreting the structure of the data used by the 3D data visualization system to generate the 3D data visualization.
http://www.virtualitics.com/patent/Virtualitics1.mp4,
http://www.virtualitics.com/patent/Virtualitics2.mp4, and
http://www.virtualitics.com/patent/Virtualitics3.mp4
The 2D and 3D video sequences found at the above locations are hereby incorporated by reference in their entirety.

  While most of the description that follows relates to the generation of interactive multidimensional visualizations created by rendering 3D objects in a virtual space, systems and methods in accordance with a number of embodiments of the invention can generate visualizations of multidimensional data using a variety of different techniques for representing the data. In some embodiments, the 3D data visualization can include 3D line plots (see, e.g., FIGS. 3A and 3B) and/or 3D surfaces (see, e.g., FIGS. 4A and 4B). FIGS. 3A and 3B illustrate visualizations of a 3D graph from multiple viewpoints in which the data is visualized as a series of 3D line plots; for comparison, two-dimensional projections of these 3D line plots are also shown. FIGS. 4A and 4B illustrate visualizations of a 3D graph from multiple viewpoints in which the data is visualized as a surface; for comparison, two-dimensional projections of these 3D surfaces are also shown. Accordingly, systems and methods in accordance with different embodiments of the invention are not limited to specific types of 3D data visualization and can be used to generate any of a variety of 3D data visualizations. Systems and methods for performing 3D data visualization that enable a user's cognitive systems to interpret and interact with high-dimensional data in accordance with various embodiments of the invention are described further below.

3D Data Visualization Systems
3D data visualization systems in accordance with certain embodiments of the invention can be configured for exploration of a 3D graph by a single user or by multiple users. In some embodiments, a 3D data visualization system includes a 3D rendering engine that maps data dimensions to 3D virtual objects, which the 3D rendering engine then renders for visualization in a virtual space. Machine vision systems and/or sensor systems can be utilized to track the pose of one or more users and, more specifically, each user's head position. The head position can be used to determine the viewpoint from which a 3D display of the virtual space is rendered for each user interacting with the data in the virtual space. When multiple users collaborate within a single virtual space, the users' head positions and/or poses can be used both to render the 3D display presented to each user and to render each user's avatar within the data space.

  The rendering of a 3D graph from the different viewpoints of different users in accordance with an embodiment of the invention is conceptually illustrated in FIGS. 5A-5D. FIG. 5E illustrates a 3D visualization showing the avatars of several users in a virtual space in accordance with an embodiment of the invention. The illustrated 3D data visualization shows a 3D graph 500 in which data points are visualized as 3D objects 502, and the viewpoints from which other users are exploring the virtual space are shown as avatars 504 and 506. As described below, the ability of a user to determine his or her position within the virtual space can be enhanced by providing intuitive interaction primitives such as grid lines 508 and axis labels 510. In many embodiments, collaborating users can move through the virtual space independently, or a set of users can experience the same visualization of the virtual space controlled by a single user interacting with it. As can readily be appreciated, the particular collaborative exploration modes supported by a 3D data visualization system depend largely on the requirements of a given application.

  A multidimensional data visualization system that can be used to generate visualizations of multidimensional data in three spatial dimensions for a user, and/or to facilitate collaborative exploration of multidimensional data by multiple users within such a 3D space, in accordance with an embodiment of the invention is illustrated in FIG. 6A. The 3D data visualization system 600 includes a 3D data visualization computer system 602 configured to communicate with a 3D display 604; in the illustrated embodiment, the 3D display 604 is a head-mounted display.

  The 3D data visualization computer system 602 can also be connected to a camera system 606, which can be used to capture image data of the user from which the user's pose and/or head position can be determined. The camera system can also serve as an input modality for detecting gesture-based input. Additional and/or alternative input modalities can be provided, including (but not limited to) user input devices and microphones for detecting voice input. The camera system can include any of a variety of camera systems capable of capturing image data from which a user's pose can be determined, including (but not limited to) conventional cameras, time-of-flight cameras, structured-light cameras, and/or multi-view stereo cameras. The term pose can be used to describe any representation of both a user's position and orientation in a multidimensional space. A simple representation of pose is a head position and a viewing direction. More complex pose representations can describe the user's body position using the joint positions of an articulated skeleton. As can readily be appreciated, the specific representation of pose and/or the camera system utilized within a given 3D data visualization system 600 depends largely on the requirements of the specific application.
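The simple pose representation mentioned above (a head position plus a viewing direction) is already enough to derive the view transform used to render that user's display. The look-at construction below is standard computer graphics practice, shown only as a sketch of how tracked pose could feed a renderer; it is not the patent's specific implementation.

```python
import numpy as np

def look_at(eye, direction, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 view matrix from a head position and a viewing direction."""
    f = np.asarray(direction, float)
    f = f / np.linalg.norm(f)                        # forward axis
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)                        # right axis
    u = np.cross(r, f)                               # true up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    # Translate the world so the head position becomes the origin.
    view[:3, 3] = -view[:3, :3] @ np.asarray(eye, float)
    return view

# A head pose one unit behind the origin, looking down -Z at the data.
view = look_at(eye=(0.0, 0.0, 1.0), direction=(0.0, 0.0, -1.0))
origin_in_view = (view @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]
```

As the tracked head position and viewing direction change each frame, recomputing this single matrix is all that is needed to re-render the virtual space from the user's new viewpoint.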

  In many embodiments, the 3D data visualization computer system 602, the 3D display 604, and the camera system 606 form an integrated device. For example, the 3D data visualization computer system 602, the 3D display 604, and the camera system 606 can be realized in a head-mounted display such as (but not limited to) the HoloLens® marketed by Microsoft Corporation of Redmond, Washington. In other embodiments, the 3D data visualization computer system 602 and the 3D display 604 can communicate via a wired and/or wireless connection, for example using an Oculus Rift® 3D display marketed by Oculus VR, LLC of Menlo Park, California. As can readily be appreciated, the 3D data visualization can be configured as a collection of virtual objects displayed in a mixed reality setting using an MR headset (e.g., HoloLens®) and/or in a fully immersive environment using a 3D display for VR (e.g., Oculus®).

  In certain embodiments, the 3D data visualization computer system can utilize distributed processing. In many embodiments, at least some of the processing associated with rendering the 3D data visualization is performed by a processor within the head-mounted display. In some embodiments, additional processing is performed by a local computer system that communicates with the head-mounted display. In many embodiments, processing is performed by a remote computer system (e.g., computing resources within a cloud computing cluster) that communicates with the head-mounted display via the Internet (possibly via a local computer system). Accordingly, 3D data visualization computer systems in accordance with various embodiments of the invention are not limited to a single computing device and can be implemented using a single computing device and/or a combination of computer systems within a head-mounted display, local computer systems, and/or remote computer systems. As can readily be appreciated, the specific implementation of the 3D data visualization computer system used within a given 3D data visualization system depends largely on the requirements of the specific application.

  FIG. 6B illustrates a multidimensional data visualization system that allows multiple users to explore a 3D visualization of a multidimensional data space simultaneously, in accordance with an embodiment of the invention. The 3D data visualization system 650 includes two local computer systems 652 that communicate via a server computer system 654 over a network 656. Each of the local computer systems 652 is connected to a 3D display 658 and a camera system 660 in a manner similar to that described above with reference to FIG. 6A.

  In the illustrated embodiment, each of the local computer systems 652 can build a 3D model of the multidimensional data space and render it as a video sequence (2D or 3D) in response to changes in the user's pose. In many embodiments, the local computer systems 652 are configured to allow independent data exploration by each user, and pose information can be shared among the local computer systems 652 via the server computer system 654. The pose information can then be used to render, within the virtual space, avatars indicating the positions from which particular users are viewing the virtual space. In many embodiments, the local computer systems 652 support a broadcast (wide-area transmission) mode, in which one user navigates through the virtual space and the pose of that leading user is communicated via the server computer system 654 to the local computer systems 652 of the other users in the virtual space. A local computer system 652 that receives the pose information of the leading user can use that information to render, on the other user's 3D display, a visualization of the multidimensional data from the leading user's viewpoint.
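The pose sharing just described amounts to serializing a small pose record on the sending side, relaying it through the server, and on the receiving side either placing an avatar (independent exploration) or adopting the leader's viewpoint (broadcast mode). The message fields, the JSON encoding, and the scene dictionary below are assumptions made purely for illustration.

```python
import json

def encode_pose(user_id, position, direction, broadcast=False):
    """Serialize a pose update for relay through the server computer system."""
    return json.dumps({
        "user": user_id,
        "position": list(position),
        "direction": list(direction),
        "broadcast": broadcast,  # True when this user is leading a session
    })

def handle_pose(message, scene):
    """Receiving side: place an avatar, or adopt the leader's viewpoint."""
    pose = json.loads(message)
    if pose["broadcast"]:
        # Broadcast mode: render from the leading user's viewpoint.
        scene["camera"] = {"position": pose["position"],
                           "direction": pose["direction"]}
    else:
        # Independent mode: show where the other user is looking from.
        scene["avatars"][pose["user"]] = pose["position"]
    return scene

scene = {"camera": None, "avatars": {}}
scene = handle_pose(encode_pose("u1", (1, 2, 3), (0, 0, -1)), scene)
scene = handle_pose(encode_pose("u2", (4, 5, 6), (0, 0, -1), broadcast=True), scene)
```

Because a pose record is a few dozen bytes rather than a rendered frame, sharing poses and re-rendering locally keeps the network load low even with many collaborating users.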

  In many embodiments, broadcast mode is supported by rendering a 3D video sequence and streaming the 3D video sequence to the local computer systems 652 of the other users. In some embodiments, the server computer system 654 includes sufficient computing capacity (e.g., graphics processing units) to generate a 3D data visualization for each user based on the pose information received from the local computer systems 652 and to stream the resulting 3D video sequences over the network 656 to the local computers for display on the 3D displays.

Computer Systems for 3D Visualization of Multidimensional Data
Computer systems capable of generating 3D visualizations of multidimensional data can take a variety of forms, from implementations in which all computation is performed by a single computing device to complex systems that distribute processing across head-mounted displays, local computer systems, and/or cloud-based server systems. The specific distribution of the various processes depends largely on the number of users and the requirements of a given application.

  A multidimensional data visualization computer system implemented on a single computing device in accordance with an embodiment of the invention is shown in FIG. 7. The multidimensional data visualization computer system 700 can be a personal computer, a laptop computer, a head-mounted display device, and/or any other computing device with sufficient processing power to render a 3D display at a frame rate sufficient to meet the demands of interactive 3D data visualization for a particular application.

  The 3D data visualization computer system 700 includes a processor 702. The term processor is used to refer to one or more devices within a computing device that can be configured to perform computations in accordance with machine-readable instructions stored in the memory 704 of the 3D data visualization computer system. The processor 702 can include one or more central processing units (CPUs), one or more graphics processing units (GPUs), and/or one or more digital signal processors (DSPs). In addition, the processor 702 can include any of a variety of application-specific circuits developed to accelerate the 3D data visualization computer system.

  In the illustrated embodiment, the 3D data visualization computer system 700 includes a network interface 706 for communicating with remote computer systems (e.g., other users' computer systems and/or a remote server computer system) and an input/output (I/O) interface that can be utilized to communicate with a variety of devices, including (but not limited to) a 3D display and/or a camera system. The specific communication and I/O capabilities required of a computer system used to generate 3D visualizations of multidimensional data are generally determined by the requirements of a given application.

  As can readily be appreciated, a variety of software applications can be utilized to implement multidimensional data visualization computer systems in accordance with embodiments of the invention. In the illustrated embodiment, the 3D data visualization is generated by a 3D data visualization application 710 that executes within a computing environment created by an operating system 712. The 3D data visualization application 710 utilizes a 3D rendering engine 714 to generate 3D data visualizations that can be displayed on a 3D display. In many embodiments, the 3D data visualization application 710 loads a multidimensional data set 716 into an in-memory data structure 718 stored in low-latency memory of the 3D data visualization computer system. The multidimensional data set 716 can be stored locally in a file and/or a database. In some embodiments, the multidimensional data is stored remotely (e.g., in a distributed database), and some or all of the multidimensional data is loaded into the in-memory data structure 718 maintained by the 3D data visualization application 710. In many embodiments, the multidimensional data is loaded into at least one visualization table. As described below, additional data dimensions can be added to the multidimensional data as the 3D data visualization application 710 loads it into the at least one visualization table. In some embodiments, the visualization table includes a visibility dimension, and the 3D data visualization application uses the visibility value of an individual item of the multidimensional data set contained in the visualization table to modify the 3D object corresponding to that item so that it reflects whether the item is visible in the current 3D visualization. As can readily be appreciated, any of a variety of additional dimensions can be added to the multidimensional data by the 3D data visualization application as appropriate to the requirements of a given application.
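The visualization table with an added visibility dimension can be pictured as the loaded records plus one derived boolean column that a filter expression updates; the renderer then hides 3D objects whose row is marked invisible. The function names and filtering interface below are a guess at one plausible shape, not the application's actual API.

```python
def load_visualization_table(rows):
    """Copy each record and append a derived 'visible' dimension (default True)."""
    return [dict(row, visible=True) for row in rows]

def apply_filter(table, predicate):
    """Update the visibility dimension in place from a filtering criterion."""
    for row in table:
        row["visible"] = bool(predicate(row))
    return table

table = load_visualization_table([
    {"age": 25, "salary": 30_000},
    {"age": 40, "salary": 60_000},
    {"age": 55, "salary": 90_000},
])
apply_filter(table, lambda row: row["age"] >= 40)  # hide under-40 data points
shown = [row for row in table if row["visible"]]
```

Storing visibility as a dimension of the table, rather than deleting rows, means a filter can be relaxed or changed without reloading the data, and the corresponding 3D objects can simply be toggled rather than destroyed and recreated.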

  As described further below, the user can select mappings that map data dimensions to attributes of the 3D objects in the 3D data visualization, effectively generating the visualization of the multidimensional data. These mappings are stored as data dimension mappings 720. The 3D data visualization application 710 can use the data dimension mappings 720 to provide 3D object attributes to the 3D rendering engine 714. In many embodiments, the 3D rendering engine 714 can instantiate 3D objects in a 3D model 722 stored in memory and update the attributes of the 3D objects. In some embodiments, the 3D rendering engine instantiates the 3D objects in the 3D model 722 based on the number of data points loaded into the in-memory data structure 718. Because the 3D object instances already exist, the 3D rendering engine 714 can then generate a 3D data visualization quickly in response to the user selecting data dimensions for visualization: it only needs to change the attributes of the 3D objects in the 3D model 722 to generate the visualization. In other embodiments, 3D object instances are created in response to the user defining 3D object attributes.
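The pre-instantiation strategy described in this paragraph trades memory for responsiveness: one 3D object per loaded data point is created up front, and selecting a new mapping only rewrites attributes on the existing instances. The minimal object record and class below are assumptions used to sketch the idea, not the rendering engine's actual data structures.

```python
class ObjectPool:
    """Pre-instantiated 3D objects; remapping only rewrites their attributes."""

    def __init__(self, point_count):
        # One object instance per data point, created once at load time.
        self.objects = [{"x": 0.0, "y": 0.0, "z": 0.0, "size": 1.0}
                        for _ in range(point_count)]
        self.instantiations = point_count  # instances are never re-created

    def remap(self, attribute, values):
        """Apply a newly selected data-dimension mapping to every instance."""
        for obj, value in zip(self.objects, values):
            obj[attribute] = value

pool = ObjectPool(4)
pool.remap("x", [0.1, 0.2, 0.3, 0.4])   # user maps a dimension to X
pool.remap("x", [1.0, 2.0, 3.0, 4.0])   # remapping reuses the same instances
```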

  The 3D rendering engine 714 can utilize the 3D model 722 to render a stereo image that can be presented by a 3D display. In many embodiments, the 3D data visualization application uses a display driver 724 to present the scene rendered from a rendering viewpoint 726 on a 3D display. The particular rendering viewpoint can be determined based on pose data 728 received from a remote computer system (e.g., in broadcast mode) or based on pose data 728 determined by the 3D data visualization computer system from images and/or other sensor data.

  In the illustrated embodiment, the 3D visualization application 710 receives pose data 728 from a machine vision application 730 that obtains image data from a camera system using one or more camera drivers 732. The machine vision application 730 configures the processor 702 to extract the user's pose, including (but not limited to) the user's head position and orientation, from the captured image data. As described above, a user's pose can be used to determine a viewpoint 726 for rendering an image from the 3D model 722. In many embodiments, the user's pose is also used to control elements of the 3D model, including (but not limited to) the lighting of the 3D objects, the speed of movement through the virtual space, and/or the visibility of interaction primitives. Specific methods by which a user's pose can be used to change the rendering of a 3D visualization of multidimensional data according to various embodiments of the present invention are further described below.

  When the multidimensional data visualization computer system 700 generates a 3D visualization of multidimensional data in which a plurality of users are simultaneously visualizing the same virtual space, the 3D rendering engine 714 can also include avatars for those users in the 3D model 722, using per-avatar pose information, avatar metadata 734, and (optionally) avatar identification information, such that avatars located within the user's field of view are visible in the view rendered by the 3D rendering engine from the rendering viewpoint 726.

  In many embodiments, the 3D rendering engine 714 forms part of a 3D graphics engine or 3D game engine, and the 3D data visualization application 710 can be implemented within the 3D graphics engine using mechanisms such as (but not limited to) a scripting language. In other embodiments, the 3D rendering engine forms part of the 3D data visualization application. As will be readily appreciated, the 3D data visualization application, 3D rendering engine, and/or machine vision application described above can be implemented independently, as a single application, or as a plug-in for other applications such as (but not limited to) a web browser application. The specific way in which a 3D data visualization application can be implemented depends largely on the requirements of a given computer system and/or use case.

  Certain multidimensional data visualization systems and 3D data visualization computer systems are described above with respect to FIGS. 6A-7, but processes that enable interactive exploration within 3D data visualizations according to various embodiments of the present invention can be implemented utilizing any of a variety of computer platforms, 3D displays, and/or camera systems. Interactive exploration of 3D data according to numerous embodiments of the present invention is further described below.

Generating a 3D Data Visualization Image A process for generating a 3D data visualization according to a number of embodiments of the present invention includes loading data into an in-memory data structure and then mapping data dimensions to attributes of 3D objects to enable rendering of the 3D data visualization on a 3D display. A process for generating a visualization of multidimensional data according to one embodiment of the present invention is illustrated in FIG. 8A. Process 800 includes loading (802) the data points into an in-memory data structure such as (but not limited to) a visualization table. In the illustrated embodiment, an instance of a 3D object is created for each data point (step 804). As mentioned above, creating instances of 3D objects before receiving a mapping that maps data dimensions to attributes of the 3D objects can reduce the latency of rendering a 3D data visualization. In other embodiments, instances of the 3D objects are not created until a data mapping that determines the attributes of the 3D objects is defined. As can be readily appreciated, the timing of 3D object instantiation for 3D display rendering is highly dependent on the requirements of a given application. In some embodiments, the process of loading data points into an in-memory data structure includes generating an additional data dimension that describes the visibility of specific data points in the 3D data visualization. The visibility data dimension of an individual data point can be updated by process 800 to indicate that the data point should not be part of the 3D data visualization. In this context, visibility is distinct from being in the user's field of view, and instead refers to the decision by process 800 not to render the data point in the 3D graph.
Reasons for excluding a data point may include (but are not limited to) the data point having no value, or having an invalid value, for one of the data dimensions mapped to an attribute of a 3D object. As can be readily appreciated, any of a variety of reasons can be used to determine that a particular data point should not be included in the 3D visualization, as appropriate to the requirements of a given application. The visibility data dimension added when the data is loaded provides a mechanism for reflecting the decision not to visualize individual data points.
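As a hedged illustration of the visibility dimension described above (the function name, dimension names, and the validity rule are assumptions for this sketch, not the described implementation):

```python
# Sketch: add a "visibility" dimension while loading points into an in-memory
# visualization table. A point is marked invisible when any mapped dimension
# is missing or invalid (here: absent or NaN).
import math

def load_visualization_table(points, mapped_dims):
    table = []
    for point in points:
        row = dict(point)

        def valid(dim):
            v = row.get(dim)
            return isinstance(v, (int, float)) and not math.isnan(v)

        # Additional dimension appended during loading: 1.0 = render, 0.0 = skip.
        row["visibility"] = 1.0 if all(valid(d) for d in mapped_dims) else 0.0
        table.append(row)
    return table

table = load_visualization_table(
    [{"gdp": 1.9, "co2": 5.0},          # complete -> visible
     {"gdp": float("nan"), "co2": 4.1},  # invalid value -> hidden
     {"co2": 2.2}],                      # missing dimension -> hidden
    mapped_dims=("gdp", "co2"),
)
```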

  Process 800 includes determining (806) attributes of the 3D objects using data dimension mappings. A user interface can present information about the data dimensions that describe the data points and allow the user to select specific data dimensions to map to attributes of the 3D objects. In some embodiments, the mapping of data dimensions defines characteristics of a 3D object including (but not limited to) the visibility of the 3D object, the position of the 3D object in the virtual space, the shape used to render the 3D object, the size of the 3D object rendered in the virtual space, and/or the color of the 3D object. In certain embodiments, visualizations of four or more data dimensions can be generated by using data dimension mappings to define additional features of the 3D objects, including (but not limited to) the texture and/or transparency of the 3D objects. In many embodiments, the list of attributes that can be defined includes (but is not limited to): X (floating point value), Y (floating point value), Z (floating point value), shape (floating point value), size (floating point value), color palette (floating point value, string), color map (floating point value, string), color scale (floating point value, string), and transparency (floating point value). In addition to visual attributes, data dimension mappings can also be defined with respect to metadata describing the data points represented by the 3D objects, including (but not limited to): ID (string), URL (string), mask (floating point value used to indicate whether a data point is selected), and displayable (floating point value, string; used to indicate whether a data point should be displayed based on a filter, e.g., showing only data points having a country name value equal to "United States").
Additional attributes associated with a 3D object may include (but are not limited to) subtle movements of the 3D object (e.g., assigning data to different speed categories of jitter or twisting motion), sound generation, haptic feedback, and/or vibrotactile feedback. Although specific attributes are described above, any subset and/or combination of some or all of the attributes described above can be combined with additional attributes in the visualization of the data points. The specific attributes used to visualize the data points in a 3D graph are highly dependent on the requirements of a given 3D data visualization system.

  The mapping of data dimensions to attributes (step 806) is often performed by the user, but the user and/or the 3D data visualization system can also define the mappings of data dimensions to attributes using a previously stored 3D data visualization. In this way, the user can load a new or updated data set (e.g., a data set with new data points added) and visualize the data using a previously selected set of mappings. Accordingly, the attributes of the 3D objects can be determined automatically based on the mappings included in a previously generated 3D data visualization (step 806). In many embodiments, multiple users can share a 3D data visualization and utilize the mappings in the shared 3D data visualization to determine the attributes of the 3D objects (step 806). Users can share a 3D data visualization for independent use by other users and/or as part of a 3D data visualization that is broadcast.

Process 800 renders a 3D display based on the user's viewpoint (step 814). Accordingly, the user's pose can be measured (step 808) and used to render the 3D display based on the user's position in the virtual space and the 3D objects within the user's field of view. As described further below with reference to FIG. 8B, additional computational efficiency can be obtained during rendering by combining the meshes of many (or all) of the 3D objects into one or more meshes for groups of 3D objects. In this way, processes including physics processes such as (but not limited to) collision handling can be performed on a much smaller number of 3D objects. As can be readily appreciated, the manner in which multiple 3D objects are combined into groups of 3D objects in order to reduce the computations required to render the 3D data visualization is highly dependent on the requirements of a given application.
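The mesh-combining step can be sketched as follows. This is a simplified illustration (triangle-list meshes only, no textures), not the described implementation:

```python
# Sketch: combine many per-object triangle meshes into one grouped mesh so
# the renderer and physics engine handle far fewer objects.
def combine_meshes(meshes):
    """Each mesh is (vertices, triangles); triangle indices are re-offset
    into the shared vertex buffer of the combined mesh."""
    vertices, triangles = [], []
    for verts, tris in meshes:
        base = len(vertices)
        vertices.extend(verts)
        triangles.extend(tuple(i + base for i in tri) for tri in tris)
    return vertices, triangles

# Two single-triangle "objects" become one mesh with a shared vertex buffer.
tri = ([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [(0, 1, 2)])
group_verts, group_tris = combine_meshes([tri, tri])
```

After combining, the physics and draw stages see one object instead of many, which is the source of the efficiency gain described above.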

  Effective visualization of 3D data enhances the user's ability to perceive structure in the data and avoids incorporating, into the 3D objects in the 3D graph, changes in appearance that are independent of the characteristics of the data being visualized. In many embodiments, various aspects of the 3D data visualization are modified based on the user's pose to enhance the user's ability to perceive the structure of the data. In some embodiments, the process of rendering the 3D display includes illuminating the 3D objects based on the user's pose (810). As described below, illuminating each 3D object in the user's field of view with a directional light source that emits light from the user's viewpoint, or from a location slightly offset from the user's viewpoint, can maintain the similarity in appearance of similar 3D objects throughout the user's field of view. A process for illuminating a 3D object based on a user's pose according to various embodiments of the present invention is described below. If a 3D object has a different appearance depending on the viewing direction, the orientation of the 3D object within the user's field of view can be adjusted so that the object "faces" the user (although, as described below, facing the user actually involves orienting the 3D object at an angle that better presents the features of the 3D shape). In this way, the appearance of the object is always the same in the 3D graph regardless of the direction from which the user views it, preventing the user from consciously or unconsciously mistaking differences in orientation for meaningful information. In other embodiments, however, the orientation can be fixed, and/or the orientation can be used to visualize additional data dimensions (e.g., one data dimension can be mapped to orientation relative to the user's viewpoint or to a motion such as rotational speed).

  In many embodiments, the user's pose is used to modify many other aspects of the 3D data visualization, including (but not limited to) the transparency and/or position of interaction primitives. For example, interaction primitives such as grid lines and/or navigation affordances can be included in the virtual space to aid orientation and navigation. In many embodiments, the user's pose determines the degree to which interaction primitives obscure the 3D objects that represent data points. A 3D data visualization system can use different criteria for increasing transparency and/or changing the visibility of interaction primitives, tailored to the specific user experience it aims to achieve. In many embodiments, the 3D graph can be included in a visible space that includes a virtual environment (e.g., a section or room of a virtual office). When the user manipulates the 3D graph in the virtual environment (e.g., rotates the 3D graph or increases its size), the 3D graph can be maintained as a stationary object while the change is computed by modifying the virtual environment based on the user's viewpoint, i.e., by changing the meshes associated with the virtual environment (for example, the meshes for drawing tables, chairs, desks, walls, etc.), such as by changing the size of the virtual environment or by rotating the virtual environment and the user's viewpoint relative to the 3D graph. The meshes associated with the virtual environment are generally simpler than the meshes of the 3D objects that make up the 3D graph. Thus, shifting the user's viewpoint and the 3D objects associated with the virtual environment relative to the 3D graph, while maintaining the user's ability to perform operations on the 3D graph, can provide significant computational advantages. Operations on the 3D graph include (but are not limited to) rotating, moving, and/or resizing the 3D graph within the virtual environment.
As can be readily appreciated, the elements of the virtual space that can be changed in response to the user's pose are not limited to lighting and the visibility of interaction primitives, but can include any of various aspects of the 3D data visualization, as appropriate to the requirements of a given application, including (but not limited to) changing, based on the user's position and/or pose, the speed at which the user moves through the virtual space and/or the manner in which the user can interact with 3D objects in the virtual space. As described below, a 3D data visualization system according to many embodiments of the present invention can switch between different visualization modes based on a user's pose and/or context.

  The specific method of rendering the 3D display based on the user's pose (step 814) is highly dependent on the particular 3D display technology being utilized. In embodiments that utilize a stereo 3D display, such as those used in many head-mounted AR, MR, and VR headsets, two frames are rendered from different viewpoints and presented to the user's eyes by the stereo display to provide the user with a simulated perception of depth.
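A minimal sketch of deriving the two per-eye viewpoints for stereo rendering from a single head pose. The function, its arguments, and the interpupillary distance used are assumptions for illustration (64 mm is a commonly assumed typical value, not one specified by the source):

```python
# Sketch: offset the head position along the head's "right" axis by half the
# interpupillary distance (IPD) in each direction to obtain per-eye viewpoints.
def eye_positions(head_pos, right_axis, ipd=0.064):
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head_pos, right_axis))
    right = tuple(h + half * r for h, r in zip(head_pos, right_axis))
    return left, right

# Head at 1.6 m height, looking down -z, so the right axis is +x.
left_eye, right_eye = eye_positions((0.0, 1.6, 0.0), (1.0, 0.0, 0.0))
```

Each eye's frame is then rendered from its own viewpoint and presented by the corresponding display panel.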

  Process 800 continues to update the rendered 3D display based on changes in the user's location (step 808) and/or changes in the mappings of data dimensions to attributes (step 818). When the user ends the interactive session (step 816), the process completes.

  As can be readily appreciated, the interactivity of the 3D data visualization depends on the speed at which updates to the visualization can be rendered. In many embodiments, the 3D data visualization system targets a frame rate of at least 30 frames per second. In some embodiments, target frame rates of at least 60 frames per second and/or 120 frames per second are supported. Updating the 3D data visualization at a high frame rate involves a large amount of computation. In many instances, too much computation is required to maintain a high frame rate, and the 3D data visualization system cannot render one or more frames in time for display, resulting in what are commonly referred to as dropped frames. Certain embodiments support graceful degradation by rendering the portion of the frame at the center of the user's field of view and not rendering the portion of the frame in the user's peripheral field of view. The specific way of dealing with a given 3D data visualization system being unable to render all the frames needed for the target frame rate depends on the requirements of the given application.
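The frame-budget arithmetic behind dropped frames can be illustrated with a short sketch (purely illustrative helper functions, not part of the described system):

```python
# Sketch: a target frame rate implies a per-frame render budget; a frame
# whose render time exceeds that budget misses its display slot and is
# counted as a dropped frame.
def frame_budget_ms(fps):
    return 1000.0 / fps

def dropped_frames(render_times_ms, fps):
    budget = frame_budget_ms(fps)
    return sum(1 for t in render_times_ms if t > budget)

# At 60 fps the budget is about 16.7 ms, so a 20 ms frame is dropped.
drops = dropped_frames([12.0, 20.0, 16.0], fps=60)
```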

  The likelihood that the target frame rate can be achieved can be increased by reducing the complexity of rendering the 3D data visualization. In many embodiments, computational efficiency is achieved by creating groups of 3D objects that are essentially aggregations of multiple visible 3D objects. Reducing the number of objects can reduce the computations associated with various stages of the rendering pipeline, including the computations performed by the physics engine to detect collisions between 3D objects and the draw process itself. In some embodiments, a single group of 3D objects is created from all 3D objects corresponding to visible data points in the 3D graph. In many embodiments, multiple groups of 3D objects are created, the number of groups being smaller than the total number of visible 3D objects. In a particular embodiment, a group of 3D objects is simply a mesh having the shape of the collection of 3D objects.

  A process for rendering a 3D data visualization using groups of 3D objects according to one embodiment of the invention is illustrated in FIG. 8B. Process 850 begins with the instantiation (step 852) of a collection of 3D data objects having multiple visualization attributes. The visualization attributes of a 3D data object can be determined using a set of data dimension mappings in the same manner described above. In a particular embodiment, the data dimension mappings identify data dimension values in the visualization table, and these data dimension values are processed to determine the specific visualization attributes of the 3D objects. One or more groups of 3D objects are created by generating a mesh and texture for each group of 3D objects from the meshes and textures of multiple visible 3D objects (step 856). In some embodiments, as many as 100,000 3D objects are used to create a group of 3D objects. The specific number of 3D objects used to create a group of 3D objects (step 856) generally depends on the requirements of a given application.

  The user's pose is measured (step 858) and the groups of 3D objects are illuminated based on the user's pose (step 860). In many embodiments, the groups of 3D objects are shaded on a per-vertex basis, with a directional light source illuminating each vertex in a direction determined by the line of sight from the user's viewpoint (or a point close to the user's viewpoint). Collision processing can then be performed on the 3D objects (including the groups of 3D objects) in the virtual space, and detected collisions can be resolved according to the constraints imposed on the virtual space by the 3D data visualization system. The virtual space can then be rendered from the user's viewpoint (step 864). In the illustrated embodiment, a 3D display is rendered. In many embodiments, the rendered output drives a 3D display device.

  Utilizing groups of objects can significantly reduce the processing associated with rendering 3D data visualizations interactively at high frame rates. However, once objects are grouped, changing a single 3D object incurs the same processing overhead as changing all of the 3D objects in the group. Therefore, the number of 3D objects corresponding to data points that are combined into a single group can be chosen to strike a balance between the reduced computational overhead when interacting with the 3D graph and maintaining interactivity when a data mapping update changes the 3D graph. Animating the transition from one 3D graph to another as the mapping of data dimensions to attribute values changes is described below. Using groups of 3D objects, groups of data points can be animated together to achieve a high frame rate during the animation.

  Specific processes for visualizing 3D data have been described above with reference to FIGS. 8A and 8B, but any of a variety of processes for rendering 3D data visualizations based on the dimensions of the data points and the user's location in the virtual space can be utilized, as appropriate to the requirements of a given application. Techniques that can be utilized to enhance the effectiveness with which a 3D data visualization shows the user structure in the data are further described below.

Enhancing the effectiveness of 3D visualization of complex data A 3D data visualization system according to various embodiments of the present invention has the ability to generate visualizations of data of eight or more dimensions. A challenge of representing high-dimensional data in 3D form is that the 3D data visualization can inadvertently incorporate changes in the appearance of a 3D object that are independent of the attributes of the underlying data. For example, it may be difficult for the user to perceive the relative sizes of 3D objects that are located at different distances from the user and have different shapes. Lighting, and more specifically shading, can also introduce changes in the appearance of a 3D object that are independent of the attributes of the underlying data. Experiments with 3D data visualizations show that their effectiveness can be enhanced by utilizing shapes that preserve the user's ability to distinguish between a change in size due to depth and a change in size as a data attribute, and by utilizing lighting modes that illuminate 3D objects in the same way throughout the user's field of view and do not involve 3D objects casting shadows on other 3D objects. The use of depth-perception-preserving shapes and lighting modes in 3D visualizations according to various embodiments of the present invention to increase the user's ability to perceive structure in the data is further described below.

Depth perception preserving shapes Preserving depth perception can be important to maintaining the user's ability to understand the dimensions of the data being visualized. For example, when visualizing a data dimension using size, the size at which a particular 3D object is rendered in the user's field of view depends both on the size attribute of the 3D object and on the distance from the user to the 3D object in the virtual space. When data dimensions are also mapped to the shape attribute of 3D objects, the shape of an object may further confound size comparisons (in a manner compounded by depth differences). Experiments show that the perception of relative size across different shapes such as cubes, spheres, and cylinders is adversely affected by various factors including distance, object alignment, color, and illumination. In many embodiments, a polyhedron having multiple faces, such as (but not limited to) an icosahedron having a spherical appearance, is utilized as the shape of the 3D objects in the 3D data visualization. Experiments show that users can accurately perceive the relative sizes of spheres in a 3D environment. However, spheres are complex shapes to render in 3D and are typically rendered as polyhedra with hundreds of faces. Thus, the use of a polyhedron with tens of faces, as opposed to hundreds of faces, can greatly reduce the computations associated with rendering a 3D graph. FIGS. 9A-9C show a 3D visualization of a multidimensional data space, in which data dimensions are mapped to the shape and size attributes of the 3D objects, from several viewpoints.

  In many embodiments, data points are visualized using 3D shapes that preserve depth perception, helping the user gauge the relative sizes of 3D objects given their distances from the user in the virtual space. FIG. 10 shows a small set of 3D objects designed to be recognizable even in dense plots. These 3D object shapes are designed with several criteria in mind: the front face, side faces, top face, protrusions, and, for curved areas, corners.

  For the front face, the initial templates had the basic shapes common to 2D plots. These include circles, triangles, stars, boxes, pluses (crosses), and X shapes. The 3D shapes were derived from the front faces listed above. The circle was converted into a sphere 1000 and a torus 1004. The triangle was converted into a pyramid (or tetrahedron) 1006 and a cone 1004. The box shape was converted into a cube 1002 and a cylinder 1010. X shapes and plus shapes could be made into 3D objects in a similar manner, but individual branches of these 3D shapes protrude past occluding objects and may be confused with a different kind of 3D shape or with features of a simpler 3D shape. Star-like 3D shapes can encounter the same difficulties.

  These selected basic shapes present a variety of top and bottom faces: circular, square, pointed, triangular, and elliptical. Thus, although some shapes, particularly the cylinder and the cube, share the same front face, the number of protruding corners (or lack thereof) and the top faces allow them to remain recognizable even in dense plot areas. In addition, illumination can enhance the visual distinction between shapes and aids the ability to distinguish them. Similarly, the cone 1004 and the pyramid 1006 may present the same front face, e.g., a triangle, depending on the orientation. Thus, the cone 1004 is oriented to point upward, while the pyramid 1006 places one horizontal edge 1012 on top while facing outward.

  A 3D shape whose faces are parallel to the frontal plane tends to produce what appears to be one larger face, rather than a more complex silhouette, when it overlaps another such 3D shape. Similarly, when the top and bottom faces are aligned with the floor plane, they are hidden when orthographic projection is used. Accordingly, all of the 3D shapes shown in FIG. 10 are rotated by the same amount in both the horizontal and vertical planes. The resulting rotation is equivalent to rotating 30 degrees about an axis tilted 30 degrees west from north (i.e., 30 degrees to the right of the screen's up vector). As can be readily appreciated, the specific rotation used to produce a non-planar rendering of a large population of shapes is highly dependent on the requirements of a particular 3D data visualization system. Additional features 1014 on the side of a 3D object also benefit from the horizontal rotation component, which can give one side greater visual importance (by bringing it closer) than the other.
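The fixed pre-rotation described above can be sketched with Rodrigues' rotation formula. The helper below is an illustrative pure-Python stand-in for the engine's transform, under the stated assumption that "up" is the +y axis and the axis is tilted 30 degrees toward +x:

```python
# Sketch: rotate a vector by `degrees` about an arbitrary axis using
# Rodrigues' formula: v' = v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t)).
import math

def rotate(v, axis, degrees):
    theta = math.radians(degrees)
    c, s = math.cos(theta), math.sin(theta)
    n = math.sqrt(sum(a * a for a in axis))
    k = tuple(a / n for a in axis)                      # unit rotation axis
    cross = (k[1] * v[2] - k[2] * v[1],
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0])                 # k x v
    dot = sum(ki * vi for ki, vi in zip(k, v))          # k . v
    return tuple(v[i] * c + cross[i] * s + k[i] * dot * (1 - c)
                 for i in range(3))

# Axis: the up vector (0, 1, 0) tilted 30 degrees toward +x, rotation 30 deg.
tilt = math.radians(30)
axis = (math.sin(tilt), math.cos(tilt), 0.0)
rotated = rotate((0.0, 0.0, 1.0), axis, 30)   # a face normal, rotated off-plane
```

After this rotation no face normal remains parallel to the viewing plane, which is the effect the fixed rotation is meant to achieve.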

  Although the 3D shapes shown in FIG. 10 include additional features, the 3D shapes utilized by various embodiments of the present invention need not include additional features. In many embodiments, depth perception is preserved by allowing only one dimension of the 3D shape to be an attribute of the visualization. For example, a visualized 3D shape can have a constant height while its width changes based on the value of the mapped data dimension. Certain embodiments utilize a 3D shape that is bullet-shaped (i.e., a cylinder with a rounded or hemispherical end). In many embodiments, the width of the bullet-shaped 3D shape (i.e., the diameter of its cylindrical portion) varies based on the value of the mapped data dimension, while the height of the bullet-shaped 3D shape is independent of the data value. In this way, the width conveys information and the height provides a depth cue. As will be readily appreciated, the specific shapes utilized are highly dependent on the requirements of a given application. Methods for enhancing 3D data visualizations utilizing the illumination of 3D shapes according to various embodiments of the present invention are described below.
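A small sketch of the bullet-shape scaling rule above (the function, scale ranges, and linear normalization are assumptions for illustration): the mapped data value drives only the width, while the height stays fixed as a depth cue.

```python
# Sketch: compute a (x, y, z) scale for a bullet-shaped marker where width
# encodes the data value and height is constant.
def bullet_scale(value, v_min, v_max, w_min=0.25, w_max=1.0, height=1.0):
    t = (value - v_min) / (v_max - v_min)   # normalize value to [0, 1]
    width = w_min + t * (w_max - w_min)
    return (width, height, width)           # y is the fixed height

small = bullet_scale(0.0, 0.0, 10.0)
large = bullet_scale(10.0, 0.0, 10.0)
```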

Lighting Model for Visualizing 3D Data The lighting model used in a 3D data visualization can greatly affect the ease with which a user can interpret the visualized data. As mentioned above, the effectiveness of a 3D data visualization can be reduced if the visualization introduces changes in appearance between 3D objects that are independent of the data dimensions being visualized. FIG. 11A shows the change in appearance of a 3D object having a spherical shape when viewed from different directions under constant illumination by three stationary point light sources. As can be readily seen, the change in shading on the surface of the 3D object conveys no information about the data dimensions visualized by the 3D object, and makes it difficult to identify and/or gauge the relative sizes of similar 3D objects located in different regions within the user's field of view (and thus at different distances). A 3D data visualization system according to a number of embodiments of the present invention utilizes a lighting model in which, when rendering the 3D data visualization, a separate directional light source that emits light from the user's viewpoint (or a location adjacent to it) is used to illuminate each 3D data object in the user's field of view. The same 3D data object viewed from the same viewpoints as in FIG. 11A is illustrated in FIG. 11B, illuminated in the manner described above using a directional light source that emits light from the user's viewpoint. By illuminating a 3D object in this way, the 3D object has the same appearance from any viewpoint. In this way, the similarity in appearance of similar 3D objects can be maintained while the user moves through the virtual space, because the illumination of an object changes with the user's pose, as shown in FIGS.

  The process of updating the illumination of 3D objects as the user's field of view in the virtual space changes is illustrated in FIG. 13 according to one embodiment of the invention. Process 1300 includes obtaining pose information and using it to measure the user's position and field of view (step 1302). 3D objects within the user's field of view can be identified and selected (steps 1304, 1310). The position of a 3D object relative to the position of the user in the virtual space can be used to determine the lighting direction (step 1306). The direction of illumination is generally selected as the direction from the user's position to the 3D object, or from a location adjacent to the user's position to the 3D object. However, the direction of illumination can be varied based on the requirements of a given application.

  In the illustrated embodiment, a directional light source is used to illuminate each 3D object in the user's field of view, and the process completes once the lighting of all 3D objects in the user's field of view has been updated (step 1310). Directional light simulates illumination by a source such as the sun, using an illumination model in which parallel rays travel in a single direction. The illumination of multiple 3D objects in the virtual space based on the pose of the viewer is conceptually illustrated in FIG. Illuminating each 3D object with a separate directional light source provides significant advantages in providing uniform illumination of the 3D objects across the user's field of view, but other lighting models that provide uniform illumination can be used according to various embodiments of the present invention, as appropriate to the requirements of a given application. As described above, the 3D data visualization can be further enhanced by configuring each 3D object so that it does not cast shadows in the virtual space.
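The per-object light direction computed in step 1306 can be sketched as follows (a minimal illustration with assumed names, not the described implementation): each 3D object in the field of view gets a directional light aimed along the line from the user's position to that object, so similar objects are shaded alike regardless of where they sit in the field of view.

```python
# Sketch: one light direction per visible 3D object, as the unit vector from
# the user's position toward the object's position.
import math

def light_directions(user_pos, object_positions):
    dirs = []
    for pos in object_positions:
        d = tuple(p - u for p, u in zip(pos, user_pos))
        norm = math.sqrt(sum(c * c for c in d))
        dirs.append(tuple(c / norm for c in d))  # unit vector toward object
    return dirs

dirs = light_directions((0.0, 0.0, 0.0), [(0.0, 0.0, 5.0), (3.0, 0.0, 4.0)])
```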

  Most of the above description regarding the selection of 3D shapes and of lighting in the virtual space relates to the way in which data can be represented to facilitate the exploration of 3D data, but 3D data visualization systems according to numerous embodiments of the present invention add additional 3D objects in the form of interaction primitives to the virtual space, and these interaction primitives help users maintain awareness of their position, and of the orientation of the data, in the virtual space. Interaction primitives that can be utilized within a 3D data visualization system according to various embodiments of the present invention are described further below.

Utilizing Interaction Primitives in Virtual Space The freedom of movement within the virtual space provided by 3D data visualization systems according to many embodiments of the present invention means that a user can quickly lose track of his or her orientation relative to the data while exploring it. A 3D data visualization system according to some embodiments of the present invention utilizes interaction primitives as visual anchors that enable the user to maintain a sense of their orientation relative to the data being visualized. In many embodiments, a 3D graph containing 3D objects is bounded by a cube with a grid pattern visible on its inner surfaces. As described above, the user's position can be used to make one or more faces of the cube transparent so that grid lines do not obscure the 3D objects in the 3D graph. In certain embodiments, labeled axes can be shown continuously in the user's field of view to provide a visual cue regarding the orientation of the data.
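The face-hiding behavior of the bounding cube can be sketched as follows (an illustrative assumption, not the claimed implementation): a grid face is made transparent whenever the viewer is on its outer side, since such a face would sit between the viewer and the data.

```python
def visible_faces(user_pos, cube_center, half_size):
    # Outward unit normals of the six faces of the bounding cube.
    normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    view = tuple(u - c for u, c in zip(user_pos, cube_center))
    shown = []
    for n in normals:
        # The viewer is beyond a face when the view vector's projection
        # onto that face's outward normal exceeds the cube half-size;
        # such a face would obscure the data, so it is made transparent.
        if sum(v * c for v, c in zip(view, n)) <= half_size:
            shown.append(n)
    return shown

# A user standing off the +X side of the cube sees that face hidden.
faces = visible_faces((5.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0)
assert (1, 0, 0) not in faces and len(faces) == 5
```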

  A 3D graph including interaction primitives in the form of grids, axes, and axis labels generated by a 3D data visualization system according to one embodiment of the present invention is illustrated in FIG. 15A. The 3D objects 1500 are contained within a 3D graph bounded by planar grids 1502. Orthogonal axes 1504, encoded in three colors, provide visual anchors enhanced by oriented axis labels 1506.

  While specific interaction primitives and their use in assisting the user to maintain a sense of orientation with respect to the 3D graph during interactive exploration have been described above, in many embodiments any interaction primitive that provides the user with a visual cue for orientation can be used, provided it meets the requirements of a given application.

Achieving ordering of importance in assigning dimensions to visual attributes A main problem associated with pattern recognition in high-dimensional data sets is the curse of dimensionality; this problem can be addressed by visualizing only a distinctive subset of the features of the data set. When the meaning of features is important and the goal of the 3D data visualization is to find relationships between features and better understand the data set, feature selection is often preferred over feature transformation (e.g., principal component analysis). In many data sets, the data dimensions can be separated into data dimensions that are numerical (e.g., varying amounts) and data dimensions that are categorical (e.g., regions). A 3D data visualization system according to a number of embodiments of the present invention is configured to automatically detect whether captured data dimensions are numerical or categorical, and allows the user to identify cases in which a data dimension is misclassified (for example, a ZIP (Zone Improvement Plan) code can be detected as numerical but is actually categorical: "91107" is not greater than "91101").

  In some embodiments, the 3D data visualization system performs a feature selection process to recommend specific mappings of data dimensions to visualization attributes. In many embodiments, the user selects a feature of interest (e.g., which variables are most important with respect to low/high operating profits in a set of e-commerce funds). Next, a feature selection process is performed on both the numerical and categorical data dimensions to determine the dimensions most relevant to the feature of interest. In certain embodiments, separate feature selection processes are performed for numerical data dimensions and for categorical data dimensions. In many embodiments, the feature selection process is a univariate feature selection process. In other embodiments, any of a variety of feature selection processes can be utilized as long as the requirements of a given application are met. The feature selection process produces an ordered list of features. In some embodiments, the 3D data visualization system generates an ordered list of numerical data dimensions and an ordered list of categorical data dimensions separately. In some embodiments, only a subset of the data dimensions is considered in forming an ordered list.
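A univariate feature selection pass of the kind described can be sketched as follows (an illustrative stand-in that scores numerical dimensions by absolute Pearson correlation with the feature of interest; the patent does not mandate this particular statistic, and the function names are hypothetical):

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length columns.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_numeric_dimensions(columns, target):
    # columns: dict mapping dimension name -> list of numeric values.
    # Returns dimension names ordered by |correlation| with the target,
    # i.e. the ordered list of numerical data dimensions.
    scores = {name: abs(pearson(vals, target)) for name, vals in columns.items()}
    return sorted(scores, key=scores.get, reverse=True)

profit = [1.0, 2.0, 3.0, 4.0]                 # feature of interest
cols = {"age": [9.0, 7.0, 8.0, 6.0],          # weakly related
        "revenue": [10.0, 20.0, 30.0, 40.0]}  # perfectly related
assert rank_numeric_dimensions(cols, profit)[0] == "revenue"
```

Categorical dimensions would be ranked by a separate score (e.g., a between-group variance statistic), mirroring the separate feature selection processes described above.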

  Once an ordering of the relevance of the data dimensions to the feature of interest is obtained, the ordering information can be used to generate or recommend specific mappings of data dimensions to specific visualization attributes. In many embodiments, a data visualization is generated that maps the four most important numerical data dimensions to the X, Y, and Z spatial position coordinates and the size attribute of the 3D objects. In many embodiments, the two most important categorical dimensions are mapped to the displayability and shape visualization attributes. In many embodiments, recommendations regarding data dimensions that can be assigned to additional attributes can be provided, and/or additional recommendations can be made interactively. In particular embodiments, the feature of interest utilized to generate the importance ordering of the other data dimensions can be mapped to the color visualization attribute. In many embodiments, a specific recommended mapping is determined based on the relative relevance of the different numerical and categorical data dimensions. In many situations, a categorical data dimension is the most important from the standpoint of the user's quantitative expectations and/or perspective. When categorical values are mapped to the X, Y, and Z spatial coordinates of 3D objects, 3D bee swarm plots or any of a variety of other categorical scatter plots with non-overlapping points can be generated. A user interface showing recommendations for mapping specific data dimensions to specific attributes of 3D objects visible in the 3D data visualization image, according to one embodiment of the present invention, is illustrated in FIG. 15B.
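Given the ordered lists of numerical and categorical dimensions, the recommended mapping described above can be sketched as follows (attribute names follow the description; the helper itself is a hypothetical illustration):

```python
def recommend_mapping(ranked_numeric, ranked_categorical, feature_of_interest):
    # The feature of interest that drove the importance ordering is
    # mapped to the color visualization attribute.
    mapping = {"color": feature_of_interest}
    # Four most relevant numerical dimensions -> spatial position and size.
    for attr, dim in zip(("x", "y", "z", "size"), ranked_numeric):
        mapping[attr] = dim
    # Two most relevant categorical dimensions -> displayability and shape.
    for attr, dim in zip(("displayability", "shape"), ranked_categorical):
        mapping[attr] = dim
    return mapping

m = recommend_mapping(["revenue", "head_count", "age", "tenure", "extra"],
                      ["region", "sector", "extra_cat"], "operating_profit")
assert m["x"] == "revenue" and m["size"] == "tenure"
assert m["displayability"] == "region" and m["shape"] == "sector"
assert m["color"] == "operating_profit"
```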

  Although a specific process for performing importance ordering and a specific process for recommending and/or assigning mappings of data dimensions to visualization attributes are described above, according to various embodiments of the invention any of a variety of techniques for recommending specific data dimensions to map to specific visualization attributes can be used as long as the requirements of a given application are met, including (but not limited to) techniques that perform importance ordering using the Relief algorithm, the Fisher discriminant ratio, correlation-based feature selection, fast correlation-based filters, and/or multi-class feature selection.

Automated color palette selection based on data characteristics In the same way that the selection of specific data dimensions for 3D data visualization can be significant in emphasizing patterns in the data, the manner in which specific data dimensions are mapped to visualization attributes can also be important in effectively communicating information. In many embodiments, numerical data dimensions are mapped in a non-linear fashion to continuous visualization attributes such as (but not limited to) color, so that the maximum difference in the color of the 3D objects is generated over the range of values that conveys the most information about the relationship between the data dimension and other data dimensions or object characteristics. In many embodiments, when mapping a data dimension to color, the data dimension is mapped to a discrete number of colors to add visual distinctiveness to the color attribute. As will be readily appreciated, according to various embodiments of the present invention, data dimension values can be mapped to specific colors using any of a variety of techniques, provided that the requirements of a given application are met.
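One way to realize the non-linear, discrete color mapping described above is quantile binning, which spends the available colors where the data values are densest (a sketch under that assumption; the patent does not specify this particular scheme):

```python
def quantile_edges(values, n_bins):
    # Bin edges placed at equally spaced quantiles of the observed
    # values, so each discrete color covers roughly equal data mass.
    s = sorted(values)
    return [s[int(i * (len(s) - 1) / n_bins)] for i in range(1, n_bins)]

def color_index(value, edges):
    # Index of the discrete color assigned to this value.
    return sum(value > e for e in edges)

vals = [1, 1, 2, 2, 2, 3, 3, 50, 60, 100]  # heavy-tailed dimension
edges = quantile_edges(vals, 4)
# The dense low range gets most of the color resolution; a linear
# mapping over [1, 100] would crush it into a single color.
assert color_index(1, edges) == 0
assert color_index(100, edges) == 3
```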

Use of animated transitions in 3D data visualization images Users analyzing high-dimensional data frequently change the data dimensions that are visualized in order to explore different relationships in the data. In many embodiments, the 3D data visualization system uses animation to illustrate the correspondence between the specific 3D objects representing discrete data points as the data mapping changes. The ability to observe points moving from a first 3D graph to a second 3D graph, in which alternative combinations of data dimensions are mapped to one or more attributes of the 3D objects corresponding to the data points, can reveal relationships that exist in the data. A series of 3D data visualizations according to one embodiment of the present invention is illustrated in FIGS. 16A-16D, in which the X attribute of the 3D objects is changed from a first data dimension (i.e., "age") to a second data dimension (i.e., "year of service").

  In some embodiments, additional insight can be provided by animating different subsets of the data at different rates. For example, when a clustering algorithm is utilized to analyze the data in the first 3D graph, the animation can move the 3D data objects in different clusters at different speeds. In certain embodiments, the user can control the animation so that different sets of 3D objects begin moving upon receipt of user input instructing movement of the 3D objects. In this way, the user can partition a specific set of 3D objects and observe how these 3D objects are mapped from one 3D graph to another. In many embodiments, the user can replay the animation to repeat it and/or gain further insight.
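The per-cluster animation pacing described above can be sketched as linear interpolation with a cluster-specific speed (illustrative only; cluster labels would come from e.g. k-means clustering applied to the first 3D graph, and the names here are hypothetical):

```python
def animated_position(start, end, t, speed):
    # Linear interpolation clamped at the destination; a faster cluster
    # reaches its position in the second 3D graph sooner.
    s = min(1.0, max(0.0, t * speed))
    return tuple(a + (b - a) * s for a, b in zip(start, end))

cluster_speed = {0: 2.0, 1: 1.0}  # cluster 0 moves twice as fast

p_fast = animated_position((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), 0.5,
                           cluster_speed[0])
p_slow = animated_position((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), 0.5,
                           cluster_speed[1])
assert p_fast == (4.0, 0.0, 0.0)   # cluster 0 has already arrived
assert p_slow == (2.0, 0.0, 0.0)   # cluster 1 is halfway there
```

Staggered start times (sets of objects beginning to move on user input) can be modeled the same way by offsetting `t` per set before interpolating.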

  Although specific techniques for animating 3D objects have been described above with reference to FIGS. 16A-16D, according to various embodiments of the present invention any of a variety of animation techniques can be used when the mapping of data dimensions is changed using a 3D data visualization system, including animating changes in attributes of the 3D objects from one 3D graph to the next in addition to changes in position (e.g., changes in shape, color, size, and/or texture).

Data Situation Representation Many of the processes described above for enhancing 3D data visualization focus on methods for generating 3D data visualization images. The context in which a 3D data visualization image is presented can also affect the ability of the user to identify meaningful insights from the data. Users generally interact with 3D graphs in different ways depending on the amount of free space and the freedom the user has to move within the real-world environment in which the 3D data visualization is performed. When a user interacts with a 3D graph while seated at a desk, the user generally prefers the 3D graph to be displayed at a small scale (e.g., a 1 foot × 1 foot × 1 foot cube). When the user is standing while experiencing the 3D data visualization, and in environments in which the user has great freedom of movement, the user is more likely to enlarge the 3D graph to a larger size and move through it. In some embodiments, the virtual environment and/or the mixed reality environment includes affordances that enable data manipulation. As described above, the appearance that the 3D graph is moving relative to the virtual environment and the affordances within it can be achieved without moving the 3D objects that form the 3D graph, by instead moving the virtual environment and the user's viewpoint relative to the 3D objects to generate the appearance that the 3D graph is being manipulated relative to the environment. In addition, the manner in which the user can interact with the 3D graph can change when the relative size of the 3D graph with respect to the virtual environment changes, and depending on the distance from the user's viewpoint to the 3D graph in the movable environment.

  In many embodiments, the 3D visualization system provides affordances in a 3D user interface, displayed to the user, that control the resizing of the 3D graph. In many embodiments, the 3D data visualization system can use information about the real-world environment in which the 3D data visualization is being performed, and the size of the 3D visualization image of the multi-dimensional data space can be adjusted to fit within that environment.

  An affordance in the VR user interface that allows the user to control the size of the 3D data visualization image generated in the virtual world by a 3D data visualization system according to one embodiment of the present invention is illustrated in FIG. 17A. The virtual environment 1700 includes a virtual office cubicle with an affordance 1702 that can resize the 3D graph to a predetermined "desktop" scale. The virtual environment 1700 also includes a second affordance 1704, which can change the size of the 3D graph to a predetermined "sitting" scale that is larger than the "desktop" scale, and a third affordance 1706, which can change the size of the 3D graph to the largest predetermined "standing" scale. When the user moves the 3D data visualization image onto an affordance, the size of the 3D data visualization image is changed. Following the resizing, the user can further manipulate the 3D data visualization image, which can change its scale (e.g., reduce the 3D graph to the "desktop" scale and then enlarge the 3D graph to move through the data space toward a specific cluster of interest).

  FIGS. 17B-17F conceptually illustrate how a 3D data visualization image can be resized using affordances in the 3D user interface. FIGS. 17B and 17C illustrate a user's pose (1708) and a 3D data visualization (1710) in a movable environment. An affordance (1702) that can be used to change the size of the 3D data visualization image to the "desktop" scale is visible in FIG. 17C. Moving the 3D data visualization (1710) onto the affordance (1702) and the resulting resized 3D data visualization (1712) are conceptually illustrated in FIGS. 17E and 17F. As described above, resizing does not fix the size of the 3D graph. Following resizing, the user can continue to change the size of the 3D visualization image, and the way the 3D data visualization system responds to user input can change (e.g., it can respond differently to 3D gesture movements based on the size of the 3D data visualization image). The affordances simply provide a mechanism that allows the user to move between different contexts and cause the 3D data visualization system to change the 3D data visualization image in ways that reflect (but are not limited to) differences in the mobility and/or freedom of movement available to the user in the real-world space in which the data is explored. FIG. 17F conceptually illustrates an increase in the scale of the 3D data visualization image as the user's context shifts to a standing mode with freedom of movement.
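The drop-on-affordance resizing can be sketched as follows (the scale values and names are hypothetical illustrations of the "desktop"/"sitting"/"standing" presets; they are not taken from the patent):

```python
# Hypothetical preset scales for the three operating contexts.
PRESET_SCALE = {"desktop": 0.3, "sitting": 1.0, "standing": 3.0}

def drop_on_affordance(graph, affordance_name):
    # Snap the 3D graph to the affordance's preset scale. Resizing is
    # not locked in: the user can still rescale freely afterwards.
    resized = dict(graph)
    resized["scale"] = PRESET_SCALE[affordance_name]
    return resized

g = {"scale": 3.0, "points": 10000}
g2 = drop_on_affordance(g, "desktop")
assert g2["scale"] == 0.3
assert g["scale"] == 3.0  # the original graph state is untouched
```

A fuller implementation would also switch the input-handling context (e.g., gesture sensitivity) along with the scale, as the description notes.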

Although the above description refers to three different modes or operating contexts, according to various embodiments of the present invention a 3D data visualization system can support any number of different modes and/or contexts, as long as the requirements of a given application are met. Furthermore, affordances in the user interface that automatically change the 3D data visualization are not limited to affordances that simply resize the data. Various embodiments provide a rich array of affordances, including (but not limited to) affordances that respond to a 3D graph being moved over them by applying a machine learning algorithm (e.g., k-means clustering) to the data and/or by generating a new 3D visualization of multidimensional data represented by a single 3D object in the initial 3D visualization. In addition, automated actions such as resizing can be performed in response to predetermined inputs, which can include (but are not limited to) gesture inputs, voice inputs, inputs via one or more input devices, and/or any combination or sequence of inputs. Thus, the 3D data visualization system is not limited to the use of any particular affordance; any affordance, set of affordances, and/or input modality for interacting with the 3D graph can be used, as long as the requirements of a given application are met.

  In some embodiments, the 3D data visualization system determines the real-world context and dynamically changes the rendering of the 3D data visualization image so that it is realistically contained within the real-world environment. For example, instead of a virtual cubicle, a similar resize operation can be performed in a real cubicle and/or office space. In many embodiments, a depth-sensing camera system can be utilized to obtain information about the volume of free space surrounding the user. In other embodiments, any of a variety of suitable machine vision techniques can be utilized. In some embodiments, the 3D data visualization system detects a change in the volume of space associated with a change in the user's pose and/or viewpoint, and the size of the 3D data visualization image visible from the user's viewpoint changes in a way suited to the new volume. As can be readily appreciated, 3D data visualization systems generally do not limit the ability of a user to change the size of a 3D data visualization image based on the volume of free space available to contain the visualization image. Instead, the user can expand the 3D data visualization in a way that allows interactive data exploration.

  In many embodiments, various input modalities are supported by the 3D data visualization system. In some embodiments, a user can interact with the 3D data visualization system using a desktop device with a conventional WIMP (windows, icons, menus, pointer) user interface and/or a mobile device with a touch gesture based user interface. As users move to interacting with 3D data visualization systems via immersive 3D displays such as (but not limited to) headsets for AR, MR, or VR, various additional input modalities can be used to obtain user input. In some embodiments, a machine vision system can be used to observe 3D gesture-based inputs. In many embodiments, the user is provided with a rod-shaped user input device having an elongated handle in wireless communication with the 3D data visualization system. In many embodiments, the rod has a single button whose state is communicated over a wireless communication link. The 3D data visualization system can obtain user input by tracking the pose of the rod and the state of the button. Depending on the context, the 3D data visualization system can interpret the button as conveying different information. A simple input modality allows the user to move the position of the 3D data visualization image relative to the user when the button is not pressed, and to rotate the 3D data visualization image when the button is pressed. In other embodiments, the user's gaze direction and/or a remote controller that includes one or more buttons can be used for user input. As will be readily appreciated, according to various embodiments of the present invention, any of a variety of processes can be initiated based on a combination of pose input and button input, as long as the requirements of a given application are met. In addition, any of a variety of additional input modalities can be supported, if adapted to the needs of a given 3D data visualization application.
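The simple single-button rod modality described above can be sketched as a tiny input interpreter (hypothetical names; the actual device protocol is not specified in the description):

```python
def interpret_rod_input(button_pressed, pose_delta):
    # With the button up, the rod's pose change moves the 3D data
    # visualization image relative to the user; with the button held,
    # the same pose change rotates the visualization instead.
    action = "rotate" if button_pressed else "translate"
    return action, pose_delta

assert interpret_rod_input(False, (0.1, 0.0, 0.0))[0] == "translate"
assert interpret_rod_input(True, (0.0, 0.2, 0.0))[0] == "rotate"
```

Context-dependent button meanings (as the description allows) would replace the fixed `action` choice with a lookup keyed on the current interaction context.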

Although the present invention has been described in certain specific aspects, many additional modifications and variations will be apparent to those skilled in the art. It is therefore to be understood that the invention may be practiced otherwise than as specifically described, including with various implementation changes other than those described herein, such as alternative display technologies (e.g., immersive experience devices), without departing from the scope and spirit of the invention. Accordingly, the embodiments of the invention are to be considered in all respects as illustrative and not restrictive.

Claims (30)

  1. A data visualization system for generating a visualization image of a multidimensional data space,
    A display device;
    With a computer system,
    The computer system includes:
    A memory containing a 3D data visualization application;
    With processing system,
    The 3D data visualization application is:
    Instructing the processing system to load a set of data points into a visualization table in the memory, wherein each of the data points includes values in multiple data dimensions, and an additional visibility dimension value is assigned to each of the data points in the visualization table;
    Directing the processing system to generate a representation of a set of 3D objects corresponding to the set of data points, each of the 3D objects having a set of visualization attributes defining how to render the 3D object; The visualization attribute includes the position of the 3D object in a virtual space having three spatial dimensions,
    Instructing the processing system to receive a mapping that maps a data dimension to the visualization attribute;
    Instructing the processing system to determine the visualization attributes of the set of 3D objects based on a selected mapping that maps data dimensions to attributes of the 3D objects, wherein the selected mapping that maps data dimensions to attributes of the 3D objects defines the position of each of the 3D objects in the virtual space;
    Instructing the processing system to update, for each of the plurality of 3D objects, the visibility dimension in the visualization table to reflect the visibility of the 3D object based on the selected mapping that maps data dimensions to the visualization attributes;
    Instructing the processing system to interactively render a 3D data visualization image of the 3D object in the virtual space from a viewpoint determined based on a received user input;
    Data visualization system.
  2. The display device is a 3D display device;
    Rendering the 3D data visualization of the 3D object interactively in the virtual space from a viewpoint determined based on the received user input renders a stereo image displayed by the 3D display device. Including,
    The data visualization system according to claim 1.
  3.   The data visualization system according to claim 1, wherein the 3D data visualization application is implemented using a 3D rendering engine.
  4.   The data visualization system of claim 3, wherein implementation of the 3D data visualization application further relies on a script executed by the 3D rendering engine.
  5. The visualization attribute includes at least one attribute selected from the group consisting of:
    X coordinate,
    Y coordinate,
    Z coordinate,
    shape,
    size,
    color palette,
    color map,
    color scale,
    transparency,
    ID,
    URL,
    mask,
    displayability,
    3D object movement,
    sound generation,
    haptic feedback, and
    vibrotactile feedback;
    The data visualization system of claim 1.
  6.   The data visualization system of claim 1, wherein receiving the mapping that maps a data dimension to the visualization attribute further comprises receiving a user selection of the mapping that maps a data dimension to the visualization attribute.
  7.   The data of claim 1, wherein receiving the mapping that maps a data dimension to the visualization attribute further comprises retrieving a stored set of mappings that map a data dimension to the visualization attribute. Visualization system.
  8. Rendering the 3D data visualization of the 3D object interactively in the virtual space from a viewpoint determined based on the received user input;
    Generating at least one group of 3D objects based on the visualization attributes of the plurality of visible 3D objects;
    The data visualization system of claim 1, comprising interactively rendering in the virtual space from a viewpoint determined based on the received user input, at least one group of 3D data visualization images of the 3D object.
  9. Rendering the 3D data visualization of the 3D object interactively in the virtual space from a viewpoint determined based on the received user input;
    In response to the user input, changing the 3D object that forms part of the virtual environment in the virtual space, wherein the 3D objects corresponding to the set of data points remain stationary in the virtual space, causing the virtual environment in the 3D data visualization to appear to change due to the change of the 3D object that forms part of the virtual environment;
    The data visualization system of claim 1, comprising rendering the visible 3D object corresponding to the set of data points and the 3D object forming part of the virtual environment.
  10. In response to the user input, changing the 3D object that forms part of the virtual environment in the virtual space comprises at least one change selected from the group consisting of:
    changing the size of the 3D object that forms part of the virtual environment in response to a user command, to generate an impression that the 3D objects corresponding to the set of data points are changing in size relative to the virtual environment;
    moving the position of the 3D object that forms part of the virtual environment in response to the user command, to generate an impression that the 3D objects corresponding to the set of data points are moving relative to the virtual environment; and
    moving the position of the 3D object that forms part of the virtual environment in response to the user command, to generate an impression that the 3D objects corresponding to the set of data points are rotating relative to the virtual environment;
    The data visualization system of claim 9.
  11. Rendering the 3D data visualization of the 3D object interactively into the virtual space from a viewpoint determined based on the received user input;
    Illuminating at least a portion of the 3D objects, wherein each of the illuminated 3D objects is illuminated with a directional illumination light source that emits light from the user's viewpoint;
    The data visualization system of claim 1, comprising rendering at least the illuminated 3D object based on the user's viewpoint.
  12. Illuminating at least a portion of the 3D object;
    Measuring the user's field of view;
    Illuminating the 3D objects within the field of view of the user using a directional illumination light source that emits light from the user's viewpoint;
    12. The data visualization system of claim 11, comprising rendering the 3D object illuminated in the user's field of view.
  13. Rendering the 3D data visualization of the 3D object interactively in the virtual space from a viewpoint determined based on the received user input;
    Rotating at least a portion of the 3D object based on the viewpoint of the user so that the appearance of the rotating 3D object is invariant to the viewpoint of the user;
    The data visualization system of claim 1, comprising rendering the rotating 3D object based on the user's viewpoint.
  14. Rendering the 3D data visualization of the 3D object interactively in the virtual space from a viewpoint determined based on the received user input;
    Determining a position of at least one interaction primitive in the virtual space based on the viewpoint of the user;
    The data visualization system of claim 1, comprising rendering the at least one interaction primitive based on the user's viewpoint.
  15.   The data visualization system of claim 14, further comprising determining transparency of the at least one interaction primitive based on the user's viewpoint.
  16.   The data visualization system according to claim 1, wherein the 3D object includes a 3D object having a shape that maintains a perception of depth.
  17. Receiving the mapping that maps the data dimension to the visualization attribute;
    Receiving a selection of features of interest;
    Identifying the importance of at least a subset of the plurality of data dimensions to the feature of interest;
    The data visualization system of claim 1, comprising generating a mapping that maps the data dimension having the high importance to a specific visualization attribute.
  18. Identifying the importance of at least a subset of the plurality of data dimensions to the feature of interest;
    Identifying data dimensions that are numerical and categorical, and
    Generating a mapping that maps the numerical data dimension with high importance to the first set of visualization attributes;
    18. The data visualization system of claim 17, further comprising generating a mapping that maps the categorical data dimension with high importance to the second set of visualization attributes.
  19.   The data visualization system of claim 18, wherein the first set of visualization attributes includes an X position, a Y position, a Z position, and a size.
  20.   The data visualization system of claim 19, wherein the second set of visualization attributes includes a shape.
  21. The 3D data visualization application further includes:
    Instructing the processing system to receive at least one updated mapping that maps the data dimension to the visualization attribute;
    Instructing the processing system to determine the updated visualization attributes for the set of 3D objects based on the selected mapping that maps the data dimensions to attributes of the 3D objects, wherein the updated mapping that maps data dimensions to visualization attributes defines an updated position of each of the visible 3D objects in the virtual space;
    Instructing the processing system to generate, for the set of visible 3D objects, a trajectory of each 3D object in the set from a position in the virtual space to an updated position in the virtual space. And
    Rendering an animation of the movement of the 3D object along the generated trajectory from a position in the virtual space to an updated position in the virtual space from a viewpoint determined based on the received user input Instructing the processing system to
    The data visualization system according to claim 1.
  22.   The data visualization system according to claim 21, wherein the 3D data visualization application further instructs the processing system to determine an updated visibility value for each of the plurality of 3D objects to reflect the visibility of each of the 3D objects based on the updated mapping.
  23.   The data visualization system according to claim 22, wherein interactively rendering the animation of the movement of the 3D objects along the generated trajectories further comprises changing the time at which different sets of 3D objects begin moving along their trajectories in the rendered animation.
  24.   24. The data visualization system of claim 23, wherein in the rendered animation, a time at which a different set of 3D objects starts moving along the trajectory of the 3D object is determined based on user input.
  25.   The data visualization system of claim 21, wherein interactively rendering the animation of the movement of the 3D objects along the generated trajectories further comprises changing the speed at which different sets of 3D objects move along their trajectories in the rendered animation.
  26. Rendering the 3D data visualization of the 3D object interactively in the virtual space from a viewpoint determined based on the received user input;
    Determining a position of at least one affordance in the virtual space, wherein user input directing movement of the 3D data visualization image onto the at least one affordance initiates a change of the 3D data visualization image;
    Detecting movement of the 3D data visualization image on the at least one affordance;
    Modifying the 3D data visualization based on one of the at least one affordance;
    The data visualization system according to claim 1, comprising rendering the modified 3D data visualization image based on the viewpoint of the user.
  27.   27. The data visualization system of claim 26, wherein changing the 3D data visualization image based on one of the at least one affordance comprises changing the size of the 3D data visualization image.
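The affordance mechanism of claims 26-27 can be sketched as a region-containment test plus an associated modification. The structure below is purely illustrative (spherical regions, a single scale-factor modification, and all field names are assumptions, not from the patent): the visualization is dragged, its position is tested against each affordance's region, and a hit applies that affordance's modification — here, the size change of claim 27.

```python
# Hypothetical sketch of claims 26-27: an affordance occupies a region of the
# virtual space; moving the 3D data visualization onto it triggers the
# affordance's modification (here, a change of the visualization's size).

from dataclasses import dataclass

@dataclass
class Affordance:
    center: tuple          # position of the affordance in the virtual space
    radius: float          # extent of the affordance's (spherical) region
    scale_factor: float    # modification applied: resize the visualization

def contains(affordance, point):
    """True if the point lies within the affordance's region."""
    d2 = sum((p - c) ** 2 for p, c in zip(point, affordance.center))
    return d2 <= affordance.radius ** 2

def on_move(visualization_pos, visualization_size, affordances):
    """Detect movement onto an affordance and return the modified size."""
    for a in affordances:
        if contains(a, visualization_pos):
            return visualization_size * a.scale_factor
    return visualization_size  # no affordance hit: size unchanged

shrink = Affordance(center=(5.0, 0.0, 0.0), radius=1.0, scale_factor=0.5)
print(on_move((5.2, 0.0, 0.0), 2.0, [shrink]))  # dropped on affordance
print(on_move((0.0, 0.0, 0.0), 2.0, [shrink]))  # not on affordance
```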
  28. The memory further includes avatar metadata, the avatar metadata includes a set of visualization attributes that define how to render the avatar, and the visualization attributes include the location of the avatar in the virtual space;
    Interactively rendering the 3D data visualization of the 3D objects in the virtual space from a viewpoint determined based on the received user input comprises rendering the avatar in the 3D data visualization based on the viewpoint and the avatar metadata,
    The data visualization system according to claim 1.
  29. The avatar metadata further includes pose information;
    Rendering the avatar in the 3D data visualization based on the viewpoint and the avatar metadata further comprises rendering a pose of the avatar in the 3D data visualization based on the pose information in the avatar metadata,
    The data visualization system according to claim 28.
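One possible shape for the avatar metadata of claims 28-29 is sketched below. Every field name here is a hypothetical illustration (the patent specifies only that the metadata carries visualization attributes including location, and, per claim 29, pose information): the renderer combines the viewpoint with the stored attributes and pose to draw the avatar inside the visualization.

```python
# Hypothetical sketch of claims 28-29: avatar metadata held in memory,
# carrying rendering attributes (including location) plus pose information.
# All field names below are illustrative assumptions, not from the patent.

avatar_metadata = {
    "visualization_attributes": {
        "location": (0.0, 0.0, -2.0),    # avatar position in the virtual space
        "color": (0.8, 0.8, 0.8),
    },
    "pose": {                            # claim 29: pose information
        "head_direction": (0.0, 0.0, 1.0),
        "joint_angles": {"right_elbow": 45.0},
    },
}

def render_avatar(viewpoint, metadata):
    """Assemble draw parameters for the avatar from viewpoint + metadata."""
    attrs = metadata["visualization_attributes"]
    return {
        "position": attrs["location"],
        "color": attrs["color"],
        "pose": metadata.get("pose"),    # pose rendered only when present
        "viewpoint": viewpoint,
    }

draw = render_avatar(viewpoint=(0.0, 1.6, 0.0), metadata=avatar_metadata)
print(draw["position"], draw["pose"]["joint_angles"]["right_elbow"])
```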
  30. A data visualization system for generating a visualization image of a multidimensional data space,
    A 3D display device;
    With a computer system,
    The computer system includes:
    A memory containing a 3D data visualization application;
    With processing system,
    The 3D data visualization application is:
    Instructing the processing system to load a set of data points into a visualization table in the memory, each of the data points including values in multiple data dimensions, and an additional visibility dimension being assigned to each of the data points in the visualization table;
    Instructing the processing system to generate a representation of a set of 3D objects corresponding to the set of data points, each of the 3D objects having a set of visualization attributes defining how to render the 3D object, the visualization attributes including the position of the 3D object in a virtual space having three spatial dimensions;
    Instructing the processing system to receive a user-selected mapping that maps data dimensions to the visualization attributes;
    Instructing the processing system to determine the visualization attributes of the set of 3D objects based on the selected mapping that maps data dimensions to the attributes of the 3D objects, the selected mapping defining the position of each of the 3D objects in the virtual space;
    Instructing the processing system to update, for each of the plurality of 3D objects, the visibility dimension in the visualization table to reflect the visibility of the 3D object based on the selected mapping that maps data dimensions to the visualization attributes;
    Instructing the processing system to interactively render the 3D data visualization of the 3D objects in the virtual space, as a stereo image displayed on the 3D display, from a viewpoint determined based on received user input;
    The rendering comprising:
    Generating at least one group of 3D objects based on the visualization attributes of the plurality of visible 3D objects;
    Modifying, in response to the user input, the 3D objects that form part of a virtual environment in the virtual space, such that the at least one group of 3D objects remains stationary in the virtual space while, due to the modification of the 3D objects forming part of the environment, appearing to change relative to the virtual environment in the 3D data visualization; and
    Interactively rendering, in the virtual space from a viewpoint determined based on the received user input, the visible 3D objects corresponding to the at least one group of 3D objects and the 3D objects forming part of the virtual environment.
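The data path of claim 30 — load data points into a visualization table with an appended visibility dimension, receive a user-selected mapping from data dimensions to visualization attributes (including the three spatial position axes), then update visibility from the mapping — can be sketched as follows. This is an illustrative reading, not the patent's implementation; the visibility rule used here (a point is visible only if every mapped dimension has a value) is an assumption.

```python
# Hypothetical sketch of claim 30's data path: visualization table with an
# extra visibility dimension, plus a user-selected mapping of data
# dimensions onto visualization attributes (x/y/z become the position).

def load_visualization_table(data_points):
    """Copy data points into a table, appending a visibility dimension."""
    return [dict(point, visible=True) for point in data_points]

def update_visibility(table, mapping):
    """Assumed rule: a row is visible only if every mapped dimension is set."""
    for row in table:
        row["visible"] = all(row.get(dim) is not None
                             for dim in mapping.values())

def apply_mapping(table, mapping):
    """mapping: visualization attribute name -> data dimension name."""
    objects = []
    for row in table:
        if not row["visible"]:
            continue
        attrs = {attr: row[dim] for attr, dim in mapping.items()}
        # Position in the virtual space comes from the three mapped axes.
        attrs["position"] = (attrs.pop("x"), attrs.pop("y"), attrs.pop("z"))
        objects.append(attrs)
    return objects

data = [{"mass": 1.0, "height": 2.0, "age": 30, "score": None},
        {"mass": 1.5, "height": 1.8, "age": 40, "score": 5}]
table = load_visualization_table(data)
mapping = {"x": "mass", "y": "height", "z": "age", "size": "score"}
update_visibility(table, mapping)
objects = apply_mapping(table, mapping)
print(table[0]["visible"], len(objects))  # first row hidden: 'score' missing
```

The renderer would then draw only these objects, with the viewpoint and stereo display handling left to the graphics layer.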
JP2018502734A 2015-09-24 2016-09-26 Data visualization system and method using three-dimensional display Pending JP2018533099A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US201562232119P true 2015-09-24 2015-09-24
US62/232,119 2015-09-24
US201662365837P true 2016-07-22 2016-07-22
US62/365,837 2016-07-22
PCT/US2016/053842 WO2017054004A1 (en) 2015-09-24 2016-09-26 Systems and methods for data visualization using three-dimensional displays

Publications (1)

Publication Number Publication Date
JP2018533099A true JP2018533099A (en) 2018-11-08

Family

ID=58387512

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2018502734A Pending JP2018533099A (en) 2015-09-24 2016-09-26 Data visualization system and method using three-dimensional display

Country Status (4)

Country Link
US (2) US9665988B2 (en)
EP (1) EP3353751A4 (en)
JP (1) JP2018533099A (en)
WO (1) WO2017054004A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019113299A1 (en) * 2017-12-06 2019-06-13 Reconstructor Holdings Llc Methods and systems for representing relational information in 3d space
US10127574B2 (en) * 2011-11-30 2018-11-13 Cynthia Brown Internet marketing analytics system
WO2016068901A1 (en) * 2014-10-29 2016-05-06 Hewlett-Packard Development Company, L.P. Visualization including multidimensional graphlets
JP2018533099A (en) 2015-09-24 2018-11-08 カリフォルニア インスティチュート オブ テクノロジー Data visualization system and method using three-dimensional display
US10134179B2 (en) 2015-09-30 2018-11-20 Visual Music Systems, Inc. Visual music synthesizer
RU2669716C1 (en) * 2017-05-12 2018-10-15 Общество с ограниченной ответственностью "ВИЗЕКС ИНФО" System and method for processing and analysis of large amounts of data
US20190188893A1 (en) * 2017-12-18 2019-06-20 Dataview Vr, Llc Simulated reality data representation system and method
US10438414B2 (en) 2018-01-26 2019-10-08 Microsoft Technology Licensing, Llc Authoring and presenting 3D presentations in augmented reality
US20190347837A1 (en) * 2018-05-14 2019-11-14 Virtualitics, Inc. Systems and Methods for High Dimensional 3D Data Visualization

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5461708A (en) 1993-08-06 1995-10-24 Borland International, Inc. Systems and methods for automated graphing of spreadsheet information
US6057856A (en) 1996-09-30 2000-05-02 Sony Corporation 3D virtual reality multi-user interaction with superimposed positional information display for each user
US6154723A (en) 1996-12-06 2000-11-28 The Board Of Trustees Of The University Of Illinois Virtual reality 3D interface system for data creation, viewing and editing
US6456285B2 (en) * 1998-05-06 2002-09-24 Microsoft Corporation Occlusion culling for complex transparent scenes in computer generated graphics
US6750864B1 (en) 1999-11-15 2004-06-15 Polyvista, Inc. Programs and methods for the display, analysis and manipulation of multi-dimensional data implemented on a computer
US20040041846A1 (en) * 2002-04-10 2004-03-04 Peter Hurley System and method for visualizing data
US8131471B2 (en) 2002-08-08 2012-03-06 Agilent Technologies, Inc. Methods and system for simultaneous visualization and manipulation of multiple data types
CA2403300A1 (en) * 2002-09-12 2004-03-12 Pranil Ram A method of buying or selling items and a user interface to facilitate the same
US8042056B2 (en) * 2004-03-16 2011-10-18 Leica Geosystems Ag Browsers for large geometric data visualization
US7283654B2 (en) 2004-08-26 2007-10-16 Lumeniq, Inc. Dynamic contrast visualization (DCV)
US20070211056A1 (en) * 2006-03-08 2007-09-13 Sudip Chakraborty Multi-dimensional data visualization
KR101257849B1 (en) * 2006-09-29 2013-04-30 삼성전자주식회사 Method and Apparatus for rendering 3D graphic objects, and Method and Apparatus to minimize rendering objects for the same
US9098559B2 (en) * 2007-08-31 2015-08-04 Sap Se Optimized visualization and analysis of tabular and multidimensional data
GB2458927B (en) * 2008-04-02 2012-11-14 Eykona Technologies Ltd 3D Imaging system
KR101590763B1 (en) * 2009-06-10 2016-02-02 삼성전자주식회사 Apparatus and method for generating 3d image using area extension of depth map object
US8890946B2 (en) 2010-03-01 2014-11-18 Eyefluence, Inc. Systems and methods for spatially controlled scene illumination
US20130097563A1 (en) * 2010-06-24 2013-04-18 Associacao Instituto Nacional De Matematica Pura E Aplicada Multidimensional-data-organization method
RU2634677C2 (en) 2011-07-28 2017-11-02 Шлюмбергер Текнолоджи Б.В. System and method for performing well operations with hydraulic fracture
US9824469B2 (en) 2012-09-11 2017-11-21 International Business Machines Corporation Determining alternative visualizations for data based on an initial data visualization
WO2014130044A1 (en) * 2013-02-23 2014-08-28 Hewlett-Packard Development Company, Lp Three dimensional data visualization
US20160119615A1 (en) * 2013-05-31 2016-04-28 Hewlett-Packard Development Company, L.P. Three dimensional data visualization
US20150113460A1 (en) 2013-10-23 2015-04-23 Wal-Mart Stores, Inc. Data Analytics Animation System and Method
US20150205840A1 (en) * 2014-01-17 2015-07-23 Crytek Gmbh Dynamic Data Analytics in Multi-Dimensional Environments
US9734595B2 (en) 2014-09-24 2017-08-15 University of Maribor Method and apparatus for near-lossless compression and decompression of 3D meshes and point clouds
JP2018533099A (en) 2015-09-24 2018-11-08 カリフォルニア インスティチュート オブ テクノロジー Data visualization system and method using three-dimensional display
DE102015221998B4 (en) 2015-11-09 2019-01-17 Siemens Healthcare Gmbh A method of assisting a finder in locating a target structure in a breast, apparatus and computer program
US20170185668A1 (en) 2015-12-28 2017-06-29 Informatica Llc Method, apparatus, and computer-readable medium for visualizing relationships between pairs of columns
US10482196B2 (en) 2016-02-26 2019-11-19 Nvidia Corporation Modeling point cloud data using hierarchies of Gaussian mixture models
WO2017221221A1 (en) 2016-06-24 2017-12-28 Analytics For Life Non-invasive method and system for measuring myocardial ischemia, stenosis identification, localization and fractional flow reserve estimation
US9953372B1 (en) 2017-05-22 2018-04-24 Insurance Zebra Inc. Dimensionality reduction of multi-attribute consumer profiles

Also Published As

Publication number Publication date
EP3353751A4 (en) 2019-03-20
US9665988B2 (en) 2017-05-30
EP3353751A1 (en) 2018-08-01
US20170193688A1 (en) 2017-07-06
US20170092008A1 (en) 2017-03-30
WO2017054004A1 (en) 2017-03-30
US10417812B2 (en) 2019-09-17
WO2017054004A8 (en) 2018-02-08

Similar Documents

Publication Publication Date Title
Kim Designing virtual reality systems
US9928654B2 (en) Utilizing pseudo-random patterns for eye tracking in augmented or virtual reality systems
US8749557B2 (en) Interacting with user interface via avatar
Gutierrez et al. Stepping into virtual reality
Dai Virtual reality for industrial applications
Hardiess et al. The International Encyclopedia of the Social and Behavioral Sciences
EP2946264B1 (en) Virtual interaction with image projection
JP5976019B2 (en) Theme-based expansion of photorealistic views
Noh et al. A review on augmented reality for virtual heritage system
Hanson et al. Constrained 3D navigation with 2D controllers
Amin et al. Comparative study of augmented reality SDKs
US20160133230A1 (en) Real-time shared augmented reality experience
Anthes et al. State of the art of virtual reality technology
US8643569B2 (en) Tools for use within a three dimensional scene
Wright Jr et al. OpenGL SuperBible: comprehensive tutorial and reference
Muhanna Virtual reality and the CAVE: Taxonomy, interaction challenges and research directions
Vince Introduction to virtual reality
Vince Essential virtual reality fast: how to understand the techniques and potential of virtual reality
Schou et al. A Wii remote, a game engine, five sensor bars and a virtual reality theatre
US9224237B2 (en) Simulating three-dimensional views using planes of content
US9898864B2 (en) Shared tactile interaction and user safety in shared space multi-person immersive virtual reality
EP0927406A4 (en) Programmable computer graphic objects
US9891712B2 (en) User-defined virtual interaction space and manipulation of virtual cameras with vectors
US8610714B2 (en) Systems, methods, and computer-readable media for manipulating graphical objects
Wang et al. Mixed reality in architecture, design, and construction

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20190709