US20080129819A1 - Autostereoscopic display system - Google Patents
Autostereoscopic display system
- Publication number
- US20080129819A1 (application US 12/025,296)
- Authority
- US
- United States
- Prior art keywords
- scene
- video
- viewpoints
- rendering
- imaging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
- H04N13/305—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
- H04N13/307—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using fly-eye lenses, e.g. arrangements of circular lenses
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/363—Image reproducers using image projection screens
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/398—Synchronisation thereof; Control thereof
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/20—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
- G02B30/26—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
- G02B30/27—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
- G02B30/29—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays characterised by the geometry of the lenticular array, e.g. slanted arrays, irregular arrays or arrays of varying shape or size
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/16—Indexing scheme for image data processing or generation, in general involving adaptation to the client's capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/286—Image signal generators having separate monoscopic and stereoscopic modes
Definitions
- a multiple-display video system and method are provided by which a rendering image processor is coupled to a plurality of virtual cameras, which in one embodiment occupy separate nodes on a network.
- Associated with the rendering image processor is a first memory that defines a world having three dimensional spatial coordinates, a second memory for storing graphical image data for a plurality of objects, and a third memory for storing instructions on the positioning of the objects in the world.
- a viewpoint of the world is defined and stored.
- the rendering image processor renders a scene of the world according to the viewpoint of the virtual camera.
- Each virtual camera has at least one display associated with it to display the scene rendered according to the virtual camera's viewpoint.
- the virtual camera viewpoints may be chosen to be different from each other.
- a rendering node or server has first, second and third memories as above defined, the third memory storing instructions for positioning the objects in the virtual world and animating these objects.
- a plurality of clients which are preferably disposed remotely from the server, each have associated memory and processing capability. Each of the clients has one or more display units associated with it, and viewpoints are established for each.
- Each of the clients stores, prior to a first time, graphical image data for the objects to be displayed.
- Each of the clients constructs a respective scene based on instructions received from the server at the first time.
- the previous storage of the graphical image data (such as textural and geometric data) associated with the animated objects dramatically reduces the amount of bandwidth necessary to communicate animation instructions from the server to each of the clients, permitting real-time animation effects across a large number of associated displays.
- these displays may be physically sited to be contiguous with each other so as to create a single large display.
- contiguous displays can be directed to display the scene or overlapping scenes and the viewpoints of the displays can be varied so that, to an observer passing by the displays, the rendered scene appears to shift as a function of the position of the observer, such as it would if the observer were looking at a real scene through a bank of windows.
- Other viewpoint shifts are possible to produce, e.g., arcuate or circumferential virtual camera arrays, of either convex or concave varieties.
- a large multiple-screen animated array may be provided at a commercial location and used to display a combination of animations and text data derived from a local database.
- These data, such as Flight Information Data System (FIDS) data from an airline at an airport, can be used to display such things as airline arrivals and departures on predetermined portions of the displays.
- the present invention provides apparatus for producing an overlay of the FIDS data on the animated sequences.
- the method and system of the invention may be used to illuminate large lenticular arrays to create an autostereoscopic display.
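The bandwidth saving described above can be sketched in code. This is an illustrative model, not the patent's implementation: the message format, function names, and asset names are assumptions. The point it demonstrates is that once clients have pre-stored the heavy geometry and texture data, each animation update from the server need carry only object IDs with position and orientation vectors.

```python
import json

# Hypothetical sketch: clients pre-load heavy geometry/texture assets before
# runtime, so the per-frame message from the rendering server carries only
# object IDs with position and orientation vectors -- a few dozen bytes per
# object instead of the large graphical data sets themselves.

def make_frame_packet(frame, placements):
    """placements: {object_id: ((x, y, z), (rx, ry, rz))}"""
    return json.dumps({
        "frame": frame,
        "objects": [
            {"id": oid, "pos": pos, "rot": rot}
            for oid, (pos, rot) in placements.items()
        ],
    })

def apply_frame_packet(packet, preloaded_assets):
    """A client composites its scene from assets stored prior to a first time."""
    msg = json.loads(packet)
    scene = []
    for obj in msg["objects"]:
        asset = preloaded_assets[obj["id"]]   # geometry/texture already local
        scene.append((asset, tuple(obj["pos"]), tuple(obj["rot"])))
    return msg["frame"], scene

packet = make_frame_packet(1, {"plane": ((0.0, 1.5, -20.0), (0.0, 90.0, 0.0))})
frame, scene = apply_frame_packet(packet, {"plane": "plane_mesh_and_textures"})
```

Only `packet` crosses the network in this model; the asset lookup happens entirely on the client.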
- FIG. 1 is a high-level schematic network diagram for a video projection array according to the invention
- FIG. 2 is a high level schematic block diagram of a virtual camera establishment, animation and imaging process according to the invention
- FIG. 3 is a viewpoint configuration or virtual camera protocol process flow diagram, and is a detail of FIG. 2 ;
- FIG. 4 is a schematic diagram of parameters establishing a viewpoint for a virtual camera
- FIG. 5 is a schematic diagram of the “world” and “universe” concepts as used in the invention.
- FIG. 6 is a block diagram showing modules of the image rendering process and system according to the invention.
- FIG. 7A is a schematic block diagram showing the integration of text data into displayed images by a rendering server according to the invention.
- FIG. 7B is a schematic block diagram of a client process corresponding to the rendering server process shown in FIG. 7A ;
- FIG. 8 is a schematic plan view of a graphics card and motherboard architecture according to one embodiment of the invention.
- FIG. 9A is a schematic diagram of a preferred hardware configuration of a rendering server according to the invention.
- FIG. 9B is a block diagram showing calculation of total output resolution
- FIG. 10 is a high-level schematic diagram of a server/client network according to a second embodiment of the invention.
- FIG. 11A is a block diagram showing placement of multiple channels or stations to constitute a single, extended-length display
- FIG. 11B is a diagram showing the superposition of text data on the display illustrated in FIG. 11A ;
- FIG. 12 is a high-level schematic diagram of a server/client network according to a third embodiment of the invention.
- FIG. 13 is a multiple-display imaging array according to a fourth embodiment of the invention, illustrating different virtual camera position arrays
- FIG. 14 is a high-level schematic block diagram showing a portion of a system using the invention, and the execution, data transfer and storage of software and electronic data components thereof, and
- FIG. 15 is a high-level schematic block diagram of an autostereoscopic system employing the invention.
- FIG. 1 illustrates a representative layout of a contiguous video projection array according to the invention, the illustrated embodiment being an airport terminal display system that displays animated graphics and a text data overlay from a flight information data system (FIDS) database.
- the video projection array system, indicated generally at 10 , includes a main server 12 which accepts FIDS data or data from any other text source, such as may be presented in Oracle or SQL, through an internal Ethernet port 14 joined to a high speed switching hub 16 .
- the hub 16 makes it possible to source the FIDS data to several isolated imaging arrays, only one such array 10 being shown in FIG. 1 .
- the preferably UNIX-based main server 12 transceives data through a series of separate switching Ethernet hubs 18 , 20 and 22 . Each of the hubs 18 - 22 is directly linked to one or more groups 24 - 28 of imaging or rendering computers 38 - 50 .
- Each of the hubs 18 - 22 has associated with it a respective rendering server 38 , 44 or 48 .
- the rendering server 38 controls clients 40 and 42 through hub 18 .
- the rendering server 44 controls a client 46 through hub 20 .
- the rendering server 48 controls a client 50 through hub 22 .
- the rendering servers 38 , 44 and 48 and their respective clients 40 - 42 , 46 , 50 together constitute the imaging computers 38 - 50 that run the multipanel displays in the embodiment illustrated in FIG. 1 .
- the rendering servers 38 , 44 , 48 have at least the same capacity and resolution capability as their client counterparts 40 - 42 , 46 , 50 and in the illustrated embodiment all contain four video channel outputs and four corresponding logical or virtual cameras generating output on these video channels. Using current hardware, a maximum of eight video channels per imaging computer 38 - 50 can be used.
- the imaging computers 38 - 50 may in general have minicomputer architecture, and may use any of several operating systems such as Windows NT, Windows 2000 or LINUX 6.3.
- Server/client groups 24 , 26 and 28 preferably are kept isolated from each other by the use of hubs 18 - 22 to prevent unnecessary cross talk.
- Each of the imaging computers 38 - 50 has a set 52 , 54 , 56 , 58 of projectors, each projector 52 - 58 being controlled by a “virtual camera” set up by the software as will be described below and accepting one video channel output from a respective controlling imaging computer 38 - 50 .
- the illustrated CRT projectors 52 - 58 are exemplary only in kind and number and are one of many possible kinds of display units, which also include rear projectors, various kinds of flat panel displays or autostereoscopic projection screens (see FIG. 15 and its accompanying discussion below).
- the video projectors or other display units 52 - 58 may be sequenced from left to right or from top to bottom, may provide rear screen or front screen projection imagery, and may be of any size or of any specific resolution. When making up a video wall, the projection units 52 - 58 are preferably equal in resolution to each other and should provide a contiguous composite image.
- the system 10 also includes a plurality of video multiplexers 60 , 62 , each of which accepts one or more channels per client workstation 38 - 50 .
- the multiplexers 60 , 62 are used to relay video signals from the imaging computers 38 - 50 to a monitoring station at which are positioned monitors 64 , 66 for user-induced functional changes, imagery updating or image alignment as may be necessary for a particular type of video wall or other multiunit display.
- a single monitor 64 or 66 may be connected to each of the multiplexers 60 , 62 , so as to be capable of instantaneous switching between the large number of video channels present.
- the server 12 further provides high speed conduits 69 , 70 , 71 with each of the hubs 18 , 20 and 22 while keeping those hubs 18 - 22 effectively isolated from each other.
- conduits 69 - 71 may pass packets of positional data or sequencing information that relay positioning and rendering cues among the rendering servers 38 , 44 , 48 .
- the conduits 69 - 71 further simultaneously transmit FIDS text data as overlay text information on animations displayed on the (e.g.) video wall created by units 52 - 58 .
- a further workstation 72 which may be UNIX-based, monitors activity on the entire system through main server 12 .
- Workstation 72 also supports a link 74 to the outside world, through firewall 76 .
- the external connection permits data pertaining to the imaging array to be accessed remotely through the firewall 76 , and permits remote network management of the system.
- artwork shown on the video wall constituted by projection units 52 - 58 may be transformed or reconstituted by commands issued remotely, and may also be viewed remotely to verify image quality and stability.
- the *.cfg file, described below and copied to each of the rendering computers 38 , 44 , 48 , contains animation start functions and further permits the recognition of an interrupt sent from the workstation 72 in order to effect changes in the animation.
- Path 74 may be used to load new sets of textures and geometries onto the hard drive storage of server 12 , and thence to rendering servers 38 , 44 , 48 , in order to partly or completely replace the imagery shown on the video wall, nearly instantaneously. In the illustrated embodiment, it is preferred that these changes be done by replacing the old *.cfg file with a new one.
- System 10 is modular in its design, easily permitting the addition of further rendering servers and associated client imaging computers, with no theoretical upward limit to the number of video channels to be included in the total system.
- FIG. 14 is a schematic diagram of a single server/client group 24 of the networked group of computers illustrated in FIG. 1 .
- This diagram shows where different ones of the software elements of the system are installed on which of the imaging computers.
- the server 38 and each of the clients 40 , 42 have an executable initiation or “*.ini” file and a configuration or “*.cfg” file 550 , 552 , 554 stored on their hard drives.
- the *.cfg files will be identical to each other, and the *.ini files nearly so. These two files work in tandem.
- the *.ini file uses listed parameters to define (a) how many sequential images will be loaded either into the rendering servers 38 , 44 , 48 or into the various client computer stations 40 - 42 , 46 , 50 linked thereto; (b) the functions, sequences and timing of the animation; (c) the number of imaging computers 38 - 50 that may exist on the hub node; and (d) the manner in which sequences of images are assigned to respective graphics card output channels (described below) inside the workstations 38 - 50 .
- the *.ini file may contain as many as two hundred separate parameter adjustments, and an even greater number of specifications of parameters pertaining to the animation.
- the *.ini file on any one imaging computer will differ from the *.ini file on any other imaging computer in its assignment of station ID and node ID.
- each imaging computer controls four stations or virtual cameras.
- Each imaging computer will also be assigned a unique node number.
- the *.ini file further contains a bit which tells the system whether the imaging computer in question is a render server or not.
- the imaging computer uses the station ID contained in the *.ini file to determine which of the several virtual cameras or viewpoints it should use; to minimize network traffic the parameters for all of the virtual cameras for all of the viewpoints are stored on each imaging computer hard drive.
- the *.cfg file responds to commands from the *.ini file.
- the *.cfg file is an artwork developer's tool for configuring specific sequences of preloaded art material to behave in certain ways.
- the *.cfg file responds directly to the textures and geometries which the art developer has established for the animation sequences, and has a direct association with all textures and geometries that are stored on all mass storage media in the system.
- the *.cfg file controls how the animation progresses; it contains calls to portions of the rendering sequence, such as layerings, timings of certain sequences and motions of specific objects found in the texture and geometry files.
- the *.cfg file either contains or points to all of the information that any rendering client or server would need to handle its portion of the full rendering of the entire multi-channel array.
- the *.cfg files distributed to the imaging computers controlling the individual display panels will be identical to each other, but the information and calls therein are accessed and interpreted differently from one computer to the next according to whether the computer has been identified in the *.ini file as a render server or not, the node ID of the imaging computer, and the station IDs controlled by that imaging computer.
- the *.cfg file also contains command lines used to make an interrupt, as when the system administrator wishes to change the animation or other scene elements during runtime.
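The patent does not publish the actual *.ini format, so the snippet below is purely illustrative: the section name and keys (`node_id`, `is_render_server`, `station_ids`) are hypothetical stand-ins for the per-machine parameters the text describes, namely a unique node ID, a render-server flag, and the station IDs (virtual cameras) that select which of the locally stored viewpoints this machine renders.

```python
import configparser

# Hypothetical *.ini fragment; keys and section name are assumptions.
SAMPLE_INI = """
[imaging_computer]
node_id = 2
is_render_server = no
station_ids = 4, 5, 6, 7
"""

def load_role(ini_text):
    """Parse the per-machine role parameters an imaging computer would read."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    sec = cfg["imaging_computer"]
    return {
        "node_id": sec.getint("node_id"),
        "is_render_server": sec.getboolean("is_render_server"),
        # Parameters for all viewpoints are stored on every machine; the
        # station IDs select which virtual cameras this machine drives.
        "station_ids": [int(s) for s in sec["station_ids"].split(",")],
    }

role = load_role(SAMPLE_INI)
```

Because every machine holds all viewpoint parameters, only these few identity values differ from one *.ini file to the next, which matches the text's claim that the files are "nearly" identical.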
- Each of the rendering servers and clients has stored thereon a world scene 556 or a replica 558 , 560 thereof.
- These world scenes are constructed using a library of graphical imaging data files (in this embodiment, partitioned into geometry and texture files) 562 , 564 and 566 stored on the hard drives.
- the render server 38 further has foreground, background, viewpoint generation and sequencing algorithms 568 which it accesses to set the viewpoints. Algorithms 568 together make up an overall system monitoring protocol which permits the system administrator to manually review or intervene in making on-line changes and adjustments to any viewpoint already established on the system.
- each imaging computer also stores an executable (*.exe) file which, when executed by that computer's processor, interprets data stream commands coming from the rendering server and received by each of the clients.
- the render server 38 further keeps a clock 570 that is used to synchronize the animation across all of the displays.
- FIG. 2 is a block diagram illustrating the high-level operation of the imaging computers according to the invention.
- the system 10 as shown in FIG. 1 is used to provide an array of multiple, contiguous displays for the projection of a unified video image containing animation characteristics and overlaid text.
- the *.ini file and the companion *.cfg file are loaded from the mass storage media associated with respective ones of the imaging computers to RAM.
- the illustrated embodiment uses, at each imaging computer, one or more general-purpose processors that are programmed to carry out the invention with computer programs that are loaded and executed; it is also possible to hard-wire many of the listed functions and to use special-purpose processors.
- “virtual cameras” are created by the render server viewpoint algorithm which correspond to each of the output video channels. These “virtual cameras” are logical partitions of the processors and memories of imaging computers 38 - 50 , four such virtual cameras being created for each imaging computer 38 - 50 in the illustrated embodiment. The system administrator sets up the properties of these virtual cameras in the software in advance of execution.
- the “align cameras” process 102 begins selecting previously stored imaging textures and geometries so as to lead to the creation of the final set of images.
- Camera alignment step 102 is linked to a step 104 , which in the illustrated airport terminal embodiment establishes each of these virtual cameras as driving a display for either a desk or a gate.
- Process step 104 makes it possible to assign certain text data to each of the virtual camera nodes established at step 102 . Registration with the FIDS server at step 104 also includes defining a prescribed set of locations for the overlay of the animation by these text data.
- Step 102 establishes which prestored geometries and texture files are needed for a scene.
- Step 106 queries these files and loads them.
- a geometry file possesses information on the exterior limits of a displayed object.
- a texture file relates to a color/surface treatment of such an object or of the background.
- each rendering server or node 38 , 44 , 48 establishes a scene by compiling the previously loaded geometries and textures, setting their values in terms of displayed geometric positions and orientations within this newly created scene.
- the results are sent (step 114 ) by each render server and are received (step 110 ) by each client 40 - 42 , 46 , 50 .
- This data flow of vector positions and orientations, also known as sequencing instructions, across the network tells the imaging computers 38 - 50 (and the virtual cameras set up by them) how to direct their respective portions of the full scene's animation layout across any of the screens or displays of the composite video array.
- the novel approach of transmitting geometries and textures to clients/virtual camera nodes first, and compositing them later into scenes (step 116 ) using subsequently transmitted vector information provides the technical advantage of greatly reducing the amount of information that has to flow across the network between the rendering servers 38 , 44 , 48 and their respective clients 40 - 42 , 46 , 50 .
- the transmissions between the servers 38 , 44 , 48 and their respective clients 40 - 42 , 46 , 50 consist only of vector positions of the stored textures and geometries instead of the very large graphical data sets generated by the rendering computers.
- the positions and orientations are used to place the geometries within scenes.
- the placement step 116 uses a coordinate system previously established by the software.
- the geometries, positions and orientations may change or may be modified as rapidly as the rendering servers 38 , 44 , 48 and the client computers 40 - 42 , 46 , 50 can individually generate the subsequent set of rendered images, or as fast as the speed of the network in relaying new positions and coordinates to the referenced client computers to produce the full scene, whichever factor is more limiting.
- at step 118 the FIDS data accessed by the UNIX server 12 (which in turn is linked to the network via path 74 , FIG. 1 ) are directed to the appropriate ones of the rendering servers 38 , 44 , 48 and composited over the animation graphics.
- each output screen 52 - 58 along the video array shares a preset list of textual flight information. This flight information may be updated independently of the animation rendering process.
- the rendered scene at step 120 is refreshed with the next set of geometries to be established with new orientation coordinates on the same textured scene as background with the FIDS data stream continuing to project flight information within the same pre-established locations.
- at step 122 the texture memory is purged to replenish available space for new imaging data in the animation to be loaded. The process then reverts to step 106 for the next cycle.
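The per-cycle process above can be sketched as a single loop body. This is a schematic model only: the function names and data shapes are assumptions, and the step-number comments map onto the steps described in the text (load assets, place geometries, overlay FIDS text, purge texture memory).

```python
# Schematic model of one animation cycle; names and structures are illustrative.

def run_cycle(scene_assets, texture_cache, fids_rows, placements):
    # step 106: query and load the geometry/texture files the scene needs
    for name in scene_assets:
        texture_cache.setdefault(name, f"loaded:{name}")
    # steps 108-116: place geometries at the transmitted positions/orientations
    scene = [(texture_cache[n], placements[n]) for n in scene_assets]
    # step 118: composite the FIDS text data over the animation graphics
    overlay = [f"{row['flight']} {row['status']}" for row in fids_rows]
    # step 122: purge texture memory to make room for the next imaging data
    texture_cache.clear()
    return scene, overlay

cache = {}
scene, overlay = run_cycle(
    ["terminal_bg"],
    cache,
    [{"flight": "AA100", "status": "ON TIME"}],
    {"terminal_bg": ((0, 0, 0), (0, 0, 0))},
)
```

Note that the flight-information overlay is produced independently of the scene placement, mirroring the text's point that flight information may be updated without touching the animation rendering process.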
- FIG. 14 overlays the principal steps of this process on one server/client group 24 of the network.
- an executable file initiates data stream commands to begin the image rendering process. These commands are passed by the UNIX server 12 to each of the clients 40 , 42 , at which an executable file 574 receives the commands or cues and begins to construct viewpoint map images based on them. The images to be displayed are rendered by each of the clients at steps 576 . When these images are completed, each client 40 , 42 sends back a synchronization return signal 578 through server 12 to render server 38 . Render server 38 waits until all such synchronization signals have been collected before initiating the next cycle in the animation.
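The synchronization handshake just described, where the render server waits until every client has returned its synchronization signal before starting the next cycle, is the classic barrier pattern. The sketch below models it with Python's `threading.Barrier`; the client count and work performed are illustrative, not from the patent.

```python
import threading

# Model of the sync-return handshake: the render server proceeds only after
# every client has reported that its images for the cycle are complete.
N_CLIENTS = 2
barrier = threading.Barrier(N_CLIENTS + 1)  # all clients + the render server
completed = []

def client(cid):
    completed.append(cid)  # stand-in for rendering this client's viewpoint
    barrier.wait()         # send the synchronization return signal

threads = [threading.Thread(target=client, args=(c,)) for c in range(N_CLIENTS)]
for t in threads:
    t.start()
barrier.wait()             # render server blocks until all clients report in
for t in threads:
    t.join()
# All clients have finished; the next cycle in the animation can begin.
```

The barrier guarantees that no display gets ahead of the others, which is what keeps the animation coherent across the contiguous panels.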
- FIG. 3 is a flow diagram showing how a user selects viewpoints for each of the virtual cameras he or she wishes to set up in the multiple display system.
- a viewpoint defines a position and an orientation from which all of the geometries associated with the displayed animation imagery are rendered and projected onto one of the displays 52 - 58 .
- Each “world”, as that term is defined herein, has at least one viewpoint associated with it, and more typically multiple viewpoints, and it is from these viewpoints that scenes associated with the respective virtual camera windows are drawn.
- worlds 190 , 191 are defined as subsets of a universe 192 that is created by the user. When a universe is created in the software, a single virtual camera window viewpoint is automatically assigned to it.
- a world in this sense comprises a set of viewpoints limited to a sector of the defined universe, with additional worlds within the same universe either existing adjacent to one another, partially overlapping, or as FIG. 5 illustrates, on opposite sides of the universe.
- Multiple universes may also be established with additional worlds as separate subsets to those designated universes, and these universes may reside on separate rendering servers.
- a predetermined conversion process may be used among worlds (for example, a separate world can be instantiated by each of separate server groups 508 , 510 , 514 ) to transfer geometry and texture positions and orientations among them.
- a scene may be rendered from several different viewpoints, each of which is associated with a particular virtual camera.
- Each virtual camera is associated with a scene graph.
- the same scene graph may be shared between or among several virtual cameras, where their perspective views intersect. If, for example, two different rows of virtual cameras cross each other at some intersection point, then only those two overlapping virtual cameras might end up sharing a particular scene graph since they share the same viewpoint perspective field. Virtual camera windows depicting different scenes would use different scene graphs. In this manner, the viewpoint is determined before the scene is rendered.
- the user (system administrator) writes the position coordinates for the origin of a viewpoint. Once this is done, at step 152 the user determines the orientation parameters (see FIG. 4 ) associated with the viewpoint.
- a corresponding identity matrix for the scene graph is enabled.
- Position and orientation are parameterizations within an X, Y and Z coordinate system which defines the identity matrix.
- this coordinate system 170 is illustrated with the X axis pointing to the right, the Y axis pointing straight down and the Z axis pointing straight ahead (into the paper).
- These coordinate frame axes, at step 156 are highlighted to the user on an administrative display screen such as monitor 64 in FIG. 1 .
- the user then chooses an aspect ratio adjustment, which is a vertical scale factor applied to the screen image. This value is useful in correcting for any monitor or pixel distortions in the display system.
- parallax settings are selected.
- the parallax settings may be used to establish a separation distance between virtual cameras along a linear path that is virtually spaced from the scene being rendered. The shape of this path is arbitrary.
- the path may be curved or straight; FIG. 13 shows examples of straight, curved and closed or endless paths 508 , 510 , 514 along which virtual cameras 509 , 512 , 516 have been distributed.
- a convergence angle may be desired among the virtual cameras on the path, depending on the type of scene selected, and this convergence angle is supplied at step 160 .
- it may be desirable for the viewpoint established in the scene to vary from one display to the next as an observer walks along the displays on a path parallel to them.
- the establishment of a convergence angle provides for a balanced and smooth proportional viewing of a scene and the matching of infinity point perspective from one display to the next.
- the viewpoint of the scene is created and stored in the virtual camera memory and is available at runtime for the rendering and projection of the image.
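The parallax and convergence steps above can be sketched numerically. This is a hedged illustration, not the patent's algorithm: it places a row of virtual cameras at a fixed parallax spacing along a straight path and gives each a yaw angle that converges on a common point in the scene, so that perspective matches from one display to the next. All names and numbers are assumptions.

```python
import math

def place_cameras(n, spacing, target):
    """Return (position, yaw_degrees) for n cameras spaced along the X axis,
    each oriented to converge on the given (x, y, z) target point."""
    tx, ty, tz = target
    cameras = []
    x0 = -spacing * (n - 1) / 2.0  # center the row of cameras around x = 0
    for i in range(n):
        x = x0 + i * spacing
        # Convergence angle: rotate each camera toward the shared target.
        yaw = math.degrees(math.atan2(tx - x, tz))
        cameras.append(((x, 0.0, 0.0), yaw))
    return cameras

# Three cameras, 1.0 unit parallax spacing, converging 10 units into the scene.
cams = place_cameras(3, spacing=1.0, target=(0.0, 0.0, 10.0))
```

Cameras left of center yaw right and cameras right of center yaw left, producing the balanced, proportional viewing across adjacent displays that the text describes.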
- FIG. 4 is a schematic representation of a viewpoint coordinate system and the world coordinate system upon which it is based.
- the world coordinate frame axes are shown at 170 .
- the viewpoint coordinate frame axes are shown at 172 , and as shown will typically be different from the world coordinate frame axes for the second and subsequent virtual camera viewpoints established for that world.
- the viewpoint coordinate frame axes establish the direction of the viewpoint.
- a hither clipping plane 174 outlines the limits of the viewpoint angle of view as it projects outward toward a view plane 176 .
- the size of the view plane 176 can be regulated, and therefore the range of the viewpoint itself. In this fashion, the view position and orientation can be established relative to the global world coordinate frame 170 .
- although the Y axes of the viewpoint frame 172 and the world coordinate frame 170 happen to be parallel as drawn, this need not be the case.
- FIG. 5 illustrates the spatial relationship between two representative world localities 190 and 191 as they are situated graphically within a defined universe 192 .
- the worlds 190 and 191 are subsets of universe 192 , and several such worlds may overlap or exist oppositely within the same universe.
- a virtual camera object always corresponds to a region of the screen in which a particular view of the graphical universe is displayed. With the virtual camera structure of the invention, multiple views can be displayed simultaneously and flexibly to different parts of the screen. For example, a set of virtual camera windows can be assigned to a given world, which is itself confined to a specific region 190 of the universe 192 with viewpoints only defined for that particular world 190 . At the same time, another set of virtual camera windows can be directly associated with another separate region 191 of the same universe 192 , limiting those particular viewpoints to that individual world.
- a central axis 194 serves as the point of origin directed toward each individual world, spread out 360° around the center of that universe 192 .
- Each world may be defined as its own sector of that universe, and may be accessed as such. This attribute becomes necessary and useful in displaying concurrent multiple worlds within the same universe, or even in the multiple display of multiple universes, which can be achieved by using several rendering servers and their corresponding client computers.
- a first rendering server and related group of clients can have loaded onto them the same universe information database as a second rendering server and its related group of clients.
- the displayed outputs of each server can be directed to opposite poles 190 , 191 of the universe 192 . Since the two rendering servers may be joined on a network, positional data relating to imaged objects may be exchanged between them thereby allowing for two separate worlds to coexist within the same networked system. It is also possible to have two separate universes running on two separate rendering servers, also linked within the same system, and visible on adjoining sets of output screens or displays, with data positions transferring between the rendering servers using a predetermined conversion process.
- FIG. 6 is a schematic flow diagram showing the rendering process within each rendering server.
- a rendering server such as server 38 ( FIG. 1 ), within a multiple-channel imaging array 24 , handles all of the user interaction devices open to it.
- the rendering server 38 provides the framework under which the software protocols distribute real time animation commands across multiple channels to its clients 40 - 42 .
- the rendering server 38 uses a communication protocol that provides a unique pathway through the system, which in turn enables the assignment of specific viewpoints of a given scene to respective graphics card video outputs along the array, and provides a method of synchronizing the whole array.
- the rendering server 38 controls the animation simulation to be displayed.
- the clients 40 - 42 are slaves to the server 38 and execute the commands addressed to them.
- Server-side command stubs are provided as a way to map the software animation application programming interface (API) calls to their distributed equivalents.
- the clients' API or stub procedures provide a way to map commands received by the servers over the network to local software API calls.
- Copies of the APIs reside both on the rendering servers 38 , 44 , 48 and their respective clients 40 - 42 , 46 and 50 .
- Both the server and the matching client(s) maintain a copy of the current scene graph, which may be edited remotely through the network, and each scene graph is identical across each server group (e.g., group 24 FIG. 1 ) in the animation simulation.
- a naming scheme or module 200 allows the client and the server to which the client is connected to address remote objects within the scene graph and to specify operations to be performed on them.
- the name module 200 is linked to a pointer to a name map at 202 .
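The naming scheme described above can be sketched as a small registry that maps network-agreed string names to local scene-graph objects, so that an operation addressed remotely can be resolved and applied locally. This is an illustrative sketch only; the class and method names are assumptions, not the patent's API:

```python
class NameMap:
    """Sketch of the naming module 200: maps string names agreed on over
    the network to local scene-graph objects, so that a remote command
    such as ("terrain/tower1", "set_position", ...) can be applied."""

    def __init__(self):
        self._objects = {}

    def register(self, name, obj):
        # Both server and client register the object under the same name.
        self._objects[name] = obj

    def resolve(self, name):
        return self._objects[name]

    def apply(self, name, operation, *args):
        # Perform a named operation on a remotely addressed object.
        return getattr(self.resolve(name), operation)(*args)
```

Because both the server and its clients hold identical scene graphs, the same name resolves to the corresponding object copy on every machine in the group.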
- both the client and the server use calls to the software's network functions to connect to a multicast group.
- the rendering server 38 issues commands to its multicast group 24 .
- the application level protocol uses a net item syntax that is included within the animation software.
- a field referenced as the type field is used to distinguish data items from command items.
- the command items are distinguished from the data items by the most significant four bits of the type field, which are all ones.
- Type values 0xF0 to 0xFF are reserved for command codes.
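The type-field convention above can be expressed directly: a byte is a command item when its four most significant bits are all ones, placing command codes in the range 0xF0 to 0xFF. A minimal sketch (helper names are illustrative):

```python
COMMAND_MASK = 0xF0  # four most significant bits all ones marks a command

def is_command(type_byte):
    """True when the item is a command rather than a data item."""
    return (type_byte & COMMAND_MASK) == COMMAND_MASK

def command_code(type_byte):
    """Extract the command code from a command-type byte."""
    if not is_command(type_byte):
        raise ValueError("not a command item")
    return type_byte & 0x0F  # low nibble selects one of 16 command codes
```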
- the server loads a terrain model and computes the behavior at 204 for the activity taking place within the terrain. It initiates changes to the scene graph at 206 by making software calls to the client stub procedures. It may also make use of the naming module 200 to name objects in the scene graph.
- the rendering server 38 may also use a command encoding/decoding module 208 to process items addressed to it by respective clients, or by commands delivered to it from outside the network to re-edit or recompile an updated set of scene graph features at 206 .
- the server 38 initializes and controls the scene at 210 .
- Rendering server 38 is responsible for initializing the animation simulation at 204 and also manages swap synchronization at 212 of all client computers linked with it.
- the main role of the associated clients 40 - 42 (and similar logic within server 38 itself) is to render the scene from the respective viewpoints of the virtual camera objects that have been created in them, which have been adjusted for their respective viewing pyramids (see FIG. 4 ) and their respective orientations with respect to a perpendicular plane.
- the clients read their configurations from a text file referred to as an “*.ini” file. Following this, each client regularly decodes packets of data sent over the network and executes software calls locally on its copy of the scene graph.
- This map 214 is set up statically, and all clients 40 - 42 rendering under the designated server 38 must have a copy of it before the simulation can begin.
- the clients use their copies of the naming module 200 to resolve client references at 202 to objects in the overall scene graph.
- FIGS. 7A and 7B illustrate how text information may be overlaid on the image displays.
- FIDS (flight information display system) data, which is Oracle-based and exists within a UNIX platform environment, may be obtained through an Ethernet connection outside of the rendering server and client network and then integrated into the animation process.
- the flight information derived from the FIDS database is available in airports throughout the United States and in other countries throughout the world and provides arrival and departure information for passengers traveling by air. Displays carrying the FIDS information are situated in flight desk areas and gate areas for specific airlines.
- a listening thread 220 is initiated that queries the incoming FIDS data received by the system.
- the system results are then transferred to a set of client threads 222 , 224 , 226 (a representative three of which are shown) which analyze the information and begin the operation of parsing the data and organizing it into data groups to be routed to the appropriate scenes within the video wall established by the imaging system.
- a fire and forget protocol 228 is generated, completing the sectioning of the data, and then detaching and resetting itself for further queries.
- When the listening thread 220 detects a parcel of flight data in response to a preloaded data query, it delivers a sequential set of commands to a desk monitor thread 230 , a flight monitor thread 234 and a command listen thread 238 . Threads 230 , 234 and 238 each activate in response to receiving these commands and route appropriate information to either a desk or a gate.
- the desk monitor thread 230 selects which desks are to receive which sets of flight arrival and departure information; different ones of these information sets pertain to particular desks.
- a desk thread 232 is updated ( 233 ) by the system.
- Flight monitor thread 234 completes a process of determining a flight thread 236 .
- the command listen thread 238 acknowledges the arrival of all of the data, which is now fully parsed.
- the command listen thread 238 issues commands as to how the text is to be allocated within the video array as well as into the independent gates within the terminal, switching a set of command threads 240 , 242 , 244 (a representative three of which are shown) to complete this stage of the process.
- Command threads 240 - 244 are “fire and forget” operations, which engage and then detach, logging a respective update thread 246 , 248 or 250 as they finish.
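The parsing and routing stage described above, in which incoming flight records are organized into groups for particular desks and gates, can be sketched as follows. The record field names (`"desk"`, `"gate"`, `"flight"`) are assumptions for illustration; the patent does not specify the FIDS record layout:

```python
from collections import defaultdict

def route_flight_records(records):
    """Split incoming FIDS records into per-desk and per-gate groups,
    as the desk monitor and flight monitor threads do before routing
    information to the appropriate scenes in the video wall."""
    desks = defaultdict(list)
    gates = defaultdict(list)
    for rec in records:
        if rec.get("desk"):
            desks[rec["desk"]].append(rec)
        if rec.get("gate"):
            gates[rec["gate"]].append(rec)
    return desks, gates
```

In the real system each group would then be handed to a "fire and forget" command thread that delivers it and detaches.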
- FIG. 7A illustrates operations taking place on the UNIX server 12 side of the system.
- FIG. 7B illustrates operations taking place on the client side within any of the imaging computers 38 - 50 ; rendering servers 38 , 44 , 48 are also "clients" for the purpose of FIDS data distribution and imaging functions.
- In FIG. 7B , a new listen thread 252 is engaged responsive to a command addressed particularly to that client by main server 12 , and prepares itself to receive the text portion of the FIDS data, including flights 256 for both desks 258 and gates 260 .
- a status thread 254 checks and logs the completion of the operation, and resets itself for the next series of queried FIDS data.
- the frequency of the querying is adjustable by the user of the system. If flight data are not present by a certain preset time, the controlled screen does not display the new flight data until the occurrence of both a new timing period and the arrival of new flight data.
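The refresh rule just described — a screen updates only when both a new timing period has begun and new flight data has actually arrived — can be sketched as a small gate object. The class and method names are illustrative assumptions:

```python
class FlightDisplayGate:
    """Sketch of the update rule: the controlled screen shows new flight
    data only when a new timing period has elapsed AND new data has
    arrived since the last refresh."""

    def __init__(self, period_s):
        self.period_s = period_s            # user-adjustable query period
        self.last_refresh = float("-inf")
        self.pending = None                 # newest unshown data, if any

    def receive(self, data):
        # New flight data arrived from the FIDS query.
        self.pending = data

    def poll(self, now):
        """Return data to display, or None if the gate stays closed."""
        if self.pending is not None and now - self.last_refresh >= self.period_s:
            self.last_refresh = now
            data, self.pending = self.pending, None
            return data
        return None
```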
- the user may monitor the system remotely through a telnet connection to the UNIX server 12 or through software loaded onto the server 12 that reveals the complete graphics of each of the video wall screens and gate display screens.
- the illustrated embodiment is one form of overlaying text associated with animations displayed along large video walls with other adjacent screens that are located at gates within an airport environment.
- the present invention is also useful in situations where rapidly changing or time-variant text is closely integrated with large video walls having a multiplicity of screens where detailed animations, simulations and video overlays stretch along the full length of the video wall, and where such animations are to be monitored and modified remotely by the users via the Internet.
- the present invention has applications which include public municipal stations, malls, stadiums, museums, and scientific research laboratories and universities.
- FIG. 8 illustrates a main motherboard assembly 300 that, in a preferred embodiment, exists in all of the imaging computers 38 - 50 .
- Each of these motherboards 300 may be identical for all computers operating in the network, or they may be of a different type or manufacturer, so long as the same motherboards are used within the same render server/client groups 24 , 26 or 28 . This feature allows for a differentiation of functions of different motherboards to be spread out across multiple rendering computers used throughout the system.
- Each motherboard 300 must be equipped with a BIOS 302 which acknowledges the presence of multiple graphics cards 304 - 318 plugged into their specific slots.
- these include both 32-bit and 64-bit PCI slots 304 - 316 , numbering up to seven slots per motherboard, and one AGP high speed slot 318 .
- the BIOS 302 built onto the motherboard must be able to assign different memory addresses to each of the cards 304 - 318 , enabling separate video driver information to be sent to this specific card through the PCI or AGP bus (not shown), in turn allowing for video output information data to be allocated to that card.
- the imaging system can detect each card and direct each respective virtual camera windowing aperture frame to the VGA output of that card.
- an AGP card 318 with one VGA output port can share the same motherboard with at least three PCI cards 304 - 308 of the same type, providing a total of four video output channels on that motherboard 300 .
- Each video output then occupies the same resolution value and color depth for that computer, which can be modified independently on each video channel.
- Using dual or even quad CPU processors 320 , 322 (a representative two of which are shown) on motherboard 300 maximizes the graphical computational speed delivered through the AGP and PCI buses to the graphics cards to enhance the speed of the rendering animation.
- each motherboard 300 contains sufficient RAM 326 to transfer the graphical data, interacting with the cards' own video drivers and the available texture RAM 327 on each of the video cards 304 - 318 .
- the addition of two or even four video output ports on the AGP cards 318 will increase the data throughput to an even greater level, due to the existence of more on-board AGP graphics card pipelines provided by the manufacturers, passing data more quickly through the faster AGP bus to the rest of the motherboard 300 .
- This configuration can also use multiport AGP cards 318 with multiport PCI cards 304 - 316 on the same motherboard to increase the number of channels per computer, provided that BIOS 302 can recognize each of the video addresses for each of the video ports.
- the software created for this imaging system array assists in this process.
- FIG. 9A is a more detailed view of each of the rendering server and client architectures.
- Each of the motherboards in these computers contains CPUs 320 and 322 , main system RAM 326 , and PCI and AGP bus controller interface circuits 328 , 330 , 332 , 334 and their associated buses 333 , 335 (the buses for the first two PCI interface circuits 328 , 330 not being shown).
- IDE and SCSI controller interface circuits 336 , 325 are provided for “legacy” devices.
- Central, main chipset components 338 - 344 regulate the speed and bandwidth of data transferred between all devices connected to the motherboard, and provide the main conduit pathways for communication between these devices.
- the north bridge 338 serves as a main conduit for signals passing between devices in the central processing portion of the motherboard, including the CPUs 320 , 322 , RAM 326 and cache memory devices (not shown).
- the north bridge also connects to the AGP bus controller 334 , the memory address data path device 344 , which provides an optimized interleaving memory function for the system RAM 326 , and the I/O bridge intermediate chip 340 .
- the AGP port controller 334 is therefore permitted direct computational contact with the CPUs 320 , 322 and the RAM 326 at a preset, high front-side bus speed (such as 400 MHz) set by the system BIOS 302 , which is also connected to the north bridge 338 , thereby giving it at least four times the speed of the other, PCI buses used to interconnect the PCI graphics cards 304 , 306 , 308 .
- a primary PCI bus controller 332 is joined directly to the I/O bridge 340 and serves as the maximum throughput device for the PCI cards 304 , 306 , 308 connected to the motherboard, in the illustrated embodiment operating at 66 MHz.
- the other PCI controller interfaces 328 , 330 are attached at a juncture 356 between I/O bridge 340 and south bridge 342 , and in the illustrated embodiment run at secondary, lower speeds of 33 MHz. It is preferred that the PCI graphics cards 304 , 306 , 308 or their equivalents communicate at bus speeds of at least 66 MHz to the rest of the system.
- South bridge 342 joins all “legacy” devices such as SCSI controllers (one shown at 325 ), IDE controllers (one shown at 336 ), onboard networks and USB ports (not shown). It also connects to network port 358 , from which is transferred positional coordinates of an animation's formatted graphics. South bridge 342 is meant to attach to lower-speed, data storage devices including the disk array 324 from which source data for the system is derived.
- the architecture shown in FIG. 9A has been demonstrated to be superior in motherboard performance in terms of data transfer speeds and bandwidth capability for multiple graphics card inter-communication on the motherboard and is preferred.
- Each of the graphics cards 304 - 318 has a respective graphics card CPU or processor 362 , 364 , 366 or 368 .
- the "processor" or processing function of the invention is therefore, in the illustrated embodiment, made up of CPUs 320 , 322 , and 362 - 368 .
- the graphics processors 362 - 368 complete the image rendering processes started by general-purpose processors 320 and 322 .
- General-purpose processors 320 and 322 also handle all of the nonrendering tasks required by the invention.
- FIG. 9B shows how the operation of the motherboard results in total output resolution.
- Each successive graphics card present on its respective bus communicates to the BIOS its slot numbered position at step 350 , thereby directing the BIOS 302 on how to address the video driver to handle multiple output video channels, selecting a numerical value as to the number of channels available.
- the user may manually select the final resolution of each video output on each video card, which at 354 sets the overall resolution of the entire video animation image emanating from that particular computer box.
- the total resolution of the video wall made up of these contiguous video channels arranged and positioned precisely together is a summation of each of the resolutions set by each channel on each graphics card, including all multi port channels wherever they might be available on their respective cards.
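The summation described above is direct: for a single horizontal row of edge-matched channels, the wall's width is the sum of the channel widths and its height is the common channel height. A minimal sketch (function name illustrative), assuming one row of channels that share a height as the per-computer uniformity described earlier requires:

```python
def wall_resolution(channels):
    """Total resolution of a single-row video wall whose channels are
    placed edge to edge; channels is a list of (width, height) pairs."""
    widths = [w for w, h in channels]
    heights = {h for w, h in channels}
    if len(heights) != 1:
        raise ValueError("channels in one row must share a height")
    return sum(widths), heights.pop()
```

For example, four contiguous channels each set to 1024 x 768 yield a 4096 x 768 wall.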
- FIG. 10 shows an alternative system 400 in which a group of rendering servers 402 , 404 , 406 may be joined with their corresponding rendering client computers 408 - 414 , 416 - 422 and 424 - 430 through a series of independent hubs 432 , 434 , 436 , which link the clients with their respective servers.
- the hubs 432 , 434 , and 436 are themselves joined to a central UNIX-based server 438 .
- FIG. 10 illustrates the modular nature of the system and how additional server rendering groups may be added onto the full system 400 , increasing the number of total channels in a video wall animation.
- the preferably UNIX-based main server 438 joining the hubs linked to the groups of rendering servers is the entry point for the introduction of the FIDS text data to be overlaid on the various animation screens of the multi-channel imaging system.
- a total of eight virtual camera windows may be provided for each of the rendering servers 402 , 404 , 406 and there is no upper limit to the number of rendering servers which can be brought into the system.
- the number of client computers 408 - 414 in each server group may number as high as eight, matching the number of separate virtual camera windows permitted within each server, or may exceed that number where repetition is required to establish additional operations on separate client computers that distinguish them from the first group.
- Each rendering server 402 - 406 may be identified with one particular world, or it may function to elaborate upon that same world with an additional set of virtual camera windows set up on another rendering server with its own set of new clients.
- the hardware used with each client and its respective server must be the same for purposes of symmetry in computing of the final video image, but different sets of hardware, including graphics cards, drivers and motherboards, may be used in each separate rendering server group.
- each rendering server 402 - 406 provides a consecutive set of video channels that match precisely in a graphical sense as one views the video array from left to right, with the last image of the first group matching its right side with the left side of the first image from the second rendering server group, and so on.
- the real-time animation rendering is regulated by the processing speed of each client computer box, the server computer boxes, and the network that joins them.
- FIG. 11A shows an example of how a contiguous set of virtual camera viewpoints may look when projected onto a large video wall.
- Each of the video channels is numbered sequentially from left to right as channels 1 , 2 , 3 and 4 .
- the right edge of image frame 1 maps precisely onto the left edge of image frame 2 at a boundary 450 , and so on along the expanse of the video wall, with no upper limit as to the number of channels which may be added.
- the timing of the animation sequences within the scene graph is regulated such that objects that move out of one frame and into the adjacent frame left or right do so continuously without breaks or pauses.
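The exact edge matching at boundary 450 follows from dividing one overall horizontal field into contiguous per-channel windows, so that each frame's right edge coincides with the next frame's left edge. A sketch under that assumption (names illustrative):

```python
def channel_windows(n_channels, wall_left, wall_right):
    """Divide the wall's overall horizontal extent into contiguous
    per-channel windows; the right edge of each frame coincides
    exactly with the left edge of the next."""
    width = (wall_right - wall_left) / n_channels
    return [(wall_left + i * width, wall_left + (i + 1) * width)
            for i in range(n_channels)]
```

An object animated across the wall then leaves one channel's window at precisely the coordinate where it enters the next, so motion across frames is continuous.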
- Each rendering server and its adjoining client computer units make up contiguous portions of the video wall, which may be oriented either horizontally or vertically, numbering from bottom to top for vertical video walls.
- a video wall constructed according to the system may have other shapes and directions, including cylindrical, domed, spherical, parabolic, rear or front screen projected configurations, and may include additional tiers of horizontal rows of video screens.
- graphical overlays or superimpositions of other rows of real-time animation are possible, since more than one viewpoint may be assigned to the same video output channel, with one of the virtual camera window settings having a smaller aperture than the other and those sets of smaller apertures extending across the video walls in a contiguous fashion.
- the source of this second superimposed viewpoint series may come from another region of the same world, or a separate world altogether.
- FIG. 12 shows how separate video drivers may be used simultaneously in the multi-channel imaging system, connecting with the same UNIX server 470 that links the data flow from the separate hubs 472 , 474 that join the respective rendering servers 476 , 478 and their respective client computers 480 , 482 and 484 , 486 .
- the graphics cards and their associated video drivers 488 , 490 must be confined to their own groups of rendering servers and clients.
- Using multiple graphics card types within the same system has the advantage of combining one card's special features, such as processing speed and resolution, with those of another graphics card.
- Some graphics cards will have tremendously greater processing speed, anti-aliasing features, and greater texture memory, which are useful for certain types of video animation. The user can allocate these cards to worlds that are intricate in nature, requiring greater computational speed to display the animations. Other cards which are not quite as fast in terms of processing may be then designated for simpler animations, directed towards the other sets of screens in the video array installation.
- All video drivers introduced into the system may be used to access worlds, but some worlds may be created to suit one video card's manner of displaying imagery through its own specific video driver.
- newer graphics cards recently introduced to the market may be loaded and tested against the existing video cards present on the system without having to rewrite software code for the entire system.
- a new set of differentiated tests may be implemented into the video array while the system remains continually online.
- FIG. 13 shows a system having multiple camera base configurations running concurrently within the same network 500 .
- Each base configuration uses a separate rendering server 502 , 504 or 506 , with associated client groups acting upon worlds whose geometry and texture data are accessed within that same network.
- a first camera base or array of virtual cameras 508 is "horizontal" in that its virtual cameras are equispaced along a virtual straight line and have viewpoint axes which are parallel to each other.
- a second camera base 510 takes the shape of an arc; its virtual cameras 512 have viewpoint axes which are not parallel but which rather converge.
- a third camera base 514 forms an endless loop with the viewpoint axes of its virtual cameras 516 outwardly directed.
- in each camera base instance the same worlds may be used, or separate worlds may be newly introduced.
- the parallax value in each base configuration 508 , 510 , 514 is chosen by the user, as well as the three-dimensional coordinate system parameters that describe the particular virtual camera base orientation responsible for capturing the viewpoints within a particular world.
- the “horizontal”, linear based configuration 508 has a parallax value set as a virtual distance between each of the virtual cameras 509 .
- an arcing base 510 anchors convergent viewpoints whose coordinates the user may select in the software's parameters.
- Such curved camera bases are able to work with the convergence used in certain animations which encourage the viewer to focus more on activity and objects that exist in the foreground as opposed to the more distant background features, depending on the angles between the curving set of viewpoints.
- a linear horizontal base may not provide needed convergence but a curved virtual camera base will.
- the arcuate path 510 can be used, for example, in a set of displays arranged along a wall to simulate a set of windows in the wall to the outside. As the viewer moves along the wall, the viewpoint changes such that what the viewer is seeing mimics what he or she would see if those displays really were windows.
- the circular virtual camera base 514 covers a full 360° sweep of an animated world. This camera base lends itself to more three dimensional applications of animation viewing, requiring the system to allocate geometries and textures around the entire perimeter of a world.
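The three base configurations can be sketched as position/axis generators in a 2-D top view: equispaced parallel viewpoints for the linear base, convergent axes for the arc, and outward-directed axes for the endless loop. A sketch under those assumptions (function names and the 2-D simplification are illustrative):

```python
import math

def linear_base(n, spacing):
    """'Horizontal' base 508: cameras equispaced on a straight line,
    all sharing the same (parallel) viewpoint axis."""
    return [((i * spacing, 0.0), (0.0, 1.0)) for i in range(n)]

def arc_base(n, radius, span_deg):
    """Arcing base 510: cameras on an arc, axes converging on its centre."""
    cams = []
    for i in range(n):
        a = math.radians(-span_deg / 2 + i * span_deg / (n - 1))
        pos = (radius * math.sin(a), -radius * math.cos(a))
        aim = (-math.sin(a), math.cos(a))   # unit axis toward the arc centre
        cams.append((pos, aim))
    return cams

def circular_base(n, radius):
    """Endless base 514: cameras on a full 360° loop, axes pointing outward."""
    cams = []
    for i in range(n):
        a = 2 * math.pi * i / n
        pos = (radius * math.cos(a), radius * math.sin(a))
        aim = (math.cos(a), math.sin(a))    # outwardly directed viewpoint axis
        cams.append((pos, aim))
    return cams
```

The spacing parameter of the linear base corresponds to the user-chosen parallax value; the arc's span controls how strongly the viewpoints converge on foreground material.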
- An endless base 514 can be used to show portions of multiple worlds in larger detail.
- Arcing virtual camera bases like base 510 can be used in video projection for “caves” and other rounded enclosures, where the projected imagery surrounds the viewer or viewers in a theater type arrangement.
- the three dimensional coordinate system that defines the viewpoints set by the user of this system determines the degree of arc of the projected imagery against a curved or sloping screen surface.
- the nonlinear aspects of projecting against any curved surface may be programmed into the system to compensate for the curvature of the projection screen, even if that curved surface is discontinuous.
- the final image will be viewed as a transposition of a flat rectilinear scene onto a curved surface screen, without distortions or with reduced distortions, in either a rear projected or a front projected format. Consequently, the contiguous set of images along an arc may also be joined together seamlessly, in the same fashion as a set of contiguous flat images that are precisely matched along each other on a flat display screen.
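One simple form of the curvature compensation described above, for a cylindrical screen, maps each flat-image horizontal coordinate to the arc position at which the same ray from the projector strikes the curved surface. This is a simplified sketch under stated assumptions (flat image plane at unit distance, screen treated as an arc centred on the projector); the patent's actual correction for arbitrary or discontinuous curves is not specified:

```python
import math

def prewarp_x(x_flat, half_fov_rad):
    """Map a normalized flat-image coordinate x in [-1, 1] to the
    normalized arc position on a cylindrical screen where that ray
    lands, so the projected result appears undistorted."""
    angle = math.atan(x_flat * math.tan(half_fov_rad))  # ray angle to the point
    return angle / half_fov_rad                         # arc fraction in [-1, 1]
```

Sampling the source image at the pre-warped coordinate for every output column yields a flat rectilinear scene that reads correctly on the curved surface; contiguous channels along the arc then join in the same fashion as flat ones.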
- the viewpoints of contiguous displays could differ one from the next in elevation, such that, as a passer-by viewed these displays, he or she would perceive the same scene from an ever-higher viewpoint.
- the viewpoints of the displays could be selected such that the perceived change in viewpoint matched, or was a function of, the viewer's real change in elevation.
- nor does the change in viewpoint from one virtual camera to the next have to be at a constant spacing; a set of viewpoints could be chosen such that the change in viewpoint from one virtual camera to the next could be accelerated or decelerated.
- the software controls enable the user to set the shapes of the viewpoint windows themselves, thereby creating apertures that are rectangular, triangular, or keystoned, depending on the nature of the projection screen's shapes.
- heretofore, the projection apparatus had to be fitted with special lenses and apertures on the projectors to create an undistorted, balanced image on a curved screen.
- the networked set of rendering server and client computers all share the same programmed curvilinear settings for projecting each image on an elongated curved screen, and are not limited in terms of the number of channels used in the full system. This feature provides the capability of increasing the resolution of the final projected image along the inside of the cave enclosure by increasing the number of channels per horizontal degree of view.
- the system further provides for the introduction of rows or tiers of curved images, vertically, which can be especially useful in the projection of images within large domes or spheres, or where imagery is seen both above and below the vantage point of the viewers.
- the use of superimposed projected imagery as illustrated in FIG. 11B may also be used in a curved screen surface environment. If different shapes of curved projected material are to be used simultaneously, the multi-channel networked imaging system can assist to allocate one set of images for one shape of screen, and another for another shape.
- the modularity of the system as shown in FIG. 13 permits its adaptation to multiple cave or domed theater enclosures employing multiple sizes and shapes, with the same or different sets of subject matter to be projected.
- Multiple rendering servers may be employed simultaneously, each with separate sets of viewpoint windows tailored precisely for a certain enclosed screen's configuration, programmed for those rendering servers and their connected client computer boxes. This permits a uniquely differentiated set of worlds to be shown for different cave enclosures, or for portions of cave enclosures at the same time, within the data set of a single universe or even linked for multiple universes that are joined together by the same UNIX server network.
- both front and rear projection may be chosen for an installation involving different cave enclosures, altering the manner in which images appear on the enclosed viewing screen.
- a group of rendering servers and their client computers would be assigned for rear projection, and another separate group would be assigned to front projection imagery, each specifically addressing the nonlinearity corrections necessary for projecting onto curved surfaces.
- a single cave enclosure may provide both front and rear screen viewing zones simultaneously within the same chamber, as in the case of a sphere or dome inside a large spheroidal or domed theater enclosure.
- the outer spheroidal sets of screens may use front projection, joined with one group of rendering servers and their rendering clients, and an inner sphere or domed structure would make use of rear projection for another associated group of rendering servers and their own rendering clients.
- separate sets of differing graphics cards and their corresponding video drivers 488 , 490 and functions may be applied and installed with separate groups 518 , 520 of rendering servers and their designated client computers, where the application requires preferred types of graphical computation in each.
- the UNIX server 470 that joins the network of all rendering servers provides a high speed computational link that addresses the positions of the varying textures and geometries made visible in and around the enclosures.
- FIG. 15 illustrates two particular applications of the invention's multidisplay architecture: an autostereoscopic projection array and a flat panel display interface.
- the present invention has the ability to compile and project multiple perspective image viewpoints of a given scene simultaneously, which may be interfaced directly with various classes of newly developed autostereoscopic display devices such as flat panel 600 and rear projection screens 604 .
- Such display devices free the viewer from the need of wearing shuttered or polarized glasses to view 3D stereoscopic images, greatly enhancing the wide angle viewing capabilities of autostereo images, and improving clarity and brightness of the final image set.
- Screen device 604 is a rear projection system that includes two large rectangular lenticular lenses 605 , 607 positioned one behind the other on a central axis 632 , with their vertical lenticules identical in spacing (for example, 50 lines per inch).
- a front view detail of each of these lenticular lenses 605 , 607 is shown at 609 .
- the lenticules are arranged to be parallel to each other and are separated laterally by a fractional amount of a single lenticule. This lateral offset is determined by the focal length of the lenses, which should also be identical, and the spacing between the two lenses, which the user may adjust to shift the convergence point of the incident projectors placed behind the viewing screen assembly 604 . Clear spacing plates such as acrylic plates 611 , 613 may be used between the lenses to keep their separation fixed.
- the designer may also insert an additional circular lenticular lens 615 (a front view detail being shown at 617 ) between the two outer vertical lenticular lenses to change the size of the viewing cone or angle of viewing for 3D images to be viewed by audiences in front of the screen assembly.
- the video projectors 612 - 626 should have identical focal length lenses, resolution and aperture size, and should be anchored along a single stationary arc having an axis 632 which is orthogonal to the screen 604 . With very large screens, the degree of arcing is slight. If the size of the rear screen assembly 604 is small, the arcing is more pronounced. While eight projectors 612 - 626 are shown, any number of projectors greater than or equal to two can be used.
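The arc placement described above can be sketched numerically. The following is an illustrative calculation, not taken from the patent: it assumes the projectors sit on a circular arc whose center is the convergence point behind the screen, spaced symmetrically about the central axis. The function name and parameters are hypothetical.

```python
import math

def projector_positions(n, radius, max_half_angle_deg):
    """Place n projectors on an arc centered at the convergence point.
    Angles are measured from the central axis; the arc is symmetric
    about that axis. Returns (x, z) pairs: x across the screen,
    z along the central axis toward the screen."""
    if n < 2:
        raise ValueError("the system calls for two or more projectors")
    angles = [
        math.radians(-max_half_angle_deg + i * (2 * max_half_angle_deg) / (n - 1))
        for i in range(n)
    ]
    return [(radius * math.sin(a), radius * math.cos(a)) for a in angles]

# eight projectors, 3 m from the convergence point, within a 20-degree fan
positions = projector_positions(8, radius=3.0, max_half_angle_deg=10.0)
```

Because every projector sits at the same distance from the convergence point, all aim vectors converge there; note also that a radius that is large relative to the screen makes the arc nearly flat, matching the observation above that the arcing is slight for very large screens.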
- Screen device 604 receives the array of light beams directed towards the back of the screen, and after that array travels through several layers of lenticular lensing material sandwiched inside the screen, re-projects the projector light rays from the front of the screen with a summation of each of the projectors' rays across a widened viewing aperture.
- the point of convergence 636 of all of the projectors' beams is located at the intersection of a central axis 632 , itself perpendicular to the plane of screen 604 , and a rear surface 634 of the rear lenticular lens 605 .
- the rectangular pattern created on the back of the rear lenticular screen by video projectors 612 - 626 should be identical in size and shape, and any keystone corrections should be done electronically either within each video projector 612 - 626 or by software operating within the graphics cards in the imaging computer 608 or 610 driving the projectors.
- increasing the number of projectors 612 - 626 increases the number of views visible to viewers in front of the screen 604 .
- the distance between the projectors 612 - 626 and convergence point 636 is determined by the size of the rectangular image they create on the rear lenticular lens 605 of screen 604 , with the objective of completely filling the viewing aperture of the rear lenticular lens 605 .
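The fill condition above can be expressed with the projector's throw ratio. This is a hedged sketch — the patent gives no formula; `throw_ratio` (throw distance divided by image width) is an assumed property of the projector's lens.

```python
def throw_distance(screen_width, throw_ratio):
    """Distance from a projector to the rear lenticular lens at which
    the projected rectangle just fills a lens aperture of the given
    width. Both inputs are illustrative assumptions."""
    return screen_width * throw_ratio

# a 2 m wide rear lens with a 1.5:1 throw-ratio projector
d = throw_distance(2.0, 1.5)  # -> 3.0 m
```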
- the lenticular lenses themselves will be able to support a number of lines per inch greater than 50 and as high as 150, thereby increasing the total number of views perceived on the front of the screen for 3D viewing.
- the typical light path for a rear projector beam first passes through the rear lenticular lens 605 at a given incident angle with respect to surface 634 .
- the rear lenticular lens 605 then refracts this incident beam at an angle determined by the focal length of the lenticular lens 605 and the angle of the incident beam, as well as the distance of the projector from convergence point 636 .
- the first, rear lenticular lens 605 establishes an initial number of viewing zones and directs these rays through the middle, circular lenticular lens 615 , which widens the viewing zones set by the first, rear lenticular lens 605 .
- the amount of widening is set by the focal length of this middle lens.
- the number of contiguous perspective viewing zones is multiplied.
- the amount of this multiplication is determined by the number of lines per inch of the lenticular lens, the number of projectors arrayed behind the rear lenticular lens, the amount of right or left offset distance of the front lenticular lens relative to the rear lenticular lens, and the separation distance between the planes of the front and rear lenticular lenses.
- in the illustrated embodiment, this multiplication factor is three.
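Under the relationship just described, the total number of perspective zones seen in front of the screen can be sketched as the projector count times the front-lens multiplication factor. In practice that factor itself depends on the lens pitch, lateral offset, and plane separation listed above; here it is treated as a given input, and the function name is illustrative.

```python
def total_viewing_zones(num_projectors, multiplication_factor):
    """The rear lens establishes one zone per projector; the front
    lens multiplies that count (a factor of three in the embodiment
    described above)."""
    return num_projectors * multiplication_factor

# eight projectors through a 3x front lens
views = total_viewing_zones(8, 3)  # -> 24 contiguous viewing zones
```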
- the lenticular lenses are held firmly into flat positions by glass plates or by acrylic plates 611 , 613 mounted in frames, depending on the thickness of the lenticular lenses being used.
- the projector array 612 - 626 in conjunction with screen 604 possesses the ability to repeat the total number of views delivered to the back of the screen several times in order to provide an even wider 3D convergent viewing zone for large audiences to collectively view such autostereoscopic images in a large theatre environment, or along a video wall.
- Several screens may be optically joined together to provide an immersive 3D enclosure, consisting of the screens' individual systems of lenticules, or the screen may be curved or shaped to arc around the audience's viewing perspectives.
- the real-time rendering facilities inherent in the distributed image processing of the invention permit the rapid movement associated with large-scale, high-resolution motion 3D viewing.
- through a video multiplexer 628, autostereoscopic flat panel devices such as device 600 may be joined to the system, for smaller 3D viewing applications that do not require stereo glasses or head-tracking devices.
- a lenticular printer 630 may be added to the system to view, edit, and print lenticular photos and 3D animations created within the multi-channel imaging system. This is a particularly useful aspect of the system in that it lets the creator of 3D lenticular work view artwork changes instantaneously on a 3D screen while constructing a lenticular image, instead of having to reprint an image array many times on an inkjet or laser printer to achieve the intended 3D effect.
- the way in which autostereoscopic images may be delivered or constructed within the system of the invention is based on the parameters set up to control the perspective fields of the various images to be assembled.
- specialized software within the system selects these values for a given 3D world, which may be computer generated or transferred from an external source of 3D data, such as digital camera sources or scanned film photography.
- Such controls may regulate viewing distance from a centralized scene, viewing angles, parallax adjustments between virtual cameras, the number of virtual cameras used, perspective convergence points, and the placement of objects or background material compositionally for the scene.
- recorded source data that possess only a small number of views, or even just two, may be expanded through a mathematical algorithm used within the system to generate additional views between or among the original set of views.
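The patent does not specify the expansion algorithm, so the sketch below stands in with the simplest possible placeholder: a linear blend between two aligned views. A production implementation would instead warp pixels along estimated disparities; the function name and list-of-rows image format are assumptions.

```python
def intermediate_views(left, right, count):
    """Generate `count` in-between frames from two aligned source
    views (each a list of rows of pixel intensities). A linear blend
    stands in for a real disparity-based interpolation."""
    views = []
    for i in range(1, count + 1):
        t = i / (count + 1)  # fractional position between the views
        views.append([
            [(1 - t) * l + t * r for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)
        ])
    return views

# expand a stereo pair (one row, two pixels each) into three extra views
mids = intermediate_views([[0.0, 0.0]], [[1.0, 1.0]], 3)
```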
- the results of this 3D reconstruction of an actual scene may be composited with other autostereo images in much the same way as portions of a 3D world may be joined together.
- software interleaving functions established within the multi-channel imaging system, in combination with a video multiplexer, may be used to optically join multiple perspective views. A minimum of four channels is supported, with the upper limit regulated by the line pitch of the lenticular lens positioned on the 3D panel, as well as by the total screen resolution of the flat panel 600.
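Column interleaving of the kind such software performs can be sketched as follows. This is an assumed minimal layout — one output column per view per lenticule — not the patent's actual interleaving function, and it uses two views for brevity where the text specifies a minimum of four channels.

```python
def interleave_views(views):
    """Column-interleave n perspective views for a lenticular panel:
    output column c is drawn from view (c % n), source column c // n.
    Each view is a list of rows; all views must share one size."""
    n = len(views)
    rows, cols = len(views[0]), len(views[0][0])
    return [
        [views[c % n][r][c // n] for c in range(cols * n)]
        for r in range(rows)
    ]

panel = interleave_views([
    [["L0", "L1"]],   # view 0: one row of two pixels
    [["R0", "R1"]],   # view 1
])
# panel -> [["L0", "R0", "L1", "R1"]]
```

The panel's horizontal resolution divided by the number of interleaved views gives the per-view resolution, which is why total screen resolution caps the channel count.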
- a real-time, animated, multiple screen display system has been shown and described, in which a plurality of virtual cameras is set up, each having its own viewpoint.
- the present invention permits animated objects to displace themselves across multiple displays, allows changing text data to be superimposed on these images, and permits contiguous multiple-screen displays that are other than flat in shape and capable of displaying scenes from different viewpoints.
Abstract
An autostereoscopic display system includes a lenticular lens display screen that projects a plurality of views of a scene from its front surface. A plurality of video projectors are disposed to the rear of the display screen and focus on a convergence point of the display screen's rear surface. Imaging computers drive the video projectors, each having a memory storing a scene to be displayed on the display screen. Each computer renders the scene from a preselected viewpoint that may be different from the viewpoints of the other imaging computers.
Description
- This application is a continuation of U.S. patent application Ser. No. 10/955,339 filed Sep. 24, 2004, which is in turn a division of U.S. patent application Ser. No. 09/921,090 filed Aug. 2, 2001, the specification of which is fully incorporated by reference herein.
- As display screens have grown in size and fineness of resolution, investigators have experimented with placing several such display screens adjacent to each other and causing three dimensional graphical data to be displayed on them. In 1992, the University of Illinois introduced a multi-user, room-sized immersive environment called the CAVE (for "CAVE automatic virtual environment"). Three dimensional graphics were projected onto the walls and floor of a large cube composed of display screens, each typically measuring eight to ten feet. The cubic environment used stereoscopic projection and spatialized sound to enhance immersion. Computers and display systems by Silicon Graphics, Inc. have created multi-panel displays which process three dimensional graphics, imaging and video data in real time. However, known "CAVEs" and like displays by SGI and others share a single apex point of view, with all panels around the viewers having only perspective views streaming from that apex point. Further, much of the prior work requires shuttered or polarized glasses on the viewer for stereoscopic output. A need therefore continues to exist for multiple-display imaging systems permitting the imaging of three-dimensional scenes from multiple perspectives. Further, the treatment of animation graphics across multiple displays currently requires extremely high end, custom hardware and software and large bandwidth capability. The cost and communication requirements of rendering and displaying animation across multiple displays should be reduced.
- According to one aspect of the invention, a multiple-display video system and method are provided by which a rendering image processor is coupled to a plurality of virtual cameras, which in one embodiment occupy separate nodes on a network. Associated with the rendering image processor is a first memory that defines a world having three dimensional spatial coordinates, a second memory for storing graphical image data for a plurality of objects, and a third memory for storing instructions on the positioning of the objects in the world. For each virtual camera, a viewpoint of the world is defined and stored. The rendering image processor renders a scene of the world according to the viewpoint of the virtual camera. Each virtual camera has at least one display associated with it to display the scene rendered according to the virtual camera's viewpoint. The virtual camera viewpoints may be chosen to be different from each other.
- According to a second aspect of the invention, a rendering node or server has first, second and third memories as above defined, the third memory storing instructions for positioning the objects in the virtual world and animating these objects. A plurality of clients, which are preferably disposed remotely from the server, each have associated memory and processing capability. Each of the clients has one or more display units associated with it, and viewpoints are established for each. Each of the clients stores, prior to a first time, graphical image data for the objects to be displayed. Each of the clients constructs a respective scene based on instructions received from the server at the first time. The previous storage of the graphical image data (such as textural and geometric data) associated with the animated objects dramatically reduces the amount of bandwidth necessary to communicate animation instructions from the server to each of the clients, permitting real-time animation effects across a large number of associated displays.
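The bandwidth saving of this second aspect can be illustrated with a sketch of a per-frame positioning message. The field names and the JSON encoding are assumptions for illustration only; the patent does not specify a wire format.

```python
import json

# Textures and geometries are preloaded on every client before runtime;
# each animation step then needs only a small positioning message like
# this one (illustrative field names, not from the patent).
frame_update = {
    "time": 1,
    "objects": [
        {"id": "plane_01", "pos": [12.0, 0.0, -4.5], "yaw_deg": 90.0},
        {"id": "cloud_07", "pos": [3.2, 8.1, -9.0], "yaw_deg": 0.0},
    ],
}

packet = json.dumps(frame_update).encode()
# A few hundred bytes per frame, versus the megabytes that would be
# needed if textures and geometry were streamed with every update.
```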
- In a third aspect of the invention, these displays may be physically sited to be contiguous with each other so as to create a single large display. Relatedly, contiguous displays can be directed to display the scene or overlapping scenes and the viewpoints of the displays can be varied so that, to an observer passing by the displays, the rendered scene appears to shift as a function of the position of the observer, such as it would if the observer were looking at a real scene through a bank of windows. Other viewpoint shifts are possible to produce, e.g., arcuate or circumferential virtual camera arrays, of either convex or concave varieties.
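The bank-of-windows behavior can be sketched by stepping each contiguous display's virtual camera along a straight path while keeping a shared orientation. Names, units, and the flat-wall layout are illustrative assumptions; curved or circumferential arrays would step the orientation as well.

```python
def window_bank_viewpoints(num_displays, spacing):
    """One viewpoint per contiguous display, stepped along the x axis
    like a bank of windows onto the same scene; the look direction is
    shared. `spacing` is the assumed pitch between display centers."""
    half = (num_displays - 1) / 2.0
    return [
        {"position": ((i - half) * spacing, 0.0, 0.0),
         "look_dir": (0.0, 0.0, 1.0)}
        for i in range(num_displays)
    ]

# four displays on 1.2 m centers, symmetric about the wall's midpoint
vps = window_bank_viewpoints(4, spacing=1.2)
```

As an observer walks past, each panel shows the scene from the camera nearest that panel, so the rendered scene appears to shift with the observer's position.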
- According to a fourth aspect of the invention, a large multiple-screen animated array may be provided at a commercial location and used to display a combination of animations and text data derived from a local database. These data, such as the Flight Information Data System (FIDS) of an airline at an airport, can be used to display such things as airline arrivals and departures on predetermined portions of the displays. The present invention provides apparatus for producing an overlay of the FIDS data on the animated sequences.
- According to a fifth aspect of the invention, the method and system of the invention may be used to illuminate large lenticular arrays to create an autostereoscopic display.
- Further aspects of the invention and their advantages can be discerned in the following detailed description, in which like characters denote like parts and in which:
-
FIG. 1 is a high-level schematic network diagram for a video projection array according to the invention; -
FIG. 2 is a high level schematic block diagram of a virtual camera establishment, animation and imaging process according to the invention; -
FIG. 3 is a viewpoint configuration or virtual camera protocol process flow diagram, and is a detail of FIG. 2 ; -
FIG. 4 is a schematic diagram of parameters establishing a viewpoint for a virtual camera; -
FIG. 5 is a schematic diagram of the “world” and “universe” concepts as used in the invention; -
FIG. 6 is a block diagram showing modules of the image rendering process and system according to the invention; -
FIG. 7A is a schematic block diagram showing the integration of text data into displayed images by a rendering server according to the invention; -
FIG. 7B is a schematic block diagram of a client process corresponding to the rendering server process shown in FIG. 7A ; -
FIG. 8 is a schematic plan view of a graphics card and motherboard architecture according to one embodiment of the invention; -
FIG. 9A is a schematic diagram of a preferred hardware configuration of a rendering server according to the invention; -
FIG. 9B is a block diagram showing calculation of total output resolution; -
FIG. 10 is a high-level schematic diagram of a server/client network according to a second embodiment of the invention; -
FIG. 11A is a block diagram showing placement of multiple channels or stations to constitute a single, extended-length display; -
FIG. 11B is a diagram showing the superposition of text data on the display illustrated in FIG. 11A ; -
FIG. 12 is a high-level schematic diagram of a server/client network according to a third embodiment of the invention; -
FIG. 13 is a multiple-display imaging array according to a fourth embodiment of the invention, illustrating different virtual camera position arrays; -
FIG. 14 is a high-level schematic block diagram showing a portion of a system using the invention, and the execution, data transfer and storage of software and electronic data components thereof, and -
FIG. 15 is a high-level schematic block diagram of an autostereoscopic system employing the invention. -
FIG. 1 illustrates a representative layout of a contiguous video projection array according to the invention, the illustrated embodiment being an airport terminal display system that displays animated graphics and a text data overlay from a flight information data system (FIDS) database. In FIG. 1, the video projection array system, indicated generally at 10, includes a main server 12 which accepts FIDS data or data from any other text source, such as may be presented in Oracle or SQL, through an internal Ethernet port 14 joined by a high speed switching hub 16. The hub 16 makes it possible to source the FIDS data to several isolated imaging arrays, only one such array 10 being shown in FIG. 1. The preferably UNIX-based main server 12 transceives data through a series of separate switching Ethernet hubs 18, 20 and 22.
- Each of the hubs 18-22 has associated with it a respective rendering server. The rendering server 38 controls its clients through hub 18. The rendering server 44 controls a client 46 through hub 20. The rendering server 46 controls a client 48 through hub 22. The rendering servers and their client groups are arranged as shown in FIG. 1.
- Server/client groups drive the video projectors or other display units 52-58, which may include autostereoscopic devices (see FIG. 15 and its accompanying discussion below). The display units 52-58 may be sequenced from left to right or from top to bottom, may provide rear screen or front screen projection imagery, and may be of any size or resolution. When making up a video wall, the projection units 52-58 are preferably equal in resolution to each other and should provide a contiguous composite image.
- The system 10 also includes a plurality of video multiplexers connected to monitors, by which the outputs of several imaging computers may be reviewed on a single monitor.
- The server 12 further provides high speed conduits 69-71 between the hubs and the main server 12. Conduits 69-71 may pass packets of positional data or sequencing information that relay positioning and rendering cues among the rendering servers.
- A further workstation 72, which may be UNIX-based, monitors activity on the entire system through main server 12. Workstation 72 also supports a link 74 to the outside world, through firewall 76. The external connection permits data pertaining to the imaging array to be accessed remotely through the firewall 76, and permits remote network management of the system. For example, artwork shown on the video wall constituted by projection units 52-58 may be transformed or reconstituted by commands issued remotely, and may also be viewed remotely to verify image quality and stability. The *.cfg file, described below and copied to each of the rendering computers, may be amended at workstation 72 in order to effect changes in the animation. Path 74 may be used to load new sets of textures and geometries onto the hard drive storage of server 12, and thence to the rendering servers.
- System 10 is modular in its design, easily permitting the addition of further rendering servers and associated client imaging computers, with no theoretical upward limit to the number of video channels to be included in the total system. -
FIG. 14 is a schematic diagram of a single server/client group 24 of the networked group of computers illustrated in FIG. 1. This diagram shows which of the software elements of the system are installed on which of the imaging computers. The server 38 and each of the clients has stored on it an initialization (*.ini) file and a configuration (*.cfg) file, the latter identical across the rendering servers and clients.
- The *.ini file may contain as many as two hundred separate parameter adjustments, and an even greater number of specifications of parameters pertaining to the animation. The *.ini file on any one imaging computer will differ from the *.ini file on any other imaging computer in its assignment of station ID and node ID. In the illustrated embodiment, each imaging computer controls four stations or virtual cameras. Each imaging computer will also be assigned a unique node number. The *.ini file further contains a bit which tells the system whether the imaging computer in question is a render server or not. The imaging computer uses the station ID contained in the *.ini file to determine which of the several virtual cameras or viewpoints it should use; to minimize network traffic, the parameters for all of the virtual cameras for all of the viewpoints are stored on each imaging computer's hard drive.
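A minimal sketch of how a *.ini file could carry the node ID, render-server bit, and station IDs described above. The file syntax and key names are hypothetical — the patent describes the roles of these values, not a concrete format.

```python
import configparser

# Hypothetical *.ini content; section and key names are illustrative.
INI_TEXT = """
[node]
node_id = 3
is_render_server = no
stations = 9, 10, 11, 12
"""

cfg = configparser.ConfigParser()
cfg.read_string(INI_TEXT)

node_id = cfg.getint("node", "node_id")
is_server = cfg.getboolean("node", "is_render_server")
stations = [int(s) for s in cfg.get("node", "stations").split(",")]

# All viewpoint parameters are stored locally on every machine; the
# station IDs merely select which four this imaging computer renders.
```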
- As loaded and executing on one of the general-purpose processors of the imaging computer, the *.cfg file responds to commands from the *.ini file. The *.cfg file is an artwork developer's tool for configuring specific sequences of preloaded art material to behave in certain ways. It responds directly to the textures and geometries which the art developer has established for the animation sequences, and has a direct association with all textures and geometries stored on all mass storage media in the system. The *.cfg file controls how the animation progresses; it contains calls to portions of the rendering sequence, such as layerings, timings of certain sequences and motions of specific objects found in the texture and geometry files. The *.cfg file either contains or points to all of the information that any rendering client or server would need to handle its portion of the full rendering of the entire multi-channel array. For any one contiguous display, the *.cfg files distributed to the imaging computers controlling the individual display panels will be identical to each other, but the information and calls therein are accessed and interpreted differently from one computer to the next according to whether the computer has been identified in the *.ini file as a render server or not, the node ID of the imaging computer, and the station IDs controlled by that imaging computer. The *.cfg file also contains command lines used to make an interrupt, as when the system administrator wishes to change the animation or other scene elements during runtime. - All of the software components shown in
- All of the software components shown in
FIG. 14 are written to the hard drives of the imaging computers prior to execution of the animation sequences. This greatly decreases the amount of required network traffic. - Each of the rendering servers and clients has stored thereon a
world scene 556 or areplica server 38 further has foreground, background, viewpoint generation andsequencing algorithms 568 which it accesses to set the viewpoints.Algorithms 568 together make up an overall system monitoring protocol which permits the system administrator to manually review or intervene in making on-line changes and adjustments to any viewpoint already established on the system. - Also present on all rendering computers (servers and clients) is an executable (*.exe) file which, when executed by any imaging computer's processor, interprets data stream commands coming from the rendering server and received by each of the clients. The render
server 38 further keeps aclock 570 that is used to synchronize the animation across all of the displays. -
FIG. 2 is a block diagram illustrating the high-level operation of the imaging computers according to the invention. In a typical application of the invention, the system 10 as shown in FIG. 1 is used to provide an array of multiple, contiguous displays for the projection of a unified video image containing animation characteristics and overlaid text. The process of FIG. 2 begins at step 100 for each rendering server.
- At process step 102, "virtual cameras" are created by the render server viewpoint algorithm which correspond to each of the output video channels. These "virtual cameras" are logical partitions of the processors and memories of imaging computers 38-50, four such virtual cameras being created for each imaging computer 38-50 in the illustrated embodiment. The system administrator sets up the properties of these virtual cameras in the software in advance of execution. The "align cameras" process 102 begins selecting previously stored imaging textures and geometries so as to lead to the creation of the final set of images. Camera alignment step 102 is linked to a step 104, which in the illustrated airport terminal embodiment establishes each of these virtual cameras as driving a display for either a desk or a gate. Process step 104 makes it possible to assign certain text data to each of the virtual camera nodes established at step 102. Registration with the FIDS server at step 104 also includes defining a prescribed set of locations for the overlay of the animation by these text data.
- Step 102 establishes which prestored geometries and texture files are needed for a scene. Step 106 queries these files and loads them. A geometry file possesses information on the exterior limits of a displayed object. A texture file relates to a color or surface treatment of such an object or of the background. These geometries and textures are stored prior to runtime on the mass storage device(s) of each server and client, so that they do not have to be transmitted over the network.
- At step 112, each rendering server or node distributes the current positions and orientations for the scene geometries to its associated rendering servers and clients. - At
step 116, which takes place in each of the client and server imaging computers, the positions and orientations are used to place the geometries within scenes. The placement step 116 uses a coordinate system previously established by the software. The geometries, positions and orientations may change or may be modified as rapidly as the rendering servers can supply them.
- Once the geometries pertaining to the animation are properly positioned at step 116, at step 118 the FIDS data accessed by the UNIX server 12 (which in turn is linked to the network via path 74, FIG. 1) are directed to the appropriate ones of the rendering servers and clients. After step 118, the rendered scene at step 120 is refreshed with the next set of geometries to be established with new orientation coordinates on the same textured scene as background, with the FIDS data stream continuing to project flight information within the same pre-established locations. - At
step 122, the texture memory is purged to replenish available space for new imaging data in the animation to be loaded. The process then reverts to step 106 for the next cycle. -
FIG. 14 overlays the principal steps of this process on one server/client group 24 of the network. Atstep 572, an executable file initiates data stream commands to begin the image rendering process. These commands are passed by theUNIX server 12 to each of theclients executable file 574 receives the commands or cues and begins to construct viewpoint map images based on them. The images to be displayed are rendered by each of the clients atsteps 576. When these images are completed, eachclient synchronization return signal 578 throughserver 12 to renderserver 38. Renderserver 38 waits until all such synchronization signals have been collected before initiating the next cycle in the animation. -
FIG. 3 is a flow diagram showing how a user selects viewpoints for each of the virtual cameras he or she wishes to set up in the multiple display system. A viewpoint defines a position and an orientation from which all of the geometries associated with the displayed animation imagery are rendered and projected onto one of the displays 52-58. Each "world", as that term is defined herein, has at least one viewpoint associated with it, and more typically multiple viewpoints, and it is from these viewpoints that scenes associated with the respective virtual camera windows are drawn. As shown in FIG. 5, worlds exist within a universe 192 that is created by the user. When a universe is created in the software, a single virtual camera window viewpoint is automatically assigned to it. Once it is established, the user is permitted to construct additional virtual cameras, each having possibly different viewpoints, and further has the ability to switch among them. A world in this sense comprises a set of viewpoints limited to a sector of the defined universe, with additional worlds within the same universe either existing adjacent to one another, partially overlapping, or, as FIG. 5 illustrates, on opposite sides of the universe. Multiple universes may also be established with additional worlds as separate subsets of those designated universes, and these universes may reside on separate rendering servers. One example is the embodiment shown in FIG. 13. A predetermined conversion process may be used among worlds (for example, a separate world can be instantiated by each of several separate server groups). - Within any world, a scene may be rendered from several different viewpoints, each of which is associated with a particular virtual camera. Each virtual camera is associated with a scene graph. In some instances, the same scene graph may be shared between or among several virtual cameras, where their perspective views intersect.
If, for example, two different rows of virtual cameras cross each other at some intersection point, then only those two overlapping virtual cameras might end up sharing a particular scene graph since they share the same viewpoint perspective field. Virtual camera windows depicting different scenes would use different scene graphs. In this manner, the viewpoint is determined before the scene is rendered.
- At step 150 in FIG. 3, the user (system administrator) writes the position coordinates for the origin of a viewpoint. Once this is done, at step 152 the user determines the orientation parameters (see FIG. 4) associated with the viewpoint. - Next, at
step 154, a corresponding identity matrix for the scene graph is enabled. Position and orientation are parameterizations within an X, Y and Z coordinate system which defines the identity matrix. In FIG. 4, this coordinate system 170 is illustrated with the X axis pointing to the right, the Y axis pointing straight down and the Z axis pointing straight ahead (into the paper). These coordinate frame axes, at step 156 (FIG. 3), are highlighted to the user on an administrative display screen such as monitor 64 in FIG. 1. The user then chooses an aspect ratio adjustment, which is a vertical scale factor applied to the screen image. This value is useful in correcting for any monitor or pixel distortions in the display system. - At
step 158, parallax settings are selected. The parallax settings may be used to establish a separation distance between virtual cameras along a linear path that is virtually spaced from the scene being rendered. The shape of this path is arbitrary. The path may be curved or straight; FIG. 13 shows examples of straight, curved and closed or endless paths of virtual cameras.
- In many cases, a convergence angle may be desired among the virtual cameras on the path, depending on the type of scene selected, and this convergence angle is supplied at step 160. For example, when a scene is being rendered in multiple displays, it may be desirable for the viewpoint established in the scene to vary from one display to the next as an observer walks along the displays on a path parallel to them. The establishment of a convergence angle provides for a balanced and smooth proportional viewing of a scene and the matching of infinity point perspective from one display to the next. At step 162, after all of these coordinates and parameters have been selected by the user, the viewpoint of the scene is created, stored in the virtual camera memory, and made available at runtime for the rendering and projection of the image. -
FIG. 4 is a schematic representation of a viewpoint coordinate system and the world coordinate system upon which it is based. The world coordinate frame axes are shown at 170. The viewpoint coordinate frame axes are shown at 172, and as shown will typically be different from the world coordinate frame axes for the second and subsequent virtual camera viewpoints established for that world. The viewpoint coordinate frame axes establish the direction of the viewpoint. A hither clipping plane 174 outlines the limits of the viewpoint angle of view as it projects outward toward a view plane 176. By adjusting the hither distance, which is the distance between the view position 178 and the hither clipping plane 174, the size of the view plane 176 can be regulated, and therefore the range of the viewpoint itself. In this fashion, the view position and orientation can be established relative to the global world coordinate frame 170. Although in the example given in FIG. 4 the Y axis of the viewpoint frame 172 and the world coordinate frame 170 happen to be parallel, this need not be the case. -
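The effect of adjusting the hither distance follows from similar triangles through the view position. This small sketch assumes a symmetric view frustum; the function and parameter names are illustrative.

```python
def view_plane_width(hither_width, hither_distance, plane_distance):
    """By similar triangles from the view position, the width covered
    at the view plane scales with distance; shrinking the hither
    distance therefore widens the region covered at the view plane."""
    return hither_width * plane_distance / hither_distance

# a 0.5-unit hither window at distance 1 spans 5 units at distance 10
w = view_plane_width(hither_width=0.5, hither_distance=1.0,
                     plane_distance=10.0)  # -> 5.0
```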
FIG. 5 illustrates the spatial relationship between two representative world localities 190, 191 within a graphical universe 192. The worlds 190, 191 occupy regions of the universe 192, and several such worlds may overlap or exist oppositely within the same universe. A virtual camera object always corresponds to a region of the screen in which a particular view of the graphical universe is displayed. With the virtual camera structure of the invention, multiple views can be displayed simultaneously and flexibly to different parts of the screen. For example, a set of virtual camera windows can be assigned to a given world, which is itself confined to a specific region 190 of the universe 192, with viewpoints only defined for that particular world 190. At the same time, another set of virtual camera windows can be directly associated with another separate region 191 of the same universe 192, limiting those particular viewpoints to that individual world. - In
FIG. 5, a central axis 194 serves as the point of origin directed toward each individual world, spread out 360° around the center of that universe 192. Each world may be defined as its own sector of that universe, and may be accessed as such. This attribute becomes necessary and useful in displaying concurrent multiple worlds within the same universe, or even in the multiple display of multiple universes, which can be achieved by using several rendering servers and their corresponding client computers. - For example, and as laid out in
FIG. 5, a first rendering server and related group of clients can have loaded onto them the same universe information database as a second rendering server and its related group of clients. The displayed outputs of each server can be directed to opposite poles of the universe 192. Since the two rendering servers may be joined on a network, positional data relating to imaged objects may be exchanged between them, thereby allowing for two separate worlds to coexist within the same networked system. It is also possible to have two separate universes running on two separate rendering servers, also linked within the same system, and visible on adjoining sets of output screens or displays, with data positions transferring between the rendering servers using a predetermined conversion process. -
FIG. 6 is a schematic flow diagram showing the rendering process within each rendering server. A rendering server, such as server 38 (FIG. 1), within a multiple-channel imaging array 24, handles all of the user interaction devices open to it. The rendering server 38 provides the framework under which the software protocols distribute real time animation commands across multiple channels to its clients 40-42. The rendering server 38 uses a communication protocol that provides a unique pathway through the system, which in turn enables the assignment of specific viewpoints of a given scene to respective graphics card video outputs along the array, and provides a method of synchronizing the whole array. The rendering server 38 controls the animation simulation to be displayed. The clients 40-42 are slaves to the server 38 and execute the commands addressed to them. - The clients and server(s) communicate using an application level protocol. Server-shortened command stubs are provided as a way to map the software animation application programming interface (API) calls to their distributed equivalents. Reciprocally, the clients' API or stub procedures provide a way to map commands received by the servers over the network to local software API calls. Copies of the APIs reside both on the
rendering servers and on the clients of each group 24 (FIG. 1) in the animation simulation. - A naming scheme or
module 200 allows the client and the server to which the client is connected to address remote objects within the scene graph and to specify operations to be performed on them. The name module 200 is linked to a pointer to a name map at 202. - In the communication protocol, both the client and the server use calls to the software's network functions to connect to a multicast group. For example, the
rendering server 38 issues commands to its multicast group 24. The application level protocol uses a net item syntax that is included within the animation software. In the actual transmission of information between any of the clients 40-42 and the server 38, a field referenced as a type field is used to distinguish data items from command items. In the illustrated embodiment, the command items are distinguished from the data items by the most significant four bits of the type field, which are all ones. Type values 0xF0 to 0xFF are reserved for command codes. - The server loads a terrain model and computes the behavior at 204 for the activity taking place within the terrain. It initiates changes to the scene graph at 206 by making software calls to the client stub procedures. It may also make use of the
naming module 200 to name objects in the scene graph. The rendering server 38 may also use a command encoding/decoding module 208 to process items addressed to it by respective clients, or by commands delivered to it from outside the network to re-edit or recompile an updated set of scene graph features at 206. The server 38 initializes and controls the scene at 210. -
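The four-bit test on the type field described above can be written directly. A minimal sketch; the sample type values are illustrative, but the 0xF0-0xFF command range is as stated.

```python
def is_command_item(type_field):
    """Command items carry all ones in the most significant four bits of
    the type field; values 0xF0-0xFF are reserved for command codes."""
    return (type_field & 0xF0) == 0xF0

print(is_command_item(0xF3))   # True: a command code
print(is_command_item(0x42))   # False: a data item
```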
Rendering server 38 is responsible for initializing the animation simulation at 204 and also manages swap synchronization at 212 of all client computers linked with it. The main role of the associated clients 40-42 (and similar logic within server 38 itself) is to render the scene from the respective viewpoints of the virtual camera objects that have been created in them, which have been adjusted for their respective viewing pyramids (see FIG. 4) and their respective orientations with respect to a perpendicular plane. As explained in conjunction with FIGS. 2 and 14, the clients read their configurations from a text file referred to as an “*.ini” file. Following this, each client regularly decodes packets of data sent over the network and executes software calls locally on its copy of the scene graph. It uses its copy of the command encoding/decoding module 208 to map, at 214, the command code to its appropriate procedure. This map 214 is set up statically and all clients 40-42 rendering under the designated server 38 must have a copy of this map before the simulation can begin. The clients use their copies of the naming module 200 to resolve client references at 202 to objects in the overall scene graph. -
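The static command map each client must hold can be sketched as a dispatch table. The command names and specific codes below are hypothetical; the disclosure requires only that every client under a server holds an identical copy of the map before the simulation begins.

```python
def set_transform(args):
    return ("set_transform", args)   # stand-in for a local scene graph call

def swap_buffers(args):
    return ("swap_buffers", args)

# Static map from reserved command codes (0xF0-0xFF) to local procedures
COMMAND_MAP = {
    0xF0: set_transform,
    0xF1: swap_buffers,
}

def dispatch(type_field, args):
    """Decode a received item: command items are routed through the map,
    data items are passed through for separate handling."""
    if (type_field & 0xF0) == 0xF0:
        return COMMAND_MAP[type_field](args)
    return ("data_item", type_field, args)

print(dispatch(0xF1, None)[0])   # -> swap_buffers
```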
FIGS. 7A and 7B illustrate how text information may be overlaid on the image displays. In the illustrated embodiment, flight information display system (FIDS) data, which is Oracle based and exists within a UNIX platform environment, may be obtained through an Ethernet connection outside of the rendering server and client network and then integrated into the animation process. In the illustrated embodiment, the flight information derived from the FIDS database is available in airports throughout the United States and in other countries throughout the world and provides arrival and departure information for passengers traveling by air. Displays carrying the FIDS information are situated in flight desk areas and gate areas for specific airlines. - In the software protocol shown in
FIG. 7A, a listening thread 220 is initiated that queries the incoming FIDS data received by the system. The system results are then transferred to a set of client threads, and a protocol 228 is generated, completing the sectioning of the data, and then detaching and resetting itself for further queries. - When the listening
thread 220 detects a parcel of flight data in response to a preloaded data query, it delivers a sequential set of commands to a desk monitor thread 230, a flight monitor thread 234 and a command listen thread 238. Threads 230, 234 and 238 then process this data as follows. - The
desk monitor thread 230 selects which desks are to receive which sets of flight arrival and departure information; different ones of these information sets pertain to particular desks. For each desk, a desk thread 232 is updated (233) by the system. Flight monitor thread 234 completes a process of determining a flight thread 236. Once this occurs, the command listen thread 238 acknowledges the arrival of all of the data, which is now fully parsed. The command listen thread 238 issues commands as to how the text is to be allocated within the video array as well as into the independent gates within the terminal, switching a set of command threads, each with its respective update thread. -
FIG. 7A illustrates operations taking place on the UNIX server 12 side of the system. On the client side (taking place within any of the imaging computers 38-50), as shown in FIG. 7B, a new listen thread 252 is engaged responsive to a command addressed particularly to that client by main server 12, and prepares itself to receive the text portion of the FIDS data, including flights 256 for both desks 258 and gates 260. As the rendering servers and clients integrate the text information for the screens controlled by them, a status thread 254 checks and logs the completion of the operation, and resets itself for the next series of queried FIDS data. The frequency of the querying is adjustable by the user of the system. If flight data are not present by a certain preset time, the controlled screen does not display the new flight data until the occurrence of both a new timing period and the arrival of new flight data. The user may monitor the system remotely through telnetting to the UNIX server 12 or through software loaded onto the server 12 that reveals the complete graphics of each of the video wall screens and gate display screens. - The illustrated embodiment is one form of overlaying text associated with animations displayed along large video walls with other adjacent screens that are located at gates within an airport environment. The present invention is also useful in situations where rapidly changing or time-variant text is closely integrated with large video walls having a multiplicity of screens where detailed animations, simulations and video overlays stretch along the full length of the video wall, and where such animations are to be monitored and modified remotely by the users via the Internet. The present invention has applications which include public municipal stations, malls, stadiums, museums, and scientific research laboratories and universities.
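The listen/monitor/update thread structure of FIGS. 7A-7B can be approximated with a small sketch. The record format, queue-based hand-off, and desk names here are our assumptions; only the general pattern of a listening thread routing parsed flight data to per-desk update threads comes from the figures.

```python
import queue
import threading

desk_queues = {"desk_a": queue.Queue(), "desk_b": queue.Queue()}
results = []

def desk_update_thread(name, q):
    """Consume flight records for one desk until a shutdown sentinel."""
    while True:
        flight = q.get()
        if flight is None:
            break
        results.append((name, flight))   # stand-in for updating the display

def listening_thread(records):
    """Route each parsed record to the queue of the desk it pertains to."""
    for desk, flight in records:
        desk_queues[desk].put(flight)
    for q in desk_queues.values():       # signal completion
        q.put(None)

workers = [threading.Thread(target=desk_update_thread, args=(n, q))
           for n, q in desk_queues.items()]
for w in workers:
    w.start()
listening_thread([("desk_a", "UA101 dep 09:40"), ("desk_b", "DL202 arr 10:15")])
for w in workers:
    w.join()
print(sorted(results))
```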
-
FIG. 8 illustrates a main motherboard assembly 300 that, in a preferred embodiment, exists in all of the imaging computers 38-50. Each of these motherboards 300 may be identical for all computers operating in the network, or they may be of a different type or manufacturer, so long as the same motherboards are used within the same render server/client groups. - Each
motherboard 300 must be equipped with a BIOS 302 which acknowledges the presence of multiple graphics cards 304-318 plugged into their specific slots. In the illustrated embodiment these include both 32-bit and 64-bit PCI slots 304-316, numbering up to seven slots per motherboard, and one AGP high speed slot 318. The BIOS 302 built onto the motherboard must be able to assign different memory addresses to each of the cards 304-318, enabling separate video driver information to be sent to each specific card through the PCI or AGP bus (not shown), in turn allowing for video output data to be allocated to that card. Once this is achieved, the imaging system can detect each card and direct each respective virtual camera windowing aperture frame to the VGA output of that card. Different video cards and their manufacturers have differing means of assigning these addresses for their respective video drivers under this arrangement, requiring that all video cards loaded onto the motherboard 300 in a multiple array be of the same type. The customization of the video drivers for this imaging system array and its software controls allows for different video card types to share the same motherboard under the operating systems of Windows NT 4.0 and Windows 2000, if the motherboard chosen to be used has a BIOS 302 that can acknowledge all the separate cards and assign unique memory addresses for those cards. - In a preferred hardware configuration, an
AGP card 318 with one VGA output port can share the same motherboard with at least three PCI cards 304-308 of the same type, providing a total of four video output channels on that motherboard 300. This is a typical arrangement for all the rendering servers and their client counterparts with the multiple channel imaging software being used. Each video output then occupies the same resolution value and color depth for that computer, which can be modified independently on each video channel. Using dual or even quad CPU processors 320, 322 (a representative two of which are shown) on motherboard 300 maximizes the graphical computational speed delivered through the AGP and PCI buses to the graphics cards to enhance the speed of the rendering animation. Since the textures and geometries of the animation sequence reside on all of the hard drives 324 existing on their designated computers, the speed of accessing those libraries is maximized through the motherboard's own SCSI or fiber channel buses 325 (FIG. 9A). Further, each motherboard 300 contains sufficient RAM 326 to transfer the graphical data, interacting with the cards' own video drivers and the available texture RAM 327 on each of the video cards 304-318. The addition of two or even four video output ports on the AGP cards 318 will increase the data throughput to an even greater level, due to the existence of more on-board AGP graphics card pipelines provided by the manufacturers, passing data more quickly through the faster AGP bus to the rest of the motherboard 300. This configuration can also use multiport AGP cards 318 with multiport PCI cards 304-316 on the same motherboard to increase the number of channels per computer, provided that BIOS 302 can recognize each of the video addresses for each of the video ports. The software created for this imaging system array assists in this process. - Choosing the number of video cards per
motherboard 300 must also take into account the most efficient use of available CPU speed on the board 300, the speed of the onboard network, and the presence of other cards running in the system. The addition of video frame grabber cards (not shown) on the motherboard 300 concurrently allows for live outside video to be introduced to the outputted animation video as nondestructive overlays, and may be routed along the video array at a desired degree of resolution. -
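Once the BIOS has given each card a distinct address, binding one virtual camera window to each detected output is straightforward. This sketch is hypothetical; the output names and binding structure are illustrative, not the system's actual driver interface.

```python
# One entry per detected video output (one AGP port plus three PCI cards,
# matching the preferred four-channel configuration described above)
detected_outputs = ["AGP-0", "PCI-1", "PCI-2", "PCI-3"]

def assign_viewpoints(outputs, viewpoints):
    """Pair each video output with exactly one virtual camera window."""
    if len(viewpoints) != len(outputs):
        raise ValueError("one viewpoint per video output is required")
    return dict(zip(outputs, viewpoints))

bindings = assign_viewpoints(detected_outputs,
                             ["view0", "view1", "view2", "view3"])
print(bindings["AGP-0"])   # -> view0
```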
FIG. 9A is a more detailed view of each of the rendering server and client architectures. Each of the motherboards in these computers contains CPUs 320, 322, main system RAM 326, and PCI and AGP bus controller interface circuits with their associated buses 333, 335 (the buses for the first two PCI controller interface circuits). A north bridge 338 serves as a main conduit for signals passing between devices in the central processing portion of the motherboard, including the CPUs 320, 322, the RAM 326 and cache memory devices (not shown). The north bridge also connects to the AGP bus controller 334, the memory address data path device 344, which provides an optimized interleaving memory function for the system RAM 326, and the I/O bridge intermediate chip 340. The AGP port controller 334 is therefore permitted direct computational contact with the CPUs 320, 322 through the north bridge 338, as well as the RAM 326, thereby giving it at least four times the speed of the other, PCI buses used to interconnect to the PCI graphics cards 304-316. - A primary
PCI bus controller 332 is joined directly to the I/O bridge 340 and serves as the maximum throughput device for the PCI cards attached to it. Other PCI bus controllers connect at a juncture 356 between the I/O bridge 340 and the south bridge 342, and in the illustrated embodiment run at secondary, lower speeds of 33 MHz. It is preferred that the PCI graphics cards be attached to the primary PCI bus controller 332. -
South bridge 342 joins all “legacy” devices such as SCSI controllers (one shown at 325), IDE controllers (one shown at 336), onboard networks and USB ports (not shown). It also connects to network port 358, from which the positional coordinates of an animation's formatted graphics are transferred. South bridge 342 is meant to attach to lower-speed data storage devices, including the disk array 324 from which source data for the system is derived. The architecture shown in FIG. 9A has been demonstrated to be superior in motherboard performance in terms of data transfer speeds and bandwidth capability for multiple graphics card inter-communication on the motherboard and is preferred. - Each of the graphics cards 304-318 has a respective graphics card CPU or
processor that operates in parallel with the general purpose CPUs 320, 322, off-loading rendering work from those general purpose processors. -
FIG. 9B shows how the operation of the motherboard results in total output resolution. Each successive graphics card present on its respective bus communicates to the BIOS its slot numbered position at step 350, thereby directing the BIOS 302 on how to address the video driver to handle multiple output video channels, selecting a numerical value as to the number of channels available. At 352 the user may manually select the final resolution of each video output on each video card, which at 354 sets the overall resolution of the entire video animation image emanating from that particular computer box. The total resolution of the video wall made up of these contiguous video channels arranged and positioned precisely together is a summation of each of the resolutions set by each channel on each graphics card, including all multiport channels wherever they might be available on their respective cards. - It is also useful to consider the ability of the motherboard, its drivers, and its BIOS to perform these tasks within other operating systems such as LINUX, running on separate rendering servers and client computer systems in a manner that is more efficient in the retrieving and compiling of the graphical data. This may also be a determining factor in how fully the computational time of the multiple CPU processors on the motherboards can be used, in terms of multithreading of the animation rendering software integrated with the functions of the graphics cards chosen for the system.
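The summation of per-channel resolutions described above amounts to simple arithmetic. The channel values are illustrative.

```python
def wall_resolution(channels):
    """Total resolution of a single horizontal row of contiguous video
    channels: widths add, height is common to the row."""
    width = sum(w for w, _ in channels)
    height = channels[0][1]
    return width, height

# Four 1280x1024 outputs from one computer box
print(wall_resolution([(1280, 1024)] * 4))   # -> (5120, 1024)
```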
-
FIG. 10 shows an alternative system 400 in which a group of rendering servers 402-406 is connected through independent hubs, with the hubs joined to a main server 438. FIG. 10 illustrates the modular nature of the system and how additional server rendering groups may be added onto the full system 400, increasing the number of total channels in a video wall animation. - The preferably UNIX-based
main server 438 joining the hubs linked to the groups of rendering servers is the entry point for the introduction of the FIDS text data to be overlaid on the various animation screens of the multi-channel imaging system. A total of eight virtual camera windows may be provided for each of the rendering servers 402-406. - In a standard contiguous video wall arrangement, each rendering server 402-406 provides a consecutive set of video channels that match precisely in a graphical sense as one views the video array from left to right, with the last image of the first group matching its right side with the left side of the first image from the second rendering server group, and so on. Under this arrangement, there is no upper limit to the length of the video wall, and the real-time animation rendering is regulated by the processing speed of each client computer box, the server computer boxes, and the network that joins them.
-
FIG. 11A shows an example of how a contiguous set of virtual camera viewpoints may look when projected onto a large video wall. Each of the video channels is numbered sequentially from left to right. The right edge of image frame 1 maps precisely onto the left edge of image frame 2 at a boundary 450, and so on along the expanse of the video wall, with no upper limit as to the number of channels which may be added. The timing of the animation sequences within the scene graph is regulated such that objects that move out of one frame and into the adjacent frame left or right do so continuously without breaks or pauses. - Each rendering server and its adjoining client computer units make up contiguous portions of the video wall, which may be oriented either horizontally or vertically, numbering from bottom to top for vertical video walls. A video wall constructed according to the system may have other shapes and directions, including cylindrical, domed, spherical, parabolic, rear or front screen projected configurations, and may include additional tiers of horizontal rows of video screens. This feature of the multi-channel imaging system is enabled because the virtual camera windows the user selects to assign viewpoints to specific video card outputs are based upon a coordinate system that the user is able to define and control as a part of the viewpoint software, with the animation rendering portion of the software responding to those portions of worlds the user has established within each rendering server or client computer.
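The edge-matching of adjacent frames can be expressed as contiguous horizontal slices of the overall scene. A minimal sketch with illustrative pixel values; the slice representation is ours.

```python
def channel_slices(n_channels, channel_width):
    """Horizontal pixel span rendered by each channel, numbered left to
    right; the right edge of each slice is the left edge of the next, so
    adjacent frames meet exactly at their shared boundary."""
    return [(i * channel_width, (i + 1) * channel_width)
            for i in range(n_channels)]

slices = channel_slices(3, 1280)
assert slices[0][1] == slices[1][0]   # seamless shared edge
print(slices)                         # -> [(0, 1280), (1280, 2560), (2560, 3840)]
```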
- As shown in
FIG. 11B, graphical overlays or superimpositions of other rows of real time animation are possible, since more than one viewpoint may be assigned to the same video output channel, with one of the virtual camera window settings having a smaller aperture than the other, and with those sets of smaller apertures extending across the video walls in a contiguous fashion. The source of this second superimposed viewpoint series may come from another region of the same world, or a separate world altogether. -
FIG. 12 shows how separate video drivers may be used simultaneously in the multi-channel imaging system, connecting with the same UNIX server 470 that links the data flow from the separate hubs to their respective rendering servers and respective client computers, each running their own video drivers. - All video drivers introduced into the system may be used to access worlds, but some worlds may be created to suit one video card's manner of displaying imagery through its own specific video driver. In addition to this, newer graphics cards that are recently introduced to the market may be loaded and tested against the existing video cards present on the system without having to rewrite software code for the entire system. By distinguishing and separating the newer cards' video driver from another set of video drivers already present within the system, a new set of differentiated tests may be implemented into the video array while the system remains continually online.
-
FIG. 13 shows a system having multiple camera base configurations running concurrently within the same network 500. Each base configuration uses a separate rendering server. A first base 508 of virtual cameras is “horizontal” in that its virtual cameras are equispaced along a virtual straight line and have viewpoint axes which are parallel to each other. A second camera base 510 takes the shape of an arc; its virtual cameras 512 have viewpoint axes which are not parallel but which rather converge. A third camera base 514 forms an endless loop with the viewpoint axes of its virtual cameras 516 outwardly directed. - In each camera base instance, the same worlds may be used, or separate worlds may be newly introduced. The parallax value in each
base configuration may be set independently. Configuration 508 has a parallax value set as a virtual distance between each of the virtual cameras 509. On a separate rendering server 504 and its clients, the arcing base 510 anchors convergent viewpoints whose coordinates the user may select in the software's parameters. Such curved camera bases are able to work with the convergence used in certain animations which encourage the viewer to focus more on activity and objects that exist in the foreground as opposed to the more distant background features, depending on the angles between the curving set of viewpoints. Also, within certain types of generated worlds, a linear horizontal base may not provide needed convergence but a curved virtual camera base will. The arcuate path 510 can be used, for example, in a set of displays arranged along a wall to simulate a set of windows in the wall to the outside. As the viewer moves along the wall, the viewpoint changes such that what the viewer is seeing mimics what he or she would see if those displays really were windows. - The circular virtual camera base 514 covers a full 360° sweep of an animated world. This camera base lends itself to more three dimensional applications of animation viewing, requiring the system to allocate geometries and textures around the entire perimeter of a world. An endless base 514 can be used to show portions of multiple worlds in larger detail. Arcing virtual camera bases like
base 510 can be used in video projection for “caves” and other rounded enclosures, where the projected imagery surrounds the viewer or viewers in a theater type arrangement. In this instance, the three dimensional coordinate system that defines the viewpoints set by the user of this system determines the degree of arc of the projected imagery against a curved or sloping screen surface. Since the viewpoint controls within the software allow for both flat plane as well as curved surface structure, the nonlinear aspects of projecting against any curved surface may be programmed into the system to compensate for the curvature of the projection screen, even if that curved surface is discontinuous. The final image will be viewed as a transposition of a flat rectilinear scene onto a curved surface screen, without distortions or with reduced distortions, in either a rear projected or a front projected format. Consequently, the contiguous set of images along an arc may also be joined together seamlessly, in the same fashion as a set of contiguous flat images that are precisely matched along each other on a flat display screen. - While three representative virtual camera baselines or
paths 508, 510 and 514 are shown, other baseline shapes may also be used. - The software controls enable the user to set the shapes of the viewpoint windows themselves, thereby creating apertures that are rectangular, triangular, or keystoned, depending on the nature of the projection screen's shape. Prior to the invention, the projection apparatus had to be fitted with special lenses and apertures on the projectors to create an undistorted balanced image on a curved screen. According to the invention, the networked set of rendering server and client computers all share the same programmed curvilinear settings for projecting each image on an elongated curved screen, and are not limited in terms of the number of channels used in the full system. This feature provides the capability of increasing the resolution of the final projected image along the inside of the cave enclosure by increasing the number of channels per horizontal degree of view. The system further provides for the introduction of rows or tiers of curved images, vertically, which can be especially useful in the projection of images within large domes or spheres, or where imagery is seen both above and below the vantage point of the viewers. The use of superimposed projected imagery as illustrated in
FIG. 11B may also be used in a curved screen surface environment. If different shapes of curved projected material are to be used simultaneously, the multi-channel networked imaging system can help allocate one set of images for one shape of screen, and another for another shape. - The modularity of the system as shown in
FIG. 13 permits its adaptation to multiple cave or domed theater enclosures employing multiple sizes and shapes, with the same or different sets of subject matter to be projected. Multiple rendering servers may be employed simultaneously, each with separate sets of viewpoint windows tailored precisely for a certain enclosed screen's configuration, programmed for those rendering servers and their connected client computer boxes. This permits a uniquely differentiated set of worlds to be shown for different cave enclosures, where portions of worlds may be shown in different cave enclosures at the same time, within the data set of a single universe or even linked for multiple universes that are joined together by the same UNIX server network. - In certain cases both front and rear projection may be chosen for an installation involving different cave enclosures, altering the manner in which images appear on the enclosed viewing screen. In such an embodiment a group of rendering servers and their client computers would be assigned for rear projection, and another separate group would be assigned to front projection imagery, each specifically addressing the nonlinearity corrections necessary for projecting onto curved surfaces. A single cave enclosure may provide both front and rear screen viewing zones simultaneously within the same chamber, as in the case of a sphere or dome inside a large spheroidal or domed theater enclosure. Within this structure, the outer spheroidal sets of screens may use front projection, joined with one group of rendering servers and their rendering clients, and an inner sphere or domed structure would make use of rear projection for another associated group of rendering servers and their own rendering clients.
- As shown for example in
FIG. 12, separate sets of differing graphics cards and their corresponding video drivers may be assigned to separate groups of rendering servers and clients. The UNIX server 470 that joins the network of all rendering servers provides a high speed computational link that addresses the positions of the varying textures and geometries made visible in and around the enclosures. Since the real time animation rendering capacity is enabled on all servers and their rendering clients in this regard, increasing the output resolution per degree of arc for the projectors and other connected display devices used in this system is achieved by increasing the total number of video channels joined throughout the system, with no upper limit, to further enhance the makeup of the entire video projection array. -
FIG. 15 illustrates two particular applications of the invention's multidisplay architecture: an autostereoscopic projection array and a flat panel display interface. The present invention has the ability to compile and project multiple perspective image viewpoints of a given scene simultaneously, which may be interfaced directly with various classes of newly developed autostereoscopic display devices such as flat panel 600 and rear projection screens 604. Such display devices free the viewer from the need of wearing shuttered or polarized glasses to view 3D stereoscopic images, greatly enhancing the wide angle viewing capabilities of autostereo images, and improving clarity and brightness of the final image set. - Since each
rendering server 606 and its rendering clients 608, 610 (a representative two of which are shown) has established within it a software set of angled viewpoint controls assigned to video output ports, such ports may be used to supply images to angled projectors 612-626 that converge their output beams on a central point behind the autostereoscopic screen device 604. These screen devices are available from several manufacturers but their construction and operation may be summarized as follows. Screen device 604 is a rear projection system that includes two large rectangular lenticular lenses 605, 607 arranged about a central axis 632, with their vertical lenticules identical in spacing, such as 50 lines per inch. A front view detail of each of these lenticular lenses is shown as part of the viewing screen assembly 604. Clear spacing plates such as acrylic plates hold the lenses flat and apart. - The video projectors 612-626 should have identical focal length lenses, resolution and aperture size, and should be anchored along a single stationary arc having an
axis 632 which is orthogonal to the screen 604. With very large screens, the degree of arcing is slight. If the size of the rear screen assembly 604 is small, the arcing is more pronounced. While eight projectors 612-626 are shown, any number of projectors greater than or equal to two can be used. Screen device 604 receives the array of light beams directed towards the back of the screen, and after that array travels through several layers of lenticular lensing material sandwiched inside the screen, re-projects the projector light rays from the front of the screen with a summation of each of the projectors' rays across a widened viewing aperture. The point of convergence 636 of all of the projectors' beams is located at the intersection of a central axis 632, itself perpendicular to the plane of screen 604, and a rear surface 634 of the rear lenticular lens 605. - The rectangular pattern created on the back of the rear lenticular screen by video projectors 612-626 should be identical in size and shape, and any keystone corrections should be done electronically either within each video projector 612-626 or by software operating within the graphics cards in the
imaging computers.
- In this embodiment, increasing the number of projectors 612-626 increases the number of views visible to viewers in front of the
screen 604. The distance between the projectors 612-626 and convergence point 636 is determined by the size of the rectangular image they create on the rear lenticular lens 605 of screen 604, the objective being to fill completely the viewing aperture of the rear lenticular lens 605.
- If the number of projectors 612-626 is large, say eight or more, and the resolution of the projectors 612-626 is high, for example 1280×1024 pixels each, then the lenticular lenses themselves can support a number of lines per inch greater than 50 and as high as 150, thereby increasing the total number of views perceived on the front of the screen for 3D viewing.
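The resolution bookkeeping described above can be sketched numerically. The helper below is illustrative only (the function names and the 12.8-inch image width are assumptions, not values from the patent): it computes how many pixel columns of one projector's image fall under a single lenticule, which bounds the lenticular pitch a given projector resolution can cleanly support.

```python
# Illustrative sketch: relate projector horizontal resolution, the width of
# the rectangular image on the rear lenticular lens, and the lens pitch in
# lines per inch. All concrete numbers below are assumed for illustration.

def pixels_per_lenticule(h_resolution: int,
                         image_width_in: float,
                         lines_per_inch: float) -> float:
    """Pixel columns of a single projector's image under one lenticule."""
    lenticules_across = image_width_in * lines_per_inch
    return h_resolution / lenticules_across

def max_lines_per_inch(h_resolution: int, image_width_in: float) -> float:
    """Finest pitch for which every lenticule still receives at least one
    pixel column from each projector."""
    return h_resolution / image_width_in

# A 1280-pixel-wide projector image filling a hypothetical 12.8 in wide
# rear lens at 50 lines per inch puts two pixel columns per lenticule:
assert pixels_per_lenticule(1280, 12.8, 50) == 2.0
# The same image could support a pitch of up to 100 lines per inch:
assert max_lines_per_inch(1280, 12.8) == 100.0
```

Higher-resolution projectors raise this bound, which is consistent with the observation that large projector counts and resolutions allow pitches well above 50 lines per inch.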
- A typical rear-projection light beam first passes through the rear
lenticular lens 605 at a given incident angle with respect to surface 634. The rear lenticular lens 605 refracts this incident beam at an angle determined by the focal length of lenticular lens 605, the angle of the incident beam, and the distance of the projector from convergence point 636. The first, rear lenticular lens 605 establishes an initial number of viewing zones and directs these rays through the middle, circular lenticular lens 615, which widens the viewing zones set by the first, rear lenticular lens 605; the amount of widening is set by the focal length of this middle lens. As a ray passes through the front lenticular lens 607, which preferably is identical to the rear lens and is offset to the right or left by a fractional distance less than the width of a single lenticule, the number of contiguous perspective viewing zones is multiplied. The amount of this multiplication is determined by the number of lines per inch of the lenticular lens, the number of projectors arrayed behind the rear lenticular lens, the right or left offset distance of the front lenticular lens relative to the rear lenticular lens, and the separation distance between the planes of the front and rear lenticular lenses. Typically this multiplication factor is three. The lenticular lenses are held firmly in flat positions by glass or acrylic plates. Screen 604 is thus able to repeat the total number of views delivered to the back of the screen several times, providing an even wider 3D convergent viewing zone so that large audiences can collectively view such autostereoscopic images in a large theatre environment or along a video wall.
- In this embodiment, with eight projectors 612-626 positioned behind the
screen 604, a viewer in front of screen 604 sees a succession of eight stereo views of the given scene, the left eye observing a left perspective view and the right eye a right perspective view, the views determined by the viewer's angle with respect to the front lenticular lens 607.
- Several screens may be optically joined together, with their individual systems of lenticules, to provide an immersive 3D enclosure, or a screen may be curved or shaped to arc around the audience's viewing perspectives. The real-time rendering facilities inherent in the distributed image processing of the invention permit the rapid movement associated with large-scale, high-
resolution motion 3D viewing.
- With the addition of a
video multiplexer 628, autostereoscopic flat panel devices such as device 600 may be joined to the system for smaller 3D viewing applications that do not require stereo glasses or head-tracking devices. Furthermore, a lenticular printer 630 may be added to the system to view, edit, and print lenticular photos and 3D animations created within the multi-channel imaging system. This aspect is particularly useful because it lets the creator of 3D lenticular work view changes to an image under construction instantaneously on a 3D screen, instead of reprinting an image array many times on an inkjet or laser printer to achieve the desired 3D effect.
- The way in which autostereoscopic images are delivered or constructed within the system of the invention is based on the parameters set up to control the perspective fields of the various images to be assembled. Specialized software selects these values for a given 3D world, which may be computer generated or transferred from an external source of 3D data, such as digital camera sources or film photography scans. Such controls may regulate viewing distance from a centralized scene, viewing angles, parallax adjustments between virtual cameras, the number of virtual cameras used, perspective convergence points, and the placement of objects or background material compositionally within the scene.
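As one hedged illustration of such viewpoint controls, the sketch below places a number of virtual cameras on an arc of a chosen radius, all converging on a central scene point, with the total angular spread acting as the parallax adjustment between adjacent cameras. The function name and all parameter values are hypothetical, not taken from the patent.

```python
import math

def camera_positions(num_cameras: int,
                     radius: float,
                     total_spread_deg: float,
                     center=(0.0, 0.0, 0.0)):
    """Return (x, y, z) eye positions on a horizontal arc around `center`.
    Every camera looks toward `center`, so adjacent cameras differ by a
    fixed parallax angle of total_spread_deg / (num_cameras - 1)."""
    cx, cy, cz = center
    positions = []
    for i in range(num_cameras):
        # Parameter t runs 0..1 across the rig, symmetric about the axis.
        t = i / (num_cameras - 1) if num_cameras > 1 else 0.5
        angle = math.radians((t - 0.5) * total_spread_deg)
        positions.append((cx + radius * math.sin(angle),
                          cy,
                          cz + radius * math.cos(angle)))
    return positions

# Eight viewpoints on a 28-degree arc, mirroring the eight-projector rig:
rig = camera_positions(8, radius=10.0, total_spread_deg=28.0)
assert len(rig) == 8
assert rig[0][0] < 0 < rig[-1][0]          # leftmost vs. rightmost camera
assert abs(rig[0][0] + rig[-1][0]) < 1e-9  # symmetric about the axis
```

Each returned position would feed one rendering channel; narrowing `total_spread_deg` reduces inter-camera parallax, one of the adjustments the text describes.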
- Since there is no upper limit on the number of viewpoints created by the system, recorded source data that possess only a few views, or even just two, may be expanded through a mathematical algorithm within the system that generates additional views between or among the original set. The results of this 3D reconstruction of an actual scene may be composited with other autostereo images in much the same way as portions of a 3D world may be joined together. For the 3D
flat panel display 600, software interleaving functions established within the multi-channel imaging system may be used to optically join multiple perspective views, in combination with a video multiplexer, to support a minimum of four channels; the upper limit is governed by the line pitch of the lenticular lens positioned on the 3D panel and by the total screen resolution of flat panel 600.
- In summary, a real-time, animated, multiple-screen display system has been shown and described in which a plurality of virtual cameras is set up, each having its own viewpoint. The present invention permits animated objects to move across multiple displays, allows changing text data to be superimposed on these images, and permits contiguous multiple-screen displays of other than flat shape, capable of displaying scenes from different viewpoints.
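A minimal sketch of such an interleaving function follows, under the simplifying assumption of a vertical lenticular sheet whose pitch maps one output pixel column to one view (the patent does not specify this layout): output column c takes its value from view c mod N.

```python
# Assumed column-interleaving scheme for a lenticular flat panel.
# views: list of N images of identical size, each as [row][column] lists.

def interleave_columns(views):
    """Return one panel image whose pixel columns cycle through the
    input views: column c comes from view (c mod N)."""
    n = len(views)
    rows = len(views[0])
    cols = len(views[0][0])
    return [[views[c % n][r][c] for c in range(cols)] for r in range(rows)]

# Four single-row 8-column "views", each a constant value, showing the
# minimum four-channel case mentioned above:
views = [[[v] * 8] for v in range(4)]
panel = interleave_columns(views)
assert panel == [[0, 1, 2, 3, 0, 1, 2, 3]]
```

Real flat-panel interleaving typically operates at sub-pixel granularity and must account for the exact lens pitch; the whole-column mapping here only illustrates the principle.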
- While the present invention has been described in conjunction with the illustrated embodiments, the invention is not limited thereto but only by the scope and spirit of the appended claims.
Claims (10)
1. An autostereoscopic display system, comprising:
a lenticular lens display screen having a front surface and a rear surface, the lenticular lens display screen projecting a plurality of views of a scene from the front surface of the display screen;
a plurality of video projectors disposed to the rear of the lenticular lens display screen, each of the video projectors focused on a convergence point on the rear surface of the lenticular lens display screen; and
a plurality of imaging computers driving the video projectors, memories of each of the imaging computers storing a scene to be displayed on the lenticular lens display screen, each imaging computer rendering the scene from one or more viewpoints preselectable to be different from other ones of the viewpoints, each projector projecting an image from a respective one of the viewpoints.
2. An autostereoscopic display system, comprising:
a lenticular lens display screen having a front surface and a rear surface, multiple viewpoints of a scene visible to a viewer in front of the front surface of the screen;
a plurality of video projectors disposed to the rear of the screen, each of the video projectors focused on a convergence point on the rear surface of the screen;
a plurality of client imaging computers driving the video projectors, memories of each of the client imaging computers storing the scene to be displayed, object imaging data used to render animated objects within the scene, and a plurality of the viewpoints from which the scene is to be rendered; and
a rendering server having a memory for storing animation sequencing instructions, the rendering server coupled to each of the imaging computers for communicating the sequencing instructions to the client imaging computers at a time after the storing, by the client imaging computers, of the scene and the object imaging data, each of the client imaging computers rendering the scene from one or more of the stored viewpoints responsive to the sequencing instructions and causing the projectors to project respective images from respective ones of the viewpoints.
3. An autostereoscopic display system, comprising:
at least one flat panel display;
a lenticular lens positioned on the flat panel display;
more than two video channels being received by the flat panel display, more than two viewpoints of an imaged scene being transmitted by respective ones of the video channels, the flat panel display and lenticular lens permitting a viewer to view different ones of the viewpoints from different positions relative to the lenticular lens; and
for each viewpoint, a virtual camera coupled to the flat panel display for transmitting thereto a respective channel of video data, the virtual camera rendering the imaged scene from a respective viewpoint.
4. The autostereoscopic display system of claim 3, wherein the virtual cameras are logical partitions of one or more imaging computers.
5. An autostereoscopic display system, comprising:
a plurality of flat panel displays together forming a video wall;
a lenticular lens positioned on each flat panel display;
a plurality of video channels being received by the flat panel displays, a plurality of viewpoints of an imaged scene being transmitted by respective ones of the video channels, the flat panel displays and lenticular lenses permitting a viewer to view different ones of the viewpoints from different positions relative to the lenticular lenses; and
for each viewpoint, a virtual camera coupled to the flat panel displays for transmitting thereto a respective channel of video data, the virtual camera rendering the imaged scene from a respective viewpoint, a contiguous set of the virtual camera viewpoints projected onto the video wall.
6. An autostereoscopic display system, comprising:
at least first and second autostereoscopic display devices having characteristics which are different from each other;
a plurality of virtual cameras coupled to each of the autostereoscopic display devices, each virtual camera rendering a scene from a preselected viewpoint;
for each autostereoscopic display device, images of a scene appearing thereon being viewable from different ones of the viewpoints depending on the position of a viewer relative to the display device,
a plurality of client imaging computers programmed to instantiate the virtual cameras, each client imaging computer storing the scene to be displayed, object imaging data to render animated objects within the scene, and at least one of the viewpoints from which the scene is to be rendered; and
a rendering server having a memory for storing animation sequencing instructions, the rendering server coupled to each of the imaging computers for communicating the sequencing instructions to the client imaging computers at a time after the storing, by the client imaging computers, of the scene and the object imaging data, each of the client computers implementing one or more of the virtual cameras to render the scene from one or more of the stored viewpoints responsive to the sequencing instructions and causing the autostereoscopic display devices to display images from the viewpoints of the respective virtual cameras.
7. The system of claim 6, wherein the virtual cameras are logical partitions of one or more imaging computers.
8. The system of claim 6, in which at least one of the autostereoscopic display devices is a flat panel display on which has been positioned a lenticular lens array, at least one other of the autostereoscopic devices not including a flat panel display.
9. An autostereoscopic display system, comprising:
an autostereoscopic display device displaying more than two different viewpoints of an imaged scene; and
more than two virtual cameras coupled to the display device for supplying respective channels of video data corresponding to said more than two different viewpoints, said more than two virtual cameras being comprised of a single central processor unit and a single graphics processor card coupled to the central processor unit with more than two video output ports, each port outputting a channel of video data to the display device.
10. The autostereoscopic display system of claim 9, wherein the plurality of video channels is at least eight and the plurality of viewpoints is at least eight.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/025,296 US20080129819A1 (en) | 2001-08-02 | 2008-02-04 | Autostereoscopic display system |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/921,090 US6803912B1 (en) | 2001-08-02 | 2001-08-02 | Real time three-dimensional multiple display imaging system |
US10/955,339 US20050062678A1 (en) | 2001-08-02 | 2004-09-24 | Autostereoscopic display system |
US12/025,296 US20080129819A1 (en) | 2001-08-02 | 2008-02-04 | Autostereoscopic display system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/955,339 Continuation US20050062678A1 (en) | 2001-08-02 | 2004-09-24 | Autostereoscopic display system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080129819A1 true US20080129819A1 (en) | 2008-06-05 |
Family
ID=25444900
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/921,090 Expired - Fee Related US6803912B1 (en) | 2001-08-02 | 2001-08-02 | Real time three-dimensional multiple display imaging system |
US10/955,339 Abandoned US20050062678A1 (en) | 2001-08-02 | 2004-09-24 | Autostereoscopic display system |
US12/025,296 Abandoned US20080129819A1 (en) | 2001-08-02 | 2008-02-04 | Autostereoscopic display system |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/921,090 Expired - Fee Related US6803912B1 (en) | 2001-08-02 | 2001-08-02 | Real time three-dimensional multiple display imaging system |
US10/955,339 Abandoned US20050062678A1 (en) | 2001-08-02 | 2004-09-24 | Autostereoscopic display system |
Country Status (4)
Country | Link |
---|---|
US (3) | US6803912B1 (en) |
AU (1) | AU2002355849A1 (en) |
GB (1) | GB2396281B (en) |
WO (1) | WO2003012490A2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060012675A1 (en) * | 2004-05-10 | 2006-01-19 | University Of Southern California | Three dimensional interaction with autostereoscopic displays |
US20100042614A1 (en) * | 2008-08-11 | 2010-02-18 | Jeremy Selan | Deferred 3-d scenegraph processing |
US20100283839A1 (en) * | 2009-05-07 | 2010-11-11 | Chunfeng Liu | Stereoscopic display apparatus and method and stereoscopic display wall |
US20110202914A1 (en) * | 2010-02-12 | 2011-08-18 | Samsung Electronics Co., Ltd. | Method and system for installing applications |
US20120212510A1 (en) * | 2011-02-22 | 2012-08-23 | Xerox Corporation | User interface panel |
US20130293547A1 (en) * | 2011-12-07 | 2013-11-07 | Yangzhou Du | Graphics rendering technique for autostereoscopic three dimensional display |
US20160094837A1 (en) * | 2014-09-30 | 2016-03-31 | 3DOO, Inc. | Distributed stereoscopic rendering for stereoscopic projecton and display |
Families Citing this family (51)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6722888B1 (en) * | 1995-01-20 | 2004-04-20 | Vincent J. Macri | Method and apparatus for tutorial, self and assisted instruction directed to simulated preparation, training and competitive play and entertainment |
US6803912B1 (en) * | 2001-08-02 | 2004-10-12 | Mark Resources, Llc | Real time three-dimensional multiple display imaging system |
US20030095681A1 (en) * | 2001-11-21 | 2003-05-22 | Bernard Burg | Context-aware imaging device |
US20030140073A1 (en) * | 2001-12-18 | 2003-07-24 | Todd Wostrel | Data table input and real-time dynamic display on a handheld device |
EP1333376A1 (en) * | 2002-02-05 | 2003-08-06 | Fulvio Dominici | Encoding method for efficient storage, transmission and sharing of multidimensional virtual worlds |
US7734085B2 (en) * | 2002-06-28 | 2010-06-08 | Sharp Kabushiki Kaisha | Image data delivery system, image data transmitting device thereof, and image data receiving device thereof |
JP3744002B2 (en) * | 2002-10-04 | 2006-02-08 | ソニー株式会社 | Display device, imaging device, and imaging / display system |
SE0401682D0 (en) * | 2004-06-30 | 2004-06-30 | Saab Automobile | Graphical Instrument Panel |
WO2006017198A2 (en) * | 2004-07-08 | 2006-02-16 | Actuality Systems, Inc. | Architecture for rendering graphics on output devices |
US7173619B2 (en) * | 2004-07-08 | 2007-02-06 | Microsoft Corporation | Matching digital information flow to a human perception system |
US7711681B2 (en) | 2004-11-05 | 2010-05-04 | Accenture Global Services Gmbh | System for distributed information presentation and interaction |
WO2006053271A1 (en) | 2004-11-12 | 2006-05-18 | Mok3, Inc. | Method for inter-scene transitions |
TW200634524A (en) * | 2005-03-16 | 2006-10-01 | Coretronic Corp | Intelligent auto-turn on/off module and intelligent auto-turn on/off method |
US7471292B2 (en) * | 2005-11-15 | 2008-12-30 | Sharp Laboratories Of America, Inc. | Virtual view specification and synthesis in free viewpoint |
US8209620B2 (en) | 2006-01-31 | 2012-06-26 | Accenture Global Services Limited | System for storage and navigation of application states and interactions |
US8279168B2 (en) * | 2005-12-09 | 2012-10-02 | Edge 3 Technologies Llc | Three-dimensional virtual-touch human-machine interface system and method therefor |
KR101187787B1 (en) * | 2006-02-18 | 2012-10-05 | 삼성전자주식회사 | Method and apparatus for searching moving picture using key frame |
US20070260675A1 (en) * | 2006-05-08 | 2007-11-08 | Forlines Clifton L | Method and system for adapting a single-client, single-user application to a multi-user, multi-client environment |
ES2319592B1 (en) * | 2006-09-25 | 2010-01-29 | Insca Internacional, S.L. | VIRTUAL SIMULATION SYSTEM OF SPACES, DECORATED AND SIMILAR. |
US7880739B2 (en) * | 2006-10-11 | 2011-02-01 | International Business Machines Corporation | Virtual window with simulated parallax and field of view change |
US9047123B2 (en) | 2007-06-25 | 2015-06-02 | International Business Machines Corporation | Computing device for running computer program on video card selected based on video card preferences of the program |
US9047040B2 (en) * | 2007-06-25 | 2015-06-02 | International Business Machines Corporation | Method for running computer program on video card selected based on video card preferences of the program |
KR101382618B1 (en) * | 2007-08-21 | 2014-04-10 | 한국전자통신연구원 | Method for making a contents information and apparatus for managing contens using the contents information |
CN101488079B (en) * | 2008-01-14 | 2011-08-24 | 联想(北京)有限公司 | Method for processing operation command in computer and computer thereof |
US8253728B1 (en) * | 2008-02-25 | 2012-08-28 | Lucasfilm Entertainment Company Ltd. | Reconstituting 3D scenes for retakes |
US20090219381A1 (en) * | 2008-03-03 | 2009-09-03 | Disney Enterprises, Inc., A Delaware Corporation | System and/or method for processing three dimensional images |
JP2009273865A (en) * | 2008-04-17 | 2009-11-26 | Konami Digital Entertainment Co Ltd | Game program, game machine, and game control method |
US20100141552A1 (en) * | 2008-12-04 | 2010-06-10 | Andrew Rodney Ferlitsch | Methods and Systems for Imaging Device and Display Interaction |
US20100238161A1 (en) * | 2009-03-19 | 2010-09-23 | Kenneth Varga | Computer-aided system for 360º heads up display of safety/mission critical data |
US20100309290A1 (en) * | 2009-06-08 | 2010-12-09 | Stephen Brooks Myers | System for capture and display of stereoscopic content |
EP2261827B1 (en) * | 2009-06-10 | 2015-04-08 | Dassault Systèmes | Process, program and apparatus for displaying an assembly of objects of a PLM database |
US20100321382A1 (en) | 2009-06-18 | 2010-12-23 | Scalable Display Technologies, Inc. | System and method for injection of mapping functions |
US9728006B2 (en) | 2009-07-20 | 2017-08-08 | Real Time Companies, LLC | Computer-aided system for 360° heads up display of safety/mission critical data |
TWI407773B (en) * | 2010-04-13 | 2013-09-01 | Nat Univ Tsing Hua | Method and system for providing three dimensional stereo image |
WO2011149558A2 (en) | 2010-05-28 | 2011-12-01 | Abelow Daniel H | Reality alternate |
US20140006152A1 (en) * | 2011-12-09 | 2014-01-02 | Alexander D. Wissner-Gross | Providing a Proximity Triggered Response in a Video Display |
EP2801029A1 (en) * | 2012-01-06 | 2014-11-12 | Aselsan Elektronik Sanayi ve Ticaret Anonim Sirketi | Distributed image generation system |
JP5572647B2 (en) * | 2012-02-17 | 2014-08-13 | 任天堂株式会社 | Display control program, display control device, display control system, and display control method |
WO2013126868A1 (en) * | 2012-02-23 | 2013-08-29 | Jadhav Ajay | Persistent node framework |
DE102013201377A1 (en) | 2013-01-29 | 2014-07-31 | Bayerische Motoren Werke Aktiengesellschaft | Method and apparatus for processing 3d image data |
US9606738B2 (en) * | 2014-03-10 | 2017-03-28 | Kabushiki Kaisha Toshiba | Memory system with a bridge part provided between a memory and a controller |
JP6468757B2 (en) * | 2014-08-25 | 2019-02-13 | 株式会社ミツトヨ | 3D model generation method, 3D model generation system, and 3D model generation program |
KR101609368B1 (en) * | 2014-10-08 | 2016-04-05 | 주식회사 맥키스컴퍼니 | Multi-screen synchronization system for realtime 3D image |
KR101695931B1 (en) * | 2016-10-25 | 2017-01-12 | 오재영 | Image apparatus for multi-screens |
CN111406412B (en) | 2017-04-11 | 2021-09-03 | 杜比实验室特许公司 | Layered enhanced entertainment experience |
CN111194550B (en) * | 2017-05-06 | 2021-06-08 | 北京达佳互联信息技术有限公司 | Processing 3D video content |
CN110163943B (en) * | 2018-11-21 | 2024-09-10 | 深圳市腾讯信息技术有限公司 | Image rendering method and device, storage medium and electronic device |
TWI709076B (en) * | 2019-05-31 | 2020-11-01 | 技嘉科技股份有限公司 | Motherboard outputting image data and operation system |
CN111612919A (en) * | 2020-06-19 | 2020-09-01 | 中国人民解放军国防科技大学 | Multidisciplinary split-screen synchronous display method and system of digital twin aircraft |
WO2024166379A1 (en) * | 2023-02-10 | 2024-08-15 | 日本電信電話株式会社 | Delivery control system, delivery control device, delivery control method, and program |
CN116681869B (en) * | 2023-06-21 | 2023-12-19 | 西安交通大学城市学院 | Cultural relic 3D display processing method based on virtual reality application |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5495576A (en) * | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
US5564000A (en) * | 1993-03-01 | 1996-10-08 | Halpern Software, Inc. | Method and apparatus for viewing three dimensional objects |
US5673145A (en) * | 1995-03-24 | 1997-09-30 | Wilson; Robert M. | Rear projection screen multi-panel connection system |
US5990972A (en) * | 1996-10-22 | 1999-11-23 | Lucent Technologies, Inc. | System and method for displaying a video menu |
US6057898A (en) * | 1997-08-29 | 2000-05-02 | Kabushiki Kaisha Toshiba | Multipanel liquid crystal display device having a groove on the edge surface |
US6072478A (en) * | 1995-04-07 | 2000-06-06 | Hitachi, Ltd. | System for and method for producing and displaying images which are viewed from various viewpoints in local spaces |
US6256061B1 (en) * | 1991-05-13 | 2001-07-03 | Interactive Pictures Corporation | Method and apparatus for providing perceived video viewing experiences using still images |
US6282455B1 (en) * | 1998-10-19 | 2001-08-28 | Rockwell Technologies, Llc | Walk-through human/machine interface for industrial control |
US6329994B1 (en) * | 1996-03-15 | 2001-12-11 | Zapa Digital Arts Ltd. | Programmable computer graphic objects |
US20020118194A1 (en) * | 2001-02-27 | 2002-08-29 | Robert Lanciault | Triggered non-linear animation |
US6481849B2 (en) * | 1997-03-27 | 2002-11-19 | Litton Systems, Inc. | Autostereo projection system |
US6496598B1 (en) * | 1997-09-02 | 2002-12-17 | Dynamic Digital Depth Research Pty. Ltd. | Image processing method and apparatus |
US20040012594A1 (en) * | 2002-07-19 | 2004-01-22 | Andre Gauthier | Generating animation data |
US6798409B2 (en) * | 2000-02-07 | 2004-09-28 | British Broadcasting Corporation | Processing of images for 3D display |
US6803912B1 (en) * | 2001-08-02 | 2004-10-12 | Mark Resources, Llc | Real time three-dimensional multiple display imaging system |
US6922201B2 (en) * | 2001-12-05 | 2005-07-26 | Eastman Kodak Company | Chronological age altering lenticular image |
US6943788B2 (en) * | 2001-10-10 | 2005-09-13 | Samsung Electronics Co., Ltd. | Three-dimensional image display apparatus |
-
2001
- 2001-08-02 US US09/921,090 patent/US6803912B1/en not_active Expired - Fee Related
-
2002
- 2002-08-01 AU AU2002355849A patent/AU2002355849A1/en not_active Abandoned
- 2002-08-01 GB GB0404346A patent/GB2396281B/en not_active Expired - Fee Related
- 2002-08-01 WO PCT/US2002/024434 patent/WO2003012490A2/en not_active Application Discontinuation
-
2004
- 2004-09-24 US US10/955,339 patent/US20050062678A1/en not_active Abandoned
-
2008
- 2008-02-04 US US12/025,296 patent/US20080129819A1/en not_active Abandoned
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6256061B1 (en) * | 1991-05-13 | 2001-07-03 | Interactive Pictures Corporation | Method and apparatus for providing perceived video viewing experiences using still images |
US5495576A (en) * | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
US5564000A (en) * | 1993-03-01 | 1996-10-08 | Halpern Software, Inc. | Method and apparatus for viewing three dimensional objects |
US5673145A (en) * | 1995-03-24 | 1997-09-30 | Wilson; Robert M. | Rear projection screen multi-panel connection system |
US6072478A (en) * | 1995-04-07 | 2000-06-06 | Hitachi, Ltd. | System for and method for producing and displaying images which are viewed from various viewpoints in local spaces |
US6331861B1 (en) * | 1996-03-15 | 2001-12-18 | Gizmoz Ltd. | Programmable computer graphic objects |
US6329994B1 (en) * | 1996-03-15 | 2001-12-11 | Zapa Digital Arts Ltd. | Programmable computer graphic objects |
US5990972A (en) * | 1996-10-22 | 1999-11-23 | Lucent Technologies, Inc. | System and method for displaying a video menu |
US6481849B2 (en) * | 1997-03-27 | 2002-11-19 | Litton Systems, Inc. | Autostereo projection system |
US6057898A (en) * | 1997-08-29 | 2000-05-02 | Kabushiki Kaisha Toshiba | Multipanel liquid crystal display device having a groove on the edge surface |
US6496598B1 (en) * | 1997-09-02 | 2002-12-17 | Dynamic Digital Depth Research Pty. Ltd. | Image processing method and apparatus |
US6282455B1 (en) * | 1998-10-19 | 2001-08-28 | Rockwell Technologies, Llc | Walk-through human/machine interface for industrial control |
US6798409B2 (en) * | 2000-02-07 | 2004-09-28 | British Broadcasting Corporation | Processing of images for 3D display |
US20020118194A1 (en) * | 2001-02-27 | 2002-08-29 | Robert Lanciault | Triggered non-linear animation |
US6803912B1 (en) * | 2001-08-02 | 2004-10-12 | Mark Resources, Llc | Real time three-dimensional multiple display imaging system |
US6943788B2 (en) * | 2001-10-10 | 2005-09-13 | Samsung Electronics Co., Ltd. | Three-dimensional image display apparatus |
US6922201B2 (en) * | 2001-12-05 | 2005-07-26 | Eastman Kodak Company | Chronological age altering lenticular image |
US20040012594A1 (en) * | 2002-07-19 | 2004-01-22 | Andre Gauthier | Generating animation data |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060012675A1 (en) * | 2004-05-10 | 2006-01-19 | University Of Southern California | Three dimensional interaction with autostereoscopic displays |
US7787009B2 (en) * | 2004-05-10 | 2010-08-31 | University Of Southern California | Three dimensional interaction with autostereoscopic displays |
US20100042614A1 (en) * | 2008-08-11 | 2010-02-18 | Jeremy Selan | Deferred 3-d scenegraph processing |
US8612485B2 (en) * | 2008-08-11 | 2013-12-17 | Sony Corporation | Deferred 3-D scenegraph processing |
US20100283839A1 (en) * | 2009-05-07 | 2010-11-11 | Chunfeng Liu | Stereoscopic display apparatus and method and stereoscopic display wall |
US20110202914A1 (en) * | 2010-02-12 | 2011-08-18 | Samsung Electronics Co., Ltd. | Method and system for installing applications |
US8935690B2 (en) * | 2010-02-12 | 2015-01-13 | Samsung Electronics Co., Ltd. | Method and system for installing applications |
US20120212510A1 (en) * | 2011-02-22 | 2012-08-23 | Xerox Corporation | User interface panel |
US9508160B2 (en) * | 2011-02-22 | 2016-11-29 | Xerox Corporation | User interface panel |
US20130293547A1 (en) * | 2011-12-07 | 2013-11-07 | Yangzhou Du | Graphics rendering technique for autostereoscopic three dimensional display |
CN103959340A (en) * | 2011-12-07 | 2014-07-30 | 英特尔公司 | Graphics rendering technique for autostereoscopic three dimensional display |
US20160094837A1 (en) * | 2014-09-30 | 2016-03-31 | 3DOO, Inc. | Distributed stereoscopic rendering for stereoscopic projecton and display |
Also Published As
Publication number | Publication date |
---|---|
GB0404346D0 (en) | 2004-03-31 |
GB2396281B (en) | 2005-12-21 |
US20050062678A1 (en) | 2005-03-24 |
US6803912B1 (en) | 2004-10-12 |
AU2002355849A1 (en) | 2003-02-17 |
WO2003012490A3 (en) | 2003-05-08 |
GB2396281A (en) | 2004-06-16 |
WO2003012490A2 (en) | 2003-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6803912B1 (en) | Real time three-dimensional multiple display imaging system | |
Gaitatzes et al. | Virtual reality systems and applications | |
EP3360029B1 (en) | Methods and systems of automatic calibration for dynamic display configurations | |
US20070070067A1 (en) | Scene splitting for perspective presentations | |
DeFanti et al. | The future of the CAVE | |
US7602395B1 (en) | Programming multiple chips from a command buffer for stereo image generation | |
CN101548277B (en) | The computer graphics system of multiple parallel processor | |
EP3219100A1 (en) | A system comprising multiple digital cameras viewing a large scene | |
Kuchera-Morin et al. | Immersive full-surround multi-user system design | |
CN101334891A (en) | Multichannel distributed plotting system and method | |
JP7550222B2 (en) | Virtual, augmented, and mixed reality systems and methods | |
CN115830199B (en) | XR technology-based ubiquitous training campus construction method, system and storage medium | |
Raffin et al. | Pc clusters for virtual reality | |
CN115423916A (en) | XR (X-ray diffraction) technology-based immersive interactive live broadcast construction method, system and medium | |
US6559844B1 (en) | Method and apparatus for generating multiple views using a graphics engine | |
Gaitatzes et al. | Media productions for a dome display system | |
Peterka et al. | Dynallax: solid state dynamic parallax barrier autostereoscopic VR display | |
Ogi et al. | Usage of video avatar technology for immersive communication | |
Zhang et al. | An interactive multiview 3D display system | |
Peake et al. | The virtual experiences portals—a reconfigurable platform for immersive visualization | |
CN112911260A (en) | Multimedia exhibition hall sand table projection display system | |
Soares et al. | PC clusters for virtual reality. | |
Grogorick et al. | Gaze and motion-aware real-time dome projection system | |
EP4070153A1 (en) | Virtual, augmented, and mixed reality systems and methods | |
Bettio et al. | Scalable rendering of massive triangle meshes on light field displays |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |