WO2001093209A1 - Procede de preparation d'un environnement de realite virtuelle - Google Patents

Procede de preparation d'un environnement de realite virtuelle

Info

Publication number
WO2001093209A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
virtual reality
concept
environment
user
Prior art date
Application number
PCT/CH2001/000341
Other languages
German (de)
English (en)
Inventor
Markus Gross
Oliver Staadt
Andreas Kunz
Markus Meier
Luc Van Gool
Maia Engeli
Original Assignee
Eidgenössische Technische Hochschule Zürich, Gruppe Für Graphische Datenverarbeitung, Institut Für Wissenschaftliches Rechnen,
Priority date
Filing date
Publication date
Application filed by Eidgenössische Technische Hochschule Zürich, Gruppe Für Graphische Datenverarbeitung, Institut Für Wissenschaftliches Rechnen
Priority to AU2001260002A priority Critical patent/AU2001260002A1/en
Publication of WO2001093209A1 publication Critical patent/WO2001093209A1/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects

Definitions

  • the invention relates to the field of virtual reality (VR). It relates in particular to a method, a computer program, a device and a facility according to the independent claims.
  • the first objective is to allow several spatially separated users to interact at a virtual meeting.
  • the second objective is the interaction between users and simulated VR objects.
  • Every user moves in a VR environment.
  • a VR environment can be formed, for example, by projecting an image containing virtual reality objects onto at least one wall of a room serving as a VR location. Instead of an entire room, the VR location can also be just a small cell that does not enclose the entire user. Instead of a projection onto a wall, a display on at least one screen can also be provided. Alternatively, a display field ("head-up display") mounted directly in front of the eyes can present the image. In each case, sensory impressions are generated.
  • a VR environment often also has one or more loudspeakers for reproducing sounds. Actuators can also be present that address other senses, such as the sense of touch or smell.
  • a VR environment also has cameras and other recording devices, on the basis of which a VR object representing the first user can be created for at least one second user remote from the first user. Furthermore, a VR environment has devices with which the user can influence the image section presented to him; for example, he can move through the rooms of a virtual building.
  • a VR object here is the sum of all sensory impressions conveyed to a user in a VR environment in order to represent an object. Such sensory impressions can be visual (and possibly perceived three-dimensionally), acoustic, haptic (relating to the sense of touch), olfactory, etc.
  • a VR device has at least one VR location. If there is more than one VR location, the VR device also has means of communication between the VR locations.
  • in addition to real objects, for example people who are at a distant location and confer with the user as VR objects, objects that do not exist in reality can also be VR objects. Examples are imaginary objects, graphic representations and the like, which exist only in electronic form, for example as a CAD representation of a technical object or a building, and which acquire something like a physical reality only as VR objects.
  • the scenery of a VR environment in which the users move will very often itself be a virtual object, for example a planned building.
  • a large number of virtual reality facilities already exist.
  • stand-alone facilities, in which at least one user can move through a virtual environment, and
  • collaborative facilities, in which, as an alternative or in addition, interaction between users located at separate locations is made possible.
  • Stand-alone VR devices allow, for example, walking through the rooms of a planned building that exists as CAD drawings.
  • Conventional collaborative VR devices usually aim to enable people staying at different locations to confer with one another: they can embed the image of a first user in a VR environment of a second user.
  • the image of the first user is projected onto a simple geometric element, for example a polygon, which is virtually placed at a location in a VR environment of the second user.
  • the image is made, for example, with the aid of a single video camera.
  • the video camera can be installed near the screen in the VR location of the first user.
  • a user should not only be able to perceive a projected image that describes a virtual reality, but should also be immersed in a complete VR environment.
  • Real-time conditions for acquisition and playback: in order to guarantee interaction between users, recordings relating to a user must be processed into a VR environment without perceptible delay.
  • a VR environment should be configured in such a way that it can easily be adapted by an application module to work on an application, without the processing speed suffering.
  • a concept data record is assigned to an object to be displayed in the VR environment of a first user and contains permanent or temporary properties of the object. It can serve both to influence and control the acquisition and processing of data, for example image data at the location of a real object, and to assist in rendering the object as a VR object.
  • the object is captured by primary data which contain at least one video recording.
  • video recording is understood to mean a moving, continuously updated image.
  • the concept data record is then taken into account when processing the primary data into secondary data.
  • secondary data are processed image data, the processing being carried out with regard to the representation as a VR object and to the transmission to the corresponding VR location. Only after the secondary data have been generated are they transmitted to the VR location of the first user, where the object is represented as a VR object and integrated into a VR environment.
  • the method according to the invention allows selective and therefore fast data acquisition and processing, which enables real-time playback.
  • the invention allows an integration of immersive reproduction and acquisition, which is explained in more detail using the examples below.
  • a first example of a concept data record according to the invention is a VR object position data record, which contains information about the position of the object in at least one VR environment of a first user and can thus be actively influenced by the first user.
  • the position data record can also contain further information relating to the VR environment of the first user and relevant for the representation of the object, such as information about the contrast. It is used when generating a secondary data set, namely, for example, an image data set and/or a geometry data set, from a primary data set of raw image data.
  • the image data record and / or the geometry data record is then transmitted to the VR location of the first user and processed there, likewise with the aid of the position data record, to form a VR object integrated in a VR environment.
  • the position data set is an example of a volatile concept data set, i.e. the position data are continuously replaced by updated position data.
  • the processing of the primary data record to the secondary data record can be carried out selectively: depending on the distance of the VR object from the user, the secondary data record, for example, only has to have a low image resolution.
  • a so-called image-based rendering algorithm can also be used. Of particular importance is a light field approach. This is based on a suitable approximation of the plenoptic function.
  • the plenoptic function (cf. for example L. McMillan and G. Bishop, SIGGRAPH 95 Conference Proceedings, p. 39) is the function that assigns to every point in space, as a function of time, all light radiation received at this point, including frequency, directional information and intensity. It therefore contains image information about all objects in space, seen from all possible directions.
  • the light field function is an approximation of the plenoptic function. It takes into account that only that light radiation is essential which comes from a certain region (in which the relevant object is located) and arrives at certain locations, namely at observation planes. With a known light field function, image generation is therefore merely an evaluation of the light field function with suitably fixed parameters such as viewing angle and position. A finite set of video recordings can be viewed as a real-time approximation of the object's light field function.
  • in order for the object to be reproduced as realistically as possible under different possible observer positions, the light field function must be approximated relatively precisely, i.e. a large number of possible viewing angles must be covered and a good resolution must be achieved. The large amount of data resulting from this requirement has hitherto prevented the provision of an immersive real-time VR device.
  • the light field function can be approximated selectively. A subset can be selected from a large number of raw camera view data and processed into an image data set that is sufficient for the viewer to see. Because of this selectivity, a secondary data set can have high-quality image data adapted to the situation and still satisfy the condition that the amount of data has to be drastically reduced in order to transfer the light field data from one VR location to the other.
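The selective light-field approximation described above can be illustrated with a minimal sketch. This is not the patent's implementation: the camera layout (evenly spaced around a cylindrical VR location) and the function name `select_cameras` are assumptions made for illustration; the point is only that the position data record reduces the many raw camera views to a small subset near the observer's current viewing angle.

```python
import math

def select_cameras(camera_angles, view_angle, k=4):
    """Pick the k cameras whose directions are closest to the observer's
    viewing angle (all angles in radians). The returned indices are sorted
    by angular distance, so the best views are processed first when
    building the secondary data set."""
    def angular_distance(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)
    ranked = sorted(range(len(camera_angles)),
                    key=lambda i: angular_distance(camera_angles[i], view_angle))
    return ranked[:k]

# 12 cameras evenly spaced around a cylindrical VR location (an assumption)
cameras = [i * 2 * math.pi / 12 for i in range(12)]
best = select_cameras(cameras, view_angle=0.1, k=4)
```

Only the raw data of the selected cameras would then be processed into the image data set, which is what makes the drastic data reduction possible.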
  • the VR object is at the same time a second user, who in turn is in a VR location and is in a VR environment in which he can see the first user, i.e. in which a VR object representing this first user can be present.
  • in addition to the position data record of the second user, there will advantageously also be a position data record of the first user, which is used in an analogous manner when generating a VR object representing the first user.
  • a second, supplementary example of a concept data set according to the invention enables the light field to be enriched with information about the geometry of the object.
  • the concept data set contains information about the geometry of the object. Since the geometry of a moving object also has certain constants, for example dimensions such as arm length and arm thickness of a person, etc., such a concept data set can be volatile or only partially volatile. Geometry information can be present, for example, as a so-called visual hull (cf. A. Laurentini, IEEE Transactions PAMI 16 (2), pp. 150-162 (1994)) of the object.
  • the visual hull of an object is related to its convex hull, but may contain more information.
  • the visual hull is obtained as follows: in a first step, several silhouettes of the object are extracted from different directions in an environment of the object. This is done, for example, with the chroma-keying technique.
  • for this, the background is illuminated with a color, for example blue, that the object does not have.
  • the silhouette is then extracted from a picture of the object and the background on the basis of the color information. If such an (essentially two-dimensional) silhouette is connected to the (point-like) recording device, a cone is created within which the object is certainly located. The surface of the intersection of several such cones yields a surface approximating the object surface: the visual hull.
  • a stereo projection calculation can also be carried out.
  • this surface, which serves as concept data information, can be used to create a geometry-enriched light field approximation of the object from a primary image data set.
  • a view can be, for example, the view of the object (including color information, texture) seen from a recording angle determined by the position data record, together with the depth of the object from this point of view.
  • this depth information, available in addition to the image recordings necessary for a realistic 3D projection, can be used to correctly reproduce the shadow cast by the object, certain interactions, restrictions on freedom of movement, etc.
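The intersection-of-cones construction above can be shown in a drastically simplified sketch. Here the perspective silhouette cones are replaced by orthographic projections along three axes, so the "cones" become prisms; the function name, the 0/1 silhouette encoding and the cubic grid are assumptions made for illustration, not the patent's method.

```python
def visual_hull(silhouettes, size):
    """Carve a cubic voxel grid of edge length `size` using three
    orthographic silhouettes (along x, y and z). A voxel survives only
    if its projection lies inside every silhouette -- the discrete
    analogue of intersecting the silhouette cones.

    silhouettes: dict with keys 'x', 'y', 'z'; each value is a 2D list
    of 0/1 entries (1 = inside the silhouette)."""
    hull = set()
    for i in range(size):
        for j in range(size):
            for k in range(size):
                # project the voxel along each axis and test the silhouette
                if (silhouettes['x'][j][k]
                        and silhouettes['y'][i][k]
                        and silhouettes['z'][i][j]):
                    hull.add((i, j, k))
    return hull

# toy example: 2x2x2 grid, the object occupies only the cell (0, 0, 0)
s = [[1, 0], [0, 0]]
hull = visual_hull({'x': s, 'y': s, 'z': s}, size=2)
```

With perspective cameras the per-voxel test would project through the camera center instead of along an axis, but the intersection principle is the same.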
  • a third example for a concept data record is a property library from which data for information processing are selected depending on the context.
  • the properties library contains properties of a variety of objects, along with a context label.
  • such a properties library takes into account the fact that which properties of an object, such as texture, color, reflection properties, shape, or also mass, hardness, generated noises, etc., are relevant at all differs fundamentally depending on the type of the object and its actions.
  • Image processing can differ at a basic level, depending on whether the object is a person or a building, a man or a woman, young or old, a head or a hand.
  • a property library is preferably not just a database, but has entries, hereinafter called cells, which, in addition to data, can also contain program codes, references to subroutines, etc.
  • a specific cell can be selected by a recognition algorithm running in the background, for example during image processing. If a context has been assigned to the object, it receives a context pointer which points to the cell(s) corresponding to the object and accompanies the object. In the event that the image data of the object do not correspond to the patterns provided in the cell, it can also be provided that the cell supplements itself independently. To do this, a search algorithm is started that searches for matching patterns in other cells. If a match is found in another cell, the cell receives a corresponding reference to that other cell.
  • a properties library can renew itself, and artificial intelligence approaches can also be implemented.
  • a properties library also allows haptic interactions to be implemented in VR environments, provided, of course, that the necessary hardware components are available.
  • a property library is an example of non-volatile, possibly additive context information.
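One possible shape of such a property library, sketched under assumptions: the class names `Cell` and `PropertyLibrary`, the context labels and the dictionary-based pattern lookup are illustrative, not taken from the patent. The sketch only mirrors the described mechanics: cells holding data plus subroutine references, context pointers, and self-extension by cross-referencing other cells.

```python
class Cell:
    """One entry of the property library: data plus optional behaviour."""
    def __init__(self, context, properties, renderer=None):
        self.context = context          # context label, e.g. "person/hand"
        self.properties = properties    # texture, hardness, noises, ...
        self.renderer = renderer        # optional reference to a subroutine
        self.see_also = []              # cross-references added by the search step

class PropertyLibrary:
    def __init__(self):
        self._cells = {}

    def add(self, cell):
        self._cells[cell.context] = cell

    def context_pointer(self, context):
        """Return the cell for a recognised context, or None."""
        return self._cells.get(context)

    def link_matching(self, context, pattern_key):
        """If a cell lacks a pattern, search the other cells for it and
        store a cross-reference (the self-extension described above)."""
        cell = self._cells[context]
        if pattern_key in cell.properties:
            return
        for other in self._cells.values():
            if other is not cell and pattern_key in other.properties:
                cell.see_also.append(other.context)

lib = PropertyLibrary()
lib.add(Cell("person/hand", {"texture": "skin"}))
lib.add(Cell("person/head", {"texture": "skin", "hair": "dark"}))
lib.link_matching("person/hand", "hair")
```

A recognition algorithm would set the context label; here it is supplied by hand for illustration.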
  • Tracking: when determining position data according to the first example, the problem of so-called "tracking" arises: depending on the VR location of the first user, tools are necessary to determine where the user is in his VR environment, in which direction he is looking, etc.
  • "Tracking" according to the prior art works, for example, with alternating magnetic fields. Three local transmitters produce pairwise orthogonal alternating magnetic fields. The field strengths are detected by three receiving antennas, likewise pairwise orthogonal, attached for example to the head of a user. An evaluation of the data provides both position and orientation information.
  • the disadvantage of this method is the susceptibility to faults, for example in the presence of metallic objects.
  • a concept data record according to the invention, which is based, for example, on a properties library, permits tracking directly on the basis of image data.
  • a VR location or an object can have reference marks, for example, which are marked as such by a context pointer, which enable tracking and are eliminated when the VR object is reproduced.
  • pattern recognition can also be used ("self-calibrating tracking").
  • an interface is preferably provided for a so-called ABIT (“application building interface toolkit”).
  • This is, for example, a group of computer programs and / or subroutines written in a higher programming language and data which have the object-specific information.
  • the ABIT thus provides, so to speak, the link between the software basis and the application.
  • an ABIT contains VR objects such as buildings, tools, etc.
  • the ABIT can now have concept data in the above sense with contextual designation in addition to geometry information about objects. This concept data may relate both to the VR objects contained in the ABIT and to other VR objects, e.g. on the VR object representing a user.
  • Application example 1: the product development environment.
  • a VR device can be used for the development of products, for example in the machine industry, by a group of developers, not all of whom work in the same place.
  • product development is a parallel process, and the direct interaction of developers of different product components benefits both product quality and the development dynamics and speed.
  • an ABIT provides an interface through which the VR device can be expanded into a virtual product development environment. Development units located anywhere in the world can be linked into a common virtual development space.
  • Application example 2: architecture.
  • Application example 3: medicine. The invention can make it possible, for example, to involve a specialist who is located at a great distance from the operating room during an operation.
  • the specialist looks virtually over the shoulder of the operator or even intervenes himself.
  • another application could be that the VR object to be recorded is the inside of a body and that doctors or even the patient himself can walk through and look at the bloodstream, the digestive tract, etc. from the inside. Recording techniques with microscopic cameras are already available. "The hunt for grandfather's kidney stone" could become reality in the not too distant future.
  • Application example 4: feature films.
  • the use of the method according to the invention, for example together with an ABIT, for the shooting of feature films is an example of the fact that the method can also be useful in stand-alone versions.
  • the processing of the primary into secondary data and/or the generation of a VR object from the secondary data does not necessarily take place simultaneously with the recording, but, for example, only during editing.
  • the use of VR raw data results in a new dimension of editing: not only scenes, but also shooting angles etc. can still be actively designed by the director and the editor after the scenes have been shot.
  • the recordings can be made against a blue background (“blue screening”), which background can be simply filtered out when the object is integrated into an environment.
  • the method according to the invention additionally allows the use of depth information and therefore the introduction of 3D blue screening; the VR location in which the object, for example an actor, is recorded functions as a dynamic 3D scanner within the scope of this use.
  • Application example 5: games. The VR environment can be an environment in which different players can move as users and encounter computer-generated objects.
  • the method according to the invention now also allows a player and his movements to be reproduced in real time in the surroundings of another player, which enables interaction between the players.
  • FIG. 1 shows a schematic view of a VR device with two VR locations with one user each, each user appearing in the VR environment of the other user as a VR object,
  • FIG. 2 shows a diagram of an image processing method using a position data record
  • FIG. 4 shows a diagram of a data transmission method for controlling the connection of two VR locations and for the efficient transmission of VR objects.
  • FIG. 1 shows two VR locations 1, 2 in which a user 3, 4 is located and which are connected to one another by communication means 30.
  • Each VR location 1, 2 has at least one camera and preferably a plurality of cameras 10, which are distributed in such a way that as many different viewing angles as possible are covered by them.
  • each VR location 1, 2 is designed so that its wall has the shape of a vertical cylinder. The cameras are then distributed around the circumference at a height of, for example, between one and two meters.
  • each VR location 1, 2 has at least one and, for example, two projectors 20 with which a VR environment can be projected onto the wall. It can be provided that the three colors red, green and blue are projected from separate sources 21, 22 and 23, respectively.
  • each VR location also has means 40 for active illumination and data processing means 50. Furthermore, there is a communication and/or synchronization device 60 for communication and/or for the transmission of synchronization signals between the data processing means 50 on the one hand and the cameras 10, the projectors 20 and the active illumination 40 on the other hand.
  • the communication means 30 comprise a conventional data transmission line as well as data processing means for encoding and decoding according to existing data transmission protocols.
  • a VR environment is created for the user 3 in the VR location by images projected onto the wall. This VR environment can have a multiplicity of VR objects, the second user 4 being one of these objects.
  • the location 1 now has means with which the user 3 can move through his VR environment. Moving through the VR environment means that the environment is gradually changed so that the user has the impression of moving around in this environment.
  • such means for locomotion in the VR environment can, in a simple case, be a "joystick"-like device.
  • the cameras record primary data of the object, i.e. of the user 3.
  • the primary data recorded in this way represent snapshots and are volatile, which means that they are continuously replaced by new primary data.
  • the means 40 provide active illumination of the object.
  • the data reproduction for user 3 takes place, for example, in two sequences.
  • the user 3 also wears shutter glasses (not shown) which allow the view available to each eye to be blocked individually, i.e. each of the two lenses can be switched to dark individually.
  • the communication and / or synchronization device 60 between the shutter glasses as well as the projectors 20 and the data processing means 50 allows the switching process of the glasses to be controlled by the data processing means.
  • the projectors 20 then alternately project the images intended for the right and the left eye onto the wall, the shutter glasses switching the left lens to dark during the projection of the image for the right eye, and vice versa. By selecting a sufficiently high clock frequency, e.g. 60 Hz, a 3D impression can be created.
  • another way of generating a 3D impression is also possible, for example via different polarization directions of the light in combination with glasses with polarization filters.
  • dispensing with a 3D impression is also conceivable, the reproduction then corresponding, for example, to a simple film projection, at best on an all-round screen.
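The frame-sequential clocking described above can be simulated in a few lines. This is a timing sketch only, driving no projector or shutter-glasses hardware; the function name, the event-dictionary layout and the choice of starting with the right eye are assumptions made for illustration.

```python
def stereo_schedule(n_frames, clock_hz=60):
    """Simulate frame-sequential stereo clocking: the projector
    alternates the right-eye and left-eye images while the opposite
    shutter lens is switched to dark, at e.g. 60 Hz."""
    events = []
    period = 1.0 / clock_hz
    for i in range(n_frames):
        eye = 'right' if i % 2 == 0 else 'left'
        dark = 'left' if eye == 'right' else 'right'
        events.append({'t': i * period, 'project': eye, 'shutter_dark': dark})
    return events

events = stereo_schedule(4)
```

In a real device, the communication and/or synchronization device 60 would distribute these switching instants to the projectors and glasses.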
  • the data processing means then process the primary image data into a secondary image data record. Position data transmitted by the second VR location 2 are taken into account.
  • audio signals are of course also recorded and reproduced.
  • touch sensors with which, for example, the movement of individual fingers is recorded.
  • heat sensors, chemical sensors, etc. can also be used.
  • actuators arranged close to the skin, with which an impression of contact or even a counterforce against movements can be conveyed.
  • the two users 3 and 4 are each completely immersed in a VR environment.
  • the fact that both users simultaneously acquire and play back real-time data enables the VR environments of both users to contain the same object in which they move together (e.g. a building).
  • the users can communicate and, depending on that, manipulate other VR objects.
  • FIG. 2 also shows a diagram of the image data processing as it takes place in a device such as that described above if the concept data record contains position data.
  • the primary image data are the raw data, for example pixel-wise (PAL, NTSC or comparable format), as they are delivered by the cameras.
  • An image secondary data record is generated on the basis of this primary data.
  • VR position data is used. In the example described, these include the position (distance, orientation) of the first user as a VR object in the VR environment of the second user.
  • a selection is made of, for example, four video signals which, based on the position data, best reflect the current viewing angle.
  • progressive compression can also be provided in a hierarchical coarse-to-fine structure, similar to the one already available in the Internet standard format GIF for pure image information: first an image with only a few pixels is created, which is gradually refined. Such progressive compression is also known for multimedia data, for example from the MPEG-4 standard. It is particularly useful if the data transfer rate is not constant and it cannot be predicted how large the amount of data is that can be transferred under the real-time condition. With a procedure according to the invention, however, the compression can take place as required, depending on the concept data record. This can happen, for example, in such a way that the degree of compression depends on the distance of the first user as a VR object and also on his contrast to the background.
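A coarse sketch of concept-data-driven compression, under assumptions: the block-averaging pyramid merely stands in for a real progressive codec (such as the MPEG-4 mechanisms mentioned above), and the distance and contrast thresholds in `resolution_for` are invented for illustration.

```python
def downsample(image, factor):
    """Average non-overlapping factor x factor blocks of a 2D list of
    pixel values -- one level of a coarse-to-fine pyramid."""
    h, w = len(image), len(image[0])
    return [[sum(image[y + dy][x + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(0, w, factor)]
            for y in range(0, h, factor)]

def resolution_for(distance, contrast):
    """Pick a pyramid level from the concept data: a distant or
    low-contrast VR object tolerates a coarser level (larger factor).
    The thresholds are illustrative assumptions."""
    if distance > 10 or contrast < 0.2:
        return 4
    if distance > 5:
        return 2
    return 1

img = [[float(x + y) for x in range(4)] for y in range(4)]
coarse = downsample(img, resolution_for(distance=7, contrast=0.8))
```

Only the level chosen from the concept data would be transmitted, and finer levels could follow as bandwidth permits.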
  • a geometry data record can also be generated during the data acquisition.
  • a subset of the image primary data is processed into a visual hull which carries the information about the silhouettes.
  • additional acquisition means such as infrared sensors may be available for the acquisition of this subset.
  • the geometry can already be used in the generation of the image secondary data, for example in the context of the light field approach with geometry enrichment.
  • a geometry data record is also advantageously used to generate a stereo image that can be used for the 3D view of the second user.
  • both the image data on which the secondary data set is based and the geometry data are preferably only partially volatile. While individual image sections change very dynamically (e.g. the mouth region and the arms when speaking and gesturing), others remain relatively constant. This can be exploited by updating said data only selectively, which in turn leads to only partially volatile data. In the case of geometry information, this can be done in a particularly favorable manner by means of a suitable parameterization: proportions and masses of the limbs remain constant; only their relative orientation, which can be described with a few angle parameters, changes as a function of time. For the secondary data set, advantageously only the volatile portion of the image data is processed in order to achieve a high transmission speed. When deciding between volatile and non-volatile, concept data can again be used.
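The selective update of only the volatile image portions can be sketched as follows; the block size, the change threshold and the function name are illustrative assumptions, standing in for whatever volatility criterion the concept data record supplies.

```python
def volatile_blocks(prev, curr, block=2, threshold=1.0):
    """Return the coordinates and contents of the image blocks whose
    content changed noticeably since the last frame. Only these blocks
    would be reprocessed and transmitted; the rest is treated as
    non-volatile."""
    updates = {}
    h, w = len(curr), len(curr[0])
    for y in range(0, h, block):
        for x in range(0, w, block):
            diff = sum(abs(curr[y + dy][x + dx] - prev[y + dy][x + dx])
                       for dy in range(block) for dx in range(block))
            if diff > threshold:
                updates[(y, x)] = [row[x:x + block]
                                   for row in curr[y:y + block]]
    return updates

prev = [[0.0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[0][0] = 5.0                      # e.g. the mouth region moved
changed = volatile_blocks(prev, curr)
```

Here only one of four blocks is flagged for retransmission, which is the intended data reduction.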
  • the image secondary data record is then transmitted to the VR environment of the second user.
  • a VR object is then generated from this data record.
  • the VR position data record is taken into account again during generation (size, orientation of the VR object).
  • the geometry data set can also be used when generating the VR object, e.g. for 3D impressions, casting shadows, etc.
  • it may be that the context can only be inferred once first secondary image data are available; a context pointer is then introduced, which enables the use of concept data even during the further conversion of primary data into secondary data.
  • properties from the properties library can of course also be used when generating a VR object from the secondary data in the VR environment of the second user.
  • if a position data record of the type described above is used when generating the secondary image data of a user, the position data record will include position data of the user in the VR environments of all other users.
  • FIG. 4 shows schematically the efficient transmission of control and monitoring information and VR objects.
  • the network connection of two VR locations takes place via one or more data channels. In this connection, the different requirements that the data records to be transmitted place on the data channel must be observed.
  • two classes of data records can be distinguished: control and monitoring data records and also, for example, concept data on the one hand, and VR object data streams on the other.
  • control and monitoring data records are required to establish and control network connections within a single VR location and between two or more VR locations.
  • one or more services are provided for implementation. Such services include, for example, a connection service (CX service) for establishing a connection, an event service for processing different events, a quality service (QoS service) for controlling the network resources and a time service for time synchronization of the VR locations.
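The division into services can be sketched in plain classes. This is not CORBA and not the patent's implementation, only an illustration of the service split named above (CX, QoS, time); all class names, the bandwidth-budget model and the clock-offset formula are assumptions made for the sketch.

```python
class CXService:
    """Connection service: establishes a session between two VR locations."""
    def __init__(self):
        self.sessions = []

    def connect(self, src, dst):
        session = (src, dst)
        self.sessions.append(session)
        return session

class QoSService:
    """Quality service: grants bandwidth from a fixed network budget."""
    def __init__(self, budget_mbit):
        self.budget = budget_mbit

    def reserve(self, mbit):
        if mbit > self.budget:
            return False          # request exceeds remaining resources
        self.budget -= mbit
        return True

class TimeService:
    """Time service: offset needed to synchronise a remote clock."""
    @staticmethod
    def offset(local_t, remote_t):
        return local_t - remote_t

cx = CXService()
session = cx.connect("VR location 1", "VR location 2")
qos = QoSService(budget_mbit=100)
granted = qos.reserve(40)
```

An event service would sit alongside these, dispatching events between the locations; it is omitted here for brevity.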
  • the different services are implemented according to a standardized middleware, the "Common Object Request Broker Architecture” (CORBA) (cf. "The Common Object Request Broker: Architecture and Specification.” Object Management Group, Revision 2.4, October 2000).
  • the transmission of control and monitoring data records and also of concept data is preferably carried out bidirectionally. This can also be the case if, for example, only one VR location is available and if, instead of the second VR location in FIG. 4, only a location is provided at which data of one or more VR objects are recorded.
  • in FIG. 4, the VR object data streams are real-time data streams; VR location 1 acts as the data stream source and VR location 2 as the data stream sink. Coding methods are integrated as codec plugins and network protocols as protocol plugins.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for representing an object in a virtual reality environment of a user. The method is based on providing a concept data record containing information on properties of an object, e.g. a second user. Primary data of said object, containing video images, are processed at the data acquisition location with the aid of the concept data record so as to generate secondary data. The secondary data thus obtained are transmitted to a virtual reality location, where a virtual reality object representing said object, to be integrated into the virtual reality environment, is generated on the basis of the secondary data. A concept data record can, for example, be a position data record containing position data of the virtual reality object in said virtual reality environment.
PCT/CH2001/000341 2000-06-02 2001-06-01 Method for the production of a virtual reality environment WO2001093209A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001260002A AU2001260002A1 (en) 2000-06-02 2001-06-01 Method for the production of a virtual reality environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CH11012000 2000-06-02
CH1101/00 2000-06-02

Publications (1)

Publication Number Publication Date
WO2001093209A1 (fr) 2001-12-06

Family

ID=4557495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CH2001/000341 WO2001093209A1 (fr) 2000-06-02 2001-06-01 Method for the production of a virtual reality environment

Country Status (2)

Country Link
AU (1) AU2001260002A1 (fr)
WO (1) WO2001093209A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0696018A2 (fr) * 1994-08-03 1996-02-07 Nippon Telegraph And Telephone Corporation Shared virtual space display method and apparatus using the method
EP0753834A2 (fr) * 1995-07-11 1997-01-15 Sony Corporation Method and system for sharing a three-dimensional virtual reality space
US5714997A (en) * 1995-01-06 1998-02-03 Anderson; David P. Virtual reality television system
EP0838787A2 (fr) * 1996-10-16 1998-04-29 HE HOLDINGS, INC. dba HUGHES ELECTRONICS Multi-user real-time virtual reality system and method
WO1998045816A1 (fr) * 1997-04-07 1998-10-15 Synapix, Inc. Adaptive modeling and segmentation of visual image streams
WO1999045503A1 (fr) * 1998-03-06 1999-09-10 Societe Rasterland S.A. System for displaying realistic virtual three-dimensional images in real time

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ADELSON E H ET AL: "THE PLENOPTIC FUNCTION AND THE ELEMENTS OF EARLY VISION", COMPUTATIONAL MODELS OF VISUAL PROCESSING, 1991, pages 3 - 20, XP000934213 *

Also Published As

Publication number Publication date
AU2001260002A1 (en) 2001-12-11

Similar Documents

Publication Publication Date Title
DE69725875T2 (de) Video viewing experiences using still images
DE19825302A1 (de) System for setting up a three-dimensional garbage matte which enables simplified adjustment of spatial relationships between real and virtual scene elements
DE60225933T2 (de) Portable virtual reality
DE69820112T2 (de) Improved image capture system with virtual camera
DE69433061T2 (de) Virtual reality network
DE69907644T2 (de) Real-time system for three-dimensional realistic virtual image display
DE102015210453B3 (de) Method and device for generating data for a two- or three-dimensional representation of at least a part of an object and for generating the two- or three-dimensional representation of at least the part of the object
EP3427474B1 (fr) Image processing method, image processing means and image processing device for generating reproductions of a part of a three-dimensional space
EP3347876B1 (fr) Device and method for generating a model of an object using superimposition image data in a virtual environment
DE19953595A1 (de) Method and device for processing three-dimensional images
DE112017005879T5 (de) Information processing device, information processing method, and program
DE69837165T2 (de) Method and apparatus for automatic animation of three-dimensional graphic scenes for improved 3-D visualization
CN112446939A (zh) Dynamic rendering method and device for a three-dimensional model, electronic equipment, and storage medium
DE102017203721A1 (de) Device and method for displaying a spatial image of an object in a virtual environment
EP0859977A2 (fr) Interaction area for data representation
EP2831839B1 (fr) Method for the automatic operation of a monitoring system
WO2001093209A1 (fr) Method for the production of a virtual reality environment
DE112021002093T5 (de) Method for changing the viewpoint in virtual space
WO2014044661A1 (fr) Client device for displaying images of a controllable camera, method, computer program, and monitoring system comprising the client device
EP4168881A1 (fr) Video conference method and video conference system
DE102011119082A1 (de) Device arrangement for creating an interactive screen from a screen
DE102017109106B4 (de) Monitoring system and video display method
EP0775415B1 (fr) Method and device for producing an image sequence
EP1502145B1 (fr) Method and device for the coherent representation of real and virtual objects
DE112022001916T5 (de) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP