CN1273656A - Virtual environment viewpoint control - Google Patents

Virtual environment viewpoint control

Info

Publication number
CN1273656A
CN1273656A (application CN99800233A)
Authority
CN
China
Prior art keywords
user
virtual environment
cursor
interaction area
symbol
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN99800233A
Other languages
Chinese (zh)
Other versions
CN1132117C (en)
Inventor
J·鲁特格斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV
Publication of CN1273656A (patent/CN1273656A/en)
Application granted
Publication of CN1132117C (patent/CN1132117C/en)
Anticipated expiration
Legal status: Expired - Fee Related


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/525 Changing parameters of virtual cameras
    • A63F 13/5258 Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
    • A63F 13/12
    • A63F 13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/40 Features characterised by details of platform network
    • A63F 2300/407 Data transfer via internet
    • A63F 2300/50 Features characterized by details of game servers
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/66 Methods for rendering three dimensional images
    • A63F 2300/6653 Methods for altering the visibility of an object, e.g. preventing the occlusion of an object, partially hiding an object
    • A63F 2300/6661 Methods for changing the position of the virtual camera
    • A63F 2300/6684 Methods for changing the position of the virtual camera by dynamically adapting its position to keep a game object in its viewing frustum, e.g. for tracking a character or a ball
    • A63F 2300/80 Features specially adapted for executing a specific type of game
    • A63F 2300/8082 Virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A multi-user interactive virtual environment system wherein each user is provided with data to generate a respective image of the virtual environment and characters therein, including an assigned character (100) particular to that individual user, from a respective virtual camera (110) viewpoint (A, B, C) determined at least partially by the user-directed motion of their assigned character. Each character has an interaction zone of predetermined size and shape maintained about its current virtual environment location. When the respective interaction zones of two or more user-assigned characters (100, 130) overlap, their respective virtual cameras (110) are controlled to move from first- to third-person (A-C) viewpoints for as long as the overlap remains. In a refinement (FIGURE 17), at least one further interaction zone at a given location within the virtual environment, and independent of any particular character within the virtual environment, takes control of all character virtual cameras as those characters enter the zone.

Description

Virtual environment viewpoint control
The present invention relates to interactive entertainment systems, such as on-line gaming and virtual reality or shared (multi-user) virtual environment systems, which provide a user with a view of a virtual environment in which computer-generated virtual presences of users appear and may selectively interact with the virtual presences of other, similar users and with features of the environment itself. In particular, the invention relates to such a system having means for controllably varying the viewpoint from which the image of the environment is rendered for display to the user, a feature herein referred to as "virtual camera" control.
A system providing a virtual environment (or "cyberspace") accessible to remote users is described in European patent application EP-A-0 697 613 (Sony). The system described comprises a server providing a virtual reality space, and user terminals connected to the server via a high-speed communication network (using optical fibre or the like). In operation, the server maintains a plurality of virtual environments and supports a plurality of differing terminal types by means of conversion objects interposed between information objects and user objects: a conversion object configured for each pairing of supported terminal type and virtual environment provides individual translation of the communications passing back and forth.
At each user terminal, a two-dimensional image of the three-dimensional virtual environment, as observed from the user's own viewpoint within that environment, is presented to the user, together with computer-generated representations (icons) of any other users currently within the same region of the virtual environment as the viewing user. Rather than generating all or part of the viewing user's own icon within the image seen by that user, the system of EP-A-0 697 613 uses a first-person view (that is, an "out of the eyes" view of the image generated by the user's computer) and provides the user with an operable cursor of simple arrowhead form; by moving the cursor up/down/left/right within the displayed two-dimensional image of the environment, the user may indicate or select options from within the virtual environment, for example clicking on the virtual representation of another user so as to begin a dialogue or some other interaction between the two users. The technique of EP-A-0 697 613 is presented as an improvement over prior art systems in which a user is represented by an icon that always appears at the centre of the image rendered for that user, such that users observe their own icons within the virtual environment from a third-person viewpoint.
Although rendering from a first-person viewpoint heightens the user's sense of immersion in the virtual environment, it is less satisfactory when interacting with the virtual icons of other users, where a third-person viewpoint can present the interaction to the user in a more informative way. The ability to select the viewpoint (the virtual camera position) relative to one's own icon would be of great value, but requiring the user to do so manually would turn the entertainment into a chore.
It is accordingly an object of the present invention to provide a system arranged to adjust the virtual camera position automatically, so as to provide a suitable viewpoint in dependence upon a number of factors relating to the user's virtual presence, including whether or not interaction is taking place with the icons of other users.
According to a first aspect of the present invention there is provided a multi-user interactive virtual environment system comprising: a first data store holding data defining a virtual environment; a second data store holding data defining the external appearance of a plurality of characters; and a processor arranged to receive input commands from a plurality of separate users, to access the first and second stores, and to generate for each user a respective image of the virtual environment and the characters within it, including an assigned character particular to that individual user, from a respective viewpoint whose position and orientation within the virtual environment are determined at least partially by the user-directed movement of that user's assigned character; characterised in that interaction zone generation means maintains, for each character, updated zone coordinates representing a zone of predetermined size and shape about that character's current virtual environment position; and monitoring means, coupled with the zone generation means, determines when the respective interaction zones of two or more user-assigned characters overlap and notifies the processor, which, for as long as the overlap persists, determines the respective viewpoint position and orientation for each such user-assigned character based at least in part on a predetermined rule set maintained by the processor.
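In its simplest form, the "monitoring means" of this aspect reduces to a pairwise distance test over the characters' zones. The following is a minimal Python sketch under the assumption of spherical zones; the class and function names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass
from itertools import combinations
from math import dist

@dataclass
class Cursor:
    name: str
    pos: tuple          # current virtual-environment position (x, y, z)
    zone_radius: float  # predetermined interaction-zone size

def overlapping_pairs(cursors):
    """Return the cursor pairs whose interaction zones currently overlap.

    Two spherical zones overlap when the distance between their centres
    is less than the sum of their radii.
    """
    return [(a.name, b.name)
            for a, b in combinations(cursors, 2)
            if dist(a.pos, b.pos) < a.zone_radius + b.zone_radius]
```

In a real system the result of such a test would be what notifies the processor to apply the viewpoint rule set for as long as a pair remains in the list.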
The provision of interaction zones (which are suitably not visible to the users) gives a trigger mechanism for switching a user's camera position. As described in the examples below, in the absence of zone interaction (no zone overlap) the virtual camera may simply and effectively follow its character from an "over the shoulder" position, and then swing out to a third-person view so as to provide an image carrying more information about the two interacting icons.
As the ability to realise complex user virtual presences improves, the size and/or complexity of the virtual worlds that can be modelled will increase, as will the number of different users able to access a given part of such a world at the same time. The effect of this is that interaction zone overlaps will occur on a large scale within relatively small areas, generating an unacceptable processor load when the camera positions are computed. To avoid this potential problem, the processor suitably maintains at least one further interaction zone at a fixed location within the virtual environment, this fixed interaction zone or zones being independent of any particular character within the environment. By providing such fixed zones at conventionally busy and/or crowded locations in the virtual environment, and through a specific rule set governing camera placement, a panoramic camera view covering the zone can be specified for all characters within it, avoiding the calculation of individual camera positions. This feature may be preset for particular locations in the virtual environment (irrespective of the number of characters/cursors at that location), or it may be applied dynamically at an arbitrary location, for example when the interaction zones of five or more characters are determined to overlap.
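The dynamic fallback described above can be sketched as a single decision function. The threshold of five overlapping zones comes from the text; everything else here (the function name, the string labels) is an illustrative assumption:

```python
PANORAMA_THRESHOLD = 5  # illustrative limit mentioned in the description

def camera_strategy(n_overlapping, in_fixed_zone=False):
    """Choose between per-character cameras and a shared panoramic camera.

    A preset fixed zone always imposes the panorama; elsewhere the
    panorama is applied dynamically once too many zones overlap.
    """
    if in_fixed_zone or n_overlapping >= PANORAMA_THRESHOLD:
        return "panorama"   # one view covers the whole zone
    return "individual"     # compute a camera per character
```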
As a refinement, these fixed interaction zones may be constituted as a concentric arrangement of at least two sub-zones, the processor applying only a part of the rule set when a character's interaction zone overlaps only the outer sub-zone. In other words, the extent to which a character's movements determine the movement of its camera is reduced as the character approaches the inner sub-zone (where the panoramic camera placement applies).
Further features and advantages of the present invention will become apparent from a reading of the following description of preferred embodiments of the invention, given by way of example only, with reference to the accompanying drawings, in which:
Fig. 1 is a schematic block diagram of a data processing apparatus suitable for configuration as a user terminal embodying features of the invention;
Fig. 2 shows a user view of a virtual environment, over the shoulder of a star-shaped user icon or cursor;
Fig. 3 shows a vision cone by which the cursor of Fig. 2 indicates its direction of attention;
Fig. 4 shows possible relative positionings of the cursor and virtual camera, and the division of the camera view into screen regions;
Fig. 5 illustrates the technique by which the virtual camera tracks rotation of the cursor;
Fig. 6 shows the differing on-screen display positions of a stationary and a moving cursor;
Fig. 7 shows a pair of cursors with their respective interaction zones;
Fig. 8 shows the interaction zones of the pair of cursors of Fig. 7 overlapping;
Fig. 9 shows the exterior of a cursor nominally divided into a number of discrete regions;
Figs. 10 and 11 show a pair of cursors initiating interaction via their vision cones;
Figs. 12 to 15 show virtual camera movements for interactions between two and three cursors;
Fig. 16 shows a number of cursors interacting in discrete groups of two or three within a virtual environment; and
Fig. 17 shows a fixed interaction zone suited to large groups of interacting cursors.
The networked virtual environment system of Fig. 1 comprises a user data processing system 2, for example a personal computer, which acts as host for application software configuring the system as a browser for data defining the virtual environment. The data are obtained from a remote source 4 over a network connection 8, with further users 6 having similar data processing systems likewise connected to the source 4. The user system 2 comprises a central processing unit (CPU) 10 coupled via an address and data bus 12 to random-access (RAM) and read-only (ROM) memory devices 14, 16. The capacity of these memory devices may be augmented by providing the system with means to read from additional memory devices, for example a CD-ROM (not shown).
Also coupled to the CPU 10 via the bus 12 are first and second user input devices 18, 20, which may comprise a keyboard together with a cursor control and selection device such as a mouse or trackball. Audio output from the system is via headphones or one or more loudspeakers 22 driven by an audio processing stage 24; in addition to providing amplification, the audio processing stage is also arranged to provide signal processing capability under the control of the CPU 10, so as to allow effects such as echo to be added to the existing audio data. Depending on the capabilities of the user system and the format of the data supplied from the source 4, video output from the system may be as a succession of two-dimensional images on a display screen 26 under the drive of a display driver stage 28, or as a succession of three-dimensional images on an autostereoscopic display or stereoscopic head-mounted display (not shown).
As mentioned above, a further data source for the system is an on-line link, via the Internet for example, to the remote server 4 acting as a source of, and controller for, the data defining the virtual environment. To this end, a network interface 30, coupled to the CPU 10 via the bus 12, is provided in the system. The precise construction of the interface is not an essential feature of the present invention, although it will be recognised that its construction will depend on the type of data network 8 to which the system is connected: for example, where the system is for use by a private home user, the data link is likely to be a telephone connection to a local service provider, in which case the interface 30 will suitably be a modem. For other types of data link, such as an ISDN connection, the interface will be configured accordingly.
In operation, the user's view of the virtual environment is generated by a virtual camera whose position, in the absence of other movement commands, is assumed to be behind the computer-generated character or cursor 100 representing the user's virtual presence, at a point slightly above and to one side of the icon, giving an almost first-person viewpoint (resembling a view from "over the shoulder"). In the examples which follow, the character generated for the user's machine, and those of the other users within the same environment, are all embodied as stars which serve not only as the user's virtual presence but also provide the function of a cursor, giving the user the means to interact not only with other such (other-user) cursors but also with other features and objects that may appear within the virtual environment. These characters will hereinafter be referred to as cursors.
As mentioned above and as shown in Fig. 2, the basic camera viewpoint is "over the shoulder" of the cursor 100, giving the user a relatively unobstructed view of the virtual environment 102 whilst maintaining a minimal visible presence to assist the user's orientation. Camera position, zoom, and other effects such as lens selection are all determined automatically from the movement, actions, and profile of the operated cursor (as will be described below), such that the user does not directly control the camera through operation of the user input devices (18, 20; Fig. 1).
As shown in Fig. 3, the cursor 100 has a "vision cone" 104 by which it indicates its focus of attention: the field of view subtends an angle of 90° relative to the cursor, with the principal axis 106 of the vision cone extending generally perpendicular to the plane of the (generally planar) cursor. In operation, the image rendering means generating the image of the environment for the user takes account of the position of the vision cone relative to the environment, and renders external objects falling within the vision cone at a greater level of detail (or focus) than other objects, so as to provide an intuitive focusing on the cursor's point of attention.
The virtual camera's view (the view presented to the user on the screen 26; Fig. 1) is divided into screen regions, invisible to the user, and the automatic repositioning of the camera (described below) is determined by the position of the vision cone over the different screen regions. Fig. 4 is a plan view of the virtual camera 110 and cursor 100, showing the view 112 from within the virtual camera divided into the different screen regions (A.1, B.1, ..., E.5). As the cursor's vision direction moves up, down, left, or right, the camera 110 moves to follow it. In effect, if the cursor 100 continuously changes its vision direction, for example to the left, camera and cursor will eventually travel around a circle and return to the view of the original part of the virtual environment. Fig. 5 shows, in three steps (A, B, C), how, as the vision cone (indicated by the position of its principal axis 106) moves outside the central screen regions (regions 2, 3, and 4 in this plan view), the camera 110 repositions itself so as to bring the rendered viewpoint back to the cursor 100's point of focus. The lag between camera and vision cone allows the cursor to tolerate the small movements that arise from imprecise operation of the user controls, and so avoids corresponding jitter in the image from the virtual camera.
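The jitter-avoidance behaviour amounts to a dead zone: the camera only re-aims once the vision cone's axis leaves the central band of the screen. A one-dimensional sketch (screen position normalised to 0..1; the band boundaries are assumed, not from the patent):

```python
def camera_pan(cone_screen_x, dead_zone=(0.3, 0.7)):
    """Return how far to pan the camera so the vision cone's axis is
    brought back to the central screen band; 0.0 while the axis is
    still inside the dead zone (small movements cause no camera motion).
    """
    lo, hi = dead_zone
    if cone_screen_x < lo:
        return cone_screen_x - lo   # negative: pan towards the left
    if cone_screen_x > hi:
        return cone_screen_x - hi   # positive: pan towards the right
    return 0.0
```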
The over-the-shoulder composition of the view past the cursor 100, shown at A in Fig. 6, is the stable position adopted while the cursor is stationary within the virtual environment. When the cursor 100 moves, the camera position changes automatically to give a more third-person viewpoint, for example to provide the user with more navigational cues based on features of the environment. While the cursor is stationary, its visible part appears at the edge of the displayed image; when it moves (in the direction of arrow 114 in view B of Fig. 6), the faster the cursor travels, the closer it is brought to the centre of the image. Movement that simultaneously changes the direction of the vision cone results in a third-person view of the cursor (that is, with the cursor fully visible within the screen), in which the cursor icon is shown banking at an angle determined by the direction and magnitude of the turn.
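The speed-dependent framing can be captured by a simple monotonic mapping from cursor speed to screen offset. A sketch under assumed units (the maximum speed and linear mapping are illustrative choices, not stated in the patent):

```python
def cursor_screen_offset(speed, max_speed=10.0):
    """Map cursor speed to its horizontal display position:
    1.0 = at the screen edge (stationary cursor),
    0.0 = at the screen centre (cursor at full speed)."""
    s = min(max(speed, 0.0), max_speed) / max_speed
    return 1.0 - s
```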
Turning now to interaction between the cursors of different users, as shown in Fig. 7 the cursor 100 has a surrounding circle 200 marking its zone of influence or interaction zone; in a multi-user environment, each cursor 100, 120 has its own respective interaction zone 200, 220. In a three-dimensional virtual environment, the zone surrounding a cursor may be spherical, or it may be elongated in one or more directions, for example an ovoid with the greater part of its volume in front of the cursor, generally centred on the principal axis (106; Fig. 3) of the vision cone. The size of the interaction zone 200 determines the extent to which the cursor 100 is able and/or willing to interact with other cursors, since interaction can only take place between pairs or groups of cursors whose interaction zones merge or overlap, as shown in Fig. 8. The merging or overlapping of two or more interaction zones also affects the camera control of each cursor whose zone is affected, as will be described below.
A user can arrange or influence his or her standing through the user's profile (a data structure holding nested information with differing access requirements). In a preferred embodiment, the quantity and kind of information in the profile can determine the size of the interaction zone. For example, a cursor holding a large amount of information in the outer (public) layer of its profile tends towards interaction, and may accordingly also have a larger interaction zone.
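The profile-to-zone-size coupling of the preferred embodiment can be sketched as a bounded linear rule; the base radius, per-item increment, and cap below are illustrative assumptions:

```python
BASE_RADIUS = 1.0  # assumed minimum interaction-zone radius

def interaction_zone_radius(public_items, per_item=0.1, cap=3.0):
    """More information in the public profile layer -> a larger
    interaction zone, up to a cap (all constants illustrative)."""
    return min(BASE_RADIUS + per_item * public_items, cap)
```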
The information in a cursor's user profile can be accessed by other cursors: by sweeping the vision cone over the exterior of another cursor (the scanned cursor detecting this), the scanned cursor's user profile information becomes visible. As shown in Fig. 9, the exterior of the cursor is divided into a number of regions 121-125, with the data of the common (public) layer of the profile (for example basic user particulars and preferences) distributed among those regions in a predetermined arrangement, and with a central region 126 holding the cursor user's more personal and private data. In response to scanning by another cursor, some of these regions (particularly the outer regions 121-125) are more "sensitive" than others, that is, more ready to yield their contents: for example, the personal data in the central region 126 may not be supplied automatically at all, instead requiring explicit unlocking and/or sending by the cursor's user. The sensitivity of a region determines the degree of zoom applied by the camera of the scanning cursor; for example, if the information in a scanned region corresponds to information held or sought in a region of the scanning cursor, the recognised match causes the camera of the scanning cursor to zoom in on the source of the matching information. An alternative way of controlling zoom is for the cursor liable to be scanned to move a particular region towards the prospective scanning cursor, that region triggering the zoom-in of the scanning cursor's camera.
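One way to read this is as a lookup from region sensitivity to camera zoom, with a bonus when the scanned information matches something the scanning cursor holds or seeks. The sensitivity values and zoom arithmetic below are purely illustrative assumptions:

```python
# Assumed sensitivities: outer regions yield readily, the centre never
# yields automatically (it needs explicit unlocking by its user).
SENSITIVITY = {"outer": 1.0, "inner": 0.4, "centre": 0.0}

def scan_zoom(region, match=False):
    """Zoom factor applied by the scanning cursor's camera (1.0 = none)."""
    s = SENSITIVITY[region]
    if s == 0.0:
        return 1.0  # private data: no automatic zoom at all
    # Sensitive regions zoom more, and a recognised information match
    # zooms in further still on the source of the match.
    return 1.0 + s + (1.0 if match else 0.0)
```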
Even without overlap of their interaction zones, the vision cones can still allow cursors to interact. Two cursors directing their vision cones at one another each open a one-way communication channel, allowing a cursor to send items of data from the common layer of its user profile, which items may be sounds, image data, and so on. Fig. 10 shows a pair of cursors 100, 130 whose respective vision cones 104, 134 are each directed at the other cursor while their interaction zones 200, 230 remain apart, such that only the one-way channels exist between them. When the two cursors approach one another, their interaction zones overlap, as shown in Fig. 11, triggering the transition from the one-way channels to a two-way communication link 150 between the two cursors 100, 130, through which a scanning cursor can obtain data not only from the outer common layer but also from the deeper profile layers of the scanned cursor.
Figs. 12 to 15 further illustrate the handling of overlapping interaction zones. The camera positions composing the rendered user views are defined by the merged interaction zones. Any camera position displaying the two cursors lies within the half-space 160 shown in Fig. 12, defined by the merged interaction zones, from which the scene is filmed over 180°. Strictly confining the camera positions rendering the two cursors 100, 130 to this 180° space avoids the disorientation that would otherwise result if the viewpoint were to flip from one side of a cursor to the other.
Once the merged interaction zones have defined the 180° space 160, that space is divided into two quarters (on either side of line 162), such that the virtual camera of each cursor can only be positioned within its respective quarter. Within its quarter, each cursor has a camera curve along which its camera is positioned. Fig. 13 gives an overall view with the camera 110 positioned for the cursor 100 at the left of the figure. The camera can travel smoothly along the curve at the edge of the quarter. Traversing this curve, the camera can move from a third-person viewpoint C, through an intermediate over-the-shoulder viewpoint B, to a first-person viewpoint A: note that at the first-person viewpoint, to avoid obscuring the view of the interacting cursor, the user's own cursor may in some embodiments be removed from the rendered view.
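A two-dimensional sketch of such a camera curve: the camera slides from the cursor's own position (first-person A) round to a side-on position within the cursor's quarter (third-person C), pulling back as it goes. The parameterisation, radius, and the exact shape of the curve are all assumptions; the patent only fixes the A-B-C progression and the quarter-space confinement:

```python
from math import cos, sin, pi, hypot

def camera_on_curve(t, cursor, partner, radius=3.0):
    """Place the camera for `cursor` interacting with `partner` (2-D).

    t = 0.0 -> first-person viewpoint A (at the cursor itself);
    t = 0.5 -> intermediate over-the-shoulder viewpoint B;
    t = 1.0 -> third-person viewpoint C at the edge of the quarter.
    """
    ax, ay = partner[0] - cursor[0], partner[1] - cursor[1]
    n = hypot(ax, ay)
    ax, ay = ax / n, ay / n          # unit axis towards the partner
    bx, by = -ax, -ay                # directly behind the cursor
    sx, sy = -ay, ax                 # sideways, into this cursor's quarter
    ang = t * pi / 2                 # sweep 0..90 degrees along the curve
    d = radius * t                   # pull back as the view goes third-person
    dirx = cos(ang) * bx + sin(ang) * sx
    diry = cos(ang) * by + sin(ang) * sy
    return (cursor[0] + d * dirx, cursor[1] + d * diry)
```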
When two cursors 100, 130 meet, as described above, their camera curves face one another. As shown in Fig. 14, a third cursor 170 outside the 180° space is initially passive, with no influence on the other two cursors. However, as shown in Fig. 15, when the third cursor 170 interacts with one of the first cursor 100 and the second cursor 130, the 180° space 160 is transformed to surround the interacting trio, and returns to its original position for the initial pair of cursors 100, 130 (assuming both remain within it) when that pair resume their interaction.
Beyond simply meeting other users (or, optionally, other features within the virtual environment), the exchange of data from the user profiles, or of the more personal data held in the "soft" central part of the cursor body (central region 126; Fig. 9), will also cause changes of camera position. If a large amount of information is being transferred between two cursors, their respective camera positions are controlled so as to shift, gradually or rapidly, from first-person or over-the-shoulder views towards third-person views. Passivity of one cursor may also produce a change of camera view: if a two-way communication link exists between a pair of cursors but only one of them is in the active state of sending data and/or being ready to read data, the camera tracking the passive cursor will focus on the active cursor.
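Reusing the camera-curve parameter t (0 = first-person, 1 = third-person), the data-volume rule above could be expressed as a saturating mapping from the amount of exchanged profile data to t. The threshold and linearity are illustrative assumptions:

```python
def viewpoint_t(bytes_transferred, threshold=1024):
    """Grow the camera-curve parameter t (0 = first-person view,
    1 = fully third-person view) with the volume of profile data
    exchanged between the two cursors; saturates at the threshold."""
    return min(bytes_transferred / threshold, 1.0)
```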
If there are many cursors in the virtual environment, their respective camera positions will generally be controlled, as described above, according to whether their interaction areas and/or viewing cones overlap. In practical terms, however, the processing power and speed of the system place a limit on the number of simultaneously interacting cursors. For example, the number of simultaneously overlapping interaction areas may be limited to five, before the calculations required for area merging slow the system to a degree the user would find intolerable. Even so, within a single environment, as shown in Figure 16, many combinations of cursor pairs and groups of three cursors can be supported.
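A minimal sketch of the overlap bookkeeping implied above, assuming circular interaction areas (the patent requires only a region of predetermined size and shape around each cursor); the cap of five and all names are illustrative.

```python
import itertools
import math

MAX_SIMULTANEOUS_OVERLAPS = 5  # illustrative cap, per the example above

def overlapping_pairs(cursors):
    """Return the pairs of cursors whose circular interaction areas overlap.

    `cursors` maps a cursor name to ((x, y), radius).
    """
    pairs = []
    for (na, ((xa, ya), ra)), (nb, ((xb, yb), rb)) in itertools.combinations(
            sorted(cursors.items()), 2):
        # Two circles overlap when the centre distance <= sum of radii.
        if math.hypot(xb - xa, yb - ya) <= ra + rb:
            pairs.append((na, nb))
    return pairs

def merge_allowed(cursors):
    """Refuse further area merging once the overlap count exceeds the cap,
    keeping the per-frame merge calculations within the system's budget."""
    return len(overlapping_pairs(cursors)) <= MAX_SIMULTANEOUS_OVERLAPS
```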
Among the collection of virtual environments available to the user, some environments may attract large numbers of users (through their cursors), and some form of moderation may be required if these areas become "congested" with too many cursors attempting to interact at once. This can be achieved by giving such areas their own camera control protocol: in effect, these areas are fixed interaction areas not attached to any one cursor. Camera choreography, cut and fade rhythms, colour and gating effects are specified automatically according to the part of the area in which a cursor is located. Figure 17 shows a space with concentric regions A, B, C, where the concentric regions determine the level of camera control granted to each cursor, ranging from full control for two- or three-cursor interactions in the outer region C, as previously described, to no individual control in the inner region A, where a single regional body generates a viewpoint encompassing all the cursors. There, the user's only control choice may be whether to remain at the centre of the region or to leave it.
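The concentric-region scheme of Figure 17 reduces to classifying each cursor by its distance from the region centre. The radii and return values below are illustrative assumptions; the patent specifies only that the region a cursor occupies determines its level of camera control.

```python
import math

def control_level(cursor_pos, centre=(0.0, 0.0), r_a=5.0, r_b=10.0):
    """Classify a cursor into concentric region A, B or C (cf. Figure 17).

    A (innermost): no individual camera control; a single regional
        viewpoint covers all cursors (user may only stay or leave).
    B (middle):    over-the-shoulder rendering for paired exchanges.
    C (outermost): full two/three-cursor camera control, as before.
    """
    d = math.hypot(cursor_pos[0] - centre[0], cursor_pos[1] - centre[1])
    if d <= r_a:
        return "A"
    if d <= r_b:
        return "B"
    return "C"
```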
As regards displaying and exchanging user profile information within region A, the protocol suitably has all cursors "leak" some information from their common outer layer (the "skin": the more readily accessible layer of the internal data storage structure); the leaked information can be picked up by the other cursors in the region if it interests them. Once an item of information has been picked up, the cursor that noticed it and the cursor that leaked it, 302 and 304, move out to the first boundary region B, where over-the-shoulder viewpoint rendering of their interaction is supported, giving the exchange between the two cursors a degree of privacy. If, on the basis of this exchange, the two cursors decide they wish to exchange more personal data, they move from region B out to region C, where there are no region-based restrictions on camera control or interaction area merging, as shown by cursors 306, 308.
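The layered "skin" leakage might be modelled as a simple filter over a cursor's profile data. The profile layout and key names are hypothetical; the text specifies only that the readily accessible outer layer leaks within region A while deeper personal data is withheld until the cursors move out to regions B and C.

```python
def leak_from_skin(cursor_profile, skin_keys=("name", "interests")):
    """Expose only the outer 'skin' layer of a cursor's profile data,
    hiding the deeper personal data until a private exchange is agreed.
    `skin_keys` is an assumed, illustrative definition of the skin layer."""
    return {k: v for k, v in cursor_profile.items() if k in skin_keys}
```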
Although the foregoing examples have concentrated on automating the positioning of the virtual camera, it will be readily understood by those skilled in the art that, by replacing the position and orientation of the virtual camera with those of a virtual microphone or stereo pair of microphones, the foregoing techniques can be applied to the audio control of a virtual environment, producing a two- or three-dimensional sound field through the system's audio stage (24; Fig. 1).
In addition to the sounds generated directly by the user's interaction (through a cursor) with other users or objects in the virtual environment, there may also be background sounds from controlled sources, providing the user with cues or information relevant to further exploration. For example, a region of the virtual environment from which loud noise emanates suggests to the user that the location may contain a large number of other users interacting through their cursors. From recognisable components of the background sound, the user can also determine the type or purpose of that part of the environment, and avoid it if it does not match his or her tastes. By controlling the stereo audio from the cursor's point of view (within the virtual environment, each source should at least be monophonic, with its volume varying according to its distance from the respective cursor), the user can be drawn towards places in the virtual environment that can be heard but not yet seen.
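The distance-dependent mono volume and cursor-relative stereo rendering described above could be sketched as follows. The inverse-distance gain model, the panning law, and all constants are illustrative assumptions, not taken from the patent.

```python
import math

def source_gain(source_pos, cursor_pos, ref_dist=1.0, rolloff=1.0):
    """Gain for a mono source heard at the cursor: volume falls off with
    the distance between source and cursor (inverse-distance model)."""
    d = max(ref_dist, math.dist(source_pos, cursor_pos))
    return ref_dist / (ref_dist + rolloff * (d - ref_dist))

def stereo_pan(source_pos, cursor_pos, cursor_facing_deg):
    """Left/right gains from the bearing of the source relative to the
    cursor's facing, letting the user steer towards an unseen sound.
    Convention: a source to the cursor's left raises the left gain."""
    bearing = math.degrees(math.atan2(source_pos[1] - cursor_pos[1],
                                      source_pos[0] - cursor_pos[0]))
    rel = math.radians(bearing - cursor_facing_deg)
    pan = math.sin(rel)  # 0 = dead ahead, +1 = fully left, -1 = fully right
    return (0.5 * (1.0 + pan), 0.5 * (1.0 - pan))  # (left, right)
```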
Although the claims of this application have been formulated to particular combinations of features, it should be understood that the scope of the disclosure of the present application also includes any novel feature, or any novel combination of features, disclosed herein either explicitly or implicitly, whether or not it relates to the same invention as presently claimed in any claim, and whether or not it mitigates any or all of the same technical problems as the present invention. The applicants hereby give notice that new claims may be formulated to such features and/or combinations of such features during the prosecution of the present application or of any further application derived therefrom.

Claims (5)

1. A multi-user interactive virtual environment system comprising: a first data store holding data defining a virtual environment; a second data store holding data defining the external appearance of a plurality of characters; and a processor arranged to receive input commands from a plurality of separate users, to access the first and second data stores, and to generate for each user an image of the virtual environment and the characters therein, including the particular character assigned to that individual user, from a respective viewpoint whose position and orientation are determined at least partly by the user-directed movement of that user's assigned character; characterised in that the system further comprises:
interaction area generation means arranged to maintain, for each character, an updated respective set of area coordinates specifying a region of predetermined size and shape relative to the character's current position in the virtual environment;
and monitoring means coupled to the area generation means and arranged to determine when the respective interaction areas of two or more user characters overlap and to indicate this to said processor which, for as long as the overlap persists, determines the respective viewpoint position and orientation for each such user character based at least in part on a predetermined rule set held by the processor.
2. A system as claimed in Claim 1, wherein the processor maintains at least one further interaction area at a fixed position within the virtual environment, said fixed interaction area being independent of any particular character within the virtual environment.
3. A system as claimed in Claim 2, wherein the at least one further interaction area comprises a zone of at least two concentric areas, with the processor applying only a part of the rule set when a character interaction area overlaps only the outer area.
4. A system as claimed in Claim 1, wherein the processor provides at least one further interaction area within a selected region of the virtual environment, said further interaction area being independent of any particular character within the virtual environment but positioned where the respective interaction areas of a predetermined number of characters are determined to overlap.
5. A system as claimed in Claim 4, wherein the processor provides said further interaction area at a position where the respective interaction areas of five or more characters are determined to overlap.
CN99800233XA 1998-01-09 1999-01-07 Virtual environment viewpoint control Expired - Fee Related CN1132117C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB9800397.3A GB9800397D0 (en) 1998-01-09 1998-01-09 Virtual environment viewpoint control
GB9800397.3 1998-01-09

Publications (2)

Publication Number Publication Date
CN1273656A true CN1273656A (en) 2000-11-15
CN1132117C CN1132117C (en) 2003-12-24

Family

ID=10825017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN99800233XA Expired - Fee Related CN1132117C (en) 1998-01-09 1999-01-07 Virtual environment viewpoint control

Country Status (7)

Country Link
US (1) US6241609B1 (en)
EP (1) EP0966716A2 (en)
JP (1) JP4276704B2 (en)
KR (1) KR100597329B1 (en)
CN (1) CN1132117C (en)
GB (1) GB9800397D0 (en)
WO (1) WO1999035597A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100355272C (en) * 2005-06-24 2007-12-12 清华大学 Synthesis method of virtual viewpoint in interactive multi-viewpoint video system
CN102067179A (en) * 2008-04-14 2011-05-18 谷歌公司 Swoop navigation
CN103164612A (en) * 2011-08-03 2013-06-19 迪士尼企业公司 Zone-based positioning for virtual worlds
CN111589114A (en) * 2020-05-12 2020-08-28 腾讯科技(深圳)有限公司 Virtual object selection method, device, terminal and storage medium
CN112188922A (en) * 2018-05-21 2021-01-05 微软技术许可有限责任公司 Virtual camera placement system
CN115639976A (en) * 2022-10-28 2023-01-24 深圳市数聚能源科技有限公司 Multi-mode and multi-angle synchronous display method and system for virtual reality content

Families Citing this family (106)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11128533A (en) * 1997-10-30 1999-05-18 Nintendo Co Ltd Video game device and memory media for the same
AU5012600A (en) * 1999-05-14 2000-12-05 Graphic Gems Method and apparatus for a multi-owner, three-dimensional virtual world
US6947044B1 (en) * 1999-05-21 2005-09-20 Kulas Charles J Creation and playback of computer-generated productions using script-controlled rendering engines
JP2001149640A (en) * 1999-09-16 2001-06-05 Sega Corp Game machine, game processing method, and recording medium recording program
KR20010065751A (en) * 1999-12-30 2001-07-11 박영신 An education method,which makes use of AVATA,in 3D internet space
US6891566B2 (en) 2000-03-14 2005-05-10 Joseph Robert Marchese Digital video system using networked cameras
US6672961B1 (en) * 2000-03-16 2004-01-06 Sony Computer Entertainment America Inc. Computer system and method of displaying images
US7353274B1 (en) * 2000-05-09 2008-04-01 Medisys/Rjb Consulting, Inc. Method, apparatus, and system for determining whether a computer is within a particular location
US6837790B1 (en) * 2000-07-26 2005-01-04 Igt Gaming device with moving screen simulation
AU2001284375A1 (en) * 2000-09-07 2002-03-22 Omnisky Corporation Coexistent interaction between a virtual character and the real world
US20050206610A1 (en) * 2000-09-29 2005-09-22 Gary Gerard Cordelli Computer-"reflected" (avatar) mirror
FR2814891B1 (en) * 2000-10-04 2003-04-04 Thomson Multimedia Sa AUDIO LEVEL ADJUSTMENT METHOD FROM MULTIPLE CHANNELS AND ADJUSTMENT DEVICE
CA2328795A1 (en) 2000-12-19 2002-06-19 Advanced Numerical Methods Ltd. Applications and performance enhancements for detail-in-context viewing technology
JP3699660B2 (en) * 2001-03-30 2005-09-28 コナミ株式会社 Game device and network game system
US20030035013A1 (en) * 2001-04-13 2003-02-20 Johnson Edward M. Personalized electronic cursor system and method of distributing the same
US8416266B2 (en) 2001-05-03 2013-04-09 Noregin Assetts N.V., L.L.C. Interacting with detail-in-context presentations
CA2345803A1 (en) 2001-05-03 2002-11-03 Idelix Software Inc. User interface elements for pliable display technology implementations
WO2002101534A1 (en) 2001-06-12 2002-12-19 Idelix Software Inc. Graphical user interface with zoom for detail-in-context presentations
US9760235B2 (en) 2001-06-12 2017-09-12 Callahan Cellular L.L.C. Lens-defined adjustment of displays
US7084886B2 (en) 2002-07-16 2006-08-01 Idelix Software Inc. Using detail-in-context lenses for accurate digital image cropping and measurement
JP3482602B2 (en) * 2001-08-21 2003-12-22 コナミ株式会社 Competitive game program
CA2361341A1 (en) 2001-11-07 2003-05-07 Idelix Software Inc. Use of detail-in-context presentation on stereoscopically paired images
US7050050B2 (en) * 2001-12-07 2006-05-23 The United States Of America As Represented By The Secretary Of The Army Method for as-needed, pseudo-random, computer-generated environments
CA2370752A1 (en) * 2002-02-05 2003-08-05 Idelix Software Inc. Fast rendering of pyramid lens distorted raster images
US6990639B2 (en) 2002-02-07 2006-01-24 Microsoft Corporation System and process for controlling electronic components in a ubiquitous computing environment using multimodal integration
US6982697B2 (en) * 2002-02-07 2006-01-03 Microsoft Corporation System and process for selecting objects in a ubiquitous computing environment
US6917370B2 (en) * 2002-05-13 2005-07-12 Charles Benton Interacting augmented reality and virtual reality
US7734085B2 (en) * 2002-06-28 2010-06-08 Sharp Kabushiki Kaisha Image data delivery system, image data transmitting device thereof, and image data receiving device thereof
US8120624B2 (en) 2002-07-16 2012-02-21 Noregin Assets N.V. L.L.C. Detail-in-context lenses for digital image cropping, measurement and online maps
CA2393887A1 (en) 2002-07-17 2004-01-17 Idelix Software Inc. Enhancements to user interface for detail-in-context data presentation
CA2406131A1 (en) 2002-09-30 2004-03-30 Idelix Software Inc. A graphical user interface using detail-in-context folding
JP3744002B2 (en) * 2002-10-04 2006-02-08 ソニー株式会社 Display device, imaging device, and imaging / display system
CA2449888A1 (en) 2003-11-17 2005-05-17 Idelix Software Inc. Navigating large images using detail-in-context fisheye rendering techniques
US20070097109A1 (en) * 2005-10-18 2007-05-03 Idelix Software Inc. Method and system for generating detail-in-context presentations in client/server systems
CA2411898A1 (en) 2002-11-15 2004-05-15 Idelix Software Inc. A method and system for controlling access to detail-in-context presentations
JP3669587B2 (en) * 2003-01-14 2005-07-06 コナミ株式会社 Game progress synchronization control server, terminal device and program
WO2004107763A1 (en) * 2003-05-28 2004-12-09 Sanyo Electric Co., Ltd. 3-dimensional video display device and program
CN1802193B (en) * 2003-06-11 2010-09-08 索尼计算机娱乐公司 Image display apparatus, image display method, and image display system
US7355593B2 (en) * 2004-01-02 2008-04-08 Smart Technologies, Inc. Pointer tracking across multiple overlapping coordinate input sub-regions defining a generally contiguous input region
US7667700B1 (en) * 2004-03-05 2010-02-23 Hrl Laboratories, Llc System and method for navigating operating in a virtual environment
US20050248566A1 (en) * 2004-04-05 2005-11-10 Vesely Michael A Horizontal perspective hands-on simulator
CN101065783A (en) * 2004-04-05 2007-10-31 迈克尔·A·韦塞利 Horizontal perspective display
US7486302B2 (en) 2004-04-14 2009-02-03 Noregin Assets N.V., L.L.C. Fisheye lens graphical user interfaces
US7787009B2 (en) * 2004-05-10 2010-08-31 University Of Southern California Three dimensional interaction with autostereoscopic displays
JP4474640B2 (en) * 2004-05-11 2010-06-09 株式会社セガ Image processing program, game processing program, and game information processing apparatus
US8106927B2 (en) 2004-05-28 2012-01-31 Noregin Assets N.V., L.L.C. Graphical user interfaces and occlusion prevention for fisheye lenses with line segment foci
JP2008506140A (en) * 2004-06-01 2008-02-28 マイケル エー. ベセリー Horizontal perspective display
US9317945B2 (en) * 2004-06-23 2016-04-19 Callahan Cellular L.L.C. Detail-in-context lenses for navigation
US7714859B2 (en) 2004-09-03 2010-05-11 Shoemaker Garth B D Occlusion reduction and magnification for multidimensional data presentations
WO2006058408A1 (en) 2004-09-21 2006-06-08 Timeplay Entertainment Corporation System, method and handheld controller for multi-player gaming
US20080214273A1 (en) * 2004-09-21 2008-09-04 Snoddy Jon H System, method and handheld controller for multi-player gaming
US7995078B2 (en) 2004-09-29 2011-08-09 Noregin Assets, N.V., L.L.C. Compound lenses for multi-source data presentation
US20060126926A1 (en) * 2004-11-30 2006-06-15 Vesely Michael A Horizontal perspective representation
US7580036B2 (en) * 2005-04-13 2009-08-25 Catherine Montagnese Detail-in-context terrain displacement algorithm with optimizations
US20060244831A1 (en) * 2005-04-28 2006-11-02 Kraft Clifford H System and method for supplying and receiving a custom image
US20060250391A1 (en) * 2005-05-09 2006-11-09 Vesely Michael A Three dimensional horizontal perspective workstation
WO2006121956A1 (en) * 2005-05-09 2006-11-16 Infinite Z, Inc. Biofeedback eyewear system
US8717423B2 (en) 2005-05-09 2014-05-06 Zspace, Inc. Modifying perspective of stereoscopic images based on changes in user viewpoint
JP4312737B2 (en) * 2005-05-13 2009-08-12 任天堂株式会社 GAME PROGRAM AND GAME DEVICE
US7875132B2 (en) * 2005-05-31 2011-01-25 United Technologies Corporation High temperature aluminum alloys
US7375678B2 (en) * 2005-06-29 2008-05-20 Honeywell International, Inc. Displaying obstacles in perspective view
US20070043466A1 (en) * 2005-08-18 2007-02-22 Vesely Michael A Stereoscopic display using polarized eyewear
US20070040905A1 (en) * 2005-08-18 2007-02-22 Vesely Michael A Stereoscopic display using polarized eyewear
WO2007033201A2 (en) * 2005-09-13 2007-03-22 Multimedia Games, Inc. System for presenting gaming results employing a gaming display interactive character
US8031206B2 (en) 2005-10-12 2011-10-04 Noregin Assets N.V., L.L.C. Method and system for generating pyramid fisheye lens detail-in-context presentations
US9166883B2 (en) 2006-04-05 2015-10-20 Joseph Robert Marchese Network device detection, identification, and management
US7983473B2 (en) 2006-04-11 2011-07-19 Noregin Assets, N.V., L.L.C. Transparency adjustment of a presentation
US9327191B2 (en) * 2006-05-08 2016-05-03 Nintendo Co., Ltd. Method and apparatus for enhanced virtual camera control within 3D video games or other computer graphics presentations providing intelligent automatic 3D-assist for third person viewpoints
WO2007146347A2 (en) * 2006-06-14 2007-12-21 Wms Gaming Inc. Wagering game with multiple viewpoint display feature
JP4125762B2 (en) * 2006-07-06 2008-07-30 株式会社スクウェア・エニックス Online video game control server
JP5013773B2 (en) * 2006-08-18 2012-08-29 パナソニック株式会社 In-vehicle image processing apparatus and viewpoint conversion information generation method thereof
US8277316B2 (en) 2006-09-14 2012-10-02 Nintendo Co., Ltd. Method and apparatus for using a common pointing input to control 3D viewpoint and object targeting
US20080215974A1 (en) * 2007-03-01 2008-09-04 Phil Harrison Interactive user controlled avatar animations
US8882594B2 (en) * 2007-04-05 2014-11-11 Microsoft Corporation Control scheme for real time strategy game
NZ582133A (en) * 2007-05-18 2012-12-21 Uab Research Foundation Virtual reality system that reders common view to multiple users for medical image manipulation
US9026938B2 (en) 2007-07-26 2015-05-05 Noregin Assets N.V., L.L.C. Dynamic detail-in-context user interface for application access and content access on electronic displays
US8834245B2 (en) * 2007-08-17 2014-09-16 Nintendo Co., Ltd. System and method for lock on target tracking with free targeting capability
JP5390093B2 (en) * 2007-12-21 2014-01-15 任天堂株式会社 GAME PROGRAM AND GAME DEVICE
EP2241357B1 (en) * 2008-02-15 2015-08-19 Sony Computer Entertainment Inc. Game device, game control method, and game control program
US8649554B2 (en) 2009-05-01 2014-02-11 Microsoft Corporation Method to control perspective for a camera-controlled computer
US8303387B2 (en) * 2009-05-27 2012-11-06 Zambala Lllp System and method of simulated objects and applications thereof
US20100306825A1 (en) 2009-05-27 2010-12-02 Lucid Ventures, Inc. System and method for facilitating user interaction with a simulated object associated with a physical location
US8717360B2 (en) * 2010-01-29 2014-05-06 Zspace, Inc. Presenting a view within a three dimensional scene
JP5573426B2 (en) * 2010-06-30 2014-08-20 ソニー株式会社 Audio processing apparatus, audio processing method, and program
JP5656514B2 (en) * 2010-08-27 2015-01-21 キヤノン株式会社 Information processing apparatus and method
JP5102868B2 (en) * 2010-09-09 2012-12-19 株式会社コナミデジタルエンタテインメント Game system
BR112013019302A2 (en) 2011-02-01 2018-05-02 Timeplay Entertainment Corporation multi-location interaction system and method for providing interactive experience to two or more participants located on one or more interactive nodes
US8786529B1 (en) 2011-05-18 2014-07-22 Zspace, Inc. Liquid crystal variable drive voltage
US9886552B2 (en) 2011-08-12 2018-02-06 Help Lighting, Inc. System and method for image registration of multiple video streams
JP5586545B2 (en) * 2011-09-09 2014-09-10 任天堂株式会社 GAME SYSTEM, PORTABLE GAME DEVICE, INFORMATION PROCESSOR CONTROL METHOD, AND INFORMATION PROCESSOR CONTROL PROGRAM
US20130293580A1 (en) 2012-05-01 2013-11-07 Zambala Lllp System and method for selecting targets in an augmented reality environment
US9020203B2 (en) 2012-05-21 2015-04-28 Vipaar, Llc System and method for managing spatiotemporal uncertainty
US9710968B2 (en) 2012-12-26 2017-07-18 Help Lightning, Inc. System and method for role-switching in multi-reality environments
US9940750B2 (en) 2013-06-27 2018-04-10 Help Lighting, Inc. System and method for role negotiation in multi-reality environments
US9888174B2 (en) 2015-10-15 2018-02-06 Microsoft Technology Licensing, Llc Omnidirectional camera with movement detection
US10277858B2 (en) 2015-10-29 2019-04-30 Microsoft Technology Licensing, Llc Tracking object of interest in an omnidirectional video
US11328155B2 (en) 2015-11-13 2022-05-10 FLIR Belgium BVBA Augmented reality labels systems and methods
GB2561746B (en) 2015-11-13 2022-02-09 Flir Systems Video sensor fusion and model based virtual and augmented reality systems and methods
CN105597311B (en) * 2015-12-25 2019-07-12 网易(杭州)网络有限公司 Camera control method and device in 3d game
US10824320B2 (en) * 2016-03-07 2020-11-03 Facebook, Inc. Systems and methods for presenting content
JP7140465B2 (en) * 2016-06-10 2022-09-21 任天堂株式会社 Game program, information processing device, information processing system, game processing method
JP6789830B2 (en) * 2017-01-06 2020-11-25 任天堂株式会社 Information processing system, information processing program, information processing device, information processing method
CN110546601B (en) * 2017-04-03 2023-09-26 索尼公司 Information processing device, information processing method, and program
CN108376424A (en) * 2018-02-09 2018-08-07 腾讯科技(深圳)有限公司 Method, apparatus, equipment and storage medium for carrying out view angle switch to three-dimensional virtual environment
US10846898B2 (en) * 2019-03-28 2020-11-24 Nanning Fugui Precision Industrial Co., Ltd. Method and device for setting a multi-user virtual reality chat environment
GB2598927B (en) * 2020-09-18 2024-02-28 Sony Interactive Entertainment Inc Apparatus and method for data aggregation

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5359703A (en) * 1990-08-02 1994-10-25 Xerox Corporation Moving an object in a three-dimensional workspace
WO1992009948A1 (en) * 1990-11-30 1992-06-11 Vpl Research, Inc. Improved method and apparatus for creating virtual worlds
US5590268A (en) * 1993-03-31 1996-12-31 Kabushiki Kaisha Toshiba System and method for evaluating a workspace represented by a three-dimensional model
US5347306A (en) * 1993-12-17 1994-09-13 Mitsubishi Electric Research Laboratories, Inc. Animated electronic meeting place
US5491743A (en) * 1994-05-24 1996-02-13 International Business Machines Corporation Virtual conference system and terminal apparatus therefor
US5736982A (en) * 1994-08-03 1998-04-07 Nippon Telegraph And Telephone Corporation Virtual space apparatus with avatars and speech
US6085256A (en) 1994-08-19 2000-07-04 Sony Corporation Cyber space system for providing a virtual reality space formed of three dimensional pictures from a server to a user via a service provider
US5913727A (en) * 1995-06-02 1999-06-22 Ahdoot; Ned Interactive movement and contact simulation game
US6139434A (en) * 1996-09-24 2000-10-31 Nintendo Co., Ltd. Three-dimensional image processing apparatus with enhanced automatic and user point of view control

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100355272C (en) * 2005-06-24 2007-12-12 清华大学 Synthesis method of virtual viewpoint in interactive multi-viewpoint video system
CN102067179A (en) * 2008-04-14 2011-05-18 谷歌公司 Swoop navigation
CN103164612A (en) * 2011-08-03 2013-06-19 迪士尼企业公司 Zone-based positioning for virtual worlds
CN112188922A (en) * 2018-05-21 2021-01-05 微软技术许可有限责任公司 Virtual camera placement system
CN112188922B (en) * 2018-05-21 2024-05-24 微软技术许可有限责任公司 Virtual Camera Placement System
CN111589114A (en) * 2020-05-12 2020-08-28 腾讯科技(深圳)有限公司 Virtual object selection method, device, terminal and storage medium
CN111589114B (en) * 2020-05-12 2023-03-10 腾讯科技(深圳)有限公司 Virtual object selection method, device, terminal and storage medium
CN115639976A (en) * 2022-10-28 2023-01-24 深圳市数聚能源科技有限公司 Multi-mode and multi-angle synchronous display method and system for virtual reality content
CN115639976B (en) * 2022-10-28 2024-01-30 深圳市数聚能源科技有限公司 Multi-mode multi-angle synchronous display method and system for virtual reality content

Also Published As

Publication number Publication date
WO1999035597A3 (en) 1999-10-14
KR20000076066A (en) 2000-12-26
WO1999035597A2 (en) 1999-07-15
EP0966716A2 (en) 1999-12-29
KR100597329B1 (en) 2006-07-10
JP2001515630A (en) 2001-09-18
US6241609B1 (en) 2001-06-05
JP4276704B2 (en) 2009-06-10
GB9800397D0 (en) 1998-03-04
CN1132117C (en) 2003-12-24

Similar Documents

Publication Publication Date Title
CN1132117C (en) Virtual environment viewpoint control
USRE38287E1 (en) Computer network data distribution and selective retrieval system
US8533580B1 (en) System and method of navigating linked web resources
US6331853B1 (en) Display control apparatus display control method and presentation medium
Benford et al. Networked virtual reality and cooperative work
Craig et al. Developing virtual reality applications: Foundations of effective design
US6704784B2 (en) Information processing apparatus and method, information processing system and program providing medium
Fishkin A taxonomy for and analysis of tangible interfaces
US7107549B2 (en) Method and system for creating and distributing collaborative multi-user three-dimensional websites for a computer system (3D Net Architecture)
MacIntyre et al. Future multimedia user interfaces
Damala et al. Merging augmented reality based features in mobile multimedia museum guides
JP2021051757A (en) Creative camera
US20230343056A1 (en) Media resource display method and apparatus, device, and storage medium
Liechti et al. A digital photography framework enabling affective awareness in home communication
Bovier et al. An interactive 3D holographic pyramid for museum exhibition
Snowdon et al. Inhabited information spaces: living with your data
Benford et al. Visualising and Populating the Web: Collaborative virtual environments for browsing, searching and inhabiting Webspace
JP2022008997A (en) Creative camera
JP2002132828A (en) Bookmark management system, computer-readable recording medium having the same recorded, and bookmark managing device
Benford et al. The populated web: Browsing, searching and inhabiting the WWW using collaborative virtual environments
WO2024139458A1 (en) Friend-finder content recommendation method and apparatus, device, and storage medium
Kerr et al. 3D-Web Page Usability Issues; Present and Future
Deligiannidis et al. The London walkthrough in an immersive digital library environment
Bönisch et al. A VRML-based Visualization of User-Vicinities in the WWW
Jacob et al. New Human-computer Interaction Techniques for the Digital Library

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20031224

Termination date: 20140107