CN117830567A - Position updating method, device, medium and program product for virtual scene

Info

Publication number
CN117830567A
Authority
CN
China
Prior art keywords
virtual
location
virtual character
character
scene
Prior art date
Legal status
Pending
Application number
CN202211194639.9A
Other languages
Chinese (zh)
Inventor
单卫华
闫达帅
贺中兴
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to CN202211194639.9A priority Critical patent/CN117830567A/en
Priority to PCT/CN2023/110379 priority patent/WO2024066723A1/en
Publication of CN117830567A publication Critical patent/CN117830567A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/216Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/56Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/20Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform
    • A63F2300/205Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform for detecting the geographical location of the game platform
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55Details of game data or player data management
    • A63F2300/5546Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/5573Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history player location
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/69Involving elements of the real world in the game world, e.g. measurement in live races, real video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Environmental & Geological Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide a location updating method for a virtual scene, an electronic device, a computer storage medium, and a program product. The method includes receiving a real location from a user device of a user while the user's virtual character is in an offline state in a virtual scene. The method also includes updating a virtual position of the virtual character in the virtual scene based on the received real location. In this way, after the virtual character goes offline, the virtual-world location associated with the virtual character may still be synchronized based on the user's real-world location. The timeliness and realism of the virtual scene can thus be enhanced, so that the immersive experience of the virtual scene can be significantly improved, thereby enhancing the user-friendliness of the virtual-world application.

Description

Position updating method, device, medium and program product for virtual scene
Technical Field
The present application relates to the field of information technology, and more particularly, to a position updating method for a virtual scene, an electronic device, a computer storage medium, and a program product.
Background
In the metaverse, as well as in some game scenarios, the virtual world may be a virtualization and digitization of the real world. In such virtual world scenarios, generating a mirror image of the real world via digital twin techniques can provide an immersive experience to people. With the development of big data and cloud computing technology, activities such as social entertainment in the real world can be mapped into application scenarios of the virtual world, and users' demands for immersion in the virtual world are also increasing.
Disclosure of Invention
The embodiment of the application provides a position updating scheme for a virtual scene.
In a first aspect, a method of location update for a virtual scene is provided. The method comprises the following steps: receiving a real location from a user device of a user during an offline state of a virtual character of the user in a virtual scene; and updating the virtual position of the virtual character in the virtual scene based on the received real position.
In this way, after the virtual character goes offline, the virtual-world location associated with the character may still be synchronized based on the user's real-world location, unaffected by the character being offline. On this basis, the timeliness and realism of the virtual scene can be enhanced, so that the immersive experience of the virtual scene can be significantly improved.
In some embodiments of the first aspect, the method further comprises: receiving an indication from the user device that the virtual character is online in the virtual scene; and in response to receiving the indication, determining three-dimensional display data for the virtual character for presentation on the user device based on the historical update record of virtual locations during the offline period of the virtual character. In this way, the activity scene during the virtual character's offline period can be restored by retrieving the relevant scene data, using the position records from the offline period as anchor points for the computation.
In some embodiments of the first aspect, determining three-dimensional display data for the virtual character comprises: determining the three-dimensional display data based on the historical update record of virtual locations during the offline period of the virtual character, a map resource library associated with the virtual scene, and a physics engine library. In this way, rich data is obtained from the map resource library according to the historical update record, and the representation of the virtual scene is enhanced using the physics engine library, so that the virtual character's activity scene can be rendered realistically.
In some embodiments of the first aspect, determining three-dimensional display data for the virtual character comprises: determining a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to the virtual position, wherein the pre-offline position is the last virtual position in the virtual scene before the virtual character last went offline. In this way, the online scene representation determined based on the position records from the offline period can faithfully restore the virtual character's movement track while offline, and is therefore more natural and realistic than an unsynchronized scene state from the last time the character went offline.
In some embodiments of the first aspect, determining three-dimensional display data for the virtual character further comprises: the three-dimensional display data is determined based on historical updated records of virtual locations of other virtual characters in the virtual scene during the offline period of the virtual character. In this way, the influence of a plurality of other virtual characters is considered when constructing a dynamic virtual representation during the offline period of the virtual character, further improving the realism of the virtual scene.
In some embodiments of the first aspect, the aforementioned user is a first user, the virtual character is a first virtual character, the user device is a first user device, the real location is a first real location, the virtual location is a first virtual location, and the method further comprises: receiving a second real location from a second user device of a second user while a second virtual character of the second user is in an online state in the virtual scene; updating a second virtual position of the second virtual character in the virtual scene based on the second real location; and determining three-dimensional display data for the second virtual character for presentation on the second user device based on the second virtual location, a map resource library associated with the virtual scene, and a physics engine library. As such, embodiments of the present disclosure may be applicable to multi-character online virtual scenes.
In some embodiments of the first aspect, determining three-dimensional display data for the second virtual character comprises: three-dimensional display data for the second virtual character is determined based on the historical update record of the first virtual location of the first virtual character.
In some embodiments of the first aspect, determining three-dimensional display data for the second virtual character based on the historical update record of the first virtual location comprises: three-dimensional display data including a three-dimensional display of the first virtual character is generated based on the history of the first virtual location. As such, even if a virtual character in a virtual scene is offline, it can still be visible and impact other virtual characters, enabling the real world to map more closely to the virtual scene.
In some embodiments of the first aspect, determining three-dimensional display data further comprises: three-dimensional display data is determined based on the privacy settings of the first virtual character and the second virtual character, respectively. In this way, on the one hand, the user can have more flexible control over the privacy of the virtual character. On the other hand, the server may also avoid some unnecessary computation and generation later by checking the privacy settings.
In some embodiments of the first aspect, determining three-dimensional display data for the second virtual character comprises: a dynamic representation of a change in a second virtual position of a second virtual character in the virtual scene is determined. In this way, the generated data can be used to present real-time dynamic changes of online personas on the user device as the user device moves, improving the immersive experience of the user.
In some embodiments of the first aspect, determining the dynamic representation of the change comprises: determining a path of change of a second virtual position of a second virtual character in the virtual scene; and determining a dynamic representation based on the path. As such, virtual objects around the path and their changes as the second virtual character moves may be included in the three-dimensional display data for presentation to the user, thereby improving the realism of the virtual scene.
In some embodiments of the first aspect, determining the dynamic representation of the change further comprises: based on the path, an interactive representation of the second virtual character with the virtual object in the virtual scene is determined. Thus, the immersive performance of the virtual scene is improved by considering interaction among multiple roles when the virtual scene is presented.
In some embodiments of the first aspect, determining the dynamic representation of the change further comprises: receiving, from the second user device, a gesture of the second user; and determining the dynamic representation of the change based on the gesture of the second user. In this way, various action gestures of the online user may be correspondingly reflected on the virtual object, further improving the immersive experience of the user.
In some embodiments of the first aspect, the virtual locations in the virtual scene and the real locations in the real world are mapped to each other. As such, embodiments of the present disclosure may provide an immersive and realistic online virtual experience for a virtual object, with the real location associated with the virtual character serving as an anchor point.
In a second aspect, an apparatus for location updating for a virtual scene is provided. The device comprises: a location receiving module configured to receive a real location from a user device of a user during an offline state of a virtual character of the user in a virtual scene; and a location updating module configured to update a virtual location of the virtual character in the virtual scene based on the received real location.
In some embodiments of the second aspect, the location receiving module comprises: an online indication module configured to receive an indication from the user device that the virtual character is online in the virtual scene; and the apparatus further comprises: a scene module configured to determine three-dimensional display data for the virtual character for presentation on the user device based on a historical updated record of virtual locations during offline of the virtual character in response to receiving the indication.
In some embodiments of the second aspect, the scene module further comprises: a library module configured to determine three-dimensional display data for the virtual character based on a historical update record of virtual locations during offline of the virtual character, a map resource library associated with the virtual scene, and a physics engine library.
In some embodiments of the second aspect, the scene module comprises: a dynamic representation module configured to determine a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to a virtual position, wherein the pre-offline position is a last virtual position in the virtual scene before the last offline of the virtual character.
In some embodiments of the second aspect, the scene module further comprises: and a multi-persona module configured to determine three-dimensional display data for the virtual character based on historical update records of virtual locations of other virtual characters in the virtual scene during offline of the virtual character.
In some embodiments of the second aspect, the aforementioned user is a first user, the virtual character is a first virtual character, the user device is a first user device, the real location is a first real location, the virtual location is a first virtual location, and the apparatus further comprises: a second location receiving module configured to receive a second real location from a second user device of a second user while a second virtual character of the second user is in an online state in the virtual scene; a second location updating module configured to update a second virtual location of the second virtual character in the virtual scene based on the second real location; and a second scene module configured to determine three-dimensional display data for the second virtual character for presentation on the second user device based on the second virtual location, a map resource library associated with the virtual scene, and a physics engine library.
In some embodiments of the second aspect, the second scene module comprises: and a second multi-persona module configured to determine three-dimensional display data for the second virtual persona based on the historical update record of the first virtual location of the first virtual persona.
In some embodiments of the second aspect, the second multi-persona module includes: and the other character generating module is configured to generate three-dimensional display data comprising three-dimensional display of the first virtual character based on the history update record of the first virtual position.
In some embodiments of the second aspect, the second scene module further comprises: and a privacy module configured to determine three-dimensional display data for the second virtual character based on the respective privacy settings of the first virtual character and the second virtual character.
In some embodiments of the second aspect, the second scene module further comprises: a second dynamic representation module configured to determine a dynamic representation of a change in the second virtual position of the second virtual character in the virtual scene.
In some embodiments of the second aspect, the second dynamic representation module comprises: a path planning module configured to determine a path of a change in a second virtual position of a second virtual character in the virtual scene; and the second dynamic representation module further comprises: a path dynamics module configured to determine a dynamic representation based on the path.
In some embodiments of the second aspect, the second dynamic representation module further comprises: the path-along interaction module is configured to determine an interaction representation of the second virtual character with the virtual object in the virtual scene based on the path.
In some embodiments of the second aspect, the apparatus further comprises a gesture receiving module configured to receive, from the second user device, a gesture of the second user; and the second scene module further comprises a gesture representation module configured to determine the dynamic representation of the change further based on the gesture of the second user.
In some embodiments of the second aspect, the virtual locations in the virtual scene and the real locations in the real world are mapped to each other.
In a third aspect, an electronic device is provided. The electronic device includes a processor and a memory having stored thereon computer instructions that, when executed by the processor, cause the electronic device to: receiving a real location from a user device of a user during an offline state of a virtual character of the user in a virtual scene; and updating the virtual position of the virtual character in the virtual scene based on the received real position.
In some embodiments of the third aspect, the actions further comprise: receiving an indication from the user device that the virtual character is online in the virtual scene; and in response to receiving the indication, determining three-dimensional display data for the virtual character for presentation on the user device based on the historical update record of the virtual location during the offline of the virtual character, the map resource library associated with the virtual scene, and the physics engine library.
In some embodiments of the third aspect, determining three-dimensional display data for the virtual character comprises: the three-dimensional display data is determined based on a history of updated records of virtual locations during offline of the virtual character, a map resource library associated with the virtual scene, and a physics engine library.
In some embodiments of the third aspect, determining three-dimensional display data for the virtual character comprises: a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to a virtual position is determined, wherein the pre-offline position is a last virtual position in the virtual scene before the last offline of the virtual character.
In some embodiments of the third aspect, determining three-dimensional display data for the virtual character further comprises: the three-dimensional display data is determined based on historical updated records of virtual locations of other virtual characters in the virtual scene during the offline period of the virtual character.
In some embodiments of the third aspect, the aforementioned user is a first user, the virtual character is a first virtual character, the user device is a first user device, the real location is a first real location, the virtual location is a first virtual location, and the actions further comprise: receiving a second real location from a second user device of a second user while a second virtual character of the second user is in an online state in the virtual scene; updating a second virtual position of the second virtual character in the virtual scene based on the second real location; and determining three-dimensional display data for the second virtual character for presentation on the second user device based on the second virtual location, a map resource library associated with the virtual scene, and a physics engine library.
In some embodiments of the third aspect, determining three-dimensional display data for the second virtual character comprises: three-dimensional display data for the second virtual character is determined based on the historical update record of the first virtual location of the first virtual character.
In some embodiments of the third aspect, determining three-dimensional display data for the second virtual character based on the historical update record of the first virtual location comprises: three-dimensional display data including a three-dimensional display of the first virtual character is generated based on the history of the first virtual location.
In some embodiments of the third aspect, determining the three-dimensional display data further comprises: three-dimensional display data is determined based on the privacy settings of the first virtual character and the second virtual character, respectively.
In some embodiments of the third aspect, determining three-dimensional display data for the second virtual character comprises: a dynamic representation of a change in a second virtual position of a second virtual character in the virtual scene is determined.
In some embodiments of the third aspect, determining the dynamic representation of the change comprises: determining a path of change of a second virtual position of a second virtual character in the virtual scene; and determining a dynamic representation based on the path.
In some embodiments of the third aspect, determining the dynamic representation of the change further comprises: based on the path, an interactive representation of the second virtual character with the virtual object in the virtual scene is determined.
In some embodiments of the third aspect, wherein determining the dynamic representation of the change further comprises: receiving a gesture from a second user device with respect to a second user; and determining a dynamic representation based on the gesture of the second user.
In some embodiments of the third aspect, the virtual locations in the virtual scene and the real locations in the real world are mapped to each other.
In a fourth aspect, a cluster of computing devices is provided, the cluster of computing devices comprising at least one computing device, each computing device comprising a processor and a memory; the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device to cause the cluster of computing devices to perform operations according to the method in the first aspect described above or any embodiment thereof.
In a fifth aspect, a computer-readable storage medium is provided. The computer-readable storage medium has stored thereon computer-executable instructions which, when executed by a processor, implement operations according to the method of the first aspect described above or any embodiment thereof.
In a sixth aspect, a computer program or computer program product is provided. The computer program or computer program product is tangibly stored on a computer-readable medium and comprises computer-executable instructions which, when executed, implement operations in accordance with the method in the first aspect or any embodiment thereof described above.
Drawings
The above and other features, advantages, and aspects of embodiments of the present application will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, wherein like or similar reference numerals designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which various embodiments of the present disclosure may be implemented;
FIG. 2 illustrates an example method flow diagram for location update for a virtual scene, according to some embodiments of the disclosure;
FIG. 3 illustrates a schematic interactive diagram of a process of determining three-dimensional display data when a virtual character is online, according to some embodiments of the present disclosure;
FIG. 4 illustrates a schematic diagram of an example movement path when a virtual character comes online, according to some embodiments of the present disclosure;
FIG. 5 illustrates a schematic diagram of another example movement path when a virtual character comes online, according to some embodiments of the present disclosure;
FIG. 6 illustrates a schematic diagram of yet another example movement path when a virtual character comes online, in accordance with some embodiments of the present disclosure;
FIG. 7 illustrates a flowchart of an example method of determining three-dimensional display data for an online persona in accordance with some embodiments of the disclosure;
FIG. 8 illustrates a schematic diagram of an example movement path of an online avatar in accordance with some embodiments of the present disclosure;
FIG. 9 illustrates a schematic block diagram of an apparatus for location update of a virtual scene, according to some embodiments of the present disclosure;
FIG. 10 shows a schematic block diagram of an example device that may be used to implement embodiments of the present disclosure;
FIG. 11 illustrates a schematic block diagram of an example cluster of computing devices that may be used to implement embodiments of the disclosure; and
FIG. 12 illustrates a schematic block diagram of an example implementation of a cluster of computing devices that may be used to implement embodiments of the disclosure.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present application are shown in the drawings, it is to be understood that the present application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present application will be understood more thoroughly and completely. It should be understood that the drawings and examples of the present application are for illustrative purposes only and are not intended to limit the scope of the present application.
In the description of the embodiments of the present application, the term "comprising" and similar terms should be understood as open-ended, i.e., "including, but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and the like may refer to different or the same objects. Further explicit and implicit definitions may also be included below.
In virtual reality applications such as parallel reality or augmented reality, virtual scenes are often composed by superimposing images of the real world. The virtual locations in these scenes map to actual locations in the real world, and the scenes typically contain the various objects found in the real world at the corresponding locations, such as buildings, greenery, parks, and other infrastructure.
In such virtual scenes based on mirroring the real world, location data may be used as key information for interaction with the virtual scene. The user's related activities in the real world may be mapped into a series of application scenarios of the virtual world based on the location data. On this basis, the position of a character object in the virtual world can typically be mapped to the location of a client with communication and positioning capabilities (e.g., the Global Positioning System (GPS)). For example, an individual walking in an actual scenic area may use a carried mobile phone to log into a virtual scene that includes that scenic area. In this scenario, the virtual character they use may represent the individual themselves. On this basis, when the individual is located at a particular attraction, their virtual character will also be located at the corresponding attraction in the virtual scene, based on the location of their handset.
In a traditional virtual scene application, when a character goes offline in a scene, the client typically uploads the character's current position data to a cloud server for archiving. Thereafter, when the character comes online again, the client reads the position data archived before the previous offline from the server. The character is thereby restored, in its previous state, at the virtual position it occupied before going offline. However, when the virtual character corresponds to an individual moving in the real world, the individual's position and state when the character comes online again may no longer match those at the previous offline, resulting in an unnatural user experience. Furthermore, characters in the virtual world may disappear or be frozen in place during offline periods. This can also degrade the timeliness and realism experienced by other online users in a multi-character online virtual scene that maps to the real world. As described above, the complete isolation of the virtual world from the real world while a virtual character is offline leaves the position data unsynchronized, which can adversely affect the immersive experience of virtual reality users.
To solve the above and other problems, the present disclosure provides a scheme for presenting a virtual scene, which can receive a real position from a user device of a user while the user's virtual character is in an offline state, and update a virtual position of the virtual character in the virtual scene based on the real position. In this way, after the virtual character goes offline, the virtual-world location associated with the character may still be synchronized based on the user's real-world location, independent of the character being offline. On this basis, the timeliness and realism of the virtual scene can be enhanced, so that the immersive experience of the virtual scene can be significantly improved, thereby increasing the user stickiness of the virtual-world application.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which various embodiments of the present disclosure may be implemented. As shown in FIG. 1, the example environment 100 includes a server 110, a user device 120-1, user devices 120-2, …, and a user device 120-N. For convenience of description below, user device 120-1, user devices 120-2, …, and user device 120-N are collectively or individually referred to as user device 120 in embodiments of the present disclosure.
In some embodiments, the server 110 may be a central device of the virtual scene operator. The server 110 may include various resources required to run the virtual scene, such as, but not limited to, basic data resources for constructing the virtual scene model, computing resources for constructing a real-time representation of the virtual scene for a given user, storage resources for storing user data of the virtual scene, and communication resources (not shown) for communicating with user devices and external content providers. It should be appreciated that although shown as a single device, the server 110 may be implemented in any other form suitable for performing the respective functions, such as multiple centralized devices, distributed devices, and/or devices deployed in the cloud.
In some embodiments, the user device 120 may be a device on which a client application of the virtual scene is installed. In some embodiments, the user device 120 may be implemented as an electronic device such as a smartphone, tablet, or wearable device. The user device 120 has positioning functions for acquiring its own real-world location, such as satellite positioning, Bluetooth positioning, and/or other suitable positioning functions. In addition, embodiments of the present disclosure do not limit the number of user devices. In some multi-character online virtual scenarios, the number of user devices may be on the order of thousands, tens of thousands, or more.
FIG. 1 also shows a user 130-1, users 130-2, …, and a user 130-N. The user 130-1, the users 130-2, …, and the user 130-N are collectively or individually referred to as a user 130 in embodiments of the present disclosure. The user 130 may bind his or her virtual character in the virtual scene to the user device 120 and carry the user device 120 while moving in the real world. For example, the user 130-1 may bind his or her virtual character 140-1 to the user device 120-1 and move in the real world carrying the user device 120-1. The user 130-2 may bind his or her virtual character 140-2 to the user device 120-2 and move in the real world carrying the user device 120-2. The user 130-N may bind his or her virtual character 140-N to the user device 120-N and move in the real world carrying the user device 120-N. The virtual character 140-1, the virtual characters 140-2, …, and the virtual character 140-N are collectively or individually referred to as a virtual character 140 in embodiments of the present disclosure.
The user device 120 may communicate with the server 110. It should be understood that embodiments of the present disclosure do not limit the manner of communication, which may be wired or wireless. Wired manners may include, but are not limited to, fiber optic connections, Universal Serial Bus (USB) connections, and the like; wireless manners may include, but are not limited to, mobile communication technologies (including, but not limited to, cellular mobile communications), Wi-Fi, Bluetooth, Point-to-Point (P2P), and the like. Furthermore, as the user device 120 moves, the manner in which it communicates with the server 110 may change, for example from cellular mobile network communication to Wi-Fi communication, and then at some point to communication over a wired connection.
The user device 120 may send a message (such as in the form of a client request) to the server 110. In some embodiments, the message may be sent via an explicit operation by the user (e.g., on a client application interface). For example, the user 130-1 associated with the user device 120-1 may log the account of the virtual character 140-1 into or out of the virtual scene, perform a manipulation action on a virtual object displayed on the interface, and so forth. Messages containing information about these operations may then be sent to the server 110.
In some embodiments, the message may also be sent periodically or in response to certain conditions, depending on settings for the avatar, without explicit manipulation by the user. For example, the user device 120 may send the actual position of the user device 120 to the server 110 at regular intervals, or send a new actual position to the server 110 when the displacement of the user device 120 reaches a certain value compared to the actual position sent last time.
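As a rough illustration of the reporting settings just described, the following Python sketch shows one way a client might decide when to upload a new real location. The polling period, reporting interval, displacement threshold, planar (x, y) coordinates, and the get_position/upload_location callbacks are all assumptions made for the sketch and are not specified by this application.

```python
import math
import time

# Hypothetical client-side reporting policy; the constants are illustrative only.
REPORT_INTERVAL_S = 60.0          # send at least once per minute
DISPLACEMENT_THRESHOLD_M = 25.0   # or as soon as the device has moved this far

def should_report(last_sent_pos, current_pos, last_sent_time, now):
    """Decide whether a new real location should be uploaded to the server."""
    if now - last_sent_time >= REPORT_INTERVAL_S:
        return True
    dx = current_pos[0] - last_sent_pos[0]
    dy = current_pos[1] - last_sent_pos[1]
    return math.hypot(dx, dy) >= DISPLACEMENT_THRESHOLD_M

def report_loop(get_position, upload_location, poll_period_s=5.0):
    """Poll the positioning module and upload whenever the policy is met."""
    last_sent_pos = get_position()
    last_sent_time = time.time()
    upload_location(last_sent_pos)
    while True:
        time.sleep(poll_period_s)
        current_pos = get_position()
        now = time.time()
        if should_report(last_sent_pos, current_pos, last_sent_time, now):
            upload_location(current_pos)
            last_sent_pos, last_sent_time = current_pos, now
```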
The server 110 may perform various operations such as storing, updating, analyzing, and calculating according to the various requests and other inputs received from the user device 120, so that the virtual scene can function normally. The server 110 may also send messages (such as responses to client requests) to the user device 120. In some embodiments, the server 110 may send to the user device 120 the virtual scene data that should be presented on the user device 120 based on the actual location of the user device 120. For example, while the virtual character is online via the user device 120, the server 110 may transmit to the user device 120, in streaming form, a dynamic representation of the character in the virtual scene that varies according to its location.
It should be understood that the architecture and functionality in the example environment 100 are described for illustrative purposes only and are not meant to suggest any limitation as to the scope of the disclosure. Also, other devices, systems, or components, etc., not shown, may also be present in the example environment 100. For example, user device 120 may communicate indirectly with server 110 via an edge server in its vicinity. In addition, embodiments of the present disclosure may also be applied in other environments having different structures and/or functions.
Fig. 2 illustrates a flowchart of an example method 200 for location update for a virtual scene, according to some embodiments of the present disclosure. The example method 200 may be performed, for example, by the server 110 as shown in fig. 1. It should be appreciated that method 200 may also include additional actions not shown, the scope of the present disclosure being not limited in this respect. The method 200 is described in detail below in connection with the example environment 100 of fig. 1.
At block 210, a real-world location is received from a user device of a user during an offline state of the user's avatar in a virtual scene. For example, during an offline state of the avatar 140 of the user 130 in the virtual scene, the server 110 may receive a real-world location from the user device 120 of the user 130. For example, the server 110 may be a device of the virtual scene operator and the user device 120 has a client application of the virtual scene.
As non-limiting examples, the virtual scene may be a virtual game scene, a virtual mirrored travel guide scene for a scenic area, or the like. In some embodiments, the virtual locations in the virtual scene may be mapped to real locations in the real world. In such embodiments, a given virtual location typically contains the various objects found at the corresponding location in the real world, such as buildings, greenery, parks, and other infrastructure, so that the virtual scene appears as a digital mirror image of the real world. In addition, these virtual scenes may also overlay, at virtual locations corresponding to particular real locations, virtual objects that are not present in the real world to augment the digital mirror image, such as, but not limited to, introductions to particular objects, game props tied to real locations, and online service portals for particular places such as stores. In this way, a user of a virtual scene may obtain an enhanced experience via the virtual scene when arriving at a particular location in the real world.
In some embodiments, server 110 may store information regarding the binding of user device 120 to avatar 140 in user data associated with avatar 140, e.g., based on existing settings. Then, during the time that the avatar 140 is offline, the server 110 may receive a real-world location from the user device 120 and identify from the user data that it is associated with the avatar 140 that is offline.
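A minimal sketch of the kind of server-side record this binding might live in is shown below; the field names, record class, and lookup logic are purely illustrative assumptions and are not defined by this application.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

# Hypothetical server-side user-data record; field names are assumptions for illustration.
@dataclass
class CharacterRecord:
    character_id: str
    bound_device_id: str
    online: bool = False
    virtual_position: Tuple[float, float] = (0.0, 0.0)
    # (update_time, virtual_position) entries; see the tracking-table sketch further below
    location_history: List[Tuple[float, Tuple[float, float]]] = field(default_factory=list)

def find_bound_character(records: Dict[str, CharacterRecord],
                         device_id: str) -> Optional[CharacterRecord]:
    """Identify which (possibly offline) virtual character a reporting device is bound to."""
    for record in records.values():
        if record.bound_device_id == device_id:
            return record
    return None
```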
At block 220, the virtual position of the virtual character in the virtual scene is updated based on the received real position. For example, the server 110 may update the virtual location of the virtual character 140 in the virtual scene based on the real location received at block 210.
In some embodiments, the virtual location in the virtual scene and the real location may use the same coordinate system, and the server 110 may update the value of the virtual location recorded in the user data associated with the virtual character 140 to the value of the received real location. In some embodiments, the server 110 may convert the received real location into a corresponding virtual location according to mapping rules between virtual locations and real locations, for updating the virtual location of the virtual character 140. In this way, the problem of unsynchronized location data caused by the isolation of the virtual world from the real world while a virtual character is offline can be solved, which supports an improved user experience for online virtual scene applications that use location as an interaction anchor. For example, when a user device 120 moves to a new location after its bound virtual character 140 goes offline at a certain location, the virtual character 140 likewise moves to the virtual location mapped from the new location in the virtual scene.
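The mapping rule itself is implementation-specific; a minimal sketch assuming a simple planar affine transform (scale plus offset) is given below. The scale and offset values are arbitrary placeholders, and the identity mapping mentioned above corresponds to the defaults.

```python
from typing import Tuple

# Illustrative mapping rule between real-world and virtual coordinates. A planar
# affine transform is assumed purely for the sketch; the application leaves the
# concrete mapping rule open (it may also simply be the identity mapping).
def real_to_virtual(real_xy: Tuple[float, float],
                    scale: float = 1.0,
                    offset: Tuple[float, float] = (0.0, 0.0)) -> Tuple[float, float]:
    """Map a real-world planar position to the corresponding virtual-scene position."""
    return (real_xy[0] * scale + offset[0],
            real_xy[1] * scale + offset[1])

def update_virtual_position(record, real_xy):
    """Block 220: overwrite the stored virtual position with the mapped real position."""
    record.virtual_position = real_to_virtual(real_xy)
```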
It should be appreciated that during the offline period of the virtual character 140, the server 110 may receive the real-world location from the user device 120 multiple times and correspondingly update the virtual location of the virtual character 140 multiple times. In some embodiments, the server 110 may store a historical update record of the virtual locations of the virtual character 140. For example, the server 110 may store a virtual location tracking table for the virtual character 140 and add a new entry for the virtual character 140 each time its location is updated. For example, an entry may include the updated virtual location and the corresponding update time. Embodiments of the present disclosure do not limit the particular manner in which the server 110 stores the historical update record. In this way, the server 110 can obtain the position trajectory of the virtual character 140.
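One possible in-memory form of such a tracking table is sketched below. The storage format is not specified by the application, so a per-character list of (update time, virtual position) entries is assumed here.

```python
import time
from collections import defaultdict
from typing import Dict, List, Optional, Tuple

# Hypothetical tracking table: character_id -> [(update_time, virtual_position), ...].
location_history: Dict[str, List[Tuple[float, Tuple[float, float]]]] = defaultdict(list)

def record_update(character_id: str,
                  virtual_position: Tuple[float, float],
                  update_time: Optional[float] = None) -> None:
    """Append a new entry each time the character's virtual position is updated."""
    t = update_time if update_time is not None else time.time()
    location_history[character_id].append((t, virtual_position))

def offline_trajectory(character_id: str, offline_at: float, online_at: float):
    """Return the entries recorded between going offline and coming back online."""
    return [(t, pos) for (t, pos) in location_history[character_id]
            if offline_at <= t <= online_at]
```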
In some such embodiments, the server 110 may receive an indication from the user device 120 that the avatar 140 is online in the virtual scene. For example, the server 110 may receive a login request for an account associated with the avatar 140 from a client application on the user device 120. In response to the received online indication, server 110 may determine three-dimensional display data for virtual character 140 for presentation on user device 120 based on the historical updated record of virtual locations during offline of virtual character 140.
In some embodiments, the server 110 may determine three-dimensional display data for the virtual character 140 for presentation on the user device 120 based on the historical update record of virtual locations during the offline period of the virtual character 140, a map resource library associated with the virtual scene, and a physics engine library. The map resource library may include various map data about the virtual objects in the virtual scene, such as, but not limited to, object names, locations, three-dimensional shapes, colors, various physical properties, and functions. In some embodiments, the map resource library may include real-world mapping data, such as real landscape data for buildings, greenbelts, and bodies of water. The map resource library may be used by the server 110 to build a basic rendering model of the virtual scene, plan a movement path of the virtual character, and so on. The physics engine library is used by the server 110 to calculate interaction representations and policies between virtual objects from their physical properties, such as representing collisions between virtual characters or between virtual characters and virtual facilities, determining whether a virtual character can pass through another object, and so forth.
In some embodiments, as part of determining the three-dimensional display data, the server 110 may determine a dynamic representation of the virtual character 140 moving in the virtual scene from a pre-offline position to the current virtual position, where the pre-offline position is the last virtual position in the virtual scene before the virtual character 140 last went offline. For example, the server 110 may restore the movement path during the offline period of the virtual character 140 based on the historical update record. The server 110 may then obtain corresponding scene data from the map data according to the path and use this data, along with model data for the virtual character 140, to calculate a three-dimensional dynamic rendering for the virtual character 140 coming online. In this process, the server 110 may also use the physics engine library to enhance the interactive representation of virtual objects, making the dynamic representation more realistic. For example, the server 110 may refine the three-dimensional display data by using the physics engine library to calculate the deformations that virtual objects produce when they come into contact with each other.
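The orchestration of these steps might look roughly as follows. The map_repo and physics objects stand in for the map resource library and the physics engine library, and their method names (smooth_path, scene_along, resolve) are invented for this sketch only; they are not APIs defined by this application.

```python
# Rough, assumed orchestration of the online-scene reconstruction described above.
def build_online_display_data(character, history, map_repo, physics):
    """Assemble three-dimensional display data for a character that has just come online."""
    waypoints = [pos for _, pos in history]          # virtual positions recorded while offline
    path = map_repo.smooth_path(waypoints)           # restore and smooth the movement path
    scene = map_repo.scene_along(path)               # roads, buildings and other landscape data
    interactions = physics.resolve(character, scene) # collisions, detours, deformations, etc.
    return {
        "character_id": character.character_id,
        "path": path,
        "scene": scene,
        "interactions": interactions,
    }
```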
The server 110 may then send the determined three-dimensional display data to the user device 120 for presentation to the user. In this way, the online scene representation determined based on the location records from the offline period is more natural and realistic than restoring an unsynchronized scene state from the last time the character went offline.
The specific content of the dynamic representation is related to the type of user device 120, the display mode of the client application, etc. For example, the user device 120 may have a graphical display interface and the three-dimensional display data may be a three-dimensional animated representation of the virtual character 140 moving to the current virtual position. For example, the user device 120 may be augmented reality glasses and the three-dimensional display data may be a first person perspective three-dimensional animated representation that moves to a current virtual position. The dynamic representation may also include a sound and haptic (e.g., vibration) representation.
Further, depending on the specific implementation and client configuration, etc., the server 110 may generate three-dimensional display data in different formats. For example, the server 110 may generate and send three-dimensional video to the user device. For example, server 110 may also generate a three-dimensional rendering file interpretable by the user device, and the three-dimensional rendering file is interpreted at the user device for presentation. Embodiments of the present disclosure are not limited to a particular form and content of dynamic representation, and server 110 may determine three-dimensional display data for different content for different client types and/or settings of the same virtual scene.
In some embodiments, the server 110 may also take into account the historical update records of the virtual positions of other virtual characters in the virtual scene during the offline period of the virtual character 140 when determining the three-dimensional display data. For example, in restoring the movement path of the virtual character 140, the server 110 may check whether the trajectory during the offline period of the virtual character 140 has a spatiotemporal intersection with those of other characters. The server 110 may then include dynamic representations of the other virtual characters having such intersections in the three-dimensional display data, such as, but not limited to, walking around the current virtual character 140, passing each other at track intersections, exchanging greetings, and/or the like. In some embodiments, the server 110 may also check the privacy settings of the various virtual characters when determining the three-dimensional display data. For example, the server 110 may consider only other virtual characters whose settings make them visible to the virtual character 140 and whom the virtual character 140 has chosen to see.
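A naive check of this kind might look as follows. The entry format follows the tracking-table sketch above, and the time/distance thresholds and privacy flag names are arbitrary assumptions rather than values or settings defined by this application.

```python
import math
from typing import List, Tuple

Entry = Tuple[float, Tuple[float, float]]  # (update_time, (x, y)), as in the tracking-table sketch

# Hypothetical thresholds; the application does not define concrete values.
MAX_TIME_GAP_S = 120.0
MAX_DISTANCE = 10.0

def trajectories_intersect(history_a: List[Entry], history_b: List[Entry]) -> bool:
    """Return True if the two characters were ever close in both time and space."""
    for t_a, (xa, ya) in history_a:
        for t_b, (xb, yb) in history_b:
            if abs(t_a - t_b) <= MAX_TIME_GAP_S and math.hypot(xa - xb, ya - yb) <= MAX_DISTANCE:
                return True
    return False

def include_other_character(settings_other: dict, settings_self: dict) -> bool:
    """Honour privacy settings: the other character must allow being seen, and this
    character must have opted in to seeing others. Flag names are illustrative only."""
    return settings_other.get("visible_to_others", True) and settings_self.get("show_others", True)
```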
In multi-user online virtual scene applications, the realism of the virtual scene can be further improved by considering the influence of multiple other virtual characters when constructing a dynamic virtual representation for a virtual character's offline period.
Fig. 3 illustrates a schematic interactive diagram of a process 300 of determining three-dimensional display data when a virtual character is online, according to some embodiments of the present disclosure. Process 300 relates to server 110 and user device 120 described with respect to fig. 1, and may be viewed as an example process in which server 110 interacts with user device 120 when using method 220 described with respect to fig. 2.
In process 300, the user device 120 sends 305 an indication to the server 110 that the virtual character 140 is going offline. For example, the user device 120 may send the offline indication in response to the virtual character 140 logging out of the virtual scene through the client application installed on the device. Optionally, the user device 120 may include in the offline indication the real location at the time the virtual character 140 goes offline.
After receiving the offline indication, the server 110 performs a server-side offline process 310 for the virtual character 140. For example, the server 110 may archive the various states of the virtual character 140 at the time it goes offline, and update the virtual location of the virtual character 140 based on the real location when it is available. For example, in a multi-character scenario, the server 110 may correspondingly update the client-side dynamic representations of other affected characters based on the virtual character 140 going offline, such as by displaying a hint animation or the like. For example, the server 110 may terminate a session with the user device 120 used for transferring streaming data about the online character.
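The server-side offline processing described above could be sketched roughly as below; the sessions object, notify_affected hook, and archive step are placeholders for whatever the actual implementation would use, not interfaces defined by this application.

```python
# Rough sketch of server-side offline processing (step 310); all helpers are assumptions.
def handle_offline(character, real_location, sessions, notify_affected):
    """Archive the character's state and wind down its online session."""
    character.online = False
    if real_location is not None:
        # identity mapping assumed here for brevity; see the mapping-rule sketch above
        character.virtual_position = tuple(real_location)
    archive_state(character)                     # persist the state at the moment of going offline
    notify_affected(character)                   # e.g. hint animation for other affected online characters
    sessions.terminate(character.character_id)   # stop streaming scene data to the user device

def archive_state(character) -> None:
    """Persist a snapshot of the character's state (storage backend out of scope here)."""
    pass
```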
Alternatively or additionally, the server 110 may determine three-dimensional display data, relating to the virtual character 140 going offline, that should be presented on the user device 120, and send 315 the three-dimensional data to the user device 120 for the user device 120 to present the offline context of the virtual character 140. The server 110 may determine this three-dimensional display data based on the last virtual location of the virtual character 140 before it went offline (as recorded, or as updated based on the real location in the offline indication), and/or the map resource library associated with the virtual scene and the physics engine library, in a manner similar to that described previously.
During the offline period of the avatar 140, the user device 120 may send 320 a new real-world location to the server 110. In some embodiments, the user device 120 may send the real location to the server 110 according to the settings for the avatar 140 after the avatar 140 is offline. For example, the setting may be to transmit the current location of the user device 120 itself to the server 110 every predetermined period of time.
As a non-limiting example, during the offline period of the virtual character 140, the user device 120 may no longer present the virtual scene to provide the virtual experience to the user 130. During this period, the user device 120 may be carried by the user 130 and used for other purposes, for example to experience another virtual scene. However, depending on the settings binding the virtual character 140 to the user device 120, the user device 120 may still send the real-world location to the server 110.
In response to receiving the message, the server 110 updates 325 the virtual position of the virtual character 140 in the virtual scene, as described above with respect to block 220, and is not described in detail herein for brevity.
In some embodiments, the server 110 may also process client three-dimensional display data regarding other online virtual characters based on the location updates during the offline period of the virtual character 140, for example when the offline virtual character 140 moves into the vicinity of another character that is online, so that the offline virtual character 140 may be presented on the user device on which that other character is logged in. Such an embodiment will be described in more detail later in connection with FIG. 4.
After being offline for a period of time, the virtual character 140 may come online again in the virtual scene. For example, the user 130 may log into the account associated with the virtual character 140 using the client application on the user device 120. When the virtual character 140 comes online, the user device 120 sends 330 an indication that the virtual character 140 is online (such as via an online request) to the server 110.
In response to receiving an indication from the user device 120 that the virtual character 140 is online, the server 110 determines 335 three-dimensional display data for the virtual character 140 for presentation on the user device 120. As previously described with respect to FIG. 2, the server 110 determines the three-dimensional display data based on the historical update record of virtual locations during the offline period of the virtual character 140, and/or the map resource library associated with the virtual scene and the physics engine library, which are not described again here for brevity.
The server 110 then transmits 340 the determined three-dimensional display data to the user device 120. The three-dimensional display data is presented 346 to the user 130 by the user device 120 through its output interface. For example, the user device 120 may play the three-dimensional display data as a video stream. In another example, the user device 120 may interpret and present three-dimensional display data as a three-dimensional rendering file. Embodiments of the present disclosure are not limited to a particular form of three-dimensional display data.
It should be appreciated that the above-described process 300 of determining three-dimensional display data when a virtual character comes online is merely illustrative, and that the process 300 may also include acts not shown or acts different from those shown in fig. 3. For example, the server 110 may authenticate the received online request. For example, while the virtual character 140 is offline, the user device 120 may send the real location to the server 110 multiple times, and the server 110 may correspondingly update the virtual location of the virtual character 140 in the virtual scene multiple times. For example, after the virtual character 140 comes online, a streaming session connection for transmitting virtual scene data may be established between the server 110 and the user device 120. Moreover, the acts in process 300 may also be performed by other devices; for example, the user may change the user device bound to the virtual character.
Fig. 4 illustrates a schematic diagram 400 of an example movement path for the dynamic representation when a virtual character comes online, according to some embodiments of the present disclosure. The path shown in diagram 400 is a non-limiting example of a movement path determined by the server 110 when determining the three-dimensional display data for the virtual character 140 coming online on the user device 120.
As shown in fig. 4, in this example, the virtual character 140 was located at virtual location 410 when it last went offline. During the offline period, the server 110 receives the real-world location associated with the virtual character multiple times and updates the virtual location of the virtual character 140 accordingly, e.g., to virtual location 430.
The virtual character 140 comes online again when it is located at virtual location 420. At this time, the server 110 may reconstruct the path 450 along which the virtual character 140 moved from virtual location 410 to virtual location 420 according to the plurality of virtual locations updated while the virtual character 140 was offline. The server 110 may adjust the path based on the map resource library so that it is more natural and smooth, without requiring that every historical virtual location lie strictly on the path 450, as illustrated by virtual location 430.
The server 110 may then further determine a dynamic representation of the virtual character 140 moving from virtual location 410 to virtual location 420. For example, the server 110 may determine, from the path 450 and the map resource library, the scene content of the dynamic representation, such as the roads, buildings, and other scenery that the virtual character 140 passes while walking along the path 450. For example, the server 110 may determine, from the physics engine library, an interactive representation of the virtual character 140 with other objects in the virtual scene as it walks along the path, such as a strategy for detouring around an obstacle that blocks its way. The server 110 may also determine the movement cadence of the virtual character 140 based on the historical update times in the virtual location update records, for example by scaling the intervals between updates. In this way, the server 110 can determine the three-dimensional display data to be presented on the user device 120 when the virtual character 140 comes online.
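A minimal sketch of this path reconstruction and cadence scaling is shown below; the record structure, the moving-average smoothing, and the playback parameters are assumptions for illustration, not the concrete method prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class LocationRecord:
    t: float  # historical update time
    x: float  # virtual x coordinate
    y: float  # virtual y coordinate

def reconstruct_path(records, playback_seconds=10.0, smooth_window=3):
    """Turn offline location updates into playback waypoints with scaled timing."""
    records = sorted(records, key=lambda r: r.t)
    # Smooth with a small moving average so waypoints need not lie exactly on the path.
    smoothed = []
    for i in range(len(records)):
        lo = max(0, i - smooth_window // 2)
        hi = min(len(records), i + smooth_window // 2 + 1)
        xs = [p.x for p in records[lo:hi]]
        ys = [p.y for p in records[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    # Scale the real update intervals into the playback duration to keep the movement cadence.
    total = records[-1].t - records[0].t or 1.0
    return [
        {"pos": pos, "play_at": (r.t - records[0].t) / total * playback_seconds}
        for r, pos in zip(records, smoothed)
    ]
```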
When reconstructing the offline path and further determining the dynamic display, the server 110 may also take into account the historical update records of other virtual characters (and the historical movement trajectories derived from them). For example, as shown in fig. 4, based on the path 460 indicated by the historical update records of another virtual character, the server 110 may determine that the movement trajectory of that character has a spatiotemporal intersection with the trajectory of the virtual character 140 at point 440. The server 110 may therefore determine that a dynamic representation of the other virtual character, and optionally an interactive representation with the virtual character 140 (such as noticing each other, greeting each other, etc.), should be included in the section of the three-dimensional display in which the virtual character 140 moves into the vicinity of that point.
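The spatiotemporal-intersection check described above could, for example, be sketched as follows; the distance and time thresholds are hypothetical values, since the disclosure does not fix concrete ones.

```python
import math

def spatiotemporal_intersections(path_a, path_b, max_dist=5.0, max_dt=120.0):
    """Find points where two recorded trajectories were close in both space and time.

    path_a / path_b: lists of (t, x, y) tuples from the historical update records.
    max_dist / max_dt: hypothetical closeness thresholds.
    """
    meetings = []
    for ta, xa, ya in path_a:
        for tb, xb, yb in path_b:
            if abs(ta - tb) <= max_dt and math.hypot(xa - xb, ya - yb) <= max_dist:
                meetings.append({"t": (ta + tb) / 2, "pos": ((xa + xb) / 2, (ya + yb) / 2)})
    return meetings
```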
Fig. 5 illustrates a schematic diagram 500 of an example movement path for the dynamic representation when a virtual character comes online, according to some embodiments of the present disclosure. The path shown in diagram 500 is a non-limiting example of a movement path determined by the server 110 when determining the three-dimensional display data for the virtual character 140 coming online on the user device 120.
In the example of fig. 5, the virtual character 140 is located at virtual location 510 when it goes offline and at virtual location 520 when it comes online. As shown, in determining the movement path of the virtual character 140, the server 110 may determine, based on the two virtual locations 530 and 540 that were updated successively during the offline period and the related data in the map resource library, that a straight path (or a similarly simple connecting path) between the two locations is not actually passable in the virtual scene. For example, the straight path may pass through the walls of a plurality of buildings.
Thus, the server 110 determines, based on the map resource library, that after moving to virtual location 530, the virtual character 140 continues to move along path segment 550 to virtual location 540. For example, the path segment 550 may follow available roads in the virtual scene. A path determined in this way is more realistic than one that simply passes through walls along a straight line, so that a more realistic dynamic representation can be generated.
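A simple sketch of such road-constrained routing is given below. The map resource library interface (is_walkable, road_graph, nearest_node) is a hypothetical assumption, and the breadth-first search merely stands in for whatever path finding the map resource library actually provides.

```python
from collections import deque

def plan_segment(start, goal, map_repo):
    """Route between two successively updated virtual locations."""
    if map_repo.is_walkable(start, goal):
        return [start, goal]  # the straight connection does not cross walls
    # Otherwise fall back to a breadth-first search over the road network.
    graph = map_repo.road_graph()                     # node -> iterable of neighbor nodes
    src, dst = map_repo.nearest_node(start), map_repo.nearest_node(goal)
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            break
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    if dst != src and dst not in prev:
        return [start, goal]  # no road route found; keep the direct segment
    # Reconstruct the road path and attach the exact endpoints.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return [start] + path[::-1] + [goal]
```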
Fig. 6 illustrates a schematic diagram 600 of an example movement path when a virtual character comes online, according to some embodiments of the present disclosure. The path shown in diagram 600 is a non-limiting example of a movement path determined by the server 110 when determining the three-dimensional display data for the virtual character 140 coming online on the user device 120.
In the example of fig. 6, the virtual character 140 is located at virtual location 610 when it goes offline and at virtual location 620 when it comes online. In determining the movement path of the virtual character 140, the server 110 may determine that the distance between two virtual locations updated successively during the offline period is of a different magnitude than the distances between the other successive virtual locations. For example, as shown by the path segment 650 indicated by the dashed line, virtual location 630 and virtual location 640 lie in two areas that are remote from each other (e.g., different blocks), while the remaining successive virtual locations all lie in the same area.
Thus, the server 110 may determine (from the distance alone, or in combination with the update times and/or the virtual assets possessed by the virtual character 140, etc.) that the virtual character 140 moves along path segment 650 from virtual location 630 to virtual location 640 in a different movement mode. For example, the server 110 may plan the path segment from virtual location 630 to virtual location 640 in a motor vehicle mode while determining the remaining path segments in a walking mode. On this basis, for the virtual character 140 coming online, the server 110 may generate, for example, the following dynamic representation: the virtual character 140 walks from virtual location 610 to virtual location 630, rides from virtual location 630 to virtual location 640, and then continues walking to virtual location 620.
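As a hedged illustration, segment-wise mode selection based on distance magnitude might be sketched as follows; the walking-distance threshold is a hypothetical value, and in practice update times or virtual assets could also feed into the decision as described above.

```python
import math

WALK_SEGMENT_LIMIT = 500.0  # hypothetical threshold separating "same area" from "different blocks"

def assign_movement_modes(virtual_locations):
    """Label each consecutive segment with a movement mode based on its length."""
    segments = []
    for (x1, y1), (x2, y2) in zip(virtual_locations, virtual_locations[1:]):
        dist = math.hypot(x2 - x1, y2 - y1)
        mode = "walking" if dist <= WALK_SEGMENT_LIMIT else "motor_vehicle"
        segments.append({"from": (x1, y1), "to": (x2, y2), "mode": mode})
    return segments
```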
It should be understood that the specific paths and number of virtual location points in fig. 4-6 are shown for illustration purposes only and are not intended to limit the embodiments of the present disclosure in any way. It should also be appreciated that for clarity of illustration, fig. 4-6 are not necessarily drawn to scale.
In some embodiments applied to multi-character virtual scenes, while some virtual characters are offline, other online virtual characters remain in the virtual scene. Fig. 7 illustrates a flowchart of an example method 700 of determining three-dimensional display data for an online virtual character, according to some embodiments of the present disclosure. Method 700 may be considered an optional addition to method 200 and may be performed, for example, by the server 110 shown in fig. 1, on the basis of performing method 200 for an offline virtual character. It should be appreciated that method 700 may also include additional actions not shown; the scope of the present disclosure is not limited in this respect. The method 700 is described in detail below in connection with the example environment 100 of fig. 1.
As previously described in connection with fig. 2, using the method 200, the server 110 may receive a first real-world location from the first user device 120-1 of the first user 130-1 while the first virtual character 140-1 of the first user 130-1 is in an offline state in the virtual scene. Based on the received first real-world location, the server 110 may update the virtual location of the first virtual character 140-1 in the virtual scene.
At block 710, the server 110 may also receive a second real-world location from the second user device 120-2 of the second user 130-2 while the second virtual character 140-2 is in an online state in the virtual scene. In some embodiments, the server 110 may receive the second real-world location via a message sent by a client application of the virtual scene installed on the user device 120-2.
At block 720, the server 110 may update the second virtual location of the second virtual character 140-2 in the virtual scene based on the second real location. Server 110 may update the second virtual location in a similar manner as previously described with respect to block 220.
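For illustration only, one way to map a reported real-world latitude/longitude to virtual scene coordinates is a simple anchored equirectangular projection, sketched below; the disclosure only requires that virtual locations map to real locations and does not prescribe this particular projection or its parameters.

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # approximate meters per degree of latitude

def real_to_virtual(lat, lon, anchor_lat, anchor_lon, meters_per_unit=1.0):
    """Map a real-world latitude/longitude to virtual scene coordinates around an anchor point."""
    dy = (lat - anchor_lat) * METERS_PER_DEG_LAT
    dx = (lon - anchor_lon) * METERS_PER_DEG_LAT * math.cos(math.radians(anchor_lat))
    return dx / meters_per_unit, dy / meters_per_unit
```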
At block 730, the server 110 may determine three-dimensional display data for the second virtual character 140-2 for presentation on the second user device 120-2. As indicated at block 730-1, the server 110 may determine the three-dimensional display data for the virtual character 140-2 based on the second virtual location, the map resource library associated with the virtual scene (as described above with respect to fig. 2), and the physics engine library. The specific content and format of the three-dimensional display data are related to the type of the user device 120-2, the display mode of the client application, and the like. For example, the three-dimensional display data generated for the virtual character 140-2 and the three-dimensional display data generated for the virtual character 140-1 may have different perspectives and scales.
In some embodiments, as part of determining the three-dimensional display data for the virtual character 140-2, the server 110 may determine a dynamic representation of the change in the second virtual location of the virtual character 140-2 in the virtual scene. For example, the server 110 may generate a real-time video stream as the location of the online virtual character 140-2 changes, for continuous transmission via the session connection with the user device 120-2, thereby providing a smooth dynamic scene display. For example, the server 110 may obtain the corresponding scene data from the map data based on the second virtual location and use that data, along with the last virtual location of the virtual character 140-2 and the character model data, to calculate a three-dimensional dynamic rendering of the most recent change for the virtual character 140-2.
In some embodiments, the server 110 may determine a path of the change in the second virtual location of the virtual character 140-2 in the virtual scene and determine the dynamic representation for the virtual character 140-2 based on that path. For example, the server 110 may determine that the virtual character 140-2 makes a turn while walking. As another example, the server 110 may determine that the virtual character 140-2 has not moved relative to its last location. The server 110 may then calculate, from the determined path, a dynamic representation of the virtual character 140-2 turning or pausing, for example in the form of a video stream.
In some embodiments, the server 110 may then determine, based on the determined path, an interactive representation of the virtual character 140-2 with virtual objects in the virtual scene. For example, the server 110 may determine, from the path and the scene data obtained from the map data, which objects in the scene the virtual character 140-2 will interact with, such as real-world objects mapped into the virtual scene, additional virtual objects in the scene, and other online characters. The server 110 may use the physics engine library to enhance the interactive representation between virtual objects. For example, the server 110 may calculate a rebound when the virtual character 140-2 bumps into another object as it walks forward.
In some embodiments, the server 110 may also receive, from the user device 120-2, a gesture of the user 130-2 and determine the dynamic representation based on the gesture of the user 130-2. For example, the server 110 may receive an operation of the user 130-2 on the client interface, such as clicking on a virtual object. For example, the server 110 may receive actual actions of the user 130-2 captured by sensors, such as turning, nodding, waving a hand, and so forth. The server 110 may then convert the received gesture into a dynamic representation of the virtual character 140-2.
In this process, the server 110 may also use the physics engine library to enhance the interactive representation between the virtual character and other virtual objects. For example, the server 110 may use the physics engine to determine a dynamic representation of the deformation of a flexible object as it is pinched and picked up by the virtual character 140-2.
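As a loose illustration of how received gestures might be turned into a character's dynamic representation, consider the sketch below; the gesture vocabulary, animation clip names, and state fields are all assumptions, and any physics-based enhancement of the resulting interactions is omitted.

```python
# Hypothetical mapping from received gesture events to character animation clips;
# the concrete gesture vocabulary and animation assets are not specified in this disclosure.
GESTURE_TO_ANIMATION = {
    "nod": "anim_nod_head",
    "wave": "anim_wave_hand",
    "turn_left": "anim_turn_left",
    "tap_object": "anim_reach_and_touch",
}

def gesture_to_representation(gesture_event, character_state):
    """Translate a user gesture into an update of the character's dynamic representation."""
    clip = GESTURE_TO_ANIMATION.get(gesture_event["type"])
    if clip is None:
        return character_state  # ignore unknown gestures
    updated = dict(character_state)
    updated["current_animation"] = clip
    updated["animation_target"] = gesture_event.get("target")  # e.g., a clicked virtual object
    return updated
```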
In this manner, the server 110 may anchor the real position associated with the virtual character, fuse various data sources and tools, project the position, movement, gestures, and the like of the online user onto the corresponding virtual character, and present to the user in real time the dynamic changes of, and the various interactions with, the virtual objects around the virtual character, thereby providing the user with an immersive and realistic online virtual experience.
In some embodiments, as shown in block 730-2, the server 110 may additionally determine the three-dimensional display data for the virtual character 140-2 based on the historical update records of the first virtual location of the virtual character 140-1 in the offline state. For example, the server 110 may determine, from the trajectory of the first virtual location, whether the virtual character 140-1 currently has an intersection with the virtual character 140-2, and adjust the dynamic representation of the virtual character 140-2 based on that intersection.
In some embodiments, as part of determining the three-dimensional display data for the virtual character 140-2, the server 110 may generate three-dimensional display data that includes a three-dimensional display of the virtual character 140-1, based on the historical update records of the first virtual location. For example, upon determining that the virtual character 140-1 currently intersects with the virtual character 140-2, the server 110 may determine, based on the historical update records of the first virtual location, the direction of travel and the dynamic representation of the virtual character 140-1 within the dynamic display for the virtual character 140-2, such as entering or leaving the range of the scene included in the three-dimensional display data for the virtual character 140-2, walking toward the virtual character 140-2, or walking in front of the virtual character 140-2. For example, the server 110 may also determine, on this basis, an interactive representation between the two virtual characters, such as a wave of greeting when they meet face to face.
In this way, even after a virtual character in the virtual scene goes offline, that character can still move in the virtual scene, remain visible to other virtual characters, and optionally affect them, thereby enabling the real world to be mapped more closely to the virtual scene and improving the user's immersive experience.
In some embodiments, the server 110 may also determine the three-dimensional display data for the virtual character 140-2 based on the respective privacy settings of the virtual character 140-1 and the virtual character 140-2. For example, the server 110 may determine, based on the settings of the virtual character 140-1, whether the virtual character 140-1 is currently visible to the virtual character 140-2, and may determine, based on the settings of the virtual character 140-2, whether the virtual character 140-2 is set to see other characters such as the virtual character 140-1. On this basis, the server 110 may decide whether to consider the virtual character 140-1 when determining the three-dimensional display data for the virtual character 140-2, e.g., whether to compare the position of the virtual character 140-1 with that of the virtual character 140-2.
Thus, on the one hand, the privacy of virtual characters can be controlled more flexibly by users, improving the user experience. On the other hand, by checking the privacy settings, the server can also avoid unnecessary subsequent computation and generation, thereby improving virtual scene performance.
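Purely as an illustration of the visibility decision described above, a sketch with assumed setting names follows; the disclosure does not name concrete privacy fields.

```python
def should_include(other_settings: dict, viewer_settings: dict) -> bool:
    """Decide whether an offline character should be considered when rendering for a viewer.

    The setting names below are assumptions; the disclosure only states that
    both characters' privacy settings are consulted.
    """
    visible_to_others = other_settings.get("visible_when_offline", True)
    wants_to_see_others = viewer_settings.get("show_other_characters", True)
    return visible_to_others and wants_to_see_others
```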
It should be appreciated that although the various actions of method 200 and method 700 are described above with respect to virtual characters 140-1 and 140-2, more online and offline virtual characters may be present in the virtual scene, and the server 110 may also take multiple other online and offline virtual characters into account when generating the three-dimensional display data for one of the virtual characters.
Fig. 8 illustrates a schematic diagram 800 of an example movement path of an online virtual character, according to some embodiments of the present disclosure. The path shown in diagram 800 is a non-limiting example of a movement path that the server 110 determines when updating the location of the online virtual character 140-2 for presentation on the user device 120-2.
As shown in fig. 8, the virtual character 140-2 was previously located at virtual location 810. The server 110 then receives the real location associated with the virtual character 140-2 from the user device 120-2 and correspondingly determines that the virtual character 140-2 should move to the corresponding virtual location 820. On this basis, the server 110 determines, from the map resource library, path 840 as the movement path of the virtual character 140-2 (e.g., along a road, and/or without collisions, etc.).
The server 110 may take other offline characters into account when determining the movement path. For example, the server 110 may determine that an offline virtual character (e.g., the virtual character 140-1 bound to the user device 120-1 as previously described) is visible to the virtual character 140-2 and is moving toward virtual location 830. The server 110 may then have path 840 bypass the offline virtual character 140-1. In addition, when generating the dynamic representation of the change of the virtual character 140-2, the server 110 may also generate an interactive representation thereof with the virtual character 140-1, as described above with respect to block 730-2.
It should be understood that the specific paths and number of virtual location points in fig. 8 are shown for illustration purposes only and are not intended to limit the embodiments of the present disclosure in any way. It should also be appreciated that for clarity of illustration, fig. 8 is not necessarily drawn to scale.
Fig. 9 illustrates a schematic block diagram of an apparatus 900 for location update for a virtual scene, according to some embodiments of the present disclosure. The apparatus 900 may be implemented as or included in the server 110 of fig. 1. Apparatus 900 may include a plurality of modules for performing corresponding acts in method 200, such as discussed in fig. 2.
As shown in fig. 9, the apparatus 900 includes a location receiving module 910 and a location updating module 920. The location receiving module 910 is configured to receive a real location from a user device of a user during an offline state of a virtual character of the user in a virtual scene. The location update module 920 is configured to update the virtual location of the virtual character in the virtual scene based on the received real location.
In some embodiments, the location receiving module 910 includes: an online indication module configured to receive an indication from the user device that the virtual character is online in the virtual scene; and the apparatus further comprises: a scene module configured to determine, in response to receiving the indication, three-dimensional display data for the virtual character for presentation on the user device based on the historical update records of the virtual location during the offline period of the virtual character.
In some embodiments, the scene module includes: a library module configured to determine the three-dimensional display data for the virtual character based on the historical update records of the virtual location during the offline period of the virtual character, a map resource library associated with the virtual scene, and a physics engine library.
In some embodiments, the scene module includes: a dynamic representation module configured to determine a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to the virtual location, wherein the pre-offline position is the last virtual location in the virtual scene before the virtual character last went offline.
In some embodiments, the scene module further comprises: a multi-persona module configured to determine the three-dimensional display data for the virtual character based on historical update records of virtual locations of other virtual characters in the virtual scene during the offline period of the virtual character.
In some embodiments, the aforementioned user is a first user, the virtual character is a first virtual character, the user device is a first user device, the real location is a first real location, the virtual location is a first virtual location, and the apparatus further comprises: a second location receiving module configured to receive a second real location from a second user device of a second user while a second virtual character of the second user is in an online state in the virtual scene; a second location updating module configured to update a second virtual location of the second virtual character in the virtual scene based on the second real location; and a second scene module configured to determine three-dimensional display data for the second virtual character for presentation on the second user device based on the second virtual location, a map resource library associated with the virtual scene, and a physics engine library.
In some embodiments, the second scene module comprises: a second multi-persona module configured to determine the three-dimensional display data for the second virtual character based on the historical update record of the first virtual location of the first virtual character.
In some embodiments, the second multi-persona module includes: an other-character generating module configured to generate three-dimensional display data comprising a three-dimensional display of the first virtual character based on the historical update record of the first virtual location.
In some embodiments, the second scene module further comprises: a privacy module configured to determine the three-dimensional display data for the second virtual character based on the respective privacy settings of the first virtual character and the second virtual character.
In some embodiments, the second scene module further comprises: a second dynamic representation module configured to determine a dynamic representation of a change in the second virtual location of the second virtual character in the virtual scene.
In some embodiments, the second dynamic representation module comprises: a path planning module configured to determine a path of a change in a second virtual position of a second virtual character in the virtual scene; and the second dynamic representation module further comprises: a path dynamics module configured to determine a dynamic representation based on the path.
In some embodiments, the second dynamic representation module further comprises: a path interaction module configured to determine, based on the path, an interactive representation of the second virtual character with a virtual object in the virtual scene.
In some embodiments, the apparatus further comprises a gesture receiving module configured to receive a gesture of the second user from the second user device; and the second scene module further includes a gesture representation module configured to determine the dynamic representation based on the gesture of the second user.
In some embodiments, the virtual locations in the virtual scene are mapped to real locations in the real world.
The location receiving module 910 and the location updating module 920 may each be implemented by software or by hardware. Illustratively, the implementation is described below by taking the location receiving module 910 as an example. Similarly, the implementation of the location updating module 920 may refer to that of the location receiving module 910.
As an example in which a module is a software functional unit, the location receiving module 910 may include code running on a computing instance. The computing instance may include at least one of a physical host (computing device), a virtual machine, a container, and the like, and there may be one or more such computing instances. For example, the location receiving module 910 may include code running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers used to run the code may be distributed in the same region (region) or in different regions. Further, the multiple hosts/virtual machines/containers used to run the code may be distributed in the same availability zone (availability zone, AZ) or in different AZs, each AZ comprising one data center or multiple geographically close data centers. Typically, a region may comprise a plurality of AZs.
Also, multiple hosts/virtual machines/containers for running the code may be distributed in the same virtual private cloud (virtual private cloud, VPC) or in multiple VPCs. In general, one VPC is disposed in one region, and a communication gateway is disposed in each VPC for implementing inter-connection between VPCs in the same region and between VPCs in different regions.
As an example in which a module is a hardware functional unit, the location receiving module 910 may include at least one computing device, such as a server. Alternatively, the location receiving module 910 may be a device implemented using an application-specific integrated circuit (ASIC), a programmable logic device (programmable logic device, PLD), or the like. The PLD may be implemented as a complex programmable logic device (complex programmable logical device, CPLD), a field-programmable gate array (FPGA), generic array logic (generic array logic, GAL), or any combination thereof.
The location receiving module 910 may include multiple computing devices, which may be distributed in the same region or in different regions. The multiple computing devices included in the location receiving module 910 may be distributed in the same AZ or in different AZs. Likewise, the multiple computing devices included in the location receiving module 910 may be distributed in the same VPC or in multiple VPCs. The multiple computing devices may be any combination of computing devices such as servers, ASICs, PLDs, CPLDs, FPGAs, and GALs. It should be noted that, in other embodiments, the location receiving module 910 may be configured to perform any of the processes and actions of the server 110 described in connection with fig. 2-8, and the location updating module 920 may likewise be configured to perform any of the processes and actions of the server 110 described in connection with fig. 2-8. The steps that the location receiving module 910 and the location updating module 920 are responsible for implementing may be specified as needed, with the two modules respectively performing different processes and actions of the server 110 described in connection with fig. 2-8, so as to implement the full functionality of the apparatus 900.
Embodiments of the present disclosure also provide a computing device 1000. As shown in fig. 10, the computing device 1000 includes: a bus 1002, a processor 1004, a memory 1006, and a communication interface 1008. The processor 1004, the memory 1006, and the communication interface 1008 communicate via the bus 1002. The computing device 1000 may be a server or a terminal device. It should be understood that the present application does not limit the number of processors or memories in the computing device 1000.
The bus 1002 may be a peripheral component interconnect (peripheral component interconnect, PCI) bus, an extended industry standard architecture (extended industry standard architecture, EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, only one line is shown in fig. 10, but this does not mean that there is only one bus or only one type of bus. The bus 1002 may include a pathway for transferring information between the various components of the computing device 1000 (e.g., the memory 1006, the processor 1004, and the communication interface 1008).
The processor 1004 may include any one or more of a central processing unit (central processing unit, CPU), a graphics processor (graphics processing unit, GPU), a Microprocessor (MP), or a digital signal processor (digital signal processor, DSP).
The memory 1006 may include volatile memory, such as random access memory (random access memory, RAM). The memory 1006 may also include non-volatile memory (non-volatile memory), such as read-only memory (read-only memory, ROM), flash memory, a mechanical hard disk drive (hard disk drive, HDD), or a solid state drive (solid state drive, SSD).
The memory 1006 has stored therein executable program code that is executed by the processor 1004 to implement the functions of the location receiving module 910 and the location updating module 920, respectively, so as to implement, for example, the methods 200 and 700. That is, the memory 1006 may have stored thereon instructions for performing the methods and functions of any of the embodiments described above in relation to the server 110.
Communication interface 1008 enables communication between computing device 1000 and other devices or communication networks using a transceiver module such as, but not limited to, a network interface card, transceiver, or the like.
Embodiments of the present disclosure also provide a computing device cluster 1100. The cluster of computing devices includes at least one computing device. The computing device may be a server, such as a central server, an edge server, or a local server in a local data center. In some embodiments, the computing device may also be a terminal device such as a desktop, notebook, or smart phone.
As shown in fig. 11, the computing device cluster includes at least one computing device 1000. The same instructions for performing the methods and functions described in any of the embodiments above with respect to server 110 may be stored in memory 1006 in one or more computing devices 1000 in a computing device cluster.
In some possible implementations, some of the instructions for performing the methods and functions, respectively, described above in any of the embodiments involving server 110, may also be stored in memory 1006 of one or more computing devices 1000 in the computing device cluster. In other words, a combination of one or more computing devices 1000 may collectively execute instructions for performing the methods and functions of server 110.
It should be noted that, the memories 1006 in different computing devices 1000 in the computing device cluster may store different instructions for performing part of the functions of the apparatus 900. That is, the instructions stored by the memory 1006 in the different computing devices 1000 may implement the functionality of one or more modules or sub-modules of the location receiving module 910 and the location updating module 920 (and in some embodiments the scene module).
In some possible implementations, one or more computing devices in the computing device cluster may be connected through a network. The network may be a wide area network, a local area network, or the like. Fig. 12 shows one possible implementation 1200. As shown in fig. 12, two computing devices 1000A and 1000B are connected through a network 1210. Specifically, the connection to the network is made through a communication interface in each computing device. In this type of possible implementation, for example, instructions for performing the functions of the location receiving module 910 are stored in the memory 1006 of the computing device 1000A, while instructions for performing the functions of the location update module 920 are stored in the memory 1006 of the computing device 1000B.
The connection manner between the computing devices in the cluster shown in fig. 12 may take into account that the method provided herein with respect to the server 110 needs to store a large amount of user data and to perform intensive real-time or near-real-time computation; it may therefore be considered to have the functionality implemented by the location update module 920 performed by the computing device 1000B.
It should be appreciated that the functionality of computing device 1000A shown in fig. 12 may also be performed by multiple computing devices 1000. Likewise, the functionality of computing device 1000B may also be performed by multiple computing devices 1000. Embodiments of the present disclosure also provide a computer program product containing instructions that, when run on a computer, cause the computer to perform the methods and functions of any of the embodiments described above involving the server 110 or the user device 120.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer instructions that, when executed by a processor, cause the processor to perform the methods and functions of any of the embodiments described above with respect to the server 110 or the user device 120.
In general, the various embodiments of the disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software, which may be executed by a controller, microprocessor or other computing device. While various aspects of the embodiments of the disclosure are illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium. The computer program product comprises computer executable instructions, such as instructions included in program modules, being executed in a device on a real or virtual processor of a target to perform the processes/methods as described above with reference to the figures. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. In various embodiments, the functionality of the program modules may be combined or split between program modules as desired. Machine-executable instructions for program modules may be executed within local or distributed devices. In distributed devices, program modules may be located in both local and remote memory storage media.
Computer program code for carrying out methods of the present disclosure may be written in one or more programming languages. These computer program code may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the computer or other programmable data processing apparatus, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server.
In the context of this disclosure, computer program code or related data may be carried by any suitable carrier to enable an apparatus, device, or processor to perform the various processes and operations described above. Examples of carriers include signals, computer readable media, and the like. Examples of signals may include electrical, optical, radio, acoustical or other form of propagated signals, such as carrier waves, infrared signals, etc.
A computer readable medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device, or a data storage device, such as a data center, containing one or more available media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More detailed examples of a computer-readable storage medium include an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical storage device, a magnetic storage device, or any suitable combination thereof.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps. It should also be noted that the features and functions of two or more devices according to the present disclosure may be embodied in one device. Conversely, the features and functions of one device described above may be further divided into multiple devices.
The foregoing has described implementations of the present disclosure, and the foregoing description is exemplary, not exhaustive, and not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.

Claims (18)

1. A method for updating a location for a virtual scene, comprising:
receiving a real location from a user device of a user during an offline state of a virtual character of the user in the virtual scene; and
updating a virtual location of the virtual character in the virtual scene based on the received real location.
2. The method as recited in claim 1, further comprising:
receiving an indication from the user device that the virtual character is online in the virtual scene; and
in response to receiving the indication, determining three-dimensional display data for the virtual character for presentation on the user device based on a historical update record of the virtual location during an offline period of the virtual character.
3. The method of claim 2, wherein determining three-dimensional display data for the virtual character for presentation on the user device comprises:
determining the three-dimensional display data based on the historical update record, a map resource library associated with the virtual scene, and a physics engine library.
4. The method of claim 2, wherein determining three-dimensional display data for the virtual character for presentation on the user device comprises:
determining a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to the virtual location, wherein the pre-offline position is the last virtual location in the virtual scene before the virtual character last went offline.
5. The method of claim 2, wherein determining three-dimensional display data for the virtual character for presentation on the user device further comprises:
determining the three-dimensional display data based on historical update records of virtual locations of other virtual characters in the virtual scene during the offline period of the virtual character.
6. The method of claim 1, wherein the user is a first user, the avatar is a first avatar, the user device is a first user device, the real location is a first real location, the virtual location is a first virtual location, and the method further comprises:
receiving a second real location from a second user device of a second user while a second virtual character of the second user is in an online state in the virtual scene;
updating a second virtual location of the second virtual character in the virtual scene based on the second real location; and
determining three-dimensional display data for the second virtual character for presentation on the second user device based on the second virtual location, a map resource library associated with the virtual scene, and a physics engine library.
7. The method of claim 6, wherein determining three-dimensional display data for the second virtual character for presentation on the second user device comprises:
determining the three-dimensional display data based on the second virtual location, the map resource library, the physics engine library, and a historical update record of the first virtual location of the first virtual character.
8. The method of claim 7, wherein determining the three-dimensional display data based on the historical update record of the first virtual location comprises:
generating the three-dimensional display data including a three-dimensional display of the first virtual character based on the historical update record of the first virtual location.
9. The method of claim 7, wherein determining three-dimensional display data for the second virtual character for presentation on the second user device further comprises:
determining the three-dimensional display data based on respective privacy settings of the first virtual character and the second virtual character.
10. The method of claim 6, wherein determining three-dimensional display data for the second virtual character for presentation on the second user device comprises:
determining a dynamic representation of a change in the second virtual location of the second virtual character in the virtual scene.
11. The method of claim 10, wherein determining the dynamic representation of the change in the second virtual location of the second virtual character in the virtual scene comprises:
determining a path of the change in the second virtual location of the second virtual character in the virtual scene; and
determining the dynamic representation based on the path.
12. The method of claim 11, wherein determining the dynamic representation of the change in the second virtual location of the second virtual character in the virtual scene further comprises:
determining, based on the path, an interactive representation of the second virtual character with a virtual object in the virtual scene.
13. The method of claim 10, wherein determining the dynamic representation of the change in the second virtual location of the second virtual character in the virtual scene further comprises:
receiving a gesture of the second user from the second user device; and
determining the dynamic representation based on the gesture of the second user.
14. The method of claim 1, wherein the virtual locations in the virtual scene are mapped to real locations in the real world.
15. An electronic device comprising a processor and a memory having stored thereon computer instructions that, when executed by the processor, cause the electronic device to perform the method of any of claims 1 to 14.
16. A cluster of computing devices, comprising at least one computing device, each computing device comprising a processor and a memory;
the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device to cause the cluster of computing devices to perform the method of any one of claims 1 to 14.
17. A computer readable storage medium storing computer executable instructions which when executed by a processor implement the method of any one of claims 1 to 14.
18. A computer program product comprising computer executable instructions which, when executed, implement the method according to any one of claims 1 to 14.