WO2024066723A1 - Position update method, device, medium, and program product for a virtual scene - Google Patents

Position update method, device, medium, and program product for a virtual scene

Info

Publication number
WO2024066723A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
virtual character
character
scene
user
Prior art date
Application number
PCT/CN2023/110379
Other languages
English (en)
French (fr)
Inventor
单卫华
闫达帅
贺中兴
Original Assignee
Huawei Cloud Computing Technologies Co., Ltd. (华为云计算技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co., Ltd. (华为云计算技术有限公司)
Publication of WO2024066723A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/216 Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the present application relates to the field of information technology, and more specifically, to a method for updating the position of a virtual scene, an electronic device, a computer storage medium, and a program product.
  • the virtual world can be the virtualization and digitization of the real world.
  • the digital twin technology can generate a mirror image of the real world, which can provide people with an immersive experience.
  • activities in the real world, such as social entertainment, can be mapped to application scenarios of the virtual world, and users increasingly expect a high degree of immersion in the virtual world.
  • the embodiment of the present application provides a position update solution for a virtual scene.
  • a method for updating a position of a virtual scene comprises: receiving a real position from a user device of a user while a virtual character of the user is in an offline state in the virtual scene; and updating a virtual position of the virtual character in the virtual scene based on the received real position.
  • the virtual world position of the user associated with the virtual character can still be synchronized based on the user's real world position, without being affected by the virtual character being offline.
  • the real-time feel and realism of the virtual scene can be enhanced, so that the immersive experience of the virtual scene can be significantly improved.
  • the method further includes: receiving an indication from a user device that a virtual character is online in a virtual scene; and in response to receiving the indication, determining three-dimensional display data for the virtual character for presentation on the user device based on a historical update record of a virtual position during an offline period of the virtual character. In this way, the position record during the offline period is used as an anchor point to retrieve relevant scene data for calculation, so that the activity scene of the virtual character during the offline period can be restored.
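The coming-online step above can be sketched in code: the offline-period position record serves as a set of anchor points from which scene data is retrieved frame by frame. This is an illustrative sketch, not the patent's implementation; `load_scene_tile` is a hypothetical stand-in for a query against the map resource library.

```python
# Hypothetical helper: look up scene resources near a virtual position.
# In the patent's terms this would query the map resource library.
def load_scene_tile(virtual_pos):
    return {"pos": virtual_pos, "objects": ["building", "park"]}

def build_replay(offline_history):
    """Turn the offline-period position update record into a time-ordered
    frame list that can restore the character's activity scene on the
    user device when the character comes back online."""
    frames = []
    for timestamp, virtual_pos in sorted(offline_history):
        tile = load_scene_tile(virtual_pos)
        frames.append({"t": timestamp, "anchor": virtual_pos, "scene": tile})
    return frames

# historical update record kept while the character was offline
history = [(2000, (31.24, 121.48)), (1000, (31.23, 121.47))]
replay = build_replay(history)
print([f["t"] for f in replay])  # frames ordered by time: [1000, 2000]
```

In a full system each frame would feed the physics engine library to produce the actual three-dimensional display data; here the "scene" field only marks where that data would come from.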
  • determining the three-dimensional display data for the virtual character includes: determining the three-dimensional display data based on a historical update record of the virtual position during the offline period of the virtual character, a map resource library associated with the virtual scene, and a physics engine library. In this way, rich data is obtained from the map resource library according to the historical update record, and the physics engine library is used to enhance the expression of the virtual scene, so that the activity scene of the virtual character can be realistically rendered.
  • determining the three-dimensional display data for the virtual character includes: determining a dynamic representation of the virtual character moving from a pre-offline position to the virtual position in the virtual scene, wherein the pre-offline position is the last virtual position of the virtual character in the virtual scene before it last went offline.
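One minimal way to realize the "moving from the pre-offline position to the current virtual position" representation is to interpolate intermediate positions along a straight line. The patent's actual path determination may be far richer (e.g. road-network-aware); this sketch only illustrates the idea.

```python
def interpolate_path(pre_offline_pos, current_pos, steps):
    """Sample `steps + 1` positions from the pre-offline position to the
    current virtual position, for driving a dynamic representation."""
    (x0, y0), (x1, y1) = pre_offline_pos, current_pos
    path = []
    for i in range(steps + 1):
        t = i / steps  # interpolation parameter in [0, 1]
        path.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return path

path = interpolate_path((0.0, 0.0), (10.0, 10.0), 5)
print(path[0], path[-1])  # endpoints match the pre-offline and current positions
print(len(path))          # 6
```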
  • determining the three-dimensional display data for the virtual character further comprises: determining the three-dimensional display data based on a historical update record of virtual positions of other virtual characters in the virtual scene during the period when the virtual character is offline. In this way, the influence of multiple other virtual characters is considered when constructing the dynamic virtual representation of the virtual character during the period when the virtual character is offline, thereby further improving the sense of reality of the virtual scene.
  • the aforementioned user is a first user
  • the virtual character is a first virtual character
  • the user device is a first user device
  • the real position is a first real position
  • the virtual position is a first virtual position
  • the method further includes: receiving a second real position from a second user device of a second user while the second virtual character of the second user is online in the virtual scene; updating the second virtual position of the second virtual character in the virtual scene based on the second real position; and determining three-dimensional display data for the second virtual character to be presented on the second user device based on the second virtual position, a map resource library associated with the virtual scene, and a physics engine library.
  • determining the three-dimensional display data for the second virtual character includes: determining the three-dimensional display data for the second virtual character based on a historical update record of the first virtual position of the first virtual character.
  • determining the three-dimensional display data for the second virtual character based on the historical update record of the first virtual position includes: generating three-dimensional display data including a three-dimensional display of the first virtual character based on the historical update record of the first virtual position.
  • determining the three-dimensional display data further comprises: determining the three-dimensional display data based on the privacy settings of the first virtual character and the second virtual character. In this way, on the one hand, the user can have more flexible control over the privacy of the virtual character; on the other hand, by checking the privacy settings, the server can also avoid unnecessary subsequent computation and generation.
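The privacy check above can be sketched as a short guard that runs before any display of the first character is generated inside the second character's scene, which is how the server skips unnecessary work. The field names `visible_to_others` and `show_others` are assumptions for illustration, not terms from the patent.

```python
def should_display(first_settings, second_settings):
    """Return True only if the first character allows being shown to
    others AND the second character has not opted out of seeing other
    characters; defaults are permissive (an assumption)."""
    return (first_settings.get("visible_to_others", True)
            and second_settings.get("show_others", True))

# Visible on both sides: generation proceeds.
print(should_display({"visible_to_others": True}, {"show_others": True}))   # True
# First character opted out: the server skips generating its display.
print(should_display({"visible_to_others": False}, {"show_others": True}))  # False
```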
  • determining the three-dimensional display data for the second virtual character includes: determining a dynamic representation of a change in the second virtual position of the second virtual character in the virtual scene. In this way, the generated data can be used to present, on the user device, the real-time dynamic changes of the online character as the user device moves, thereby improving the user's immersive experience.
  • determining the dynamic representation of the change includes: determining a path of the change of the second virtual position of the second virtual character in the virtual scene; and determining the dynamic representation based on the path. In this way, virtual objects around the path and their changes caused by the movement of the second virtual character can be included in the three-dimensional display data presented to the user, thereby improving the realism of the virtual scene.
  • determining the dynamic representation of the change further includes: determining, based on the path, an interactive representation of the second virtual character and the virtual object in the virtual scene. In this way, the interaction between multiple characters is taken into account when presenting the virtual scene, thereby improving the immersion of the virtual scene.
  • determining the dynamic representation of the change further includes: receiving a gesture of the second user from the second user device; and determining the dynamic representation of the change based on the gesture of the second user. In this way, various action gestures of the online user can be correspondingly reflected on the virtual object, further improving the user's immersive experience.
  • the virtual position in the virtual scene is mapped to the real position in the real world.
  • the embodiments of the present disclosure can use the real position associated with the virtual character as an anchor point to provide a realistic immersive online virtual experience for the virtual object.
  • a device for updating a position of a virtual scene includes: a position receiving module configured to receive a real position from a user device of a user while the user's virtual character is in an offline state in the virtual scene; and a position updating module configured to update a virtual position of the virtual character in the virtual scene based on the received real position.
  • the position receiving module includes: an online indication module configured to receive an indication from a user device that a virtual character is online in a virtual scene; and the apparatus further includes: a scene module configured to determine, in response to receiving the indication, three-dimensional display data for the virtual character to be presented on the user device based on a historical update record of the virtual position during the period when the virtual character is offline.
  • the scene module also includes: a library module configured to determine three-dimensional display data for the virtual character based on a historical update record of the virtual position during the period when the virtual character is offline, a map resource library associated with the virtual scene, and a physics engine library.
  • the scene module includes: a dynamic representation module configured to determine a dynamic representation of the virtual character moving from a pre-offline position to the virtual position in the virtual scene, wherein the pre-offline position is the last virtual position of the virtual character in the virtual scene before it last went offline.
  • the scene module further includes: a multi-role module configured to determine three-dimensional display data for the virtual character based on historical update records of virtual positions of other virtual characters in the virtual scene during the period when the virtual character is offline.
  • the aforementioned user is a first user
  • the virtual character is a first virtual character
  • the user device is a first user device
  • the real position is a first real position
  • the virtual position is a first virtual position
  • the apparatus further comprises: a second position receiving module, configured to receive a second real position from a second user device of a second user while the second virtual character of the second user is online in the virtual scene; a second position updating module, configured to update the second virtual position of the second virtual character in the virtual scene based on the second real position; and a second scene module, configured to determine three-dimensional display data for the second virtual character to be presented on the second user device based on the second virtual position, a map resource library associated with the virtual scene, and a physics engine library.
  • the second scene module includes: a second multi-role module configured to determine three-dimensional display data for the second virtual character based on a historical update record of the first virtual position of the first virtual character.
  • the second multi-role module includes: another role generation module configured to generate three-dimensional display data including a three-dimensional display of the first virtual character based on a historical update record of the first virtual position.
  • the second scene module further includes: a privacy module configured to determine three-dimensional display data for the second virtual character based on respective privacy settings of the first virtual character and the second virtual character.
  • the second scene module further includes: a second dynamic representation module configured to determine a dynamic representation of a change in the second virtual position of the second virtual character in the virtual scene.
  • the second dynamic representation module includes: a path planning module, configured to determine a path of change of the second virtual position of the second virtual character in the virtual scene; and the second dynamic representation module also includes: a path dynamic module, configured to determine the dynamic representation based on the path.
  • the second dynamic representation module further includes: an along-path interaction module configured to determine, based on the path, an interaction representation of the second virtual character and the virtual object in the virtual scene.
  • the apparatus further includes a posture receiving module configured to receive a posture of the second user from the second user device; and the second scene module further includes a posture representation module configured to determine the dynamic representation of the change based on the posture of the second user.
  • the virtual position in the virtual scene is mapped to the real position in the real world.
  • in a third aspect, an electronic device includes a processor and a memory, wherein the memory stores computer instructions, and when the computer instructions are executed by the processor, the electronic device performs the following actions: receiving a real position from a user device of the user while the user's virtual character is in an offline state in a virtual scene; and updating a virtual position of the virtual character in the virtual scene based on the received real position.
  • the action also includes: receiving an indication from a user device that a virtual character is online in a virtual scene; and in response to receiving the indication, determining three-dimensional display data for the virtual character to be presented on the user device based on a historical update record of the virtual position during the period when the virtual character was offline, a map resource library associated with the virtual scene, and a physics engine library.
  • determining three-dimensional display data for the virtual character includes: determining the three-dimensional display data based on a historical update record of the virtual position during the offline period of the virtual character, a map resource library associated with the virtual scene, and a physics engine library.
  • determining three-dimensional display data for a virtual character includes: determining a dynamic representation of the virtual character moving from a pre-offline position to a virtual position in a virtual scene, wherein the pre-offline position is the last virtual position of the virtual character in the virtual scene before the last time it went offline.
  • determining the three-dimensional display data for the virtual character further comprises: determining the three-dimensional display data based on a historical update record of virtual positions of other virtual characters in the virtual scene while the virtual character is offline.
  • the aforementioned user is a first user
  • the virtual character is a first virtual character
  • the user device is a first user device
  • the real position is a first real position
  • the virtual position is a first virtual position
  • the actions further include: receiving a second real position from a second user device of the second user while the second virtual character of the second user is online in the virtual scene; updating the second virtual position of the second virtual character in the virtual scene based on the second real position; and determining three-dimensional display data for the second virtual character to be presented on the second user device based on the second virtual position, a map resource library associated with the virtual scene, and a physics engine library.
  • determining the three-dimensional display data for the second virtual character includes: determining the three-dimensional display data for the second virtual character based on a historical update record of the first virtual position of the first virtual character.
  • determining three-dimensional display data for the second virtual character based on the historical update record of the first virtual position includes: generating three-dimensional display data including a three-dimensional display of the first virtual character based on the historical update record of the first virtual position.
  • determining the three-dimensional display data further includes: determining the three-dimensional display data based on privacy settings of the first virtual character and the second virtual character, respectively.
  • determining three-dimensional display data for the second virtual character includes determining a dynamic representation of a change in a second virtual position of the second virtual character in the virtual scene.
  • determining the dynamic representation of the change includes: determining a path of the change of the second virtual position of the second virtual character in the virtual scene; and determining the dynamic representation based on the path.
  • determining the dynamic representation of the change further includes: determining, based on the path, an interaction representation of the second virtual character and the virtual object in the virtual scene.
  • determining the dynamic representation of the change further comprises: receiving a gesture regarding the second user from a second user device; and determining the dynamic representation based on the gesture of the second user.
  • the virtual position in the virtual scene is mapped to the real position in the real world.
  • a computing device cluster which includes at least one computing device, each computing device including a processor and a memory; the processor of the at least one computing device is used to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs operations according to the method in the above-mentioned first aspect or any one of its embodiments.
  • a computer-readable storage medium wherein computer-executable instructions are stored on the computer-readable storage medium, and when the computer-executable instructions are executed by a processor, the operations of the method according to the first aspect or any of its embodiments are implemented.
  • a computer program or a computer program product is provided.
  • the computer program or computer program product is tangibly stored on a computer readable medium and includes computer executable instructions, which when executed implement the operations of the method according to the first aspect or any of its embodiments.
  • FIG. 1 shows a schematic diagram of an example environment in which various embodiments of the present disclosure can be implemented.
  • FIG. 2 shows a flowchart of an example method for position update of a virtual scene according to some embodiments of the present disclosure.
  • FIG. 3 shows a schematic interaction diagram of a process of determining three-dimensional display data when a virtual character goes online according to some embodiments of the present disclosure.
  • FIG. 4 shows a schematic diagram of an example moving path of a virtual character when it goes online according to some embodiments of the present disclosure.
  • FIG. 5 shows a schematic diagram of another example moving path of a virtual character when it goes online according to some embodiments of the present disclosure.
  • FIG. 6 shows a schematic diagram of yet another example moving path of a virtual character when it goes online according to some embodiments of the present disclosure.
  • FIG. 7 shows a flowchart of an example method for determining three-dimensional display data for an online character according to some embodiments of the present disclosure.
  • FIG. 8 shows a schematic diagram of an example movement path of an online virtual character according to some embodiments of the present disclosure.
  • FIG. 9 shows a schematic block diagram of an apparatus for updating a position of a virtual scene according to some embodiments of the present disclosure.
  • FIG. 10 shows a schematic block diagram of an example device that may be used to implement embodiments of the present disclosure.
  • FIG. 11 shows a schematic block diagram of an example computing device cluster that can be used to implement embodiments of the present disclosure.
  • FIG. 12 shows a schematic block diagram of an example implementation of a computing device cluster that may be used to implement embodiments of the present disclosure.
  • virtual scenes are often built by superimposing content on mirror images of the real world.
  • the virtual locations in these scenes are mappings of actual locations in the real world, and the scenes usually contain various objects in the corresponding locations in the real world, such as buildings, green areas, parks and other infrastructure.
  • location data can be used as key information for interacting with the virtual scene.
  • the user's related activities in the real world can be mapped to a series of application scenarios in the virtual world based on location data.
  • the location of the character object in the virtual world can usually be mapped from the location of a client with communication and positioning capabilities, such as Global Positioning System (GPS) positioning.
  • an individual walking in an actual scenic spot can use the mobile phone he carries to log in to the virtual scene including the scenic spot.
  • the virtual character he uses can represent the individual himself.
  • when the individual is at a specific scenic spot, based on the location of his mobile phone, his virtual character will also be located at the corresponding spot in the virtual scene.
  • the present application provides a position update solution for a virtual scene, which can update the position of the user's virtual character even while the character is offline.
  • the real position is received from the user device of the user, and the virtual position of the virtual character in the virtual scene is updated based on the real position.
  • the virtual world position of the user associated with the virtual character can still be synchronized based on the real world position of the user, without being affected by the virtual character being offline.
  • the real-time feel and realism of the virtual scene can be enhanced, so that the immersive experience of the virtual scene can be significantly improved, thereby enhancing the user stickiness of virtual world applications.
  • FIG. 1 shows a schematic diagram of an example environment 100 in which multiple embodiments of the present disclosure can be implemented.
  • the example environment 100 includes a server 110, a user device 120-1, a user device 120-2, ..., and a user device 120-N.
  • the user device 120-1, the user device 120-2, ..., and the user device 120-N are collectively or individually referred to as user devices 120.
  • the server 110 may be a central device of a virtual scene operator.
  • the server 110 may include various resources required for running the virtual scene, such as but not limited to various basic data resources for building a virtual scene model, computing resources for building a real-time expression of a virtual scene for a certain user, storage resources for storing user data of the virtual scene, and communication resources for communicating with user devices and external content providers (not shown), etc.
  • the server 110 may also be implemented in any other form suitable for performing the corresponding functions, such as multiple centralized devices, distributed devices, and/or cloud deployments.
  • the user device 120 may be a device having a client application of a virtual scene installed.
  • the user device 120 may be implemented as an electronic device such as a smart phone, a tablet computer, a wearable device, etc.
  • the user device 120 has a positioning function for obtaining its own real position, such as satellite positioning, Bluetooth positioning, and/or other appropriate positioning functions.
  • the embodiments of the present disclosure do not limit the number of user devices. In some multi-role online virtual scenes, the number of user devices may be in the order of thousands, tens of thousands, or even larger.
  • FIG. 1 also shows user 130-1, user 130-2, ..., and user 130-N.
  • user 130-1, user 130-2, ..., and user 130-N are collectively or individually referred to as user 130.
  • User 130 can bind its virtual character in the virtual scene to device 120 and carry user device 120 to move in the real world.
  • user 130-1 can bind its virtual character 140-1 to device 120-1 and carry user device 120-1 to move in the real world.
  • User 130-2 can bind its virtual character 140-2 to device 120-2 and carry user device 120-2 to move in the real world.
  • User 130-N can bind its virtual character 140-N to device 120-N and carry user device 120-N to move in the real world.
  • the virtual character 140-1, the virtual character 140-2, ..., and the virtual character 140-N are collectively or individually referred to as virtual characters 140.
  • the user device 120 can communicate with the server 110. It should be understood that the embodiments of the present disclosure do not limit the communication method, such as communicating in a wired or wireless manner.
  • the wired method may include but is not limited to optical fiber connection, Universal Serial Bus (USB) connection, etc.
  • the wireless method may include but is not limited to mobile communication technology (including but not limited to cellular mobile communication), Wi-Fi, Bluetooth, point-to-point (P2P) communication, etc.
  • the communication method between it and the server 110 may change, for example, from cellular mobile network communication to Wi-Fi communication, and then to communication via a wired connection at a certain location.
  • the user device 120 may send a message (such as in the form of a client request) to the server 110.
  • the message may be sent via an explicit operation of the user (e.g., to the client application interface).
  • the user 130-1 associated with the user device 120-1 may log the account of the virtual character 140-1 in to or out of the virtual scene, perform manipulation actions on the virtual objects displayed on the interface, etc.
  • a message containing information about these operations may then be sent to the server 110.
  • messages may also be sent periodically or in response to specific conditions according to settings regarding the virtual character without explicit user operation.
  • the user device 120 may send the actual position of the user device 120 to the server 110 at regular intervals, or send a new actual position to the server 110 when the displacement of the user device 120 reaches a certain value compared to the actual position sent last time.
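The two client-side reporting policies just described (fixed-interval reporting and displacement-threshold reporting) can be sketched together. The interval, threshold, and Euclidean distance metric below are illustrative assumptions, not values from the patent.

```python
def euclidean(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

class PositionReporter:
    """Decides when the user device should send its actual position to
    the server: either a fixed interval has elapsed, or the device has
    moved far enough since the last report."""

    def __init__(self, interval_s=60.0, min_displacement=10.0):
        self.interval_s = interval_s
        self.min_displacement = min_displacement
        self.last_sent_pos = None
        self.last_sent_time = None

    def should_send(self, pos, now):
        if self.last_sent_pos is None:
            return True  # first report is always sent
        if now - self.last_sent_time >= self.interval_s:
            return True  # periodic rule: interval elapsed
        # displacement rule: moved far enough since the last report
        return euclidean(pos, self.last_sent_pos) >= self.min_displacement

    def maybe_send(self, pos, now):
        if self.should_send(pos, now):
            self.last_sent_pos, self.last_sent_time = pos, now
            return True  # in a real client, the report goes to the server here
        return False

r = PositionReporter(interval_s=60.0, min_displacement=10.0)
print(r.maybe_send((0.0, 0.0), 0.0))    # True: first report
print(r.maybe_send((1.0, 0.0), 5.0))    # False: too soon and too close
print(r.maybe_send((15.0, 0.0), 10.0))  # True: displacement threshold crossed
```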
  • the server 110 may perform various operations such as storage, updating, analysis, and calculation so that the virtual scene can function normally.
  • the server 110 may also send a message to the user device 120 (such as in response to a client request).
  • the server 110 may send virtual scene data that should be presented on the user device 120 to the user device 120 based on the actual location of the user device 120. For example, while the virtual character is online through the user device 120, the server 110 may transmit to the user device 120 in the form of streaming a dynamic representation of the character in the virtual scene that changes according to its location.
  • the architecture and functions in the example environment 100 are described only for exemplary purposes, and do not imply any limitation on the scope of the present disclosure.
  • other devices, systems, or components not shown may also exist in the example environment 100.
  • the user device 120 may communicate indirectly with the server 110 via an edge server near it.
  • the embodiments of the present disclosure may also be applied to other environments with different structures and/or functions.
  • FIG. 2 shows a flow chart of an example method 200 for position update of a virtual scene according to some embodiments of the present disclosure.
  • the example method 200 may be performed, for example, by the server 110 as shown in FIG. 1 . It should be understood that the method 200 may also include additional actions not shown, and the scope of the present disclosure is not limited in this respect.
  • the method 200 is described in detail below in conjunction with the example environment 100 of FIG. 1 .
  • at block 210, a real position is received from a user device of the user while the user's virtual character is offline in the virtual scene.
  • the server 110 may receive a real position of the real world from the user device 120 of the user 130.
  • the server 110 may be a device of the operator of the virtual scene, and the user device 120 has a client application of the virtual scene.
  • the virtual scene can be a virtual game scene, a virtual mirror tourist guide scene of a scenic spot, and the like.
  • the virtual position in the virtual scene can be mapped to the real position in the real world.
  • the surroundings of a given virtual position in these scenes usually contain the various objects found at the corresponding position in the real world, such as infrastructure (e.g., buildings), greenery, and parks, so that the virtual scene appears as a digital mirror of the real world.
  • these virtual scenes can also enhance the digital mirror by superimposing, at the virtual position corresponding to a specific real position, virtual objects that do not exist in the real world, such as but not limited to introductions of specific objects, game props bound to the real position, and online service entrances for specific places such as stores. In this way, when a user of the virtual scene arrives at a specific position in the real world, they can obtain an enhanced experience via the virtual scene.
  • the server 110 may store information about the binding of the user device 120 to the virtual character 140 in the user data associated with the virtual character 140. Then, during the period when the virtual character 140 is offline, the server 110 may receive the real location from the user device 120 and recognize that it is associated with the offline virtual character 140 according to the user data.
  • the virtual position of the virtual character in the virtual scene is updated based on the received real position.
  • the server 110 may update the virtual position of the virtual character 140 in the virtual scene based on the real position received at block 210 .
  • the virtual position in the virtual scene and the real position can use the same coordinate system, and the server 110 can update the value of the virtual position recorded in the user data associated with the virtual character 140 to the value of the received real position.
  • the server 110 can convert the received real position into a corresponding virtual position according to the mapping rule between real positions and virtual positions, so as to update the virtual position of the virtual character 140.
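The mapping rule described above can be sketched as follows. This is a minimal illustration only: the linear offset-and-scale rule and the `origin`, `scale`, and `user_data` names are assumptions for the sketch, since the disclosure leaves the concrete mapping open (a deployed system might instead project GPS coordinates into scene space).

```python
def real_to_virtual(real_pos, origin=(0.0, 0.0), scale=1.0):
    """Map a real-world coordinate to a virtual-scene coordinate.

    A minimal linear mapping rule (offset plus uniform scale); when the
    virtual scene and the real world share a coordinate system, the
    identity mapping (origin (0, 0), scale 1) can be used directly.
    """
    rx, ry = real_pos
    ox, oy = origin
    return ((rx - ox) * scale, (ry - oy) * scale)


def update_virtual_position(user_data, character_id, real_pos):
    """Update the stored virtual position of an (offline) character."""
    user_data[character_id]["virtual_pos"] = real_to_virtual(real_pos)
    return user_data[character_id]["virtual_pos"]
```

With the identity mapping, the stored virtual position simply tracks the reported real position, which matches the same-coordinate-system case described above.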
  • in this way, the problem of position data falling out of sync, caused by the virtual world being isolated from the real world while the virtual character is offline, can be solved, which supports improving the user experience of online virtual scene applications that use position as an interaction anchor. For example, when the user device 120 moves to a new position after the bound virtual character 140 goes offline at a certain position, the virtual character 140 also moves to the virtual position mapped from the new position in the virtual scene.
  • the server 110 may receive the real position from the user device 120 multiple times, and update the virtual position of the virtual character 140 multiple times accordingly.
  • the server 110 may store a historical update record of the virtual position of the virtual character 140.
  • the server 110 may store a virtual position tracking table for the virtual character 140, and add a new entry to it each time the position of the virtual character 140 is updated.
  • the entry may include the updated virtual position and the corresponding update time.
  • the embodiments of the present disclosure do not limit the specific manner in which the server 110 stores the historical update record. Thus, the server 110 can obtain the position track of the virtual character 140.
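One possible shape for the virtual position tracking table described above is sketched below. The `PositionTracker` class and its `(position, time)` entry layout are assumptions for illustration; as the text notes, the disclosure does not limit how the server stores the historical update record.

```python
import time


class PositionTracker:
    """Per-character history of virtual position updates.

    Each entry records the updated virtual position together with the
    corresponding update time, so the server can later recover the
    position track of the character.
    """

    def __init__(self):
        # character_id -> list of (virtual_pos, update_time), oldest first
        self._track = {}

    def record(self, character_id, virtual_pos, update_time=None):
        """Append a new entry when the character's position is updated."""
        t = update_time if update_time is not None else time.time()
        self._track.setdefault(character_id, []).append((virtual_pos, t))

    def track_of(self, character_id):
        """Return the chronological position track of a character."""
        return list(self._track.get(character_id, []))
```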
  • the server 110 may receive an indication from the user device 120 that the virtual character 140 is online in the virtual scene. For example, the server 110 may receive a login request for an account associated with the virtual character 140 from a client application on the user device 120. In response to the received online indication, the server 110 may determine the three-dimensional display data for the virtual character 140 to be presented on the user device 120 based on the historical update record of the virtual location during the period when the virtual character 140 was offline.
  • the server 110 may determine the three-dimensional display data for the virtual character 140 to be presented on the user device 120 based on the historical update record of the virtual position during the offline period of the virtual character 140, the map resource library associated with the virtual scene, and the physical engine library.
  • the map resource library may include various map data about various virtual objects in the virtual scene, such as but not limited to object name, location, three-dimensional shape, color, various physical properties, and functions, etc.
  • the map resource library may include mapping data of the real world, such as real landscape data such as buildings, green spaces, and water bodies.
  • the map resource library can be used by the server 110 to build a basic rendering model of the virtual scene and plan the movement path of the virtual character, etc.
  • the physical engine library is used by the server 110 to calculate the interactive expressions and strategies between virtual objects according to the physical properties of the virtual objects, such as representing collisions between virtual characters or between virtual characters and virtual facilities, determining whether the virtual character can pass through another object, etc.
  • the server 110 may determine a dynamic representation of the virtual character 140 moving from a pre-offline position to the current virtual position in the virtual scene, wherein the pre-offline position is the last virtual position of the virtual character 140 in the virtual scene before it last went offline. For example, the server 110 may restore the movement path of the virtual character 140 based on the historical update record. Then, the server 110 may obtain the corresponding scene data from the map data according to the path, and use the data together with the model data of the virtual character 140 to calculate the three-dimensional dynamic rendering for the virtual character 140 going online. In this process, the server 110 may also use the physics engine library to enhance the interactive representation of the virtual objects, making the dynamic representation more realistic. For example, the server 110 may use the physics engine library to calculate the deformation of both objects when the virtual character contacts another object, thereby refining the three-dimensional display data.
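The first step above, restoring the movement path from the historical update record, can be sketched as follows. This is an illustrative simplification (the entry layout and duplicate-skipping rule are assumptions): it chains the pre-offline position with the recorded updates into an ordered list of waypoints, which a renderer could then animate.

```python
def restore_path(pre_offline_pos, history):
    """Restore a character's movement path from its offline-period history.

    history: list of (virtual_pos, update_time) entries, oldest first.
    Returns the ordered waypoints from the pre-offline position to the
    latest recorded position, skipping consecutive duplicate positions
    (e.g., periods during which the device did not move).
    """
    path = [pre_offline_pos]
    for pos, _t in history:
        if pos != path[-1]:
            path.append(pos)
    return path
```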
  • the server 110 may send the determined 3D display data to the user device 120 for presentation to the user.
  • the online scene representation determined based on the location record during the offline period is more natural and more realistic.
  • the specific content of the dynamic representation is related to the type of user device 120 and the display mode of the client application.
  • the user device 120 may have a graphical display interface, and the three-dimensional display data may be a three-dimensional animation representation of the virtual character 140 moving to the current virtual position.
  • the user device 120 may be a pair of augmented reality glasses, and the three-dimensional display data may be a first-person-perspective three-dimensional animation representation of moving to the current virtual position.
  • the dynamic representation may also include sound and tactile (e.g., vibration) representations.
  • the server 110 can generate three-dimensional display data in different formats.
  • the server 110 can generate a three-dimensional video and send it to the user device.
  • the server 110 can also generate a three-dimensional rendering file that can be interpreted by the user device, and the three-dimensional rendering file is interpreted at the user device for presentation.
  • the embodiments of the present disclosure do not limit the specific form and content of the dynamic representation, and for different client types and/or settings of the same virtual scene, the server 110 can determine three-dimensional display data of different contents.
  • the server 110 may also take into account the historical update records of the virtual positions of other virtual characters in the virtual scene during the period when the virtual character 140 is offline. For example, when restoring the movement path of the virtual character 140, the server 110 may check whether the trajectory of the virtual character 140 during the period when it is offline has a spatiotemporal intersection with other characters. Then, the server 110 may include the dynamic representation of other virtual characters with intersections in the three-dimensional display data, such as but not limited to passing by the current virtual character 140, circling each other at the intersection of the character trajectories and/or greeting each other, etc. In some embodiments, when determining the three-dimensional display data, the server 110 may also check the privacy settings of each virtual character. For example, the server 110 may only consider other virtual characters that are set to be visible to the virtual character 140 and are set by the virtual character 140 to be desired to see.
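The spatiotemporal intersection check described above can be sketched as a pairwise comparison of two timestamped tracks. The distance and time thresholds (`max_dist`, `max_dt`) are hypothetical parameters introduced for the sketch; a production system would likely use spatial indexing rather than this quadratic scan.

```python
def spatiotemporal_intersections(track_a, track_b, max_dist=5.0, max_dt=60.0):
    """Find points where two position tracks cross in both space and time.

    Each track is a list of ((x, y), t) entries. Returns the pairs of
    entries that lie within max_dist distance units and max_dt seconds
    of each other, i.e., candidate moments for an interactive
    representation such as the characters greeting each other.
    """
    hits = []
    for pa, ta in track_a:
        for pb, tb in track_b:
            if abs(ta - tb) <= max_dt:
                dx, dy = pa[0] - pb[0], pa[1] - pb[1]
                if (dx * dx + dy * dy) ** 0.5 <= max_dist:
                    hits.append(((pa, ta), (pb, tb)))
    return hits
```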
  • Fig. 3 shows a schematic interaction diagram of a process 300 for determining three-dimensional display data when a virtual character is online according to some embodiments of the present disclosure.
  • the process 300 involves the server 110 and the user device 120 described with respect to FIG. 1, and can be regarded as an example process of the server 110 interacting with the user device 120 when performing the method 200 described with respect to FIG. 2.
  • user device 120 sends 305 an indication that virtual character 140 is offline to server 110.
  • user device 120 may send an offline message in response to virtual character 140 logging out of the virtual scene through a client application installed thereon.
  • user device 120 may include the real location of virtual character 140 when offline in the offline indication.
  • after receiving the offline indication, the server 110 performs server-side offline processing 310 for the virtual character 140.
  • the server 110 can archive the various states of the virtual character 140 at the time of going offline and, when available, update the virtual position of the virtual character 140 based on the real position included in the offline indication.
  • the server 110 can update the client-side dynamic representation of other characters affected by the virtual character 140 going offline, such as displaying a prompt animation.
  • the server 110 can terminate the session for transmitting streaming data about the online character with the user device 120.
  • the server 110 may determine three-dimensional display data about the virtual character 140 going offline that should be presented on the user device 120, and send 315 the three-dimensional display data to the user device 120, so that the user device 120 can present the offline scene of the virtual character 140.
  • the server 110 may determine the three-dimensional display data in a manner similar to that described above, based on the last virtual position of the virtual character 140 before going offline (recorded or updated according to the real position in the offline message), and/or a map resource library and a physical engine library associated with the virtual scene.
  • the user device 120 may send 320 a new real location to the server 110.
  • the user device 120 may send the real location to the server 110 according to the settings for the virtual character 140.
  • the setting may be to send the current location of the user device 120 itself to the server 110 every predetermined time period.
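The periodic reporting setting described above can be sketched on the client side as follows. The `LocationReporter` class, its `period_s` parameter, and the `send` callback are all hypothetical names for illustration; the disclosure only requires that the device keep reporting its real location per the binding settings while the character is offline.

```python
class LocationReporter:
    """Client-side helper: report the device's real location every
    `period_s` seconds while the bound character is offline."""

    def __init__(self, period_s=60.0):
        self.period_s = period_s
        self._last_sent = None  # timestamp of the last report, if any

    def maybe_report(self, now, current_location, send):
        """Call periodically; invokes send(location) once per period.

        Returns True if a report was sent on this call, False otherwise.
        """
        if self._last_sent is None or now - self._last_sent >= self.period_s:
            send(current_location)
            self._last_sent = now
            return True
        return False
```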
  • the user device 120 may no longer present the virtual scene to provide the virtual experience to the user 130. During this period, the user device 120 may be carried by the user 130 and used in other ways, such as experiencing another virtual scene, etc. However, according to the settings for binding the virtual character 140 to the user device 120, the user device 120 may still send the real location to the server 110.
  • the server 110 updates 325 the virtual position of the virtual character 140 in the virtual scene, as described above with respect to block 220 , which will not be repeated here for the sake of brevity.
  • the server 110 may also update the client-side three-dimensional display data of other online virtual characters based on the position updates of the virtual character 140 during its offline period; for example, when the offline virtual character 140 moves near another online virtual character, the virtual character 140 may be included in the three-dimensional display data sent to that character's client.
  • the virtual character 140 may be presented on a user device on which another virtual character is logged in. Such an embodiment will be described in more detail later in conjunction with FIG.
  • the virtual character 140 can go online again in the virtual scene.
  • the user 130 can log in to the account associated with the virtual character 140 using the client application on the user device 120.
  • the user device 120 sends 330 to the server 110 an indication that the virtual character 140 is online (such as through an online request).
  • the server 110 determines 335 three-dimensional display data for the virtual character 140 for presentation on the user device 120. As described above with respect to FIG. 2 , the server 110 determines the three-dimensional display data based on the historical update record of the virtual position during the period when the virtual character 140 is offline, and/or the map resource library and the physics engine library associated with the virtual scene, which will not be described in detail here for the sake of brevity.
  • the server 110 then sends 340 the determined three-dimensional display data to the user device 120.
  • the three-dimensional display data is presented 346 to the user 130 by the user device 120 through its output interface.
  • the user device 120 may play the three-dimensional display data as a video stream.
  • the user device 120 may interpret and present the three-dimensional display data as a three-dimensional rendering file.
  • the embodiments of the present disclosure do not limit the specific form of the three-dimensional display data.
  • the above process 300 of determining the three-dimensional display data when the virtual character goes online is only for illustration, and the process 300 may also include actions not shown or actions different from those shown in FIG. 3.
  • the server 110 may authenticate the received online request.
  • the user device 120 may send the real position to the server 110 multiple times and the server 110 may correspondingly update the virtual position of the virtual character 140 in the virtual scene multiple times.
  • a streaming session connection for transmitting virtual scene data may be established between the server 110 and the user device 120.
  • the actions in the process 300 may also be performed by other devices.
  • the user may replace the user device bound to the virtual character.
  • FIG. 4 shows a schematic diagram 400 of an example movement path of a dynamic representation of a virtual character when it is online according to some embodiments of the present disclosure.
  • the path shown in the schematic diagram 400 may be a non-limiting example of a movement path determined by the server 110 when determining three-dimensional display data for the virtual character 140 of the user device 120 when it is online.
  • the server 110 received the real position associated with the virtual character multiple times, and updated the virtual position of the virtual character 140 accordingly, such as to the virtual position 430.
  • the server 110 can restore the path 450 of the virtual character 140 moving from the virtual position 410 to the virtual position 420 according to the multiple virtual positions updated during the period when the virtual character 140 is offline.
  • the server 110 can adjust the path according to the map resource library to make it more natural and smooth, without requiring every historical virtual position to lie strictly on the path 450, as illustrated by the virtual position 430.
  • the server may then further determine a dynamic representation of the movement of the virtual character 140 from the virtual location 410 to the virtual location 420.
  • the server 110 may determine the scene content of the dynamic representation based on the path 450 and the map resource library, such as the roads, buildings, and other landscapes that the virtual character 140 passes through when walking along the path 450.
  • the server 110 may determine the interactive representation of the virtual character 140 with other objects in the virtual scene when walking along the path based on the physical engine library, such as the interactive strategy when being blocked by an obstacle to bypass.
  • the server 110 may also determine the movement rhythm of the virtual character 140 based on the update time of the historical update record of the virtual location, such as by scaling the interval time proportionally.
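The proportional time scaling mentioned above can be sketched as mapping the recorded update times onto a short playback timeline while preserving their relative intervals. The function name and `playback_duration` parameter are assumptions for the sketch.

```python
def scale_rhythm(update_times, playback_duration):
    """Compress real update times into a short playback timeline.

    update_times: chronological timestamps from the historical update
    record. Returns playback offsets in [0, playback_duration] whose
    relative spacing matches the original intervals, so the rendered
    character moves with the same rhythm as the real device did.
    """
    t0, t1 = update_times[0], update_times[-1]
    span = t1 - t0
    if span <= 0:
        return [0.0 for _ in update_times]
    return [(t - t0) / span * playback_duration for t in update_times]
```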
  • the server 110 may determine the three-dimensional display data for the virtual character 140 to be presented on the user device 120.
  • the server 110 may also take into account the historical update records of other virtual characters (and the historical movement trajectories based thereon). For example, as shown in FIG. 4 , based on the path 460 indicated by the historical update record of another virtual character, the server 110 may determine that at 440, the movement trajectory of the other virtual character and the trajectory of the virtual character have a spatiotemporal intersection. The server 110 may thereby determine that a dynamic representation of the other virtual character and optionally an interactive representation with the virtual character (such as discovering each other, greeting each other, etc.) should be included in a segment of the three-dimensional display where the virtual character 140 moves to the vicinity of this point.
  • FIG. 5 shows a schematic diagram 500 of an example movement path of a dynamic representation of a virtual character when going online according to some embodiments of the present disclosure.
  • the path shown in schematic diagram 500 may be a non-limiting example of a movement path determined by server 110 when determining three-dimensional display data for virtual character 140 of user device 120 when going online.
  • the virtual character 140 is located at a virtual position 510 when offline and is located at a virtual position 520 when online.
  • the server 110 can determine that the straight path (or a similar simple connection path) between the two positions is actually impassable in the virtual scene.
  • the straight path may pass through the walls of multiple buildings.
  • the server 110 determines, based on the map resource library, that after moving to the virtual position 530, the virtual character 140 continues along the path segment 550 to the virtual position 540.
  • the path segment 550 can pass through the available roads in the virtual scene. Compared with simply crossing the wall in a straight line, the path determined in this way is closer to reality, so that a more realistic dynamic representation can be generated.
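Routing along available roads instead of a straight line through walls can be sketched as a search over a road graph. This is a minimal breadth-first-search illustration under the assumption that the map resource library exposes passable roads as an adjacency structure; the node names below are hypothetical and a real planner would weight edges by distance.

```python
from collections import deque


def road_path(start, goal, roads):
    """Find a passable route through the road graph.

    roads: adjacency dict mapping a node to the nodes reachable from it.
    Returns the list of nodes from start to goal, or None if the goal
    cannot be reached via the available roads.
    """
    queue = deque([start])
    came_from = {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            # Reconstruct the route by walking the predecessors back.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in roads.get(node, ()):
            if nxt not in came_from:
                came_from[nxt] = node
                queue.append(nxt)
    return None  # no passable route in the virtual scene
```

Applied to the example of FIG. 5, the planner would route the character through the intermediate positions 530 and 540 rather than across building walls.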
  • FIG. 6 shows a schematic diagram 600 of an example movement path of a virtual character when going online according to some embodiments of the present disclosure.
  • the path shown in the schematic diagram 600 may be a non-limiting example of a movement path determined by the server 110 when determining three-dimensional display data for the virtual character 140 of the user device 120 when going online.
  • the virtual character 140 is located at a virtual location 610 when offline and is located at a virtual location 620 when online.
  • the server 110 may determine that the distance between two virtual locations that are continuously updated during the offline period has a different magnitude than the distance between other continuous virtual locations. For example, as shown in the path segment 650 indicated by the dotted line, the virtual location 630 and the virtual location 640 are in two areas far from each other (e.g., different neighborhoods), while the remaining continuous virtual locations are all in the same area with each other.
  • the server 110 can determine different movement modes for the path segment 650 along which the virtual character 140 moves from the virtual position 630 to the virtual position 640 (based on distance alone, or in combination with the update times and/or the virtual assets owned by the virtual character 140). For example, the server 110 can plan the path segment from the virtual position 630 to the virtual position 640 in a motor-vehicle mode, while determining the remaining path segments in a walking mode. On this basis, for the virtual character 140 going online, the server 110 can, for example, generate the following dynamic representation: the virtual character 140 walks from the virtual position 610 to the virtual position 630, rides a vehicle to the virtual position 640, and then continues walking to the virtual position 620.
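The distance-based movement-mode decision above can be sketched as classifying each consecutive hop against a threshold. The `vehicle_threshold` value is a hypothetical parameter; as the text notes, update times or owned virtual assets could also feed into the decision.

```python
def classify_segments(positions, vehicle_threshold=1000.0):
    """Label each consecutive path segment as 'walk' or 'vehicle'.

    positions: ordered (x, y) waypoints. A hop whose length exceeds
    vehicle_threshold distance units (e.g., a jump between distant
    neighborhoods) is labeled 'vehicle'; shorter hops are 'walk'.
    """
    modes = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        modes.append("vehicle" if dist > vehicle_threshold else "walk")
    return modes
```

For the FIG. 6 example, the long hop between positions 630 and 640 would be labeled as the vehicle segment while the surrounding hops remain walking segments.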
  • FIG. 7 shows a flowchart of an example method 700 for determining three-dimensional display data for online characters according to some embodiments of the present disclosure.
  • Method 700 can be regarded as an optional additional step of method 200, and can be performed, for example, by the server 110 shown in FIG. 1 on the basis of executing method 200 for offline virtual characters. It should be understood that method 700 may also include additional actions not shown, and the scope of the present disclosure is not limited in this respect. Method 700 is described in detail below in conjunction with the example environment 100 of FIG. 1.
  • the server 110 can receive a first real position from the first user device 120-1 of the first user 130-1 while the first virtual character 140-1 of the first user 130-1 is in an offline state in the virtual scene. Based on the received first real position, the server 110 can update the virtual position of the first virtual character 140-1 in the virtual scene.
  • the server 110 may also receive a second real location from the second user device 120-2 of the second user 130-2 while the second virtual character 140-2 of the second user 130-2 is online in the virtual scene.
  • the server 110 may receive the second real location via a message sent by a client application of the virtual scene installed on the user device 120-2.
  • the server 110 may update the second virtual position of the second virtual character 140 - 2 in the virtual scene based on the second real position.
  • the server 110 may update the second virtual position in a manner similar to that described above with respect to block 220 .
  • the server 110 may determine three-dimensional display data for the second virtual character 140-2 for presentation on the second user device 120-2. As shown in block 730-1, the server 110 may determine the three-dimensional display data for the virtual character 140-2 based on the second virtual position, the map resource library associated with the virtual scene (as described above with respect to FIG. 2), and the physics engine library. The specific content and format of the three-dimensional display data are related to the type of user device 120-2 and the display mode of the client application. For example, the three-dimensional display data generated for the virtual character 140-2 and the three-dimensional display data generated for the virtual character 140-1 have different perspectives and scales.
  • the server 110 may determine a dynamic representation of the change of the second virtual position of the virtual character 140-2 in the virtual scene. For example, the server 110 may generate a real-time video stream as the position of the online virtual character 140-2 changes, for continuous transmission via the session connection with the user device 120-2, thereby providing a smooth dynamic scene display. For example, the server 110 may obtain corresponding scene data from the map data based on the second virtual position, and use the data together with the last virtual position of the virtual character 140-2 and the character model data to calculate the three-dimensional dynamic rendering of the latest change of the virtual character 140-2.
  • the server 110 may determine a path of changes in the second virtual position of the virtual character 140-2 in the virtual scene, and determine a dynamic representation for the virtual character 140-2 based on the path. For example, the server 110 may determine that the virtual character 140-2 turns while walking. For example, the server 110 may determine that the virtual character 140-2 has not moved relative to the last position it was in. The server 110 may then calculate a dynamic representation of the virtual character 140-2 turning or pausing, such as in the form of a video stream, based on the determined path.
  • server 110 may then determine, based on the determined path, the interactive representation of virtual character 140-2 and virtual objects in the virtual scene. For example, server 110 may determine which objects in the scene virtual character 140-2 will interact with, such as objects in the real world mapped to the virtual scene, virtual objects attached to the scene, and other online characters, based on the path and the scene data obtained from the map data. Server 110 may use a physics engine library to enhance the interactive representation of virtual objects; for example, server 110 may calculate the rebound when the virtual character 140-2 collides with other objects while walking forward.
  • the server 110 may also receive a gesture of the user 130-2 from the user device 120-2, and determine a dynamic representation based on the gesture of the user 130-2. For example, the server 110 may receive an operation of the user 130-2 on the client interface, such as clicking a virtual object. For example, the server 110 may receive actual actions of the user 130-2 captured by a sensor, such as turning, nodding, and waving. Then, the server 110 may convert the received gesture into a dynamic representation of the virtual character 140-2.
  • the server 110 can also use the physics engine library to enhance the interactive representation between the virtual object and other virtual objects.
  • the server 110 can use the physics engine to determine the dynamic representation of squeezing and deforming the flexible object when the virtual character 140-2 picks up the flexible object.
  • the server 110 can use the real location associated with the virtual character as an anchor point and integrate multiple data sources and tools to project the online user's location, movement, and posture onto his or her virtual character, presenting the dynamic changes of virtual objects around the virtual character and various interactions to the user in real time, thereby providing the user with a realistic and immersive online virtual experience.
  • the server 110 may also determine the three-dimensional display data for the virtual character 140-2 based on the historical update record of the first virtual position of the virtual character 140-1 in the offline state. For example, the server 110 may determine whether the virtual character 140-1 currently has an intersection with the virtual character 140-2 based on the trajectory of the first virtual position, and adjust the dynamic representation of the virtual character 140-2 based on the intersection.
  • the server 110 may generate three-dimensional display data including the three-dimensional display of the virtual character 140-1 based on the historical update record of the first virtual location. For example, when it is determined that the virtual character 140-1 and the virtual character 140-2 currently have an intersection, the server 110 may determine the walking direction and dynamic expression of the virtual character 140-1 in the dynamic display of the virtual character 140-2 based on the historical update record of the first virtual location, such as entering or leaving the scene range included in the three-dimensional display data for the virtual character 140-2, walking towards the virtual character 140-2, or walking in front of the virtual character 140-2. For example, the server 110 may also determine the interactive expression between the two virtual characters on this basis, such as waving a greeting when facing each other.
  • the virtual character can still move in the virtual scene and be visible to other virtual characters and optionally have an impact, thereby enabling the real world to be more closely mapped to the virtual scene, improving the user's immersive experience.
  • server 110 may also determine the three-dimensional display data for virtual character 140-2 based on the privacy settings of virtual character 140-1 and virtual character 140-2. For example, server 110 may determine whether virtual character 140-1 is visible to virtual character 140-2 at this time based on the settings of virtual character 140-1, and may determine whether virtual character 140-2 is set to want to see virtual character 140-1 based on the settings of virtual character 140-2. On this basis, server 110 may decide whether to consider virtual character 140-1 in determining the three-dimensional display data for virtual character 140-2, such as whether to compare the positions of virtual character 140-1 and virtual character 140-2.
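The two-sided privacy check described above can be sketched as follows. The settings layout (`visible_to` / `wants_to_see` sets, with `None` meaning "everyone") is a hypothetical representation chosen for the sketch, not a format prescribed by the disclosure.

```python
def is_mutually_visible(settings_a, settings_b, id_a, id_b):
    """Decide whether character A should appear in B's display data.

    Both conditions must hold: A's settings allow B to see it, and
    B's settings indicate a desire to see A. Each settings dict maps
    'visible_to' and 'wants_to_see' to a set of character ids, where
    None stands for 'everyone'.
    """
    a_allows = settings_a["visible_to"] is None or id_b in settings_a["visible_to"]
    b_wants = settings_b["wants_to_see"] is None or id_a in settings_b["wants_to_see"]
    return a_allows and b_wants
```

Checking this predicate before any path comparison or rendering lets the server skip the subsequent calculations entirely when either side opts out.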
  • the server can also avoid some unnecessary subsequent calculations and generation by checking the privacy settings, thereby improving the performance of the virtual scene.
  • although method 200 and method 700 are described above with respect to virtual character 140-1 and virtual character 140-2, there may be more online virtual characters and offline virtual characters in the virtual scene, and the server 110 may also consider multiple other online and offline virtual characters when generating three-dimensional display data for one of the virtual characters.
  • FIG. 8 shows a schematic diagram 800 of an example movement path of an online virtual character according to some embodiments of the present disclosure.
  • the path shown in the schematic diagram 800 may be a non-limiting example of a movement path determined by the server 110 when updating the position of the virtual character 140-2 online on the user device 120-2.
  • the server 110 receives the real location associated with the virtual character 140-2 from the user device 120-2, and correspondingly determines that the virtual character 140-2 should move to the corresponding virtual location 820. On this basis, the server 110 determines the path 840 as the movement path of the virtual character 140-2 (e.g., along a road, and/or without conflict, etc.) according to the map resource library.
  • the server 110 may take into account other offline characters. For example, the server 110 may determine that an offline virtual character (e.g., the virtual character 140-1 bound to the user device 120-1 as described above) is visible to the virtual character 140-2 and is moving to the virtual location 830. The server 110 may then route around the offline virtual character 140-1 when determining the path 840. In addition, when generating the dynamic representation of the changes of the virtual character 140-2, the server 110 may also generate the interaction representation between it and the virtual character 140-1, as described above with respect to block 730-2.
  • FIG. 9 shows a schematic block diagram of an apparatus 900 for position updating for a virtual scene according to some embodiments of the present disclosure.
  • the apparatus 900 may be implemented as or included in the server 110 of FIG. 1.
  • the apparatus 900 may include multiple modules for performing the corresponding actions in, for example, the method 200 of FIG. 2 discussed above.
  • the apparatus 900 includes a position receiving module 910 and a position updating module 920.
  • the position receiving module 910 is configured to receive a real position from a user device of a user while the user's virtual character is in an offline state in a virtual scene.
  • the position updating module 920 is configured to update the virtual position of the virtual character in the virtual scene based on the received real position.
  • the position receiving module 910 includes: an online indication module configured to receive an indication from a user device that a virtual character is online in a virtual scene; and the device also includes: a scene module configured to determine, in response to receiving the indication, three-dimensional display data for the virtual character to be presented on the user device based on the historical update record of the virtual position during the period when the virtual character is offline.
  • the scene module includes: a library module configured to determine three-dimensional display data for the virtual character based on a historical update record of the virtual position during the period when the virtual character is offline, a map resource library associated with the virtual scene, and a physics engine library.
  • the scene module includes: a dynamic representation module configured to determine a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to the virtual position, wherein the pre-offline position is the last virtual position of the virtual character in the virtual scene before it most recently went offline.
  • the scene module further includes: a multi-role module configured to determine three-dimensional display data for the virtual character based on a historical update record of virtual positions of other virtual characters in the virtual scene during a period when the virtual character is offline.
  • the aforementioned user is a first user, the virtual character is a first virtual character, the user device is a first user device, the real location is a first real location, the virtual location is a first virtual location, and the apparatus further includes: a second location receiving module, configured to receive a second real location from a second user device of a second user while the second virtual character of the second user is online in the virtual scene; a second location updating module, configured to update the second virtual location of the second virtual character in the virtual scene based on the second real location; and a second scene module, configured to determine three-dimensional display data for the second virtual character to be presented on the second user device based on the second virtual location, a map resource library associated with the virtual scene, and a physics engine library.
  • the second scene module includes: a second multi-role module configured to determine three-dimensional display data for the second virtual character based on a historical update record of the first virtual position of the first virtual character.
  • the second multi-role module includes: another character generation module configured to generate three-dimensional display data including a three-dimensional display of the first virtual character based on the historical update record of the first virtual position.
  • the second scene module further includes: a privacy module configured to determine the three-dimensional display data for the second virtual character based on the privacy settings of the first virtual character and the second virtual character respectively.
  • the second scene module further includes: a second dynamic representation module configured to determine a dynamic representation of a change in the second virtual position of the second virtual character in the virtual scene.
  • the second dynamic representation module includes: a path planning module configured to determine a path of change of the second virtual position of the second virtual character in the virtual scene; and the second dynamic representation module also includes: a path dynamic module configured to determine the dynamic representation based on the path.
  • the second dynamic representation module further includes: an along-path interaction module configured to determine, based on the path, an interaction representation of the second virtual character and a virtual object in the virtual scene.
  • the apparatus further comprises a posture receiving module configured to receive a posture of the second user from the second user device; and the second scene module further comprises a posture representation module configured to determine the dynamic representation of the change also based on the posture of the second user.
  • virtual locations in the virtual scene are mapped to real locations in the real world.
  • the location receiving module 910 and the location updating module 920 can be implemented by software or hardware.
  • the following takes the location receiving module 910 as an example to introduce its implementation. The implementation of the location updating module 920 can refer to that of the location receiving module 910.
  • the location receiving module 910 may include code running on a computing instance.
  • the computing instance may include at least one of a physical host (computing device), a virtual machine, and a container, and there may be one or more such computing instances.
  • the location receiving module 910 may include code running on multiple hosts/virtual machines/containers. It should be noted that the multiple hosts/virtual machines/containers used to run the code may be distributed in the same region (region) or in different regions.
  • the multiple hosts/virtual machines/containers used to run the code may be distributed in the same availability zone (AZ) or in different AZs, each AZ including one data center or multiple data centers with similar geographical locations. Usually, one region may include multiple AZs.
  • multiple hosts/virtual machines/containers used to run the code can be distributed in the same virtual private cloud (VPC) or in multiple VPCs.
  • a VPC is set within one region. For cross-region communication between two VPCs in the same region, as well as between VPCs in different regions, a communication gateway must be set up in each VPC, and interconnection between the VPCs is achieved through the communication gateways.
  • the position receiving module 910 may include at least one computing device, such as a server, etc.
  • the position receiving module 910 may also be a device implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD).
  • the PLD may be a complex programmable logical device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL) or any combination thereof.
  • the multiple computing devices included in the location receiving module 910 can be distributed in the same region or in different regions.
  • the multiple computing devices included in the location receiving module 910 can be distributed in the same AZ or in different AZs.
  • the multiple computing devices included in the location receiving module 910 can be distributed in the same VPC or in multiple VPCs.
  • the multiple computing devices can be any combination of computing devices such as servers, ASICs, PLDs, CPLDs, FPGAs, and GALs.
  • the location receiving module 910 can be used to execute any process and action of the server 110 described in combination with Figures 2 to 8.
  • the location update module 920 can be used to execute any process and action of the server 110 described in combination with Figures 2 to 8.
  • the steps that the location receiving module 910 and the location update module 920 are responsible for implementing can be specified as needed; the location receiving module 910 and the location update module 920 respectively implement different parts of the processes and actions of the server 110 described in combination with Figures 2 to 8, thereby realizing the full functions of the apparatus 900.
  • the embodiment of the present disclosure also provides a computing device 1000.
  • the computing device 1000 includes: a bus 1002, a processor 1004, a memory 1006, and a communication interface 1008.
  • the processor 1004, the memory 1006, and the communication interface 1008 communicate with each other through the bus 1002.
  • the computing device 1000 may be a server or a terminal device. It should be understood that the present application does not limit the number of processors and memories in the computing device 1000.
  • the bus 1002 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus may be divided into an address bus, a data bus, a control bus, etc.
  • in FIG. 10, the bus is represented by only one line, but this does not mean that there is only one bus or one type of bus.
  • the bus 1002 may include a path for transmitting information between various components of the computing device 1000 (e.g., the memory 1006, the processor 1004, and the communication interface 1008).
  • Processor 1004 may include any one or more processors such as a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP) or a digital signal processor (DSP).
  • the memory 1006 may include a volatile memory, such as a random access memory (RAM).
  • the memory 1006 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • the memory 1006 stores executable program codes, and the processor 1004 executes the executable program codes to respectively implement the functions of the aforementioned location receiving module 910 and the location updating module 920, thereby implementing, for example, the method 200 and the method 700. That is, the memory 1006 may store instructions for the methods and functions involving the server 110 in any of the above embodiments.
  • the communication interface 1008 uses a transceiver module such as, but not limited to, a network interface card or a transceiver to implement communication between the computing device 1000 and other devices or communication networks.
  • the embodiments of the present disclosure also provide a computing device cluster 1100.
  • the computing device cluster includes at least one computing device.
  • the computing device may be a server, such as a central server, an edge server, or a local server in a local data center.
  • the computing device may also be a terminal device such as a desktop computer, a laptop computer, or a smart phone.
  • the computing device cluster includes at least one computing device 1000.
  • the memory 1006 in one or more computing devices 1000 in the computing device cluster may store the same instructions for executing the methods and functions involving the server 110 in any of the above embodiments.
  • the memory 1006 of one or more computing devices 1000 in the computing device cluster may also respectively store partial instructions for executing the methods and functions of the server 110 in any of the above embodiments.
  • the combination of one or more computing devices 1000 may jointly execute instructions for executing the methods and functions of the server 110.
  • the memory 1006 in different computing devices 1000 in the computing device cluster may store different instructions, which are respectively used to execute part of the functions of the apparatus 900. That is, the instructions stored in the memory 1006 in different computing devices 1000 may implement the functions of one or more modules or sub-modules of the location receiving module 910 and the location updating module 920 (and the scene module in some embodiments).
  • one or more computing devices in a computing device cluster may be connected via a network.
  • the network may be a wide area network or a local area network, etc.
  • FIG. 12 shows a possible implementation 1200.
  • two computing devices 1000A and 1000B are connected via a network 1210.
  • the network is connected via a communication interface in each computing device.
  • the memory 1006 in the computing device 1000A stores instructions for executing the functions of the location receiving module 910.
  • the memory 1006 in the computing device 1000B stores instructions for executing the functions of the location updating module 920.
  • a consideration behind the connection method of the computing device cluster shown in Figure 12 is that the method provided in this application regarding the server 110 needs to store a large amount of user data and perform intensive real-time or near-real-time calculations, so the functions implemented by the location update module 920 are handed over to the computing device 1000B for execution.
  • the functions of the computing device 1000A shown in FIG. 12 may also be performed by multiple computing devices 1000.
  • the functions of the computing device 1000B may also be performed by multiple computing devices 1000.
  • the embodiments of the present disclosure also provide a computer program product including instructions, which, when executed on a computer, enables the computer to perform the methods and functions involving the server 110 or the user device 120 in any of the above embodiments.
  • An embodiment of the present disclosure further provides a computer-readable storage medium having computer instructions stored thereon.
  • the processor executes the instructions, the processor executes the methods and functions involving the server 110 or the user device 120 in any of the above embodiments.
  • various embodiments of the present disclosure may be implemented in hardware or dedicated circuits, software, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software, which may be performed by a controller, microprocessor, or other computing device. Although various aspects of the embodiments of the present disclosure are shown and described as block diagrams, flow charts, or using some other graphical representation, it should be understood that the blocks, devices, systems, techniques, or methods described herein may be implemented as, by way of non-limiting example, hardware, software, firmware, dedicated circuits or logic, general purpose hardware or controllers or other computing devices, or some combination thereof.
  • the present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium.
  • the computer program product includes computer executable instructions, such as instructions included in a program module, which are executed in a device on a real or virtual processor of the target to perform the process/method as described above with reference to the accompanying drawings.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • the functions of program modules can be combined or divided between program modules as needed.
  • Machine executable instructions for program modules can be executed in local or distributed devices. In distributed devices, program modules can be located in local and remote storage media.
  • the computer program code for realizing the methods of the present disclosure can be written in one or more programming languages. These computer program codes can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device, so that the program code, when executed by the computer or other programmable data processing device, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code can be executed entirely on a computer, partly on a computer, as an independent software package, partly on a computer and partly on a remote computer, or entirely on a remote computer or server.
  • computer program codes or related data may be carried by any appropriate carrier to enable a device, apparatus or processor to perform the various processes and operations described above.
  • carriers include signals, computer readable media, and the like.
  • signals may include electrical, optical, radio, acoustic or other forms of propagation signals, such as carrier waves, infrared signals, and the like.
  • a computer-readable medium may be any tangible medium containing or storing a program for or related to an instruction execution system, apparatus or device, or a data storage device such as a data center containing one or more available media.
  • a computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • a computer-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination thereof.
  • Computer-readable storage media include an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical storage device, a magnetic storage device, or any suitable combination thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application provide a position update method for a virtual scene, an electronic device, a computer storage medium, and a program product. The method includes receiving a real location from a user device of a user while the user's virtual character is in an offline state in a virtual scene. The method further includes updating, based on the received real location, the virtual position of the virtual character in the virtual scene. In this way, after the virtual character goes offline, the virtual-world position of the user associated with the virtual character can still be synchronized based on the user's real-world position. The sense of real-time presence and realism of the virtual scene can therefore be enhanced, significantly improving the immersive experience of the virtual scene and thereby increasing user engagement with virtual-world applications.

Description

Position update method, device, medium and program product for a virtual scene
This application claims priority to Chinese patent application No. 202211194639.9, filed with the Chinese Patent Office on September 28, 2022 and entitled "Position update method, device, medium and program product for a virtual scene", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of information technology, and more particularly, to a position update method for a virtual scene, an electronic device, a computer storage medium, and a program product.
Background
In the metaverse and in some game scenarios, a virtual world can be a virtualized, digitized counterpart of the real world. In such virtual-world scenarios, generating a mirror of the real world through digital-twin technology can provide people with an immersive experience. With the development of big data and cloud computing technologies, real-world activities such as social entertainment can be mapped into application scenarios of the virtual world, and users' expectations for immersion in the virtual world are growing ever higher.
Summary
Other features of the present disclosure will become readily understood from the following description. Embodiments of the present application provide a position update solution for a virtual scene.
In a first aspect, a position update method for a virtual scene is provided. The method includes: receiving a real location from a user device of a user while the user's virtual character is in an offline state in the virtual scene; and updating, based on the received real location, the virtual position of the virtual character in the virtual scene.
In this way, after the virtual character goes offline, the virtual-world position of the user associated with the virtual character can still be synchronized based on the user's real-world position, unaffected by the character being offline. On this basis, the sense of real-time presence and realism of the virtual scene can be enhanced, significantly improving the immersive experience of the virtual scene.
In some embodiments of the first aspect, the method further includes: receiving, from the user device, an indication that the virtual character is coming online in the virtual scene; and in response to receiving the indication, determining, based on a history of updates to the virtual position during the period when the virtual character was offline, three-dimensional display data for the virtual character to be presented on the user device. In this way, using the position records of the offline period as anchors to retrieve relevant scene data for computation makes it possible to reconstruct the scene of the character's activity during the offline period.
In some embodiments of the first aspect, determining the three-dimensional display data for the virtual character includes: determining the three-dimensional display data based on the history of updates to the virtual position during the offline period of the virtual character, a map resource library associated with the virtual scene, and a physics engine library. In this way, retrieving rich data from the map resource library according to the history of updates and using the physics engine library to enhance the expression of the virtual scene allows the scene of the character's activity to be represented realistically.
In some embodiments of the first aspect, determining the three-dimensional display data for the virtual character includes: determining a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to the virtual position, where the pre-offline position is the last virtual position of the virtual character in the virtual scene before it most recently went offline. In this way, compared with restoring the out-of-sync scene state from the previous logout, an online scene representation determined from the position records of the offline period can faithfully reconstruct the character's movement trajectory while offline, and is therefore more natural and more lifelike.
In some embodiments of the first aspect, determining the three-dimensional display data for the virtual character further includes: determining the three-dimensional display data based on histories of updates to the virtual positions of other virtual characters in the virtual scene during the period when the virtual character was offline. In this way, taking the influence of multiple other virtual characters into account when constructing the dynamic virtual representation of the character's offline period further improves the realism of the virtual scene.
In some embodiments of the first aspect, the aforementioned user is a first user, the virtual character is a first virtual character, the user device is a first user device, the real location is a first real location, the virtual position is a first virtual position, and the method further includes: receiving a second real location from a second user device of a second user while a second virtual character of the second user is in an online state in the virtual scene; updating, based on the second real location, a second virtual position of the second virtual character in the virtual scene; and determining, based on the second virtual position, the map resource library associated with the virtual scene, and the physics engine library, three-dimensional display data for the second virtual character to be presented on the second user device. In this way, embodiments of the present disclosure are applicable to multi-character online virtual scenes.
In some embodiments of the first aspect, determining the three-dimensional display data for the second virtual character includes: determining the three-dimensional display data for the second virtual character based on a history of updates to the first virtual position of the first virtual character.
In some embodiments of the first aspect, determining the three-dimensional display data for the second virtual character based on the history of updates to the first virtual position includes: generating, based on the history of updates to the first virtual position, three-dimensional display data including a three-dimensional display of the first virtual character. In this way, even if a virtual character in the virtual scene is offline, it can still be visible to, and have effects on, other virtual characters, so that the real world can be mapped more closely to the virtual scene.
In some embodiments of the first aspect, determining the three-dimensional display data further includes: determining the three-dimensional display data based on the respective privacy settings of the first virtual character and the second virtual character. In this way, on the one hand, users can have more flexible control over the privacy of their virtual characters; on the other hand, by checking the privacy settings, the server can also avoid some unnecessary subsequent computation and generation.
In some embodiments of the first aspect, determining the three-dimensional display data for the second virtual character includes: determining a dynamic representation of a change in the second virtual position of the second virtual character in the virtual scene. In this way, the generated data can be used to present on the user device the real-time dynamic changes of the online character as the user device moves, improving the user's immersive experience.
In some embodiments of the first aspect, determining the dynamic representation of the change includes: determining a path of the change in the second virtual position of the second virtual character in the virtual scene; and determining the dynamic representation based on the path. In this way, virtual objects around the path, and the changes they undergo as the second virtual character moves, can be included in the three-dimensional display data presented to the user, thereby improving the realism of the virtual scene.
In some embodiments of the first aspect, determining the dynamic representation of the change further includes: determining, based on the path, a representation of interactions between the second virtual character and virtual objects in the virtual scene. In this way, taking the interactions between multiple characters into account when presenting the virtual scene improves the immersiveness of the virtual scene.
In some embodiments of the first aspect, determining the dynamic representation of the change further includes: receiving, from the second user device, a posture of the second user; and determining the dynamic representation of the change based on the posture of the second user. In this way, the various action postures of online users can be correspondingly reflected on the virtual objects, further improving the user's immersive experience.
In some embodiments of the first aspect, virtual positions in the virtual scene and real locations in the real world are mapped to each other. In this way, embodiments of the present disclosure can use the real location associated with a virtual character as an anchor to provide a realistic, immersive online virtual experience.
In a second aspect, an apparatus for position updating for a virtual scene is provided. The apparatus includes: a position receiving module configured to receive a real location from a user device of a user while the user's virtual character is in an offline state in the virtual scene; and a position updating module configured to update, based on the received real location, the virtual position of the virtual character in the virtual scene.
In some embodiments of the second aspect, the position receiving module includes: an online indication module configured to receive, from the user device, an indication that the virtual character is coming online in the virtual scene; and the apparatus further includes: a scene module configured to determine, in response to receiving the indication and based on a history of updates to the virtual position during the period when the virtual character was offline, three-dimensional display data for the virtual character to be presented on the user device.
In some embodiments of the second aspect, the scene module further includes: a library module configured to determine the three-dimensional display data for the virtual character based on the history of updates to the virtual position during the offline period of the virtual character, a map resource library associated with the virtual scene, and a physics engine library.
In some embodiments of the second aspect, the scene module includes: a dynamic representation module configured to determine a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to the virtual position, where the pre-offline position is the last virtual position of the virtual character in the virtual scene before it most recently went offline.
In some embodiments of the second aspect, the scene module further includes: a multi-character module configured to determine the three-dimensional display data for the virtual character based on histories of updates to the virtual positions of other virtual characters in the virtual scene during the period when the virtual character was offline.
In some embodiments of the second aspect, the aforementioned user is a first user, the virtual character is a first virtual character, the user device is a first user device, the real location is a first real location, the virtual position is a first virtual position, and the apparatus further includes: a second position receiving module configured to receive a second real location from a second user device of a second user while a second virtual character of the second user is in an online state in the virtual scene; a second position updating module configured to update, based on the second real location, a second virtual position of the second virtual character in the virtual scene; and a second scene module configured to determine, based on the second virtual position, the map resource library associated with the virtual scene, and the physics engine library, three-dimensional display data for the second virtual character to be presented on the second user device.
In some embodiments of the second aspect, the second scene module includes: a second multi-character module configured to determine the three-dimensional display data for the second virtual character based on a history of updates to the first virtual position of the first virtual character.
In some embodiments of the second aspect, the second multi-character module includes: an other-character generation module configured to generate, based on the history of updates to the first virtual position, three-dimensional display data including a three-dimensional display of the first virtual character.
In some embodiments of the second aspect, the second scene module further includes: a privacy module configured to determine the three-dimensional display data for the second virtual character based on the respective privacy settings of the first virtual character and the second virtual character.
In some embodiments of the second aspect, the second scene module further includes: a second dynamic representation module configured to determine a dynamic representation of a change in the second virtual position of the second virtual character in the virtual scene.
In some embodiments of the second aspect, the second dynamic representation module includes: a path planning module configured to determine a path of the change in the second virtual position of the second virtual character in the virtual scene; and the second dynamic representation module further includes: a path dynamics module configured to determine the dynamic representation based on the path.
In some embodiments of the second aspect, the second dynamic representation module further includes: an along-path interaction module configured to determine, based on the path, a representation of interactions between the second virtual character and virtual objects in the virtual scene.
In some embodiments of the second aspect, the apparatus further includes a posture receiving module configured to receive, from the second user device, a posture of the second user; and the second scene module further includes a posture representation module configured to determine the dynamic representation of the change also based on the posture of the second user.
In some embodiments of the second aspect, virtual positions in the virtual scene and real locations in the real world are mapped to each other.
In a third aspect, an electronic device is provided. The electronic device includes a processor and a memory, the memory storing computer instructions which, when executed by the processor, cause the electronic device to perform the following actions: receiving a real location from a user device of a user while the user's virtual character is in an offline state in a virtual scene; and updating, based on the received real location, the virtual position of the virtual character in the virtual scene.
In some embodiments of the third aspect, the actions further include: receiving, from the user device, an indication that the virtual character is coming online in the virtual scene; and in response to receiving the indication, determining, based on a history of updates to the virtual position during the period when the virtual character was offline, a map resource library associated with the virtual scene, and a physics engine library, three-dimensional display data for the virtual character to be presented on the user device.
In some embodiments of the third aspect, determining the three-dimensional display data for the virtual character includes: determining the three-dimensional display data based on the history of updates to the virtual position during the offline period of the virtual character, the map resource library associated with the virtual scene, and the physics engine library.
In some embodiments of the third aspect, determining the three-dimensional display data for the virtual character includes: determining a dynamic representation of the virtual character moving in the virtual scene from a pre-offline position to the virtual position, where the pre-offline position is the last virtual position of the virtual character in the virtual scene before it most recently went offline.
In some embodiments of the third aspect, determining the three-dimensional display data for the virtual character further includes: determining the three-dimensional display data based on histories of updates to the virtual positions of other virtual characters in the virtual scene during the period when the virtual character was offline.
In some embodiments of the third aspect, the aforementioned user is a first user, the virtual character is a first virtual character, the user device is a first user device, the real location is a first real location, the virtual position is a first virtual position, and the actions further include: receiving a second real location from a second user device of a second user while a second virtual character of the second user is in an online state in the virtual scene; updating, based on the second real location, a second virtual position of the second virtual character in the virtual scene; and determining, based on the second virtual position, the map resource library associated with the virtual scene, and the physics engine library, three-dimensional display data for the second virtual character to be presented on the second user device.
In some embodiments of the third aspect, determining the three-dimensional display data for the second virtual character includes: determining the three-dimensional display data for the second virtual character based on a history of updates to the first virtual position of the first virtual character.
In some embodiments of the third aspect, determining the three-dimensional display data for the second virtual character based on the history of updates to the first virtual position includes: generating, based on the history of updates to the first virtual position, three-dimensional display data including a three-dimensional display of the first virtual character.
In some embodiments of the third aspect, determining the three-dimensional display data further includes: determining the three-dimensional display data based on the respective privacy settings of the first virtual character and the second virtual character.
In some embodiments of the third aspect, determining the three-dimensional display data for the second virtual character includes: determining a dynamic representation of a change in the second virtual position of the second virtual character in the virtual scene.
In some embodiments of the third aspect, determining the dynamic representation of the change includes: determining a path of the change in the second virtual position of the second virtual character in the virtual scene; and determining the dynamic representation based on the path.
In some embodiments of the third aspect, determining the dynamic representation of the change further includes: determining, based on the path, a representation of interactions between the second virtual character and virtual objects in the virtual scene.
In some embodiments of the third aspect, determining the dynamic representation of the change further includes: receiving, from the second user device, a posture of the second user; and determining the dynamic representation based on the posture of the second user.
In some embodiments of the third aspect, virtual positions in the virtual scene and real locations in the real world are mapped to each other.
In a fourth aspect, a computing device cluster is provided. The computing device cluster includes at least one computing device, each computing device including a processor and a memory; the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs the operations of the method according to the first aspect or any embodiment thereof.
In a fifth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores computer-executable instructions which, when executed by a processor, implement the operations of the method according to the first aspect or any embodiment thereof.
In a sixth aspect, a computer program or computer program product is provided. The computer program or computer program product is tangibly stored on a computer-readable medium and includes computer-executable instructions which, when executed, implement the operations of the method according to the first aspect or any embodiment thereof.
Brief Description of the Drawings
The above and other features, advantages and aspects of the embodiments of the present application will become more apparent in conjunction with the accompanying drawings and with reference to the following detailed description. In the drawings, identical or similar reference numerals denote identical or similar elements, in which:
FIG. 1 shows a schematic diagram of an example environment in which multiple embodiments of the present disclosure can be implemented;
FIG. 2 shows a flowchart of an example method for position updating for a virtual scene according to some embodiments of the present disclosure;
FIG. 3 shows a schematic interaction diagram of a process of determining three-dimensional display data when a virtual character comes online according to some embodiments of the present disclosure;
FIG. 4 shows a schematic diagram of an example movement path when a virtual character comes online according to some embodiments of the present disclosure;
FIG. 5 shows a schematic diagram of another example movement path when a virtual character comes online according to some embodiments of the present disclosure;
FIG. 6 shows a schematic diagram of yet another example movement path when a virtual character comes online according to some embodiments of the present disclosure;
FIG. 7 shows a flowchart of an example method for determining three-dimensional display data for an online character according to some embodiments of the present disclosure;
FIG. 8 shows a schematic diagram of an example movement path of an online virtual character according to some embodiments of the present disclosure;
FIG. 9 shows a schematic block diagram of an apparatus for position updating for a virtual scene according to some embodiments of the present disclosure;
FIG. 10 shows a schematic block diagram of an example device that can be used to implement embodiments of the present disclosure;
FIG. 11 shows a schematic block diagram of an example computing device cluster that can be used to implement embodiments of the present disclosure; and
FIG. 12 shows a schematic block diagram of an example implementation of a computing device cluster that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present application are shown in the drawings, it should be understood that the present application can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present application. It should be understood that the drawings and embodiments of the present application are for exemplary purposes only and are not intended to limit the scope of protection of the present application.
In the description of the embodiments of the present application, the term "include" and similar terms should be understood as open-ended inclusion, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and so on may refer to different or identical objects. Other explicit and implicit definitions may also be included below.
In virtual reality applications such as parallel reality or augmented reality, virtual scenes are often constructed as overlays on a mirror of the real world. Virtual positions in these scenes are mappings of actual positions in the real world, and the scenes usually contain the various objects at the corresponding real-world positions, such as infrastructure like buildings, greenery and parks.
In such virtual scenes based on real-world mirroring, position data can be used as key information for interacting with the virtual scene. Users' real-world activities can be mapped, based on position data, into a series of application scenarios in the virtual world. On this basis, the position of a character object in the virtual world can usually be mapped to the location of a client with communication and positioning capabilities (for example, Global Positioning System (GPS) positioning). For example, an individual walking in a real scenic area can use the mobile phone he or she carries to log into a virtual scene that includes that scenic area. In this scene, the virtual character used can represent the individual. On this basis, when the individual is at a particular scenic spot, based on the phone's location, his or her virtual character will also be located at the corresponding spot in the virtual scene.
In traditional virtual scene applications, when a character goes offline in the scene, the client usually uploads the character's current position data to a cloud server for archiving. Later, when the character comes online again, the client reads the position data archived before the previous logout from the server. The character is thus restored at its pre-logout virtual position in its previous state. However, when the virtual character corresponds to an individual moving in the real world, the individual's position and state when the character comes back online may no longer match those at logout, resulting in an unnatural user experience. Moreover, while offline, a character in the virtual world may disappear or remain motionless. In a multi-character online virtual scene mapped to the real world, this can also degrade the sense of real-time presence and realism experienced by other online users. As described above, the lack of position data exchange caused by the complete isolation of the virtual world from the real world while a virtual character is offline adversely affects the immersive experience of virtual reality users.
To solve the above and other problems, the present disclosure provides a solution for presenting a virtual scene, which can receive a real location from a user device of a user while the user's virtual character is in an offline state, and update the virtual position of the virtual character in the virtual scene based on the real location. In this way, after the virtual character goes offline, the virtual-world position of the user associated with the virtual character can still be synchronized based on the user's real-world position, unaffected by the character being offline. On this basis, the sense of real-time presence and realism of the virtual scene can be enhanced, significantly improving the immersive experience of the virtual scene and thereby increasing user engagement with virtual-world applications.
FIG. 1 shows a schematic diagram of an example environment 100 in which multiple embodiments of the present disclosure can be implemented. As shown in FIG. 1, the example environment 100 includes a server 110 and user devices 120-1, 120-2, ..., and 120-N. For convenience of description, in embodiments of the present disclosure the user devices 120-1, 120-2, ..., and 120-N are referred to collectively or individually as the user device 120.
In some embodiments, the server 110 may be a central device of the operator of the virtual scene. The server 110 may include various resources needed to run the virtual scene, such as, but not limited to, various basic data resources for building the virtual scene model, computing resources for constructing the real-time representation of the virtual scene for a given user, storage resources for storing the user data of the virtual scene, and communication resources for communicating with user devices, external content providers (not shown), and so on. It should be understood that, although shown as a single device, the server 110 may also be implemented in any other form suitable for performing the corresponding functions, such as multiple centralized devices, distributed devices, and/or deployed in the cloud.
In some embodiments, the user device 120 may be a device on which a client application of the virtual scene is installed. In some embodiments, the user device 120 may be implemented as an electronic device such as a smartphone, a tablet computer, or a wearable device. The user device 120 has a positioning capability for obtaining its own real location, such as satellite positioning, Bluetooth positioning, and/or other suitable positioning capabilities. In addition, embodiments of the present disclosure do not limit the number of user devices. In some multi-character online virtual scenes, the number of user devices may be on the order of thousands, tens of thousands, or more.
FIG. 1 also shows users 130-1, 130-2, ..., and 130-N, referred to in embodiments of the present disclosure collectively or individually as the user 130. A user 130 may bind his or her virtual character in the virtual scene to a device 120 and carry the user device 120 while moving in the real world. For example, user 130-1 may bind virtual character 140-1 to device 120-1 and carry user device 120-1 while moving in the real world. User 130-2 may bind virtual character 140-2 to device 120-2 and carry user device 120-2 while moving in the real world. User 130-N may bind virtual character 140-N to device 120-N and carry user device 120-N while moving in the real world. In embodiments of the present disclosure, the virtual characters 140-1, 140-2, ..., and 140-N are referred to collectively or individually as the virtual character 140.
The user device 120 can communicate with the server 110. It should be understood that embodiments of the present disclosure do not limit the communication method, which may be wired or wireless. Wired methods may include, but are not limited to, optical fiber connections and Universal Serial Bus (USB) connections; wireless methods may include, but are not limited to, mobile communication technologies (including but not limited to cellular mobile communication), Wi-Fi, Bluetooth, and Point to Point (P2P). In addition, as the user device 120 moves, the communication method between it and the server 110 may change, for example, from cellular mobile network communication to Wi-Fi communication, and then at some location to communication over a wired connection.
The user device 120 may send messages to the server 110 (such as in the form of client requests). In some embodiments, a message may be sent via an explicit user operation (for example, on the client application interface). For example, the user 130-1 associated with the user device 120-1 may log the account of the virtual character 140-1 into the virtual scene or take it offline, and perform manipulation actions on virtual objects displayed on the interface. Messages containing information about these operations may then be sent to the server 110.
In some embodiments, messages may also be sent periodically or in response to specific conditions according to settings regarding the virtual character, without requiring explicit user operations. For example, the user device 120 may send its actual location to the server 110 at regular intervals, or may send a new actual location to the server 110 when its displacement from the last sent actual location reaches a certain value.
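The reporting policy just described (periodic sending plus a displacement threshold) can be sketched as follows. This is a minimal illustration only; the function name `should_report` and the interval/displacement values are hypothetical, not part of the disclosed method:

```python
# Hypothetical client-side thresholds; the disclosure leaves the
# concrete interval and displacement values to the implementation.
REPORT_INTERVAL_S = 60.0       # send at least every N seconds
REPORT_DISPLACEMENT_M = 25.0   # ... or after moving this far

def should_report(now, last_sent_at, current_pos, last_sent_pos):
    """Decide whether the user device sends a new real location.

    Reports when the configured interval has elapsed, or when the
    displacement from the last reported position exceeds a threshold.
    Positions are (x, y) tuples in meters for simplicity.
    """
    if last_sent_at is None:
        return True  # nothing reported yet
    if now - last_sent_at >= REPORT_INTERVAL_S:
        return True  # periodic report due
    dx = current_pos[0] - last_sent_pos[0]
    dy = current_pos[1] - last_sent_pos[1]
    return (dx * dx + dy * dy) ** 0.5 >= REPORT_DISPLACEMENT_M
```

In practice the same predicate could gate location reports both while the character is online and, as described here, while it is offline.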
For example, based on the various requests and other inputs received from the user device 120, the server 110 may perform various operations such as storage, updating, analysis and computation so that the virtual scene can operate normally. The server 110 may also send messages to the user device 120 (such as in response to client requests). In some embodiments, the server 110 may, based on the actual location of the user device 120, send the user device 120 the virtual scene data that should be presented on it. For example, while a virtual character is online through the user device 120, the server 110 may stream to the user device 120 a dynamic representation of the character in the virtual scene that changes with its location.
It should be understood that the architecture and functions in the example environment 100 are described for exemplary purposes only, without implying any limitation on the scope of the present disclosure. Moreover, the example environment 100 may also contain other devices, systems or components not shown. For example, the user device 120 may communicate with the server 110 indirectly via a nearby edge server. In addition, embodiments of the present disclosure may also be applied to other environments with different structures and/or functions.
FIG. 2 shows a flowchart of an example method 200 for position updating for a virtual scene according to some embodiments of the present disclosure. The example method 200 may be performed, for example, by the server 110 shown in FIG. 1. It should be understood that the method 200 may also include additional actions not shown, and the scope of the present disclosure is not limited in this respect. The method 200 is described in detail below in conjunction with the example environment 100 of FIG. 1.
At block 210, while a user's virtual character is in an offline state in the virtual scene, a real location is received from the user's user device. For example, while the virtual character 140 of a user 130 is in an offline state in the virtual scene, the server 110 may receive a real-world location from the user device 120 of the user 130. For example, the server 110 may be a device of the operator of the virtual scene, and the user device 120 has a client application of the virtual scene.
As non-limiting examples, the virtual scene may be a virtual game scene, a virtual mirrored tour-guide scene of a scenic spot, and so on. In some embodiments, virtual positions in the virtual scene may be mapped to real locations in the real world. In such embodiments, a given virtual position in these scenes usually contains the various objects at the corresponding real-world location, such as infrastructure like buildings, greenery and parks, so that the virtual scene appears as a digital mirror of the real world. Furthermore, these virtual scenes may also augment the digital mirror by superimposing, at virtual positions corresponding to specific real locations, virtual objects that do not exist in the real world, such as, but not limited to, introductions to specific objects, small game props bound to real locations, and online service entrances for specific places such as shops. In this way, a user of the virtual scene can obtain an enhanced experience via the virtual scene upon arriving at a specific location in the real world.
In some embodiments, for example based on existing settings, the server 110 may store, in the user data associated with the virtual character 140, information about the binding of the user device 120 to the virtual character 140. Then, while the virtual character 140 is offline, the server 110 may receive a real location from the user device 120 and identify from the user data that it is associated with the offline virtual character 140.
At block 220, based on the received real location, the virtual position of the virtual character in the virtual scene is updated. For example, the server 110 may update the virtual position of the virtual character 140 in the virtual scene based on the real location received at block 210.
In some embodiments, virtual positions in the virtual scene may use the same coordinate system as real locations, and the server 110 may update the value of the virtual position recorded in the user data associated with the virtual character 140 to the value of the received real location. In some embodiments, the server 110 may convert the received real location into a corresponding virtual position according to a mapping rule between virtual positions and real locations, for use in updating the virtual position of the virtual character 140. In this way, the problem of position data being out of sync caused by the isolation of the virtual world from the real world while the virtual character is offline can be solved, which supports improving the user experience of online virtual scene applications that use position as an interaction anchor. For example, when the user device 120 takes its bound virtual character 140 offline at one location and then moves to a new location, its virtual character 140 can likewise move in the virtual scene to the virtual position to which the new location maps.
It should be understood that, while the virtual character 140 is offline, the server 110 may receive real locations from the user device 120 multiple times and accordingly update the virtual position of the virtual character 140 multiple times. In some embodiments, the server 110 may store a history of updates to the virtual position of the virtual character 140. For example, the server 110 may store a virtual position tracking table for the virtual character 140 and add a new entry to it each time the position of the virtual character 140 is updated. For example, the entry may include the updated virtual position and the corresponding update time. Embodiments of the present disclosure do not limit the specific way in which the server 110 stores the history of updates. In this way, the server 110 can obtain the position trajectory of the virtual character 140.
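One possible, simplified layout for such a virtual position tracking table is sketched below. The class and field names (`PositionRecord`, `CharacterTrack`, etc.) are invented for illustration, and the default identity coordinate mapping stands in for whatever mapping rule between real and virtual positions a deployment actually uses:

```python
from dataclasses import dataclass, field

@dataclass
class PositionRecord:
    virtual_pos: tuple   # (x, y) in virtual-scene coordinates
    updated_at: float    # server timestamp of this update

@dataclass
class CharacterTrack:
    """Per-character virtual position tracking table (one possible layout)."""
    records: list = field(default_factory=list)

    def update(self, real_pos, timestamp, to_virtual=lambda p: p):
        # Map the received real location to a virtual position and append
        # a new entry; the identity mapping assumed here corresponds to the
        # case where the two coordinate systems coincide.
        self.records.append(PositionRecord(to_virtual(real_pos), timestamp))

    def trajectory_since(self, t):
        # Entries recorded while the character was offline can later be
        # retrieved, e.g. when the character comes back online.
        return [r for r in self.records if r.updated_at >= t]
```

A production system would of course persist these entries in durable storage rather than an in-memory list.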
In some such embodiments, the server 110 may receive, from the user device 120, an indication that the virtual character 140 is coming online in the virtual scene. For example, the server 110 may receive, from the client application on the user device 120, a login request for the account associated with the virtual character 140. In response to the received online indication, the server 110 may determine, based on the history of updates to the virtual position during the period when the virtual character 140 was offline, three-dimensional display data for the virtual character 140 to be presented on the user device 120.
In some embodiments, the server 110 may determine the three-dimensional display data for the virtual character 140 to be presented on the user device 120 based on the history of updates to the virtual position during the offline period of the virtual character 140, a map resource library associated with the virtual scene, and a physics engine library. The map resource library may include various map data about the various virtual objects in the virtual scene, such as, but not limited to, object names, positions, three-dimensional shapes, colors, various physical properties, and functions. In some embodiments, the map resource library may include mapped data of the real world, such as data on real landscapes including buildings, green spaces and bodies of water. The map resource library may be used by the server 110 to build the base rendering model of the virtual scene, to plan movement paths for virtual characters, and so on. The physics engine library is used by the server 110 to compute interaction representations and policies between virtual objects according to their physical attributes, such as representing collisions between virtual characters or between a virtual character and a virtual facility, and determining whether a virtual character can pass through another object.
In some embodiments, as part of determining the three-dimensional display data, the server 110 may determine a dynamic representation of the virtual character 140 moving in the virtual scene from a pre-offline position to the virtual position, where the pre-offline position is the last virtual position of the virtual character 140 in the virtual scene before it most recently went offline. For example, the server 110 may reconstruct the movement path of the virtual character 140 during the offline period based on the history of updates. The server 110 may then retrieve the corresponding scene data from the map data according to the path, and use that data together with the model data of the virtual character 140 to compute a three-dimensional dynamic rendering for the coming-online of the virtual character 140. In this process, the server 110 may also use the physics engine library to enhance the interaction representations of virtual objects, making the dynamic representation more lifelike. For example, the server 110 may use the physics engine library to compute the deformations produced on both sides when a virtual object comes into contact with another object, thereby refining the three-dimensional display data.
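As a rough sketch of reconstructing a movement path from the offline position history, the fragment below linearly interpolates waypoints between consecutive recorded positions. A real implementation would additionally consult the map resource library (e.g., to snap waypoints to roads) and the physics engine library; the function name and the `step` parameter are hypothetical:

```python
def interpolate_path(records, step=1.0):
    """Expand sparse recorded positions into evenly spaced waypoints.

    `records` is a chronological list of (x, y) positions. This is a
    minimal linear interpolation; snapping to roads or resolving
    collisions is out of scope for this sketch.
    """
    waypoints = []
    for (x0, y0), (x1, y1) in zip(records, records[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        n = max(1, int(dist / step))  # waypoints per recorded segment
        for i in range(n):
            t = i / n
            waypoints.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    if records:
        waypoints.append(records[-1])  # always end at the final position
    return waypoints
```

The resulting waypoint list could then drive the animation of the character moving from its pre-offline position to its current virtual position.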
The server 110 may then send the determined three-dimensional display data to the user device 120 for presentation to the user. In this way, compared with restoring the out-of-sync scene state from the previous logout, the online scene representation determined based on the position records of the offline period is more natural and more lifelike.
The specific content of the dynamic representation depends on, among other things, the type of the user device 120 and the display mode of the client application. For example, the user device 120 may have a graphical display interface, and the three-dimensional display data may be a three-dimensional animated representation of the virtual character 140 moving to the current virtual position. For example, the user device 120 may be augmented reality glasses, and the three-dimensional display data may be a first-person three-dimensional animated representation of moving to the current virtual position. The dynamic representation may also include sound and haptic (e.g., vibration) representations.
Furthermore, depending on the specific implementation and client configuration, the server 110 may generate three-dimensional display data in different formats. For example, the server 110 may generate a three-dimensional video and send it to the user device. As another example, the server 110 may generate a three-dimensional rendering file interpretable by the user device, which is interpreted at the user device for presentation. Embodiments of the present disclosure do not limit the specific form and content of the dynamic representation, and for different client types and/or settings of the same virtual scene, the server 110 may determine three-dimensional display data with different content.
In some embodiments, when determining the three-dimensional display data, the server 110 may also take into account the histories of updates to the virtual positions of other virtual characters in the virtual scene during the offline period of the virtual character 140. For example, when reconstructing the movement path of the virtual character 140, the server 110 may check whether the trajectory of the virtual character 140 during the offline period has any time-space intersection with those of other characters. The server 110 may then include dynamic representations of the intersecting other virtual characters in the three-dimensional display data, such as, but not limited to, passing by the current virtual character 140, and detouring around and/or greeting each other at trajectory intersection points. In some embodiments, when determining the three-dimensional display data, the server 110 may also check the privacy settings of each virtual character. For example, the server 110 may consider only other virtual characters that are set to be visible to the virtual character 140 and that the virtual character 140 has set itself as wanting to see.
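The two checks described above, mutual privacy filtering and a time-space intersection test between trajectories, might be sketched as follows. The flag names in the privacy dictionary, the radius, and the time window are assumptions chosen for illustration only:

```python
def visible_peers(char_id, peers, privacy):
    """Filter other characters by privacy settings on both sides.

    `privacy[c]` is assumed to hold two flags per character: whether c
    is visible to others and whether c wants to see others (the names
    are hypothetical, not from the disclosure).
    """
    me = privacy[char_id]
    return [p for p in peers
            if privacy[p]["visible_to_others"] and me["wants_to_see_others"]]

def paths_intersect(track_a, track_b, radius=5.0, window=30.0):
    """Check whether two position histories ever come close in both
    space and time (a naive time-space intersection test).

    Each track is a list of ((x, y), timestamp) entries; a quadratic
    scan is fine for a sketch, though a real server would index by time.
    """
    for (pa, ta) in track_a:
        for (pb, tb) in track_b:
            if abs(ta - tb) <= window:
                d = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
                if d <= radius:
                    return True
    return False
```

Characters passing both checks would then be candidates for inclusion (e.g., passing by or greeting) in the reconstructed offline-period representation.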
在多用户的在线虚拟场景应用中,在构建虚拟角色离线期间的动态虚拟表示时考虑多个其他虚拟角色的影响,可以进一步提高虚拟场景的实境感。
图3示出了根据本公开的一些实施例的确定虚拟角色上线时的三维显示数据的过程300的示意交互图。过程300涉及关于图1所描述的服务器110和用户设备120,并且可以视为服务器110在使用关于图2所描述的方法200时与用户设备120交互的示例过程。
在过程300中,用户设备120向服务器110发送305虚拟角色140下线的指示。例如,用户设备120可以响应于虚拟角色140通过安装在其上的客户端应用登出虚拟场景而发送下线消息。可选地,用户设备120可以在下线指示中包括虚拟角色140下线时的现实位置。
在接收到下线消息后,服务器110执行针对虚拟角色140的服务器侧下线处理310。例如,服务器110可以将虚拟角色140下线时的各种状态存档,并且在下线时的现实位置可用时基于该现实位置来更新虚拟角色140的虚拟位置。例如,在多角色的场景中,服务器110可以相应地基于虚拟角色140下线而更新受到影响的其他角色的客户端动态表示,诸如显示提示动画等。例如,服务器110可以终止与用户设备120之间传输关于在线角色的流数据的会话。
可选地或附加地,服务器110可以确定应在用户设备120上呈现的关于虚拟角色140下线的三维显示数据,并且将该三维显示数据发送315给用户设备120,以供用户设备120呈现虚拟角色140的下线情景。服务器110可以以类似前文所述的方式,基于虚拟角色140下线前最后的虚拟位置(已记录的或根据下线消息中的现实位置而被更新的)、和/或与虚拟场景相关联的地图资源库以及物理引擎库来确定该三维显示数据。
在虚拟角色140离线期间,用户设备120可以向服务器110发送320新的现实位置。在一些实施例中,用户设备120可以在虚拟角色140下线后,根据关于虚拟角色140的设置来向服务器110发送现实位置。例如,该设置可以是每隔预定时间段向服务器110发送用户设备120自身的当前位置。
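上述「每隔预定时间段向服务器发送当前位置」的设置可以示意如下(Python);其中的模拟时间步长与函数名均为演示性假设:

```python
def should_report(last_report_time: float, now: float, interval: float) -> bool:
    """根据「每隔预定时间段上报一次」的设置,判断当前是否应发送现实位置。"""
    return now - last_report_time >= interval

def simulate_reports(start: float, end: float, interval: float) -> list:
    """以 1 秒为步长模拟 [start, end) 期间产生的上报时刻(仅用于演示)。"""
    reports, last, t = [], start, start
    while t < end:
        if should_report(last, t, interval):
            reports.append(t)
            last = t
        t += 1.0
    return reports
```

实际的客户端实现通常还会结合系统定时器、网络状态和省电策略来触发上报。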
作为非限制性示例,在虚拟角色140离线期间,用户设备120可以不再呈现虚拟场景以向用户130提供虚拟体验。在此期间,用户设备120可以由用户130随身携带以及以其他方式使用,例如体验另一虚拟场景等。但是,根据将虚拟角色140绑定到用户设备120的设置,用户设备120依然可以向服务器110发送现实位置。
响应于接收到该消息,服务器110更新325虚拟角色140在虚拟场景中的虚拟位置,如前文关于框220所述,为简洁起见在此不再赘述。
在一些实施例中,服务器110还可以基于虚拟角色140离线期间的位置更新来处理关于其他在线虚拟角色的客户端三维显示数据,例如在离线的虚拟角色140移动到在线的另一虚拟角色附近时,使得离线的虚拟角色140可以在另一虚拟角色所登录的用户设备上呈现。稍后将结合图7和图8更详细地描述这样的实施例。
在离线一段时间之后,虚拟角色140可以再次在虚拟场景中上线。例如,用户130可以使用用户设备120上的客户端应用登录到与虚拟角色140相关联的账户。当虚拟角色140上线时,用户设备120向服务器110发送330虚拟角色140上线的指示(诸如通过上线请求)。
响应于从用户设备120接收到虚拟角色140上线的指示,服务器110确定335用于在用户设备120上呈现的针对虚拟角色140的三维显示数据。如前文关于图2所述,服务器110基于在虚拟角色140离线期间虚拟位置的历史更新记录、和/或与虚拟场景相关联的地图资源库以及物理引擎库来确定该三维显示数据,为简洁起见在此不再赘述。
服务器110然后向用户设备120发送340所确定的三维显示数据。该三维显示数据由用户设备120通过其输出界面呈现345给用户130。例如,用户设备120可以播放作为视频流的该三维显示数据。在另一示例中,用户设备120可以解译并呈现作为三维渲染文件的三维显示数据。本公开的实施例对三维显示数据的具体形式不做限制。
应当理解,上述确定虚拟角色上线时的三维显示数据的过程300仅作为示意,过程300还可以包括未示出的动作或者与图3所示的不同的动作。例如,服务器110可以对接收到的上线请求进行认证。例如,在虚拟角色140离线期间,用户设备120可以向服务器110多次发送现实位置并且服务器110可以对应地多次更新虚拟角色140在虚拟场景中的虚拟位置。例如,在虚拟角色140上线后,服务器110与用户设备120之间可以建立用于传输虚拟场景数据的流式传输会话连接。此外,过程300中的动作还可以由其他设备来执行。例如,用户可以更换绑定到虚拟角色的用户设备。
图4示出了根据本公开的一些实施例的虚拟角色上线时的动态表示的示例移动路径的示意图400。示意图400中所示的路径可以是服务器110在针对用户设备120的虚拟角色140上线确定三维显示数据时确定的移动路径的非限制性示例。
如图4所示,在该示例中,虚拟角色140在上一次下线时位于虚拟位置410处。在离线期间,服务器110多次接收到与虚拟角色140相关联的现实位置,并对应地更新虚拟角色140的虚拟位置,例如虚拟位置430。
当虚拟角色140位于虚拟位置420时,虚拟角色140再次上线。此时,服务器110可以根据虚拟角色140离线期间更新的多个虚拟位置来还原虚拟角色140从虚拟位置410移动到虚拟位置420的路径450。服务器110可以根据地图资源库来调整路径使得其更加自然平滑,而无需每个历史虚拟位置都严格位于路径450上,如虚拟位置430所示。
然后,服务器可以进一步确定虚拟角色140从虚拟位置410移动到虚拟位置420的动态表示。例如,服务器110可以根据路径450和地图资源库来确定动态表示的场景内容,诸如虚拟角色140沿着路径450行走时经过的道路、建筑以及其他景观。例如,服务器110可以根据物理引擎库来确定虚拟角色140沿着路径行走时与虚拟场景中其他对象的交互表示,诸如被障碍物阻挡绕行时的交互策略。服务器110还可以基于虚拟位置的历史更新记录的更新时间来确定虚拟角色140的移动节奏,诸如通过将间隔时间按比例缩放。由此,服务器110可以确定用于在用户设备120上呈现的针对虚拟角色140上线的三维显示数据。
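上述通过将间隔时间按比例缩放来确定移动节奏的做法,可以用如下示意代码(Python)表达;函数名与参数均为本示例的假设:

```python
from typing import List, Tuple

Position = Tuple[float, float]

def scaled_schedule(
    records: List[Tuple[Position, float]], playback_duration: float
) -> List[Tuple[Position, float]]:
    """将历史更新时间按比例缩放到给定的动画时长,
    得到每个虚拟位置在上线动态表示中的出现时刻。
    records 为按更新时间升序排列的 (虚拟位置, 更新时间) 条目。"""
    t0 = records[0][1]
    span = records[-1][1] - t0
    if span == 0:
        # 所有更新集中在同一时刻:全部映射到动画起点
        return [(pos, 0.0) for pos, _ in records]
    return [(pos, (t - t0) / span * playback_duration) for pos, t in records]
```

例如,离线期间跨越两小时的位置更新可以被压缩为数秒的上线动画,而各位置之间的相对节奏保持不变。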
在还原离线路径以及进一步确定动态显示时,服务器110还可以考虑其他虚拟角色的历史更新记录(以及基于此的历史移动轨迹)。例如,如图4所示,根据另一虚拟角色的历史更新记录所指示的路径460,服务器110可以确定在440处,该另一虚拟角色的移动轨迹与虚拟角色140的轨迹具有时空交集。服务器110可以由此确定在虚拟角色140移动到该点附近的一段三维显示中应当包括关于该另一虚拟角色的动态表示以及可选地与虚拟角色140之间的交互表示(诸如,彼此发现对方、相互问候等)。
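上述检查两条轨迹是否具有时空交集的判断,可以用如下简化示意(Python)表达;其中以位置距离与时间差的双阈值作为交集判据,阈值取值为本示例的假设:

```python
import math
from typing import List, Tuple

Entry = Tuple[Tuple[float, float], float]  # (虚拟位置, 更新时间)

def find_crossings(
    track_a: List[Entry], track_b: List[Entry],
    max_dist: float, max_dt: float,
) -> List[Tuple[Entry, Entry]]:
    """检查两条历史轨迹是否具有时空交集:
    当存在一对记录,其位置距离和时间差均不超过阈值时,视为一次交集。"""
    crossings = []
    for ea in track_a:
        for eb in track_b:
            (xa, ya), ta = ea
            (xb, yb), tb = eb
            if abs(ta - tb) <= max_dt and math.hypot(xa - xb, ya - yb) <= max_dist:
                crossings.append((ea, eb))
    return crossings
```

实际实现可以在相邻记录之间插值后再作比较,以避免漏检两次更新之间发生的相遇。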
图5示出了根据本公开的一些实施例的虚拟角色上线时的动态表示的示例移动路径的示意图500。示意图500中所示的路径可以是服务器110在针对用户设备120的虚拟角色140上线确定三维显示数据时确定的移动路径的非限制性示例。
在图5的示例中,虚拟角色140在下线时位于虚拟位置510处并且在上线时位于虚拟位置520处。如图所示,在确定虚拟角色140的移动路径时,基于在离线期间相继更新的两个虚拟位置530和540以及地图资源库中的相关数据,服务器110可以判断这两个位置之间的直线路径(或类似的简单连接路径)在虚拟场景中实际上不可通行。例如,该直线路径可能穿过多个建筑物的墙体。
因此,服务器110基于地图资源库确定:在移动到虚拟位置530后,虚拟角色140继续沿着路径片段550移动到虚拟位置540。例如,路径片段550可以经过虚拟场景中的可用道路。相比简单地沿直线穿越墙壁,这样确定的路径更加贴近实际,从而可以生成更真实的动态表示。
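上述「不沿直线穿越墙体、而是经由可通行道路绕行」的路径确定,可以用网格化地图上的广度优先搜索来示意(Python);网格表示与 0/1 可通行编码均为本示例的假设,实际实现可以使用地图资源库中更精细的道路数据:

```python
from collections import deque
from typing import List, Optional, Tuple

Cell = Tuple[int, int]

def find_walkable_path(grid: List[List[int]], start: Cell, goal: Cell) -> Optional[List[Cell]]:
    """在网格化地图上用广度优先搜索寻找绕开不可通行格子(值为 1,如墙体)的最短路径。
    返回从 start 到 goal 的格子序列;不可达时返回 None。"""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # 记录每个格子的前驱,用于回溯路径
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

返回的格子序列即对应于路径片段550那样绕开墙体的可通行路线。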
图6示出了根据本公开的一些实施例的虚拟角色上线时示例移动路径的示意图600。示意图600中所示的路径可以是服务器110在针对用户设备120的虚拟角色140上线确定三维显示数据时确定的移动路径的非限制性示例。
在图6的示例中,虚拟角色140在下线时位于虚拟位置610处并且在上线时位于虚拟位置620处。在确定虚拟角色140移动路径时,服务器110可以确定在离线期间连续更新的两个虚拟位置之间的距离相比其他连续虚拟位置之间的距离具有不同量级。例如,如虚线指示的路径片段650所示,虚拟位置630和虚拟位置640处于两个彼此远离的区域(例如,不同街区),而其余连续的虚拟位置都与彼此处于同一区域。
因此,服务器110可以(单独地,或者结合更新时间和/或虚拟角色140拥有的虚拟资产等)以不同的移动模式来确定虚拟角色140从虚拟位置630移动到虚拟位置640的路径片段650。例如,服务器110可以将从虚拟位置630移动到虚拟位置640的路径片段规划为机动车模式,同时以行走模式来确定其余的路径片段。在此基础上,针对虚拟角色140上线,服务器110例如可以生成如下动态表示:虚拟角色140从虚拟位置610行走到虚拟位置630,继而乘车到达虚拟位置640,并且然后继续行走到虚拟位置620。
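上述根据相邻虚拟位置之间距离量级选择移动模式的逻辑,可以示意如下(Python);距离阈值与模式名称均为本示例的假设,实际实现还可以结合更新时间和虚拟资产等因素:

```python
import math
from typing import List, Tuple

Position = Tuple[float, float]

def plan_move_modes(
    positions: List[Position], vehicle_threshold: float = 500.0
) -> List[Tuple[Position, Position, str]]:
    """根据相邻虚拟位置之间的距离量级为每个路径片段选择移动模式:
    距离超过阈值的片段规划为机动车模式("vehicle"),其余为行走模式("walk")。
    阈值 500.0 仅为本示例的假设值。"""
    segments = []
    for a, b in zip(positions, positions[1:]):
        dist = math.hypot(b[0] - a[0], b[1] - a[1])
        mode = "vehicle" if dist > vehicle_threshold else "walk"
        segments.append((a, b, mode))
    return segments
```

例如,对应于图6的情形,中间跨街区的片段会被标记为机动车模式,首尾片段保持行走模式。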
应理解,图4至图6中的具体路径和虚拟位置点数目仅为说明起见示出,而不旨在对本公开的实施例进行任何限制。还应理解,为说明清楚起见,图4至图6并不一定按比例绘制。
在应用于多角色虚拟场景的一些实施例中,在一些虚拟角色离线期间,虚拟场景中仍然存在其他在线虚拟角色。图7示出了根据本公开的一些实施例的确定针对在线角色的三维显示数据的示例方法700的流程图。方法700可以视为方法200的可选附加步骤,并且可以例如由图1所示的服务器110在针对离线虚拟角色执行方法200的基础上执行。应当理解,方法700还可以包括未示出的附加动作,本公开的范围在此方面不受限制。以下结合图1的示例环境100来详细描述方法700。
如前文结合图2所述,使用方法200,服务器110可以在第一用户130-1的第一虚拟角色140-1在虚拟场景中处于离线状态期间,从第一用户130-1的第一用户设备120-1接收第一现实位置。基于接收到的第一现实位置,服务器110可以更新第一虚拟角色140-1在虚拟场景中的第一虚拟位置。
在框710处,服务器110还可以在第二用户130-2的第二虚拟角色140-2在虚拟场景中处于在线状态期间,从第二用户的第二用户设备120-2接收第二现实位置。在一些实施例中,服务器110可以经由用户设备120-2上安装的虚拟场景的客户端应用发送的消息接收第二现实位置。
在框720处,服务器110可以基于第二现实位置,更新第二虚拟角色140-2在虚拟场景中的第二虚拟位置。服务器110可以以如前文关于框220所述的方式类似的方式来更新第二虚拟位置。
在框730处,服务器110可以确定用于在第二用户设备120-2上呈现的针对第二虚拟角色140-2的三维显示数据。如框730-1所示,服务器110可以基于第二虚拟位置、(如前文关于图2所述的)与虚拟场景相关联的地图资源库以及物理引擎库,来确定针对虚拟角色140-2的三维显示数据。该三维显示的具体内容和格式与用户设备120-2的类型和客户端应用的显示模式等有关。例如,针对虚拟角色140-2生成的三维显示数据和针对虚拟角色140-1所生成的三维显示数据具有不同视角和比例。
在一些实施例中,作为确定针对虚拟角色140-2的三维显示数据的一部分,服务器110可以确定虚拟角色140-2在虚拟场景中的第二虚拟位置的变化的动态表示。例如,服务器110可以随着在线的虚拟角色140-2的位置变动生成实时的视频流,以供经由与用户设备120-2之间的会话连接来持续传输,从而提供流畅的动态场景显示。例如,服务器110可以根据第二虚拟位置从地图数据中获取相应的场景数据,并且使用该数据连同虚拟角色140-2的上一个虚拟位置以及角色模型数据一起,来计算针对虚拟角色140-2最新变化的三维动态渲染。
在一些实施例中,服务器110可以确定虚拟角色140-2在虚拟场景中的第二虚拟位置的变化的路径,以及基于路径来确定针对虚拟角色140-2的动态表示。例如,服务器110可以确定虚拟角色140-2在行走中转向。例如,服务器110可以确定虚拟角色140-2相对于其所在的上一个位置没有移动。然后,服务器110可以根据所确定的路径来计算虚拟角色140-2转向或停顿的动态表示,诸如以视频流的形式。
在一些实施例中,服务器110然后可以基于所确定的路径,确定虚拟角色140-2与虚拟场景中的虚拟对象的交互表示。例如,服务器110可以根据该路径和从地图数据中获取的场景数据来确定虚拟角色140-2将与场景中的哪些对象发生交互,诸如现实世界映射到虚拟场景中的对象、场景中附加的虚拟对象、以及其他在线角色等。服务器110可以使用物理引擎库来增强虚拟对象的交互表示。例如,服务器110可以计算在虚拟角色140-2向前走动时撞到其他对象时的回弹。
在一些实施例中,服务器110还可以从用户设备120-2接收关于用户130-2的姿态,以及基于用户130-2的姿态来确定动态表示。例如,服务器110可以接收用户130-2对客户端界面的操作,诸如点击某个虚拟对象。例如,服务器110可以接收由传感器捕捉的用户130-2的实际动作,诸如转向、点头以及挥手等。然后,服务器110可以将所接收的姿态转换为虚拟角色140-2的动态表示。
在该过程中,服务器110同样可以使用物理引擎库来增强虚拟对象与其他虚拟对象之间的交互表示。例如,服务器110可以使用物理引擎来确定虚拟角色140-2拾取柔性对象时将该对象挤压变形的动态表示。
如此,服务器110可以以与虚拟角色相关联的现实位置为锚点,融合多种数据源和工具,将在线用户的位置、移动和姿态等投射到其虚拟角色,向用户实时地呈现虚拟角色及其周围的虚拟对象的动态变化以及各种互动,从而为用户提供具有真实感的沉浸式在线虚拟体验。
在一些实施例中,如框730-2所示,服务器110还可以附加地基于处于离线状态的虚拟角色140-1的第一虚拟位置的历史更新记录,确定针对虚拟角色140-2的三维显示数据。例如,服务器110可以根据第一虚拟位置的轨迹确定虚拟角色140-1当前是否与虚拟角色140-2具有交集,并且基于该交集来调整虚拟角色140-2的动态表示。
在一些实施例中,作为确定针对虚拟角色140-2的三维显示数据的一部分,服务器110可以基于第一虚拟位置的历史更新记录,生成包括虚拟角色140-1的三维显示的三维显示数据。例如,在确定虚拟角色140-1与虚拟角色140-2当前具有交集时,服务器110可以根据第一虚拟位置的历史更新记录来确定虚拟角色140-1在虚拟角色140-2的动态显示中的行走方向和动态表示,诸如进入或离开针对虚拟角色140-2的三维显示数据包括的场景范围,向虚拟角色140-2迎面走来,或者在虚拟角色140-2前方行走等。例如,服务器110还可以在此基础上确定两个虚拟角色之间的交互表达,诸如面对面时挥手问候等。
如此,即使虚拟场景中的虚拟角色离线,该虚拟角色仍然可以在虚拟场景中移动并且对其他虚拟角色可见以及可选地产生影响,从而使得现实世界能够更紧密地映射到虚拟场景,改进用户的沉浸式体验。
在一些实施例中,服务器110还可以基于虚拟角色140-1和虚拟角色140-2各自的隐私设置,来确定针对虚拟角色140-2的三维显示数据。例如,服务器110可以根据虚拟角色140-1的设置确定此时虚拟角色140-1是否对虚拟角色140-2可见,以及可以根据虚拟角色140-2的设置确定其是否被设置为想要看见虚拟角色140-1。在此基础上,服务器110可以决定是否在确定针对虚拟角色140-2的三维显示数据时考虑虚拟角色140-1,例如是否进行虚拟角色140-1与虚拟角色140-2的位置比较。
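上述双向隐私检查可以示意如下(Python);其中的设置字段(可见对象集合与想要看见的对象集合)均为本示例的假设:

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class PrivacySettings:
    """假设的角色隐私设置:None 表示不作限制。"""
    visible_to: Optional[Set[str]] = None      # 本角色对哪些角色可见
    wants_to_see: Optional[Set[str]] = None    # 本角色想要看见哪些角色

def should_include(
    offline_id: str, offline_privacy: PrivacySettings,
    online_id: str, online_privacy: PrivacySettings,
) -> bool:
    """双向隐私检查:仅当离线角色对在线角色可见,
    且在线角色被设置为想要看见该离线角色时,
    才将其纳入三维显示数据的计算。"""
    visible = offline_privacy.visible_to is None or online_id in offline_privacy.visible_to
    wanted = online_privacy.wants_to_see is None or offline_id in online_privacy.wants_to_see
    return visible and wanted
```

在检查不通过时,服务器可以直接跳过后续的位置比较与交互表示生成,从而避免不必要的计算。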
如此,一方面,可以使得用户对虚拟角色的隐私性具有更灵活的控制,改进用户体验。另一方面,服务器也可以通过对隐私设置的检查,来避免后续一些不必要的计算和生成,从而提高虚拟场景性能。
应理解,尽管上文关于虚拟角色140-1和虚拟角色140-2描述了方法200和方法700的各种动作,但是虚拟场景中也可以具有其他更多的在线虚拟角色和离线虚拟角色,并且服务器110在针对其中某个虚拟角色来生成三维显示数据时,也可以考虑多个其他在线虚拟角色和离线虚拟角色。
图8示出了根据本公开的一些实施例的在线虚拟角色的示例移动路径的示意图800。示意图800中所示的路径可以是服务器110在针对在用户设备120-2上在线的虚拟角色140-2更新位置而确定的移动路径的非限制性示例。
如图8所示,虚拟角色140-2先前位于虚拟位置810。然后,服务器110从用户设备120-2接收到与虚拟角色140-2相关联的现实位置,并且对应地确定虚拟角色140-2应移动到对应的虚拟位置820。在此基础上,服务器110根据地图资源库确定路径840为虚拟角色140-2的移动路径(例如,沿着道路、和/或无冲突等)。
在确定该移动路径时,服务器110可以考虑到其他离线角色。例如,服务器110可以确定离线的虚拟角色(例如,如前文所述绑定到用户设备120-1的虚拟角色140-1)对虚拟角色140-2可见,并且正移动到虚拟位置830处。服务器110然后在确定路径840时可以使其从离线的虚拟角色140-1旁绕行。此外,服务器110在生成虚拟角色140-2的变化动态表示时,还可以生成其与虚拟角色140-1间的交互表示,如前文关于框730-2所述。
应理解,图8中的具体路径和虚拟位置点数目仅为说明起见示出,而不旨在对本公开的实施例进行任何限制。还应理解,为说明清楚起见,图8并不一定按比例绘制。
图9示出了根据本公开的一些实施例的用于针对虚拟场景的位置更新的装置900的示意框图。装置900可以被实现为或者被包括在图1的服务器110中。装置900可以包括多个模块,以用于执行例如图2中所讨论的方法200中的对应动作。
如图9所示,装置900包括位置接收模块910和位置更新模块920。位置接收模块910被配置为在用户的虚拟角色在虚拟场景中处于离线状态期间,从用户的用户设备接收现实位置。位置更新模块920被配置为基于接收到的现实位置,更新虚拟角色在虚拟场景中的虚拟位置。
在一些实施例中,位置接收模块910包括:上线指示模块,被配置为从用户设备接收关于虚拟角色在虚拟场景中上线的指示;并且该装置还包括:场景模块,被配置为响应于接收到该指示,基于在虚拟角色离线期间虚拟位置的历史更新记录,确定用于在用户设备上呈现的针对虚拟角色的三维显示数据。
在一些实施例中,场景模块包括:库模块,被配置为基于在虚拟角色离线期间虚拟位置的历史更新记录、与虚拟场景相关联的地图资源库、以及物理引擎库,确定针对虚拟角色的三维显示数据。
在一些实施例中,场景模块包括:动态表示模块,被配置为确定虚拟角色在虚拟场景中从下线前位置移动到虚拟位置的动态表示,其中下线前位置是虚拟角色最近一次下线前在虚拟场景中的最后虚拟位置。
在一些实施例中,场景模块还包括:多角色模块,被配置为基于在虚拟角色离线期间虚拟场景中的其他虚拟角色的虚拟位置的历史更新记录,确定针对虚拟角色的三维显示数据。
在一些实施例中,前述用户是第一用户,虚拟角色是第一虚拟角色,用户设备是第一用户设备,现实位置是第一现实位置,虚拟位置是第一虚拟位置,并且装置还包括:第二位置接收模块,被配置为在第二用户的第二虚拟角色在虚拟场景中处于在线状态期间,从第二用户的第二用户设备接收第二现实位置;第二位置更新模块,被配置为基于第二现实位置,更新第二虚拟角色在虚拟场景中的第二虚拟位置;第二场景模块,被配置为基于第二虚拟位置、与虚拟场景相关联的地图资源库以及物理引擎库,确定用于在第二用户设备上呈现的针对第二虚拟角色的三维显示数据。
在一些实施例中,第二场景模块包括:第二多角色模块,被配置为基于第一虚拟角色的第一虚拟位置的历史更新记录,确定针对第二虚拟角色的三维显示数据。
在一些实施例中,第二多角色模块包括:其他角色生成模块,被配置为基于第一虚拟位置的历史更新记录,生成包括第一虚拟角色的三维显示的三维显示数据。
在一些实施例中,第二场景模块还包括:隐私模块,被配置为基于第一虚拟角色和第二虚拟角色各自的隐私设置,确定针对第二虚拟角色的三维显示数据。
在一些实施例中,第二场景模块还包括:第二动态表示模块,确定第二虚拟角色在虚拟场景中的第二虚拟位置的变化的动态表示。
在一些实施例中,第二动态表示模块包括:路径规划模块,被配置为确定第二虚拟角色在虚拟场景中的第二虚拟位置的变化的路径;并且第二动态表示模块还包括:路径动态模块,被配置为基于路径来确定动态表示。
在一些实施例中,第二动态表示模块还包括:沿路径交互模块,被配置为基于路径,确定第二虚拟角色与虚拟场景中的虚拟对象的交互表示。
在一些实施例中,装置还包括姿态接收模块,被配置为从第二用户设备接收关于第二用户的姿态;并且第二场景模块还包括姿态表示模块,被配置为基于第二用户的姿态来确定动态表示。
在一些实施例中,虚拟场景中的虚拟位置与现实世界中的现实位置相互映射。
位置接收模块910和位置更新模块920均可以通过软件实现,或者可以通过硬件实现。示例性的,接下来以位置接收模块910为例,介绍位置接收模块910的实现方式。类似的,位置更新模块920的实现方式可以参考位置接收模块910的实现方式。
模块作为软件功能单元的一种举例,位置接收模块910可以包括运行在计算实例上的代码。其中,计算实例可以包括物理主机(计算设备)、虚拟机、容器中的至少一种。进一步地,上述计算实例可以是一台或者多台。例如,位置接收模块910可以包括运行在多个主机/虚拟机/容器上的代码。需要说明的是,用于运行该代码的多个主机/虚拟机/容器可以分布在相同的区域(region)中,也可以分布在不同的region中。进一步地,用于运行该代码的多个主机/虚拟机/容器可以分布在相同的可用区(availability zone,AZ)中,也可以分布在不同的AZ中,每个AZ包括一个数据中心或多个地理位置相近的数据中心。其中,通常一个region可以包括多个AZ。
同样,用于运行该代码的多个主机/虚拟机/容器可以分布在同一个虚拟私有云(virtual private cloud,VPC)中,也可以分布在多个VPC中。其中,通常一个VPC设置在一个region内。同一region内两个VPC之间,以及不同region的VPC之间的跨区通信,需在每个VPC内设置通信网关,经通信网关实现VPC之间的互连。
模块作为硬件功能单元的一种举例,位置接收模块910可以包括至少一个计算设备,如服务器等。或者,位置接收模块910也可以是利用专用集成电路(application-specific integrated circuit,ASIC)或可编程逻辑器件(programmable logic device,PLD)实现的设备等。其中,上述PLD可以是复杂程序逻辑器件(complex programmable logical device,CPLD)、现场可编程门阵列(field-programmable gate array,FPGA)、通用阵列逻辑(generic array logic,GAL)或其任意组合。
位置接收模块910包括的多个计算设备可以分布在相同的region中,也可以分布在不同的region中。位置接收模块910包括的多个计算设备可以分布在相同的AZ中,也可以分布在不同的AZ中。同样,位置接收模块910包括的多个计算设备可以分布在同一个VPC中,也可以分布在多个VPC中。其中,所述多个计算设备可以是服务器、ASIC、PLD、CPLD、FPGA和GAL等计算设备的任意组合。需要说明的是,在其他实施例中,位置接收模块910可以用于执行结合图2至图8所述的服务器110的任意过程和动作,位置更新模块920可以用于执行结合图2至图8所述的服务器110的任意过程和动作。位置接收模块910和位置更新模块920负责实现的步骤可根据需要指定,通过位置接收模块910和位置更新模块920分别执行结合图2至图8所述的服务器110的任意过程和动作来实现装置900的全部功能。
本公开的实施例还提供一种计算设备1000。如图10所示,计算设备1000包括:总线1002、处理器1004、存储器1006和通信接口1008。处理器1004、存储器1006和通信接口1008之间通过总线1002通信。计算设备1000可以是服务器或终端设备。应理解,本申请不限定计算设备1000中的处理器、存储器的个数。
总线1002可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图10中仅用一条线表示,但并不表示仅有一根总线或一种类型的总线。总线1002可包括在计算设备1000各个部件(例如,存储器1006、处理器1004、通信接口1008)之间传送信息的通路。
处理器1004可以包括中央处理器(central processing unit,CPU)、图形处理器(graphics processing unit,GPU)、微处理器(micro processor,MP)或者数字信号处理器(digital signal processor,DSP)等处理器中的任意一种或多种。
存储器1006可以包括易失性存储器(volatile memory),例如随机存取存储器(random access memory,RAM)。存储器1006还可以包括非易失性存储器(non-volatile memory),例如只读存储器(read-only memory,ROM)、快闪存储器、机械硬盘(hard disk drive,HDD)或固态硬盘(solid state drive,SSD)。
存储器1006中存储有可执行的程序代码,处理器1004执行该可执行的程序代码以分别实现前述位置接收模块910和位置更新模块920的功能,从而实现例如方法200和方法700。也即,存储器1006上可以存储有用于上述任一实施例中涉及服务器110的方法和功能的指令。
通信接口1008使用例如但不限于网络接口卡、收发器一类的收发模块,来实现计算设备1000与其他设备或通信网络之间的通信。
本公开的实施例还提供一种计算设备集群1100。该计算设备集群包括至少一台计算设备。该计算设备可以是服务器,例如是中心服务器、边缘服务器,或者是本地数据中心中的本地服务器。在一些实施例中,计算设备也可以是台式机、笔记本电脑或者智能手机等终端设备。
如图11所示,该计算设备集群包括至少一个计算设备1000。计算设备集群中的一个或多个计算设备1000中的存储器1006中可以存有相同的用于执行上述任一实施例中涉及服务器110的方法和功能的指令。
在一些可能的实现方式中,该计算设备集群中的一个或多个计算设备1000的存储器1006中也可以分别存有用于执行上述任一实施例中涉及服务器110的方法和功能的部分指令。换言之,一个或多个计算设备1000的组合可以共同执行用于执行服务器110的方法和功能的指令。
需要说明的是,计算设备集群中的不同的计算设备1000中的存储器1006可以存储不同的指令,分别用于执行装置900的部分功能。也即,不同的计算设备1000中的存储器1006存储的指令可以实现位置接收模块910和位置更新模块920(以及在一些实施例中的场景模块)的一个或多个模块或子模块的功能。
在一些可能的实现方式中,计算设备集群中的一个或多个计算设备可以通过网络连接。其中,所述网络可以是广域网或局域网等等。图12示出了一种可能的实现方式1200。如图12所示,两个计算设备1000A和1000B之间通过网络1210进行连接。具体地,通过各个计算设备中的通信接口与所述网络进行连接。在这一类可能的实现方式中,例如,计算设备1000A中的存储器1006中存储有执行位置接收模块910的功能的指令。同时,计算设备1000B中的存储器1006中存有执行位置更新模块920的功能的指令。
图12所示的计算设备集群的连接方式可以考虑到:本申请提供的关于服务器110的方法需要存储大量用户数据以及进行密集的实时或近实时的计算,因此可以将位置更新模块920实现的功能交由计算设备1000B执行。
应理解,图12中示出的计算设备1000A的功能也可以由多个计算设备1000完成。同样,计算设备1000B的功能也可以由多个计算设备1000完成。本公开的实施例还提供了一种包含指令的计算机程序产品,其在计算机上运行时,使得计算机执行上述各实施例中任一实施例中涉及服务器110或用户设备120的方法和功能。
本公开的实施例还提供了一种计算机可读存储介质,其上存储有计算机指令,当处理器运行所述指令时,使得处理器执行上述任一实施例中涉及服务器110或用户设备120的方法和功能。
通常,本公开的各种实施例可以以硬件或专用电路、软件、逻辑或其任何组合来实现。一些方面可以用硬件实现,而其他方面可以用固件或软件实现,其可以由控制器、微处理器或其他计算设备执行。虽然本公开的实施例的各个方面被示出并描述为框图、流程图或使用一些其他图示表示,但是应当理解,本文描述的框、装置、系统、技术或方法可以实现为(作为非限制性示例)硬件、软件、固件、专用电路或逻辑、通用硬件或控制器或其他计算设备,或其某种组合。
本公开还提供有形地存储在非暂时性计算机可读存储介质上的至少一个计算机程序产品。该计算机程序产品包括计算机可执行指令,例如包括在程序模块中的指令,其在目标的真实或虚拟处理器上的设备中执行,以执行如上参考附图的过程/方法。通常,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、库、对象、类、组件、数据结构等。在各种实施例中,可以根据需要在程序模块之间组合或分割程序模块的功能。用于程序模块的机器可执行指令可以在本地或分布式设备内执行。在分布式设备中,程序模块可以位于本地和远程存储介质中。
用于实现本公开的方法的计算机程序代码可以用一种或多种编程语言编写。这些计算机程序代码可以提供给通用计算机、专用计算机或其他可编程的数据处理装置的处理器,使得程序代码在被计算机或其他可编程的数据处理装置执行的时候,引起在流程图和/或框图中规定的功能/操作被实施。程序代码可以完全在计算机上、部分在计算机上、作为独立的软件包、部分在计算机上且部分在远程计算机上或完全在远程计算机或服务器上执行。
在本公开的上下文中,计算机程序代码或者相关数据可以由任意适当载体承载,以使得设备、装置或者处理器能够执行上文描述的各种处理和操作。载体的示例包括信号、计算机可读介质、等等。信号的示例可以包括电、光、无线电、声音或其它形式的传播信号,诸如载波、红外信号等。
计算机可读介质可以是包含或存储用于或有关于指令执行系统、装置或设备的程序的任何有形介质,或者是包含一个或多个可用介质的数据中心等数据存储设备。计算机可读介质可以是计算机可读信号介质或计算机可读存储介质。计算机可读介质可以包括但不限于电子的、磁的、光学的、电磁的、红外的或半导体系统、装置或设备,或其任意合适的组合。计算机可读存储介质的更详细示例包括带有一根或多根导线的电气连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或闪存)、光存储设备、磁存储设备,或其任意合适的组合。
此外,尽管在附图中以特定顺序描述了本公开的方法的操作,但是这并非要求或者暗示必须按照该特定顺序来执行这些操作,或是必须执行全部所示的操作才能实现期望的结果。相反,流程图中描绘的步骤可以改变执行顺序。附加地或备选地,可以省略某些步骤,将多个步骤组合为一个步骤执行,和/或将一个步骤分解为多个步骤执行。还应当注意,根据本公开的两个或更多装置的特征和功能可以在一个装置中具体化。反之,上文描述的一个装置的特征和功能可以进一步划分为由多个装置来具体化。
以上已经描述了本公开的各实现,上述说明是示例性的,并非穷尽的,并且也不限于所公开的各实现。在不偏离所说明的各实现的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在很好地解释各实现的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其他普通技术人员能理解本文公开的各个实现方式。

Claims (18)

  1. 一种针对虚拟场景的位置更新方法,其特征在于,包括:
    在用户的虚拟角色在所述虚拟场景中处于离线状态期间,从所述用户的用户设备接收现实位置;以及
    基于接收到的所述现实位置,更新所述虚拟角色在所述虚拟场景中的虚拟位置。
  2. 根据权利要求1所述的方法,其特征在于,还包括:
    从所述用户设备接收关于所述虚拟角色在所述虚拟场景中上线的指示;以及
    响应于接收到所述指示,基于在所述虚拟角色离线期间所述虚拟位置的历史更新记录,确定用于在所述用户设备上呈现的针对所述虚拟角色的三维显示数据。
  3. 根据权利要求2所述的方法,其特征在于,其中确定用于在所述用户设备上呈现的针对所述虚拟角色的三维显示数据包括:
    基于所述历史更新记录、与所述虚拟场景相关联的地图资源库、以及物理引擎库,确定所述三维显示数据。
  4. 根据权利要求2所述的方法,其特征在于,其中确定用于在所述用户设备上呈现的针对所述虚拟角色的三维显示数据包括:
    确定所述虚拟角色在所述虚拟场景中从下线前位置移动到所述虚拟位置的动态表示,其中所述下线前位置是所述虚拟角色最近一次下线前在所述虚拟场景中的最后虚拟位置。
  5. 根据权利要求2所述的方法,其特征在于,其中确定用于在所述用户设备上呈现的针对所述虚拟角色的三维显示数据还包括:
    基于在所述虚拟角色离线期间所述虚拟场景中的其他虚拟角色的虚拟位置的历史更新记录,确定所述三维显示数据。
  6. 根据权利要求1所述的方法,其特征在于,其中所述用户是第一用户,所述虚拟角色是第一虚拟角色,所述用户设备是第一用户设备,所述现实位置是第一现实位置,所述虚拟位置是第一虚拟位置,并且所述方法还包括:
    在第二用户的第二虚拟角色在所述虚拟场景中处于在线状态期间,从所述第二用户的第二用户设备接收第二现实位置;
    基于所述第二现实位置,更新所述第二虚拟角色在所述虚拟场景中的第二虚拟位置;以及
    基于所述第二虚拟位置、与所述虚拟场景相关联的地图资源库、以及物理引擎库,确定用于在所述第二用户设备上呈现的针对所述第二虚拟角色的三维显示数据。
  7. 根据权利要求6所述的方法,其特征在于,其中确定用于在所述第二用户设备上呈现的针对所述第二虚拟角色的三维显示数据包括:
    基于所述第二虚拟位置、所述地图资源库、所述物理引擎库、以及所述第一虚拟角色的所述第一虚拟位置的历史更新记录,确定所述三维显示数据。
  8. 根据权利要求7所述的方法,其特征在于,其中基于所述第一虚拟位置的所述历史更新记录确定所述三维显示数据包括:
    基于所述第一虚拟位置的所述历史更新记录,生成包括所述第一虚拟角色的三维显示的所述三维显示数据。
  9. 根据权利要求7所述的方法,其特征在于,其中确定用于在所述第二用户设备上呈现的针对所述第二虚拟角色的三维显示数据还包括:
    基于所述第一虚拟角色和所述第二虚拟角色各自的隐私设置,确定所述三维显示数据。
  10. 根据权利要求6所述的方法,其特征在于,其中确定用于在所述第二用户设备上呈现的针对所述第二虚拟角色的三维显示数据包括:
    确定所述第二虚拟角色在所述虚拟场景中的所述第二虚拟位置的变化的动态表示。
  11. 根据权利要求10所述的方法,其特征在于,其中确定所述第二虚拟角色在所述虚拟场景中的所述第二虚拟位置的变化的动态表示包括:
    确定所述第二虚拟角色在所述虚拟场景中的所述第二虚拟位置的变化的路径;以及
    基于所述路径来确定所述动态表示。
  12. 根据权利要求11所述的方法,其特征在于,其中确定所述第二虚拟角色在所述虚拟场景中的所述 第二虚拟位置的变化的动态表示还包括:
    基于所述路径,确定所述第二虚拟角色与所述虚拟场景中的虚拟对象的交互表示。
  13. 根据权利要求10所述的方法,其特征在于,其中确定所述第二虚拟角色在所述虚拟场景中的所述第二虚拟位置的变化的动态表示还包括:
    从所述第二用户设备接收关于所述第二用户的姿态;以及
    基于所述第二用户的所述姿态来确定所述动态表示。
  14. 根据权利要求1所述的方法,其特征在于,其中所述虚拟场景中的虚拟位置与现实世界中的现实位置相互映射。
  15. 一种电子设备,其特征在于,包括处理器和存储器,所述存储器上存储有计算机指令,当所述计算机指令被所述处理器执行时,使得所述电子设备执行权利要求1至14中任一项所述的方法。
  16. 一种计算设备集群,其特征在于,包括至少一个计算设备,每个计算设备包括处理器和存储器;
所述至少一个计算设备的处理器用于执行所述至少一个计算设备的存储器中存储的指令,以使得所述计算设备集群执行根据权利要求1至14中任一项所述的方法。
  17. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有计算机可执行指令,所述计算机可执行指令被处理器执行时实现根据权利要求1至14中任一项所述的方法。
  18. 一种计算机程序产品,其特征在于,所述计算机程序产品上包含计算机可执行指令,所述计算机可执行指令在被执行时实现根据权利要求1至14中任一项所述的方法。
PCT/CN2023/110379 2022-09-28 2023-07-31 针对虚拟场景的位置更新方法、设备、介质和程序产品 WO2024066723A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211194639.9 2022-09-28
CN202211194639.9A CN117830567A (zh) 2022-09-28 2022-09-28 针对虚拟场景的位置更新方法、设备、介质和程序产品

Publications (1)

Publication Number Publication Date
WO2024066723A1 true WO2024066723A1 (zh) 2024-04-04

Family

ID=90475956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/110379 WO2024066723A1 (zh) 2022-09-28 2023-07-31 针对虚拟场景的位置更新方法、设备、介质和程序产品

Country Status (2)

Country Link
CN (1) CN117830567A (zh)
WO (1) WO2024066723A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116687A (zh) * 2011-11-17 2013-05-22 苏州蜗牛数字科技股份有限公司 基于网络游戏的角色离线控制方法
US9569466B1 (en) * 2013-01-30 2017-02-14 Kabam, Inc. System and method for offline asynchronous user activity in a player versus player online game
US9669296B1 (en) * 2012-07-31 2017-06-06 Niantic, Inc. Linking real world activities with a parallel reality game
US20200294311A1 (en) * 2019-03-14 2020-09-17 Microsoft Technology Licensing, Llc Reality-guided roaming in virtual reality
CN112044057A (zh) * 2020-09-17 2020-12-08 网易(杭州)网络有限公司 一种游戏状态监测方法和装置
CN112587926A (zh) * 2020-12-25 2021-04-02 珠海金山网络游戏科技有限公司 一种数据处理方法与装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116687A (zh) * 2011-11-17 2013-05-22 苏州蜗牛数字科技股份有限公司 基于网络游戏的角色离线控制方法
US9669296B1 (en) * 2012-07-31 2017-06-06 Niantic, Inc. Linking real world activities with a parallel reality game
US9569466B1 (en) * 2013-01-30 2017-02-14 Kabam, Inc. System and method for offline asynchronous user activity in a player versus player online game
US20200294311A1 (en) * 2019-03-14 2020-09-17 Microsoft Technology Licensing, Llc Reality-guided roaming in virtual reality
CN112044057A (zh) * 2020-09-17 2020-12-08 网易(杭州)网络有限公司 一种游戏状态监测方法和装置
CN112587926A (zh) * 2020-12-25 2021-04-02 珠海金山网络游戏科技有限公司 一种数据处理方法与装置

Also Published As

Publication number Publication date
CN117830567A (zh) 2024-04-05

Similar Documents

Publication Publication Date Title
US11245872B2 (en) Merged reality spatial streaming of virtual spaces
JP6281496B2 (ja) 情報処理装置、端末装置、情報処理方法及びプログラム
JP6281495B2 (ja) 情報処理装置、端末装置、情報処理方法及びプログラム
JP3625212B1 (ja) 3次元仮想空間シミュレータ、3次元仮想空間シミュレーションプログラム、およびこれを記録したコンピュータ読み取り可能な記録媒体
US8392839B2 (en) System and method for using partial teleportation or relocation in virtual worlds
JP6181917B2 (ja) 描画システム、描画サーバ、その制御方法、プログラム、及び記録媒体
JP2021525911A (ja) マルチサーバクラウド仮想現実(vr)ストリーミング
JP2014149712A (ja) 情報処理装置、端末装置、情報処理方法及びプログラム
WO2023179346A1 (zh) 特效图像处理方法、装置、电子设备及存储介质
JP2023159344A (ja) ゲーム内位置ベースのゲームプレイコンパニオンアプリケーション
US20200057435A1 (en) Method and system for controlling robots within in an interactive arena and generating a virtual overlayed
CN114177613B (zh) 导航网格更新方法、装置、设备及计算机可读存储介质
CN109314800B (zh) 用于将用户注意力引导到基于位置的游戏进行伴随应用的方法和系统
WO2018098744A1 (zh) 一种基于虚拟驾驶的数据处理方法及系统
US20230405452A1 (en) Method for controlling game display, non-transitory computer-readable storage medium and electronic device
US9497238B1 (en) Application control translation
WO2024088144A1 (zh) 增强现实画面的处理方法、装置、电子设备及存储介质
JP2024016017A (ja) 情報処理システム、情報処理装置およびプログラム
WO2024066723A1 (zh) 针对虚拟场景的位置更新方法、设备、介质和程序产品
JP3581673B2 (ja) 群集の移動を表現する方法、記憶媒体、および情報処理装置
EP4344234A1 (en) Live broadcast room presentation method and apparatus, and electronic device and storage medium
KR20240067675A (ko) 3차원 빌딩 모델 및 도로 모델을 이용한 3차원 거리뷰 모델 생성 방법 및 시스템
CN114405000A (zh) 游戏数据同步方法、装置、计算机设备及存储介质
CN117379795A (zh) 动态模型配置方法、终端设备、装置及存储介质
JP2004167273A (ja) 群集の移動を表現する方法、記憶媒体、および情報処理装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23869969

Country of ref document: EP

Kind code of ref document: A1