CN115770389A - Virtual character movement synchronization method, apparatus, device and storage medium - Google Patents

Virtual character movement synchronization method, apparatus, device and storage medium

Info

Publication number
CN115770389A
Authority
CN
China
Prior art keywords
mode
movement
determining
interpolation algorithm
algorithm
Prior art date
Legal status
Pending
Application number
CN202111617082.0A
Other languages
Chinese (zh)
Inventor
陈栋
董根
陈猛
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Publication of CN115770389A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present application discloses a virtual character movement synchronization method, apparatus, device, and storage medium, and belongs to the field of data synchronization. The method comprises the following steps: receiving a movement data packet of a first virtual character, wherein the movement data packet comprises a movement mode and a logical position of the first virtual character, and the logical position indicates the position of the first virtual character in a virtual world; determining synchronization calculation parameters corresponding to the movement mode; performing synchronization calculation on the logical position based on the synchronization calculation parameters to obtain a rendering position of the first virtual character; and displaying the first virtual character on a user interface according to the rendering position. With this method, different synchronization calculation strategies can be adopted for different movement modes, effectively avoiding stuttering.

Description

Virtual character movement synchronization method, apparatus, device and storage medium
The present application claims priority to Chinese patent application No. 202111044497.3, entitled "Virtual character movement synchronization method, apparatus, device, and storage medium", filed on 09/07/2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of data synchronization, and in particular to a virtual character movement synchronization method, apparatus, device, and storage medium.
Background
In a Massively Multiplayer Online Role-Playing Game (MMORPG), a player can control a virtual character to move in a virtual world.
Suppose that in one battle, client A controls virtual character A in the virtual world, and client B controls virtual character B in the virtual world. Virtual character A is a non-master virtual character for client B, so client A needs to synchronize the position of virtual character A to client B for display. In the related art, client A sends a movement data packet to client B at predetermined time intervals, and the movement data packet carries the position of virtual character A.
However, the synchronization process is affected by various factors, and the above synchronization scheme is prone to causing stuttering when client B displays virtual character A.
Disclosure of Invention
The present application provides a virtual character movement synchronization method, apparatus, device, and storage medium, in which different synchronization calculation parameters can be adopted for different movement modes, so that a better display effect is achieved when a client displays a non-master virtual character. The technical solution is as follows:
according to an aspect of the present application, there is provided a method for synchronizing movement of a virtual character, the method including:
receiving a movement data packet of a first virtual character, wherein the movement data packet comprises a movement mode and a logical position of the first virtual character, and the logical position indicates the position of the first virtual character in a virtual world;
determining synchronization calculation parameters corresponding to the movement mode;
in a case where the rendering position of the first virtual character has not reached the logical position, performing synchronization calculation on the logical position based on the synchronization calculation parameters to obtain the rendering position of the first virtual character; and
displaying the first virtual character on a user interface according to the rendering position.
According to another aspect of the present application, there is provided a virtual character movement synchronization apparatus, the apparatus including:
a receiving module, configured to receive a movement data packet of a first virtual character, wherein the movement data packet comprises a movement mode and a logical position of the first virtual character, and the logical position indicates the position of the first virtual character in a virtual world;
a determining module, configured to determine synchronization calculation parameters corresponding to the movement mode;
a playing module, configured to perform synchronization calculation on the logical position based on the synchronization calculation parameters to obtain a rendering position of the first virtual character; and
a display module, configured to display the first virtual character on a user interface according to the rendering position.
According to another aspect of the present application, there is provided a computer device comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the virtual character movement synchronization method described above.
According to another aspect of the present application, there is provided a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the virtual character movement synchronization method described above.
The technical solutions provided by the present application bring at least the following beneficial effects:
By providing different synchronization calculation parameters for different movement modes, and performing synchronization calculation on the different types of movement data packets according to those parameters, the smoothness of the rendering positions calculated in the various movement modes can be ensured, reducing or avoiding stuttering.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings can be derived from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a method for synchronizing movement of a virtual character according to an exemplary embodiment of the present application;
FIG. 3 is a schematic view of an interface for movement synchronization of a virtual character in a running mode provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a spline interpolation as an interpolation algorithm provided by an exemplary embodiment of the present application;
FIG. 5 is a flowchart of a method for synchronizing movement of a virtual character according to an exemplary embodiment of the present application;
FIG. 6 is a schematic illustration of the effect of delay compensation on logical position provided by an exemplary embodiment of the present application;
FIG. 7 is an interface diagram of movement synchronization of a virtual character in an air mode provided by an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of cubic spline interpolation provided by an exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of a matrix-based cubic spline interpolation provided in an exemplary embodiment of the present application;
FIG. 10 is a schematic illustration of linear interpolation as an extrapolation method provided by an exemplary embodiment of the present application;
FIG. 11 is a schematic diagram of sweep detection provided by an exemplary embodiment of the present application;
FIG. 12 is a flowchart of a method for synchronizing the movement of a virtual character according to an exemplary embodiment of the present application;
FIG. 13 is a flowchart of a method for synchronizing the movement of a virtual character according to an exemplary embodiment of the present application;
FIG. 14 is a configuration diagram of parameters corresponding to a movement pattern provided by an exemplary embodiment of the present application;
FIG. 15 is a block diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 16 is a schematic structural diagram of a server according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application will be described:
Massively Multiplayer Online Role-Playing Game (MMORPG): in an MMORPG, players can play one or more virtual characters and control the activities and behaviors of those virtual characters in the virtual environment. In an MMORPG battle, player A controls virtual character A in the virtual world, and player B controls virtual character B in the virtual world. Player A's client sends a movement data packet carrying the position of virtual character A to player B's client at predetermined time intervals, so that the moving position of virtual character A is displayed on player B's client.
Virtual environment: the virtual environment displayed (or provided) when an application runs on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated semi-fictional three-dimensional world, or a purely fictional three-dimensional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment. Optionally, the virtual environment is also used for battles between at least two virtual characters, with virtual resources available for their use.
Virtual character: a movable object in the virtual world. The movable object may be a simulated character or an animated character in the virtual world; each virtual character is an independent individual in the virtual world. The embodiments of the present application involve master virtual characters and non-master virtual characters. A master virtual character is the virtual character controlled by the client where the current user is located; a non-master virtual character is a virtual character that appears on the client where the current user is located but is controlled by another user.
Movement mode: the manner in which the first virtual character moves in the virtual world. Movement modes include a standing mode, a running mode, a swimming mode, an air mode, an art displacement mode, a free displacement mode, a path movement mode, and the like. In different movement modes, the synchronization calculation parameters of the virtual character differ, as do its physical calculation, position calculation, and interaction with the environment.
Standing mode: a movement mode in which the first virtual character stands in the virtual world.
Running mode: a movement mode in which the first virtual character moves in the virtual world by walking or running.
Swimming mode: a movement mode in which the first virtual character moves through the virtual world by swimming.
Air mode: a movement mode in which the first virtual character moves while airborne after jumping in the virtual world.
Art displacement mode: a movement mode in which the first virtual character moves in the virtual world according to a preset animation. In this mode, the master client only needs to send an initial movement data packet to the other clients for position synchronization.
Free displacement mode: a movement mode in which the first virtual character moves in the virtual world with irregular displacement, such as climbing, climbing stairs, or interacting with virtual items.
Path movement mode: a movement mode in which the first virtual character moves in the virtual world along a preset path.
FIG. 1 is a block diagram illustrating a computer system according to an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a server 120, and a second terminal 130.
The first terminal 110 has a client 111 supporting the virtual environment installed and running, and the client 111 may be a multiplayer online battle program. When the first terminal runs the client 111, a user interface of the client 111 is displayed on the screen of the first terminal 110. The client 111 may be any one of a battle-royale shooting game, a Virtual Reality (VR) application, an Augmented Reality (AR) program, a three-dimensional map program, a virtual reality game, an augmented reality game, a First-Person Shooting game (FPS), a Third-Person Shooting game (TPS), a Multiplayer Online Battle Arena game (MOBA), and a strategy game (SLG). In this embodiment, the client 111 being a MOBA game is taken as an example. The first terminal 110 is the terminal used by the first user 112, who uses it to control the activities of a first virtual character located in the virtual environment; the first virtual character may be referred to as the master virtual character of the first user 112. The activities of the first virtual character include, but are not limited to, at least one of: moving, jumping, teleporting, releasing skills, adjusting body posture, crawling, walking, running, riding, flying, driving, picking up items, shooting, attacking, and throwing. Illustratively, the first virtual character is a simulated character or an animated character.
The second terminal 130 has a client 131 supporting the virtual environment installed and running, and the client 131 may be a multiplayer online battle program. When the second terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second terminal 130. The client may be any one of a battle-royale shooting game, a VR application, an AR program, a three-dimensional map program, a virtual reality game, an augmented reality game, an FPS, a TPS, a MOBA, and an SLG; in this embodiment, the client being a MOBA game is taken as an example. The second terminal 130 is the terminal used by the second user 113, who uses it to control the activities of a second virtual character located in the virtual environment; the second virtual character may be referred to as the master virtual character of the second user 113. Illustratively, the second virtual character is a simulated character or an animated character.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same camp, the same team, or the same organization, have a friend relationship, or have temporary communication permissions. Alternatively, the first virtual character and the second virtual character may belong to different camps, different teams, or different organizations, or have a hostile relationship.
Optionally, the clients installed on the first terminal 110 and the second terminal 130 are the same, or the clients installed on the two terminals are the same type of client on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals and the second terminal 130 to another; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include at least one of: a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer.
Only two terminals are shown in FIG. 1, but in different embodiments a plurality of other terminals 140 may access the server 120. Optionally, one or more of the terminals 140 correspond to a developer: a development and editing platform supporting the client of the virtual environment is installed on the terminal 140, the developer can edit and update the client on the terminal 140 and transmit the updated client installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the client installation package from the server 120 to update the client.
The first terminal 110, the second terminal 130, and the other terminals 140 are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used for providing background services for clients supporting a three-dimensional virtual environment. Alternatively, the server 120 undertakes primary computational work and the terminal undertakes secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server 120 includes a processor 122, a user account database 123, a battle service module 124, and a user-oriented Input/Output Interface (I/O interface) 125. The processor 122 is configured to load instructions stored in the server 120 and process the data in the user account database 123 and the battle service module 124; the user account database 123 is configured to store the data of the user accounts used by the first terminal 110, the second terminal 130, and the other terminals 140, such as the profile picture, nickname, and combat power index of each user account and the service area where each user account is located; the battle service module 124 is configured to provide a plurality of battle rooms, such as 1v1, 3v3, and 5v5 battles, for users to fight in; and the user-oriented I/O interface 125 is configured to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless or wired network to exchange data.
FIG. 2 is a flowchart illustrating a virtual character movement synchronization method according to an exemplary embodiment of the present application. The method may be performed by the first terminal 110 or the second terminal 130 shown in FIG. 1, and includes the following steps:
step 202: receiving a mobile data packet of a first virtual character, wherein the mobile data packet comprises a mobile mode and a logical position of the first virtual character, and the logical position is used for indicating the position of the first virtual character in the virtual world;
the mobile data packet is a data packet carrying the synchronization data information of the first virtual role. In the case where the first virtual character is controlled by the first client, the mobile data packet is transmitted from the first client to the second client, the first client is a client that controls the first virtual character, and the second client is a client that controls the second virtual character. For the second client, the first virtual role is a non-master virtual role, that is, a virtual role that can be controlled by a non-local terminal. The second client needs to receive the mobile data packet from the first client to synchronously display the real-time position of the first virtual character in the virtual world. In the case where the first avatar is a neutral avatar, the mobile data packet is sent by the server to the second client.
Illustratively, the mobile data packet includes: a movement pattern and a logical position of the first avatar.
The movement pattern is a pattern when the first avatar moves in the virtual world. Exemplary movement modes include a standing mode, a running mode, a swimming mode, an air mode, a art displacement mode, a free displacement mode, and a path movement mode. The first virtual character performs different actions in the virtual environment and has corresponding movement modes, illustratively, the first virtual character jumps and falls in the air and belongs to an air mode, the first virtual character rolls, slides and skips in the art displacement mode, the first virtual character climbs in a free displacement mode, and the first virtual character patrols in a path movement mode.
The logical position is used to indicate the position of the first avatar in the virtual world. For example, when the second virtual character performs a shooting operation on the first virtual character in the virtual environment, a shooting judgment needs to be performed according to the logical position of the first virtual character.
Taking the example that the first virtual role is controlled by the first client, the first virtual role moves in the virtual environment, the first client generates a movement data packet according to the movement mode and the logic position of the first virtual role during movement, and the movement data packet is sent to the second client through the server.
And the first client sends the mobile data packet to the second client according to the preset synchronous frequency. The synchronization frequencies corresponding to different movement patterns are the same or different. Optionally, the first client further sends the mobile data packet to the second client if a trigger condition is met, the trigger condition including but not limited to: the movement pattern is switched, the amount of change of the current logical position from the logical position at the time of the latest synchronization reaches a first threshold value, the amount of change of the current movement speed from the movement speed at the time of the latest synchronization reaches a second threshold value, and the amount of change of the current movement direction from the movement direction at the time of the latest synchronization reaches a third threshold value.
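As an illustration of these trigger conditions, the following minimal sketch shows how such a sender-side check might be written. The names (MoveState, should_send_packet) and the threshold values are assumptions for illustration, not taken from this application.

```python
# Minimal sketch of the sender-side trigger check described above.
# All names and threshold values are illustrative assumptions.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MoveState:
    mode: str                     # current movement mode
    position: Tuple[float, ...]   # logical position (x, y, z)
    speed: float                  # current movement speed
    direction: Tuple[float, ...]  # current movement direction (unit vector)

POS_THRESHOLD = 0.5    # first threshold: change in logical position
SPEED_THRESHOLD = 1.0  # second threshold: change in movement speed
DIR_THRESHOLD = 0.2    # third threshold: change in movement direction

def _dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def should_send_packet(current: MoveState, last_synced: MoveState) -> bool:
    """True if any trigger condition for an extra movement data packet is met."""
    if current.mode != last_synced.mode:  # movement mode switched
        return True
    if _dist(current.position, last_synced.position) >= POS_THRESHOLD:
        return True
    if abs(current.speed - last_synced.speed) >= SPEED_THRESHOLD:
        return True
    return _dist(current.direction, last_synced.direction) >= DIR_THRESHOLD
```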
Step 204: determine synchronization calculation parameters corresponding to the movement mode.
Corresponding synchronization calculation parameters are configured in advance for the different movement modes, and the second client stores the parameters corresponding to each movement mode. The synchronization calculation parameters of each movement mode include at least one of the following:
whether the current movement mode is configured with delay compensation;
Delay compensation accounts for the fact that, by the time the second client receives a movement data packet, it lags the moment the server forwarded the packet by half a Round-Trip Time (RTT). This half-RTT therefore needs to be compensated for in order to estimate the latest logical position of the first virtual character, and the compensated logical position is used as the logical position to catch up to. The RTT value can be estimated according to a time synchronization protocol.
whether the current movement mode is configured with pre-expression;
Pre-expression refers to displaying an action of the first virtual character in advance, without the server having sent a movement data packet or a confirmation instruction.
the interpolation algorithm of the current movement mode;
The interpolation algorithm interpolates from the current rendering position toward the logical position when the current rendering position of the first virtual character has not reached the logical position in the movement data packet.
the extrapolation algorithm of the current movement mode.
The extrapolation algorithm predicts forward from the current rendering position along the current movement direction when the current rendering position of the first virtual character has reached the logical position in the latest movement data packet and no subsequent movement data packet has been received.
The client determines the synchronization calculation parameters corresponding to the movement mode carried in the received movement data packet.
For example, when the movement data packet received by the second client from the first client carries the running mode, spline interpolation is determined as the interpolation algorithm of the running mode, and linear interpolation is determined as the extrapolation algorithm and the orientation interpolation algorithm of the running mode.
Step 206: perform synchronization calculation on the logical position based on the synchronization calculation parameters to obtain the rendering position of the first virtual character.
The rendering position is the position at which the first virtual character is displayed in the user interface. Illustratively, the calculation described in step 206 is iterated to determine the rendering position of the first virtual character in a given movement mode.
Illustratively, when the current rendering position of the first virtual character has not reached the logical position, the rendering position is moved continuously toward the logical position according to the synchronization calculation parameters determined for the movement mode of the first virtual character, yielding a first rendering position, a second rendering position, a third rendering position, and so on. These form the visible movement trajectory of the first virtual character on the user interface, and the first virtual character is displayed at each rendering position. Optionally, the current rendering position is the most recently calculated rendering position of the first virtual character.
Optionally, when the current rendering position of the first virtual character has not reached the logical position and the logical position is reachable, the subsequent rendering positions of the first virtual character are calculated according to the interpolation algorithm corresponding to the movement mode.
Referring exemplarily to FIG. 3, the second client displays a user interface 30 including a first virtual character 31 and a second virtual character 32, together with the logical position 33 and rendering position (arrow mark) 34 corresponding to a certain moment while the first virtual character 31 is running. When the rendering position 34 of the first virtual character has not reached the logical position 33, interpolation proceeds continuously from the current rendering position toward the logical position according to the spline interpolation algorithm corresponding to the running mode; this process loops continuously, yielding the position trajectory of the first virtual character displayed on the user interface 30 while running.
Referring to FIG. 4, a world coordinate system is established with the ground in the virtual environment as the reference plane and the current rendering position of the first virtual character as the origin; the current rendering position of the first virtual character is represented by a rectangular icon 42a, and the logical position of the first virtual character in the movement data packet by a circular icon 41a. When the current rendering position 42a has not reached the logical position 41a, interpolating from the current rendering position 42a toward the logical position 41a according to the spline interpolation corresponding to the running mode yields a first rendering position 42b, a second rendering position 42c, and a third rendering position 42d, forming the visible movement trajectory of the first virtual character on the user interface. When the logical position 41b carried by the next movement data packet is received, a new round of interpolation toward the logical position 41b begins, yielding a fourth rendering position 42e, a fifth rendering position 42f, and a sixth rendering position 42g, again forming a visible movement trajectory on the user interface; this process repeats each time a new movement data packet is received.
Illustratively, the interpolation algorithms of at least two movement modes differ, so that each movement mode can calculate smoother rendering positions with an interpolation algorithm adapted to it, presenting a more natural movement trajectory on the user interface.
Step 208: display the first virtual character on the user interface according to the rendering position.
Since the logical position is constantly updated through synchronization, the rendering position of the first virtual character is also constantly updated.
For example, as shown in FIG. 3, the current rendering position of the first virtual character 31 is represented by the arrow mark 34. Synchronization calculation is performed from the current rendering position 34 toward the logical position 33 carried in the movement data packet, and the resulting arrow mark 35 is the next rendering position. The client of the second virtual character displays the first virtual character on the user interface according to the next rendering position 35.
In summary, the method provided by this embodiment provides different synchronization calculation parameters for different movement modes and performs synchronization calculation on the different types of movement data packets accordingly, ensuring the smoothness of the rendering positions calculated in the various movement modes and reducing or avoiding stuttering.
FIG. 5 is a flowchart illustrating a virtual character movement synchronization method according to an exemplary embodiment of the present application. The method may be performed by the first terminal 110 or the second terminal 130 shown in FIG. 1, and includes the following steps:
Step 502: receive a movement data packet of a first virtual character, wherein the movement data packet comprises a movement mode and a logical position of the first virtual character, and the logical position indicates the position of the first virtual character in the virtual world.
See step 202; details are not repeated here.
Optionally, the movement data packet further carries at least one of: a synchronization timestamp, the current movement speed, the current movement direction, and the current facing direction. The current movement direction and the current facing direction are typically the same, but may differ.
Step 504: determine the synchronization calculation parameters corresponding to the movement mode, the synchronization calculation parameters including: a delay compensation parameter, a pre-expression parameter, an interpolation algorithm, and an extrapolation algorithm.
The client determines the synchronization calculation parameters corresponding to the movement mode carried in the received movement data packet.
Corresponding synchronization calculation parameters are configured in advance for the different movement modes, and the second client stores the parameters corresponding to each movement mode. The synchronization calculation parameters of each movement mode include at least one of the following:
the synchronization frequency;
The synchronization frequency is the minimum number of movement data packets sent by the first client per second in the current movement mode. For example, at least 1 movement data packet is sent per second in the standing mode, and at least 6 per second in the air mode.
whether the current movement mode is configured with delay compensation;
Delay compensation accounts for the fact that, by the time the second client receives a movement data packet, it lags the moment the server forwarded the packet by half an RTT. This half-RTT therefore needs to be compensated for in order to estimate the latest logical position of the first virtual character, and the compensated logical position is used as the logical position to catch up to. The RTT value can be estimated according to a time synchronization protocol.
whether the current movement mode is configured with pre-expression;
Pre-expression refers to displaying an action of the first virtual character in advance, without the server having sent a movement data packet or a confirmation instruction.
the interpolation algorithm of the current movement mode;
The interpolation algorithm interpolates from the current rendering position toward the logical position when the current rendering position of the first virtual character has not reached the logical position in the movement data packet.
Illustratively, when the movement mode is the standing mode, the interpolation algorithm corresponding to the standing mode is determined to be a linear interpolation algorithm; for the running mode, a spline interpolation algorithm; for the swimming mode, a spline interpolation algorithm; for the air mode, a cubic spline interpolation algorithm; for the art displacement mode, a custom curve algorithm; for the free displacement mode, a linear interpolation algorithm; and for the path movement mode, a linear interpolation algorithm.
the extrapolation algorithm of the current movement mode;
For example, when the current rendering position of the first virtual character has reached the logical position in the latest movement data packet and no subsequent movement data packet has been received, the current rendering position is extrapolated forward along the current movement direction. When the movement mode is the standing mode, the extrapolation algorithm corresponding to the standing mode is determined to be a linear interpolation algorithm; for the running mode, a spline interpolation algorithm; for the swimming mode, a spline interpolation algorithm; for the air mode, a linear interpolation algorithm; for the art displacement mode, a custom curve algorithm; for the free displacement mode, a linear interpolation algorithm; and for the path movement mode, a linear interpolation algorithm. (A sketch of this mode-to-algorithm mapping is given after the parameter list below.)
the orientation interpolation algorithm of the current movement mode;
Orientation interpolation refers to the interpolation algorithm applied to the facing direction of the first virtual character.
physical detection.
The second client performs ray detection or collision detection in the virtual world to correct erroneous rendering positions while predicting rendering positions based on the extrapolation algorithm.
Referring exemplarily to FIG. 14, when the movement data packet received by the second client from the first client carries the running mode, it is determined that the running mode is configured with the pre-expression and delay compensation characteristics, that the interpolation algorithm corresponding to the running mode is a cubic spline interpolation algorithm, and that the extrapolation algorithm is a linear interpolation algorithm.
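The mode-to-algorithm mapping listed in step 504 can be expressed as a lookup table. The sketch below mirrors the interpolation/extrapolation pairing given above; the dictionary layout and key names are assumptions.

```python
# Sketch of the per-mode algorithm configuration listed in step 504.
# The mapping follows the text; the data layout itself is an assumption.
SYNC_CONFIG = {
    # mode:              (interpolation,   extrapolation)
    "standing":          ("linear",        "linear"),
    "running":           ("spline",        "spline"),
    "swimming":          ("spline",        "spline"),
    "air":               ("cubic_spline",  "linear"),
    "art_displacement":  ("custom_curve",  "custom_curve"),
    "free_displacement": ("linear",        "linear"),
    "path":              ("linear",        "linear"),
}

def algorithms_for(mode: str):
    """Look up the (interpolation, extrapolation) pair for a movement mode."""
    return SYNC_CONFIG[mode]

interp, extrap = algorithms_for("running")  # ("spline", "spline")
```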
Step 506: in the case where the movement mode is configured with delay compensation, compensate the logical position based on the RTT to obtain a compensated logical position.
Delay compensation accounts for the fact that the second client receives the movement data packet half an RTT after the server forwards it, so the second client needs to compensate for this lag of half an RTT, and the delay-compensated logical position is used as the logical position of the first virtual character for interpolation. For example, the distance corresponding to the half-RTT lag can be estimated by multiplying the current movement speed v of the first virtual character by half the RTT. That is, the compensated logical position equals the sum of the logical position and the compensation distance, where the compensation distance equals the product of the movement speed v and half the RTT, and the compensation direction is the current movement direction of the first virtual character.
Referring schematically to FIG. 6, with the ground in the virtual environment as the reference plane, the current rendering position of the first virtual character is represented by a rectangular icon 42a and the logical position received by the second client by a circular icon 41c. The length of the compensation path 43 is estimated as the product of the current movement speed of the first virtual character and the half-RTT time t, the compensation direction is estimated as the current movement direction of the first virtual character, and the circular icon 41d represents the logical position after delay compensation. The current rendering position 42a is then interpolated toward the compensated logical position 41d, yielding a first rendering position 42b and a second rendering position 42c, which form the movement path displayed on the user interface.
Illustratively, in the case where the movement mode is not configured with delay compensation, step 508 is performed without delay compensation.
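A minimal sketch of this compensation, assuming the packet carries the movement direction as a unit vector and the RTT estimate comes from the time synchronization protocol; all names and sample values are illustrative.

```python
# Sketch of delay compensation: push the received logical position forward
# by speed * RTT/2 along the current movement direction. Names are assumptions.
def compensate(logical_pos, move_dir, speed, rtt_seconds):
    """Estimate the sender's latest logical position half an RTT ahead."""
    half_rtt = rtt_seconds / 2.0
    return [p + d * speed * half_rtt for p, d in zip(logical_pos, move_dir)]

compensated = compensate(logical_pos=[10.0, 0.0, 5.0],
                         move_dir=[1.0, 0.0, 0.0],  # unit direction vector
                         speed=6.0,                 # current movement speed
                         rtt_seconds=0.12)          # estimated RTT
# -> [10.36, 0.0, 5.0]; interpolation then chases this compensated position.
```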
Step 508: determine whether the current rendering position of the first virtual character has reached the (compensated) logical position.
The rendering position is the position at which the first virtual character is displayed in the user interface. The rendering position is determined based on the logical position, but the two may differ and need not coincide. Illustratively, when the current rendering position of the first virtual character has not reached the logical position, the rendering position is moved continuously toward the logical position according to the synchronization calculation parameters determined for the movement mode of the first virtual character, yielding a first rendering position, a second rendering position, a third rendering position, and so on, which form the visible movement trajectory of the first virtual character on the user interface.
The client obtains the logical position (or the compensated logical position) of the first virtual character from the received movement data packet and judges whether the current rendering position of the first virtual character has reached it. If the current rendering position has not reached the logical position, step 510 is performed; if it has reached the logical position, step 514 is performed.
Step 510: calculate the next rendering position of the first virtual character based on the interpolation algorithm corresponding to the movement mode, the interpolation algorithm being an algorithm that interpolates from the current rendering position toward the logical position.
When the current rendering position of the first virtual character has not reached the logical position and the logical position is reachable, the next rendering position (and subsequent rendering positions) of the first virtual character is calculated according to the interpolation algorithm corresponding to the movement mode. For example, the interpolation algorithm computes the next rendering position from the known current rendering position and logical position, producing a series of values that successively approach the logical position and hence a movement trajectory along which the current rendering position continuously approaches the logical position.
As shown in FIG. 7, the user interface 30 includes a first virtual character 31, a second virtual character 32, and the logical position 33 and current rendering position 34 corresponding to a moment when the first virtual character jumps. At this moment the first virtual character is in the air mode; when the current rendering position 34 has not reached the logical position 33, interpolation proceeds continuously from the current rendering position toward the logical position according to the cubic spline interpolation corresponding to the air mode, and this process loops continuously, yielding the jumping position trajectory of the first virtual character displayed on the user interface 30.
The interpolation method adopted by the air mode is a cubic spline interpolation algorithm. A cubic spline interpolation curve is a cubic polynomial with continuous second derivatives, so both the position change and the speed change along the curve are smooth. Referring to FIG. 8, with the ground in the virtual environment as the reference plane and assuming a start time of 0, the position, velocity, and acceleration at the logical position 41 are p0, v0, and a, respectively; the position and velocity at the rendering position 42 are p1 and v1, respectively. Given the interpolation time T, the position and velocity at the fusion position 44, which can be estimated from the logical position and the rendering position, are p2 and v2, respectively. The curve between the rendering position 42 and the fusion position 44 is the cubic spline interpolation curve p(t) = a·t³ + b·t² + c·t + d, which is also the movement trajectory of the first virtual character displayed on the user interface. Referring schematically to FIG. 9, a system of equations is set up from the position and velocity at the rendering position 42 and the position and velocity at the fusion position 44, and the coefficients a, b, c, and d of the cubic spline interpolation curve are solved using a matrix, yielding the cubic spline interpolation curve equation.
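The matrix solve of FIG. 9 amounts to fitting a cubic from the two boundary positions and velocities. A sketch, applied per coordinate axis and using NumPy; the function name and sample values are assumptions:

```python
# Sketch of the FIG. 9 matrix solve: fit p(t) = a*t^3 + b*t^2 + c*t + d on [0, T]
# from the boundary conditions p(0), p'(0), p(T), p'(T). Applied per axis.
import numpy as np

def cubic_coeffs(p_start, v_start, p_end, v_end, T):
    """Return (a, b, c, d) for the cubic matching both positions and velocities."""
    A = np.array([
        [0.0,       0.0,    0.0, 1.0],  # p(0)  = d
        [0.0,       0.0,    1.0, 0.0],  # p'(0) = c
        [T**3,      T**2,   T,   1.0],  # p(T)
        [3 * T**2,  2 * T,  1.0, 0.0],  # p'(T)
    ])
    return np.linalg.solve(A, np.array([p_start, v_start, p_end, v_end]))

a, b, c, d = cubic_coeffs(p_start=0.0, v_start=2.0, p_end=3.0, v_end=0.0, T=1.0)
p = lambda t: a * t**3 + b * t**2 + c * t + d  # smooth rendering trajectory
```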
Step 512: display the first virtual character on the user interface according to the next rendering position.
The second client displays the first virtual character on the user interface based on the next rendering position calculated according to the interpolation algorithm. The next rendering position is the position at which the first virtual character is displayed in the user interface, calculated according to the interpolation algorithm corresponding to the movement mode when the rendering position of the first virtual character has not reached the logical position.
Since the logical position is continuously updated, the next rendering position, obtained by interpolating from the current rendering position toward the logical position, is also continuously updated.
In the case where the current rendering position of the first virtual character has reached the logical position, step 514 is performed.
Step 514: judge whether the movement mode of the first virtual character is configured with pre-expression.
The pre-expression parameter indicates that the second client can predict movement without having received the latest movement data packet from the server. In a movement mode with the pre-expression characteristic, when the current rendering position of the first virtual character has reached the logical position and no new movement data packet has been received, the second client may predictively continue to move the character forward according to the current movement mode.
When the current rendering position of the first virtual character has reached the logical position, it is necessary to judge whether the movement mode of the first virtual character is configured with pre-expression. If the movement mode of the first virtual character requires pre-expression, step 516 is performed; if not, step 522 is performed.
Step 516: judge whether the current rendering position has reached the prediction limit position.
The prediction limit position is the limit up to which forward movement can continue, according to the current movement mode and movement speed, while pre-expression is performed. For example, the prediction limit position may be defined in terms of time or distance. When the movement mode of the first virtual character is configured with the pre-expression characteristic and the current rendering position has reached the prediction limit position, prediction must stop.
When the current rendering position of the first virtual character has reached the logical position and the movement mode of the first virtual character requires pre-expression, it is judged whether the current rendering position has reached the prediction limit position. If the current rendering position has not reached the prediction limit position, step 518 is performed; if it has, step 522 is performed.
Step 518: calculate the next rendering position of the first virtual character based on the extrapolation algorithm corresponding to the movement mode, the extrapolation algorithm being an algorithm that interpolates the rendering position forward along the current movement direction.
When the current rendering position of the first virtual character has reached the logical position and the next logical position has not been received, the next rendering position of the first virtual character is calculated according to the extrapolation algorithm corresponding to the movement mode. For example, the extrapolation algorithm interpolates the current rendering position forward along the current movement direction according to the current movement mode and the current movement speed.
Referring schematically to FIG. 10, the current movement mode of the first virtual character is the running mode. A world coordinate system is established with the ground in the virtual environment as the reference plane and the current rendering position of the first virtual character as the origin; the current rendering position of the first virtual character is represented by a rectangular icon 42d and the logical position by a circular icon 41d. When the current rendering position 42d has reached the logical position 41d but no new logical position has been received, linear interpolation corresponding to the running mode is used as the extrapolation algorithm: the current rendering position 42d is continuously interpolated forward along the current movement direction according to the current running mode and movement speed, yielding a first rendering position 42e, a second rendering position 42f, and a third rendering position 42g, and the predicted movement trajectory is displayed on the user interface.
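A minimal sketch of this forward prediction, assuming a time-based prediction limit; all names and the limit value are illustrative.

```python
# Sketch of linear extrapolation past the last logical position: keep moving
# along the current direction at the current speed until the prediction limit.
PREDICTION_LIMIT = 0.5  # seconds of forward prediction allowed (assumed value)

def extrapolate(render_pos, move_dir, speed, dt, predicted_time):
    """Predict the next rendering position; None once the prediction limit is hit."""
    if predicted_time + dt > PREDICTION_LIMIT:  # prediction limit reached
        return None                             # step 522: stop playing the packet
    return [r + d * speed * dt for r, d in zip(render_pos, move_dir)]
```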
In some embodiments, when the rendering position of the first virtual character has reached the logical position, the next logical position has not been received, the current movement mode is configured with pre-expression, and the current rendering position has not reached the prediction limit position, the next rendering position of the first virtual character is calculated according to the extrapolation algorithm corresponding to the current movement mode. For example, the extrapolation algorithm interpolates the rendering position along the current movement direction according to the current movement mode and movement speed.
In some embodiments, when an obstacle exists before the next rendering position predicted by the extrapolation algorithm and sweep detection finds that the next rendering position is unreachable, the next rendering position is corrected: the point at which the pre-correction path meets the obstacle is determined as the corrected rendering position.
Referring schematically to FIG. 11, the current movement mode of the first virtual character is the walking mode. With the ground in the virtual environment as the reference plane, sweep detection 45a started from the rendering position 41 finds that an obstacle wall 46 exists before the next rendering position 41h predicted by the extrapolation algorithm; since sweep detection 45b finds that the position 41m is reachable, the position 41m is taken as the corrected next rendering position.
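The correction can be sketched as a clamp against a physics sweep query. `sweep_cast` below stands in for an engine-provided sweep/raycast and is purely hypothetical:

```python
# Sketch of sweep-detection correction: before committing a predicted position,
# sweep from the current position toward it and clamp at the first blocking hit.
def correct_position(current_pos, predicted_pos, sweep_cast):
    """Clamp the predicted rendering position at the first obstacle hit, if any."""
    hit_point = sweep_cast(current_pos, predicted_pos)  # hit point or None
    return hit_point if hit_point is not None else predicted_pos

# Example with a stub sweep that blocks everything beyond x = 2.0:
stub_sweep = lambda a, b: [2.0, b[1], b[2]] if b[0] > 2.0 else None
corrected = correct_position([0.0, 0.0, 0.0], [5.0, 0.0, 0.0], stub_sweep)
# -> [2.0, 0.0, 0.0]
```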
Step 520: display the first virtual character on the user interface according to the next rendering position.
The second client calculates the next rendering position of the first virtual character according to the extrapolation algorithm and displays the first virtual character on the user interface based on that position.
Step 522: end the playing of the current movement data packet.
In some embodiments, after the second client displays the first virtual character on the user interface according to the next rendering position, the playing of the current movement data packet ends.
In some embodiments, the playing of the current movement data packet ends when the rendering position of the first virtual character has reached the logical position, the next logical position has not been received, the movement mode is configured with pre-expression, and the rendering position has reached the prediction limit position.
In some embodiments, the playing of the current movement data packet ends when the rendering position of the first virtual character has reached the logical position, the next logical position has not been received, and the movement mode is not configured with pre-expression.
When the next movement data packet is received, or an unplayed movement data packet is buffered, the process repeats from step 502.
In summary, the method provided by this embodiment provides different synchronization calculation parameters for different movement modes and performs synchronization calculation on the different types of movement data packets accordingly, ensuring the smoothness of the rendering positions calculated in the various movement modes and reducing or avoiding stuttering.
In the method provided by this embodiment, whether the rendering position of the first virtual character has reached the logical position is judged, and the position of the first virtual character is interpolated according to the interpolation and extrapolation algorithms corresponding to the movement mode. Selecting different interpolation methods for the different conditions of different movement modes improves the accuracy of virtual character movement synchronization.
In the method provided by this embodiment, it is further judged whether the movement mode is configured with delay compensation; the logical position is compensated according to the RTT, and the delay-compensated position is used as the logical position of the first virtual character for interpolation. This avoids the error that the RTT would otherwise introduce into the logical position, and the first virtual character displayed by the first client and by the second client stays more tightly synchronized.
In the method provided by this embodiment, whether the movement mode is configured with pre-expression is judged, so that when the rendering position of the first virtual character has reached the logical position, the next logical position has not been received, the movement mode is configured with pre-expression, and the rendering position has not reached the prediction limit position, the next rendering position of the first virtual character is calculated according to the extrapolation algorithm corresponding to the current movement mode. This avoids the jerky, discontinuous movement that would result from stopping rendering when network jitter delays the next logical position.
The above method is explained below with reference to FIG. 12 and FIG. 13. A play queue for buffering movement data packets is provided in the second client; FIG. 12 shows the process of storing received movement data packets into the play queue, and FIG. 13 shows the process of removing movement data packets from the play queue for playing.
Referring to fig. 14, for different movement modes, the synchronization calculation parameters may be configured, and the synchronization calculation parameters include: at least one of synchronization frequency, pre-representation, interpolation value, extrapolation value, orientation interpolation, delay compensation, physical detection, and service side verification.
The synchronization frequency refers to the number of synchronization packets sent by the first client per second. The base synchronization frequency of the corresponding movement mode is determined according to the movement mode carried by the received synchronization packet. Illustratively, when the first virtual character is in the walking mode, the base synchronization frequency is 2 packets per second. When the first virtual character moves in a straight line at a constant speed, synchronization packets are sent at the base frequency, that is, two packets per second; the synchronization frequency is increased when the first virtual character turns or changes speed while running.
The synchronization frequency is related to the movement speed of the movement mode and the intensity of its position changes. Illustratively, the standing mode, the art displacement mode, and the path movement mode have small, stable movement speeds, and the corresponding synchronization frequency is 1 packet per second; the walking/running mode, the swimming mode, and the free displacement mode have small speed changes, and the corresponding synchronization frequency is 2 packets per second; the air mode, which covers jumping and falling, has rapidly changing speeds and an arc-shaped movement path, and the corresponding synchronization frequency is 6 packets per second.
The first client sends synchronization packets to the second client according to the base synchronization frequency. Optionally, the client is also triggered to send a synchronization packet when certain metrics of the movement mode accumulate to a threshold. For example, in the running mode, when the accumulated speed change and direction change of the first virtual character reach a threshold, the first client is triggered to send a mobile data packet at that moment, as in the sketch below.
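As a purely illustrative, non-limiting Python sketch of base-frequency selection and threshold-triggered extra packets: the mode names, frequencies, and threshold values below are assumptions chosen to mirror the examples above, not values fixed by this application.

```python
# Illustrative base synchronization frequencies (packets per second) per
# movement mode, following the examples above.
BASE_SYNC_HZ = {
    "standing": 1, "art_displacement": 1, "path_move": 1,
    "walk_run": 2, "swim": 2, "free_displacement": 2,
    "air": 6,  # jumping/falling: rapid speed change, arc-shaped path
}

class SyncSender:
    """Accumulates speed/direction changes and triggers extra packets."""

    def __init__(self, speed_threshold=1.5, turn_threshold=30.0):
        self.speed_delta = 0.0   # accumulated speed change
        self.turn_delta = 0.0    # accumulated direction change, in degrees
        self.speed_threshold = speed_threshold
        self.turn_threshold = turn_threshold

    def base_interval(self, mode):
        # Seconds between packets at the base synchronization frequency.
        return 1.0 / BASE_SYNC_HZ[mode]

    def should_send_extra(self, dspeed, dturn):
        # Accumulate per-frame changes; once either total passes its
        # threshold, trigger an out-of-schedule mobile data packet.
        self.speed_delta += abs(dspeed)
        self.turn_delta += abs(dturn)
        if (self.speed_delta >= self.speed_threshold
                or self.turn_delta >= self.turn_threshold):
            self.speed_delta = self.turn_delta = 0.0
            return True
        return False
```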
Physical detection on the second client means that, after a mobile data packet is received, the logical position is corrected through ray detection and collision detection in the physical world. Illustratively, the path movement mode is a synchronization mode in which the server sends the movement path of a monster virtual character to the client. Because the server and the client both move the character along this path, the synchronization frequency for the monster virtual character can be reduced. However, the virtual worlds of the server and the client differ in precision and the terrain at the rendering position may differ, so the second client needs to correct the logical position by snapping it to the ground.
In some embodiments, the first virtual character is in the air mode; if an obstacle lies before the next logical position predicted by the extrapolation algorithm, sweep detection determines that the next logical position is unreachable and the logical position is corrected: the position at the obstacle determined from the pre-correction next logical position is taken as the corrected logical position, and calculation of the next rendering position of the first virtual character by the extrapolation algorithm then continues, as in the sketch below.
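As an illustration only, this correction can be sketched as follows; `sweep_test` stands in for a physics-engine sweep query and is an assumed callback, not an API defined by this application.

```python
def correct_logical_position(current, target, sweep_test):
    """Clamp an unreachable logical position against the physical world.

    sweep_test(current, target) is assumed to return the first blocking
    hit point between the two positions, or None if the path is clear.
    """
    hit = sweep_test(current, target)
    # If the sweep hits an obstacle, the position at the obstacle becomes
    # the corrected logical position; otherwise the target is reachable.
    return hit if hit is not None else target
```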
Server-side verification checks whether the position change of the server-side virtual character is valid, and includes at least one of movement reachability detection, speed detection, frequency detection, and detection against a custom curve. Illustratively, when the first virtual character is in the free displacement mode, which includes at least one of climbing a ladder, climbing, and object interaction, the server verifies only the speed and whether the current virtual environment allows free displacement.
Fig. 12 is a flowchart illustrating a method for synchronizing movement of a virtual character according to an exemplary embodiment of the present application. The method may be performed by the terminal 120 or the terminal 140 shown in fig. 1 and comprises the following steps.
Illustratively, the second client comprises a receiving module for receiving mobile data packets and a playing module for playing them. The receiving module performs steps 1202 to 1224, and the playing module performs steps 1224 to 1254.
Step 1202: judging whether a new mobile data packet is received or not;
The receiving module judges whether a new mobile data packet from the first client has been received. If a new mobile data packet is received, step 1204 is executed; if not, step 1224 is executed.
The mobile data packet carries: a movement mode and a logical position. Optionally, the mobile data packet further carries: at least one of a synchronization timestamp, the current movement speed, the current movement direction, the current facing direction, a correction-packet flag, and a critical-packet flag. The current movement direction and the current facing direction are typically the same but may differ.
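For illustration, the packet fields listed above can be modeled as follows; the field names and types are hypothetical, chosen only to mirror the enumeration above.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MovePacket:
    mode: str                          # movement mode, e.g. "walk_run" or "air"
    logical_pos: Vec3                  # logical position in the virtual world
    timestamp: float = 0.0             # synchronization timestamp
    speed: float = 0.0                 # current movement speed
    move_dir: Vec3 = (0.0, 0.0, 0.0)   # current movement direction
    facing: Vec3 = (0.0, 0.0, 0.0)     # current facing direction
    is_correction: bool = False        # correction packet flag
    is_critical: bool = False          # critical packet flag
```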
Step 1204: judging whether the mobile data packet is a correction data packet;
The correction data packet is a data packet sent by the server to the receiving module when the server finds that the logical position reported by the first client is erroneous; it is used to pull the logical position back to a corrected position. The receiving module judges whether the received mobile data packet is a correction data packet. If it is, step 1206 is executed; if not, step 1210 is executed.
Step 1206: emptying a play queue of the mobile data packet;
Once the receiving module recognizes that the newly received mobile data packet is a correction data packet, it clears the mobile data packets currently stored in the play queue and plays the current correction data packet first.
Step 1208: pulling the logical position back to the correction position according to the correction data packet;
The receiving module pulls the current logical position back to the correction position carried by the correction data packet, which avoids rendering-position errors on the user interface.
Step 1210: judging whether the movement mode of the mobile data packet is configured with delay compensation;
Delay compensation means that the mobile data packet received by the receiving module lags the server by half a Round-Trip Time (RTT), so the receiving module needs to compensate for this half-RTT lag and use the compensated position as the logical position of the first virtual character for interpolation. For example, the compensation displacement corresponding to the half-RTT lag can be estimated by multiplying the current movement speed v of the first virtual character by the half-RTT time t.
When the receiving module determines that the received mobile data packet is not a correction data packet, it judges whether the movement mode carried by the mobile data packet is configured with delay compensation. If the movement mode is configured with delay compensation, step 1212 is executed; if not, step 1214 is executed.
Step 1212: correcting the logic position by using delay compensation to obtain a compensated logic position;
When the receiving module determines that the movement mode carried by the mobile data packet is configured with delay compensation, it compensates for the half-RTT lag and uses the delay-compensated position as the logical position of the first virtual character for interpolation, as in the sketch below. For example, the compensation displacement corresponding to the half-RTT lag can be estimated by multiplying the current movement speed v of the first virtual character by the half-RTT time t.
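A minimal sketch of this compensation, assuming straight-line motion over the half-RTT interval (a simplifying assumption, not a limitation of the method):

```python
def compensate_half_rtt(logical_pos, velocity, rtt):
    """Advance the reported logical position by v * (RTT / 2) to offset
    the half round-trip lag between the server and the receiving module."""
    t = rtt / 2.0
    return tuple(p + v * t for p, v in zip(logical_pos, velocity))

# Example: moving at 3 m/s along x with an 80 ms RTT shifts the logical
# position forward by 0.12 m.
pos = compensate_half_rtt((10.0, 0.0, 5.0), (3.0, 0.0, 0.0), 0.08)
```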
Step 1214: judging whether the movement mode of the mobile data packet is the same as the current movement mode;
The receiving module determines whether the movement mode carried by the mobile data packet is the same as its current movement mode (i.e., the movement mode of the previous mobile data packet). If they are the same, step 1218 is executed; if not, step 1216 is executed.
Step 1216: switching to the new movement mode;
When the movement mode carried by the mobile data packet differs from the current movement mode, the receiving module switches to the movement mode carried by the mobile data packet.
Step 1218: judging whether the mobile data packet is a critical packet;
The logical positions carried by critical packets are logical positions that the virtual character must pass through while moving in the virtual environment.
The receiving module judges whether the received mobile data packet is a critical packet. If it is, step 1222 is executed; if not, step 1220 is executed.
Step 1220: merging consecutive non-critical packets;
When the mobile data packet received by the receiving module is not a critical packet, consecutive non-critical packets in the play queue are merged. Illustratively, merging consecutive non-critical packets means replacing the last non-critical packet in the play queue with the newly received non-critical packet, as in the sketch below.
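A compact sketch of this enqueue-and-merge rule, reusing the hypothetical MovePacket fields from the earlier sketch:

```python
def enqueue(play_queue, packet):
    """Append critical packets; merge consecutive non-critical packets by
    replacing a trailing non-critical packet with the newly received one."""
    if packet.is_critical or not play_queue or play_queue[-1].is_critical:
        play_queue.append(packet)
    else:
        play_queue[-1] = packet  # keep only the newest non-critical packet
```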
Step 1222: adding the mobile data packet into a play queue;
A mobile data packet that is a critical packet is added to the play queue for buffering. The playing module plays the mobile data packets in the order in which they appear in the play queue.
Step 1224: playing the mobile data packet;
The playing module plays the mobile data packets in the play queue. As shown in fig. 13, the specific playing process is as follows:
Step 1226: judging whether a mobile data packet is currently being played;
The playing module judges whether there is a mobile data packet currently being played. If there is, step 1236 is executed; if not, step 1228 is executed.
Step 1228: judging whether a mobile data packet exists in the play queue;
When no mobile data packet is currently being played, the playing module judges whether there is a mobile data packet in the play queue. If there is, step 1230 is executed; if not, step 1252 is executed.
Step 1230: taking out a mobile data packet from the play queue as the current mobile data packet;
When no mobile data packet is currently being played and the play queue is not empty, one mobile data packet is taken out of the play queue as the current mobile data packet.
Step 1232: judging whether the movement mode of the mobile data packet is the same as the current movement mode;
The playing module determines whether the movement mode carried by the mobile data packet is the same as its current movement mode (i.e., the movement mode of the previous mobile data packet). If they are the same, step 1236 is executed; if not, step 1234 is executed.
Step 1234: switching the movement mode;
When the movement mode carried by the mobile data packet differs from the current movement mode, the playing module switches to the movement mode carried by the mobile data packet.
Step 1236: playing the current mobile data packet;
the playing module starts to play the current mobile data packet.
Step 1238: judging whether the current rendering position has reached the logical position;
The rendering position is the position at which the first virtual character is displayed in the user interface. It is determined based on the logical position but is not necessarily identical to it. Illustratively, while the current rendering position of the first virtual character has not reached the logical position, the current rendering position keeps approaching the logical position according to the synchronization calculation parameters determined for the movement mode of the first virtual character, yielding a first rendering position, a second rendering position, a third rendering position, and so on, which together form the movement trajectory of the first virtual character visible on the user interface.
The second client obtains the logical position (or the compensated logical position) of the first virtual character from the received mobile data packet and judges whether the current rendering position of the first virtual character has reached it. If the current rendering position has not reached the logical position, step 1240 is executed; if it has, step 1242 is executed.
Step 1240: calculating a moving speed and a moving direction by using an interpolation algorithm, and calculating a next rendering position;
When the current rendering position of the first virtual character has not reached the logical position and the logical position is reachable, the next rendering position (or subsequent rendering positions) of the first virtual character is calculated according to the interpolation algorithm corresponding to the movement mode. For example, the interpolation algorithm interpolates from the known current rendering position toward the known logical position to obtain the next rendering position, movement speed, and movement direction; repeating this yields a series of positions that approach the logical position, i.e., a movement trajectory along which the current rendering position continuously approaches the logical position.
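For concreteness, one linear-interpolation step of this kind might look as follows; this is a simplified sketch, and spline or custom-curve variants would differ only in how the intermediate points are generated.

```python
def interpolate_step(render_pos, logical_pos, speed, dt):
    """Move the rendering position toward the logical position by one
    frame's worth of distance (speed * dt); a linear-interpolation sketch."""
    delta = [l - r for r, l in zip(render_pos, logical_pos)]
    dist = sum(d * d for d in delta) ** 0.5
    step = speed * dt
    if dist <= step:
        return tuple(logical_pos)   # the logical position is reached
    k = step / dist
    return tuple(r + d * k for r, d in zip(render_pos, delta))
```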
Step 1242: judging whether the moving mode is configured with pre-expression or not;
Pre-expression is a characteristic that allows the second client to continue predicting movement even though it has not received the latest mobile data packet from the server. For a movement mode configured with pre-expression, when the current rendering position of the first virtual character has reached the logical position and no new mobile data packet has been received, the second client may predictively continue the movement forward according to the current movement mode.
The second client obtains the logical position of the first virtual character from the received mobile data packet and, when the current rendering position of the first virtual character has reached the logical position, judges whether the movement mode of the first virtual character is configured with pre-expression. If it is, step 1244 is executed; if not, step 1248 is executed.
Step 1244: judging whether the current logic position reaches a prediction limit or not;
The prediction limit is the limit up to which forward movement may continue according to the current movement mode and movement speed once the rendering position of the first virtual character has reached the logical position and no new mobile data packet has been received. For example, the prediction limit may be defined in time or in distance. When the movement mode of the first virtual character has the pre-expression characteristic but the current position exceeds the prediction limit position, the predicted forward movement must stop.
When the rendering position of the first virtual character has reached the logical position and the movement mode of the first virtual character is configured with pre-expression, it is judged whether the rendering position has reached the prediction limit position. If it has not, step 1246 is executed; if it has, step 1248 is executed.
Step 1246: calculating the moving speed and the moving direction by using an extrapolation algorithm, and calculating the next rendering position;
When the current rendering position of the first virtual character has reached the logical position and the next logical position has not been received, the next rendering position of the first virtual character is calculated according to the extrapolation algorithm corresponding to the movement mode. For example, the extrapolation algorithm extrapolates the current rendering position forward along the current movement direction according to the current movement mode and the current movement speed.
Referring schematically to fig. 10, the current movement mode of the first virtual character is the walking mode; a world coordinate system is established with the ground in the virtual environment as the reference plane and the current rendering position of the first virtual character as the origin. The current rendering position of the first virtual character is represented by a rectangular icon 42d, and its logical position by a circular icon 41d. When the current rendering position 42d has reached the logical position 41d but no new logical position has been received, the linear interpolation corresponding to the walking mode is used as the extrapolation algorithm: the current rendering position 42d is continuously extrapolated forward along the current movement direction according to the current walking mode and the current movement speed, giving a first rendering position 42e, a second rendering position 42f, and a third rendering position 42g, and the predicted movement trajectory is displayed on the user interface.
In some embodiments, when the rendering position of the first virtual character has reached the logical position, the next logical position has not been received, the current movement mode is configured with pre-expression, and the current rendering position has not reached the prediction limit position, the next rendering position of the first virtual character is calculated according to the extrapolation algorithm corresponding to the current movement mode, as sketched below. For example, the extrapolation algorithm extrapolates the rendering position along the current movement direction according to the current movement mode and movement speed.
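A sketch of this prediction step with the prediction limit expressed as a distance budget (a time-based limit works analogously); `move_dir` is assumed to be a unit vector.

```python
def extrapolate_step(render_pos, move_dir, speed, dt, predicted, limit):
    """Extrapolate the rendering position along the current direction,
    stopping once the accumulated predicted distance reaches the limit.
    Returns (next_pos, new_predicted, limit_reached)."""
    remaining = max(limit - predicted, 0.0)
    if remaining == 0.0:
        return render_pos, predicted, True   # prediction limit reached
    step = min(speed * dt, remaining)
    next_pos = tuple(r + d * step for r, d in zip(render_pos, move_dir))
    return next_pos, predicted + step, step == remaining
```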
In some embodiments, when an obstacle exists before the next rendering position predicted by the extrapolation algorithm and sweep detection determines that the next rendering position is unreachable, the next rendering position is corrected: the position determined at the obstacle from the pre-correction next rendering position is taken as the corrected rendering position.
Step 1248: marking that the playing of the current mobile data packet is finished;
In some embodiments, when the rendering position of the first virtual character has reached the logical position, the next logical position has not been received, the movement mode is configured with pre-expression, and the rendering position has reached the prediction limit position, the current mobile data packet is marked as played.
In some embodiments, when the rendering position of the first virtual character has reached the logical position, the next logical position has not been received, and the movement mode is not configured with pre-expression, the current mobile data packet is marked as played.
Step 1250: calculating the position of the current frame according to the moving speed and the moving direction;
The rendering position of the first virtual character in the user interface is obtained from the movement speed and movement direction given by the interpolation algorithm or the extrapolation algorithm.
Step 1252: stopping the movement of the first virtual character;
When no mobile data packet is currently being played and there is no mobile data packet in the play queue, the second client stops the movement of the first virtual character at its logical position.
Step 1254: the current frame ends.
After the mobile data packet of the current frame finishes playing, reception and playing of mobile data packets for a new frame begin, and steps 1202 to 1254 are repeated in a loop.
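Tying steps 1238 to 1250 together, a condensed per-frame driver might look as follows; it reuses the hypothetical interpolate_step and extrapolate_step helpers sketched above, and the state fields are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PlayState:
    render_pos: tuple
    logical_pos: tuple
    move_dir: tuple
    speed: float
    pre_expression: bool    # whether the mode is configured with pre-expression
    limit: float            # prediction limit, as a distance budget
    predicted: float = 0.0  # predicted distance consumed so far
    packet_done: bool = False

def tick(state: PlayState, dt: float) -> None:
    if state.render_pos != state.logical_pos:
        # Step 1240: interpolate toward the logical position.
        state.render_pos = interpolate_step(
            state.render_pos, state.logical_pos, state.speed, dt)
    elif state.pre_expression and state.predicted < state.limit:
        # Step 1246: pre-expression extrapolation within the prediction limit.
        state.render_pos, state.predicted, hit_limit = extrapolate_step(
            state.render_pos, state.move_dir, state.speed, dt,
            state.predicted, state.limit)
        if hit_limit:
            state.packet_done = True   # step 1248
    else:
        state.packet_done = True       # step 1248: mark packet as played
```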
In summary, the method provided in this embodiment provides different synchronous calculation parameters for different movement modes, and performs synchronous calculation on different types of mobile data packets according to those parameters, so as to ensure the smoothness of the rendering positions calculated in different movement modes and to reduce or avoid stuttering.
In the method provided by this embodiment, it is further determined whether the movement mode carried by the received mobile data packet is the same as the current movement mode, and the mode is switched to the new movement mode when they differ. This avoids the errors caused by pairing a mobile data packet with the parameters of a different movement mode and improves the accuracy of virtual character movement synchronization.
Fig. 15 is a schematic structural diagram illustrating a mobile synchronization apparatus for a virtual character according to an exemplary embodiment of the present application. The apparatus may be implemented as all or part of a computer device in software, hardware, or a combination of both, the apparatus 1500 comprising:
a receiving module 1510, configured to receive a mobile data packet of a first avatar, wherein the mobile data packet includes a mobile mode and a logical position of the first avatar, and the logical position is used to indicate a position of the first avatar in a virtual world;
a determining module 1520 configured to determine a synchronization calculation parameter corresponding to the movement pattern;
the playing module 1530 is configured to perform synchronous calculation on the logic position based on the synchronous calculation parameter, so as to obtain a rendering position of the first virtual character;
a display module 1540, configured to display the first virtual character on a user interface according to the rendering position.
In an optional design of this embodiment, the playing module 1530 is further configured to calculate a next rendering position of the first avatar based on an interpolation algorithm corresponding to the movement pattern if the current rendering position of the first avatar does not reach the logical position; wherein the interpolation algorithm is an algorithm that interpolates the current rendering position to the logical position.
In an optional design of this embodiment, the determining module 1520 is further configured to determine that, if the movement mode is a standing mode, the interpolation algorithm corresponding to the standing mode is the linear interpolation algorithm;
the determining module 1520, further configured to determine that the interpolation algorithm corresponding to the running mode is a spline interpolation algorithm if the movement mode is the running mode;
the determining module 1520, further configured to determine that the interpolation algorithm corresponding to the swimming mode is the spline interpolation algorithm if the movement mode is the swimming mode;
the determining module 1520, further configured to determine that, if the movement pattern is an air pattern, the interpolation algorithm corresponding to the air pattern is a cubic spline interpolation algorithm;
the determining module 1520, further configured to determine that an interpolation algorithm corresponding to the art displacement mode is a custom curve algorithm if the moving mode is the art displacement mode;
the determining module 1520, further configured to determine that an interpolation algorithm corresponding to the free displacement mode is the linear interpolation algorithm if the movement mode is the free displacement mode;
the determining module 1520 is further configured to determine that the interpolation algorithm corresponding to the path moving mode is the linear interpolation algorithm if the moving mode is the path moving mode.
In an optional design of this embodiment, the playing module 1530 is further configured to predict a next rendering position of the first avatar based on an extrapolation algorithm corresponding to the movement pattern if the current rendering position of the first avatar reaches the logical position and a next logical position is not received; wherein the extrapolation algorithm is an algorithm that interpolates the current rendering position along a current moving direction.
In an optional design of this embodiment, the determining module 1520 is further configured to determine that the extrapolation algorithm corresponding to the standing mode is the linear interpolation algorithm if the moving mode is the standing mode;
the determining module 1520, further configured to determine that the extrapolation algorithm corresponding to the running mode is a spline interpolation algorithm if the movement mode is the running mode;
the determining module 1520, further configured to determine that an extrapolation algorithm corresponding to the swimming mode is the spline interpolation algorithm if the movement mode is the swimming mode;
the determining module 1520, further configured to determine that the extrapolation algorithm corresponding to the air mode is a linear interpolation algorithm if the movement mode is the air mode;
the determining module 1520, further configured to determine that an extrapolation algorithm corresponding to the art displacement mode is a custom curve algorithm if the moving mode is the art displacement mode;
the determining module 1520, further configured to determine that an extrapolation algorithm corresponding to the free displacement mode is the linear interpolation algorithm if the movement mode is the free displacement mode;
the determining module 1520, further configured to determine that the extrapolation algorithm corresponding to the path moving mode is the linear interpolation algorithm if the moving mode is the path moving mode.
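Purely as an illustrative, non-limiting aid, the per-mode algorithm selection enumerated above can be represented as lookup tables; the string keys and algorithm names below are placeholders, not identifiers defined by this application.

```python
INTERPOLATION_BY_MODE = {
    "standing": "linear", "running": "spline", "swimming": "spline",
    "air": "cubic_spline", "art_displacement": "custom_curve",
    "free_displacement": "linear", "path_move": "linear",
}

EXTRAPOLATION_BY_MODE = {
    "standing": "linear", "running": "spline", "swimming": "spline",
    "air": "linear",  # extrapolation in the air mode falls back to linear
    "art_displacement": "custom_curve",
    "free_displacement": "linear", "path_move": "linear",
}
```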
In an optional design of this embodiment, the playing module 1530 is further configured to calculate a next rendering position of the first avatar based on an extrapolation algorithm corresponding to the movement mode when the current rendering position of the first avatar reaches the logical position, a next logical position is not received, the movement mode is configured with pre-expression, and the rendering position does not reach a predicted limit position.
In an optional design of this embodiment, the determining module 1520 is further configured to determine a corrected next rendering position if an obstacle exists before the next rendering position predicted by the extrapolation algorithm; the corrected next rendering position is a position determined at the obstacle based on the pre-correction next rendering position.
In an optional design of this embodiment, the determining module 1520 is further configured to, when the delay compensation is configured in the moving mode, compensate the logical position based on the round trip time RTT, so as to obtain a compensated logical position.
Fig. 16 shows a block diagram of a terminal 1600 provided in an exemplary embodiment of the present application. The terminal 1600 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, motion video Experts compression standard Audio Layer 3), an MP4 player (Moving Picture Experts Group Audio Layer IV, motion video Experts compression standard Audio Layer 4), a notebook computer, or a desktop computer. Terminal 1600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
Generally, the terminal 1600 includes: a processor 1601, and a memory 1602.
Processor 1601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1601 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in a wake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be non-transitory. The memory 1602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1602 is used to store at least one instruction for execution by the processor 1601 to implement the virtual character movement synchronization method provided by the method embodiments in the present application.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. The processor 1601, the memory 1602 and the peripheral interface 1603 may be connected via buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1604, a touch screen display 1605, a camera 1606, audio circuitry 1607, and a power supply 1608.
Peripheral interface 1603 can be used to connect at least one peripheral associated with an I/O (Input/Output) to processor 1601 and memory 1602. In some embodiments, processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral interface 1603 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The Radio Frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1604 converts the electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1604 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 1605 is for displaying a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to capture touch signals on or over the surface of the display screen 1605. The touch signal may be input to the processor 1601 as a control signal for processing. At this point, the display 1605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1605 can be one, providing the front panel of the terminal 1600; in other embodiments, the display screens 1605 can be at least two, respectively disposed on different surfaces of the terminal 1600 or in a folded design; in still other embodiments, display 1605 can be a flexible display disposed on a curved surface or a folded surface of terminal 1600. Even further, the display 1605 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1605 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1606 is used to capture images or video. Optionally, camera assembly 1606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1606 can also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1601 for processing or inputting the electric signals to the radio frequency circuit 1604 for voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 1600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The loudspeaker can be a traditional film loudspeaker and can also be a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1607 may also include a headphone jack.
A power supply 1608 is used to provide power to the various components in terminal 1600. The power source 1608 may be alternating current, direct current, disposable or rechargeable. When the power supply 1608 comprises a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1600 also includes one or more sensors 1609. The one or more sensors 1609 include, but are not limited to: acceleration sensor 1610, gyro sensor 1611, pressure sensor 1612, optical sensor 1613, and proximity sensor 1614.
Acceleration sensor 1610 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with terminal 1600. For example, the acceleration sensor 1610 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1601 may control the touch display 1605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1610. The acceleration sensor 1610 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1611 may detect a body direction and a rotation angle of the terminal 1600, and the gyro sensor 1611 may cooperate with the acceleration sensor 1610 to acquire a 3D motion of the user with respect to the terminal 1600. Based on the data collected by the gyro sensor 1611, the processor 1601 may implement the following functions: motion sensing (e.g., changing the UI according to a user's tilting operation), image stabilization at the time of shooting, game control, and inertial navigation.
Pressure sensors 1612 may be disposed on side frames of terminal 1600 and/or underlying touch display 1605. When the pressure sensor 1612 is arranged on the side frame of the terminal 1600, a holding signal of a user to the terminal 1600 can be detected, and the processor 1601 is used for identifying the left hand and the right hand or performing quick operation according to the holding signal acquired by the pressure sensor 1612. When the pressure sensor 1612 is disposed at a lower layer of the touch display screen 1605, the processor 1601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 1613 is used to collect ambient light intensity. In one embodiment, the processor 1601 may control the display brightness of the touch display screen 1605 based on the ambient light intensity collected by the optical sensor 1613. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the touch display 1605 is turned down. In another embodiment, the processor 1601 may also dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1613.
A proximity sensor 1614, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1600. The proximity sensor 1614 is used to collect the distance between the user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1614 detects that the distance between the user and the front surface of the terminal 1600 gradually decreases, the processor 1601 controls the touch display 1605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1614 detects that the distance gradually increases, the processor 1601 controls the touch display 1605 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
According to another aspect of the present application, there is also provided a computer storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement the virtual character movement synchronization method described above.
According to another aspect of the present application, there is also provided a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the virtual character movement synchronization method described above.
It should be understood that reference herein to "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended only to illustrate the alternative embodiments of the present application, and should not be construed as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for synchronizing movement of a virtual character, the method comprising:
receiving a mobile data packet of a first virtual character, wherein the mobile data packet comprises a mobile mode and a logical position of the first virtual character, and the logical position is used for indicating the position of the first virtual character in a virtual world;
determining synchronous calculation parameters corresponding to the moving mode;
performing synchronous calculation on the logic position based on the synchronous calculation parameters to obtain a rendering position of the first virtual role;
and displaying the first virtual character on a user interface according to the rendering position.
2. The method of claim 1, wherein the synchronously computing the logical position based on the synchronous computing parameter to obtain a next rendering position of the first avatar comprises:
calculating a next rendering position of the first virtual character based on an interpolation algorithm corresponding to the movement pattern in a case where the current rendering position of the first virtual character does not reach the logical position;
wherein the interpolation algorithm is an algorithm that interpolates the current rendering position to the logical position.
3. The method of claim 2, wherein determining the synchronization calculation parameter corresponding to the movement pattern comprises at least one of:
determining that an interpolation algorithm corresponding to the standing mode is the linear interpolation algorithm if the movement mode is the standing mode;
determining that an interpolation algorithm corresponding to the running mode is a spline interpolation algorithm in a case where the movement mode is the running mode;
determining that an interpolation algorithm corresponding to the swimming mode is the spline interpolation algorithm if the movement mode is the swimming mode;
determining that an interpolation algorithm corresponding to the air mode is a cubic spline interpolation algorithm in a case where the movement mode is the air mode;
determining that an interpolation algorithm corresponding to the art displacement mode is a custom curve algorithm if the movement mode is the art displacement mode;
determining that an interpolation algorithm corresponding to the free displacement mode is the linear interpolation algorithm in a case where the movement mode is a free displacement mode;
determining that an interpolation algorithm corresponding to the path movement mode is the linear interpolation algorithm in a case where the movement mode is a path movement mode.
4. The method of any of claims 1 to 3, wherein said synchronously computing the logical position based on the synchronous computing parameter to obtain a next rendering position of the first avatar comprises:
predicting a next rendering position of the first avatar based on an extrapolation algorithm corresponding to the movement pattern if a current rendering position of the first avatar reaches the logical position and a next logical position is not received;
wherein the extrapolation algorithm is an algorithm for interpolating the current rendering position along the current moving direction.
5. The method of claim 4, wherein determining the synchronization calculation parameter corresponding to the movement pattern comprises at least one of:
determining that an extrapolation algorithm corresponding to the standing mode is the linear interpolation algorithm, in a case that the moving mode is the standing mode;
determining that an extrapolation algorithm corresponding to the running mode is a spline interpolation algorithm when the movement mode is the running mode;
determining that an extrapolation algorithm corresponding to the swimming mode is the spline interpolation algorithm if the movement mode is the swimming mode;
determining that an extrapolation algorithm corresponding to the air mode is a linear interpolation algorithm in a case that the movement mode is the air mode;
determining that an extrapolation algorithm corresponding to the art displacement mode is a custom curve algorithm when the movement mode is the art displacement mode;
determining that an extrapolation algorithm corresponding to the free displacement mode is the linear interpolation algorithm, in a case where the movement mode is the free displacement mode;
determining that an extrapolation algorithm corresponding to the path movement mode is the linear interpolation algorithm when the movement mode is a path movement mode.
6. The method of claim 4, further comprising:
calculating a next rendering position of the first avatar based on an extrapolation algorithm corresponding to the movement mode when a current rendering position of the first avatar reaches the logical position, a next logical position is not received, the movement mode is configured with pre-expression, and the rendering position does not reach a prediction limit position.
7. The method of claim 4, further comprising:
determining a next rendering position after correction under the condition that an obstacle exists before the next rendering position predicted by the extrapolation algorithm; the modified next rendering position is a position determined at the obstacle based on the next rendering position before correction.
8. The method of any of claims 1 to 3, further comprising:
and under the condition that the mobile mode is configured with delay compensation, compensating the logic position based on the Round Trip Time (RTT) to obtain a compensated logic position.
9. An apparatus for synchronizing movement of a virtual character, the apparatus comprising:
the mobile data package comprises a mobile mode and a logical position of the first virtual character, and the logical position is used for indicating the position of the first virtual character in a virtual world;
a determining module for determining a synchronization calculation parameter corresponding to the movement pattern;
the calculation module is used for synchronously calculating the logic position based on the synchronous calculation parameters to obtain the rendering position of the first virtual role;
and the display module is used for displaying the first virtual role on a user interface according to the rendering position.
10. The apparatus of claim 9, wherein the computing module is configured to compute a next rendering position of the first avatar based on an interpolation algorithm corresponding to the movement pattern if the current rendering position of the first avatar does not reach the logical position;
wherein the interpolation algorithm is an algorithm that interpolates the rendering positions to the logical positions.
11. The apparatus of claim 10, wherein the determining module is configured to perform at least one of the following steps:
determining that an interpolation algorithm corresponding to the standing mode is the linear interpolation algorithm in a case where the moving mode is the standing mode;
determining that an interpolation algorithm corresponding to the running mode is a spline interpolation algorithm in a case where the movement mode is the running mode;
determining that an interpolation algorithm corresponding to the swimming mode is the spline interpolation algorithm if the movement mode is the swimming mode;
determining that an interpolation algorithm corresponding to the air mode is a cubic spline interpolation algorithm in a case where the movement mode is the air mode;
determining that an interpolation algorithm corresponding to the art displacement mode is a custom curve algorithm if the movement mode is the art displacement mode;
determining that an interpolation algorithm corresponding to the free displacement mode is the linear interpolation algorithm in a case where the movement mode is a free displacement mode;
determining that an interpolation algorithm corresponding to the path movement pattern is the linear interpolation algorithm if the movement pattern is a path movement pattern.
12. The apparatus of claim 9, wherein the computing module is configured to predict a next rendering position of the first avatar based on an extrapolation algorithm corresponding to the movement pattern if the current rendering position of the first avatar reaches the logical position and no next logical position is received;
wherein the extrapolation algorithm is an algorithm that interpolates the rendering position along a current moving direction.
13. The apparatus of claim 12, wherein the determining module is configured to perform at least one of the following steps:
determining that an extrapolation algorithm corresponding to the standing mode is the linear interpolation algorithm, in a case that the moving mode is the standing mode;
determining that an extrapolation algorithm corresponding to the running mode is a spline interpolation algorithm when the movement mode is the running mode;
determining that an extrapolation algorithm corresponding to the swimming mode is the spline interpolation algorithm if the movement mode is the swimming mode;
determining that an extrapolation algorithm corresponding to the air mode is a linear interpolation algorithm in a case that the movement mode is the air mode;
determining that an extrapolation algorithm corresponding to the art displacement mode is a custom curve algorithm when the movement mode is the art displacement mode;
determining that an extrapolation algorithm corresponding to the free displacement mode is the linear interpolation algorithm, in a case where the movement mode is the free displacement mode;
determining that an extrapolation algorithm corresponding to the path movement mode is the linear interpolation algorithm when the movement mode is a path movement mode.
14. A computer device comprising a processor and a memory, the memory having at least one program stored therein; the at least one program is loaded and executed by the processor to implement the method for mobile synchronization of virtual characters according to any one of claims 1 to 8.
15. A computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions that is loaded and executed by a processor to implement the method for mobile synchronization of a virtual character according to any one of claims 1 to 8.
CN202111617082.0A 2021-09-07 2021-12-27 Virtual role movement synchronization method, device, equipment and storage medium Pending CN115770389A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021110444973 2021-09-07
CN202111044497 2021-09-07

Publications (1)

Publication Number Publication Date
CN115770389A true CN115770389A (en) 2023-03-10

Family

ID=85388310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111617082.0A Pending CN115770389A (en) 2021-09-07 2021-12-27 Virtual role movement synchronization method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115770389A (en)

Similar Documents

Publication Publication Date Title
CN109876438B (en) User interface display method, device, equipment and storage medium
US9149720B2 (en) Computer-readable storage medium having information processing program stored therein, information processing apparatus, information processing system, and information processing method
CN110022363B (en) Method, device and equipment for correcting motion state of virtual object and storage medium
CN107982918B (en) Game game result display method and device and terminal
WO2022134980A1 (en) Control method and apparatus for virtual object, terminal, and storage medium
CN112843679B (en) Skill release method, device, equipment and medium for virtual object
CN112245921B (en) Virtual object control method, device, equipment and storage medium
CN111744185B (en) Virtual object control method, device, computer equipment and storage medium
CN112569600B (en) Path information sending method in virtual scene, computer device and storage medium
CN111589141B (en) Virtual environment picture display method, device, equipment and medium
CN112221142B (en) Control method and device of virtual prop, computer equipment and storage medium
CN109806583B (en) User interface display method, device, equipment and system
CN112915541B (en) Jumping point searching method, device, equipment and storage medium
CN112274936B (en) Method, device, equipment and storage medium for supplementing sub-props of virtual props
CN112699208B (en) Map way finding method, device, equipment and medium
CN114404972A (en) Method, device and equipment for displaying visual field picture
CN112755517B (en) Virtual object control method, device, terminal and storage medium
CN112604302B (en) Interaction method, device, equipment and storage medium of virtual object in virtual environment
CN111589147B (en) User interface display method, device, equipment and storage medium
CN115770389A (en) Virtual role movement synchronization method, device, equipment and storage medium
CN114288659A (en) Interaction method, device, equipment, medium and program product based on virtual object
JP6114848B1 (en) Synchronization server and synchronization method
CN115193035A (en) Game display control method and device, computer equipment and storage medium
CN112675538A (en) Data synchronization method, device, equipment and medium
CN113318443B (en) Reconnaissance method, device, equipment and medium based on virtual environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40084147

Country of ref document: HK