CN114768260A - Data processing method and device for virtual character in game and electronic equipment

Info

Publication number
CN114768260A
CN114768260A (application CN202210480270.1A)
Authority
CN
China
Prior art keywords
game
client
virtual character
scene
data
Prior art date
Legal status
Pending
Application number
CN202210480270.1A
Other languages
Chinese (zh)
Inventor
姜潇策
余九和
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210480270.1A
Publication of CN114768260A

Classifications

    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/52: Controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/77: Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
    • A63F 13/79: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F 2300/53: Features of games using an electronically generated display having two or more dimensions, characterized by details of game servers; details of basic data processing

Abstract

The disclosure provides a data processing method and device for a virtual character in a game, and an electronic device, relates to the technical field of games, and alleviates the technical problem that a large data processing amount on the client places heavy operating pressure on the client. The method includes the following steps: determining, based on a control instruction of a first client for a first virtual character, game state data generated while the first virtual character executes the control instruction; packaging the game state data into a state frame; and sending the state frame to the server, so that the server forwards the state frame to a second client and a game picture corresponding to the game state data is displayed on the second client.

Description

Data processing method and device for virtual character in game and electronic equipment
Technical Field
The present disclosure relates to the field of game technologies, and in particular, to a method and an apparatus for processing data of a virtual character in a game, and an electronic device.
Background
Currently, frame synchronization is a common synchronization method in games. It is mostly used for games with few virtual characters and short rounds but high requirements for fairness and consistency, such as fighting games, First-Person Shooters (FPS), and Multiplayer Online Battle Arena (MOBA) games. Its biggest difference from traditional state synchronization is that the combat logic is placed on the client. For example, the client needs to perform a series of calculations for skills, attacks, movement, collisions, Artificial Intelligence (AI), and so on according to the received instructions.
However, in the conventional data processing method for virtual characters, the client performs all of these calculations, so its data processing amount is large and it is under heavy operating pressure.
Disclosure of Invention
The purpose of the present disclosure is to provide a method and an apparatus for processing data of a virtual character in a game, and an electronic device, so as to alleviate the technical problem that the operating pressure of a client is large due to a large data processing amount of the client.
In a first aspect, an embodiment of the present disclosure provides a data processing method for a virtual character in a game, where a game scene of the game includes a first virtual character controlled by a first client, and the method includes:
determining game state data generated in the process of executing the control instruction by the first virtual character based on the control instruction of the first client to the first virtual character;
packaging the game state data into a state frame;
and sending the state frame to a server so that the server sends the state frame to a second client to display a game picture corresponding to the game state data on the second client.
In one possible implementation, the game state data includes any one or more of:
the state data of the first virtual character, the state data of other virtual characters except the first virtual character in the game scene and the state data of other virtual objects.
In one possible implementation, the server corresponds to a plurality of the game scenes; further comprising:
determining a target game scene where the first virtual character is located;
sending the data of the target game scene to the server, so that the server sends the state frame to the second client; and the second virtual character controlled by the second client is the other virtual characters except the first virtual character in the target game scene.
In one possible implementation, the game scene includes a plurality of scene areas, and each scene area corresponds to an area identifier;
the step of sending the data of the target game scene to the server side so that the server side sends the state frame to the second client side includes:
and sending the data of the target game scene and a target area identifier corresponding to a target scene area to the server, so that the server determines the target game scene where the first virtual character is located according to the data of the target game scene, determines the target scene area according to the target area identifier, and sends a target state frame corresponding to the target scene area to the second client.
In one possible implementation, the status frame is sent to the server in the form of a data packet; before the step of sending the status frame to the server, the method further includes:
determining data that changes in the status frame;
and inputting the changed data in the status frame into the packet body of the data packet.
In one possible implementation, the status frame is sent to the server in the form of a data packet; each data packet comprises a state frame with a specified frame number;
the game picture is displayed on the second client at a specified frame rate; wherein the specified frame rate is the specified number of frames per second.
In one possible implementation, the game state data includes any one or more of:
movement state data, collision state data, position state data in the game.
In a second aspect, an embodiment of the present disclosure provides a data processing method for a virtual character in a game, which is applied to a server, where a game scene of the game includes a first virtual character controlled by a first client; the method comprises the following steps:
acquiring a state frame sent by the first client; the state frame is obtained by encapsulating game state data by the first client, wherein the game state data is generated in the process that the first virtual character executes a control instruction of the first client;
determining a second virtual character in the same game scene as the first virtual character;
and sending the state frame to a second client corresponding to the second virtual character so as to display a game picture corresponding to the game state data on the second client.
In one possible implementation, the step of determining a second virtual character in the same game scene as the first virtual character includes:
acquiring scene identifiers sent by each client; wherein the scene identification is used to represent at least one of the game scenes in which the client-controlled virtual character is located in the game;
and determining a second virtual character in the same game scene as the first virtual character based on the game scene in which the plurality of virtual characters are positioned.
In one possible implementation, the server corresponds to a plurality of the game scenes, and each virtual character corresponds to a plurality of game scene allocation dimensions;
the step of determining a second virtual character in the same game scene as the first virtual character comprises:
determining a game scene with an optimal distribution dimension for each of the plurality of game scene distribution dimensions by traversing all the game scenes;
and determining a game scene to which the first virtual character is allocated and a second virtual character which is allocated to the same game scene as the first virtual character from the game scenes with the optimal game scene allocation dimension based on the priority of each game scene allocation dimension.
In one possible implementation, the game scene allocation dimension includes any one or more of:
game habits, character levels, and character liveness of the virtual characters.
In one possible implementation, the number of the second virtual characters is multiple, and each second virtual character corresponds to a game character dividing dimension;
the step of sending the status frame to a second client corresponding to the second virtual character to display a game screen corresponding to the game status data on the second client includes:
determining a target second virtual character which is classified as the same as the first virtual character from the plurality of second virtual characters through the game character dividing dimension;
and sending the state frame to a target second client corresponding to the target second virtual character so as to display a game picture corresponding to the game state data on the target second client.
In one possible implementation, the game character partitioning dimension includes any one or more of:
game habits, character levels, and character liveness of the virtual characters.
In a third aspect, an embodiment of the present disclosure provides a data processing method for virtual characters in a game, where the same game scene of the game includes a first virtual character controlled by a first client and a second virtual character controlled by a second client, and a graphical user interface is provided by the second client; the method comprises the following steps:
receiving a state frame sent by a server; the state frame is obtained by encapsulating game state data by the first client, wherein the game state data is generated in the process that the first virtual character executes a control instruction of the first client;
and displaying a game picture corresponding to the game state data in the graphical user interface.
In one possible implementation, the step of displaying a game screen corresponding to the game state data in the graphical user interface includes:
and displaying a game picture corresponding to the game state data in the graphical user interface in an overlaying manner.
In a fourth aspect, a data processing apparatus for a virtual character in a game is provided, where a game scene of the game includes a first virtual character controlled by a first client, the apparatus includes:
the determining module is used for determining game state data generated in the process of executing the control instruction by the first virtual character based on the control instruction of the first client to the first virtual character;
the encapsulation module is used for encapsulating the game state data into a state frame;
and the sending module is used for sending the state frame to a server so that the server sends the state frame to a second client, and a game picture corresponding to the game state data is displayed on the second client.
In a fifth aspect, a data processing apparatus for virtual characters in a game is provided, and is applied to a server, where a game scene of the game includes a first virtual character controlled by a first client; the device comprises:
the acquisition module is used for acquiring the status frame sent by the first client; the state frame is obtained by encapsulating game state data by the first client, wherein the game state data is generated in the process that the first virtual character executes a control instruction of the first client;
the determining module is used for determining a second virtual character which is in the same game scene with the first virtual character;
and the sending module is used for sending the state frame to a second client corresponding to the second virtual character so as to display a game picture corresponding to the game state data on the second client.
A sixth aspect provides a data processing apparatus for virtual characters in a game, where the same game scene of the game includes a first virtual character controlled by a first client and a second virtual character controlled by a second client, and a graphical user interface is provided by the second client; the device comprises:
the receiving module is used for receiving the state frame sent by the server; the state frame is obtained by encapsulating game state data by the first client, wherein the game state data is generated in the process that the first virtual character executes a control instruction of the first client;
and the display module is used for displaying a game picture corresponding to the game state data in the graphical user interface.
In a seventh aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor implements the steps of the methods in the first aspect, the second aspect, and the third aspect when executing the computer program.
In an eighth aspect, the disclosed embodiments further provide a computer-readable storage medium storing computer executable instructions, which, when invoked and executed by a processor, cause the processor to execute the method of the first, second and third aspects.
The embodiment of the disclosure brings the following beneficial effects:
according to the data processing method, the data processing device and the electronic equipment for the virtual character in the game, firstly, based on a control instruction of a first client to the first virtual character, game state data generated in the process of executing the control instruction by the first virtual character is determined, then the state data is packaged into a state frame, and the state frame is further sent to a server, so that the server sends the state frame to a second client, and a game picture corresponding to the game state data is displayed on the second client. In the scheme, the first client can package the game state data related to the first virtual character controlled by the first client into a state frame, then the state frame is sent to the server, and further the state frame is synchronized to the second clients through the server, each second client can obtain the state frame from the server and directly display the corresponding game picture based on the state frame, the state frame solves the problem that the frame synchronization server does not have any game data, the calculation is still handed to the clients for processing, meanwhile, each client only processes and uploads the game state data related to the virtual character controlled by the server, so that the data of other clients can be directly drawn when receiving the data of other clients, each client does not need to perform a series of game state data calculations related to the virtual character controlled by other clients, and the data processing amount and the operating pressure of the clients are effectively reduced, the technical problem that the operation pressure of the client is large due to the fact that the data processing amount of the client is large is solved.
Drawings
In order to more clearly illustrate the detailed description of the present disclosure or the technical solutions in the prior art, the drawings used in the detailed description or the prior-art descriptions are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present disclosure, and that other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic view of an application scenario provided by an embodiment of the present disclosure;
fig. 2 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure;
fig. 3 is a schematic view of a usage scenario of a touch terminal according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a data processing method for a virtual character in a game according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating a frame synchronization effect according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a game scene division provided in the embodiment of the present disclosure;
fig. 7 is a schematic flowchart of another method for processing data of a virtual character in a game according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a microservice design provided by an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of another method for processing data of a virtual character in a game according to an embodiment of the present disclosure;
fig. 10 is a structural diagram of a data processing device of a virtual character in a game according to an embodiment of the present disclosure;
FIG. 11 is a structural diagram of a data processing apparatus of a virtual character in another game according to an embodiment of the present disclosure;
fig. 12 is a structural view of a data processing apparatus of a virtual character in another game according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the embodiments of the present disclosure will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present disclosure, but not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The terms "including" and "having," and any variations thereof, as referred to in the embodiments of the present disclosure, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Current frame-synchronization games generally provide only a simple social system, with few special designs beyond world chat, team chat, and friend chat. Window-based chat has many limitations: with too many people the window scrolls too fast and topics of interest are hard to find, while with too few people it can feel deserted. Suppose there is a social square in which each person is a controllable character who can walk around and chat freely. Chat messages can be shown in the chat window and also as bubbles above a character's head, so that others can see them at a glance. Through the overhead bubbles, a player can quickly find the topics or people of interest and then chat. Such a social square can, to some extent, be implemented with a frame-synchronization scheme. For example, the client uploads operation commands at a fixed frequency of 15 Frames Per Second (FPS), and the server broadcasts the data to the characters in the whole square at the same 15 FPS. After receiving its own instructions or those of others, the client calculates and draws the result. This scheme is exactly the same as combat, with social functions such as chat added on top. However, it consumes a great deal of client computing power: the number of characters can originally be kept to 1-5, but once it exceeds 10 the client stutters severely or even crashes. A large number of players online at the same time cannot be supported using frame synchronization alone.
Frame synchronization is a technique that distributes the computational pressure to the clients, and this pressure multiplies as the number of players increases. The upper limit of the pressure a client can bear is unknown, so the maximum number of players that one game scene can hold cannot be quantified. Even if it could be raised somewhat, the value would still be small and could not meet the social needs of a large number of players. Because this technique caches a large amount of data locally on the client, the server does not know the position and state of the player characters, the state of the Non-Player Characters (NPC), or the map state. Therefore, implementing a social square by state synchronization would require the client and the server to implement two synchronization frameworks at the same time, which would not only require double the development manpower but also double the code size, which is unrealistic for a project. Meanwhile, no matter which technique is used, the problem of heavy traffic cannot be avoided as long as many players are online at the same time. A very mature Area of Interest (AOI) scheme exists under state synchronization; how to implement a comparable AOI under frame synchronization is also a problem.
Based on the above, the embodiments of the present disclosure provide a data processing method and device for a virtual character in a game, and an electronic device. AOI technology is also introduced: an AOI design based on a map Tag makes it possible to obtain each client's area of interest even though the server holds no game data, so that state frames are distributed in groups, unnecessary traffic and computation are reduced, and the technical problem that a large data processing amount places heavy operating pressure on the client is alleviated.
In one embodiment of the present disclosure, the data processing method for the virtual character in the game may be executed on a local terminal device or a server. When the data processing method for the virtual character in the game runs on the server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an optional embodiment, various cloud applications may run under the cloud interaction system, for example a cloud game. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the data processing method for the virtual character in the game are completed on a cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer, but the cloud game server that performs the information processing is in the cloud. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as game pictures, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game pictures.
In an optional implementation manner, taking a game as an example, the local terminal device stores a game program and is used for presenting the game screen. The local terminal device interacts with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed, and run on an electronic device. The local terminal device may provide the graphical user interface to the player in various ways; for example, it may be rendered and displayed on the display screen of the terminal, or provided to the player by holographic projection. By way of example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game screen, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In a possible implementation manner, an embodiment of the present disclosure provides a data processing method for a virtual character in a game, where a graphical user interface is provided through a terminal device, where the terminal device may be the aforementioned local terminal device, and may also be the aforementioned client device in a cloud interaction system.
For example, as shown in fig. 1, fig. 1 is a schematic view of an application scenario provided by an embodiment of the present disclosure. The application scenario may include an electronic terminal (e.g., a cell phone 102) and a server 101, and the electronic terminal may communicate with the server 101 through a wired network or a wireless network. The electronic terminal is used for running a virtual desktop, and can interact with the server 101 through the virtual desktop to control the virtual role in the server 101.
The electronic terminal of this embodiment is described by taking the mobile phone 102 as an example. The handset 102 includes Radio Frequency (RF) circuitry 210, memory 220, a touch screen 230, a processor 240, and the like. Those skilled in the art will appreciate that the handset configuration shown in fig. 2 is not limiting: it may include more or fewer components than shown, combine certain components, split certain components, or use a different arrangement of components. Those skilled in the art will also appreciate that the touch screen 230 is part of a User Interface (UI) and that the cell phone 102 may include fewer or different user interface components than illustrated.
The RF circuitry 210 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 220 may be used to store software programs and modules, and the processor 240 executes various functional applications and data processing of the cellular phone 102 by running the software programs and modules stored in the memory 220. The memory 220 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for at least one function, and the like, and the data storage area may store data created from the use of the handset 102, and the like. Further, the memory 220 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The touch screen 230 may be used to display a graphical user interface and receive user operations with respect to the graphical user interface. A particular touch screen 230 may include a display panel and a touch panel. The Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel may collect contact or non-contact operations of a user on or near the touch panel (e.g., operations of a user on or near the touch panel using any suitable object or accessory such as a finger 301, a stylus, etc., as shown in fig. 3), and generate preset operation instructions. In addition, the touch panel may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction and gesture of a user, detects signals brought by touch operation and transmits the signals to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into information that can be processed by the processor, sends the information to the processor 240, and receives and executes commands sent from the processor 240. In addition, the touch panel may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, a surface acoustic wave, and the like, and may also be implemented by any technology developed in the future. Further, the touch panel may overlay the display panel, a user may operate on or near the touch panel overlaid on the display panel according to a graphical user interface displayed by the display panel, the touch panel detects the operation thereon or near, and then the touch panel transmits the operation to the processor 240 to determine a user input, and the processor 240 provides a corresponding visual output on the display panel in response to the user input. In addition, the touch panel and the display panel can be realized as two independent components or can be integrated.
The processor 240 is the control center of the handset 102, connects various parts of the entire handset using various interfaces and lines, and performs various functions and processes data of the handset 102 by running or executing software programs and/or modules stored in the memory 220 and calling data stored in the memory 220, thereby performing overall monitoring of the handset.
Embodiments of the present disclosure are further described below with reference to the accompanying drawings.
Fig. 4 is a schematic flowchart of a data processing method for a virtual character in a game according to an embodiment of the present disclosure. The method can be applied to a first client (for example, the mobile phone 102 shown in fig. 2), and a game scene of the game includes a first virtual character controlled by the first client. As shown in fig. 4, the method includes:
step S410, determining game state data generated in the process of executing the control instruction by the first virtual character based on the control instruction of the first client to the first virtual character.
In practical applications, the first client may determine the game state data generated in the process of executing the control instruction by the first virtual character based on the control instruction for the first virtual character issued by the player through the first client.
As an example, the game state data may be state data of the first virtual character, for example, the player controls the first virtual character to move, jump, and the like in the game scene through the first client, and the first client may determine data of the distance, position, jump height, whether to collide with the terrain in the game scene, and the like of the movement as the game state data.
As another example, the game state data may be state data of other virtual characters caused during the execution of the control instruction by the first virtual character, for example, a player controls the first virtual character to attack the other virtual characters through the first client, which causes the life values of the other virtual characters to decrease, and the first client may determine these data as the game state data.
As another example, the game state data may be state data of other virtual objects caused during the execution of the control instruction by the first virtual character, for example, the player controls the first virtual character to attack a virtual building, fell a virtual tree, open a treasure box in a game scene, and the like through the first client, and the first client may determine the data as the game state data.
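To make the kinds of game state data above concrete, the following is a minimal Python sketch of how a client might represent them; the class and field names are assumptions made for illustration and do not appear in the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class GameStateData:
    """State produced while the first virtual character executes one control instruction."""
    # State of the first virtual character itself
    position: Tuple[float, float]                      # coordinates in the game scene
    jump_height: float = 0.0
    hit_terrain: bool = False
    # Effects on other virtual characters, e.g. life lost to an attack
    damage_to_others: Dict[str, int] = field(default_factory=dict)
    # Effects on other virtual objects, e.g. a felled tree or an opened treasure chest
    object_changes: Dict[str, str] = field(default_factory=dict)

state = GameStateData(position=(12.0, 3.0), jump_height=0.8,
                      damage_to_others={"char_7": 25},
                      object_changes={"chest_3": "opened"})
```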
In step S420, the game state data is packaged into a state frame.
For example, as shown in fig. 5, the first client may package state data such as the distance and position the first avatar has moved in the game scene, its jump height, and whether it has collided with the terrain in the game scene into a state frame. For example, if the first avatar moves 1 meter to the right at unit time 1 and jumps toward the front right at unit time 2, the first client may record these actions as game state data and encapsulate them into a state frame. The state frame differs from both frame synchronization and state synchronization: the server only forwards and does not calculate, while the client state information is distributed in a frame-synchronization manner.
Step S430, sending the status frame to the server, so that the server sends the status frame to the second client, and displays a game screen corresponding to the game status data on the second client.
For example, as shown in fig. 5, the first client sends the encapsulated status frame to the server, which forwards it to the second client, so that the second client draws directly based on the content of the status frame and displays the first avatar, that is, shows in its graphical user interface that the first avatar moves 1 meter to the right at unit time 1 and jumps toward the front right at unit time 2. Each client only calculates its own data, such as movement, jumping, and collision, and encapsulates the final calculation results into a state frame that is sent to the server.
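A minimal client-side sketch of steps S410 to S430 follows; the framing format, field names, and the use of JSON over a plain socket are assumptions made only to keep the example self-contained and are not part of the disclosed protocol.

```python
import json
import socket

def build_state_frame(tick: int, state: dict) -> dict:
    """Step S420: encapsulate the game state data computed for one tick into a state frame."""
    return {"tick": tick, **state}

def send_state_frame(sock: socket.socket, frame: dict) -> None:
    """Step S430: upload the state frame; the server only forwards it and performs no calculation."""
    payload = json.dumps(frame).encode("utf-8")
    # Length-prefixed framing so the receiver can split the byte stream back into packets.
    sock.sendall(len(payload).to_bytes(4, "big") + payload)

# The character moves 1 m to the right at unit time 1 and jumps toward the front right at unit time 2.
frames = [
    build_state_frame(1, {"pos": (1.0, 0.0), "jump": 0.0}),
    build_state_frame(2, {"pos": (1.5, 0.0), "jump": 0.8}),
]
```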
In the embodiment of the disclosure, the first client can encapsulate the game state data related to the first virtual character it controls into a state frame and send it to the server, which synchronizes the state frame to the second clients. Each second client obtains the state frame from the server and directly displays the corresponding game picture based on it. The state frame works around the fact that a frame-synchronization server holds no game data: the calculation is still left to the clients. Meanwhile, because each client only processes and uploads the game state data related to its own controlled virtual character, it can directly draw the data it receives from other clients, and no client needs to perform the series of game state calculations for virtual characters controlled by other clients. This effectively reduces the data processing amount and operating pressure of the clients and alleviates the technical problem that a large data processing amount places heavy operating pressure on the client.
The above steps are described in detail below.
In some embodiments, the game state data may be of multiple types, so the first client can flexibly package several kinds of game state data into a state frame; this improves the richness of the uploaded content while still effectively reducing the data processing amount and operating pressure of the client. Illustratively, the game state data includes any one or more of:
state data of the first virtual character, state data of other virtual characters except the first virtual character in the game scene, and state data of other virtual objects.
As an example, the game state data may be state data of the first virtual character, for example, the player controls the first virtual character to move, jump, etc. in the game scene through the first client, and the first client may determine data of the distance, position, jump height, etc. of the movement, and whether the first client collides with the terrain in the game scene as the game state data.
As another example, the game state data may be state data of other virtual characters in the game scene except the first virtual character, for example, the player controls the first virtual character to attack the other virtual characters through the first client, which results in a reduction in the life value of the other virtual characters, and the first client may determine these data as the game state data.
As another example, the game state data may be state data of other virtual objects, for example, a player controls a first virtual character to attack a virtual building, fell a virtual tree, open a treasure box in a game scene, and the like through a first client, and the first client may determine the data as the game state data.
Because the game state data can be of several types, the first client can flexibly package it into state frames, improving the richness of the uploaded content while effectively reducing the data processing amount and operating pressure of the client.
In some embodiments, the client only displays the virtual characters in the scene areas within a certain range and therefore only needs to obtain the status frames of those virtual characters from the server, which reduces the data processing amount of the client and relieves its operating pressure in a more flexible way. For example, a game scene is usually much larger than the client's screen, so the player's field of view is very limited; sending game data outside the player's field of view is a waste, and removing that part of the data can save a great deal of traffic. By having the server send the data of the target scene area where the first virtual character is located to the corresponding second clients in the same scene area, each second client receives the status frames of the first virtual character in the same area, while the clients corresponding to other scene areas do not receive them. This relieves both the data transmission pressure of the server and the working pressure of the other clients. As one example, the server corresponds to a plurality of game scenes, and the method further comprises:
step a), determining a target game scene where the first virtual character is located.
And b), sending the data of the target game scene to the server so that the server sends the state frame to the second client.
For the step b), the second virtual character controlled by the second client is the other virtual character except the first virtual character in the target game scene.
For example, as shown in fig. 6, a game scene may include a plurality of scene areas, such as the 9 different scene areas divided by dashed lines. The first client may first determine the target scene area where the first virtual character 601 is located in the game scene, namely the middle-left scene area, and then send the data of that scene area to the server, so that the server sends the status frame corresponding to that scene area to all clients of the target virtual characters in the middle-left scene area. It can be understood that a player can only see the game content in the same scene: the player controls the first virtual character to move in the middle-left scene area, so the player can only see the game content in the middle-left scene area, and the other players in that scene area likewise only see the game content there.
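The following sketch shows one way a client could map its character's position to the scene area it occupies, assuming the scene is split into a uniform 3 x 3 grid as in fig. 6; the function name and grid dimensions are illustrative assumptions.

```python
def scene_area_index(x: float, y: float,
                     scene_width: float, scene_height: float,
                     cols: int = 3, rows: int = 3) -> int:
    """Return the index of the grid cell (scene area) that contains the position (x, y)."""
    col = min(int(x / scene_width * cols), cols - 1)
    row = min(int(y / scene_height * rows), rows - 1)
    return row * cols + col

# The client uploads the data of the area its own character is standing in.
target_area = scene_area_index(x=10.0, y=40.0, scene_width=90.0, scene_height=90.0)
```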
The first client determines the target game scene where the first virtual character is located, and then the data of the target game scene is sent to the server, so that the server sends the state frame to the second client, and the server can be prevented from sending redundant and useless game data. By only uploading and drawing the game content in the current game scene, the client avoids invalid data transmission, reduces the data transmission pressure of the server and the working pressure of the client.
Based on step a) and step b), the server can determine which virtual characters are in the scene area corresponding to each client by means of area identifiers; for example, the scene areas can be precisely divided by attaching identifiers, so that the status frame is accurately sent to the corresponding second client. In this way, the second client receives the status frames of the first virtual character in the same area, while clients corresponding to other scene areas with different scene identifiers do not receive them, which reduces the data transmission pressure of the server and the working pressure of the other clients. As an example, the game scene includes a plurality of scene areas, and each scene area corresponds to an area identifier; step b) may specifically include the following steps:
and step c), sending the data of the target game scene and the target area identification corresponding to the target scene area to the server, so that the server determines the target game scene where the first virtual character is located according to the data of the target game scene, determines the target scene area according to the target area identification, and sends the target state frame corresponding to the target scene area to the second client.
For example, as shown in fig. 6, each scene area in the game corresponds to a unique area identifier; for example, the middle-left scene area is identified as 00001, the middle scene area as 00010, the middle-right scene area as 00100, and so on. If the first virtual character 601 is located in the scene area with identifier 00001, the first client may send the data of the target scene and the target area identifier 00001 of the target scene area to the server, so that the server determines the target scene area where the first virtual character is located from the target area identifier 00001 and sends the status frame to the second clients located in the same scene, that is, the clients of the other virtual characters in the scene area corresponding to the target area identifier 00001.
In practical applications, it can be understood that the target area identifier is a Tag value: a Tag value is maintained to represent the entire map. The map is divided into small areas according to conditions such as the field of view or the game scene, and each area occupies one bit of the overall Tag. When the client uploads a status frame, the areas visible to its controlled virtual character must be marked in the Tag and uploaded with it. After receiving the status frames, the server groups the virtual characters by Tag value, that is, Tags that share a set bit fall into the same group. If a virtual character is at the edge of an area, it may be in several groups at the same time. As shown in fig. 6, each scene area corresponds to one bit of an integer: the middle-left scene area is identified as 00001, i.e., by the fifth bit; the middle scene area is identified as 00010, i.e., by the fourth bit; the middle-right scene area is identified as 00100; the lower-left scene area is identified as 01000; and the lower-middle scene area is identified as 10000. The first virtual character 601 is in the scene area with identifier 00001, so the identifier 00001 is uploaded to the server; the first virtual character can only observe other virtual characters in the scene area with identifier 00001, and all virtual characters whose scene identifier has the fifth bit set (XXXX1) can observe the first virtual character 601. The second virtual character 602 is simultaneously in the scene areas with identifiers 00010 and 10000, i.e., the areas marked by the fourth bit and the first bit, so the identifier 10010 is uploaded to the server; the second virtual character 602 can see other virtual characters in both areas, and the second virtual character 602 can be observed by all virtual characters whose scene identifier has the fourth or the first bit set (1XX1X, which also covers 1XXXX and XXX1X).
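A small sketch of the Tag mechanism described above, assuming each scene area maps to one bit of an integer; the helper names are illustrative assumptions.

```python
def area_tag(area_bits) -> int:
    """Build a Tag bitmask in which each visible scene area sets one bit."""
    tag = 0
    for bit in area_bits:
        tag |= 1 << bit
    return tag

def same_group(tag_a: int, tag_b: int) -> bool:
    """Characters whose Tags share at least one set bit are grouped together by the server."""
    return (tag_a & tag_b) != 0

tag_601 = area_tag([0])      # first virtual character, area 00001
tag_602 = area_tag([1, 4])   # second virtual character, straddling areas 00010 and 10000
assert not same_group(tag_601, tag_602)   # no shared bit, so frames are not forwarded between them
```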
It should be noted that the target scene area uploaded by the first client may be a scene in a view field viewed by the first virtual character controlled by the first client, that is, a game scene currently displayed on the graphical user interface of the first client; the target scene area uploaded by the first client may also be a partial range in a scene viewed by the first virtual character, for example, only a game scene in the upper left corner of the graphical user interface of the first client is uploaded.
By sending the data of the target game scene and the target area identifier corresponding to the target scene area to the server, the server can determine the target game scene where the first virtual character is located from the data of the target game scene, determine the target scene area from the target area identifier, and send the target state frame corresponding to the target scene area to the second client. The second client thus accurately receives the state frames of the first virtual character in the same area, while clients corresponding to scene areas with different identifiers do not receive them, which reduces the data transmission pressure of the server and the working pressure of the other clients.
In some embodiments, the amount of data to be transmitted may be reduced in a flexible manner, so as to effectively reduce the data processing pressure of the client, for example, duplicate or unchanged data in the status frame is omitted, and only the changed data in the status frame is encapsulated to obtain a corresponding data packet, thereby effectively reducing the amount of data to be transmitted. As an example, the status frame is sent to the server in the form of a data packet; before the step S430, the method may further include the steps of:
and d), determining the changed data in the state frame.
And e), data changed in the state frame is added into the packet body of the data packet.
In practical applications, Protobuf is an extensible data serialization mechanism that supports multiple platforms and languages. It achieves efficient storage by serializing structured data and is used as a language-independent, platform-independent, extensible serialized structured data format in fields such as communication protocols and data storage; it also supports user-defined data structures. Taking a side-scrolling 2-dimensional (2D) game as an example, the game has only two dimensions, the X axis and the Y axis, and a virtual character can only move up, down, left, and right; apart from necessary parameters such as the virtual character's coordinates, the virtual character state, and the map state, the remaining states can be represented by different bits of a single value. In addition, because Protobuf does not write default values into the packet body, certain unchanged values can be omitted: for example, while the virtual character moves along the platform (in the X-axis direction), the Y-axis coordinate, the virtual character state, and the map state do not change, so they are not written into the packet body. After compression in practice, the size of one status frame is kept within 2-19 bytes. This amount of compression is considerable and can effectively reduce traffic. The same method is also applicable to 3-dimensional (3D) games with only a few extensions, such as adding a Z-axis coordinate and game scene fields, although the data traffic increases correspondingly.
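The idea of omitting unchanged values can be illustrated without Protobuf itself; the sketch below keeps only the fields that differ from the previous frame, which mirrors the behaviour of not writing default values into the packet body (the field names are assumptions).

```python
def delta_encode(frame: dict, previous: dict) -> dict:
    """Keep only the fields whose value changed since the previous frame."""
    return {key: value for key, value in frame.items() if previous.get(key) != value}

prev = {"x": 10.0, "y": 0.0, "char_state": 0, "map_state": 0}
curr = {"x": 11.0, "y": 0.0, "char_state": 0, "map_state": 0}
print(delta_encode(curr, prev))   # {'x': 11.0}: only the changed X coordinate is packed
```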
For example, the first client may perform real-time refreshing or refreshing at a preset frame rate, for example, draw an image at a preset frame rate of 15FPS, and omit some invariant values through characteristics of Protobuf in the process of drawing the image, although the number of status frames in the packet body of each data packet is unchanged, the storage space occupied by each data packet is reduced, thereby relieving data processing pressure of the client and the server.
The first client determines the changed data in the status frame first, and then the changed data in the status frame is embedded into the body of the data packet. By only encapsulating the data changed in the state frame, the data amount required to be transmitted is effectively reduced, and the data processing pressure of the client is reduced.
In some embodiments, the data amount to be transmitted can be reduced in a flexible manner, so as to effectively reduce the data processing pressure of the client, for example, the data amount can be reduced by limiting the number of status frames in each data packet on the premise of ensuring stable and continuous pictures, thereby effectively reducing the data processing pressure of the client and not affecting the game experience of the player. As an example, the status frame is sent to the server in the form of a data packet; each data packet comprises a state frame with a specified frame number; displaying the game picture on the second client at a designated frame rate; wherein the specified frame rate is a specified number of frames per second.
Illustratively, the client still renders images at a frame rate of 15 FPS, but the uploaded data is compressed from 15 frames to 7 frames: the client collects 7 frames of its own data, packs them into one data packet, and sends the packet to the server. This reduces the number of packets uploaded per second by all clients and the number of times the server must serialize and deserialize the protocol, effectively relieving the pressure on the server. At the same time, as shown in fig. 5, the motion of the actual first virtual character (solid figure) will appear delayed by 0.5 seconds in the eyes of other players (dashed figure). This is acceptable in a social-oriented game scene, and players are largely unaware of the phenomenon, that is, they cannot tell that someone actually jumped 0.5 seconds earlier.
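The batching of several state frames into one upload packet might look like the sketch below; the callback interface and class name are assumptions, and only the fixed count of 7 frames per packet comes from the text.

```python
from typing import Callable, List

class FrameBatcher:
    """Collect locally computed state frames and flush them as a single data packet."""
    FRAMES_PER_PACKET = 7

    def __init__(self, send_packet: Callable[[List[dict]], None]):
        self._send_packet = send_packet   # uploads one packed data packet to the server
        self._buffer: List[dict] = []

    def push(self, frame: dict) -> None:
        self._buffer.append(frame)
        if len(self._buffer) >= self.FRAMES_PER_PACKET:
            self._send_packet(list(self._buffer))
            self._buffer.clear()

batcher = FrameBatcher(send_packet=lambda packet: print(f"uploading {len(packet)} frames"))
for tick in range(15):
    batcher.push({"tick": tick})
```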
It should be noted that the number of frames contained in each data packet can be changed, but doing so strongly affects the player experience. If the 15 frames were halved, the actions of other virtual characters would appear noticeably choppy to the player. If the 7 frames were halved, the first avatar would appear delayed by 1 second in the eyes of other players.
Sending the state frame to a server in a data packet form; each data packet comprises a state frame with a specified frame number; the game screen is displayed at a specified frame rate on the second client. The data volume needing to be transmitted is reduced in a flexible mode, so that the data processing pressure of the client is effectively reduced, and the game experience of a player is not influenced.
In some embodiments, the data types of the game state data may include multiple types, so that the data included in the data packet may sufficiently record the behavior state of the first virtual character and other related content, and further, other clients may accurately represent the behavior state of the first virtual character and other related content, thereby improving the game experience of the player. As one example, the game state data includes any one or more of:
movement state data, collision state data, position state data in the game.
Illustratively, the state data may include a movement state of the first avatar, such as which direction the first avatar is moving, whether a jump is made, and the like; the state data may also include a collision state of the first virtual character, such as whether a collision occurs with other virtual characters, whether a collision occurs with a prop model in the game scene area, and so on; the state data may also include a position state of the first avatar, such as which identifier the first avatar is currently in corresponding to the game scene area.
The game state data comprises a plurality of types, so that the data contained in the data packet can fully record the behavior state and other related contents of the first virtual character, other clients can accurately show the behavior state and other related contents of the first virtual character, and the game experience of the player is improved.
Fig. 7 is a schematic flowchart of a data processing method for a virtual character in a game according to an embodiment of the present disclosure. The method can be applied to a server side, and a game scene of a game comprises a first virtual character controlled by a first client side. As shown in fig. 7, the method includes:
Step S710, obtaining a status frame sent by the first client.
The state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client.
For example, the server may obtain a status frame sent by the first client, where the status frame is obtained by the first client encapsulating state data, and the state data is game state data generated based on the control instruction of the first client to the first virtual character. For example, the player controls the first virtual character through the first client to move, jump, and so on in the game scene, and the first client may take data such as the movement distance and position, the jump height, and whether the first virtual character collides with terrain in the game scene as game state data, and then encapsulate the data to obtain a state frame.
In step S720, a second virtual character in the same game scene as the first virtual character is determined.
For example, the server may determine a second virtual character in the same room (game scene) as the first virtual character based on the data of the target game scene sent by the first client, the target area identifier corresponding to the target scene area, and the like. In other words, the server's forwarding has a screening function, so that the second virtual characters in the same room (game scene) as the first virtual character can be screened out.
Step S730, sending the status frame to the second client corresponding to the second virtual character, so as to display the game screen corresponding to the game status data on the second client.
Illustratively, the server sends the encapsulated status frame to the second client, so that the second client does not need to recalculate anything and can draw directly from the content of the status frame, that is, display the first virtual character in the graphical user interface of the second client performing actions such as moving and jumping in the game scene.
In the embodiment of the disclosure, when a player controls the first virtual character through the first client, the system may acquire the state data of the first virtual character, encapsulate the state data into a state frame, and synchronize the state frame to the second client through the server, so that the second client displays the first virtual character on its graphical user interface based on the state frame. With the state frame, the server, as in frame synchronization, does not need to hold any game data, and the calculation is still handed to the clients for processing; at the same time, because each client only processes and uploads its own data, the data received from other clients can be drawn directly. This effectively reduces the pressure on the client and alleviates the technical problem that a large data processing amount on the client leads to high operating pressure on the client.
The above steps are described in detail below.
In some embodiments, the client may display only the content of its corresponding scene area, which reduces the data processing amount of the client and relieves its pressure in a more flexible manner; for example, through the screening performed by the server, the second client may receive and display only the game pictures of the same area, effectively reducing the working pressure of the second client. As an example, step S720 may specifically include the following steps:
Step f), acquiring the scene identifier sent by each client.
Step g), determining a second virtual character in the same game scene as the first virtual character based on the game scenes in which the plurality of virtual characters are located.
For step f) above, the scene identifier is used to indicate at least one game scene in which the virtual character controlled by the client is located in the game.
Illustratively, as shown in fig. 6, the scene identifier can be understood as a Tag value, and a Tag value is maintained to represent the entire map. The whole map is divided into small areas according to conditions such as field of view or game scene, and each area occupies one bit of the overall Tag. When a client uploads a status frame, the areas visible to its controlled virtual character are marked on the Tag and uploaded together. After receiving the scene identifiers sent by the clients, the server groups the virtual characters according to the Tag values, that is, characters whose Tags share at least one common bit are in one group. The first client sends the scene identifier 00001 of the scene area where the first virtual character 601 is located to the server; after the server obtains the scene identifiers from the multiple clients, it can determine which virtual characters are located in the same scene area as the first virtual character 601 and then, for that scene area, broadcast the status frame corresponding to each virtual character to all virtual characters located in the area, so that other players can observe the state of those characters. As another example, if the second virtual character 602 is in the scene area 00010 and the scene area 10000 at the same time, the server obtains the scene identifiers of both scene areas and then, for each of the scene areas 00010 and 10000, broadcasts the status frame corresponding to each virtual character to all virtual characters in that area; that is, the second virtual character 602 can be observed by the other virtual characters in the scene areas 00010 and 10000.
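A minimal Python sketch of the Tag-based screening described above is given below, assuming each client uploads a bitmask whose set bits are the scene areas visible to its character; the class and method names are illustrative assumptions of this sketch.

class ScreeningServer:
    def __init__(self):
        self.client_tags = {}   # client_id -> scene identifier bitmask

    def on_scene_tag(self, client_id, tag):
        """Record the scene areas (one bit each) visible to this client's character."""
        self.client_tags[client_id] = tag

    def broadcast_state_frame(self, sender_id, frame, send):
        """Forward the frame to every other client that shares at least one area bit."""
        sender_tag = self.client_tags.get(sender_id, 0)
        for client_id, tag in self.client_tags.items():
            if client_id != sender_id and (tag & sender_tag):
                send(client_id, frame)

In this sketch, a character whose Tag is 00001 shares no bit with a character whose Tag covers the areas 00010 and 10000, so their clients never receive each other's status frames.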
In this method, the server obtains the scene identifiers sent by each client, determines the virtual characters in the same scene area based on the scene areas where the plurality of virtual characters are located, and then, for each scene area, broadcasts the status frame corresponding to each virtual character to all virtual characters in that area. As a result, each client only displays the game content of the same game scene, the player is guaranteed to see the required game content within the field of view, unnecessary content that does not need data transmission is omitted, and the working pressure of both the server and the client is reduced.
In some embodiments, the server corresponds to a plurality of game scenes; by traversing all the game scenes and then allocating a game scene to the virtual character according to the traversal result, the game scene of the highest priority for that virtual character can be obtained, so that the virtual character is placed in the same game scene as other virtual characters of higher similarity, improving the game experience of the player. As an example, the server corresponds to a plurality of game scenes, and each virtual character corresponds to a plurality of game scene allocation dimensions; step S720 may specifically include the following steps:
Step h), determining, by traversing all the game scenes, the game scene that is optimal for each of the plurality of game scene allocation dimensions.
Step i), determining, based on the priority of each game scene allocation dimension and from the game scenes that are optimal in their respective allocation dimensions, the game scene to which the first virtual character is allocated and the second virtual characters allocated to the same game scene as the first virtual character.
For example, as shown in fig. 8, the embodiment of the present disclosure may be applied to a social square in a game. The social square is a distributed micro-service architecture that is relatively independent of other service clusters, and both the clusters and the interior of each cluster support horizontal scaling. Clients randomly access different gateways, so the number of players in different clusters stays roughly equal. Several load-balancing services are deployed within a cluster; they calculate the current load in real time from the last load value and a predicted value of each square process, and select an optimal process to allocate to the virtual character. Clusters and square processes are allocated on an equal basis, so the probability of a virtual character entering each square service is the same. Each process contains hundreds of game scenes, and these game scenes are in turn allocated according to attribute priorities defined by the game planners, so that virtual characters with similar or mutually attractive attributes are more likely to be allocated together. For example, it is preferable to first match the first virtual character with other virtual characters having similar game habits; if that fails, with characters of a similar matching level; and if that also fails, these conditions are abandoned and a relatively active game scene is matched instead. All game scenes can be inserted into four queues defined by activity level: full, high-load, active, and idle. The game habit is set as the first priority and the character level as the second priority, and every entry or exit of a virtual character updates the game scene attributes. In this way, the problem changes from finding a suitable virtual character to finding a suitable game scene. In addition, because traversing all game scenes by activity level once per priority would waste performance, an additional traversal memo list is maintained: all game scenes satisfying the various conditions are obtained after a single traversal, and the traversal exits immediately if a certain game scene satisfies the highest priority.
In practical application, the server starts with an empty list as shown in Table 1, fills in the game scene that is optimal for each dimension during a single traversal, and finally obtains the result from the list.
TABLE 1
Priority \ Activity | Active | Idle
Similar game habits | Game scene 1 | None
Similar level | None | Game scene 2
The number of priorities and the number of game scenes can both grow according to design requirements. If there are N game scenes in total and M priorities, the number of traversal steps before optimization is N × M, and the number after optimization is N + M. Note that this design is strictly priority-based: if the game habits are similar, it no longer matters whether the matching levels are close. A weighted design may also be considered, in which each game scene has a habit weight and a level weight and the weighted sum is used as the screening condition.
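The traversal memo can be sketched as follows in Python, under the assumption that the priorities are given as an ordered list of predicate functions; this is only an illustrative sketch, in which the scenes are scanned once rather than once per priority and the scan exits early when a scene satisfies the highest priority.

def allocate_scene(scenes, priorities):
    """scenes: iterable of candidate game scenes;
    priorities: list of (name, predicate) pairs ordered from highest to lowest priority."""
    memo = {name: None for name, _ in priorities}   # the memo list of Table 1
    for scene in scenes:
        for name, matches in priorities:
            if memo[name] is None and matches(scene):
                memo[name] = scene
                if name == priorities[0][0]:
                    return scene                    # highest priority satisfied: exit immediately
                break
    for name, _ in priorities:                      # otherwise take the best priority that was filled
        if memo[name] is not None:
            return memo[name]
    return None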
The server traverses all game scenes, determines the target game scene that is optimal for each of the plurality of game scene allocation dimensions, and then, based on the priority of each allocation dimension, determines from these target game scenes the final game scene to which the virtual character is allocated. In this way, the virtual character can be located in the same game scene as other virtual characters of higher similarity, the player can find other players of interest in the game, and the game experience of the player is improved.
Based on step h) and step i) above, the game scene allocation dimension may include multiple types. With multiple types of allocation dimensions, the final game scene can be determined flexibly and specifically, the virtual character can be allocated to the game scene with the highest matching degree, and the game experience of the player is improved. As one example, the game scene allocation dimension includes any one or more of:
game habits of virtual characters, character levels, and character liveness.
For example, the game scene allocation dimensions may include the game habits of virtual characters, character levels, character activeness, and the like. Different dimensions can be given different priorities, and the virtual characters are then allocated accordingly. As mentioned above, the game habits of a virtual character may be given the first priority and the character level the second priority, and each player's entry and exit updates the game scene attributes. In this way, the problem changes from finding a suitable player to finding a suitable game scene. Using the traversal memo, all game scenes satisfying the various conditions are obtained after one traversal, and if a certain game scene satisfies the highest priority, the traversal can exit immediately.
Because the game scene allocation dimensions include multiple types, the final game scene can be determined flexibly and precisely, the virtual character can be allocated to the game scene with the highest matching degree and placed together with other virtual characters of high similarity, the player can find other players of interest in the game, and the game experience of the player is improved.
In some embodiments, the server may further divide the plurality of second virtual characters according to the division dimensions corresponding to the second virtual characters, determine the target second virtual characters that are divided into the same class as the first virtual character, and, through this screening, send the status frame only to the target second clients corresponding to the target second virtual characters. This achieves the effect that only players of interest to each other are visible to each other, further improving the game experience of the players. As an example, the number of second virtual characters is plural, and each second virtual character corresponds to a game character division dimension; step S730 may specifically include the following steps:
Step j), determining, from the plurality of second virtual characters and through the game character division dimension, a target second virtual character classified into the same class as the first virtual character.
Step k), sending the state frame to the target second client corresponding to the target second virtual character, so as to display the game picture corresponding to the game state data on the target second client.
Illustratively, similar to the allocation of game scenes, the game habits of the virtual characters can be set as the first priority, the virtual character level as the second priority, and the character activeness as the third priority; the plurality of second virtual characters are then divided and screened according to these game character division dimensions, and a batch of players most similar to the first virtual character can be determined. The server then sends the state frame of the first client to the corresponding target second clients, so that those clients can display the game picture corresponding to the game state data.
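For illustration, the priority-by-priority screening of second virtual characters might look like the following Python sketch; the similarity predicates similar_habits, similar_level and is_active are assumed helper functions, not elements of the disclosure.

def select_target_second_characters(first, candidates, similar_habits, similar_level, is_active):
    """Return the second virtual characters most similar to `first`, checked priority by priority."""
    for rule in (similar_habits, similar_level, is_active):
        matched = [c for c in candidates if rule(first, c)]
        if matched:
            return matched     # stop at the highest priority that yields any players
    return list(candidates)    # no rule matched: fall back to all characters in the scene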
In this method, a target second virtual character classified into the same class as the first virtual character is first determined from the plurality of second virtual characters through the game character division dimension, and the state frame is then sent to the target second client corresponding to the target second virtual character, so as to display the game picture corresponding to the game state data on the target second client. By further dividing the plurality of second virtual characters, screening out those similar to the first virtual character, and sending the state frame only to the corresponding target second clients, the effect that only players of interest to each other are visible to each other is achieved, the operating pressure of the client is further reduced, and the game experience of the players is improved.
Based on step j) and step k) above, the game character division dimension may include multiple types. With multiple types of division dimensions, virtual characters of the same class can be determined flexibly and specifically, the first virtual character can be matched with the second virtual characters of the highest matching degree, and the game experience of the player is improved. As one example, the game character division dimension includes any one or more of:
game habits of virtual characters, character levels, and character liveness.
For example, the game character division dimensions may include the game habits of virtual characters, character levels, character activeness, and the like. Different dimensions can be given different priorities, and the virtual characters are then grouped accordingly. As mentioned above, the game habits of the virtual characters may be given the first priority and the virtual character level the second priority, and each virtual character's entry and exit updates the game scene attributes. Similarly, a traversal memo like the one shown in Table 1 can be used, so that all the second virtual characters belonging to the same class are obtained after one traversal.
Because the game character division dimensions can be of various types, virtual characters of the same class can be determined flexibly and specifically, the first virtual character can be matched with the second virtual characters of the highest matching degree, and the game experience of the player is improved.
Fig. 9 is a schematic flowchart of a data processing method for a virtual character in a game according to an embodiment of the present disclosure. The method can be applied to a second client (for example, the mobile phone 102 shown in fig. 2), the same game scene of the game includes a first virtual character controlled by the first client and a second virtual character controlled by the second client, and the graphical user interface is provided by the second client. As shown in fig. 9, the method includes:
step S910, receiving a status frame sent by the server.
The state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client.
Illustratively, based on the screening and allocation performed by the server, some of the second clients, namely the target second clients corresponding to the first client, will receive the status frame sent by the server.
In step S920, a game screen corresponding to the game state data is displayed on the graphical user interface.
For example, after receiving the status frame sent by the server, the second client may draw based on the content in the status frame, and then display a game screen corresponding to the game status data in the graphical user interface.
In the embodiment of the disclosure, the first client can encapsulate the game state data related to the first virtual character it controls into a state frame and send the state frame to the server, and the server then synchronizes the state frame to the second clients; each second client obtains the state frame from the server and directly displays the corresponding game picture based on it. With the state frame, the server, as in frame synchronization, does not need to hold any game data, and the calculation is still handed to the clients for processing. Meanwhile, because each client only processes and uploads the game state data related to the virtual character it controls, the data of other clients can be drawn directly when received, and each client does not need to perform a series of game state data calculations for virtual characters controlled by other clients. The data processing amount and operating pressure of the clients are thereby effectively reduced, which alleviates the technical problem that a large data processing amount on the client leads to high operating pressure on the client.
In some embodiments, the second client may display the game picture corresponding to the game state data in the graphical user interface in a more flexible manner; for example, the game picture corresponding to the game state data in the state frame is used to overlay the existing game picture, so that the game picture can be displayed to the player in real time. As an example, step S920 may specifically include the following steps:
Step l), displaying the game picture corresponding to the game state data in the graphical user interface in an overlaying manner.
For example, after receiving the state frame sent by the server, the second client may draw a corresponding game picture based on the game state data in the state frame, and then overlay it on the content currently displayed in the graphical user interface, so that the game content is updated in real time.
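A minimal sketch of this receive path on the second client is shown below; the packet is assumed to use the same JSON packing as the earlier sketch, and gui.draw_character and gui.refresh are assumed engine hooks rather than interfaces defined by the disclosure.

import json

def on_state_packet(packet, gui):
    """Draw the received state frames directly; nothing is re-simulated locally."""
    data = json.loads(packet.decode("utf-8"))
    for frame in data["frames"]:
        gui.draw_character(frame["character_id"],
                           position=frame["position"],
                           jumped=frame.get("jumped", False))
    gui.refresh()   # overlay the newly drawn picture onto the current screen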
The game display content can be updated quickly by displaying the game picture corresponding to the game state data in the graphical user interface in a covering manner, the game picture can be displayed for the player in real time, and the game experience of the player is improved.
Fig. 10 is a schematic structural diagram of a data processing apparatus for a virtual character in a game according to an embodiment of the present disclosure. The device can be applied to a first client, and a game scene of the game comprises a first virtual character controlled by the first client. As shown in fig. 10, the data processing device 1000 of the in-game virtual character includes:
the determining module 1001 is configured to determine, based on a control instruction of a first client to a first virtual character, game state data generated in a process of executing the control instruction by the first virtual character.
The encapsulating module 1002 is configured to encapsulate the game state data into a state frame.
The sending module 1003 is configured to send the status frame to the server, so that the server sends the status frame to the second client, so as to display a game screen corresponding to the game status data on the second client.
In some embodiments, the game state data includes any one or more of:
the state data of the first virtual character, the state data of other virtual characters except the first virtual character in the game scene and the state data of other virtual objects.
In some embodiments, the server end corresponds to a plurality of game scenes; the apparatus may further include:
the second determining module is used for determining the target game scene where the first virtual role is located;
and sending the data of the target game scene to the server so that the server sends the state frame to the second client, wherein the second virtual character controlled by the second client is the other virtual character except the first virtual character in the target game scene.
In some embodiments, the game scene includes a plurality of scene areas, and each scene area corresponds to an area identifier; the sending module 1003 is specifically configured to:
and sending the data of the target game scene and the target area identifier corresponding to the target scene area to the server, so that the server determines the target game scene where the first virtual character is located according to the data of the target game scene, determines the target scene area according to the target area identifier, and sends the target state frame corresponding to the target scene area to the second client.
In some embodiments, the status frame is sent to the server in the form of a data packet; the apparatus may further include:
the third determining module is used for determining the changed data in the state frame before sending the state frame to the server;
and (4) data changed in the status frame is typed into the packet body of the data packet.
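As a hedged illustration of writing only the changed data into the packet body, a state frame could be diffed against the previous one as in the following sketch; the dictionary representation of a frame is an assumption of this example.

def diff_state(prev, curr):
    """Return only the fields of the current state frame that differ from the previous one."""
    return {key: value for key, value in curr.items() if prev.get(key) != value}

# Example: if only the position changed, the packet body carries just that field.
# prev = {"position": (0, 0, 0), "jumped": False}
# curr = {"position": (1, 0, 0), "jumped": False}
# diff_state(prev, curr) == {"position": (1, 0, 0)}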
In some embodiments, the status frame is sent to the server in the form of a data packet; each data packet comprises a state frame with a specified frame number;
displaying the game picture on the second client at a specified frame rate; wherein the specified frame rate is a specified number of frames per second.
In some embodiments, the game state data includes any one or more of:
movement state data, collision state data, position state data in the game.
Fig. 11 is a schematic structural diagram of a data processing apparatus for a virtual character in a game according to an embodiment of the present disclosure. The device can be applied to a server side, and a game scene of a game comprises a first virtual character controlled by a first client side. As shown in fig. 11, the data processing apparatus 1100 of the in-game virtual character includes:
an obtaining module 1101, configured to obtain a status frame sent by a first client; the state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control command of the first client.
A determining module 1102, configured to determine a second virtual character in the same game scene as the first virtual character.
A sending module 1103, configured to send the status frame to a second client corresponding to the second virtual character, so as to display a game screen corresponding to the game status data on the second client.
In some embodiments, the determining module 1102 is specifically configured to:
acquiring scene identifiers sent by each client; wherein, the scene identification is used for representing at least one game scene in which the virtual character controlled by the client is positioned in the game;
and determining a second virtual character which is in the same game scene as the first virtual character based on the game scene in which the plurality of virtual characters are positioned.
In some embodiments, the server corresponds to a plurality of game scenarios, and each virtual character corresponds to a plurality of game scenario allocation dimensions; the determining module 1102 is specifically configured to:
determining a game scene with an optimal distribution dimension for each game scene in a plurality of game scene distribution dimensions by traversing all game scenes;
and determining a game scene to which the first virtual character is allocated and a second virtual character which is allocated to the same game scene as the first virtual character from the game scenes with the optimal game scene allocation dimension based on the priority of each game scene allocation dimension.
In some embodiments, the game scene allocation dimension includes any one or more of:
game habits of the virtual character, character level, and character liveness.
In some embodiments, the number of the second virtual characters is plural, and each second virtual character corresponds to a game character division dimension; the sending module 1103 is specifically configured to:
determining a target second virtual character which is classified into the same class as the first virtual character from a plurality of second virtual characters through a game character dividing dimension;
and sending the state frame to a target second client corresponding to the target second virtual character so as to display a game picture corresponding to the game state data on the target second client.
In some embodiments, the game character partitioning dimension includes any one or more of:
game habits of the virtual character, character level, and character liveness.
Fig. 12 is a schematic structural diagram of a data processing apparatus for a virtual character in a game according to an embodiment of the present disclosure. The device can be applied to a second client, the same game scene of the game comprises a first virtual character controlled by the first client and a second virtual character controlled by the second client, and a graphical user interface is provided through the second client. As shown in fig. 12, the data processing apparatus 1200 for an in-game virtual character includes:
a receiving module 1201, configured to receive a status frame sent by a server; the state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client.
The display module 1202 is configured to display a game screen corresponding to the game state data in the graphical user interface.
In some embodiments, the display module 1202 is specifically configured to:
and displaying the game picture corresponding to the game state data in the graphical user interface in an overlaying manner.
The data processing device for a virtual character in a game provided by the embodiment of the disclosure has the same technical features as the data processing method for a virtual character in a game provided by the embodiment of the disclosure, so it can solve the same technical problems and achieve the same technical effects. The first client can encapsulate the game state data related to the first virtual character it controls into a state frame and send the state frame to the server, and the server then synchronizes the state frame to the second clients; each second client obtains the state frame from the server and directly displays the corresponding game picture based on it. With the state frame, the server, as in frame synchronization, does not need to hold any game data, and the calculation is still handed to the clients for processing. Meanwhile, because each client only processes and uploads the game state data related to the virtual character it controls, the data of other clients can be drawn directly when received, and each client does not need to perform a series of game state data calculations for virtual characters controlled by other clients. The data processing amount and operating pressure of the clients are thereby effectively reduced, which alleviates the technical problem that a large data processing amount on the client leads to high operating pressure on the client.
An embodiment of the present application further provides an electronic device, including a processor, a storage medium and a bus, where the storage medium stores machine-readable instructions executable by the processor. When the electronic device executes the data processing method for a virtual character in a game as in the embodiment, the processor and the storage medium communicate via the bus, and the processor executes the machine-readable instructions to perform the following steps:
determining game state data generated in the process of executing the control instruction by the first virtual character based on the control instruction of the first client to the first virtual character;
packaging the game state data into a state frame;
and sending the state frame to a server so that the server sends the state frame to a second client to display a game picture corresponding to the game state data on the second client.
In one possible embodiment, the game state data includes any one or more of:
the state data of the first virtual character, the state data of other virtual characters except the first virtual character in the game scene and the state data of other virtual objects.
In one possible embodiment, the server corresponds to a plurality of the game scenes; further comprising:
determining a target game scene where the first virtual character is located;
and sending the data of the target game scene to the server so that the server sends the state frame to the second client, wherein a second virtual character controlled by the second client is other virtual characters except the first virtual character in the target game scene.
In a possible embodiment, the game scene comprises a plurality of scene areas, and each scene area corresponds to an area identifier;
the step of sending the data of the target game scene to the server so that the server sends the state frame to the second client includes:
and sending the data of the target game scene and a target area identifier corresponding to a target scene area to the server, so that the server determines the target game scene where the first virtual character is located according to the data of the target game scene, determines the target scene area according to the target area identifier, and sends a target state frame corresponding to the target scene area to the second client.
In one possible embodiment, the status frame is sent to the server in the form of a data packet; before the step of sending the status frame to the server, the method further includes:
determining data that changes in the status frame;
and inputting the changed data in the status frame into the packet body of the data packet.
In one possible embodiment, the status frame is sent to the server in the form of a data packet; each data packet comprises a state frame with a specified frame number;
the game picture is displayed on the second client at a designated frame rate; wherein the specified frame rate is the specified number of frames per second.
In one possible embodiment, the game state data includes any one or more of:
movement state data, collision state data, position state data in the game.
When the electronic device runs the data processing method for a virtual character in a game as in the embodiment, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the following steps:
acquiring a state frame sent by the first client; the state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client;
determining a second virtual character in the same game scene as the first virtual character;
and sending the state frame to a second client corresponding to the second virtual character so as to display a game picture corresponding to the game state data on the second client.
In one possible embodiment, the step of determining a second virtual character in the same game scene as the first virtual character comprises:
acquiring scene identifiers sent by each client; wherein the scene identifier is used for representing at least one game scene in which the virtual character controlled by the client is located in the game;
and determining a second virtual character in the same game scene as the first virtual character based on the game scene in which the plurality of virtual characters are positioned.
In one possible embodiment, the server corresponds to a plurality of the game scenes, and each virtual character corresponds to a plurality of game scene allocation dimensions;
the step of determining a second virtual character in the same game scene as the first virtual character comprises:
determining a game scene with an optimal distribution dimension for each of the plurality of game scene distribution dimensions by traversing all the game scenes;
and determining a game scene to which the first virtual character is allocated and a second virtual character which is allocated to the same game scene as the first virtual character from the game scenes with the optimal game scene allocation dimension based on the priority of each game scene allocation dimension.
In one possible embodiment, the game scene allocation dimension includes any one or more of:
game habits, character levels, and character liveness of the virtual characters.
In one possible embodiment, the number of the second virtual characters is plural, and each second virtual character corresponds to a game character division dimension;
the step of sending the status frame to a second client corresponding to the second virtual character to display a game screen corresponding to the game status data on the second client includes:
determining a target second virtual character which is classified as the same as the first virtual character from the plurality of second virtual characters through the game character dividing dimension;
and sending the state frame to a target second client corresponding to the target second virtual character so as to display a game picture corresponding to the game state data on the target second client.
In one possible embodiment, the game character partitioning dimension includes any one or more of:
game habits, character levels, and character liveness of the virtual characters.
When the electronic device runs the data processing method for a virtual character in a game as in the embodiment, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the following steps:
receiving a state frame sent by a server; the state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client;
and displaying a game picture corresponding to the game state data in the graphical user interface.
In a possible embodiment, the step of displaying a game screen corresponding to the game state data in the graphical user interface includes:
and displaying a game picture corresponding to the game state data in the graphical user interface in an overlaying manner.
The specific embodiment and the specific working process of the steps executed by the electronic device may refer to the corresponding processes in the method embodiments, and are not described herein again.
In the above manner, the first client can encapsulate the game state data related to the first virtual character it controls into a state frame and send it to the server; the server then synchronizes the state frame to the second clients, and each second client obtains the state frame from the server and directly displays the corresponding game picture based on it. With the state frame, the server, as in frame synchronization, does not need to hold any game data, and the calculation is still handed to the clients for processing. Meanwhile, because each client only processes and uploads the game state data related to the virtual character it controls, the data of other clients can be drawn directly when received, and each client does not need to perform a series of game state data calculations for virtual characters controlled by other clients. The data processing amount and operating pressure of the clients are thereby effectively reduced, which alleviates the technical problem that a large data processing amount on the client leads to high operating pressure on the client.
Corresponding to the data processing method of the virtual character in the game, the embodiment of the present disclosure further provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to perform the following steps:
determining game state data generated in the process of executing the control instruction by the first virtual character based on the control instruction of the first client to the first virtual character;
packaging the game state data into a state frame;
and sending the state frame to a server side so that the server side sends the state frame to a second client side to display a game picture corresponding to the game state data on the second client side.
In one possible embodiment, the game state data includes any one or more of:
the state data of the first virtual character, the state data of other virtual characters except the first virtual character in the game scene and the state data of other virtual objects.
In one possible embodiment, the server corresponds to a plurality of the game scenes; further comprising:
determining a target game scene where the first virtual character is located;
and sending the data of the target game scene to the server so that the server sends the state frame to the second client, wherein a second virtual character controlled by the second client is the other virtual character except the first virtual character in the target game scene.
In a possible embodiment, the game scene comprises a plurality of scene areas, and each scene area corresponds to an area identifier;
the step of sending the data of the target game scene to the server so that the server sends the state frame to the second client includes:
and sending the data of the target game scene and a target area identifier corresponding to a target scene area to the server, so that the server determines the target game scene where the first virtual character is located according to the data of the target game scene, determines the target scene area according to the target area identifier, and sends a target state frame corresponding to the target scene area to the second client.
In one possible embodiment, the status frame is sent to the server in the form of a data packet; before the step of sending the status frame to the server, the method further includes:
determining data that changes in the status frame;
and the data changed in the status frame is input into the packet body of the data packet.
In one possible embodiment, the status frame is sent to the server in the form of a data packet; each data packet comprises a state frame with a specified frame number;
the game picture is displayed on the second client at a specified frame rate; and the specified frame rate is the specified frame number per second.
In one possible embodiment, the game state data includes any one or more of:
movement state data, collision state data, position state data in the game.
The computer readable storage medium stores computer executable instructions that, when invoked and executed by a processor, further cause the processor to perform the steps of:
acquiring a state frame sent by the first client; the state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client;
determining a second virtual character in the same game scene as the first virtual character;
and sending the state frame to a second client corresponding to the second virtual character so as to display a game picture corresponding to the game state data on the second client.
In one possible embodiment, the step of determining a second virtual character in the same game scene as the first virtual character comprises:
acquiring scene identifiers sent by each client; wherein the scene identifier is used for representing at least one game scene in which the virtual character controlled by the client is located in the game;
and determining a second virtual character in the same game scene as the first virtual character based on the game scene in which the plurality of virtual characters are positioned.
In one possible embodiment, the server end corresponds to a plurality of the game scenes, and each virtual character corresponds to a plurality of game scene distribution dimensions;
the step of determining a second virtual character in the same game scene as the first virtual character comprises:
determining a game scene with an optimal distribution dimension for each of the plurality of game scene distribution dimensions by traversing all the game scenes;
and determining a game scene to which the first virtual character is allocated and a second virtual character which is allocated to the same game scene as the first virtual character from the game scenes with the optimal game scene allocation dimension based on the priority of each game scene allocation dimension.
In one possible embodiment, the game scene allocation dimension includes any one or more of:
game habits, character levels, and character liveness of the virtual characters.
In one possible embodiment, the number of the second virtual characters is multiple, and each second virtual character corresponds to a game character dividing dimension;
the step of sending the status frame to a second client corresponding to the second virtual character to display a game screen corresponding to the game status data on the second client includes:
determining a target second virtual character which is classified as the same as the first virtual character from a plurality of the second virtual characters through the game character division dimension;
and sending the state frame to a target second client corresponding to the target second virtual character so as to display a game picture corresponding to the game state data on the target second client.
In one possible embodiment, the game character partitioning dimension includes any one or more of:
game habits, character levels, and character liveness of the virtual characters.
The computer readable storage medium stores computer executable instructions that, when invoked and executed by a processor, further cause the processor to perform the steps of:
receiving a state frame sent by a server; the state frame is obtained by encapsulating game state data by the first client, wherein the game state data is generated in the process that the first virtual character executes a control instruction of the first client;
and displaying a game picture corresponding to the game state data in the graphical user interface.
In a possible embodiment, the step of displaying a game screen corresponding to the game state data in the graphical user interface includes:
and displaying a game picture corresponding to the game state data in the graphical user interface in an overlaying manner.
When the computer executable instruction in the computer readable storage medium is called and executed by a processor, a specific embodiment and a specific working process of the steps executed by the processor may refer to a corresponding process in the method embodiment, which is not described herein again.
In the above manner, the first client can encapsulate the game state data related to the first virtual character it controls into a state frame and send it to the server; the server then synchronizes the state frame to the second clients, and each second client obtains the state frame from the server and directly displays the corresponding game picture based on it. With the state frame, the server, as in frame synchronization, does not need to hold any game data, and the calculation is still handed to the clients for processing. Meanwhile, because each client only processes and uploads the game state data related to the virtual character it controls, the data of other clients can be drawn directly when received, and each client does not need to perform a series of game state data calculations for virtual characters controlled by other clients. The data processing amount and operating pressure of the clients are thereby effectively reduced, which alleviates the technical problem that a large data processing amount on the client leads to high operating pressure on the client.
The data processing device for a virtual character in a game provided by the embodiment of the disclosure may be specific hardware on a device, or software or firmware installed on a device, and the like. The device provided by the embodiment of the present disclosure has the same implementation principle and technical effect as the foregoing method embodiment; for the sake of brevity, for anything not mentioned in the device embodiment, reference may be made to the corresponding contents of the foregoing method embodiment. It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the system, the apparatus and the units described above may all refer to the corresponding processes in the method embodiments, and are not described herein again.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division by logical function, and there may be other ways of dividing them in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling or direct coupling or communication connection between the components shown or discussed may be through some communication interfaces, or an indirect coupling or communication connection between devices or units, and may be in electrical, mechanical or other form.
For another example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present disclosure, which essentially or partly contribute to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the data processing method for virtual characters in a game according to various embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure and not to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can, within the technical scope of the present disclosure, modify or easily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions that do not depart from the spirit and scope of the technical solutions of the present disclosure are intended to be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (20)

1. A data processing method for virtual characters in a game is characterized in that a game scene of the game comprises a first virtual character controlled by a first client, and the method comprises the following steps:
determining game state data generated in the process of executing the control instruction by the first virtual character based on the control instruction of the first client to the first virtual character;
packaging the game state data into a state frame;
and sending the state frame to a server so that the server sends the state frame to a second client to display a game picture corresponding to the game state data on the second client.
2. The in-game virtual character data processing method according to claim 1, wherein the game state data includes any one or more of:
the state data of the first virtual character, the state data of other virtual characters except the first virtual character in the game scene and the state data of other virtual objects.
3. The in-game virtual character data processing method of claim 1, wherein the server terminal corresponds to a plurality of the game scenes; further comprising:
determining a target game scene where the first virtual character is located;
sending the data of the target game scene to the server side so that the server side sends the state frame to the second client side; and the second virtual character controlled by the second client is the other virtual characters except the first virtual character in the target game scene.
4. The data processing method for virtual characters in a game according to claim 3, wherein the game scene comprises a plurality of scene areas, and each scene area corresponds to an area identifier;
the sending the data of the target game scene to the server so that the server sends the state frame to the second client comprises:
and sending the data of the target game scene and a target area identifier corresponding to a target scene area to the server, so that the server determines the target game scene where the first virtual character is located according to the data of the target game scene, determines the target scene area according to the target area identifier, and sends a target state frame corresponding to the target scene area to the second client.
5. The in-game virtual character data processing method according to claim 1, wherein the status frame is transmitted to the server in the form of a data packet; before sending the status frame to the server, the method further includes:
determining data that changes in the status frame;
and the data changed in the status frame is input into the packet body of the data packet.
6. The data processing method of an in-game virtual character according to claim 1, wherein the state frame is transmitted to the server in the form of a data packet, and each data packet comprises a specified number of state frames;
the game picture is displayed on the second client at a specified frame rate, wherein the specified frame rate is the specified number of frames per second.
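Claim 6 links the specified number of state frames per data packet to the frame rate at which the second client displays the game picture (the specified frame rate being the specified number of frames per second). A small illustrative calculation, assuming one data packet is delivered per second:

```python
FRAMES_PER_PACKET = 30    # specified number of state frames in each data packet
PACKETS_PER_SECOND = 1    # illustrative assumption: one packet delivered per second

# The second client then displays the game picture at a frame rate equal to the
# specified number of frames per second.
display_frame_rate = FRAMES_PER_PACKET * PACKETS_PER_SECOND
print(display_frame_rate)  # 30 frames per second
```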
7. The method of claim 1, wherein the game state data comprises any one or more of:
movement state data, collision state data, position state data in the game.
8. A data processing method of virtual characters in a game is characterized in that the data processing method is applied to a server side, and a game scene of the game comprises a first virtual character controlled by a first client side; the method comprises the following steps:
acquiring a state frame sent by the first client; the state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client;
determining a second virtual character in the same game scene as the first virtual character;
and sending the state frame to a second client corresponding to the second virtual character so as to display a game picture corresponding to the game state data on the second client.
9. The method of claim 8, wherein the determining a second virtual character in the same game scene as the first virtual character comprises:
acquiring a scene identifier sent by each client; wherein the scene identifier indicates the game scene in which the virtual character controlled by that client is located in the game;
and determining, based on the game scenes in which the respective virtual characters are located, a second virtual character in the same game scene as the first virtual character.
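Claims 8 and 9 can be read as the server keeping, from the client-reported scene identifiers, a record of which scene each virtual character is in, and selecting as second virtual characters all characters that share the first character's scene. A minimal sketch with a hypothetical scene_by_character mapping:

```python
from typing import Dict, List

def second_characters_in_same_scene(first_character: str,
                                    scene_by_character: Dict[str, str]) -> List[str]:
    """scene_by_character maps each virtual character to the scene identifier reported
    by its client; return every other character located in the same game scene."""
    target_scene = scene_by_character[first_character]
    return [c for c, scene in scene_by_character.items()
            if scene == target_scene and c != first_character]

if __name__ == "__main__":
    scenes = {"hero_1": "scene_A", "hero_2": "scene_A", "hero_3": "scene_B"}
    print(second_characters_in_same_scene("hero_1", scenes))  # ['hero_2']
```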
10. The data processing method for an in-game virtual character according to claim 8, wherein the server side corresponds to a plurality of the game scenes, and each virtual character corresponds to a plurality of game scene allocation dimensions;
the determining a second virtual character in the same game scene as the first virtual character comprises:
determining, for each of the plurality of game scene allocation dimensions, a game scene that is optimal in that allocation dimension by traversing all the game scenes;
and determining, from the game scenes that are optimal in their respective allocation dimensions and based on the priority of each game scene allocation dimension, the game scene to which the first virtual character is allocated and a second virtual character allocated to the same game scene as the first virtual character.
11. The method of claim 10, wherein the game scene assignment dimension comprises any one or more of the following:
game habits, character levels, and character liveness of the virtual characters.
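Claims 10 and 11 outline a two-step allocation: for each game scene allocation dimension (such as game habits, character level or liveness) the server traverses all game scenes to find the scene that is optimal in that dimension, and the priority of the dimensions then decides which candidate scene the first virtual character is allocated to. One possible reading, with entirely hypothetical scoring functions:

```python
from typing import Callable, Dict, List

def allocate_scene(scenes: List[str],
                   dimension_scores: Dict[str, Callable[[str], float]],
                   dimension_priority: List[str]) -> str:
    """For each allocation dimension, traverse all scenes and remember the scene with
    the best score; then pick the candidate of the highest-priority dimension."""
    best_per_dimension = {
        dim: max(scenes, key=score)            # traverse all game scenes per dimension
        for dim, score in dimension_scores.items()
    }
    for dim in dimension_priority:             # resolve between dimensions by priority
        if dim in best_per_dimension:
            return best_per_dimension[dim]
    raise ValueError("no allocation dimension configured")

if __name__ == "__main__":
    scenes = ["scene_A", "scene_B"]
    scores = {
        "game_habits":     lambda s: {"scene_A": 0.9, "scene_B": 0.4}[s],
        "character_level": lambda s: {"scene_A": 0.2, "scene_B": 0.8}[s],
    }
    print(allocate_scene(scenes, scores, ["game_habits", "character_level"]))  # scene_A
```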
12. The in-game virtual character data processing method according to claim 8, wherein there are a plurality of the second virtual characters, and each second virtual character corresponds to a game character division dimension;
the sending the state frame to a second client corresponding to the second virtual character to display a game picture corresponding to the game state data on the second client includes:
determining, from the plurality of second virtual characters and according to the game character division dimension, a target second virtual character that is classified into the same class as the first virtual character;
and sending the state frame to a target second client corresponding to the target second virtual character so as to display a game picture corresponding to the game state data on the target second client.
13. The in-game virtual character data processing method of claim 12, wherein the game character division dimension includes any one or more of:
game habits, character levels, and character liveness of the virtual characters.
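Claims 12 and 13 narrow the recipients further: among a plurality of second virtual characters, the state frame is sent only to those whose class under a game character division dimension (again game habits, level or liveness) matches that of the first virtual character. A minimal sketch with a hypothetical division_class mapping:

```python
from typing import Dict, List

def target_second_characters(first_character: str,
                             second_characters: List[str],
                             division_class: Dict[str, str]) -> List[str]:
    """division_class maps each character to its class under the chosen division
    dimension (e.g. a level bracket); keep only second characters in the same class."""
    first_class = division_class[first_character]
    return [c for c in second_characters if division_class.get(c) == first_class]

if __name__ == "__main__":
    classes = {"hero_1": "level_10_20", "hero_2": "level_10_20", "hero_3": "level_30_40"}
    print(target_second_characters("hero_1", ["hero_2", "hero_3"], classes))  # ['hero_2']
```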
14. A data processing method of virtual characters in a game is characterized in that the same game scene of the game comprises a first virtual character controlled by a first client and a second virtual character controlled by a second client, and a graphical user interface is provided by the second client; the method comprises the following steps:
receiving a state frame sent by a server; the state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client;
and displaying a game picture corresponding to the game state data in the graphical user interface.
15. The method of claim 14, wherein the displaying, in the graphical user interface, a game picture corresponding to the game state data comprises:
and displaying a game picture corresponding to the game state data in the graphical user interface in an overlaying manner.
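Claims 14 and 15 describe the second client's side: receiving the state frame pushed by the server, decoding the game state data, and displaying (in an overlaying manner) the corresponding game picture in its graphical user interface. A skeletal sketch; render_overlay is a hypothetical placeholder for whatever drawing the game engine actually performs:

```python
import json

def on_state_frame_received(raw_frame: bytes) -> None:
    """Callback on the second client: decode the state frame and update the GUI."""
    frame = json.loads(raw_frame.decode("utf-8"))
    state = frame["state"]
    render_overlay(state)  # draw the game picture for this state on top of the current scene

def render_overlay(state: dict) -> None:
    # Placeholder for engine-specific drawing; here we only log what would be drawn.
    print(f"drawing character {state['character_id']} at {state['position']}")

if __name__ == "__main__":
    on_state_frame_received(
        b'{"frame_index": 42, "state": {"character_id": "hero_1", '
        b'"position": {"x": 10.0, "y": 3.5}}}')
```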
16. A data processing device for virtual characters in a game is characterized in that a game scene of the game comprises a first virtual character controlled by a first client, and the device comprises:
the determining module is used for determining game state data generated in the process that the first virtual character executes the control instruction based on the control instruction of the first client to the first virtual character;
the encapsulation module is used for encapsulating the game state data into a state frame;
and the sending module is used for sending the state frame to a server, so that the server sends the state frame to a second client and a game picture corresponding to the game state data is displayed on the second client.
17. A data processing device of virtual characters in a game is characterized by being applied to a server side, wherein a game scene of the game comprises a first virtual character controlled by a first client side; the device comprises:
an obtaining module, configured to obtain a status frame sent by the first client; the state frame is obtained by encapsulating game state data by the first client, wherein the game state data is generated in the process that the first virtual character executes a control instruction of the first client;
the determining module is used for determining a second virtual character which is in the same game scene with the first virtual character;
and the sending module is used for sending the state frame to a second client corresponding to the second virtual character so as to display a game picture corresponding to the game state data on the second client.
18. A data processing device of virtual characters in a game is characterized in that the same game scene of the game comprises a first virtual character controlled by a first client and a second virtual character controlled by a second client, and a graphical user interface is provided through the second client; the device comprises:
the receiving module is used for receiving the state frame sent by the server; the state frame is obtained by encapsulating game state data by the first client, and the game state data is generated in the process that the first virtual character executes the control instruction of the first client;
and the display module is used for displaying a game picture corresponding to the game state data in the graphical user interface.
19. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and wherein the processor implements the steps of the method for processing data of a virtual character in a game according to any one of claims 1 to 15 when executing the computer program.
20. A computer-readable storage medium storing computer-executable instructions which, when invoked and executed by a processor, cause the processor to execute the data processing method of an in-game virtual character according to any one of claims 1 to 15.
CN202210480270.1A 2022-05-05 2022-05-05 Data processing method and device for virtual character in game and electronic equipment Pending CN114768260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210480270.1A CN114768260A (en) 2022-05-05 2022-05-05 Data processing method and device for virtual character in game and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210480270.1A CN114768260A (en) 2022-05-05 2022-05-05 Data processing method and device for virtual character in game and electronic equipment

Publications (1)

Publication Number Publication Date
CN114768260A true CN114768260A (en) 2022-07-22

Family

ID=82434468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210480270.1A Pending CN114768260A (en) 2022-05-05 2022-05-05 Data processing method and device for virtual character in game and electronic equipment

Country Status (1)

Country Link
CN (1) CN114768260A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination