WO2021143255A1 - Data processing method, device, computer equipment, and readable storage medium - Google Patents

Data processing method, device, computer equipment, and readable storage medium

Info

Publication number
WO2021143255A1
WO2021143255A1 (PCT/CN2020/123627)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
variable
type variable
data
type
Prior art date
Application number
PCT/CN2020/123627
Other languages
English (en)
French (fr)
Inventor
何欢
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to KR1020227007045A (published as KR20220041906A)
Priority to JP2022519177A (published as JP7465959B2)
Priority to EP20914236.3A (published as EP3991817A4)
Publication of WO2021143255A1
Priority to US17/671,968 (published as US12017146B2)

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A63F13/358 Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A63F13/352 Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting

Definitions

  • This application relates to the field of computing technology, and in particular to a data processing method, device, computer equipment, and readable storage medium.
  • In the related art, the main game logic runs on the server, including the movement position of each non-player character (NPC) and the position and status of each player. These data are selected and filtered, and sent to each game client through "network replication"; the game client receives the corresponding data and displays it on the client screen by rendering.
  • The server uses the method of nine-square-grid division to divide the relevant surrounding players, and only the changes of data near the player are sent to the player's client; for example, in most FPS games, this is the related data within a range of about 200 meters from the player (a grid-based relevancy sketch is given below).
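  • The following is a minimal, illustrative sketch (not from the application; the grid size, 2D coordinates, and names are assumptions) of such nine-square-grid relevancy filtering: each player is mapped to a grid cell, and replication candidates are the players in that cell and its eight neighbouring cells.

```python
from collections import defaultdict

def build_grid(players, cell_size=200.0):
    """players: {player_id: (x, y)}; returns {(cx, cy): [player_id, ...]}."""
    grid = defaultdict(list)
    for pid, (x, y) in players.items():
        grid[(int(x // cell_size), int(y // cell_size))].append(pid)
    return grid

def nearby_players(player_id, players, grid, cell_size=200.0):
    """Players in the nine cells around player_id: the candidates whose data changes are sent."""
    x, y = players[player_id]
    cx, cy = int(x // cell_size), int(y // cell_size)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for pid in grid.get((cx + dx, cy + dy), []):
                if pid != player_id:
                    result.append(pid)
    return result
```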
  • the embodiments of the present application provide a data processing method, device, computer equipment, and computer-readable storage medium, which can improve the efficiency of data processing.
  • The embodiments of the present application provide a data processing method, which is executed by a computer device and includes: obtaining an associated virtual object associated with a main virtual object, and obtaining the variables belonging to a first update frequency type among the role variables of the associated virtual object as first type variables; compressing the first type variables to obtain compressed data; encapsulating the compressed data to obtain first change data of the main virtual object; and sending the first change data of the main virtual object to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the first change data.
  • One aspect of the embodiments of the present application provides a data processing device, and the device includes:
  • the first obtaining module is configured to obtain the associated virtual object associated with the main virtual object, and obtain the variable belonging to the first update frequency type among the role variables of the associated virtual object as the first type variable;
  • the compression module is used to compress the above-mentioned first type variables to obtain compressed data
  • the first encapsulation module is configured to encapsulate the compressed data to obtain the first change data of the main virtual object
  • the first sending module is configured to send the first change data of the main virtual object to the target client where the main virtual object is located, so that the target client can update and display the frame image based on the first change data.
  • One aspect of the embodiments of the present application provides a computer device, including a processor, a memory, and an input and output interface;
  • The above-mentioned processor is respectively connected to the above-mentioned memory and the above-mentioned input and output interface, wherein the input and output interface is used for inputting and outputting data, the memory is used for storing program code, and the processor is used for calling the program code to execute the data processing method in the embodiments of the present application.
  • the embodiments of the present application provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • The computer program includes program instructions which, when executed by a processor, perform the data processing method in the embodiments of the present application.
  • The accompanying drawings used in the embodiments are briefly described as follows:
  • Figure 1 is a data processing architecture diagram provided by an embodiment of the present application.
  • Fig. 2 is a flowchart of a data processing method provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a data processing process of a first type variable provided by an embodiment of the present application
  • FIG. 4 is a data processing process of multiple virtual objects provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a specific flow of data processing provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a display scene of an associated virtual object provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a first type variable determination scenario provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of data processing for updating based on historical frames according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a second-type variable trigger scenario provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a variable division scenario provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a data processing device provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • FIG. 1 is a data processing architecture diagram provided by an embodiment of the present application.
  • The data processing system includes a computer device 101 and a plurality of electronic devices, such as electronic device 102a, electronic device 102b, ... and electronic device 102c; the computer device 101 and each electronic device can communicate with each other.
  • The computer device 101 obtains each connected electronic device, generates change data corresponding to each electronic device, and sends the generated change data to the corresponding electronic device, so that the electronic device can update and display the frame image based on the corresponding change data.
  • For example, the computer device 101 detects that the electronic device 102a is online, and obtains the associated virtual objects associated with the main virtual object logged in on the electronic device 102a, including the player characters and non-player characters associated with the main virtual object.
  • the associated virtual object can be regarded as an object displayed in the electronic device 102a where the main virtual object is located.
  • The first type variables of each associated virtual object are obtained, such as the position information, view direction information, speed information, acceleration information, path information, and Ragdoll information of the corresponding associated virtual object, where the Ragdoll information is used to construct the role model of the associated virtual object, and the first type variable is a variable belonging to the first update frequency type among the role variables of the associated virtual object.
  • the first update frequency type can be considered as the frequency that changes every frame.
  • These variables change relatively frequently, so the first type variable needs to be updated each time a frame is updated. Therefore, when the computer device performs data processing, it needs to obtain the first type variables of the associated virtual objects so that the data in the display page of the target client stays consistent with the data in the computer device, realizing network replication of the first type variables.
  • the above-mentioned first type of variables can be compressed separately to obtain compressed data, including compressed data of position information, compressed data of view direction information, compressed data of path information, etc.
  • The second update frequency type means that the update frequency is low and the variable does not change every frame. Therefore, the second type variables do not need to be processed in every frame, which further reduces the amount of data processed by the computer device and the amount of data transmitted to the client, and improves the update efficiency.
  • After receiving the first change data, the electronic device 102a restores the first change data to obtain the position information, view direction information, speed information, acceleration information, path information, and Ragdoll information of the associated virtual object, that is, the first type variables; based on these first type variables, the current frame image is rendered. It can be considered that the state of the virtual object displayed in the current frame image is determined by each first type variable.
  • the process of the computer device 101 sending the change data to the electronic device 102b,... Or the electronic device 102c is the same as the process of the computer device 101 sending the change data to the electronic device 102a.
  • the computer device 101 may be a server or a client, or a system composed of a server and a client.
  • The client may be an electronic device. The client and the aforementioned electronic devices (electronic device 102a, electronic device 102b, ... and electronic device 102c) include, but are not limited to, mobile phones, tablet computers, desktop computers, notebook computers, handheld computers, mobile internet devices (MID), and wearable devices (such as smart watches and smart bracelets).
  • the method provided in the embodiments of the present application may be executed by a computer device, which includes but is not limited to a terminal or a server.
  • the execution subject for data processing may be a computer device.
  • When the embodiments of the present application are applied to a game scene, the computer device includes but is not limited to a dedicated server (Dedicated Server, DS), where the DS is a server that runs the game logic, does not run tasks related to image rendering and sound, and sends the results of data processing to the clients.
  • In the embodiments of the present application, the role variables of the associated virtual objects are classified based on update frequency: the first type variables belonging to the high frequency (the first update frequency type) are updated and compressed during each frame update, while the second type variables belonging to the low frequency (the second update frequency type) do not need to be updated every frame. This reduces the overhead of network replication on the computer device and improves the frame update efficiency of the game scene, shortening the update time of one frame, making the switching between frames smoother and improving the user experience.
  • Because the embodiments of the present application can reduce the data processing pressure of the computer device (such as a DS), technicians can implement richer game scenes on computer devices of the same performance. For example, suppose that when a game scene with 100 players is replicated over the network, the DS can only replicate 2 frames per second, which makes the switching between frames extremely slow and the game experience extremely poor. The embodiments of the present application can improve network replication performance so that dozens of frames per second, such as 40 frames, can be replicated. Switching dozens of frames per second is an acceptable switching frequency for the human eye, and the player observes the frame images switching continuously without freezing, which enhances the game experience.
  • FIG. 2 is a flowchart of a data processing method provided by an embodiment of the present application. As shown in Fig. 2, the above-mentioned computer equipment is used as the execution subject for description.
  • the data processing process includes the following steps:
  • Step S201 Obtain an associated virtual object associated with the main virtual object, and acquire a first type variable among the role variables of the associated virtual object.
  • the associated virtual object associated with the main virtual object is acquired, and the variable belonging to the first update frequency type among the role variables of the associated virtual object is acquired as the first type variable.
  • it may include at least one main virtual object, and each main virtual object may be associated with at least one associated virtual object.
  • When the computer device updates the frame image of the application scene, it acquires the main virtual object associated with the application scene; the main virtual object is a virtual object whose attribute category is the user role attribute in the application scene. After the computer device obtains the main virtual object, it obtains the associated virtual object associated with the main virtual object, and the associated virtual object associated with the main virtual object may not be unique.
  • Each associated virtual object includes role variables, which constitute the display state of the corresponding associated virtual object in the above application scene and are used to display the associated virtual object in the above application scene. Taking one associated virtual object as an example, a variable belonging to the first update frequency type among the role variables of the associated virtual object is acquired as the first type variable of the associated virtual object.
  • the first update frequency type may indicate an update frequency type that changes every frame, and may also indicate an update frequency type that has an update frequency greater than or equal to a first update frequency threshold.
  • FIG. 3 is a schematic diagram of a data processing process of a first type variable provided in an embodiment of the present application.
  • the computer device obtains the main virtual object in the application scenario, obtains the associated virtual object associated with the main virtual object, and obtains the first type variable among the role variables of the associated virtual object.
  • the application scene is a game scene.
  • the attribute category of each virtual object is a user role attribute or a system role attribute.
  • a virtual object whose attribute category is a user character attribute is a player character
  • a virtual object whose attribute category is a system character attribute is a non-player character (Non-Player-Character, NPC).
  • the main virtual object 302 corresponds to a client, and the client is used to display the application scene 301.
  • The frame image corresponding to the main virtual object 302 is rendered from the perspective (the main vision) of the main virtual object 302.
  • the displayed frame images may not be completely the same, because the visual ranges of different main virtual objects are different.
  • an associated virtual object set 303 associated with the main virtual object 302 is acquired.
  • The associated virtual object set 303 includes at least one associated virtual object, such as associated virtual object 3031, associated virtual object 3032, associated virtual object 3033, associated virtual object 3034, ... and associated virtual object 303m (m is a positive integer, and m is the number of associated virtual objects associated with the main virtual object).
  • each associated virtual object included in the associated virtual object set 303 belongs to virtual objects other than the main virtual object 302 in the application scenario.
  • a variable belonging to the first update frequency type among the role variables 304 of the associated virtual object 3031 is acquired as the first type variable 305, and the first type variable 305 is the role variable 304 of the associated virtual object 3031 Part of it.
  • The first type variable 305 of the associated virtual object 3031 includes at least one variable, such as variable 3051, variable 3052, variable 3053, variable 3054, ... and variable 305n (n is a positive integer, and n is the number of variables included in the first type variable 305).
  • Step S202 Perform compression processing on the first type variable to obtain compressed data, and encapsulate the compressed data into the first change data of the main virtual object.
  • the first type variable is compressed to obtain compressed data, and the compressed data is encapsulated into the first change data of the main virtual object.
  • Compressing the first type variables includes, but is not limited to, encoding the first type variables or reducing the numerical type of the first type variables, so as to reduce the storage space occupied by the first type variables and the amount of data that needs to be transmitted to the client.
  • The first type variable 305 is compressed; specifically, each variable contained in the first type variable 305 is compressed to obtain compressed data 3081. The compression processing methods for different variables can be the same or different, which is not restricted here.
  • The variable 3051 in the first type variable 305 is compressed to obtain compressed data 3061 corresponding to the variable 3051; the variable 3052 is compressed to obtain compressed data 3062 corresponding to the variable 3052; ...; and the variable 305n is compressed to obtain compressed data 306n corresponding to the variable 305n.
  • the compressed data 3061, the compressed data 3062,... and the compressed data 306n constitute the compressed data 3081 corresponding to the first type variable 305.
  • Similarly, the first type variable of the associated virtual object 3032 is compressed to obtain compressed data 3082; the first type variable of the associated virtual object 3033 is compressed to obtain compressed data 3083; ...; and the first type variable of the associated virtual object 303m is compressed to obtain compressed data 308m.
  • The compressed data 3081 corresponding to the associated virtual object 3031, the compressed data 3082 corresponding to the associated virtual object 3032, ... and the compressed data 308m corresponding to the associated virtual object 303m are encapsulated to obtain the first change data, as sketched below.
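  • As a rough illustration of step S202 (the names and data layout are assumptions, and compress_fn stands in for the per-variable compression methods described later), the per-variable compression and encapsulation could look like the following sketch.

```python
def build_first_change_data(associated_objects, compress_fn):
    """associated_objects: {object_id: {variable_name: value}} holding the first type variables."""
    change_data = {}
    for obj_id, first_type_vars in associated_objects.items():
        # compress each first type variable individually (compressed data 3061 ... 306n)
        compressed = {name: compress_fn(name, value) for name, value in first_type_vars.items()}
        change_data[obj_id] = compressed          # compressed data 3081 ... 308m
    return change_data                            # the first change data for one main virtual object
```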
  • Step S203 Send the first change data of the main virtual object to the target client where the main virtual object is located.
  • the first change data of the main virtual object is sent to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the first change data.
  • the first change data is data used to render the associated virtual object when the target client displays in this frame.
  • After the computer device generates the first change data, it sends the first change data to the target client 307 where the main virtual object is located, so that the target client 307 can display the frame image based on the first change data.
  • When the application scene includes multiple main virtual objects, taking the data processing process of the main virtual object 302 as an example, the first change data of the other main virtual objects in the application scene is obtained in the same way, and the obtained first change data is sent to the target client where the corresponding main virtual object is located, so that all the clients associated with the application scene can update and display the frame image based on the received first change data.
  • FIG. 4 is a data processing process of multiple virtual objects provided by an embodiment of the present application.
  • The above application scene includes multiple main virtual objects, and each main virtual object is associated with one or more associated virtual objects.
  • The computer device 401 obtains the main virtual object 4021, the main virtual object 4022, and the main virtual object 4023 from the application scene; obtains the associated virtual object 4031 and the associated virtual object 4032 associated with the main virtual object 4021; obtains the first type variable 4041 of the associated virtual object 4031 and compresses it to obtain compressed data 4051; obtains the first type variable 4042 of the associated virtual object 4032 and compresses it to obtain compressed data 4052; encapsulates the compressed data 4051 and the compressed data 4052 into first change data 4061; and sends the first change data 4061 to the target client 4071 where the main virtual object 4021 is located.
  • Similarly, the computer device obtains the associated virtual object 4033 associated with the main virtual object 4022, obtains the first type variable 4043 of the associated virtual object 4033, compresses it to obtain compressed data 4053, encapsulates the compressed data 4053 into first change data 4062, and sends the first change data 4062 to the target client 4072 where the main virtual object 4022 is located. The computer device also obtains the associated virtual object 4034 associated with the main virtual object 4023, obtains the first type variable 4044 of the associated virtual object 4034, compresses it to obtain compressed data 4054, encapsulates the compressed data 4054 into first change data 4063, and sends the first change data 4063 to the target client 4073 where the main virtual object 4023 is located.
  • the embodiment of the application realizes that the computer device obtains the associated virtual object associated with the main virtual object, and obtains the first type variable of the associated virtual object, compresses the first type variable, and obtains the compressed data.
  • the compressed data is encapsulated into the first change data, and the first change data is sent to the target client where the main virtual object is located, thereby reducing the amount of data transmission during data transmission between the computer device and each target client.
  • The role variables of each associated virtual object are classified, and the variables belonging to the first update frequency type (that is, the first type variables) are compressed. The classification of the role variables reduces the amount of data that the computer device needs to process, and the compression of the first type variables reduces the amount of data sent by the computer device to the client, which reduces the network data processing pressure on the computer device and the time consumed by data processing, thereby improving the efficiency of data processing.
  • the network replication pressure of computer equipment can be reduced.
  • the computer equipment includes but is not limited to DS.
  • Network replication (data replication) refers to the process in which the computer device sends game data to the client so that the game data of the client and the computer device remain consistent.
  • FIG. 5 is a schematic diagram of a specific data processing flow provided by an embodiment of the present application. As shown in Figure 5, it includes the following steps:
  • Step S501 Obtain an associated virtual object associated with the main virtual object.
  • The associated virtual object associated with the main virtual object can be determined by the visual range of the main virtual object, or by the attribute category of the main virtual object, or by combining the visual range and the attribute category of the main virtual object.
  • Specifically, the distance information between the main virtual object and at least two original virtual objects is obtained from the distance relationship list; candidate virtual objects whose distance information is greater than the first distance threshold are obtained from the at least two original virtual objects, and the viewing-distance included angle information between each candidate virtual object and the view direction information of the main virtual object is determined. The original virtual object whose distance information is less than or equal to the first distance threshold, or the candidate virtual object whose viewing-distance included angle information is less than the visualization angle threshold, is determined as an associated virtual object associated with the main virtual object; alternatively, both the original virtual object whose distance information is less than or equal to the first distance threshold and the candidate virtual object whose viewing-distance included angle information is less than the visualization angle threshold are determined as associated virtual objects associated with the main virtual object.
  • the association between each virtual object in the application scene and the main virtual object is determined based on distance information and special logic.
  • The special logic can be, for example, "virtual objects within a very small angle directly in front of the main virtual object (the visual range)".
  • The original virtual object whose distance information from the main virtual object is less than or equal to the first distance threshold is a virtual object displayed on the display page of the target client where the main virtual object is located; the candidate virtual object whose viewing-distance included angle information is less than the visualization angle threshold is a virtual object that is displayed on the display page of the target client where the main virtual object is located when the main virtual object uses a telescopic prop. The candidate virtual object can also be required to be an original virtual object whose distance information from the main virtual object is less than or equal to the second distance threshold.
  • In other words, the associated virtual objects associated with the player are determined from the player's visual range both when the far-vision props are not used and when they are used.
  • the above-mentioned viewing distance included angle information is determined according to the position information of the virtual object to be selected in the application scene and the viewing direction information of the main virtual object.
  • the view direction information of the main virtual object can be a unit vector, or a unit longitude and latitude, etc.
  • The representation of the view direction information of the main virtual object is determined by the implementation of the application scene, and is not restricted here. For example, when the application scene is created with reference to the origin (0, 0, 0), the view direction information of the main virtual object can be (x1, y1, z1), where x1, y1, and z1 are values between 0 and 1 that indicate the orientation of the main virtual object, that is, its view direction. The position information (x2, y2, z2) of the candidate virtual object in the application scene and the position information (x3, y3, z3) of the main virtual object in the application scene are obtained; the direction vector between the candidate virtual object and the main virtual object is obtained from these two position informations; and, according to the direction vector and the view direction information of the main virtual object, the viewing-distance included angle information between the candidate virtual object and the main virtual object is determined.
  • The above is one possible way of determining the viewing-distance included angle information (a minimal sketch is given below); other ways of obtaining the included angle between the candidate virtual object and the view direction of the main virtual object are not limited here.
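  • The following sketch illustrates this included-angle test under the assumptions above (the helper names are invented for illustration, and the view direction is assumed to be a unit vector as in the example): the direction vector from the main virtual object to the candidate object is compared with the view direction, and the candidate is kept when the angle is below the visualization angle threshold.

```python
import math

def view_angle_deg(main_pos, main_view_dir, candidate_pos):
    # direction vector from the main virtual object to the candidate virtual object
    dx = candidate_pos[0] - main_pos[0]
    dy = candidate_pos[1] - main_pos[1]
    dz = candidate_pos[2] - main_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0.0:
        return 0.0
    # main_view_dir is assumed to be a unit vector (x1, y1, z1)
    cos_angle = (dx * main_view_dir[0] + dy * main_view_dir[1] + dz * main_view_dir[2]) / dist
    cos_angle = max(-1.0, min(1.0, cos_angle))
    return math.degrees(math.acos(cos_angle))

def is_visible_through_scope(main_pos, main_view_dir, candidate_pos, angle_threshold_deg):
    return view_angle_deg(main_pos, main_view_dir, candidate_pos) < angle_threshold_deg
```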
  • All virtual objects within the player's visualization range can be considered associated virtual objects associated with the player, that is, the original virtual objects whose distance information from the player is less than or equal to the first distance threshold. When the player observes with a telescope such as a sniper scope, the player's visual range increases, and the virtual objects within the visual range after the player uses the telescope can also be considered associated with the player.
  • the second distance threshold is the farthest distance that the player can observe when using the far-sightedness props
  • the visual angle threshold is the maximum viewing angle that the player can observe when the far-sightedness props are used.
  • FIG. 6 is a schematic diagram of a display scene of an associated virtual object provided in an embodiment of the present application.
  • When the far-vision prop is not used, the frame image of the main virtual object 602 is displayed on the display page 601 of the target client, in which the received associated virtual objects are displayed; the associated virtual objects are virtual objects other than the main virtual object 602 displayed on the display page 601.
  • the size of each associated virtual object displayed on the display page 601 can be adjusted according to the distance information between each associated virtual object and the main virtual object 602.
  • When the main virtual object 602 uses the telescopic prop 604, the candidate virtual objects whose distance information is greater than the first distance threshold 603 and less than or equal to the second distance threshold 605 are obtained from the at least two original virtual objects, the viewing-distance included angle information between each candidate virtual object and the view direction information of the main virtual object is determined, and the candidate virtual object whose viewing-distance included angle information is less than the visualization angle threshold 606 is determined as an associated virtual object associated with the main virtual object 602. At this time, the display page 601 of the target client displays the associated virtual objects whose distance information is less than or equal to the first distance threshold 603, and also displays the associated virtual objects whose viewing-distance included angle information is less than the visualization angle threshold 606.
  • The above-mentioned first distance threshold 603 and second distance threshold 605 refer to actual distance values in the application scene, rather than the page display distance shown in the display page 601; that is, the first distance threshold 603 and the second distance threshold 605 can be considered distances between position informations in the application scene.
  • the associated virtual object is determined by the attribute category of the main virtual object
  • the attribute category of at least two original virtual objects in the application scenario is obtained, the original virtual object whose attribute category is the user role attribute is determined as the user virtual object, and The original virtual object whose attribute category is the attribute of the system role is determined as the system virtual object
  • the group label of the user virtual object is obtained, and the user virtual object whose group label is the same as the group label of the main virtual object is determined as the associated user virtual object
  • the associated user virtual object and the system virtual object are determined as the associated virtual object associated with the main virtual object. For example, in a game scene, for the player (the main virtual object), it may be necessary to know the location information of the virtual object that belongs to the same group as the player.
  • The group tag of the player can be obtained; the user virtual objects whose attribute category is the user role attribute are obtained from the original virtual objects, and the associated user virtual objects whose group tag is the same as that of the main virtual object are obtained from the user virtual objects. The system virtual object is a non-player character (NPC) other than the above-mentioned players in the game scene. When multiple user virtual objects team up to play the game scene, or form a team after entering the game scene, the same group tag is added to the multiple user virtual objects in the team to indicate that these user virtual objects belong to the same group.
  • When the associated virtual object is determined by the attribute category of the main virtual object, the virtual objects belonging to the same group as the main virtual object can be displayed on the display page of the target client, so that the main virtual object can view in real time the position information and other information of the virtual objects in the same group; that is, the situation of one's teammates can be obtained in real time, which improves the interactivity between virtual objects in the same group.
  • The visual range and the attribute category of the main virtual object can also be combined to determine the associated virtual objects associated with the main virtual object: the original virtual object whose distance information is less than or equal to the first distance threshold, the candidate virtual object whose viewing-distance included angle information is less than the visualization angle threshold, and the associated user virtual object are all determined as associated virtual objects associated with the main virtual object. Here, the associated user virtual object is determined based on the group tag; specifically, the original virtual object whose attribute category is the user role attribute is determined as a user virtual object, and the user virtual object whose group tag is the same as the group tag of the main virtual object is determined as the associated user virtual object.
  • The distance relationship list includes the distance information between each system virtual object and each user virtual object. The distance range to which the distance information between each original virtual object and the main virtual object belongs is obtained, and the update cache time of each original virtual object is obtained; the list update time threshold corresponding to the distance range is obtained, and the original virtual object, among the at least two original virtual objects, whose difference between the update cache time and the third system network time is greater than or equal to the list update time threshold is determined as a target virtual object; the distance information between the target virtual object and the main virtual object in the distance relationship list is then updated. The list update time threshold is set so that the update frequency is high when the distance information is small and low when the distance information is large, which reduces the update frequency of the distance relationship list and the amount of update data without affecting the display of the frame image of the main virtual object (see the sketch below).
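  • The following sketch illustrates, under assumed thresholds, field names, and a caller-supplied distance function, how such a range-dependent refresh of the distance relationship list could be organized.

```python
def list_update_threshold(distance):
    # assumed mapping from distance range to list update time threshold (seconds)
    if distance <= 50.0:
        return 0.1
    if distance <= 200.0:
        return 0.5
    return 2.0

def refresh_distance_list(distance_list, update_cache_time, now, compute_distance):
    """distance_list: {object_id: distance}; update_cache_time: {object_id: last refresh time}."""
    targets = [obj_id for obj_id, dist in distance_list.items()
               if now - update_cache_time.get(obj_id, 0.0) >= list_update_threshold(dist)]
    for obj_id in targets:                       # the target virtual objects
        distance_list[obj_id] = compute_distance(obj_id)
        update_cache_time[obj_id] = now
    return targets
```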
  • Step S502 Obtain the first type variable among the role variables of the associated virtual object.
  • a variable belonging to the first update frequency type among the role variables of the associated virtual object is acquired as the first type variable.
  • the variable of the first update frequency type can be considered as a variable that changes every frame in the application scenario.
  • Specifically, a class object set corresponding to the main virtual object is obtained, and the class object set includes an object variable subset corresponding to the associated virtual object; the object variable subset includes the variables belonging to the first update frequency type among the role variables of the associated virtual object, and the first type variable of the associated virtual object is obtained from the object variable subset.
  • The first type variables can include position information, view direction information, speed information, acceleration information, path information, and Ragdoll information. Because the acceleration information and the view direction information can be calculated by the client from the received change data, it can be considered that the computer device does not need to process the acceleration information and the view direction information.
  • the Ragdoll information is the bone information of the associated virtual object, which is used to render the character form of the corresponding associated virtual object, that is, it can be used to render the associated virtual object displayed on the display page.
  • When the first type variable includes historical path information, the movement trajectory of the associated virtual object within the target time range is obtained, the movement position points of the associated virtual object are determined based on the movement trajectory, and the movement position points are determined as the path information of the associated virtual object; the historical path information in the first type variable is then updated to this path information.
  • The target time range refers to the interval at which the path information in the first type variable is updated. The path update time of the path information is recorded, and the path information of the associated virtual object is predicted again after the target time range has elapsed since the path update time. For example, assuming that the target time range is 2 s and the recorded path update time is 6:35:05, the next time the path information is updated is 6:35:07. The path update time can be accurate to milliseconds, and every time the path information is updated, the path update time is updated synchronously (see the sketch below).
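  • A minimal sketch of this timing check is shown below; the field names are assumptions, the 2 s target time range is taken from the example, and predict_path stands in for whatever recomputes the movement position points.

```python
import time

TARGET_TIME_RANGE = 2.0  # seconds between path information updates (from the example)

def maybe_update_path(role_vars, predict_path, now=None):
    """role_vars holds 'path_info' and 'path_update_time'; predict_path() returns new position points."""
    now = time.time() if now is None else now
    if now - role_vars.get("path_update_time", 0.0) >= TARGET_TIME_RANGE:
        role_vars["path_info"] = predict_path()   # the movement position points
        role_vars["path_update_time"] = now       # path update time refreshed synchronously
        return True
    return False
```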
  • FIG. 7 is a schematic diagram of a first type variable determination scenario provided by an embodiment of the present application.
  • the class object set 701 corresponding to the main virtual object is obtained.
  • The class object set 701 includes at least two object variable subsets, such as the object variable subset 7011, the object variable subset 7012, ... and the object variable subset 7013; each object variable subset includes the variables belonging to the first update frequency type among the role variables of the corresponding associated virtual object, that is, the first type variable corresponding to that associated virtual object.
  • The object variable subset 7011 includes the first type variable of associated virtual object 1;
  • the object variable subset 7012 includes the first type variable of associated virtual object 2;
  • the object variable subset 7013 includes the first type variable of associated virtual object 3.
  • the first type variable corresponding to the associated virtual object can be obtained from the subset of each object variable.
  • the role variable 703 of the associated virtual object 1 includes a first type variable 7031, a second type variable 7032, other data 7033, and a full update time 7034.
  • other virtual objects also include variables of the first type, variables of the second type, other data, and full update time, etc.
  • For their update process, refer to the update process of the associated virtual object 1. Therefore, after determining the associated virtual objects associated with the main virtual object, the role variable of each associated virtual object can be obtained, and the first type variable can be obtained from the role variable. Taking FIG. 7 as an example, after the associated virtual object 1 is obtained, the role variable 703 of the associated virtual object 1 is acquired, and the first type variable 7031 in the role variable 703 is acquired.
  • This class object set can be considered a Holder object set.
  • A Holder is a class that transfers variables by copying the original value and provides a mutable wrapper for immutable object references.
  • Each class object set corresponds to one main virtual object and is used to process the first type variables corresponding to that main virtual object.
  • After the computer device obtains the main virtual objects in the application scene, it creates a class object set for each main virtual object, obtains the associated virtual objects associated with each main virtual object, and adds the first type variables of those associated virtual objects to the class object set corresponding to the main virtual object.
  • Step S503 Perform compression processing on the first type variable to obtain compressed data, and encapsulate the compressed data into the first change data of the main virtual object.
  • the first type variable is compressed to obtain compressed data
  • the compressed data is encapsulated to obtain the first change data of the main virtual object.
  • When the first type variable includes position information, the position information and the position accuracy of the associated virtual object are acquired, the position information is converted into integer position data based on the position accuracy, and the integer position data is determined as the compressed data.
  • The conversion to integer position data is lossy.
  • Variables of the first type whose data type is floating-point data can also be compressed by the same method as the position information; since floating-point data occupies more memory than integer data, this reduces the memory occupied by the first type variables.
  • For example, the position information is data represented as (X, Y, Z), the position accuracy is 0.2 cm, 22 bits are used to store the position information X and Y, and 20 bits are used to store the position information Z; the memory sizes of X, Y, and Z can be determined according to the specific implementation of the application scene, and the position accuracy can also take other values.
  • For example, if X in the position information of an associated virtual object is 1001.252, X is compressed based on the position accuracy: the compressed X is 1001.2, and 1001.2 is converted to the value of X in the integer position data, which is 5006 (that is, 1001.2 / 0.2); see the sketch below.
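  • The quantization in this example can be sketched as follows; the 0.2 accuracy is taken from the example above, and the function names are assumptions.

```python
POSITION_ACCURACY = 0.2  # position accuracy from the example; other values are possible

def quantize_position(position, accuracy=POSITION_ACCURACY):
    """position: (X, Y, Z) floats -> lossy integer position data in accuracy units."""
    return tuple(int(round(coord / accuracy)) for coord in position)

def dequantize_position(int_position, accuracy=POSITION_ACCURACY):
    """Restore approximate floating-point coordinates on the client side."""
    return tuple(value * accuracy for value in int_position)

# quantize_position((1001.252, 0.0, 0.0))[0] == 5006, matching the worked example.
```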
  • Alternatively, the first type variable is obtained in the frame to be updated, and the historical cache variable corresponding to the first type variable is obtained in the historical frame of the main virtual object, where the historical frame is the frame preceding the frame to be updated; the first difference variable between the historical cache variable and the first type variable is obtained, and the first difference variable is encoded to obtain the compressed data.
  • The encoding can be any encoding method that compresses or simplifies data, such as Huffman coding. Variables of the first type whose data changes gradually between adjacent frames can be compressed in this way; for example, the data in the second frame changes based on the data in the first frame, and the data in the third frame changes based on the data in the second frame.
  • For example, for the position information, the historical cache variable of the position information can be obtained, the first difference variable between the historical cache variable and the position information can be obtained, and the first difference variable can be encoded to obtain the compressed data. If the historical cache variable of the position information is (100, 201, 5) and the position information is (101, 201.2, 5), the first difference variable between the historical cache variable and the position information is (1, 0.2, 0), and the first difference variable (1, 0.2, 0) is encoded to obtain the compressed data (see the sketch below).
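  • The per-frame delta replication described above can be sketched as follows; all names are assumptions, and zlib is used here only as a stand-in for the Huffman-style entropy coding mentioned in the text.

```python
import json
import zlib

def first_difference(historical, current):
    """Difference between the historical cache variable and the current first type variable."""
    return tuple(round(c - h, 6) for h, c in zip(historical, current))

def encode_difference(diff):
    return zlib.compress(json.dumps(diff).encode("utf-8"))

def decode_difference(payload):
    return tuple(json.loads(zlib.decompress(payload).decode("utf-8")))

# Example from the text: (100, 201, 5) -> (101, 201.2, 5) gives a difference of (1, 0.2, 0).
diff = first_difference((100, 201, 5), (101, 201.2, 5))
restored = decode_difference(encode_difference(diff))
```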
  • FIG. 8 is a schematic diagram of data processing for updating based on historical frames according to an embodiment of the present application.
  • the associated virtual object 803a and the associated virtual object 804a are displayed on the display page 801 of the target client where the main virtual object 802 is located.
  • The position information (that is, the historical cache variable) of the associated virtual object 803a is (201, 100, 5), and the position information (that is, the historical cache variable) of the associated virtual object 804a is (10, 21, 5).
  • the position information (204, 97, 5) of the associated virtual object 803b and the position information (12, 25, 5) of the associated virtual object 804b are acquired.
  • the historical frame can be considered as the previous frame of the frame to be updated
  • the position information of the associated virtual object 803a is the historical cache variable of the position information of the associated virtual object 803b
  • the first difference variable obtained for the associated virtual object 803b is (3, -3, 0)
  • the location information of the associated virtual object 804a is the historical cache variable of the location information of the associated virtual object 804b
  • the first difference variable of the associated virtual object 804b is (2, 4, 0)
  • the first difference variable ( 3, -3, 0) and the first difference variable (2, 4, 0) are encoded, and the encoding result is sent to the target client.
  • After the target client receives the encoding result, it restores the first difference variable (3, -3, 0) and the first difference variable (2, 4, 0); based on the first difference variable (3, -3, 0), the associated virtual object 803b is displayed on the display page 801, and based on the first difference variable (2, 4, 0), the associated virtual object 804b is displayed on the display page 801.
  • That is, the target client updates the associated virtual object 803a and the associated virtual object 804a in the display page 801 to obtain the associated virtual object 803b and the associated virtual object 804b; in other words, the historical frame displayed in the display page 801 is changed to the frame to be updated.
  • When the first type variable includes the object identifier of the associated virtual object, the object identifier of the preceding associated virtual object of that associated virtual object is obtained, the second difference variable between the object identifier of the associated virtual object and the object identifier of the preceding associated virtual object is obtained (the preceding associated virtual object and the associated virtual object being processed sequentially), and the second difference variable is encoded to obtain the compressed data. For example, for the object identifiers of the associated virtual objects, the differences between different object identifiers are small, so they can be compressed in this way (see the sketch after the example below).
  • the object identifier of the first associated virtual object is 101
  • the object identifier of the second associated virtual object is 103
  • the object identifier of the third associated virtual object is 107
  • the first associated virtual object can be considered as the predecessor associated virtual object of the second associated virtual object
  • the second associated virtual object is the predecessor associated virtual object of the third associated virtual object.
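  • The object-identifier delta from this example can be sketched as follows; the function names are assumptions, and only the small second difference variables are kept for encoding.

```python
def id_deltas(object_ids):
    """[101, 103, 107] -> [101, 2, 4]: the first identifier is kept, then the differences."""
    deltas = []
    previous = None
    for oid in object_ids:
        deltas.append(oid if previous is None else oid - previous)
        previous = oid
    return deltas

def restore_ids(deltas):
    """Inverse operation performed on the client side."""
    ids, current = [], 0
    for i, d in enumerate(deltas):
        current = d if i == 0 else current + d
        ids.append(current)
    return ids
```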
  • Further, the difference variable of the position information can be converted into integer position data, and the integer position data can be encoded to obtain the compressed data; alternatively, the position information is first converted into integer position data, the difference variable of the position information is obtained from the integer position data, and the difference variable is encoded to obtain the compressed data.
  • the Ragdoll information can be directly used as compressed data.
  • the Ragdoll information is related to location information. Therefore, the Ragdoll information can be regarded as a variable belonging to the first update frequency type.
  • When the associated virtual object includes a plurality of first type variables, after the compressed data of each first type variable is obtained, all the compressed data is encapsulated to obtain the first change data.
  • Step S504 Send the first change data of the main virtual object to the target client where the main virtual object is located.
  • the first change data of the main virtual object is sent to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the first change data.
  • After the target client receives the first change data, the first change data is restored to obtain the first type variables, and the frame image is updated and displayed on the display page according to the first type variables.
  • Step S505 Determine whether the second type variable has changed.
  • If the trigger information for the second type variable of the associated virtual object is received, it is determined that the second type variable has changed, and step S506 is executed. Alternatively, the historical update time of the second type variable of the associated virtual object and the first system network time are obtained, where the second type variable is a variable belonging to the second update frequency type among the role variables; if the difference between the first system network time and the historical update time is greater than or equal to the second type variable update time threshold, it is determined that the second type variable has changed, and step S506 is executed.
  • the trigger information can be considered as operation information that will cause data update of the corresponding second type variable. For example, when the associated virtual object is hit, the blood volume information and displacement animation parameters of the associated virtual object will change.
  • “Hit” can be considered as trigger information for the blood volume information and displacement animation parameters of the associated virtual object. It can be determined that the second type needs to be processed when the trigger information for the second-type variable of the associated virtual object is not received, but the difference between the network time of the first system and the historical update time is greater than or equal to the second-type variable update time threshold Variable, go to step S506; alternatively, it may be determined when the trigger information for the second type variable of the associated virtual object is received, but the difference between the network time of the first system and the historical update time is less than the second type variable update time threshold
  • step S506 is executed; or, when the trigger information for the second type of variable associated with the virtual object is received, and the difference between the network time of the first system and the historical update time is greater than or equal to the second type
  • the variable update time threshold it is determined that the second type of variable needs to be processed, and step S506 is executed.
  • Receiving the trigger information for the second type variable of the associated virtual object, or the difference between the network time of the first system and the historical update time is greater than or equal to the second type variable update time threshold.
  • step S506 is executed.
  • the execution sequence of the judgment process of the two conditions is not limited.
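  • Purely as an illustrative sketch of the check just described, the following C++ function treats the trigger flag and the elapsed-time test as two short-circuiting conditions; the structure and field names are assumptions rather than the patent's data layout.

```cpp
#include <cstdint>

// Decide whether a second type variable needs to be processed this pass:
// either a trigger event arrived, or the periodic update interval elapsed.
// As soon as one condition holds, the other is not evaluated.
struct SecondTypeVarState {
    bool     triggerReceived;      // e.g. the associated object was hit
    uint64_t historyUpdateTimeMs;  // last time this variable was replicated
};

bool NeedsSecondTypeUpdate(const SecondTypeVarState& state,
                           uint64_t firstSystemNetworkTimeMs,
                           uint64_t updateTimeThresholdMs) {
    if (state.triggerReceived) {
        return true;  // trigger information received, skip the time check
    }
    return firstSystemNetworkTimeMs - state.historyUpdateTimeMs
           >= updateTimeThresholdMs;
}
```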
  • Step S506: Obtain the second type variable among the role variables of the associated virtual object, and encapsulate the second type variable into second change data.
  • Specifically, the second type variable, which is a variable belonging to the second update frequency type among the role variables, is obtained and encapsulated into the second change data of the main virtual object. When the difference between the first system network time and the historical update time is greater than or equal to the second type variable update time threshold, the second type variable can be considered to be encapsulated into third change data of the main virtual object.
  • The second type variables can be considered trigger-style variables, such as hit-information changes, attack behavior, and displacement animation parameters of a virtual object. When a second type variable changes, the Actor Replication method can be executed for the associated virtual object corresponding to that second type variable, so as to fully replicate the second type variable.
  • Step S507: Send the second change data of the main virtual object to the target client where the main virtual object is located.
  • Specifically, the second change data is sent to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the second change data. If what is obtained is the third change data, the third change data is sent to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the third change data.
  • Specifically, referring to FIG. 7, when the second type variable 7041 of the associated virtual object 1 is updated, the second type variable 7041 is added to the full update list 704. The full update list 704 includes the second type variables of the associated virtual objects that need to be updated in the current data processing pass. For example, if in this pass the second type variable 7041 of the associated virtual object 1, the second type variable 7042 of the associated virtual object 2, and the second type variable 7043 of the associated virtual object 3 are updated, these three second type variables are all added to the full update list 704. A minimal sketch of such a list follows.
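  • The C++ sketch below shows one possible shape of such a full update list, where each changed second type variable is recorded together with the associated virtual object it belongs to; the structures are illustrative assumptions, not the data layout used by the method.

```cpp
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// A changed second type variable queued for full replication in this pass.
struct SecondTypeVariable {
    uint32_t             ownerObjectId;  // associated virtual object id
    std::string          name;           // e.g. "blood_volume"
    std::vector<uint8_t> payload;        // serialized value
};

class FullUpdateList {
public:
    void Add(SecondTypeVariable variable) {
        entries_.push_back(std::move(variable));
    }
    const std::vector<SecondTypeVariable>& Entries() const { return entries_; }
    void Clear() { entries_.clear(); }  // reset once the pass is encapsulated

private:
    std::vector<SecondTypeVariable> entries_;
};
```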
  • FIG. 9 is a schematic diagram of a second-type variable triggering scenario provided by an embodiment of the present application.
  • As shown in FIG. 9, assume that the second type variables of the associated virtual object 903a of the main virtual object 902 include blood volume information 904a, hit information, and the like. Taking the blood volume information 904a as an example, when the display page 901 of the target client where the main virtual object 902 is located displays the associated virtual object 903a and its blood volume information 904a, the main virtual object 902 uses the long-range shooting tool 905 to shoot at the associated virtual object 903a, where the blood volume information 904a is 100%; the long-range shooting tool 905 can be a crossbow, a firearm, a blasting tool, or the like. Assuming the long-range shooting tool 905 is a firearm, when the bullet 906 fired from the firearm hits the associated virtual object 903a, the computer device receives the operation that the associated virtual object 903a has been shot, that is, it receives the trigger information for the second type variable of the associated virtual object 903a, and obtains the second type variable of the associated virtual object 903b.
  • The associated virtual object 903a and the associated virtual object 903b can be regarded as the same virtual object displayed in adjacent frames; the different reference numbers indicate that the role variables of the virtual object have changed. When the computer device obtains that the blood volume information 904b of the associated virtual object 903b is 80%, it encapsulates the blood volume information 904b to obtain the second change data and sends the second change data to the target client. The target client restores the blood volume information 904b from the received second change data, and displays the associated virtual object 903b and the blood volume information 904b of the associated virtual object 903b on the display page 901.
  • When the associated virtual object 903a is hit by the bullet 906, the displacement animation parameters of the associated virtual object 903a may also change because of the force between the associated virtual object 903a and the bullet 906; in this case, the above trigger information is also directed at the displacement animation parameters of the associated virtual object 903a. For an associated virtual object, the same trigger information may be directed at at least one second type variable, which is not limited here.
  • After the computer device receives the operation that the associated virtual object 903a has been shot, it can obtain the blood volume information 904b of the associated virtual object 903b based on the damage value of the bullet 906 that hit the associated virtual object 903a, the location where the associated virtual object 903a was hit, the blood volume information 904a of the associated virtual object 903a, and so on. Alternatively, after obtaining the damage value of the bullet 906 and the location where the associated virtual object 903a was hit, the computer device determines the blood volume reduction value of the associated virtual object 903a and encapsulates the blood volume reduction value into the second change data; after the target client restores the blood volume reduction value, it obtains the blood volume information 904b based on the blood volume information 904a of the previous frame and the blood volume reduction value, and displays the blood volume information 904b on the display page 901. A sketch of this alternative follows.
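  • The following C++ sketch illustrates this alternative only: the server-side computation of a reduction value and the client-side restoration from the previous frame's blood volume. The damage model (damage times a body-part multiplier) and all names are assumptions made for the example.

```cpp
#include <algorithm>
#include <cstdint>

struct HitEvent {
    float bulletDamage;        // damage value of bullet 906
    float bodyPartMultiplier;  // depends on where object 903a was hit
};

// Server side: only the reduction value is encapsulated into the change data.
float ComputeBloodReduction(const HitEvent& hit) {
    return hit.bulletDamage * hit.bodyPartMultiplier;  // e.g. 20.0f
}

// Client side: restore the new blood volume 904b from the cached value 904a.
float ApplyBloodReduction(float previousBloodVolume, float reduction) {
    return std::max(0.0f, previousBloodVolume - reduction);  // 100 -> 80
}
```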
  • The above processing procedure for the first type variable (step S502 to step S504) and the processing procedure for the second type variable (step S505 to step S507) can be executed synchronously or asynchronously, and the execution order of the two is not limited.
  • After the compressed data is obtained by performing step S502 to step S503, and the second type variable is obtained by performing step S505 to step S506, the compressed data and the second type variable are encapsulated to obtain the target change data, and the target change data is sent to the target client where the main virtual object is located, so that the target client can update and display the frame image based on the target change data.
  • Specifically, referring to FIG. 7, after the first type variables of the associated virtual objects associated with the main virtual object are obtained, the first type variable of each associated virtual object is used as an object variable subset and added to the class object set 701; the second type variables whose data is updated during this data processing pass are obtained and added to the full update list 704. The data of each object variable subset included in the class object set 701 is compressed to obtain the compressed data corresponding to each object variable subset. The computer device encapsulates the compressed data obtained by processing the class object set 701 together with each second type variable included in the full update list 704 to obtain the target change data, and sends the target change data to the target client where the main virtual object is located, so that the target client can update and display the frame image based on the target change data. A sketch of this encapsulation follows.
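  • As a rough illustration of that encapsulation step, the C++ sketch below simply concatenates the compressed subsets from the class object set with the serialized entries of the full update list into one packet; real change data would additionally carry object identifiers, lengths, and framing, which are omitted here as assumptions of the example.

```cpp
#include <cstdint>
#include <vector>

struct Packet { std::vector<uint8_t> bytes; };

// Build the target change data of one main virtual object from the compressed
// first type data (one blob per object variable subset) and the serialized
// second type variables queued in the full update list.
Packet BuildTargetChangeData(
        const std::vector<std::vector<uint8_t>>& compressedSubsets,
        const std::vector<std::vector<uint8_t>>& fullUpdateEntries) {
    Packet packet;
    for (const auto& subset : compressedSubsets) {
        packet.bytes.insert(packet.bytes.end(), subset.begin(), subset.end());
    }
    for (const auto& entry : fullUpdateEntries) {
        packet.bytes.insert(packet.bytes.end(), entry.begin(), entry.end());
    }
    return packet;  // sent to the target client of the main virtual object
}
```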
  • After performing the above step S502, the historical full update time of the first type variable of the associated virtual object and the second system network time are obtained. If the difference between the historical full update time and the second system network time is greater than or equal to the full update time threshold, the first type variable is encapsulated into full change data, the full change data is sent to the target client so that the target client can update and display the frame image based on the full change data, and the historical full update time is updated based on the second system network time. If the difference between the historical full update time and the second system network time is less than the full update time threshold, step S503 is executed, that is, the first type variable is compressed to obtain the compressed data.
  • The historical full update time may be included in the role variables of the corresponding associated virtual object; as shown in FIG. 7, the historical full update time is the full update time 7034 in the role variables 703 of the associated virtual object 1. The historical full update time can also be stored in other ways, which is not limited here. A brief sketch of the periodic full update decision follows.
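  • The C++ fragment below sketches only the decision between sending full change data and falling through to the compression of step S503, refreshing the stored historical full update time when a full update is emitted; the enum and parameter names are illustrative.

```cpp
#include <cstdint>

enum class FirstTypeSendMode { FullChangeData, CompressedDelta };

// If the full update interval has elapsed, send the first type variables as
// full change data and refresh the historical full update time (full update
// time 7034 in FIG. 7); otherwise continue with the compression of step S503.
FirstTypeSendMode ChooseFirstTypeSendMode(uint64_t& historyFullUpdateTimeMs,
                                          uint64_t secondSystemNetworkTimeMs,
                                          uint64_t fullUpdateTimeThresholdMs) {
    if (secondSystemNetworkTimeMs - historyFullUpdateTimeMs
            >= fullUpdateTimeThresholdMs) {
        historyFullUpdateTimeMs = secondSystemNetworkTimeMs;
        return FirstTypeSendMode::FullChangeData;
    }
    return FirstTypeSendMode::CompressedDelta;
}
```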
  • When the role variables are processed, the object identifier can be associated with the corresponding associated virtual object to indicate the associated virtual object to which the role variables belong.
  • Further, the role variables of a virtual object can be divided into at least two categories, such as the above-mentioned first type variable belonging to the first update frequency type and the second type variable belonging to the second update frequency type. Besides this division, the role variables can also be divided into more types of variables based on different update frequency types, such as three types of variables, four types of variables, or five types of variables; under different divisions, each type of variable belongs to a different update frequency type. Within the same division, the different update frequency types can be managed through a tree structure, and the data processing methods for the variables corresponding to different update frequency types are different, as the sketch below illustrates.
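  • The following C++ sketch is one possible reading of that tree-structured management, with a root node for the role variables and one child node per update frequency type; the node layout and the example variable names are assumptions, not the structure prescribed by the method.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// One node per update frequency type; children could refine a type further.
struct FrequencyTypeNode {
    std::string typeName;                                    // e.g. "per-frame"
    std::vector<std::string> variables;                      // role variables
    std::vector<std::unique_ptr<FrequencyTypeNode>> children;
};

std::unique_ptr<FrequencyTypeNode> BuildTwoLevelTree() {
    auto root = std::make_unique<FrequencyTypeNode>();
    root->typeName = "role-variables";

    auto first = std::make_unique<FrequencyTypeNode>();
    first->typeName = "first-update-frequency";    // compressed every frame
    first->variables = {"position", "path", "ragdoll"};

    auto second = std::make_unique<FrequencyTypeNode>();
    second->typeName = "second-update-frequency";  // replicated on trigger
    second->variables = {"blood_volume", "attack_behavior"};

    root->children.push_back(std::move(first));
    root->children.push_back(std::move(second));
    return root;
}
```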
  • FIG. 10 is a schematic diagram of a variable division scenario provided by an embodiment of the present application.
  • As shown in FIG. 10, in the first division, the role variables are divided based on the first update frequency type and the second update frequency type: the variables belonging to the first update frequency type among the role variables are used as the first type variables, and the variables belonging to the second update frequency type among the role variables are used as the second type variables. The data processing methods of the first type variables and the second type variables are different, and a complete associated virtual object can be rendered through the first type variables and the second type variables.
  • In the second division, the role variables are divided based on the third update frequency type, the fourth update frequency type, and the fifth update frequency type: the variables belonging to the third update frequency type are used as the third type variables, the variables belonging to the fourth update frequency type are used as the fourth type variables, and the variables belonging to the fifth update frequency type are used as the fifth type variables. The data processing methods of the third type variables, the fourth type variables, and the fifth type variables are different, and a complete associated virtual object can be rendered through the third type variables, the fourth type variables, and the fifth type variables.
  • Similarly, the role variables can also be classified by other division methods.
  • The embodiment of the present application obtains the associated virtual object associated with the main virtual object, obtains the variable belonging to the first update frequency type among the role variables of the associated virtual object as the first type variable, compresses the first type variable to obtain compressed data, encapsulates the compressed data to obtain the first change data of the main virtual object, and sends the first change data to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the first change data. In addition, the second type variable is updated when it changes, or the second type variable is updated periodically, while the first type variable continues to be fully updated periodically, so as to solve the packet loss problem that may occur before a full update. Through the above process, the role variables of the associated virtual objects are classified, and the first type variables belonging to the first update frequency type are compressed, which greatly reduces the amount of data that needs to be processed and the amount of data that needs to be sent, and improves data processing efficiency.
  • FIG. 11 is a schematic diagram of a data processing apparatus provided by an embodiment of the present application.
  • The above data processing device may be a computer program (including program code) running in a computer device; for example, the data processing device is application software. The device may be used to execute the corresponding steps in the method provided in the embodiments of the present application. As shown in FIG. 11, the data processing apparatus 110 may be applied to the computer device in the embodiment corresponding to FIG. 2. Specifically, the data processing apparatus 110 may include: a first acquisition module 11, a compression module 12, a first encapsulation module 13, and a first sending module 14.
  • the first obtaining module 11 is configured to obtain an associated virtual object associated with the main virtual object, and obtain a variable belonging to the first update frequency type among the role variables of the associated virtual object as the first type variable;
  • the compression module 12 is configured to perform compression processing on the above-mentioned first type variables to obtain compressed data
  • the first encapsulation module 13 is configured to encapsulate the compressed data to obtain the first change data of the main virtual object
  • the first sending module 14 is configured to send the first change data of the main virtual object to the target client where the main virtual object is located, so that the target client can update and display the frame image based on the first change data.
  • the above-mentioned device 110 further includes:
  • the second obtaining module 15 is configured to obtain the second type variable if the trigger information for the second type variable of the associated virtual object is received; the second type variable is a variable belonging to the second update frequency type among the role variables;
  • the second encapsulation module 16 is configured to encapsulate the above-mentioned second type variable into the second change data of the above-mentioned main virtual object;
  • the second sending module 17 is configured to send the second changed data to the target client where the main virtual object is located, so that the target client can update and display the frame image based on the second changed data.
  • the above-mentioned device 110 further includes:
  • the third obtaining module 18 is configured to obtain the historical update time and the first system network time of the second type variable of the associated virtual object; the second type variable is a variable belonging to the second update frequency type among the character variables;
  • the third encapsulation module 19 is configured to encapsulate the second type variable into the third change data of the main virtual object if the difference between the first system network time and the historical update time is greater than or equal to the second type variable update time threshold;
  • the third sending module 20 is configured to send the third change data to the target client where the main virtual object is located, so that the target client can update and display the frame image based on the third change data.
  • the above-mentioned first type variable includes location information
  • the aforementioned compression module 12 includes:
  • the first obtaining unit 121 is configured to obtain the position information and position accuracy of the associated virtual object
  • the determining unit 122 is configured to convert the position information into integer position data based on the position accuracy, and determine the integer position data as the compressed data.
  • the aforementioned compression module 12 includes:
  • the second obtaining unit 123 is configured to obtain the first type variable in the frame to be updated, and obtain the historical cache variable corresponding to the first type variable in the history frame where the main virtual object is located; the history frame is the previous frame of the frame to be updated;
  • the third obtaining unit 124 is configured to obtain the first difference variable between the above-mentioned historical cache variable and the above-mentioned first type variable;
  • the first generating unit 125 is configured to encode the above-mentioned first difference variable to obtain the above-mentioned compressed data.
  • the above-mentioned first type variable includes the object identifier of the above-mentioned associated virtual object
  • the aforementioned compression module 12 includes:
  • the fourth obtaining unit 126 is configured to obtain the object identifier of the preceding associated virtual object of the associated virtual object, and obtain the second difference variable between the object identifier of the associated virtual object and the object identifier of the preceding associated virtual object; the preceding associated virtual object and the above-mentioned associated virtual object are processed in sequence;
  • the second generating unit 127 is configured to encode the above-mentioned second difference variable to obtain the above-mentioned compressed data.
  • the above-mentioned first type variable includes historical path information
  • the above-mentioned first obtaining module 11 includes:
  • the path acquisition unit 111 is configured to acquire the motion trajectory of the associated virtual object within the target time range, determine the motion position point of the associated virtual object based on the motion trajectory, determine the motion position point as the path information of the associated virtual object, and update the above-mentioned historical path information in the above-mentioned first type variable to the above-mentioned path information; the above-mentioned target time range refers to the interval length for updating the above-mentioned path information in the above-mentioned first type variable.
  • the above-mentioned device 110 further includes:
  • the fourth acquiring module 21 is configured to acquire the historical full update time of the first type variable of the associated virtual object and the second system network time;
  • the fourth encapsulation module 22 is configured to encapsulate the parameters of the first type variable into full change data if the difference between the historical full update time and the second system network time is greater than or equal to the full update time threshold, and send the full change data to the target client, so that the target client can update and display the frame image based on the full change data;
  • the fourth encapsulation module 22 is further configured to perform, through the compression module 12, the step of compressing the first type variable to obtain the compressed data if the difference between the historical full update time and the second system network time is less than the full update time threshold.
  • the above-mentioned device 110 further includes:
  • the distance obtaining module 23 is configured to obtain the distance information between the main virtual object and each of the at least two original virtual objects from the distance relationship list;
  • the viewing distance determining module 24 is configured to obtain, from the above at least two original virtual objects, the candidate virtual objects whose distance information is greater than the first distance threshold and less than or equal to the second distance threshold, and determine the viewing-distance angle information between the candidate virtual object and the view direction information of the main virtual object;
  • the association determination module 25 is configured to determine the original virtual objects whose distance information is less than or equal to the first distance threshold, or the candidate virtual objects whose viewing-distance angle information is less than the visualization angle threshold, as the above associated virtual objects associated with the main virtual object (a sketch of this association rule is given below).
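  • For illustration, the C++ sketch below combines the roles of modules 23 to 25 into a single predicate: near objects are always associated, and objects between the two distance thresholds are associated only when their viewing-distance angle to the main object's view direction stays below the visualization angle threshold; the vector helpers and the exact threshold semantics are assumptions.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static float Length(const Vec3& v) { return std::sqrt(Dot(v, v)); }

// True if the original virtual object at otherPos should be treated as an
// associated virtual object of the main virtual object at mainPos.
bool IsAssociated(const Vec3& mainPos, const Vec3& viewDir, const Vec3& otherPos,
                  float firstDistanceThreshold, float secondDistanceThreshold,
                  float visualizationAngleThresholdRad) {
    const Vec3 toOther{otherPos.x - mainPos.x,
                       otherPos.y - mainPos.y,
                       otherPos.z - mainPos.z};
    const float distance = Length(toOther);
    if (distance <= firstDistanceThreshold) return true;   // normal view range
    if (distance > secondDistanceThreshold) return false;  // beyond scoped range
    const float cosAngle = Dot(toOther, viewDir) / (distance * Length(viewDir));
    const float angle = std::acos(std::clamp(cosAngle, -1.0f, 1.0f));
    return angle < visualizationAngleThresholdRad;
}
```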
  • the above-mentioned device 110 further includes:
  • the attribute classification module 26 is configured to obtain the attribute categories of at least two original virtual objects in the application scenario, determine the original virtual objects whose attribute category is the user role attribute as the user virtual objects, and determine the original virtual objects whose attribute category is the system role attribute as the system virtual objects;
  • the user association module 27 is configured to obtain the group label of the user virtual object, and determine the user virtual object whose group label is the same as the group label of the main virtual object as the associated user virtual object;
  • the association determination module 25 is further configured to determine the associated user virtual object and the system virtual object as the associated virtual object associated with the main virtual object.
  • the above-mentioned device 110 further includes:
  • the fifth obtaining module 28 is configured to obtain the distance range to which the distance information between each original virtual object and the above main virtual object belongs, and obtain the update cache time of each original virtual object;
  • the target determination module 29 is configured to obtain the list update time threshold corresponding to the above distance range, and determine, among the at least two original virtual objects, the original virtual objects whose difference between the update cache time and the third system network time is greater than or equal to the list update time threshold as the target virtual objects;
  • the update module 30 is configured to update the distance information between the target virtual object and the main virtual object in the distance relationship list (a sketch of this distance-range throttling is given below).
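  • The following C++ sketch illustrates one way modules 28 to 30 could throttle updates of the distance relationship list: each entry's distance range maps to its own list update time threshold, and only entries whose cached update time is old enough are refreshed in this pass. The concrete ranges and thresholds are invented for the example.

```cpp
#include <cstdint>
#include <vector>

struct DistanceEntry {
    uint32_t objectId;           // original virtual object
    float    distance;           // to the main virtual object
    uint64_t updateCacheTimeMs;  // when this entry was last refreshed
};

// Nearer distance ranges are refreshed more often than far ones.
uint64_t ListUpdateTimeThresholdMs(float distance) {
    if (distance <= 50.0f)  return 100;
    if (distance <= 200.0f) return 500;
    return 2000;
}

// Select the target virtual objects whose distance information should be
// recomputed and written back into the distance relationship list.
std::vector<uint32_t> SelectTargetVirtualObjects(
        const std::vector<DistanceEntry>& list, uint64_t thirdSystemNetworkTimeMs) {
    std::vector<uint32_t> targets;
    for (const auto& entry : list) {
        if (thirdSystemNetworkTimeMs - entry.updateCacheTimeMs
                >= ListUpdateTimeThresholdMs(entry.distance)) {
            targets.push_back(entry.objectId);
        }
    }
    return targets;
}
```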
  • the above-mentioned first obtaining module 11 includes:
  • the class obtaining unit 112 is configured to obtain a class object set corresponding to the main virtual object; the class object set includes at least two object variable subsets;
  • the subset acquiring unit 113 is configured to acquire the object variable subset corresponding to the associated virtual object from the at least two object variable subsets of the class object set; the object variable subset includes the variables belonging to the above first update frequency type among the role variables of the associated virtual object;
  • the variable obtaining unit 114 is configured to obtain the first type variable of the associated virtual object from the object variable subset.
  • the embodiment of the application describes a data processing device.
  • The above device obtains the associated virtual object associated with the main virtual object, obtains the variable belonging to the first update frequency type among the role variables of the associated virtual object as the first type variable, compresses the first type variable to obtain compressed data, encapsulates the compressed data to obtain the first change data of the main virtual object, and sends the first change data to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the first change data. Through the above process, the role variables of the associated virtual objects are classified, and the first type variables belonging to the first update frequency type are compressed, which greatly reduces the amount of data that needs to be processed and the amount of data that needs to be sent, and improves data processing efficiency.
  • the computer device 1200 in the embodiment of the present application may include: one or more processors 1201, a memory 1202, and an input/output interface 1203.
  • the aforementioned processor 1201, memory 1202, and input/output interface 1203 are connected through a bus 1204.
  • The memory 1202 is used to store a computer program, and the computer program includes program instructions. The input/output interface 1203 is used to input data and output data, including the data interaction between each communication client and the event server, and the data interaction between the user and each communication client. The processor 1201 is used to execute the program instructions stored in the memory 1202, and performs the following operations: obtaining the associated virtual object associated with the main virtual object, and obtaining the variable belonging to the first update frequency type among the role variables of the associated virtual object as the first type variable; compressing the first type variable to obtain compressed data, and encapsulating the compressed data to obtain the first change data of the main virtual object; and sending the first change data of the main virtual object to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the first change data.
  • the aforementioned processor 1201 may be a central processing unit (CPU), and the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 1202 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1201 and the input/output interface 1203. A part of the memory 1202 may also include a non-volatile random access memory. For example, the memory 1202 may also store device type information.
  • In specific implementation, the above computer device can execute, through its built-in functional modules, the implementation manners provided in each step in FIG. 2 or FIG. 5; for details, please refer to the implementation manners provided in each step in FIG. 2 or FIG. 5, which will not be repeated here.
  • The embodiment of the present application provides a computer device including a processor, an input/output interface, and a memory. The processor obtains the computer instructions in the memory and executes the steps of the method shown in FIG. 2 or FIG. 5 to perform the data processing operations.
  • Through the computer instructions in the memory, the processor executes the following steps: obtaining the associated virtual object associated with the main virtual object, obtaining the variable belonging to the first update frequency type among the role variables of the associated virtual object as the first type variable, compressing the first type variable to obtain compressed data, encapsulating the compressed data to obtain the first change data of the main virtual object, and sending the first change data to the target client where the main virtual object is located, so that the target client updates and displays the frame image based on the first change data. In addition, the second type variable is updated when it changes, or the second type variable is updated periodically, while the first type variable continues to be fully updated periodically, so as to solve the packet loss problem that may occur before a full update. Through the above process, the role variables of the associated virtual objects are classified, and the first type variables belonging to the first update frequency type are compressed, which greatly reduces the amount of data that needs to be processed and the amount of data that needs to be sent, and improves data processing efficiency.
  • the embodiments of the present application also provide a computer program product or computer program.
  • the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the data processing method shown in FIG. 2 or FIG. 5.
  • The embodiment of the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, the computer program includes program instructions, and when the program instructions are executed by a processor, the data processing method provided in each step in FIG. 2 or FIG. 5 is implemented; for details, please refer to the implementation manners provided in each step of FIG. 2 or FIG. 5, which will not be repeated here.
  • the foregoing computer-readable storage medium may be the data processing device provided in any of the foregoing embodiments or the internal storage unit of the foregoing computer, such as the hard disk or memory of the computer.
  • the computer-readable storage medium may also be an external storage device of the computer, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer, and so on.
  • the computer-readable storage medium may also include both an internal storage unit of the computer and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the computer.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.
  • Each process and/or block of the method flowcharts and/or structural schematic diagrams, and combinations of processes and/or blocks in the flowcharts and/or structural schematic diagrams, can be implemented by computer program instructions.
  • These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce an apparatus for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the structural schematic diagram.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the structural schematic diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the structural schematic diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A data processing method, executed by a computer device, includes: obtaining an associated virtual object associated with a main virtual object, and obtaining a first type variable among role variables of the associated virtual object as the first type variable (201); compressing the first type variable to obtain compressed data, and encapsulating the compressed data into first change data of the main virtual object (202); and sending the first change data of the main virtual object to a target client where the main virtual object is located (203), so that the target client updates and displays a frame image based on the first change data.

Description

数据处理方法、装置、计算机设备以及可读存储介质
本申请要求于2020年1月13日提交中国专利局、申请号为202010033046.9、发明名称为“数据处理方法、装置、计算机以及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算技术领域,尤其涉及一种数据处理方法、装置、计算机设备以及可读存储介质。
发明背景
在网络游戏中,尤其是联机第一人称射击(First person shooter,FPS)游戏中,游戏主题逻辑会运行于服务端,包括每个非玩家角色(Non-Player-Character,NPC)的移动位置、每个玩家的位置和状态等。这些数据会经过选择和过滤,通过“网络复制”的方法发送到每个游戏客户端,游戏客户端会收到相应的数据,并通过对收到的数据进行渲染,在客户端屏幕上进行显示。
而由于服务器需要处理的相关数据量比较大,数据处理需要耗费一定的时间,网络带宽也有限,再加上客户端玩家视距有限,所以一般服务器会使用九宫格分割的方法,将玩家周围的相关数据的变化情况发送给该玩家的客户端,如在大多FPS游戏中为距玩家大约为200米左右的范围内的相关数据。
但是,对于地图中有大量的NPC和玩家的游戏来说,如果通过上述的数据同步方法对客户端进行数据处理,需要对比每个游戏对象(包括NPC和玩家)当前的数据与上次更新时的数据,得到每个游戏对象的变更记录,加上需要更新的游戏对象较多且每个游戏对象包括大量的数据,使得服务器的性能会无法承受,游戏对象的数据比对及变更记录的整理也会耗费大量的时间,造成每帧更新时用于游戏对象网络复制的开销就会达到40毫秒甚至更多,从而造成服务器对于数据的更新效率低下。
发明内容
本申请实施例提供了一种数据处理方法、装置、计算机设备以及计算机可读存储介质,可以提高数据处理的效率。
本申请实施例一方面提供了一种数据处理方法,由计算机设备执行,包括:
获取与主虚拟对象相关联的关联虚拟对象,获取上述关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量;
对上述第一类型变量进行压缩处理,得到压缩数据,将上述压缩数据进行封装得到上述主虚拟对象的第一变更数据;
将上述主虚拟对象的第一变更数据发送给上述主虚拟对象所在的目标客户端,以使上述目标客户端基于上述第一变更数据进行帧图像更新显示。
本申请实施例一方面提供了一种数据处理装置,所述装置包括:
第一获取模块,用于获取与主虚拟对象相关联的关联虚拟对象,获取上述关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量;
压缩模块,用于对上述第一类型变量进行压缩处理,得到压缩数据;
第一封装模块,用于将上述压缩数据进行封装得到上述主虚拟对象的第一变更数据;
第一发送模块,用于将上述主虚拟对象的第一变更数据发送给上述主虚拟对象所在的目标客户端,以使上述目标客户端基于上述第一变更数据进行帧图像更新显示。
本申请实施例一方面提供了一种计算机设备,包括处理器、存储器、输入输出接口;
上述处理器分别与上述存储器和上述输入输出接口相连,其中,上述输入输出接口用于输入数据和输出数据,上述存储器用于存储程序代码,上述处理器用于调用上述程序代码,以执行如本申请实施例上述的数据处理方法。
本申请实施例一方面提供了一种计算机可读存储介质,上述计算机可读存储介质存储有计算机程序,上述计算机程序包括程序指令,上述程序指令当被处理器执行时,执行如本申请实施例上述的数据处理方法。
附图简要说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种数据处理架构图;
图2是本申请实施例提供的一种数据处理方法流程图;
图3是本申请实施例提供的一种第一类型变量的数据处理过程示意图;
图4是本申请实施例提供的一种多虚拟对象的数据处理过程;
图5是本申请实施例提供的一种数据处理具体流程示意图;
图6是本申请实施例提供的一种关联虚拟对象的显示场景示意图;
图7是本申请实施例提供的一种第一类型变量确定场景示意图;
图8是本申请实施例提供的一种基于历史帧进行更新的数据处理示意图;
图9是本申请实施例提供的一种第二类型变量触发场景示意图;
图10是本申请实施例提供的一种变量划分场景示意图;
图11是本申请实施例提供的一种数据处理装置示意图;
图12是本申请实施例提供的一种计算机设备的结构示意图。
实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
具体的,请参见图1,图1是本申请实施例提供的一种数据处理架构图,如图1所示,该数据处理系统包括计算机设备101及多个电子设备,如电子设备102a、电子设备102b、…及电子设备102c,计算机设备101与各个电子设备间可以进行通信连接。具体的,计算机设备101获取连接的各个电子设备,生成每个电子设备对应的变更数据,将生成的变更数据发送给对应的电子设备,以使该电子设备可以基于对应的变更数据进行帧图像更新显示。其中,以电子设备102a为例,计算机设备101监测到该电子设备102a在线,获取与该电子设备102a中登陆的主虚拟对象相关联的关联虚拟对象,包括与该主虚拟对象相关联的玩家或NPC,该关联虚拟对象可以认为是在主虚拟对象所在的电子设备102a中进行显示的对象。获取每个关联虚拟对象的第一类型变量,如对应关联虚拟对象的位置信息、视野方向信息、速度信息、加速度信息、路径信息及布娃娃(Ragdoll)信息等,其中,Ragdoll信息用于构建关联虚拟对象的角色模型,其中,该第一类型变量为关联虚拟对象中属于第一更新频率类型的变量,第一更新频率类型可以认为是每一帧都会发生变化的频率,由于第一类型变量的变化较为频繁,使得在对每一帧进行更新时都需要对第一类型变量进行更新,因此,计算机设备在进行数据处理时,均需获取关联虚拟对象的第一类型变量,以保持目标客户端的显示页面中的数据与计算机设备中的数据一致,实现对第一类型变量的网络复制。其中,在对第一类型变量进行网络复制时,可以对上述各个第一类型变量分别进行压缩处理,得到压缩数据,包括位置信息的压缩数据、视野方向信息的压缩数据以及路径信息的压缩数据等,将生成的压缩数据封装成主虚拟对象的第一变更数据,将第一变更数据发送给电子设备102a,电子设备102a为该主虚拟对象所在的目标客户端。通过对第一类型变量进行压缩处理,减少计算机设备与客户端间传输的数据量,提高更新效率,而对于除第一类型变量之外的变量,如第二类型变量,由于该第二类型变量属于第二更新频率类型的变量,第二更新频率类型表示更新频率较低,不会每一帧都发生变化,因此第二类型变量不需要每一帧都进行处理,从而进一步减少计算机设备处理的数据量及与客户端间传输的数据量,提高更新效率。
电子设备102a在接收到第一变更数据后,对第一变更数据进行还原,得到对应关联虚拟对象的位置信息、视野方向信息、速度信息、加速度信息、路径信息及布娃娃(Ragdoll)信息等第一类型变量,基于这些第一类型变量,渲染出当前帧图像, 可以认为,当前帧图像中显示的虚拟对象的状态是由各个第一类型变量所确定的。其中,计算机设备101向电子设备102b、…或电子设备102c发送变更数据的过程,与上述计算机设备101向电子设备102a发送变更数据的过程相同。其中,计算机设备101可以是服务器或客户端,也可以是服务器和客户端组成的系统,该客户端可以是一种电子设备,该客户端及上述各个电子设备(电子设备102a、电子设备102b、…及电子设备102c),包括但不限于手机、平板电脑、台式电脑、笔记本电脑、掌上电脑、移动互联网设备(mobile internet device,MID)、可穿戴设备(例如智能手表、智能手环等)等。
可以理解的是,本申请实施例提供的方法可以由计算机设备执行,计算机设备包括但不限于终端或服务器。本申请实施例中进行数据处理的执行主体可以为计算机设备。其中,当本申请实施例应用于游戏场景时,该计算机设备包括但不限于独立专用服务器(Dedicated Server,DS),其中,DS为运行游戏逻辑的服务器,不运行图像渲染和声音等相关的任务,并会将数据处理的结果发送给客户端,用于处理游戏数据。
其中,在游戏场景中,通过基于更新频率变量对关联虚拟对象的角色变量进行分类,在每一帧更新时都对属于高频(第一更新频率类型)的第一类型变量进行更新,且会对第一类型变量进行压缩处理,而对于属于低频(第二更新频率类型)的第二类型变量,则不需要每一帧都进行更新,从而减少计算机设备用于网络复制的开销,提高帧更新的效率,减短了该游戏场景中一帧的更新时间,进而可以提升游戏场景的帧图像更新性能,使得游戏场景中帧与帧之间的切换更为流畅,提高用户体验。由于通过本申请实施例,可以减轻计算机设备(如DS)的数据处理压力,因此,在相同性能的计算机设备中,技术人员通过本申请实施例可以实现更为丰富的游戏场景。举例来说,假定该DS在对100个玩家的游戏场景进行网络复制时,每秒可以复制2帧,使得该游戏场景中帧与帧之间的切换极为缓慢,使得游戏的体验极差,通过本申请实施例,可以提升网络复制性能,每秒可以复制几十帧,如40帧,在每秒切换几十帧的情况下,对于人眼来说,属于可接受的切换频率,玩家就可以观察到连续不卡顿的帧图像切换,从而可以提升游戏的体验。
进一步地,请参见图2,图2是本申请实施例提供的一种数据处理方法流程图。如图2所示,以上述计算机设备为执行主体进行描述,该数据处理过程包括如下步骤:
步骤S201,获取与主虚拟对象相关联的关联虚拟对象,获取该关联虚拟对象的角色变量中的第一类型变量。
具体的,获取与主虚拟对象相关联的关联虚拟对象,获取该关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量。具体的,对于一个应用场景来说,可能包括至少一个主虚拟对象,每个主虚拟对象可能会关联至少一个关联虚拟对象,当计算机设备对该应用场景进行帧图像更新时,获取该应用场景所关联的主虚拟对象,该主虚拟对象为在该应用场景中属性类别为用户角色属性的虚 拟对象,计算机设备获取到该主虚拟对象后,获取与该主虚拟对象相关联的关联虚拟对象,与该主虚拟对象相关联的关联虚拟对象可能不唯一,每个关联虚拟对象包括角色变量,该角色变量组成对应关联虚拟对象在上述应用场景中的显示状态,用于将关联虚拟对象显示在上述应用场景中。以一个关联虚拟对象为例,获取该关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为该关联虚拟对象的第一类型变量。其中,该第一更新频率类型可以指示每一帧都进行变化的更新频率类型,也可以指示更新频率大于或等于第一更新频率阈值的更新频率类型等。
具体的,参见图3,图3是本申请实施例提供的一种第一类型变量的数据处理过程示意图。在应用场景中,计算机设备获取该应用场景中的主虚拟对象,并获取与该主虚拟对象相关联的关联虚拟对象,获取关联虚拟对象的角色变量中的第一类型变量。如图3所示,假定该应用场景为游戏场景,在该应用场景301中,包括多个虚拟对象,每个虚拟对象的属性类别为用户角色属性或系统角色属性,一种情况下可以认为,属性类别为用户角色属性的虚拟对象为玩家角色,属性类别为系统角色属性的虚拟对象为非玩家角色(Non-Player-Character,NPC)。获取该应用场景301中属性类别为用户角色属性的虚拟对象,将属性类别为用户角色属性的虚拟对象确定为主虚拟对象,以该应用场景301中的主虚拟对象302为例,对本申请实施例中的数据处理过程进行描述。该主虚拟对象302既可以表示在应用场景301中显示出来的虚拟对象,也可以表示操控该虚拟对象的虚拟账号,该主虚拟对象302对应一个客户端,该客户端用于显示应用场景301中以对应主虚拟对象302为主视觉的帧图像。换句话说,对于同一个应用场景,以不同主虚拟对象为主视觉进行帧图像显示时,所显示出来的帧图像可能不完全相同,这是由于不同的主虚拟对象的视觉范围有所不同。
具体的,在图3中,获取与主虚拟对象302相关联的关联虚拟对象集合303,该关联虚拟对象集合303包括至少一个关联虚拟对象,如关联虚拟对象3031、关联虚拟对象3032、关联虚拟对象3033、关联虚拟对象3034、...及关联虚拟对象303m(m为正整数,m为与主虚拟对象相关联的关联虚拟对象的数量)。其中,该关联虚拟对象集合303中所包括的各个关联虚拟对象属于该应用场景中除主虚拟对象302之外的虚拟对象。以关联虚拟对象3031为例,获取该关联虚拟对象3031的角色变量304中属于第一更新频率类型的变量,作为第一类型变量305,该第一类型变量305为关联虚拟对象3031的角色变量304中的一部分。假定该关联虚拟对象3031的第一类型变量305包括至少一个变量,如变量3051、变量3052、变量3053、变量3054、...及变量305n(n为正整数,n为第一类型变量305中所包含的变量的数量)。
步骤S202,对第一类型变量进行压缩处理,得到压缩数据,将压缩数据封装成主虚拟对象的第一变更数据。
具体的,对第一类型变量进行压缩处理,得到压缩数据,将压缩数据封装成主虚拟对象的第一变更数据。具体的,对第一类型变量进行压缩处理,包括但不限于 对第一类型变量进行编码或减少第一类型变量的数值类型等,以减少第一类型变量所占的存储空间,降低需要传输给客户端的数据量。
具体的,可以参见图3,对第一类型变量305进行压缩处理,具体对第一类型变量305所包含的各个变量进行压缩处理,得到压缩数据3081,其中,对于不同的变量的压缩处理方法可以相同,也可以不相同,在此不做限制。具体的,对第一类型变量305中的变量3051进行压缩处理,得到变量3051对应的压缩数据3061;对变量3052进行压缩处理,得到变量3052对应的压缩数据3062;...;对变量305n进行压缩处理,得到变量305n对应的压缩数据306n。其中,压缩数据3061、压缩数据3062、...及压缩数据306n组成了第一类型变量305对应的压缩数据3081。同理,对关联虚拟对象3032的第一类型变量进行压缩处理,得到压缩数据3082;对关联虚拟对象3033的第一类型变量进行压缩处理,得到压缩数据3083;...;对关联虚拟对象303m的第一类型变量进行压缩处理,得到压缩数据308m。将关联虚拟对象3031对应的压缩数据3081、关联虚拟对象3032对应的压缩数据3082、...及关联虚拟对象303m对应的压缩数据308m进行封装,得到第一变更数据。
步骤S203,将主虚拟对象的第一变更数据发送给主虚拟对象所在的目标客户端。
具体的,将主虚拟对象的第一变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于第一变更数据进行帧图像更新显示。具体的,第一变更数据为目标客户端在这一帧进行显示时,用于渲染出关联虚拟对象的数据。具体可以参见图3,计算机设备在生成第一变更数据后,将第一变更数据发送给关联虚拟对象所在的目标客户端307,以使目标客户端307可以基于该第一变更数据进行帧图像显示。
其中,若该应用场景中包括多个主虚拟对象时,则以主虚拟对象302的数据处理过程为例,得到该应用场景中其他主虚拟对象的第一变更数据,将得到的第一变更数据发送给对应的主虚拟对象所在的目标客户端,从而使得该应用场景所关联的所有客户端都可以基于接收到的第一变更数据进行帧图像更新显示。具体的,参见图4,图4是本申请实施例提供的一种多虚拟对象的数据处理过程。如图4所示,在上述应用场景中包括多个主虚拟对象,每个主虚拟对象关联不止一个关联虚拟对象。假定计算机设备401从应用场景中获取到主虚拟对象4021、主虚拟对象4022及主虚拟对象4023,获取与主虚拟对象4021关联的关联虚拟对象4031及关联虚拟对象4032,获取关联虚拟对象4031的第一类型变量4041,对第一类型变量4041进行压缩,得到压缩数据4051,同理获取关联虚拟对象4032的第一类型变量4042,对第一类型变量4042进行压缩,得到压缩数据4052,将压缩数据4051及压缩数据4052封装成第一变更数据4061,将第一变更数据4061发送给主虚拟对象4021所在的目标客户端4071。同理,获取与主虚拟对象4022相关联的关联虚拟对象4033,获取该关联虚拟对象4033的第一类型变量4043,对第一类型变量4043进行压缩得到压缩数据4053,将压缩数据4053封装成第一变更数据4062,将第一变更数据4062 发送给主虚拟对象4022所在的目标客户端4072;获取与主虚拟对象4023相关联的关联虚拟对象4034,获取该关联虚拟对象4034的第一类型变量4044,对第一类型变量4044进行压缩得到压缩数据4054,将压缩数据4054封装成第一变更数据4063,将第一变更数据4063发送给主虚拟对象4023所在的目标客户端4073。其中,计算机设备无论在应用场景中获取到的主虚拟对象的数量及每个主虚拟对象关联的关联虚拟对象的数量是多少(一个或至少两个等),都可以通过图4中所实现的多虚拟对象的数据处理过程,得到每个主虚拟对象对应的第一变更数据,并将第一变更数据发送给对应主虚拟对象所在的目标客户端。
本申请实施例通过上述数据处理过程,实现了计算机设备获取与主虚拟对象相关联的关联虚拟对象,并获取关联虚拟对象的第一类型变量,对第一类型变量进行压缩,得到压缩数据,将压缩数据封装成第一变更数据,将第一变更数据发送给主虚拟对象所在的目标客户端,从而减少在计算机设备与各个目标客户端间进行数据传输时的数据传输量。通过上述数据处理过程,将每个关联虚拟对象的角色变量进行分类,对属于第一更新频率类型的变量(即第一类型变量)进行压缩处理,一方面通过对角色变量的分类,减少了计算机设备需要处理的数据量,一方面,对第一类型变量进行压缩处理,减少了计算机设备发送给客户端的数据量,以降低计算机设备的网络数据处理压力,减少数据处理的耗时,从而提高了数据处理的效率。当本申请实施例应用于游戏场景时,可以降低计算机设备的网络复制压力,其中,该计算机设备包括但不限于DS,网络复制(Data Replication)指的是计算机设备将游戏数据发送给客户端,以使客户端与计算机设备的游戏数据保持一致的过程。
进一步地,参见图5,图5是本申请实施例提供的一种数据处理具体流程示意图。如图5所示,包括如下步骤:
步骤S501,获取与主虚拟对象相关联的关联虚拟对象。
具体的,可以通过主虚拟对象的视觉范围确定与该主虚拟对象相关联的关联虚拟对象,也可以通过主虚拟对象的属性类别确定与该主虚拟对象相关联的关联虚拟对象,或者,结合主虚拟对象的视觉范围及属性类别确定与该主虚拟对象相关联的关联虚拟对象。
一种通过主虚拟对象的视觉范围确定关联虚拟对象的情况下,从距离关系列表中获取主虚拟对象分别与至少两个原始虚拟对象间的距离信息;从至少两个原始虚拟对象中获取距离信息大于第一距离阈值且小于或等于第二距离阈值的待选虚拟对象,确定待选虚拟对象与主虚拟对象的视野方向信息间的视距夹角信息;将距离信息小于或等于第一距离阈值的原始虚拟对象,或者视距夹角信息小于可视化角度阈值的待选虚拟对象,确定为与主虚拟对象相关联的关联虚拟对象,或者,可以将距离信息小于或等于第一距离阈值的原始虚拟对象,及视距夹角信息小于可视化角度阈值的待选虚拟对象,确定为与主虚拟对象相关联的关联虚拟对象。换句话说,应用场景中的各个虚拟对象与主虚拟对象间的关联性是基于距离信息及特殊逻辑确定的,例如,该特殊逻辑可以是“主虚拟对象正前方极小角度内(视觉范围)的虚拟 对象”。其中,与主虚拟对象间的距离信息小于或等于第一距离阈值的原始虚拟对象,为在主虚拟对象所在的目标客户端的显示页面进行显示的虚拟对象;视距夹角信息小于可视化角度阈值的待选虚拟对象,为当主虚拟对象使用远视距道具时,在主虚拟对象所在的目标客户端的显示页面进行显示的虚拟对象,该待选虚拟对象也可以是与主虚拟对象间的距离信息小于或等于第二距离阈值的原始虚拟对象。通过对玩家在不使用远视距道具和使用远视距道具的情况下的视觉范围,确定与该玩家相关联的关联虚拟对象。
其中,上述视距夹角信息是根据待选虚拟对象在应用场景中的位置信息及主虚拟对象的视野方向信息所确定的。其中,该主虚拟对象的视野方向信息可以是单位向量,也可以是单位经纬度等,具体的,该主虚拟对象的视野方向信息的表示方式是根据该应用场景的实现方式所确定的,在此不做限制。例如,当应用场景是以基准点(0,0,0)进行创建的,则该主虚拟对象的朝向信息可以为(x1,y1,z1),x、y及z均为0到1间的数值,表示该主虚拟对象的朝向,即主虚拟对象的视野方向;获取待选虚拟对象在应用场景中的位置信息(x2,y2,z2),以及主虚拟对象在应用场景中的位置信息(x3,y3,z3),通过待选虚拟对象的位置信息(x2,y2,z2)与主虚拟对象的位置信息(x3,y3,z3)得到待选虚拟对象与主虚拟对象间的方向向量,根据该方向向量与主虚拟对象的视野方向信息确定该待选虚拟对象与主虚拟对象间的视距夹角信息。以上为一种可能的视距夹角信息的确定方式,其他可以得到待选虚拟对象与主虚拟对象的视野方向信息间的视距夹角信息的方式,在此不做限制。
举例来说,在游戏场景中,对于玩家(主虚拟对象)的可视化范围内的所有虚拟对象可以认为是与该玩家相关联的关联虚拟对象,即与玩家间的距离信息小于或等于第一距离阈值的原始虚拟对象;当玩家使用狙击镜望远镜等远视距道具进行观察时,会增加该玩家的视觉范围,在玩家使用远视距道具后的视觉范围内的虚拟对象也可以认为是与该玩家相关联的关联虚拟对象,其中,第二距离阈值为玩家使用远视距道具时所能观察到的最远距离,可视化角度阈值为玩家使用远视距道具时所能观察到的最大视野角度等。
进一步地,可以参见图6,图6是本申请实施例中提供的一种关联虚拟对象的显示场景示意图。如图6所示,假定主虚拟对象602在未使用远视距道具时在目标客户端的显示页面601中显示帧图像。获取与主虚拟对象602间的距离信息小于或等于第一距离阈值603的原始虚拟对象,确定为关联虚拟对象,将关联虚拟对象的相关信息发送给目标客户端后,目标客户端在显示页面601中显示接收到的关联虚拟对象,其中,关联虚拟对象为显示页面601中所显示的除主虚拟对象602之外的虚拟对象。其中,目标客户端在显示页面601中显示关联虚拟对象时,可以根据各个关联虚拟对象与主虚拟对象602间的距离信息,调整各个关联虚拟对象在显示页面601中所显示的尺寸。当主虚拟对象602使用远视距道具604时,从至少两个原始虚拟对象中获取距离信息大于第一距离阈值603且小于或等于第二距离阈值605 的待选虚拟对象,确定待选虚拟对象与主虚拟对象的视野方向信息间的视距夹角信息,将视距夹角信息小于可视化角度阈值606的待选虚拟对象,确定为与主虚拟对象602相关联的关联虚拟对象,此时,在目标客户端的显示页面601中显示了距离信息小于或等于第一距离阈值603的关联虚拟对象,也显示了视距夹角信息小于可视化角度阈值606的关联虚拟对象。其中,上述第一距离阈值603和第二距离阈值605指的是在应用场景中的实际距离值,而不是在显示页面601中进行显示的页面显示距离,即可以认为第一距离阈值603和第二距离阈值605为应用场景中的位置信息间的距离。
一种通过主虚拟对象的属性类别确定关联虚拟对象的情况下,获取应用场景中的至少两个原始虚拟对象的属性类别,将属性类别为用户角色属性的原始虚拟对象确定为用户虚拟对象,将属性类别为系统角色属性的原始虚拟对象确定为系统虚拟对象;获取用户虚拟对象的群组标签,将群组标签与主虚拟对象的群组标签相同的用户虚拟对象,确定为关联用户虚拟对象;将关联用户虚拟对象及系统虚拟对象,确定为与主虚拟对象相关联的关联虚拟对象。举例来说,在游戏场景中,对于玩家(主虚拟对象)来说,可能需要了解与自己属于同一群组的虚拟对象的位置信息等,因此,可以获取玩家的群组标签,从原始虚拟对象中获取属性类别为用户角色属性的用户虚拟对象,从用户虚拟对象中获取与主虚拟对象的群组标签相同的关联用户虚拟对象,该用户虚拟对象为该游戏场景中除上述玩家之外的非NPC对象。其中,当多个用户虚拟对象组队进行该游戏场景或进入该游戏场景后进行组队,则为组队的多个用户虚拟对象添加相同的群组标签,以表示上述多个用户虚拟对象属于同一个群组。通过主虚拟对象的属性类别确定关联虚拟对象,可以在目标客户端的显示页面中,为主虚拟对象显示与主虚拟对象属于同一群组的虚拟对象,以使得主虚拟对象可以实时查看与自己同一群组的虚拟对象的位置信息等,即可以实时获取到自己的队友的情况,提高了同一群组中的虚拟对象间的交互性。
可以结合主虚拟对象的视觉范围及属性类别确定与主虚拟对象相关联的关联虚拟对象,将距离信息小于或等于第一距离阈值的原始虚拟对象,视距夹角信息小于可视化角度阈值的待选虚拟对象,以及关联用户虚拟对象,确定为与主虚拟对象相关联的关联虚拟对象,其中,该关联用户虚拟对象是基于群组标签所确定的,具体是获取属性类别为用户角色属性的原始虚拟对象,确定为用户虚拟对象,将群组标签与主虚拟对象的群组标签相同的用户虚拟对象确定为关联用户虚拟对象。
其中,在上述通过主虚拟对象的视觉范围确定关联虚拟对象的情况中,上述距离关系列表包括各个系统虚拟对象分别与各个用户虚拟对象间的距离信息。获取每个原始虚拟对象与主虚拟对象间的距离信息所属的距离范围,获取每个原始虚拟对象的更新缓存时间;获取距离范围对应的列表更新时间阈值,将至少两个原始虚拟对象中更新缓存时间与第三系统网络时间的差值大于或等于列表更新时间阈值的原始虚拟对象,确定为目标虚拟对象;更新距离关系列表中目标虚拟对象与主虚拟对象间的距离信息。通过对每个原始虚拟对象与主虚拟对象间的距离信息进行划分, 得到多个距离范围,对每个距离范围设置一个列表更新时间阈值,使得不同的距离范围内对应的距离信息经过不同的列表更新时间阈值进行更新,以使距离信息较小时,更新频率较大,距离信息较大时,更新频率较小,从而在不影响主虚拟对象的帧图像显示的情况下,减少对距离关系列表的更新频率,减少距离关系列表的更新数据量。
步骤S502,获取该关联虚拟对象的角色变量中的第一类型变量。
具体的,获取关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量。其中,该第一更新频率类型的变量可以认为是在应用场景中每一帧都会发生变化的变量。具体的,获取主虚拟对象对应的类对象集合,该类对象集合包括至少两个对象变量子集;从类对象集合的至少两个对象变量子集中获取关联虚拟对象对应的对象变量子集,其中,该对象变量子集包括关联虚拟对象的角色变量中属于第一更新频率类型的变量;从对象变量子集中获取关联虚拟对象的第一类型变量。其中,该第一类型变量可以包括位置信息、视野方向信息、速度信息、加速度信息、路径信息及布娃娃(Ragdoll)信息等,由于加速度信息及视野方向信息等可以由客户端根据接收到的变更数据计算得到,因此,可以认为计算机设备不需要对该加速度信息及视野方向信息等进行处理。其中,Ragdoll信息为关联虚拟对象的骨骼信息,用于渲染出对应的关联虚拟对象的角色形态,即可以用于渲染出在显示页面中显示出来的关联虚拟对象。其中,该第一类型变量的获取过程还可以参见图2中的步骤S201的具体描述,在此不做赘述。
其中,第一类型变量包括历史路径信息时,获取关联虚拟对象在目标时间范围内的运动轨迹,基于运动轨迹确定上述关联虚拟对象的运动位置点,将运动位置点确定为关联虚拟对象的路径信息,将第一类型变量中的历史路径信息更新为路径信息;该目标时间范围是指更新第一类型变量中的路径信息的间隔时长。换句话说,可以对关联虚拟对象在目标时间范围内的运动轨迹进行预测,基于该运动轨迹确定该关联虚拟对象的运动位置点,将历史路径信息更新为基于运动位置点确定的路径信息,记录该路径信息的路径更新时间,在该路径更新时间基础上,经过目标时间范围后,再次对该关联虚拟对象的路径信息进行预测。举例来说,假定目标时间范围为2s,记录的路径更新时间为6时35分05秒,则在该路径更新时间的基础上,下次对路径信息进行更新的时间为6时35分07秒,进一步地,该路径更新时间可以精确到毫秒等,且每一次对路径信息进行更新,都对路径更新时间进行同步更新。
进一步地,可以参见图7,图7是本申请实施例提供的一种第一类型变量确定场景示意图。如图7所示,获取主虚拟对象对应的类对象集合701,该类对象集合701包括至少两个对象变量子集,包括对象变量子集7011、对象变量子集7012、...及对象变量子集7013等,每个对象变量子集中包括对应关联虚拟对象的角色变量中属于第一更新频率类型的变量,即对应关联虚拟对象的第一类型变量。如图7中所示,对象变量子集7011中包括关联虚拟对象1的第一类型变量,对象变量子集7012中包括关联虚拟对象2的第一类型变量,...,对象变量子集7013中包括关联虚拟对 象3的第一类型变量。从各个对象变量子集中可以获取到对应关联虚拟对象的第一类型变量。其中,以关联虚拟对象1更新702为例,该关联虚拟对象1的角色变量703中包括第一类型变量7031、第二类型变量7032、其他数据7033及全量更新时间7034,当确定该关联虚拟对象1为与主虚拟对象相关联的虚拟对象,则将该关联虚拟对象1的第一类型变量7031作为对象变量子集7011添加至类对象集合701中。其中,对于其他的虚拟对象也包括第一类型变量、第二类型变量、其他数据及全量更新时间等,该更新过程可以参见关联虚拟对象1的更新过程。因此,可以在确定与主虚拟对象相关联的关联虚拟对象后,获取该关联虚拟对象的角色变量,从该角色变量中获取第一类型变量,以图7为例,获取到与主虚拟对象关联的关联虚拟对象1后,获取该关联虚拟对象1的角色变量703,获取该角色变量703中的第一类型变量7031。该类对象集合可以认为是Holder类对象集合,该Holder是通过原始值的复制传递变量的类,为不可变的对象引用提供一个可变的包装。
可以认为该应用场景中包括多少个主虚拟对象,就包括多少个类对象集合,每个类对象集合对应一个主虚拟对象,用于处理对应主虚拟对象的第一类型变量。换句话说,当计算机设备获取到该应用场景中的主虚拟对象后,为每一个主虚拟对象创建一个类对象集合,并获取与每个主虚拟对象关联的关联虚拟对象,将关联虚拟对象的第一类型变量添加至对应主虚拟对象的类对象集合中。
步骤S503,对第一类型变量进行压缩处理,得到压缩数据,将压缩数据封装成主虚拟对象的第一变更数据。
具体的,对第一类型变量进行压缩处理,得到压缩数据,将压缩数据进行封装得到主虚拟对象的第一变更数据。
具体的,一种压缩情况下,第一类型变量包括位置信息;获取关联虚拟对象的位置信息及位置精度,基于位置精度将位置信息转换为整型位置数据,将整型位置数据确定为压缩数据,该整型位置数据为有损数据。其中,第一类型变量中数据类型为浮点型数据的变量,也可以通过上述位置信息的压缩方式进行压缩,由于浮点型数据所占的内存比整型数据所占的内存大,所以通过这种方式可以减少第一类型变量所占的内存大小。例如,位置信息为以(X,Y,Z)进行表示的数据,位置精度为0.2厘米,使用22比特存储位置信息的X和Y,使用20比特存储位置信息的Z等,该位置信息的X、Y及Z的内存大小可以根据应用场景的具体实现所确定,位置精度也可以是其他的数值。例如,一个关联虚拟对象的位置信息中的X为1001.252,基于位置精度对X进行压缩,得到压缩后的X为1001.2,将1001.2更新为整型位置数据中的X,为5006。
一种压缩情况下,在待更新帧中,获取第一类型变量,在主虚拟对象所在的历史帧中,获取第一类型变量对应的历史缓存变量,其中,该历史帧为待更新帧的上一帧;获取历史缓存变量与第一类型变量间的第一差异变量;对第一差异变量进行编码,得到压缩数据。其中,该编码可以是哈夫曼编码等可以对数据进行压缩或简化的编码方式。其中,对于第一类型变量中相邻帧间的数据为递变关系的变量,均 可以通过该方式进行压缩,如在第二帧的数据是基于在第一帧的数据进行变化,在第三帧的数据是基于在第二帧的数据进行变化等的变量。举例来说,对于主虚拟对象的位置信息,在相邻帧之间是依次变化的,可以获取该位置信息的历史缓存变量,获取历史缓存变量与位置信息间的第一差异变量,对第一差异变量进行编码,得到压缩数据,假定该位置信息的历史缓存变量为(100,201,5),该位置信息为(101,201.2,5),则获取历史缓存变量与位置信息间的第一差异变量为(1,0.2,0),再对第一差异变量(1,0.2,0)进行编码,得到压缩数据。
具体的,可以参见图8,图8是本申请实施例提供的一种基于历史帧进行更新的数据处理示意图。如图8所示,在主虚拟对象802所在的目标客户端的显示页面801中显示了关联虚拟对象803a及关联虚拟对象804a,即可以认为在显示页面801中显示的是历史帧,获取该关联虚拟对象803a的位置信息(即历史缓存变量)为(201,100,5),以及该关联虚拟对象804a的位置信息(即历史缓存变量)为(10,21,5)。在待更新帧中,获取关联虚拟对象803b的位置信息(204,97,5),以及关联虚拟对象804b的位置信息(12,25,5)。其中,可以认为历史帧为待更新帧的上一帧,关联虚拟对象803a的位置信息为关联虚拟对象803b的位置信息的历史缓存变量,得到关联虚拟对象803b的第一差异变量为(3,-3,0);关联虚拟对象804a的位置信息为关联虚拟对象804b的位置信息的历史缓存变量,得到关联虚拟对象804b的第一差异变量为(2,4,0),对第一差异变量(3,-3,0)及第一差异变量(2,4,0)进行编码,将编码结果发送给目标客户端。目标客户端在接收到编码结果后,还原得到第一差异变量(3,-3,0)及第一差异变量(2,4,0),基于第一差异变量(3,-3,0)在显示页面801中显示关联虚拟对象803b,基于第一差异变量(2,4,0)在显示页面801中显示关联虚拟对象804b。换句话说,目标客户端对显示页面801中的关联虚拟对象803a及关联虚拟对象804a进行更新,得到关联虚拟对象803b及关联虚拟对象804b,此时,在显示页面801中不存在关联虚拟对象803a及关联虚拟对象804a,即将显示页面801中显示的历史帧变更为待更新帧。
一种压缩情况下,第一类型变量包括关联虚拟对象的对象标识;获取关联虚拟对象的前继关联虚拟对象的对象标识,获取关联虚拟对象的对象标识与前继关联虚拟对象的对象标识间的第二差异变量,其中,前继关联虚拟对象与关联虚拟对象为依次处理关系;对第二差异变量进行编码,得到压缩数据。举例来说,对于关联虚拟对象的对象标识来说,不同的对象标识间的差异性较小,可以通过该种方式进行压缩。例如,与主虚拟对象关联的关联虚拟对象有3个,第一个关联虚拟对象的对象标识为101,第二个关联虚拟对象的对象标识为103,第三个关联虚拟对象的对象标识为107,则可以认为第一个关联虚拟对象为第二个关联虚拟对象的前继关联虚拟对象,第二个关联虚拟对象为第三个关联虚拟对象的前继关联虚拟对象,根据该压缩方法,可以得到第二个关联虚拟对象的第二差异变量为(103-101-1=1),得到第三个关联虚拟对象的第二差异变量为(107-103-1=3),其中,“-1”是由于两个关联虚拟对象的对象标识不可能相同,获取第二差异变量时也可以不进行“-1”。
其中,上述各个压缩方法可以组合执行,例如,对于位置信息来说,可以将该位置信息的第二差异变量转换为整型位置数据,对整型位置数据进行编码,得到压缩数据,或者,将位置信息转换为整型位置数据,根据整型位置数据得到该位置信息的第二差异变量,对第二差异变量进行编码,得到压缩数据。
若第一类型变量包括Ragdoll信息时,可以直接将该Ragdoll信息作为压缩数据。其中,该Ragdoll信息与位置信息相关,因此,可以将Ragdoll信息视为属于第一更新频率类型的变量。其中,当关联虚拟对象包括多个第一类型变量时,得到每个第一类型变量的压缩数据后,将所有压缩数据进行封装得到第一变更数据。
步骤S504,将主虚拟对象的第一变更数据发送给主虚拟对象所在的目标客户端。
具体的,将主虚拟对象的第一变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于第一变更数据进行帧图像更新显示。其中,当目标客户端接收到第一变更数据后,对第一变更数据进行还原,得到第一类型变量,根据该第一类型变量在显示页面中进行帧图像更新显示。
步骤S505,确定第二类型变量发生变化。
具体的,若接收到针对关联虚拟对象的第二类型变量的触发信息,则确定第二类型变量发生变化,执行步骤S506。或者,获取关联虚拟对象的第二类型变量的历史更新时间及第一系统网络时间,其中,第二类型变量为角色变量中属于第二更新频率类型的变量;若第一系统网络时间与历史更新时间的差值大于或等于第二类型变量更新时间阈值,则确定第二类型变量发生变化,执行步骤S506。其中,该触发信息可以认为是会造成对应第二类型变量发生数据更新的操作信息,例如,当关联虚拟对象被击中,该关联虚拟对象的血量信息及位移动画参数等会发生变化,该“被击中”可以认为是针对关联虚拟对象的血量信息及位移动画参数等的触发信息。可以在未接收到针对关联虚拟对象的第二类型变量的触发信息时,但第一系统网络时间与历史更新时间的差值大于或等于第二类型变量更新时间阈值时,确定需要处理第二类型变量,执行步骤S506;或者,可以在接收到针对关联虚拟对象的第二类型变量的触发信息时,但第一系统网络时间与历史更新时间的差值小于第二类型变量更新时间阈值时,确定需要处理第二类型变量,执行步骤S506;或者,可以在接收到针对关联虚拟对象的第二类型变量的触发信息时,且第一系统网络时间与历史更新时间的差值大于或等于第二类型变量更新时间阈值时,确定需要处理第二类型变量,执行步骤S506。
接收到针对关联虚拟对象的第二类型变量的触发信息,或者第一系统网络时间与历史更新时间的差值大于或等于第二类型变量更新时间阈值,两者任意一个条件满足时,就无需判断另一个条件是否满足,可以直接确定需要处理第二类型变量,执行步骤S506。其中,当第一个判断的条件不满足时,判断另一个条件是否满足。其中,两个条件的判断过程的执行顺序不做限定。
步骤S506,获取该关联虚拟对象的角色变量中的第二类型变量,将第二类型变 量封装成第二变更数据。
具体的,获取上述第二类型变量,该第二类型变量为所述角色变量中属于第二更新频率类型的变量,将第二类型变量封装成主虚拟对象的第二变更数据。其中,当第一系统网络时间与历史更新时间的差值大于或等于第二类型变量更新时间阈值时,可以认为将第二类型变量封装成主虚拟对象的第三变更数据。换句话说,可以认为第二类型变量为一些触发式的变量,例如虚拟对象中受击信息变更、攻击行为及位移动画参数等,当第二类型变量发生变更时,可以针对该第二类型变量对应的关联虚拟对象执行参与者复制(Actor Replication)方法,以对第二类型变量进行全量复制。
步骤S507,将主虚拟对象的第二变更数据发送给主虚拟对象所在的目标客户端。
具体的,将第二变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于第二变更数据进行帧图像更新显示。其中,若得到的是第三变更数据,则将第三变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于第三变更数据进行帧图像更新显示。
具体的,可以参见图7,当关联虚拟对象1的第二类型变量7041更新时,将该第二类型变量7041加入全量更新列表704中,该全量更新列表704中包括本次数据处理过程中需要进行更新的关联虚拟对象的第二类型变量。例如,在本次数据处理过程中,关联虚拟对象1的第二类型变量7041、关联虚拟对象2的第二类型变量7042以及关联虚拟对象3的第二类型变量7043发生更新,则将关联虚拟对象1的第二类型变量7041、关联虚拟对象2的第二类型变量7042以及关联虚拟对象3的第二类型变量7043添加至全量更新列表704中。
针对上述步骤S505至步骤S507,举例来说,参见图9,图9是本申请实施例提供的一种第二类型变量触发场景示意图。如图9所示,假定主虚拟对象902的关联虚拟对象903a的第二类型变量包括血量信息904a及受击信息等,以血量信息904a为例,当该主虚拟对象902所在的目标客户端的显示页面901中显示有关联虚拟对象903a,及该关联虚拟对象903a的血量信息904a时,主虚拟对象902使用远距离射击工具905向关联虚拟对象903a进行射击,其中,该血量信息904a为100%,该远距离射击工具905可以是弓弩、枪械及爆破类工具等。假定远距离射击工具905为枪械,当枪械中射击出的子弹906射中关联虚拟对象903a时,计算机设备接收到关联虚拟对象903a被射击操作,即接收到针对该关联虚拟对象903a的第二类型变量的触发信息,获取该关联虚拟对象903b的第二类型变量,其中,关联虚拟对象903a与关联虚拟对象903b可以认为是在相邻帧中显示的同一个虚拟对象,此处通过不同标号表示该虚拟对象的角色变量发生了变化。当计算机设备获取到关联虚拟对象903b的血量信息904b为80%时,将血量信息904b进行封装得到第二变更数据,将第二变更数据发送给目标客户端。目标客户端基于接收到的第二变更数据还原得到血量信息904b,在显示页面901中显示关联虚拟对象903b及该关联虚拟对象903b 的血量信息904b。当关联虚拟对象903a被子弹906射中时,该关联虚拟对象903a由于与子弹906间的受力作用,关联虚拟对象903a的位移动画参数也可能发生变化,此时,上述触发信息还针对关联虚拟对象903a的位移动画参数。其中,对于关联虚拟对象来说,同一个触发信息可能是针对至少一个第二类型变量的,在此不做限制。
计算机设备在接收到关联虚拟对象903a被射击操作后,可以基于射中关联虚拟对象903a的子弹906的伤害值、关联虚拟对象903a被射中的部位、及该关联虚拟对象903a的血量信息904a等,获取到关联虚拟对象903b的血量信息904b。或者,计算机设备在获取到子弹906的伤害值、关联虚拟对象903a被射中的部位,确定该关联虚拟对象903a的血量减少数值,将血量减少数值封装至第二变更数据中,目标客户端还原得到血量减少数值后,基于上一帧的血量信息904a及该血量减少数值,得到血量信息904b,在显示页面901中显示该血量信息904b。
其中,上述对第一类型变量的处理过程(步骤S502至步骤S504)与对第二类型变量的处理过程(步骤S505至步骤S507)可以为同步执行,也可以为异步执行,且两者的执行顺序不做限定。在通过执行步骤S502至步骤S503后得到压缩数据,且通过执行步骤S505至步骤S506后得到第二类型变量,将压缩数据和第二类型变量进行封装,得到目标变更数据,将目标变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于目标变更数据进行帧图像更新显示。具体参见图7,在得到与主虚拟对象关联的各个关联虚拟对象的第一类型变量后,将各个关联虚拟对象的第一类型变量分别作为一个对象变量子集,添加至类对象集合701中;获取本次数据处理过程中数据发生更新的第二类型变量,将数据发生更新的第二类型变量添加至全量更新列表704中。对类对象集合701中所包含的各个对象变量子集的数据进行压缩,得到每个对象变量子集对应的压缩数据。计算机设备对类对象集合701中处理之后得到的压缩数据,以及全量更新列表704中包括的各个第二类型数据进行封装,得到目标变更数据,将目标变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于目标变更数据进行帧图像更新显示。
在执行上述步骤S502后,获取关联虚拟对象的第一类型变量的历史全量更新时间及第二系统网络时间;若历史全量更新时间与第二系统网络时间的差值大于或等于全量更新时间阈值,则将第一类型变量封装成全量变更数据,将全量变更数据发送给目标客户端,以使目标客户端基于全量变更数据进行帧图像更新显示,并基于第二系统网络时间更新历史全量更新时间;若历史全量更新时间与第二系统网络时间的差值小于全量更新时间阈值,则执行步骤S503,对第一类型变量进行压缩处理,得到压缩数据。该历史全量更新时间可以包含于对应关联虚拟对象的角色变量中,如图7中所示,该历史全量更新时间为关联虚拟对象1的角色变量703中的全量更新时间7034。该历史全量更新时间也可以通过其他方式进行存储,在此不做限制。其中,在对角色变量进行处理时,可以通过对象标识与对应的关联虚拟对象进行关联,以表示该角色变量所属的关联虚拟对象。
进一步地,还可以将虚拟对象的角色变量划分为至少两个类别,如上述属于第 一更新频率类型的第一类型变量,以及属于第二更新频率类型的第二类型变量。除这种划分方式外,还可以基于不同的更新频率类型将角色变量划分为更多类型的变量,如划分为三种类型变量,或者四种类型变量,或者五种类型变量等等,在不同的划分情况下,每种类型变量所属的更新频率类型不同,其中,在同一种划分情况下,可以通过树状结构管理不同的更新频率类型,对于不同的更新频率类型对应的变量的数据处理方法不相同。举例来说,参见图10是本申请实施例提供的一种变量划分场景示意图。如图10所示,第一种划分情况下,基于第一更新频率类型及第二更新频率类型对角色变量进行划分,将角色变量中属于第一更新频率类型的变量,作为第一类型变量;将角色变量中属于第二更新频率类型的变量,作为第二类型变量;其中,第一类型变量及第二类型变量的数据处理方法不相同,其中,可以通过第一类型变量与第二类型变量渲染出一个完整的关联虚拟对象。第二种划分情况下,基于第三更新频率类型、第四更新频率类型及第五更新频率类型对角色变量进行划分,将角色变量中属于第三更新频率类型的变量,作为第三类型变量;将角色变量中属于第四更新频率类型的变量,作为第四类型变量;将角色变量中属于第五更新频率类型的变量,作为第五类型变量;其中,第三类型变量、第四类型变量及第五类型变量的数据处理方法不相同,其中,可以通过第三类型变量、第四类型变量及第五类型变量渲染出一个完整的关联虚拟对象。同理,也可以将角色变量通过其他划分方式进行分类。
本申请实施例通过获取与主虚拟对象相关联的关联虚拟对象,获取关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量,对第一类型变量进行压缩处理,得到压缩数据,将压缩数据进行封装得到主虚拟对象的第一变更数据,将第一变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于第一变更数据进行帧图像更新显示。并在第二类型变量发生变化时对第二类型变量进行更新,或者对第二类型变量进行周期性更新,且对第一类型变量继续周期性全量更新,从而解决在全量更新之前可能出现的丢包问题。通过上述过程,对关联虚拟对象的角色变量进行分类处理,并对属于第一更新频率类型的第一类型变量进行压缩处理,大大减少了需要进行处理的数据量及需要发送的数据量,提高了数据处理效率。
参见图11,图11是本申请实施例提供的一种数据处理装置示意图。上述数据处理装置可以是运行于计算机设备中的一个计算机程序(包括程序代码),例如该数据处理装置为一个应用软件;该装置可以用于执行本申请实施例提供的方法中的相应步骤。如图11所示,该数据处理装置110可以用于上述图2所对应实施例中的计算机设备,具体的,该数据处理装置110可以包括:第一获取模块11、压缩模块12、第一封装模块13及第一发送模块14。
第一获取模块11,用于获取与主虚拟对象相关联的关联虚拟对象,获取上述关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量;
压缩模块12,用于对上述第一类型变量进行压缩处理,得到压缩数据;
第一封装模块13,用于将上述压缩数据进行封装得到上述主虚拟对象的第一变更数据;
第一发送模块14,用于将上述主虚拟对象的第一变更数据发送给上述主虚拟对象所在的目标客户端,以使上述目标客户端基于上述第一变更数据进行帧图像更新显示。
其中,上述装置110还包括:
第二获取模块15,用于若接收到针对上述关联虚拟对象的第二类型变量的触发信息,则获取上述第二类型变量;上述第二类型变量为上述角色变量中属于第二更新频率类型的变量;
第二封装模块16,用于将上述第二类型变量封装成上述主虚拟对象的第二变更数据;
第二发送模块17,用于将上述第二变更数据发送给上述主虚拟对象所在的目标客户端,以使上述目标客户端基于上述第二变更数据进行帧图像更新显示。
其中,上述装置110还包括:
第三获取模块18,用于获取上述关联虚拟对象的第二类型变量的历史更新时间及第一系统网络时间;上述第二类型变量为上述角色变量中属于第二更新频率类型的变量;
第三封装模块19,用于若上述第一系统网络时间与上述历史更新时间的差值大于或等于第二类型变量更新时间阈值,则将上述第二类型变量封装成上述主虚拟对象的第三变更数据;
第三发送模块20,用于将上述第三变更数据发送给上述主虚拟对象所在的目标客户端,以使上述目标客户端基于上述第三变更数据进行帧图像更新显示。
其中,上述第一类型变量包括位置信息;
上述压缩模块12,包括:
第一获取单元121,用于获取上述关联虚拟对象的上述位置信息及位置精度;
确定单元122,用于基于上述位置精度将上述位置信息转换为整型位置数据,将上述整型位置数据确定为上述压缩数据。
其中,上述压缩模块12,包括:
第二获取单元123,用于在待更新帧中,获取上述第一类型变量,在上述主虚拟对象所在的历史帧中,获取上述第一类型变量对应的历史缓存变量;上述历史帧为上述待更新帧的上一帧;
第三获取单元124,用于获取上述历史缓存变量与上述第一类型变量间的第一差异变量;
第一生成单元125,用于对上述第一差异变量进行编码,得到上述压缩数据。
其中,上述第一类型变量包括上述关联虚拟对象的对象标识;
上述压缩模块12,包括:
第四获取单元126,用于获取上述关联虚拟对象的前继关联虚拟对象的对象标 识,获取上述关联虚拟对象的对象标识与上述前继关联虚拟对象的对象标识间的第二差异变量;上述前继关联虚拟对象与上述关联虚拟对象为依次处理关系;
第二生成单元127,用于对上述第二差异变量进行编码,得到上述压缩数据。
其中,上述第一类型变量包括历史路径信息;
上述第一获取模块11,包括:
路径获取单元111,用于获取上述关联虚拟对象在目标时间范围内的运动轨迹,基于上述运动轨迹确定上述关联虚拟对象的运动位置点,将上述运动位置点确定为上述关联虚拟对象的路径信息,将上述第一类型变量中的上述历史路径信息更新为上述路径信息;上述目标时间范围是指更新上述第一类型变量中的上述路径信息的间隔时长。
其中,上述装置110还包括:
第四获取模块21,用于获取上述关联虚拟对象的上述第一类型变量的历史全量更新时间及第二系统网络时间;
第四封装模块22,用于若上述历史全量更新时间与上述第二系统网络时间的差值大于或等于全量更新时间阈值,则将上述第一类型变量的参数封装成全量变更数据,将上述全量变更数据发送给上述目标客户端,以使上述目标客户端基于上述全量变更数据进行帧图像更新显示;
上述第四封装模块22,还用于若上述历史全量更新时间与上述第二系统网络时间的差值小于上述全量更新时间阈值,则通过上述压缩模块12执行对所述第一类型变量进行压缩处理,得到压缩数据的步骤。
其中,上述装置110还包括:
距离获取模块23,用于从距离关系列表中获取上述主虚拟对象分别与至少两个原始虚拟对象间的距离信息;
视距确定模块24,用于从上述至少两个原始虚拟对象中获取上述距离信息大于第一距离阈值且小于或等于第二距离阈值的待选虚拟对象,确定上述待选虚拟对象与上述主虚拟对象的视野方向信息间的视距夹角信息;
关联确定模块25,用于将上述距离信息小于或等于上述第一距离阈值的原始虚拟对象,或者上述视距夹角信息小于可视化角度阈值的待选虚拟对象,确定为与上述主虚拟对象相关联的上述关联虚拟对象。
其中,上述装置110还包括:
属性分类模块26,用于获取应用场景中的至少两个原始虚拟对象的属性类别,将上述属性类别为用户角色属性的原始虚拟对象确定为用户虚拟对象,将上述属性类别为系统角色属性的原始虚拟对象确定为系统虚拟对象;
用户关联模块27,用于获取上述用户虚拟对象的群组标签,将上述群组标签与上述主虚拟对象的群组标签相同的用户虚拟对象,确定为关联用户虚拟对象;
上述关联确定模块25,还用于将上述关联用户虚拟对象及上述系统虚拟对象,确定为与上述主虚拟对象相关联的上述关联虚拟对象。
其中,上述装置110还包括:
第五获取模块28,用于获取每个原始虚拟对象与上述主虚拟对象间的距离信息所属的距离范围,获取上述每个原始虚拟对象的更新缓存时间;
目标确定模块29,用于获取上述距离范围对应的列表更新时间阈值,将上述至少两个原始虚拟对象中更新缓存时间与第三系统网络时间的差值大于或等于上述列表更新时间阈值的原始虚拟对象,确定为目标虚拟对象;
更新模块30,用于更新上述距离关系列表中上述目标虚拟对象与上述主虚拟对象间的距离信息。
其中,上述第一获取模块11,包括:
类获取单元112,用于获取上述主虚拟对象对应的类对象集合;上述类对象集合包括至少两个对象变量子集;
子集获取单元113,用于从上述类对象集合的上述至少两个对象变量子集中获取上述关联虚拟对象对应的对象变量子集;上述对象变量子集包括上述关联虚拟对象的角色变量中属于上述第一更新频率类型的变量;
变量获取单元114,用于从上述对象变量子集中获取上述关联虚拟对象的上述第一类型变量。
本申请实施例描述了数据处理装置,上述装置通过获取与主虚拟对象相关联的关联虚拟对象,获取关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量,对第一类型变量进行压缩处理,得到压缩数据,将压缩数据进行封装得到主虚拟对象的第一变更数据,将第一变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于第一变更数据进行帧图像更新显示。通过上述过程,对关联虚拟对象的角色变量进行分类处理,并对属于第一更新频率类型的第一类型变量进行压缩处理,大大减少了需要进行处理的数据量及需要发送的数据量,提高了数据处理效率。
参见图12,图12是本申请实施例提供的一种计算机设备的结构示意图。如图12所示,本申请实施例中的计算机设备1200可以包括:一个或多个处理器1201、存储器1202和输入输出接口1203。上述处理器1201、存储器1202和输入输出接口1203通过总线1204连接。存储器1202用于存储计算机程序,该计算机程序包括程序指令,输入输出接口1203用于输入数据和输出数据,包括各个通讯客户端与事件服务器间的数据交互,以及用户与各个通讯客户端间的数据交互;处理器1201用于执行存储器1202存储的程序指令,执行如下操作:
获取与主虚拟对象相关联的关联虚拟对象,获取关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量;
对上述第一类型变量进行压缩处理,得到压缩数据,将上述压缩数据进行封装得到上述主虚拟对象的第一变更数据;
将上述主虚拟对象的第一变更数据发送给上述主虚拟对象所在的目标客户端,以使上述目标客户端基于上述第一变更数据进行帧图像更新显示。
在一些实施方式中,上述处理器1201可以是中央处理单元(central processing unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该存储器1202可以包括只读存储器和随机存取存储器,并向处理器1201和输入输出接口1203提供指令和数据。存储器1202的一部分还可以包括非易失性随机存取存储器。例如,存储器1202还可以存储设备类型的信息。
具体实现中,上述计算机可通过其内置的各个功能模块执行如上述图2或图5中各个步骤所提供的实现方式,具体可参见上述图2或图5中各个步骤所提供的实现方式,在此不再赘述。
本申请实施例通过提供一种计算机,包括:处理器、输入输出接口、存储器,通过处理器获取存储器中的计算机指令,执行上述图2或图5中所示方法的各个步骤,进行数据处理操作。通过存储器中的计算机指令,处理器执行以下步骤:获取与主虚拟对象相关联的关联虚拟对象,获取关联虚拟对象的角色变量中属于第一更新频率类型的变量,作为第一类型变量,对第一类型变量进行压缩处理,得到压缩数据,将压缩数据进行封装得到主虚拟对象的第一变更数据,将第一变更数据发送给主虚拟对象所在的目标客户端,以使目标客户端基于第一变更数据进行帧图像更新显示。并在第二类型变量发生变化时对第二类型变量进行更新,或者对第二类型变量进行周期性更新,且对第一类型变量继续周期性全量更新,从而解决在全量更新之前可能出现的丢包问题。通过上述过程,对关联虚拟对象的角色变量进行分类处理,并对属于第一更新频率类型的第一类型变量进行压缩处理,大大减少了需要进行处理的数据量及需要发送的数据量,提高了数据处理效率。
An embodiment of this application further provides a computer program product or a computer program, including computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the data processing method shown in FIG. 2 or FIG. 5.
An embodiment of this application further provides a computer-readable storage medium storing a computer program, the computer program including program instructions. When executed by a processor, the program instructions implement the data processing method provided in the steps of FIG. 2 or FIG. 5. For details, refer to the implementations provided in the steps of FIG. 2 or FIG. 5, which are not repeated here.
The computer-readable storage medium may be the data processing apparatus provided in any of the foregoing embodiments or an internal storage unit of the computer device, such as a hard disk or memory of the computer device. The computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the computer device. The computer-readable storage medium is configured to store the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
The terms "first", "second", and the like in the specification, claims, and accompanying drawings of the embodiments of this application are used to distinguish different objects rather than to describe a particular order. In addition, the term "include" and any variant thereof are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, product, or device that includes a series of steps or units is not limited to the listed steps or modules, but may further include steps or modules that are not listed, or other steps or units inherent to the process, method, apparatus, product, or device.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
The method and related apparatus provided in the embodiments of this application are described with reference to the method flowcharts and/or schematic structural diagrams provided in the embodiments of this application. Specifically, each flow and/or block of the method flowcharts and/or schematic structural diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the schematic structural diagrams. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the schematic structural diagrams. These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the schematic structural diagrams.
What is disclosed above is merely a preferred embodiment of this application and certainly is not intended to limit the scope of the claims of this application. Therefore, equivalent variations made in accordance with the claims of this application shall fall within the scope of this application.

Claims (20)

  1. A data processing method, performed by a computer device, the method comprising:
    obtaining an associated virtual object associated with a main virtual object, and obtaining, from role variables of the associated virtual object, a variable belonging to a first update-frequency type as a first-type variable;
    compressing the first-type variable to obtain compressed data, and packaging the compressed data to obtain first change data of the main virtual object; and
    sending the first change data of the main virtual object to a target client where the main virtual object is located, so that the target client updates and displays a frame image based on the first change data.
  2. The method according to claim 1, further comprising:
    obtaining a second-type variable of the associated virtual object if trigger information for the second-type variable is received, the second-type variable being a variable of the role variables that belongs to a second update-frequency type; and
    packaging the second-type variable into second change data of the main virtual object, and sending the second change data to the target client where the main virtual object is located, so that the target client updates and displays a frame image based on the second change data.
  3. The method according to claim 1, further comprising:
    obtaining a historical update time of a second-type variable of the associated virtual object and a first system network time, the second-type variable being a variable of the role variables that belongs to a second update-frequency type; and
    if a difference between the first system network time and the historical update time is greater than or equal to a second-type-variable update time threshold, packaging the second-type variable into third change data of the main virtual object, and sending the third change data to the target client where the main virtual object is located, so that the target client updates and displays a frame image based on the third change data.
  4. The method according to claim 1, wherein the first-type variable comprises position information; and
    the compressing the first-type variable to obtain compressed data comprises:
    obtaining the position information and a position precision of the associated virtual object, converting the position information into integer position data based on the position precision, and determining the integer position data as the compressed data.
  5. The method according to claim 1, wherein the compressing the first-type variable to obtain compressed data comprises:
    obtaining the first-type variable in a frame to be updated, and obtaining, in a historical frame where the main virtual object is located, a historical cache variable corresponding to the first-type variable, the historical frame being the frame previous to the frame to be updated;
    obtaining a first difference variable between the historical cache variable and the first-type variable; and
    encoding the first difference variable to obtain the compressed data.
  6. The method according to claim 1, wherein the first-type variable comprises an object identifier of the associated virtual object; and
    the compressing the first-type variable to obtain compressed data comprises:
    obtaining an object identifier of a preceding associated virtual object of the associated virtual object, and obtaining a second difference variable between the object identifier of the associated virtual object and the object identifier of the preceding associated virtual object, the preceding associated virtual object and the associated virtual object being in a sequential processing relationship; and
    encoding the second difference variable to obtain the compressed data.
  7. The method according to claim 1, wherein the first-type variable comprises historical path information; and
    the obtaining, from role variables of the associated virtual object, a variable belonging to a first update-frequency type as a first-type variable comprises:
    obtaining a motion trajectory of the associated virtual object within a target time range, determining motion position points of the associated virtual object based on the motion trajectory, determining the motion position points as path information of the associated virtual object, and updating the historical path information in the first-type variable to the path information, the target time range being the interval at which the path information in the first-type variable is updated.
  8. The method according to claim 1, further comprising:
    obtaining a historical full-update time of the first-type variable of the associated virtual object and a second system network time;
    if a difference between the historical full-update time and the second system network time is greater than or equal to a full-update time threshold, packaging the first-type variable into full change data, and sending the full change data to the target client, so that the target client updates and displays a frame image based on the full change data; and
    if the difference between the historical full-update time and the second system network time is less than the full-update time threshold, performing the step of compressing the first-type variable to obtain compressed data.
  9. The method according to claim 1, further comprising:
    obtaining, from a distance relationship list, distance information between the main virtual object and each of at least two original virtual objects;
    obtaining, from the at least two original virtual objects, a candidate virtual object whose distance information is greater than a first distance threshold and less than or equal to a second distance threshold, and determining view-angle information between the candidate virtual object and view-direction information of the main virtual object; and
    determining an original virtual object whose distance information is less than or equal to the first distance threshold, or a candidate virtual object whose view-angle information is less than a visual angle threshold, as the associated virtual object associated with the main virtual object.
  10. The method according to claim 1, further comprising:
    obtaining attribute categories of at least two original virtual objects in an application scene, determining an original virtual object whose attribute category is a user role attribute as a user virtual object, and determining an original virtual object whose attribute category is a system role attribute as a system virtual object;
    obtaining a group tag of the user virtual object, and determining a user virtual object whose group tag is the same as the group tag of the main virtual object as an associated user virtual object; and
    determining the associated user virtual object and the system virtual object as the associated virtual objects associated with the main virtual object.
  11. The method according to claim 9, further comprising:
    obtaining the distance range to which the distance information between each original virtual object and the main virtual object belongs, and obtaining an update cache time of each original virtual object;
    obtaining a list update time threshold corresponding to the distance range, and determining, among the at least two original virtual objects, an original virtual object for which a difference between the update cache time and a third system network time is greater than or equal to the list update time threshold as a target virtual object; and
    updating the distance information between the target virtual object and the main virtual object in the distance relationship list.
  12. The method according to claim 1, wherein the obtaining, from role variables of the associated virtual object, a variable belonging to a first update-frequency type as a first-type variable comprises:
    obtaining a class object set corresponding to the main virtual object, the class object set comprising at least two object variable subsets;
    obtaining, from the at least two object variable subsets of the class object set, an object variable subset corresponding to the associated virtual object, the object variable subset comprising the variables of the role variables of the associated virtual object that belong to the first update-frequency type; and
    obtaining the first-type variable of the associated virtual object from the object variable subset.
  13. A data processing apparatus, comprising:
    a first obtaining module, configured to obtain an associated virtual object associated with a main virtual object, and obtain, from role variables of the associated virtual object, a variable belonging to a first update-frequency type as a first-type variable;
    a compression module, configured to compress the first-type variable to obtain compressed data;
    a first packaging module, configured to package the compressed data to obtain first change data of the main virtual object; and
    a first sending module, configured to send the first change data of the main virtual object to a target client where the main virtual object is located, so that the target client updates and displays a frame image based on the first change data.
  14. The apparatus according to claim 13, further comprising:
    a second obtaining module, configured to obtain a second-type variable of the associated virtual object if trigger information for the second-type variable is received, the second-type variable being a variable of the role variables that belongs to a second update-frequency type;
    a second packaging module, configured to package the second-type variable into second change data of the main virtual object; and
    a second sending module, configured to send the second change data to the target client where the main virtual object is located, so that the target client updates and displays a frame image based on the second change data.
  15. The apparatus according to claim 13, further comprising:
    a third obtaining module, configured to obtain a historical update time of a second-type variable of the associated virtual object and a first system network time, the second-type variable being a variable of the role variables that belongs to a second update-frequency type;
    a third packaging module, configured to, if a difference between the first system network time and the historical update time is greater than or equal to a second-type-variable update time threshold, package the second-type variable into third change data of the main virtual object; and
    a third sending module, configured to send the third change data to the target client where the main virtual object is located, so that the target client updates and displays a frame image based on the third change data.
  16. The apparatus according to claim 13, wherein the first-type variable comprises position information; and
    the compression module comprises:
    a first obtaining unit, configured to obtain the position information and a position precision of the associated virtual object; and
    a determining unit, configured to convert the position information into integer position data based on the position precision, and determine the integer position data as the compressed data.
  17. The apparatus according to claim 13, wherein the compression module comprises:
    a second obtaining unit, configured to obtain the first-type variable in a frame to be updated, and obtain, in a historical frame where the main virtual object is located, a historical cache variable corresponding to the first-type variable, the historical frame being the frame previous to the frame to be updated;
    a third obtaining unit, configured to obtain a first difference variable between the historical cache variable and the first-type variable; and
    a first generating unit, configured to encode the first difference variable to obtain the compressed data.
  18. The apparatus according to claim 13, wherein the first-type variable comprises an object identifier of the associated virtual object; and
    the compression module comprises:
    a fourth obtaining unit, configured to obtain an object identifier of a preceding associated virtual object of the associated virtual object, and obtain a second difference variable between the object identifier of the associated virtual object and the object identifier of the preceding associated virtual object, wherein the preceding associated virtual object and the associated virtual object are in a sequential processing relationship; and
    a second generating unit, configured to encode the second difference variable to obtain the compressed data.
  19. A computer device, comprising a processor, a memory, and an input/output interface;
    the processor being connected to the memory and the input/output interface respectively, wherein the input/output interface is configured to input and output data, the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method according to any one of claims 1 to 12.
  20. A computer-readable storage medium, storing a computer program, the computer program comprising program instructions that, when executed by a processor, perform the method according to any one of claims 1 to 12.
PCT/CN2020/123627 2020-01-13 2020-10-26 数据处理方法、装置、计算机设备以及可读存储介质 WO2021143255A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020227007045A KR20220041906A (ko) 2020-01-13 2020-10-26 데이터 처리 방법 및 장치, 컴퓨터 기기 그리고 판독 가능한 저장 매체
JP2022519177A JP7465959B2 (ja) 2020-01-13 2020-10-26 データ処理方法、装置、コンピュータデバイスおよびコンピュータプログラム
EP20914236.3A EP3991817A4 (en) 2020-01-13 2020-10-26 DATA PROCESSING METHOD AND DEVICE, COMPUTER DEVICE AND READABLE STORAGE MEDIA
US17/671,968 US12017146B2 (en) 2020-01-13 2022-02-15 Updating virtual object frame images corresponding to variables

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010033046.9A CN111228797B (zh) 2020-01-13 2020-01-13 数据处理方法、装置、计算机以及可读存储介质
CN202010033046.9 2020-01-13

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/671,968 Continuation US12017146B2 (en) 2020-01-13 2022-02-15 Updating virtual object frame images corresponding to variables

Publications (1)

Publication Number Publication Date
WO2021143255A1 true WO2021143255A1 (zh) 2021-07-22

Family

ID=70871014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123627 WO2021143255A1 (zh) 2020-01-13 2020-10-26 数据处理方法、装置、计算机设备以及可读存储介质

Country Status (6)

Country Link
US (1) US12017146B2 (zh)
EP (1) EP3991817A4 (zh)
JP (1) JP7465959B2 (zh)
KR (1) KR20220041906A (zh)
CN (1) CN111228797B (zh)
WO (1) WO2021143255A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024060856A1 (zh) * 2022-09-20 2024-03-28 腾讯科技(深圳)有限公司 数据处理方法、装置、电子设备、存储介质和程序产品

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111228797B (zh) 2020-01-13 2021-05-28 腾讯科技(深圳)有限公司 数据处理方法、装置、计算机以及可读存储介质
CN112927332B (zh) * 2021-04-02 2023-06-09 腾讯科技(深圳)有限公司 骨骼动画更新方法、装置、设备及存储介质
CN113797546B (zh) * 2021-09-18 2024-02-23 珠海金山数字网络科技有限公司 资源处理方法、装置、计算设备及计算机可读存储介质
CN114344892B (zh) * 2022-01-04 2023-07-18 腾讯科技(深圳)有限公司 一种数据处理方法和相关装置
CN114630182A (zh) * 2022-02-28 2022-06-14 海信视像科技股份有限公司 一种虚拟现实视频的播放方法及设备
CN115423919B (zh) * 2022-09-14 2023-08-25 阿波罗智联(北京)科技有限公司 图像的渲染方法、装置、设备以及存储介质
CN115619867B (zh) * 2022-11-18 2023-04-11 腾讯科技(深圳)有限公司 数据处理方法、装置、设备、存储介质
CN116208623B (zh) * 2023-05-04 2023-07-14 腾讯科技(深圳)有限公司 信息同步方法、装置、引擎服务器及存储介质
CN117315375B (zh) * 2023-11-20 2024-03-01 腾讯科技(深圳)有限公司 虚拟部件分类方法、装置、电子设备及可读存储介质
CN117899473B (zh) * 2024-03-12 2024-06-04 腾讯科技(深圳)有限公司 图像帧显示方法、装置、计算机设备及存储介质
CN117899474B (zh) * 2024-03-20 2024-06-07 深圳市迷你玩科技有限公司 画面渲染方法、装置、电子设备及可读介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100241692A1 (en) * 2009-03-20 2010-09-23 Sony Computer Entertainment America Inc., a Delaware Corporation Methods and systems for dynamically adjusting update rates in multi-player network gaming
CN106512402A (zh) * 2016-11-29 2017-03-22 北京像素软件科技股份有限公司 一种游戏角色的渲染方法及装置
CN106991713A (zh) * 2017-04-13 2017-07-28 网易(杭州)网络有限公司 更新游戏中的场景的方法和装置、介质、处理器以及终端
CN107911374A (zh) * 2017-11-27 2018-04-13 腾讯科技(上海)有限公司 数据同步方法和装置、存储介质及电子装置
CN110404262A (zh) * 2019-09-03 2019-11-05 网易(杭州)网络有限公司 游戏中的显示控制方法、装置、电子设备以及存储介质
CN111228797A (zh) * 2020-01-13 2020-06-05 腾讯科技(深圳)有限公司 数据处理方法、装置、计算机以及可读存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4555000B2 (ja) * 2003-06-20 2010-09-29 株式会社エヌ・ティ・ティ・ドコモ アプリケーションの適応的レプリケーションをサーバサイドコードユニットを用いて実行する方法および装置
JP5587800B2 (ja) * 2011-01-12 2014-09-10 株式会社スクウェア・エニックス ネットワークゲームシステム、ゲーム装置、サーバ装置、及びプログラム
DE102012224321B4 (de) 2012-12-21 2022-12-15 Applejack 199 L.P. Messvorrichtung zum Erfassen einer Schlagbewegung eines Schlägers, Trainingsvorrichtung und Verfahren zum Training einer Schlagbewegung
JP2015196091A (ja) 2014-04-02 2015-11-09 アップルジャック 199 エル.ピー. アバターが仮想環境内でプレイヤーを表すためのセンサベースのゲームシステム
CN107682356B (zh) * 2017-10-26 2020-08-04 广州市雷军游乐设备有限公司 数据的更新方法及装置、设备以及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100241692A1 (en) * 2009-03-20 2010-09-23 Sony Computer Entertainment America Inc., a Delaware Corporation Methods and systems for dynamically adjusting update rates in multi-player network gaming
CN106512402A (zh) * 2016-11-29 2017-03-22 北京像素软件科技股份有限公司 一种游戏角色的渲染方法及装置
CN106991713A (zh) * 2017-04-13 2017-07-28 网易(杭州)网络有限公司 更新游戏中的场景的方法和装置、介质、处理器以及终端
CN107911374A (zh) * 2017-11-27 2018-04-13 腾讯科技(上海)有限公司 数据同步方法和装置、存储介质及电子装置
CN110404262A (zh) * 2019-09-03 2019-11-05 网易(杭州)网络有限公司 游戏中的显示控制方法、装置、电子设备以及存储介质
CN111228797A (zh) * 2020-01-13 2020-06-05 腾讯科技(深圳)有限公司 数据处理方法、装置、计算机以及可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3991817A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024060856A1 (zh) * 2022-09-20 2024-03-28 腾讯科技(深圳)有限公司 数据处理方法、装置、电子设备、存储介质和程序产品

Also Published As

Publication number Publication date
CN111228797B (zh) 2021-05-28
JP7465959B2 (ja) 2024-04-11
US20220168649A1 (en) 2022-06-02
JP2022554069A (ja) 2022-12-28
US12017146B2 (en) 2024-06-25
CN111228797A (zh) 2020-06-05
KR20220041906A (ko) 2022-04-01
EP3991817A1 (en) 2022-05-04
EP3991817A4 (en) 2022-09-21

Similar Documents

Publication Publication Date Title
WO2021143255A1 (zh) 数据处理方法、装置、计算机设备以及可读存储介质
CN111767503B (zh) 一种游戏数据处理方法、装置、计算机及可读存储介质
CN109413480A (zh) 画面处理方法、装置、终端及存储介质
CN108057249B (zh) 一种业务数据处理方法和装置
CN110138769A (zh) 一种图像传输的方法以及相关装置
CN111346378B (zh) 游戏画面传输方法、装置、存储介质和设备
EP3410302B1 (en) Graphic instruction data processing method, apparatus
CN111467798B (zh) 游戏应用程序中的帧显示方法、装置、终端和存储介质
US11731050B2 (en) Asset aware computing architecture for graphics processing
CN109413152B (zh) 图像处理方法、装置、存储介质及电子设备
US20140161173A1 (en) System and method for controlling video encoding using content information
CN112950640A (zh) 视频人像分割方法、装置、电子设备及存储介质
CN113419809B (zh) 实时交互程序界面数据渲染方法及设备
CN105727556B (zh) 一种图像绘制的方法、相关设备及系统
CN108769715A (zh) 图形指令数据的处理方法及装置
CN113694521A (zh) 伤害处理方法、装置、电子设备以及存储介质
CN110392262A (zh) 一种压缩虚拟桌面图像的方法及装置
CN111632382A (zh) 游戏数据同步方法、装置、计算机及可读存储介质
CN115025495B (zh) 角色模型的同步方法、装置、电子设备及存储介质
CN113599806B (zh) 一种数据预处理方法、剧情显示方法、装置、介质及设备
CN113599817A (zh) 一种数据处理方法、装置、存储介质及电子设备
CN115731324A (zh) 处理数据的方法、装置、系统及介质
CN114100123A (zh) 射击游戏中游戏场景呈现方法、装置、设备及介质
CN113658213A (zh) 形象呈现方法、相关装置及计算机程序产品
JP2020160737A (ja) 情報処理装置、情報処理方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20914236
    Country of ref document: EP
    Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2020914236
    Country of ref document: EP
    Effective date: 20220127
ENP Entry into the national phase
    Ref document number: 20227007045
    Country of ref document: KR
    Kind code of ref document: A
ENP Entry into the national phase
    Ref document number: 2022519177
    Country of ref document: JP
    Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE