CN111450531B - Virtual character control method, virtual character control device, electronic equipment and storage medium - Google Patents

Virtual character control method, virtual character control device, electronic equipment and storage medium Download PDF

Info

Publication number
CN111450531B
CN111450531B CN202010239611.7A
Authority
CN
China
Prior art keywords
virtual
data
character
virtual character
game
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010239611.7A
Other languages
Chinese (zh)
Other versions
CN111450531A (en)
Inventor
李晓倩
练振杰
付强
王亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010239611.7A priority Critical patent/CN111450531B/en
Publication of CN111450531A publication Critical patent/CN111450531A/en
Application granted granted Critical
Publication of CN111450531B publication Critical patent/CN111450531B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55Controlling game characters or game objects based on the game progress
    • A63F13/56Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/301Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device using an additional display connected to the game console, e.g. on the controller

Abstract

The application discloses a virtual character control method, a virtual character control device, electronic equipment and a storage medium, wherein the virtual character control method comprises the following steps: collecting historical game data of virtual character matches in a target game; extracting, from the historical game data, attribute features of the virtual characters, local view features of the virtual characters within their view range in the virtual scene, and behavior information of the virtual characters executing target game behaviors; performing fusion processing on the attribute features, the local view features and the behavior information to obtain fused features; constructing, based on the fused features, an exclusive private feature of each virtual character and a common feature shared among the virtual characters; and training a preset model according to the common features and the private features to obtain an operation prediction model, and controlling the virtual characters in the target game through the operation prediction model.

Description

Virtual character control method, virtual character control device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a virtual character control method, a virtual character control device, electronic equipment and a storage medium.
Background
Artificial Intelligence (AI) is a comprehensive discipline covering a wide range of fields and involving both hardware and software technologies. Artificial intelligence infrastructure generally includes technologies such as sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning. Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer can simulate or realize human learning behavior so as to acquire new knowledge or skills and reorganize its existing knowledge structure to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied across all fields of artificial intelligence.
For example, the method may be applied to an online game, such as a Multiplayer Online Battle Arena (MOBA) game, where a user or a server obtains an advantage by controlling a virtual character to execute a scheduling policy; when the virtual character is controlled by the server, the virtual character needs to be controlled by an AI to complete the game match.
Disclosure of Invention
The application provides a virtual character control method, a virtual character control device, an electronic device and a storage medium, which can improve the accuracy of predicting game behaviors of virtual characters.
The application provides a virtual character control method, which comprises the following steps:
collecting historical game data of virtual character matches in a target game;
extracting, from the historical game data, attribute features of the virtual character, local view features of the virtual character within its view range in the virtual scene, and behavior information of the virtual character executing target game behaviors;
performing fusion processing on the attribute features, the local view features and the behavior information to obtain fused features;
constructing, based on the fused features, an exclusive private feature of each virtual character and a common feature shared among the virtual characters;
and training a preset model according to the common features and the private features to obtain an operation prediction model, so as to control the virtual characters in the target game through the operation prediction model.
Correspondingly, the present application also provides a virtual character control apparatus, including:
an acquisition module, configured to collect historical game data of virtual character matches in the target game;
an extraction module, configured to extract, from the historical game data, the attribute features of the virtual character, the local view features of the virtual character in the view range of the virtual scene, and the behavior information of the virtual character executing the target game behavior;
a fusion module, configured to perform fusion processing on the attribute features, the local view features and the behavior information to obtain fused features;
a construction module, configured to construct, based on the fused features, an exclusive private feature of each virtual character and a common feature shared among the virtual characters;
and a training module, configured to train a preset model according to the common features and the private features to obtain an operation prediction model, so as to control the virtual character in the target game through the operation prediction model.
Optionally, in some embodiments of the present invention, the virtual characters include a first virtual character, a second virtual character, and a fixed virtual character, where the first virtual character and the second virtual character are virtual characters on different sides of a historical match, and the extraction module includes:
the first extraction unit is used for extracting the attribute characteristics of the first virtual character, the attribute characteristics of the second virtual character, the attribute characteristics of the fixed virtual character and the local view characteristics of the first virtual character in the view range in the virtual scene from historical game data by adopting a preset characteristic extraction network;
a second extraction unit, configured to extract behavior information of a first virtual character executing a target game behavior from the historical game data using a preset information extraction network;
the fusion module is specifically configured to: perform fusion processing on the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, the local view features of the first virtual character, and the behavior information of the first virtual character to obtain fused features.
Optionally, in some embodiments of the present invention, the second extraction unit includes:
a determining subunit, configured to determine time information for a first virtual character to perform a target game action in a target game;
an acquisition subunit configured to acquire a plurality of consecutive game images from the historical game data based on the time information;
and the extraction subunit is used for extracting the behavior information of the first virtual character executing the target game behavior from the plurality of game images by adopting a preset information extraction network.
Optionally, in some embodiments of the present invention, the extracting subunit is specifically configured to:
acquiring position information data and operation information data of the first virtual character executing the target game behavior according to the plurality of consecutive game images;
constructing, according to the position information data and the operation information data, the behavior data of the first virtual character executing the target game behavior;
and extracting, by using a preset behavior information extraction network, the behavior information of the first virtual character executing the target game behavior from the behavior data.
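The patent text gives no implementation; as a minimal illustrative sketch (the function name and record layout are assumptions, not the patent's), the behavior data constructed in the sub-steps above could simply pair each frame's position record with its operation record:

```python
def build_behavior_data(position_data, operation_data):
    """Pair per-frame position records with per-frame operation records
    into one behavior record per frame -- a minimal stand-in for the
    behavior-data construction step described above."""
    return [{"position": pos, "operation": op}
            for pos, op in zip(position_data, operation_data)]
```

A behavior information extraction network would then take such per-frame records as its input sequence.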
Optionally, in some embodiments of the present invention, the first extracting unit is specifically configured to:
extracting character data of a first virtual character from historical game data to obtain first character data;
extracting character data of a second virtual character from the historical game data to obtain second character data;
extracting character data of the fixed virtual character from historical game data to obtain third character data;
and respectively extracting features from the first character data, the second character data and the third character data by using a preset feature extraction network to obtain the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, and the local view features of the first virtual character within its view range in the virtual scene.
Optionally, in some embodiments of the present invention, the fusion module includes:
the first processing unit is used for performing maximum pooling processing on the attribute characteristics to obtain processed attribute characteristics;
the second processing unit is used for performing convolution processing on the local view characteristics to obtain processed view characteristics;
and the fusion unit is used for carrying out fusion processing on the processed attribute features, the processed view features and the behavior information to obtain fused features.
Optionally, in some embodiments of the present invention, the fusion unit is specifically configured to:
splicing the processed attribute features and the processed view features to obtain spliced features;
and embedding the behavior information into the spliced features to obtain fused features.
Optionally, in some embodiments of the present invention, the building module is specifically configured to:
performing feature space transformation on the fused features to obtain transformed features;
selecting, from the transformed features, a number of feature components corresponding to a preset policy to obtain the common features shared among the virtual characters;
and removing the shared common features from the transformed features to obtain the exclusive private features of each virtual character.
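A minimal sketch of the common/private split described above, assuming a linear feature-space transformation and a preset policy that simply takes the first `num_common` components as the shared common feature (both are assumptions; the patent does not specify the transform or the selection policy):

```python
import numpy as np

def split_common_private(fused, transform, num_common):
    """Transform the fused feature into a new feature space, take the
    first `num_common` components as the common feature shared among
    virtual characters, and keep the remainder as the exclusive
    private feature of this character."""
    transformed = transform @ fused          # feature space transformation
    common = transformed[:num_common]        # shared common feature
    private = transformed[num_common:]       # exclusive private feature
    return common, private
```

By construction the private feature is exactly the transformed feature with the common components removed, so together they recover the whole transformed vector.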
In the present application, after the historical game data of virtual character matches in a target game are collected, the attribute features of the virtual characters, the local view features of the virtual characters within their view range, and the behavior information of the virtual characters executing target game behaviors are extracted from the historical game data. The attribute features, local view features and behavior information are then fused to obtain fused features; based on the fused features, an exclusive private feature of each virtual character and a common feature shared among the virtual characters are constructed; and finally a preset model is trained according to the common features and the private features to obtain an operation prediction model, so that the virtual characters in the target game are controlled through the operation prediction model. The scheme can therefore improve the accuracy of predicting the game behaviors of virtual characters.
Drawings
In order to more clearly illustrate the technical solutions in the present application, the drawings needed in the description of the embodiments will be briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1a is a scene schematic diagram of a virtual character control method provided in the present application;
FIG. 1b is a schematic flow chart of a virtual character control method provided in the present application;
FIG. 1c is a schematic diagram of various features of the virtual character control method provided herein;
FIG. 1d is a path diagram of a target game action executed in the virtual character control method provided by the present application;
FIG. 2a is another schematic flow chart of a virtual character control method provided in the present application;
fig. 2b is a schematic view of another scenario of the virtual character control method provided in the present application;
FIG. 2c is a schematic diagram of a fusion layer in avatar control provided herein;
FIG. 3 is a schematic structural diagram of the virtual character control apparatus provided in the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings in the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The application provides a virtual character control method, a virtual character control apparatus, electronic equipment and a storage medium.
The virtual character control apparatus may be specifically integrated in a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The servers may be directly or indirectly connected in a wired or wireless communication manner, which is not limited herein.
For example, referring to fig. 1a, the virtual character control apparatus is integrated on a server. The server may collect historical game data of virtual character matches in a target game; the target game may be a multiplayer online tactical competition (MOBA) game or a multiplayer shooting game, selected according to actual requirements, and an MOBA game is taken as an example below. After the server collects the historical game data of virtual character matches in the target game, it extracts from the historical game data the attribute features of the virtual characters, the local view features of the virtual characters within their view range in the virtual scene, and the behavior information of the virtual characters executing target game behaviors. The server then performs fusion processing on the attribute features, the local view features and the behavior information to obtain fused features, and constructs, based on the fused features, an exclusive private feature of each virtual character and a common feature shared among the virtual characters. Finally, the server trains a preset model according to the common features and the private features to obtain an operation prediction model, so as to control the virtual characters in the target game through the operation prediction model. For example, when the server receives a human-machine confrontation mode request triggered by a user in the game engine, the server controls the virtual characters in the target game based on that request.
It should be noted that one play mode of the MOBA game is a 5V5 battle, that is, two camps confront each other, each composed of 5 players. Each player controls one hero, and the side that first pushes down the other side's crystal base wins; in the MOBA game, the 5 heroes need to play the match cooperatively. Whether in resource allocation on the map or in operations during a team fight, good coordination among the 5 heroes is required. For example, the heroes on the top, middle and bottom lanes need to develop on their respective lanes, the jungling hero develops in the jungle, the damage-dealing hero outputs damage from the back row, the support hero absorbs damage in the front row, and the assassin hero harvests kills at the end of a fight. For example, if the type of virtual character controlled by the server is a damage-dealing hero, the server controls the virtual character in the target game to attack enemy virtual characters through the operation prediction model.
The virtual character control scheme provided by the application fuses the attribute features, the local view features and the behavior information to obtain fused features, constructs an exclusive private feature of each virtual character and a common feature shared among the virtual characters based on the fused features, and trains a preset model according to the shared common features and the private features. In actual use, a virtual character controlled by the operation prediction model of the present application can focus not only on the information shared between virtual characters in the same camp (the common features) but also on the information of the virtual character itself (the private features). Therefore, in a complex game scene, the virtual character can execute correct game behaviors according to the shared information and its own information to complete the match, and the scheme of the application can thus improve the accuracy of predicting the game behaviors of virtual characters.
The following are detailed below. It should be noted that the description sequence of the following embodiments is not intended to limit the priority sequence of the embodiments.
A virtual character control method includes: collecting historical game data of virtual character matches in a target game; extracting, from the historical game data, the attribute features of the virtual characters, the local view features of the virtual characters within their view range in the virtual scene, and the behavior information of the virtual characters executing target game behaviors; fusing the attribute features, the local view features and the behavior information to obtain fused features; constructing, based on the fused features, an exclusive private feature of each virtual character and a common feature shared among the virtual characters; and training a preset model according to the common features and the private features to obtain an operation prediction model, and controlling the virtual characters in the target game through the operation prediction model.
Referring to fig. 1b, fig. 1b is a schematic flow chart of a virtual role control method provided in the present application. The specific process of the virtual character control method can be as follows:
101. historical game data of virtual character matches in the target game are collected.
The target game may be a multiplayer online tactical competition game or a multiplayer shooting game. Specifically, the data of the virtual characters' historical matches (i.e., the historical game data) may be collected from a database of the target game; the historical game data may be stored locally or pulled through a network interface, as determined by the actual situation.
The historical game data may be data generated by human players in actual game play, or data obtained by a machine simulating human players' operations; it is mainly data provided by human players. With an average of 30 minutes per match and 15 frames per second, each match replay contains about 27,000 frames of images. To reduce data complexity, the application mainly selects data related to macro tasks and micro-operation tasks for training. Macro tasks are divided by operation intention and include: "jungling", "clearing minions", "team fighting", "pushing towers", and the like; each match contains on average about 100 macro tasks. The micro-operation decisions within each macro task are divided by operation type and include: "skill attack", "normal attack", "move to target location", "recall to town", and the like; each macro task contains relatively few micro-operation tasks.
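The frame count stated above can be checked by arithmetic, and the macro/micro task taxonomy can be illustrated with a small mapping (the mapping itself is hypothetical; only the task names come from the text):

```python
MINUTES_PER_MATCH = 30   # average match length stated in the text
FRAMES_PER_SECOND = 15   # frame rate stated in the text

# 30 minutes * 60 seconds * 15 frames/second = 27,000 frames per match.
frames_per_match = MINUTES_PER_MATCH * 60 * FRAMES_PER_SECOND

# Hypothetical illustration: each macro task (divided by operation
# intention) contains a few micro-operation tasks (divided by type).
MACRO_TO_MICRO = {
    "jungling":         ["skill attack", "normal attack", "move to target location"],
    "clearing minions": ["skill attack", "normal attack"],
    "team fighting":    ["skill attack", "normal attack", "move to target location"],
    "pushing towers":   ["normal attack", "move to target location", "recall to town"],
}
```

The exact assignment of micro operations to macro tasks is not given in the text; only the task names are taken from it.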
102. And extracting attribute characteristics of the virtual character, local view characteristics of the virtual character in the view range of the virtual scene and behavior information of the virtual character for executing target game behaviors from the historical game data.
For example, in the MOBA game, each hero has its own attribute values, such as "health", "physical defense" and "magic defense", and the positioning of the same hero differs between matches. Therefore, even for the same hero, since its equipment is selected based on its positioning in a given match, its "health", "physical defense" and "magic defense" may differ between matches. Likewise, the view obtained by the same virtual character differs even on the same map in different matches. For example, in a first match virtual character A is the winner and reaches the enemy base, so it obtains the view of the enemy base; in a second match virtual character A is the loser and cannot enter the enemy base, so it has no view of the enemy base in that match. The macro tasks and micro tasks are tasks executed by the virtual character on the basis of such match conditions, and therefore the macro task information and micro task information are incorporated into the behavior information.
It should be noted that, in one match, the virtual characters may include a first virtual character, a second virtual character, and a fixed virtual character. The first virtual character and the second virtual character are virtual characters on different sides of the historical match, and the fixed virtual character is a virtual character capable of interacting with the first virtual character and/or the second virtual character; the fixed virtual character may belong to a camp, such as a "defense tower" or a "minion", or belong to no camp, such as a "jungle monster" or a "dragon". Therefore, in order to improve the prediction capability of the subsequent model, the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, the local view features of the first virtual character, and the behavior information of the first virtual character executing the target game behavior may be extracted from the historical game data. That is, optionally, in some embodiments, the step of "extracting, from the historical game data, the attribute features of the virtual character, the local view features of the virtual character in the view range of the virtual scene, and the behavior information of the virtual character executing the target game behavior" may specifically include:
(11) extracting the attribute characteristics of a first virtual character, the attribute characteristics of a second virtual character, the attribute characteristics of a fixed virtual character and the local visual field characteristics of the first virtual character in the visual field range in a virtual scene from historical game data by adopting a preset characteristic extraction network;
(12) and extracting behavior information of the first virtual character executing the target game behavior from the historical game data by adopting a preset information extraction network.
The feature extraction network and the information extraction network are different types of neural networks: the feature extraction network extracts features from data, while the information extraction network predicts tags from its input; that is, the tags (behavior information) of the game behaviors executed by the virtual characters in a match are identified through the information extraction network. Optionally, in order to further improve the prediction capability of the subsequent operation prediction model, game data in battle scenes may be obtained from the historical game data to obtain battle game data. Then, the preset feature extraction network is used to extract, from the battle game data, the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, and the local view features of the first virtual character within its view range in the virtual scene, and the preset information extraction network is used to extract, from the battle game data, the behavior information of the first virtual character executing the target game behavior.
Referring to fig. 1c, fig. 1c shows the attribute features of the virtual characters extracted from a battle scene, the local view features of a virtual character within its view range in the virtual scene, and the behavior information of the virtual character executing the target game behavior. The attribute features of the first virtual character, the second virtual character and the fixed virtual character are expressed in a first type of representation, the behavior information of the first virtual character in a second type of representation, and the local view features of the first virtual character in a third type of representation. The attribute features include features such as the health, defense and attack power of the virtual characters, for example the attributes of allied heroes, enemy heroes, minions, jungle monsters, defense towers, and the like; the local view features include features such as the skill ranges and obstacle positions within the virtual character's view range; and the behavior information includes macro task information and micro task information, such as the "equipment" information of the virtual character and the skills used in battle. That is, optionally, in some embodiments, the step of "extracting, by using a preset feature extraction network, the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, and the local view features of the first virtual character in the view range in the virtual scene from the historical game data" may specifically include:
(21) extracting character data of a first virtual character from historical game data to obtain first character data, extracting character data of a second virtual character from the historical game data to obtain second character data, and extracting character data of a fixed virtual character from the historical game data to obtain third character data;
(22) and respectively extracting features from the first character data, the second character data and the third character data by using a preset feature extraction network to obtain the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, and the local view features of the first virtual character within its view range in the virtual scene.
Further, images corresponding to the time information of the first virtual character executing the target game behavior may be obtained, and the behavior information may be extracted from the obtained images using a preset information extraction network. That is, in some embodiments, the step of "extracting behavior information of the target game behavior executed by the first virtual character from historical game data using the preset information extraction network" includes:
(31) determining time information for a first virtual character to perform a target game action in a target game;
(32) acquiring a plurality of continuous game images from the historical game data based on the time information;
(33) and extracting behavior information of the first virtual character executing the target game behavior from the plurality of game images by adopting a preset information extraction network.
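Steps (31)-(33) select the consecutive game images recorded between the execution starting point and the execution ending point of the target behavior. Under the assumption of a fixed frame rate, the selection can be sketched as follows; the function name, the 15-frames-per-second rate, and the list representation of the frames are illustrative assumptions, not part of the patent's method.

```python
def frames_for_behavior(history_frames, start_time, end_time, fps=15):
    """Select the consecutive game images recorded between the execution
    starting point and the execution ending point of the target behavior.

    start_time / end_time are game times in seconds; history_frames is an
    ordered sequence with one entry per recorded frame."""
    start_idx = int(start_time * fps)
    end_idx = int(end_time * fps) + 1   # inclusive of the ending point
    return history_frames[start_idx:end_idx]

# e.g. the support behavior spanning game time 2:35 (155 s) to 3:05 (185 s)
frames = list(range(15 * 300))          # stand-in for a 5-minute recording
clip = frames_for_behavior(frames, 155, 185)
```

The clip then serves as the input to the preset information extraction network in step (33).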
It should be noted that the time information for executing the target game behavior includes an execution starting point and an execution ending point. For example, if the target game behavior executed by the first virtual character is "attack", the time point at which the current attack occurs is taken as the execution ending point, and the time point at which the previous attack occurred is taken as the execution starting point. A path for executing the target game behavior is shown in fig. 1d: the first virtual character A launches attacks at time t0 and at time ts. For the attack occurring at time ts, the position of the first virtual character at time t0 is taken as the execution starting point, and time ts is taken as the execution ending point; similarly, for the attack occurring at time ts+1, the position of the first virtual character at time ts is taken as the execution starting point.
For example, in a game play A of the target game, when the game time of play A reaches 2 minutes 35 seconds, a 3V3 battle event is triggered at the defense tower where the first virtual character is located. The first virtual character travels from its base to the battle location to provide support, and after arriving attacks an enemy virtual character using skill h; when the battle ends, the game time of play A is 3 minutes 05 seconds. A behavior tag for the support behavior of the first virtual character may then be constructed from the game data in the period from 2 minutes 35 seconds to 3 minutes 05 seconds. That is, optionally, in some embodiments, the step of "extracting behavior information of the first virtual character executing the target game behavior from the plurality of game images by using a preset information extraction network" may specifically include:
(41) acquiring position information data and operation information data of a first virtual character executing target game behaviors according to a plurality of continuous game images;
(42) according to the position information data and the operation information data, behavior data of the first virtual character executing the target game behavior are constructed;
(43) and extracting the behavior information of the first virtual character executing the target game behavior from the behavior data by adopting a preset behavior information extraction network.
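Steps (41)-(42) can be sketched as pairing the per-frame position samples with the operation labels chosen at those frames. The record layout ("pos"/"op" keys) and the example values are hypothetical choices for illustration, not defined by the patent.

```python
def build_behavior_data(positions, operations):
    """Combine position information data and operation information data into
    per-step behavior records for the target game behavior (step (42))."""
    assert len(positions) == len(operations)
    return [{"pos": pos, "op": op} for pos, op in zip(positions, operations)]

# e.g. a character moving from position J toward position K while operating
path = [("J", 0), ("J", 1), ("K", 2)]        # hypothetical map coordinates
ops = ["move", "Q skill", "basic attack"]    # hypothetical operation labels
behavior_data = build_behavior_data(path, ops)
```

The resulting behavior data is then fed to the preset behavior information extraction network in step (43).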
The position information data records the movement path information and position transition information of the first virtual character while executing the target game behavior, for example, that the first virtual character moves from position J to position K on the map. The operation information data records the operation labels selected by the first virtual character while executing the target game behavior, such as the operation labels "Q skill", "summoner skill", and "basic attack". Behavior data of the first virtual character executing the target game behavior are then constructed from the position information data and the operation information data, and finally the preset behavior information extraction network is used to extract, from the behavior data, the behavior information of the first virtual character executing the target game behavior.
103. And performing fusion processing on the attribute features, the local view features and the behavior information to obtain fused features.
A MOBA game involves interaction among multiple virtual characters, such as cooperative operations like restoring health among virtual characters in the same camp, or battles such as 1V1 or 5V5 among virtual characters in different camps. Some information is therefore sharable between virtual characters, while other information is private to a virtual character itself. Accordingly, in order to improve the cooperation capability among virtual characters and the prediction capability of the preset model, the attribute features, the local view features, and the behavior information need to be fused.
For example, specifically, maximum pooling may be performed on the attribute features and convolution may be performed on the local view features, and then the processed attribute features, the processed view features, and the behavior information are fused to obtain the fused features. That is, optionally, in some embodiments, the step of "performing fusion processing on the attribute features, the local view features, and the behavior information to obtain fused features" may specifically include:
(51) performing maximum pooling on the attribute characteristics to obtain processed attribute characteristics;
(52) performing convolution processing on the local visual field characteristics to obtain processed visual field characteristics;
(53) and performing fusion processing on the processed attribute features, the processed view features and the behavior information to obtain fused features.
For example, specifically, the attribute features may first be encoded and maximum pooling then performed on the encoding result; this reduces the dimensionality of the attribute features and extracts the most discriminative values from them. The processed attribute features and the processed view features may then be spliced, and the behavior information embedded into the spliced features to obtain the fused features. That is, optionally, in some embodiments, the step of "fusing the processed attribute features, the processed view features, and the behavior information to obtain fused features" may specifically include:
(61) splicing the processed attribute features and the processed view field features to obtain spliced features;
(62) and embedding the behavior information into the spliced features to obtain the fused features.
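Steps (51)-(62) can be sketched with NumPy as follows. This is a minimal illustration, not the patent's network: the 2x2 block average stands in for a learned convolution, splicing and embedding are both realized as concatenation, and all feature sizes are assumed.

```python
import numpy as np

def fuse(attr_feats, view_map, behavior_vec):
    # (51) maximum pooling over the unit axis: one value per attribute column,
    # keeping the most salient value across units
    attr_pooled = attr_feats.max(axis=0)                      # (n_units, d) -> (d,)
    # (52) a 2x2 block average stands in for the learned convolution
    h, w = view_map.shape
    trimmed = view_map[:h - h % 2, :w - w % 2]
    conv = trimmed.reshape(trimmed.shape[0] // 2, 2,
                           trimmed.shape[1] // 2, 2).mean(axis=(1, 3))
    # (61) splice the processed attribute features and view features
    spliced = np.concatenate([attr_pooled, conv.ravel()])
    # (62) embed the behavior information into the spliced features
    return np.concatenate([spliced, behavior_vec])

attr = np.arange(40, dtype=float).reshape(5, 8)   # 5 units x 8 attribute columns
view = np.ones((4, 4))                            # local view map within view range
behavior = np.zeros(3)                            # behavior information vector
fused = fuse(attr, view, behavior)                # 8 + 4 + 3 = 15 components
```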
104. And constructing, based on the fused features, private features exclusive to each virtual character and common features shared among the virtual characters.
Because the fused features include the attribute features, the local view features, and the behavior information, a virtual character can share some of this information with virtual characters in the same camp, such as the respawn time of a monster killed by one's own side, and some of it even concerns the enemy camp, such as the damage range of skills used by enemy virtual characters. Training the preset model directly on the fused features would therefore give the trained model poor capability in predicting the game behaviors of a virtual character. For this reason, private features exclusive to each virtual character and common features shared among the virtual characters need to be constructed based on the fused features. Specifically, the fused features may be divided based on a preset policy to obtain the private features exclusive to each virtual character and the common features shared among the virtual characters. That is, optionally, in some embodiments, the step of "constructing, based on the fused features, private features exclusive to each virtual character and common features shared among the virtual characters" may specifically include:
(71) performing feature space transformation on the fused features to obtain transformed features;
(72) selecting a number of feature components corresponding to a preset policy from the transformed features to obtain the common features shared among the virtual characters;
(73) and removing the common features shared among the virtual characters from the transformed features to obtain the private features exclusive to each virtual character.
For example, specifically, the fused features can be transformed into 2048 feature components (i.e., feature space transformation) through a fully connected layer and an activation function. The first 512 components are then selected from the transformed features as the common features shared among the virtual characters, and finally these common features are removed from the transformed features to obtain the private features exclusive to each virtual character, i.e., the private features are the 513th to 2048th components.
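The transformation and split described above can be sketched as follows. The weight matrix W and bias b stand in for a trained fully connected layer, ReLU is assumed as the activation function, and the input feature size of 64 is an illustrative assumption.

```python
import numpy as np

def split_common_private(fused, W, b, n_common=512):
    # feature space transformation: fully connected layer + ReLU -> 2048 components
    transformed = np.maximum(W @ fused + b, 0.0)
    common = transformed[:n_common]        # first 512 components: shared among characters
    private = transformed[n_common:]       # remaining components: exclusive to the character
    return common, private

rng = np.random.default_rng(0)
fused = rng.normal(size=64)                             # hypothetical fused feature
W = rng.normal(size=(2048, 64))                         # stand-in trained weights
b = rng.normal(size=2048)
common, private = split_common_private(fused, W, b)     # 512 + 1536 components
```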
105. And training the preset model according to the common features and the private features to obtain an operation prediction model, so as to control the virtual characters in the target game through the operation prediction model.
Because a MOBA game play lasts a long time, each game behavior executed by a virtual character may affect the trend of the battle in that play; that is, the virtual character needs to pay attention not only to the currently executed game behavior but also to game behaviors at earlier times. Therefore, optionally, the present application adopts a Long Short-Term Memory artificial neural network model (LSTM), a time-recursive neural network model that can connect previous information to the current task, for example to evaluate the influence of the current game behavior on the win rate according to past game behaviors. In the present application, the model can be configured according to the requirements of the actual application; for example, the model can include four convolutional layers and a fully connected layer.
Convolutional layers: mainly used for feature extraction from input samples (such as training samples or images to be recognized). The size of each convolution kernel can be determined according to the practical application; for example, the kernel sizes from the first to the fourth convolutional layer can be (7, 7), (5, 5), (3, 3), and (3, 3). Optionally, in order to reduce computational complexity and improve efficiency, in this embodiment the kernel sizes of all four convolutional layers may be set to (3, 3), the activation functions all use ReLU (Rectified Linear Unit), and the padding modes are all set to "same". The "same" padding mode can be simply understood as padding the edges with zeros, where the number of zeros padded on the left (top) is the same as, or one less than, the number padded on the right (bottom). Optionally, the convolutional layers may be directly connected to each other to accelerate network convergence, and to further reduce the amount of computation, downsampling may be performed after all, or any one or two, of the second to fourth convolutional layers. The downsampling operation is substantially the same as convolution, except that its kernel simply takes the maximum value (max pooling) or the average value (average pooling) of the corresponding positions.
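The "same" zero-padding rule described above can be illustrated with a plain NumPy convolution; this naive loop is for clarity only, and the even-kernel rule (extra zero on the right/bottom) follows the convention stated in the text.

```python
import numpy as np

def conv2d_same(x, k):
    """2-D convolution with "same" padding: the output has the same size as
    the input, and edges are padded with zeros; when the padding is uneven,
    the left (top) side gets one fewer zero than the right (bottom)."""
    kh, kw = k.shape
    ph, pw = kh - 1, kw - 1
    top, left = ph // 2, pw // 2
    xp = np.pad(x, ((top, ph - top), (left, pw - left)))
    h, w = x.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

y = conv2d_same(np.ones((5, 5)), np.ones((3, 3)))
# interior positions see a full 3x3 window of ones; corners see only 2x2
```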
It should be noted that, for convenience of description, in the present application both the layer where the activation function is located and the downsampling layer (also called the pooling layer) are counted as part of the convolutional layer. It should be understood that the structure may equally be described as comprising convolutional layers, activation layers, downsampling layers (i.e., pooling layers), and a fully connected layer, and of course may also include an input layer for inputting data and an output layer for outputting data, which are not described again here.
Fully connected layer: maps the learned features to the sample label space and mainly acts as the "classifier" of the whole convolutional neural network. Each node of the fully connected layer is connected to all nodes output by the previous layer (such as the downsampling layer within the convolutional layer); one node of the fully connected layer is called one neuron of the fully connected layer, and the number of neurons can be determined according to the requirements of the practical application. For example, in the upper and lower branch networks of a twin neural network model, the number of neurons in the fully connected layer may be set to 512 in each branch, or to 128 in each branch, and so on. Similarly to the convolutional layer, optionally, a non-linear factor may be introduced in the fully connected layer by adding an activation function, for example the sigmoid function.
Specifically, the common features and the private features are used as the input of the model, a gradient descent algorithm is used to update the model parameters, and the operation prediction model is finally obtained, so that the virtual characters in the target game can be controlled through the operation prediction model.
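A minimal sketch of this training step: the common and private features are concatenated as the model input, and the parameters are updated by one gradient descent step on a cross-entropy loss over candidate game behaviors. A single linear layer stands in for the LSTM-based model for brevity, and all sizes and the learning rate are illustrative assumptions.

```python
import numpy as np

def train_step(W, common, private, action_label, lr=0.01):
    x = np.concatenate([common, private])      # model input: common + private features
    logits = W @ x
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax over candidate game behaviors
    # gradient of cross-entropy loss w.r.t. W for the labeled behavior
    grad = np.outer(probs - np.eye(len(logits))[action_label], x)
    return W - lr * grad                       # one gradient descent update

rng = np.random.default_rng(1)
common, private = rng.normal(size=8), rng.normal(size=8)
W = rng.normal(size=(4, 16))                   # 4 candidate game behaviors
W_new = train_step(W, common, private, action_label=2)
```

After the update, the model assigns higher probability to the labeled behavior for this input, which is the effect the gradient descent training loop accumulates over the historical game data.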
After historical game data of virtual character play in a target game are collected, the attribute features of the virtual characters, the local view features of a virtual character within its view range in the virtual scene, and the behavior information of a virtual character executing the target game behavior are extracted from the historical game data. The attribute features, the local view features, and the behavior information are then fused to obtain fused features; private features exclusive to each virtual character and common features shared among the virtual characters are constructed based on the fused features; and finally the preset model is trained according to the common features and the private features to obtain an operation prediction model, so that the virtual characters in the target game are controlled through the operation prediction model. In actual use, a virtual character controlled by the operation prediction model of the present application can pay attention not only to information shared between virtual characters in the same camp (the common features) but also to information of the virtual character itself (the private features). Therefore, in a complex game scene, the virtual character can execute correct game behaviors according to both the shared information and its own information to complete the game play, and the accuracy of predicting the game behaviors of the virtual character can be improved by the scheme of the present application.
The method according to the examples is further described in detail below by way of example.
In the present embodiment, the virtual character control apparatus will be described by taking an example in which the virtual character control apparatus is specifically integrated in a server.
Referring to fig. 2a, a virtual character control method may specifically include the following processes:
201. the server collects historical game data of virtual character game in the target game.
The target game may be a multiplayer online tactical competition game or a multiplayer shooting game, and specifically, data of historical game play of the virtual character (i.e., historical game data) may be collected from a database of the target game, and the historical game data may be stored locally or pulled through an access network interface, and is specifically determined according to actual conditions.
The historical game data may be data generated by a human player in an actual game process, or data obtained by simulating the operation of the human player by a machine, and the historical game data is mainly data provided by the human player.
202. The server extracts the attribute characteristics of the virtual character, the local visual field characteristics of the virtual character in the visual field range of the virtual scene and the behavior information of the virtual character executing the target game behavior from the historical game data.
For example, the server may extract, from the historical game data, an attribute feature of the first virtual character, an attribute feature of the second virtual character, an attribute feature of the fixed virtual character, a local view feature of the first virtual character, and behavior information of the first virtual character executing the target game behavior.
203. And the server performs fusion processing on the attribute characteristics, the local view characteristics and the behavior information to obtain fused characteristics.
For example, the server may perform maximum pooling on the attribute features, perform convolution on the local view features, and perform fusion processing on the processed attribute features, the processed view features, and the behavior information to obtain fused features.
204. The server constructs, based on the fused features, private features exclusive to each virtual character and common features shared among the virtual characters.
For example, the server may perform feature space transformation on the fused features to obtain transformed features, then select a number of feature components corresponding to a preset policy from the transformed features to obtain the common features shared among the virtual characters, and remove those common features from the transformed features to obtain the private features exclusive to each virtual character.
205. And the server trains the preset model according to the common features and the private features to obtain an operation prediction model, so as to control the virtual characters in the target game through the operation prediction model.
Specifically, the server uses the common features and the private features as the input of the model, updates the model parameters by a gradient descent algorithm, and finally obtains the operation prediction model, so as to control the virtual characters in the target game through the operation prediction model. For example, in a MOBA game, when a user chooses in the client of the target game to play against a robot, the server can control a virtual character in the target game through the operation prediction model, so as to complete the play against the virtual character controlled by the user.
To facilitate understanding of the virtual character control method provided by the present application, please refer to fig. 2b. The present application builds a multi-agent artificial intelligence model end to end, comprising a coding layer, a fusion layer, and an output layer. The coding layer can be used to define the attribute features of the virtual characters in the target game, such as the attribute features of heroes, soldiers, monsters, dragons, and defense towers. The fusion layer fuses the data transmitted by the coding layer, such as the local view features, the attribute features of the virtual characters, and the behavior information of the virtual characters. During fusion, the number of fused feature components can be changed to 2048 through a fully connected layer and an activation function; the first 512 components are then selected from the transformed features as the common features shared among the virtual characters, and finally these common features are removed from the transformed features to obtain the private features exclusive to each virtual character, as shown in fig. 2c. The output layer predicts the game behavior of a virtual character according to the common features and private features output by the fusion layer.
As can be seen from the above, after the server collects historical game data of virtual character play in a target game, it extracts from the historical game data the attribute features of the virtual characters, the local view features of a virtual character within its view range in the virtual scene, and the behavior information of a virtual character executing the target game behavior. The server then fuses the attribute features, the local view features, and the behavior information to obtain fused features, constructs private features exclusive to each virtual character and common features shared among the virtual characters based on the fused features, and finally trains the preset model according to the common features and the private features to obtain an operation prediction model, so that the virtual characters in the target game are controlled through the operation prediction model. In actual use, a virtual character controlled by the operation prediction model can pay attention not only to information shared between virtual characters in the same camp (the common features) but also to information of the virtual character itself (the private features). Therefore, in a complex game scene, the virtual character can execute correct game behaviors according to both the shared information and its own information to complete the game play, and the accuracy of predicting the game behaviors of the virtual character can be improved by the scheme of the present application.
In order to better implement the virtual character control method of the present application, the present application further provides a virtual character control device (control device for short). The terms have the same meanings as in the above virtual character control method, and for specific implementation details, reference can be made to the description in the method embodiments.
Referring to fig. 3, fig. 3 is a schematic structural diagram of the virtual character control apparatus provided in the present application. The control apparatus may include an acquisition module 301, an extraction module 302, a fusion module 303, a construction module 304, and a training module 305, which may specifically be as follows:
the collection module 301 is configured to collect historical game data of virtual character game in the target game.
The target game may be a multiplayer online tactical competition game or a multiplayer shooting game, and specifically, data of historical game play of the virtual character (i.e., historical game data) may be collected from a database of the target game, and the historical game data may be stored locally or pulled through an access network interface, and is specifically determined according to actual conditions.
An extracting module 302, configured to extract, from historical game data, attribute features of a virtual character, local view features of the virtual character in a view range in a virtual scene, and behavior information of a virtual character executing a target game behavior.
The virtual characters include a first virtual character, a second virtual character, and a fixed virtual character, where the first virtual character and the second virtual character are virtual characters in different camps in a historical game play. Optionally, in some embodiments, the extracting module 302 may specifically include:
the first extraction unit is used for extracting the attribute characteristics of the first virtual character, the attribute characteristics of the second virtual character, the attribute characteristics of the fixed virtual character and the local view characteristics of the first virtual character in the view range in the virtual scene from historical game data by adopting a preset characteristic extraction network;
and the second extraction unit is used for extracting the behavior information of the first virtual character executing the target game behavior from the historical game data by adopting a preset information extraction network.
Optionally, in some embodiments, the second extracting unit may specifically include:
a determining subunit, configured to determine time information for a first virtual character to perform a target game action in a target game;
an acquisition subunit operable to acquire a plurality of consecutive game images from the history game data based on the time information;
and the extraction subunit is used for extracting the behavior information of the first virtual character executing the target game behavior from the plurality of game images by adopting a preset information extraction network.
Optionally, in some embodiments, the extraction subunit may specifically be configured to: acquire position information data and operation information data of the first virtual character executing the target game behavior according to the plurality of consecutive game images, construct behavior data of the first virtual character executing the target game behavior according to the position information data and the operation information data, and extract, by using a preset behavior information extraction network, the behavior information of the first virtual character executing the target game behavior from the behavior data.
Optionally, in some embodiments, the first extraction unit may specifically be configured to: extracting role data of a first virtual role from historical game data to obtain first role data, extracting role data of a second virtual role from the historical game data to obtain second role data, extracting role data of a fixed virtual role from the historical game data to obtain third role data, and respectively performing feature extraction on the first role data, the second role data and the third role data by adopting a preset feature extraction network to obtain attribute features of the first virtual role, attribute features of the second virtual role, attribute features of the fixed virtual role and local view features of the first virtual role in a view field range in a virtual scene.
The fusion module 303 is configured to perform fusion processing on the attribute features, the local view features, and the behavior information to obtain fused features;
for example, specifically, the fusion module 303 may perform maximum pooling on the attribute features and perform convolution on the local view features, and then the fusion module 303 performs fusion processing on the processed attribute features, the processed view features, and the behavior information to obtain the fused features.
Optionally, in some embodiments, the fusion module 303 may specifically include:
the first processing unit is used for performing maximum pooling processing on the attribute characteristics to obtain processed attribute characteristics;
the second processing unit is used for performing convolution processing on the local view characteristics to obtain processed view characteristics;
and the fusion unit is used for performing fusion processing on the processed attribute features, the processed view features and the behavior information to obtain fused features.
Optionally, in some embodiments, the fusion unit may be specifically configured to: and splicing the processed attribute features and the processed view features to obtain spliced features, and embedding the behavior information into the spliced features to obtain fused features.
Optionally, in some embodiments, the fusion module 303 may be specifically configured to: and performing fusion processing on the attribute characteristics of the first virtual role, the attribute characteristics of the second virtual role, the attribute characteristics of the fixed virtual role, the local view characteristics of the first virtual role and the behavior information of the first virtual role to obtain fused characteristics.
A construction module 304, configured to construct, based on the fused features, private features exclusive to each virtual character and common features shared among the virtual characters;
for example, specifically, the construction module 304 may divide the fused features based on a preset policy to obtain the private features exclusive to each virtual character and the common features shared among the virtual characters.
Optionally, in some embodiments, the construction module 304 may specifically be configured to: perform feature space transformation on the fused features to obtain transformed features, select a number of feature components corresponding to a preset policy from the transformed features to obtain the common features shared among the virtual characters, and remove those common features from the transformed features to obtain the private features exclusive to each virtual character.
The training module 305 is configured to train the preset model according to the common features and the private features to obtain an operation prediction model, so as to control the virtual characters in the target game through the operation prediction model.
After the acquisition module 301 acquires historical game data of virtual character matches in a target game, the extraction module 302 extracts, from the historical game data, the attribute features of the virtual characters, the local view features of the virtual characters within their view range in the virtual scene, and the behavior information of the virtual characters executing target game behaviors. The fusion module 303 fuses the attribute features, the local view features, and the behavior information to obtain fused features; the construction module 304 then constructs, based on the fused features, private features exclusive to each virtual character and common features shared among the virtual characters; finally, the training module 305 trains a preset model according to the common features and the private features to obtain an operation control model, through which the virtual characters in the target game are controlled.
Because the operation control model is trained on both the common features and the private features, a virtual character controlled by it attends not only to the information shared among virtual characters on the same side (the common features) but also to its own information (the private features). Even in a complex game scene, the virtual character can therefore execute the correct game behavior based on both kinds of information to complete the match, which improves the accuracy with which the scheme of the present application predicts the game behaviors of a virtual character.
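The module pipeline above (acquisition 301 → extraction 302 → fusion 303 → construction 304 → training 305) can be summarized as a minimal plain-Python sketch. All function names, the data layout, and the fixed split point are hypothetical illustrations; the patent does not prescribe any particular API or representation.

```python
# Hypothetical sketch of the module pipeline (301-305) described above.
# All names and data shapes are illustrative, not the patented implementation.

def acquire(history):                      # module 301: collect match data
    return history

def extract(data):                         # module 302: attribute / view / behavior
    return data["attributes"], data["view"], data["behavior"]

def fuse(attributes, view, behavior):      # module 303: merge into one feature vector
    return attributes + view + behavior

def split(fused, n_common):                # module 304: common vs. private features
    return fused[:n_common], fused[n_common:]

def train(common, private):                # module 305: stand-in for model training
    return {"common_dim": len(common), "private_dim": len(private)}

data = acquire({"attributes": [0.1, 0.2], "view": [0.3], "behavior": [1.0]})
model = train(*split(fuse(*extract(data)), n_common=2))
```

In this toy run the four fused components are divided two-and-two, so the trained "model" records a common dimension of 2 and a private dimension of 2.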
In addition, the present application also provides an electronic device, as shown in fig. 4, which shows a schematic structural diagram of the electronic device related to the present application, specifically:
the electronic device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the electronic device, connects various parts of the whole electronic device by various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The electronic device further comprises a power supply 403 for supplying power to the various components. Preferably, the power supply 403 is logically connected to the processor 401 through a power management system, so that charging, discharging, and power-consumption management are handled by the power management system. The power supply 403 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
The electronic device may further include an input unit 404, and the input unit 404 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
collecting historical game data of virtual character matches in a target game; extracting, from the historical game data, the attribute features of the virtual characters, the local view features of the virtual characters within their view range in the virtual scene, and the behavior information of the virtual characters executing target game behaviors; fusing the attribute features, the local view features, and the behavior information to obtain fused features; constructing, based on the fused features, private features exclusive to each virtual character and common features shared among the virtual characters; and training a preset model according to the common features and the private features to obtain an operation control model, so that the virtual characters in the target game are controlled through the operation control model.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
After historical game data of virtual character matches in a target game are collected, the attribute features of the virtual characters, the local view features of the virtual characters within their view range in the virtual scene, and the behavior information of the virtual characters executing target game behaviors are extracted from the historical game data. The attribute features, local view features, and behavior information are then fused to obtain fused features; private features exclusive to each virtual character and common features shared among the virtual characters are constructed based on the fused features; and finally a preset model is trained according to the common features and the private features to obtain an operation control model, through which the virtual characters in the target game are controlled. Because the model is trained on both kinds of features, a virtual character controlled by it attends not only to the information shared among virtual characters on the same side (the common features) but also to its own information (the private features), so that even in a complex game scene it can execute the correct game behavior and complete the match, which improves the accuracy with which the scheme of the present application predicts the game behaviors of a virtual character.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium having stored therein a plurality of instructions that can be loaded by a processor to perform the steps of any of the virtual character control methods provided herein. For example, the instructions may perform the steps of:
collecting historical game data of virtual character matches in a target game; extracting, from the historical game data, the attribute features of the virtual characters, the local view features of the virtual characters within their view range in the virtual scene, and the behavior information of the virtual characters executing target game behaviors; fusing the attribute features, the local view features, and the behavior information to obtain fused features; constructing, based on the fused features, private features exclusive to each virtual character and common features shared among the virtual characters; and training a preset model according to the common features and the private features to obtain an operation control model, so that the virtual characters in the target game are controlled through the operation control model.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can perform the steps of any virtual character control method provided by the present application, the beneficial effects achievable by any virtual character control method provided by the present application can be achieved; for details, see the foregoing embodiments, which are not described herein again.
The virtual character control method, apparatus, electronic device, and storage medium provided by the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (13)

1. A virtual character control method is characterized by comprising the following steps:
acquiring historical game data of virtual character matches in a target game, wherein the virtual characters comprise a first virtual character, a second virtual character, and a fixed virtual character, and the first virtual character and the second virtual character belong to different opposing sides in the historical matches;
extracting the attribute characteristics of the first virtual character, the attribute characteristics of the second virtual character, the attribute characteristics of the fixed virtual character and the local visual field characteristics of the first virtual character in the visual field range in the virtual scene from the historical game data by adopting a preset characteristic extraction network;
extracting behavior information of a first virtual character executing target game behavior from historical game data by adopting a preset information extraction network;
fusing the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, the local view features of the first virtual character, and the behavior information of the first virtual character to obtain fused features;
dividing the fused features, based on a preset strategy, into private features exclusive to each of the first virtual character, the second virtual character, and the fixed virtual character and common features shared among the first virtual character, the second virtual character, and the fixed virtual character;
training a preset model according to the common features and the private features to obtain an operation control model, and controlling the first virtual character, the second virtual character, and the fixed virtual character in the target game through the operation control model.
2. The method of claim 1, wherein the extracting behavior information of the first virtual character executing the target game behavior from the historical game data by using the preset information extraction network comprises:
determining time information for a first virtual character to perform a target game action in a target game;
acquiring a plurality of continuous game images from historical game data based on the time information;
and extracting behavior information of the first virtual character executing the target game behavior from the plurality of game images by adopting a preset information extraction network.
3. The method of claim 2, wherein the extracting behavior information of the first virtual character executing the target game behavior from the plurality of game images using the preset information extraction network comprises:
acquiring position information data and operation information data of a first virtual character executing target game behaviors according to a plurality of continuous game images;
according to the position information data and the operation information data, behavior data of the first virtual character executing the target game behavior are constructed;
and extracting the behavior information of the first virtual character executing the target game behavior from the behavior data by adopting a preset behavior information extraction network.
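The steps of claim 3 can be sketched in plain Python: pair per-frame position data with per-frame operation data to build the behavior data, then reduce that data to behavior information. The pairing scheme, the `None`-for-idle convention, and the filtering stand-in for the "preset behavior information extraction network" are all hypothetical illustrations, not the patented method.

```python
# Hypothetical sketch of claim 3: build behavior data from position and
# operation records, then reduce it to behavior information. All names
# and conventions here are illustrative only.

def build_behavior_data(positions, operations):
    # Pair each frame's position with the operation issued at that frame.
    return list(zip(positions, operations))

def extract_behavior_info(behavior_data):
    # Stand-in for the preset extraction network: keep only the frames
    # where the first virtual character actually issued an operation.
    return [(pos, op) for pos, op in behavior_data if op is not None]

positions = [(0, 0), (1, 0), (1, 1)]          # per-frame position data
operations = [None, "attack", None]           # per-frame operation data
info = extract_behavior_info(build_behavior_data(positions, operations))
```

Here only the middle frame carries an operation, so the extracted behavior information is the single pair of that frame's position and operation.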
4. The method according to claim 1, wherein the extracting the attribute feature of the first virtual character, the attribute feature of the second virtual character, the attribute feature of the fixed virtual character and the local view feature of the first virtual character in the virtual scene from the historical game data by using the preset feature extraction network comprises:
extracting character data of a first virtual character from historical game data to obtain first character data;
extracting character data of a second virtual character from the historical game data to obtain second character data;
extracting character data of the fixed virtual character from historical game data to obtain third character data;
and respectively performing feature extraction on the first character data, the second character data, and the third character data by using a preset feature extraction network to obtain the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, and the local view features of the first virtual character within its view range in the virtual scene.
5. The method according to claim 1, wherein the fusing the attribute feature of the first virtual character, the attribute feature of the second virtual character, the attribute feature of the fixed virtual character, the local view feature of the first virtual character, and the behavior information of the first virtual character to obtain a fused feature comprises:
performing max pooling on the attribute features of the first virtual character, the attribute features of the second virtual character, and the attribute features of the fixed virtual character to obtain processed attribute features;
performing convolution processing on the local view features of the first virtual character to obtain processed view features;
and fusing the processed attribute features, the processed view features, and the behavior information to obtain the fused features.
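The fusion step of claim 5 can be illustrated with a minimal pure-Python sketch: element-wise max pooling across the attribute features of the three character types, a one-dimensional convolution over the local view features, and concatenation of the results with the behavior information. The feature values, the kernel, and the use of simple lists are made-up illustrations; the patent leaves the concrete pooling and convolution operators to the implementation.

```python
# Hypothetical sketch of claim 5's fusion: max pooling over attribute
# features, 1-D convolution over view features, then concatenation.
# All values are made up for illustration.

def max_pool(*feature_vectors):
    # Element-wise maximum across the character types' attribute features.
    return [max(vals) for vals in zip(*feature_vectors)]

def conv1d(view, kernel):
    # Valid-mode 1-D convolution (really cross-correlation) over the view.
    k = len(kernel)
    return [sum(view[i + j] * kernel[j] for j in range(k))
            for i in range(len(view) - k + 1)]

first  = [0.2, 0.9]    # attribute features, first virtual character
second = [0.5, 0.1]    # attribute features, second virtual character
fixed  = [0.4, 0.3]    # attribute features, fixed virtual character

pooled = max_pool(first, second, fixed)              # processed attribute features
view   = conv1d([1.0, 2.0, 3.0], kernel=[0.5, 0.5])  # processed view features
behavior = [1.0]                                     # behavior information
fused = pooled + view + behavior                     # fused features
```

Max pooling keeps the strongest value per attribute dimension across the three character types, and the averaging kernel smooths adjacent view entries; the final fused vector simply concatenates the three parts.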
6. The method of any one of claims 1 to 5, wherein constructing, based on the fused features, the private features exclusive to each of the first virtual character, the second virtual character, and the fixed virtual character and the common features shared among the first virtual character, the second virtual character, and the fixed virtual character comprises:
performing feature space transformation on the fused features to obtain transformed features;
selecting, from the transformed features, a number of feature components determined by a preset strategy to obtain the common features shared among the first virtual character, the second virtual character, and the fixed virtual character;
and removing the common features shared among the first virtual character, the second virtual character, and the fixed virtual character from the transformed features to obtain the private features exclusive to each of the first virtual character, the second virtual character, and the fixed virtual character.
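The two steps of claim 6 can be sketched as follows: a feature-space transformation of the fused features, then a split in which the first n components (fixed by the preset strategy) become the shared common features and the remainder become the character's private features. The toy linear "transformation" and the first-n selection rule are hypothetical stand-ins; the patent does not specify the transformation or the strategy.

```python
# Hypothetical sketch of claim 6: transform the fused features, take a
# preset number of components as the common features, and keep the rest
# as the private features. The scaling transform is a toy stand-in.

def transform(fused, scale=2.0):
    # Stand-in for the feature-space transformation.
    return [scale * x for x in fused]

def split_common_private(transformed, n_common):
    common = transformed[:n_common]    # shared among all virtual characters
    private = transformed[n_common:]   # exclusive to this virtual character
    return common, private

fused = [0.5, 0.9, 1.5, 2.5]
common, private = split_common_private(transform(fused), n_common=2)
```

Removing the common components from the transformed features, as the claim requires, is here simply the complement slice that forms `private`.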
7. A virtual character control apparatus, comprising:
the acquisition module is used for acquiring historical game data of virtual character matches in a target game, wherein the virtual characters comprise a first virtual character, a second virtual character, and a fixed virtual character, and the first virtual character and the second virtual character belong to different opposing sides in the historical matches;
an extraction module comprising:
a first extraction unit, configured to extract, from the historical game data, an attribute feature of the first virtual character, an attribute feature of the second virtual character, an attribute feature of the fixed virtual character, and a local view feature of the first virtual character within a view range in a virtual scene by using a preset feature extraction network;
the second extraction unit is used for extracting the behavior information of the first virtual character executing the target game behavior from the historical game data by adopting a preset information extraction network;
the fusion module is used for fusing the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, the local view features of the first virtual character, and the behavior information of the first virtual character to obtain fused features;
the construction module is used for dividing the fused features, based on a preset strategy, into private features exclusive to each of the first virtual character, the second virtual character, and the fixed virtual character and common features shared among the first virtual character, the second virtual character, and the fixed virtual character;
and the training module is used for training a preset model according to the common features and the private features to obtain an operation control model, so as to control the first virtual character, the second virtual character, and the fixed virtual character in the target game through the operation control model.
8. The apparatus of claim 7, wherein the second extraction unit comprises:
a determining subunit, configured to determine time information for a first virtual character to perform a target game action in a target game;
an acquisition subunit configured to acquire a plurality of consecutive game images from the historical game data based on the time information;
and the extraction subunit is used for extracting the behavior information of the first virtual character executing the target game behavior from the plurality of game images by adopting a preset information extraction network.
9. The apparatus according to claim 8, wherein the extraction subunit is specifically configured to:
acquiring position information data and operation information data of a first virtual character executing target game behaviors according to a plurality of continuous game images;
according to the position information data and the operation information data, behavior data of the first virtual character executing the target game behavior are constructed;
and extracting the behavior information of the first virtual character executing the target game behavior from the behavior data by adopting a preset behavior information extraction network.
10. The apparatus according to claim 7, wherein the first extraction unit is specifically configured to:
extracting character data of a first virtual character from historical game data to obtain first character data;
extracting character data of a second virtual character from the historical game data to obtain second character data;
extracting character data of the fixed virtual character from historical game data to obtain third character data;
and respectively performing feature extraction on the first character data, the second character data, and the third character data by using a preset feature extraction network to obtain the attribute features of the first virtual character, the attribute features of the second virtual character, the attribute features of the fixed virtual character, and the local view features of the first virtual character within its view range in the virtual scene.
11. The apparatus of claim 7, wherein the fusion module comprises:
the first processing unit is used for performing max pooling on the attribute features of the first virtual character, the attribute features of the second virtual character, and the attribute features of the fixed virtual character to obtain processed attribute features;
the second processing unit is used for performing convolution processing on the local view features of the first virtual character to obtain processed view features;
and the fusion unit is used for fusing the processed attribute features, the processed view features, and the behavior information to obtain the fused features.
12. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the steps of the virtual character control method according to any of claims 1-6 are implemented when the program is executed by the processor.
13. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the steps of the virtual character control method according to any of claims 1-6.
CN202010239611.7A 2020-03-30 2020-03-30 Virtual character control method, virtual character control device, electronic equipment and storage medium Active CN111450531B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010239611.7A CN111450531B (en) 2020-03-30 2020-03-30 Virtual character control method, virtual character control device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111450531A CN111450531A (en) 2020-07-28
CN111450531B true CN111450531B (en) 2021-08-03

Family

ID=71670814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010239611.7A Active CN111450531B (en) 2020-03-30 2020-03-30 Virtual character control method, virtual character control device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111450531B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111921201B (en) * 2020-09-21 2021-01-08 成都完美天智游科技有限公司 Method and device for generating frame data, storage medium and computer equipment
CN114344889B (en) * 2020-10-12 2024-01-26 腾讯科技(深圳)有限公司 Game strategy model generation method and control method of intelligent agent in game
CN113996063A (en) * 2021-10-29 2022-02-01 北京市商汤科技开发有限公司 Method and device for controlling virtual character in game and computer equipment
CN116999823A (en) * 2022-06-23 2023-11-07 腾讯科技(成都)有限公司 Information display method, information display device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180083703A (en) * 2017-01-13 2018-07-23 주식회사 엔씨소프트 Method of decision making for a fighting action game character based on artificial neural networks and computer program therefor
CN108671546A (en) * 2018-05-23 2018-10-19 腾讯科技(深圳)有限公司 Determination method and apparatus, storage medium and the electronic device of object run
CN109344314A (en) * 2018-08-20 2019-02-15 腾讯科技(深圳)有限公司 A kind of data processing method, device and server
CN109893857A (en) * 2019-03-14 2019-06-18 腾讯科技(深圳)有限公司 A kind of method, the method for model training and the relevant apparatus of operation information prediction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378423A (en) * 2019-07-22 2019-10-25 腾讯科技(深圳)有限公司 Feature extracting method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111450531A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111450531B (en) Virtual character control method, virtual character control device, electronic equipment and storage medium
KR102523888B1 (en) Method, Apparatus and Device for Scheduling Virtual Objects in a Virtual Environment
CN111111220B (en) Self-chess-playing model training method and device for multiplayer battle game and computer equipment
US11938403B2 (en) Game character behavior control method and apparatus, storage medium, and electronic device
CN110163238B (en) Information prediction method, model training method and server
CN112791394B (en) Game model training method and device, electronic equipment and storage medium
CN111582469A (en) Multi-agent cooperation information processing method and system, storage medium and intelligent terminal
WO2021159779A1 (en) Information processing method and apparatus, computer-readable storage medium and electronic device
CN111450534B (en) Training method of label prediction model, and label prediction method and device
CN109529338A (en) Object control method, apparatus, Electronic Design and computer-readable medium
CN111437608B (en) Game play method, device, equipment and storage medium based on artificial intelligence
CN110170171A (en) A kind of control method and device of target object
CN113688977A (en) Confrontation task oriented man-machine symbiosis reinforcement learning method and device, computing equipment and storage medium
CN112402986B (en) Training method and device for reinforcement learning model in battle game
CN113230650B (en) Data processing method and device and computer readable storage medium
CN114404975A (en) Method, device, equipment, storage medium and program product for training decision model
CN112044076B (en) Object control method and device and computer readable storage medium
CN109977998A (en) Information processing method and device, storage medium and electronic device
CN114870403A (en) Battle matching method, device, equipment and storage medium in game
CN114404977A (en) Training method of behavior model and training method of structure expansion model
CN114404976A (en) Method and device for training decision model, computer equipment and storage medium
CN114344889A (en) Game strategy model generation method and control method of intelligent agent in game
CN115944921B (en) Game data processing method, device, equipment and medium
CN116999823A (en) Information display method, information display device, storage medium and electronic equipment
CN114307166A (en) Game fighting model training method, game fighting method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant