CN115463424A - Display control method and apparatus for a virtual character, and electronic device - Google Patents

Display control method and apparatus for a virtual character, and electronic device Download PDF

Info

Publication number
CN115463424A
Authority
CN
China
Prior art keywords
virtual character
character
virtual
information
prediction result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210850247.7A
Other languages
Chinese (zh)
Inventor
张林箭
郭燧冰
宋有伟
汪硕芃
张聪
范长杰
胡志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210850247.7A priority Critical patent/CN115463424A/en
Publication of CN115463424A publication Critical patent/CN115463424A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/80 Special adaptations for executing a specific game genre or game mode
    • A63F 13/822 Strategy games; Role-playing games
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G06F 40/35 Discourse or dialogue representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/65 Methods for processing data by generating or executing the game program for computing the condition of a game character
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F 2300/807 Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a display control method and apparatus for a virtual character, and an electronic device, relating to the technical field of games. The method comprises the following steps: first, according to the background information of the virtual character, a policy model is used to predict the behavior action of the virtual character to obtain a target behavior action; then a pre-trained generative pre-training language model is used to generate a reply text for the virtual character; finally, the virtual character is controlled to perform the target behavior action and display the reply text, so as to produce the character reaction of the virtual character. By controlling the virtual character through model-based prediction of behavior actions and generation of reply texts, character reactions are produced automatically, which greatly reduces editing cost, improves the efficiency of generating character reactions, meets the demands of diverse practical scenarios, and improves the game experience.

Description

Display control method and apparatus for a virtual character, and electronic device
Technical Field
The present invention relates to the technical field of games, and in particular to a display control method and apparatus for a virtual character, and an electronic device.
Background
In a Massively Multiplayer Online Role-Playing Game (MMORPG), many Non-Player Characters (NPCs) that are not controlled by real players are generally provided and can interact with game players. The reactions of a character in the game (an NPC or a player character) to the surrounding environment (other NPCs, players, weather, time, and other factors), including body movements, emotional actions, text replies, and subsequent states, are generally referred to as "character reactions".
In one related technique, character reactions in a game scene are implemented by manual editing: the states corresponding to particular instructions are written for the virtual character in advance, the corresponding character reaction is produced under a specific instruction, and a pre-edited action-reply loop is generally adopted. In another related technique, the text replies of characters tend to use fixed question-answer templates, as in Microsoft XiaoIce, Alibaba's AliMe, Google's Meena, and Facebook's Blender. These rely on the question-answer form; yet in some practical scenarios, characters in a game need to generate replies according to changes in the surrounding environment, of which a "question" is only one dimension, while weather conditions, the current time, actions performed by players, and the like are further dimensions the environment must take into account.
However, with manual editing, multiple pieces of text are usually edited for each state in order to increase the diversity of character reactions and improve the player's game experience, which multiplies the amount of copywriting geometrically, so that editing such large-scale data is very expensive for game writers. Character text replies based on question-answer templates can usually generate only fixed replies according to keywords in the question, and cannot meet the demands of diverse practical scenarios in the face of a changing surrounding environment.
That is, in existing MMORPGs the reactions of virtual characters cannot meet the demands of diverse practical scenarios, and there are the technical problems of huge editing cost, a narrow range of applicable scenes, and poor game experience.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a display control method and apparatus for a virtual character, and an electronic device, so as to solve the prior-art problems of the high cost of editing character reactions, the narrow range of applicable scenes, and the poor game experience.
In order to achieve the above object, the embodiments of the present invention adopt the following technical solutions:
In a first aspect, an embodiment of the present invention provides a display control method for a virtual character, including: predicting the behavior action of the virtual character with a policy model according to pre-generated background information of the virtual character, to obtain the target behavior action of the virtual character, the behavior action comprising a first limb action and a language action of the virtual character; taking the description text corresponding to the background information of the virtual character as input, and generating a reply text of the virtual character with a pre-trained generative pre-training language model; and controlling the virtual character to perform the target behavior action and display the reply text, so as to produce a character reaction of the virtual character.
In one possible embodiment, after the step of generating the reply text of the virtual character with the pre-trained generative pre-training language model, taking the description text corresponding to the background information of the virtual character as input, the method further includes: predicting the state information of the virtual character with the policy model according to the pre-generated background information of the virtual character and the reply text of the virtual character, and determining a subsequent state of the virtual character, the subsequent state including a second limb action of the virtual character.
In one possible embodiment, after the step of controlling the virtual character to perform the target behavior action and displaying the reply text to generate the character reaction of the virtual character, the method further includes: and controlling the virtual character to execute the second limb action.
In one possible embodiment, the policy model includes a PPL (perplexity) strategy and a Seq2Seq model, the PPL strategy being built on a trained GPT (Generative Pre-trained Transformer) model. The step of predicting the behavior action of the virtual character with the policy model according to the pre-generated background information of the virtual character, to obtain the target behavior action of the virtual character, includes: predicting with the GPT model according to the pre-generated background information of the virtual character to generate a first prediction result for the behavior action; predicting with the Seq2Seq model according to the pre-generated background information of the virtual character to generate a second prediction result for the behavior action; and determining the target behavior action based on the behavior actions corresponding to the first prediction result and the second prediction result.
In one possible implementation, the step of determining the target behavior action based on the behavior actions corresponding to the first prediction result and the second prediction result includes: when the first prediction result is the same as the second prediction result, determining the behavior action corresponding to the first (or second) prediction result as the target behavior action; when the first prediction result differs from the second prediction result and only one of them is not empty, determining the behavior action corresponding to the non-empty prediction result as the target behavior action; and when the first prediction result differs from the second prediction result and neither is empty, determining the behavior action corresponding to the first prediction result as the target behavior action.
In one possible implementation, the method further includes: generating the background information of the virtual character according to character-information description texts of multiple dimensions obtained in advance, the background information of the virtual character including player information, environment information, and NPC information.
In a possible implementation manner, the step of generating the background information of the virtual character according to the character-information description texts of multiple dimensions obtained in advance includes: obtaining the player information in the current game scene, the player information including a player character table that contains player character names, professions, and labels, each label corresponding to at least one description text; and splicing the profession and the description texts of each player character name in the player character table to generate the final description text of the current player character.
In a possible implementation manner, the step of generating the background information of the virtual character according to the character-information description texts of multiple dimensions obtained in advance further includes: acquiring the NPC information in the current game scene, the NPC information including an NPC table that contains NPC names, professions, and labels, each label corresponding to at least one description text; and splicing the profession and the description texts of each NPC name in the NPC table to generate the final description text of the current NPC.
In a second aspect, an embodiment of the present invention provides a display control apparatus for a virtual character, including: a target behavior action determining module, configured to predict the behavior action of the virtual character with a policy model according to pre-generated background information of the virtual character, to obtain the target behavior action of the virtual character, the behavior action comprising a first limb action and a language action of the virtual character; a reply text generation module, configured to take the description text corresponding to the background information of the virtual character as input and generate a reply text of the virtual character with a pre-trained generative pre-training language model; and a display control module, configured to control the virtual character to perform the target behavior action and display the reply text, so as to produce a character reaction of the virtual character.
In one possible embodiment, the apparatus further comprises: a subsequent state generation module, configured to predict the state information of the virtual character with the policy model according to the pre-generated background information of the virtual character and the reply text of the virtual character, and determine the subsequent state of the virtual character, the subsequent state including a second limb action of the virtual character.
In a possible implementation manner, the display control module is further configured to control the virtual character to perform the second limb action.
In one possible implementation, the apparatus further includes: a background information generation module, configured to generate the background information of the virtual character according to character-information description texts of multiple dimensions obtained in advance, the background information of the virtual character including player information, environment information, and NPC information.
In a possible implementation manner, the background information generation module is further configured to: obtain the player information in the current game scene, the player information including a player character table that contains player character names, professions, and labels, each label corresponding to at least one description text; and splice the profession and the description texts of each player character name in the player character table to generate the final description text of the current player character.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the steps of the display control method for a virtual character according to any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the display control method for a virtual character according to any one of the first aspect.
The present invention provides a display control method and apparatus for a virtual character, and an electronic device. In the method, the behavior action of the virtual character is first predicted with a policy model according to the background information of the virtual character to obtain a target behavior action; a reply text of the virtual character is then generated with a pre-trained generative pre-training language model; finally, the virtual character is controlled to perform the target behavior action and display the reply text, so as to produce a character reaction of the virtual character. By controlling the virtual character through model-based prediction of behavior actions and generation of reply texts, character reactions are produced automatically, which greatly reduces editing cost, improves the efficiency of generating character reactions, meets the demands of diverse practical scenarios, and improves the game experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flowchart of a display control method for a virtual character according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a display control system for a virtual character according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a display control apparatus for a virtual character according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
In a massively multiplayer online role-playing game (MMORPG), a number of non-player characters (NPCs) that are not controlled by real players are typically provided and can interact with game players. The reactions of a character in the game (an NPC or a player character) to the surrounding environment (other NPCs, players, weather, time, and other factors), including body movements, emotional actions, text replies, and subsequent states, are generally defined as "character reactions". Currently, character reactions are generally realized by manually writing state feedback in advance, and to increase the diversity of character reactions and improve the player's game experience, multiple pieces of text are usually edited for each state, which multiplies the number of text entries geometrically. For example, if a game has 1000 NPCs, a player can perform 50 actions on an NPC, and there are 5 weather types and 3 times of day, then the number of text replies alone in this scenario is 1000 × 50 × 5 × 3 = 750,000. Such large-scale data is very costly for game writers to edit, and an automated method is needed to assist them in generating character reaction data.
Apart from manual editing, there is currently no solution for automatically generating the character reactions of characters in a game scene. Specifically, the three tasks of character limb action prediction, character emotional action prediction, and character subsequent-state prediction are highly specific to the application scene; no existing data set supports them, and the tasks need to be converted and decomposed. For the character text-reply task, there are some related dialog-reply solutions, such as Microsoft XiaoIce, Alibaba's AliMe, Google's Meena, and Facebook's Blender, but they depend on the question-answer form. In some practical scenarios a character needs to generate a reply according to changes in the surrounding environment; a "question" is only one dimension of such changes, while weather conditions, the current time, actions performed by players, and the like are further dimensions to be considered. Existing dialog-reply techniques cannot meet the requirements of actual scenes.
Based on this, embodiments of the present invention provide a display control method and apparatus for a virtual character, and an electronic device, so as to solve the problems that existing methods of editing character reactions for virtual characters are costly and inefficient, cannot meet the demands of diverse practical scenarios, and lead to poor game experience.
To facilitate understanding of the present embodiment, the display control method for a virtual character disclosed herein is first described in detail with reference to the flowchart shown in fig. 1. The method may be executed by an electronic device and mainly includes the following steps S110 to S130:
s110: predicting the behavior action of the virtual character by using a strategy model according to the pre-generated background information of the virtual character to obtain the target behavior action of the virtual character;
the virtual character may be a player character or an NPC. That is, the display control method of the virtual character according to the present embodiment may be used to generate a character reaction of the player, and may also be used to generate a character reaction of the NPC. In particular, players may also include online players and offline players.
In this embodiment, the behavior action of the virtual character may include a first limb action and a language action. The first limb action may be an action that the character can actually perform in the game, such as waving a hand, kneeling, or raising the head; the language action may determine the emotional tendency of the next text reply, such as anger, joy, or frustration.
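For illustration only (not part of the original disclosure), the outputs involved here can be pictured as a simple data structure; the field names below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BehaviorAction:
    """A predicted behavior action (hypothetical schema)."""
    limb_action: Optional[str]      # e.g. "wave a hand", "kneel", "raise the head"
    language_action: Optional[str]  # emotional tendency, e.g. "anger", "joy"

@dataclass
class CharacterReaction:
    """The character reaction assembled across steps S110-S130."""
    behavior: BehaviorAction
    reply_text: str                          # generated in step S120
    subsequent_state: Optional[str] = None   # e.g. "combat", predicted later
```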
S120: taking the description text corresponding to the background information of the virtual character as input, and generating a reply text of the virtual character with the pre-trained generative pre-training language model.
the reply text may be a response of the virtual character to the current scene, such as: the player steals something from the NPC, which responds to say "steal something may be bad, so you see that many people are all in the viewing tweed. "
In this embodiment, the pre-trained generative pre-training language model may be a GPT model pre-trained on published Chinese novel corpora and the storylines of game scenes. Given the background information, the model can be made to generate the expected reply text.
As a specific example, the concatenation <player information> + <NPC information> + <environment information> + <player action> + "<NPC> next will" + <limb action> + "<NPC>" + <language action> + "<NPC> says to the player:" is given to the GPT model as prefix information, leading the model to generate what the NPC says next. For example: "The swordsman is a disciple of Sanqing Mountain, trained in martial arts, and touring after just coming down the mountain. Wu Sansao is a village woman strolling by the roadside as a passerby. The weather is now clear. The swordsman gives a shiver in front of Wu Sansao. Wu Sansao next will clench both fists. Wu Sansao feels uneasy. Wu Sansao says to the swordsman:" A generated result is, for example: "Girl, don't be afraid! We mean no harm."
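A minimal sketch of how such a prefix might be assembled; the function name and English template strings are illustrative assumptions (the actual system works on Chinese text):

```python
def build_reply_prefix(player_info: str, npc_info: str, env_info: str,
                       player_action: str, npc_name: str,
                       limb_action: str, language_action: str) -> str:
    """Concatenate background information into the generation prefix
    <player info> + <NPC info> + <environment info> + <player action>
    + "<NPC> next will <limb action>" + "<NPC> <language action>"
    + "<NPC> says to the player:"."""
    return (
        f"{player_info}{npc_info}{env_info}{player_action}"
        f"{npc_name} next will {limb_action}. "
        f"{npc_name} {language_action}. "
        f"{npc_name} says to the player: "
    )
```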
To increase the variety of the generated reply texts, the following strategy may be employed: decode multiple times, taking the top-n candidates each time; score the similarity of every pair of top-n reply candidates (extract sentence vectors with BERT, then compute cosine distances); between two highly similar candidates, keep only the one with the higher decoding score; and finally deduplicate the repeated decoding results (the similarity threshold can be adjusted for the specific scene).
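A sketch of this diversity strategy, assuming candidates arrive as (text, decoding score) pairs and an `encode()` helper returns L2-normalized sentence vectors (e.g. from a BERT encoder); the names and the threshold value are assumptions:

```python
import numpy as np

def deduplicate_candidates(candidates, encode, sim_threshold=0.9):
    """Among near-duplicate replies, keep only the one with the higher
    decoding score. candidates: list of (text, score) from repeated
    decoding; encode(text) -> unit-length vector, so the dot product
    below is cosine similarity."""
    kept = []  # list of ((text, score), vector)
    for text, score in sorted(candidates, key=lambda c: -c[1]):
        vec = encode(text)
        if all(float(np.dot(vec, kv)) < sim_threshold for _, kv in kept):
            kept.append(((text, score), vec))
    return [pair for pair, _ in kept]
```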
S130: controlling the virtual character to perform the target behavior action and display the reply text, to produce the character reaction of the virtual character.
In an embodiment, after step S120, the method may further include: predicting the state information of the virtual character with the policy model according to the pre-generated background information of the virtual character and the reply text of the virtual character, and determining the subsequent state of the virtual character, the subsequent state including a second limb action of the virtual character.
Correspondingly, after step S130, the method may further include: controlling the virtual character to perform the second limb action.
The subsequent state may be the next reaction after the virtual character responds to the current scene, and the second limb action may be an action the character can actually execute in the game. For example, after an NPC is provoked by a game player and makes the corresponding action and reply, a subsequent state it may enter is a combat state, i.e. fighting the player, and the second limb action corresponding to that state is fighting; after an NPC finishes communicating with a player, a subsequent state it may enter is parting from the player, and the second limb action corresponding to that state may be waving a hand, a fist-cupping salute, or the like. This step may be used to predict whether the NPC needs to enter the subsequent state.
In one embodiment, before step S110, the method may include: generating the background information of the virtual character according to character-information description texts of multiple dimensions obtained in advance. The background information of the virtual character includes player information, environment information, and NPC information.
In general, the various structured data need to be described in natural language in advance to serve as input to the subsequent pre-trained language model. That is, the description text corresponding to each dimension needs to be manually edited beforehand, where the dimensions may include player information, limb actions, language actions, subsequent states, environment information, and so on. With the display control method provided by this embodiment, text only needs to be edited manually for each structured dimension, and the text descriptions of the multiple dimensions can then be combined automatically into background information rich in scene detail, which assists the generation of character reactions.
To simplify the copywriting workload, game players can be divided into two categories, men and women, with a corresponding description text written for each. For example, a male player is described as: "The swordsman is a disciple of Sanqing Mountain, trained in martial arts, and touring after just coming down the mountain." The description is not limited to the gender dimension; any dimension of a player in the game may be used to enrich the player's description text.
In this embodiment, the method for automatically acquiring player information may include the steps of:
(1) Obtaining player information in a current game scene;
wherein the player information comprises a player character table, the player character table comprising player character names, professions, and labels; each label corresponds to at least one descriptive text;
(2) Splicing the profession and the description texts of each player character name in the player character table to generate the final description text of the current player character, as sketched below.
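A minimal sketch of the splicing in steps (1)-(2); the table layout follows the text above, while the dict-based lookup and the "{0}" placeholder convention (borrowed from the templates shown later) are assumptions:

```python
def build_player_description(name: str, profession: str, labels: list,
                             profession_texts: dict, label_texts: dict) -> str:
    """Splice the profession text and the label description texts of one
    player-character-table row into the final description text."""
    pieces = [profession_texts[profession].format(name)]
    pieces += [label_texts[label].format(name) for label in labels]
    return "".join(pieces)

# Hypothetical usage:
#   profession_texts = {"swordsman": "{0} is a swordsman of Sanqing Mountain. "}
#   label_texts = {"newcomer": "{0} has just come down the mountain. "}
#   build_player_description("Li Bai", "swordsman", ["newcomer"],
#                            profession_texts, label_texts)
```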
The actions a player can perform in a game form a finite set determined by the game. As a specific example, the scene in this embodiment includes 50 player actions, each corresponding to a pre-edited description text, such as "wave": "{1} waves towards {0}." (Note: in all subsequent text, {0} represents the NPC name and {1} represents the player name; they are replaced by specific values when data is actually generated.)
NPC information, like player information, describes the characteristics of an NPC with one sentence, i.e. each NPC corresponds to one description text. Since the number of NPCs in a game can run into the thousands, writing the description texts one by one is very time consuming. This embodiment therefore proposes a method of automatically generating NPC information.
In this embodiment, the method for automatically acquiring NPC information may also include the following steps:
(A) Acquiring NPC information in a current game scene;
the NPC information comprises an NPC table, and the NPC table comprises NPC names, professions and labels; each label corresponds to at least one description text;
(B) Splicing the profession and the description texts of each NPC name in the NPC table to generate the final description text of the current NPC.
That is, the NPC table typically contains the names, professions (which may be empty), and labels of the NPCs in the game. First, an NPC description template is edited for each profession, with roughly 50 profession categories; for example, the template for the "monk" profession is: "{0} is a monk and wears a cassock." Then a piece of copy is edited for each label; for example, the copy for the "wealthy" label is: "has abundant family wealth." Next, the profession of each NPC in the game-provided NPC table is obtained (if it is missing, the profession can be predicted by matching the word-vector distance between the NPC name and the profession names, classifying the NPC into the nearest profession). Finally, the text descriptions of the profession and the labels are spliced into the final description text of the current NPC. For example, for a temple monk NPC with the labels "tall" and "simple", the corresponding description is: "<NPC name> is a monk, wears a cassock, is tall, and is simple and honest."
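A sketch of the NPC variant, including the word-vector fallback for a missing profession; `embed()` stands for an assumed word-embedding lookup:

```python
import numpy as np

def nearest_profession(npc_name, profession_templates, embed):
    """When the profession field is empty, classify the NPC into the
    profession whose name vector is closest to the NPC name vector."""
    v = embed(npc_name)
    return min(profession_templates,
               key=lambda p: float(np.linalg.norm(v - embed(p))))

def build_npc_description(name, profession, labels,
                          profession_templates, label_texts, embed):
    """Splice profession template and label copy, as in the text above."""
    if not profession:
        profession = nearest_profession(name, profession_templates, embed)
    pieces = [profession_templates[profession].format(name)]
    pieces += [label_texts[label].format(name) for label in labels]
    return "".join(pieces)
```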
The display control method provided by this embodiment can be used for a player acting on an NPC, and also for an online player acting on an offline player. For example, for a specific player and a specific NPC, after the player performs a specific action on the NPC, the method provided by this embodiment can predict the NPC's next target behavior action (limb action, language action, text reply, subsequent state) according to the current background information (player information, NPC information, environment information).
The limb actions an NPC can execute in the game are also a finite set. As a specific example, the scene in this embodiment includes 70 NPC actions; as with player actions, the description text corresponding to each NPC limb action can be written in advance, such as "tremble": "{0} gives a shiver."
In this embodiment, the language (emotion) actions expressed by NPCs in the game fall into about 20 classes; as with limb actions, the description text corresponding to each NPC language action can be edited in advance, such as "regretful": "{0} feels very regretful." The NPC subsequent states fall into about 10 classes, and their description texts are likewise edited in advance, such as "combat": "{0} gets ready to fight."
Environment information includes, but is not limited to, weather (sunny, snowy, rainy) and time (daytime, night, dusk). A description text can be edited for each value of the environment information, such as "sunny day": "It is a sunny day with a clear sky."
In one embodiment, the policy model used in step S110 includes a PPL (perplexity) strategy and a Seq2Seq model, the PPL strategy being built on a trained GPT model. As a specific example, the PPL strategy works as follows: after the GPT model has been pre-trained, in the inference stage (i.e. the decoding stage), given a prefix and a suffix, the GPT model takes the prefix as input and outputs a score for the suffix (the PPL score); the lower the score, the more fluent the suffix is as a continuation of the prefix, so the score can be used to rank multiple candidates.
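A sketch of such a PPL score with a Hugging Face-style causal language model (loading of the model and tokenizer is assumed); masking the prefix positions with -100 makes the loss cover only the suffix tokens:

```python
import torch

def ppl_score(model, tokenizer, prefix: str, suffix: str) -> float:
    """Average negative log-likelihood of the suffix given the prefix;
    lower means the suffix is a more fluent continuation. exp() of this
    value would be the perplexity proper."""
    prefix_len = tokenizer(prefix, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(prefix + suffix, return_tensors="pt").input_ids
    labels = ids.clone()
    labels[:, :prefix_len] = -100          # ignore loss on prefix positions
    with torch.no_grad():
        loss = model(ids, labels=labels).loss
    return float(loss)

# Ranking candidates, lowest score first:
#   ranked = sorted(candidates, key=lambda s: ppl_score(model, tok, prefix, s))
```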
The GPT model can be fine-tuned before use. In this embodiment, the open-source base GPT model is fine-tuned on published Chinese novel corpora and the plot data of the game scene (an MMORPG usually has plot stories depicting the dialogs and behaviors between virtual characters, similar to a novel, which can be used directly to train a language model), so that the GPT model gains the ability to understand dialogs, behaviors, and game scenes.
In this embodiment, predicting the behavior action of the virtual character with the policy model according to the pre-generated background information of the virtual character includes: predicting with the GPT model according to the pre-generated background information of the virtual character to generate a first prediction result for the behavior action; predicting with the Seq2Seq model according to the pre-generated background information of the virtual character to generate a second prediction result for the behavior action; and finally determining the target behavior action based on the behavior actions corresponding to the first and second prediction results.
When the target behavior action is determined based on the behavior actions corresponding to the first and second prediction results, the prediction result of the PPL strategy is prioritized. Specifically:
(1) When the first prediction result is the same as the second prediction result, the behavior action corresponding to the first (or second) prediction result is determined to be the target behavior action; if both prediction results are empty, no prediction is made for this task, i.e. the prediction result is empty.
(2) When the first prediction result differs from the second prediction result and only one of them is not empty, the behavior action corresponding to the non-empty prediction result is determined to be the target behavior action.
(3) When the first prediction result differs from the second prediction result and neither is empty, the behavior action corresponding to the first prediction result is determined to be the target behavior action. (These rules are sketched in code below.)
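The three rules reduce to a small function; a sketch, with `None` standing for an empty prediction result:

```python
def merge_predictions(ppl_result, seq2seq_result):
    """Combine the PPL-strategy (GPT) and Seq2Seq prediction results
    according to rules (1)-(3) above."""
    if ppl_result == seq2seq_result:
        return ppl_result              # rule (1); None if both are empty
    if ppl_result is None:
        return seq2seq_result          # rule (2): take the non-empty one
    if seq2seq_result is None:
        return ppl_result
    return ppl_result                  # rule (3): prefer the PPL strategy
```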
In one embodiment, if the first prediction result obtained by the PPL strategy differs from the second prediction result obtained by the Seq2Seq model, filtering thresholds are set for the PPL strategy and the Seq2Seq model respectively, confidence scores are generated for the first and second prediction results, and the behavior action corresponding to the prediction result with the higher confidence score is taken as the target behavior action of the virtual character.
Taking the NPC limb-action task as an example, the process of setting a filtering threshold for a model may include: first, a batch of data is generated offline, its content being the limb-action prediction results (candidate rankings and scores) of NPCs in different scenes (covering the player, NPC, player action, environment, and other dimensions); the reasonableness of these action predictions is then labeled manually, and a threshold T is found such that predictions scoring above it agree with the manual labels at a target rate (e.g. 90%). In subsequent actual prediction, results scoring below T can be filtered out directly. Thresholds can be set and unreasonable options filtered in this way for both the PPL strategy and the Seq2Seq model; a sketch of the threshold search follows.
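A sketch of the threshold search over the manually labeled batch; the pair format is an assumption:

```python
def find_filter_threshold(labeled, target_precision=0.9):
    """labeled: list of (confidence_score, is_reasonable) pairs from the
    offline batch. Return the lowest threshold T such that predictions
    scoring above T agree with the manual labels at the target rate."""
    for t in sorted({score for score, _ in labeled}):
        above = [ok for score, ok in labeled if score > t]
        if above and sum(above) / len(above) >= target_precision:
            return t
    return None  # no threshold reaches the target precision
```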
Example 1: the PPL strategy predicts with a GPT language model pre-trained on the corpus of the game scene, which yields a confidence score for each candidate action under the given background information. Specifically, the following three types of templates are defined for the three tasks:
i. Limb action template: <player information> + <NPC information> + <environment information> + <player action> + "<NPC> next will" + <limb action candidate>. After the preceding fields are replaced with actual values, the text is input into the GPT model to obtain a PPL score. For example, in "The swordsman is a disciple of Sanqing Mountain, trained in martial arts, and touring after just coming down the mountain. Wu Sansao is a village woman strolling by the roadside as a passerby. The weather is now clear. The swordsman gives a shiver in front of Wu Sansao. Wu Sansao next will clench both fists.", replacing the <limb action candidate> content with each limb-action text description in turn yields the PPL score of each candidate.
ii. Language action template: <player information> + <NPC information> + <environment information> + <player action> + "<NPC> next will" + <limb action> + "<NPC>" + <language action candidate>. Language action prediction is performed after limb action prediction, e.g. "The swordsman is a disciple of Sanqing Mountain, trained in martial arts, and touring after just coming down the mountain. Wu Sansao is a village woman strolling by the roadside as a passerby. The weather is now clear. The swordsman gives a shiver in front of Wu Sansao. Wu Sansao next will clench both fists. Wu Sansao feels uneasy." Replacing the <language action candidate> similarly yields the PPL score of each candidate.
iii. Subsequent state template: <player information> + <NPC information> + <environment information> + <player action> + "<NPC> next will" + <limb action> + "<NPC>" + <language action> + "<NPC> says to the player:" + <reply text> + "<NPC>" + <subsequent state candidate>. Subsequent-state prediction is closely tied to the text the virtual character actually replies, so it is performed after the language action prediction and the generation of the reply text are complete (the reply-text generation details are given under step S120 above). A practical example: "The swordsman is a disciple of Sanqing Mountain, trained in martial arts, and touring after just coming down the mountain. Wu Sansao is a village woman strolling by the roadside as a passerby. The weather is now clear. The swordsman gives a shiver in front of Wu Sansao. Wu Sansao next will clench both fists. Wu Sansao feels uneasy. Wu Sansao says to the swordsman: 'Girl, if you have something to say, say it quickly! This old woman still has wool to spin.' Wu Sansao leaves." Similarly, replacing the <subsequent state candidate> yields the PPL score of each candidate.
Example 2: the Seq2Seq model is based on a trained T5 model, and is a model finely adjusted on a public data set related to multiple choice questions, and the multiple choice question data set can be generally processed into a format of Seq2Seq in advance.
The multiple-choice data set is generally constructed from several NER-task data sets (MSRA_NER, DuEE-fin, ccks2019_event_entity_extract, etc.), which are processed into seq2seq-format data through templates. An NER task is given a piece of text and outputs the entities involved in the text together with their categories; such a task can be cast into seq2seq format simply by rephrasing it as a question.
For the same multiple-choice question, multiple samples can be expanded from different templates, thereby enlarging the training set. A Seq2Seq model trained on these data has a certain inference and decision capability and is well suited to the scenario of this embodiment.
As in Example 1, input templates are set for the three tasks. For example, the input template for limb-action prediction is: <player information> + <NPC information> + <environment information> + <player action> + "which action is <NPC> most likely to do next:" + <limb action candidate list>. The <limb action candidate list> lists all possible candidates; to reduce the model's prediction difficulty, the copywriting can pre-edit the optional <limb action>s for each <player action>, keeping the options within 10.
For example, the input content is: "The swordsman is a disciple of Sanqing Mountain, trained in martial arts, and touring after just coming down the mountain. Wu Sansao is a village woman strolling by the roadside as a passerby. The weather is now clear. The swordsman gives a shiver in front of Wu Sansao. Which action is Wu Sansao most likely to do next: A. clench both fists B. wave a hand C. raise the head D. kneel down", and the model decodes the output result. Similar input templates can be set for language-action and subsequent-state prediction. To improve the generation effect, several templates can be set for each scene; each template is predicted with the Seq2Seq model, and the results are voted on to select the final result.
If n templates are set, then after the fine-tuned Seq2Seq model produces an answer for each, the mode of the answers is computed and returned as the final answer. If several answers are tied in count, the sum of each tied answer's decoding scores across the different templates is computed (the decoding score is the probability of the decoded output, a value provided by the seq2seq model), and the answer with the highest sum is returned. Setting multiple templates mitigates model bias: if only one template were used for prediction, the model would likely produce the most common and generic results; writing several templates, predicting with each, and voting on the results yields more accurate and reasonable outcomes, as sketched below.
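A sketch of this voting scheme; each template's prediction is assumed to arrive as an (answer, decoding score) pair:

```python
from collections import Counter

def vote_final_answer(template_answers):
    """template_answers: list of (answer, decode_score), one per template.
    Return the most frequent answer; break count ties by the summed
    decoding scores of the tied answers."""
    counts = Counter(answer for answer, _ in template_answers)
    top = max(counts.values())
    tied = {answer for answer, count in counts.items() if count == top}
    if len(tied) == 1:
        return tied.pop()
    sums = {answer: 0.0 for answer in tied}
    for answer, score in template_answers:
        if answer in sums:
            sums[answer] += score
    return max(sums, key=sums.get)
```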
In the scenario of the embodiment of the present application, character reactions are not limited to game NPCs; the method is also applicable to the following scenarios, in which the "overall scheduling module" in the display control system of a virtual character shown in fig. 2 controls the character reactions to be generated.
a) Offline players. After a game player goes offline, the virtual character controlled by that player may still appear in the game scene; an online player can act on the offline player, and the offline player then needs to give feedback. In this scenario, the character reactions of the offline player can be generated in advance.
b) Online players. When an online player performs an action on a virtual character, a reply text matching the action can be generated; for example, when the player makes a mocking gesture at an NPC, a matching reply along the lines of "You really are ugly; careful not to frighten the girls" is generated. The online player can intuitively see the reply text matched to the action they made, which improves the player's immersion in the game.
c) Scenes with partial information missing. For example: (1) when player information is missing, the NPC performs its character reaction based only on the environment information; (2) when NPC information is missing, after the player performs an action the character reaction is produced based only on the environment information.
d) The generated reply text may be words actually spoken or an inner monologue. This can be realized by modifying the suffix of the generation template, for example replacing "xx says to xx:" with "xx thinks to himself:". The method is not limited to these two forms; any other scene in which a trained language model can be guided to continue generating by modifying the prefix can be adapted in this way to produce the required copy.
The character reaction data are generated offline in advance and stored in a database; during game runtime, the character reactions meeting the current conditions are retrieved through key fields such as player information, NPC information, and environment information, and then expressed in the game, which gives lower latency than online generation. A sketch of this pipeline follows.
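A sketch of the offline pre-generation and runtime lookup; `predict_reaction()` stands in for the whole S110-S130 pipeline, and a flat dict stands in for the database, both assumptions:

```python
from itertools import product

def pregenerate_reactions(players, npcs, envs, player_actions, predict_reaction):
    """Enumerate the key-field combinations offline and cache the
    predicted character reaction for each."""
    return {
        key: predict_reaction(*key)
        for key in product(players, npcs, envs, player_actions)
    }

def lookup_reaction(cache, player, npc, env, action):
    """At runtime, fetch by key fields instead of generating online."""
    return cache.get((player, npc, env, action))
```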
The embodiment of the present application provides a display control method for a virtual character, including: first predicting the behavior action of the virtual character with a policy model according to the background information of the virtual character to obtain a target behavior action; then generating a reply text of the virtual character with a pre-trained generative pre-training language model; and finally combining the target behavior action and the reply text to produce a character reaction of the virtual character. By generating character reactions through model-based prediction of behavior actions and generation of reply texts, the method greatly reduces editing cost, improves the efficiency of generating character reactions, broadens the range of applicable practical scenes, and improves the game experience.
An embodiment of the present invention further provides a display control apparatus for a virtual character; as shown in fig. 3, the apparatus includes:
a target behavior action determining module 310, configured to predict the behavior action of the virtual character with a policy model according to the pre-generated background information of the virtual character, to obtain the target behavior action of the virtual character, the behavior action comprising a first limb action and a language action of the virtual character;
a reply text generation module 320, configured to take the description text corresponding to the background information of the virtual character as input and generate a reply text of the virtual character with a pre-trained generative pre-training language model; and
a display control module 330, configured to control the virtual character to perform the target behavior action and display the reply text, so as to produce a character reaction of the virtual character.
In one embodiment, the apparatus may further include: a subsequent state generation module, which may be configured to predict the state information of the virtual character with the policy model according to the pre-generated background information of the virtual character and the reply text of the virtual character, and to determine the subsequent state of the virtual character.
In this embodiment, the display control module may be further configured to control the virtual character to perform the second limb action included in the subsequent state.
In one embodiment, the apparatus may further include: a background information generation module, which may be configured to generate the background information of the virtual character according to character-information description texts of multiple dimensions obtained in advance, the background information of the virtual character including player information, environment information, and NPC information.
In addition, the background information generation module may be further configured to: obtain the player information in the current game scene, the player information including a player character table that contains player character names, professions, and labels, each label corresponding to at least one description text; and splice the profession and the description texts of each player character name in the player character table to generate the final description text of the current player character.
The display control apparatus for a virtual character provided by the embodiment of the present application may be specific hardware on a device, or software or firmware installed on a device, and so on. The apparatus provided by the embodiment of the present application has the same implementation principle and technical effect as the foregoing method embodiments; for brevity, for any part of the apparatus embodiments not mentioned, reference may be made to the corresponding content in the foregoing method embodiments. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system, apparatus, and units described above may all refer to the corresponding processes in the method embodiments and are not repeated here. The display control apparatus provided by this embodiment has the same technical features as the display control method provided above, so it can solve the same technical problems and achieve the same technical effects.
The embodiment of the application further provides an electronic device, and specifically, the electronic device comprises a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of the above described embodiments.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device 400 includes: a processor 40, a memory 41, a bus 42 and a communication interface 43, wherein the processor 40, the communication interface 43 and the memory 41 are connected through the bus 42; the processor 40 is arranged to execute executable modules, such as computer programs, stored in the memory 41.
The memory 41 may include a Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through at least one communication interface 43 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, and the like may be used.
The bus 42 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 4, but that does not indicate only one bus or one type of bus.
The memory 41 is used for storing a program, the processor 40 executes the program after receiving an execution instruction, and the method executed by the apparatus defined by the flow process disclosed in any of the foregoing embodiments of the present invention may be applied to the processor 40, or implemented by the processor 40.
The processor 40 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by instructions in the form of hardware integrated logic circuits or software in the processor 40. The Processor 40 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory 41, and the processor 40 reads the information in the memory 41 and completes the steps of the method in combination with hardware thereof.
Corresponding to the above method, the embodiments of the present application also provide a computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the steps of the above method.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a division by logical function, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like reference numbers and letters indicate like items in the figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Moreover, the terms "first", "second", "third", and the like are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific implementations used to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (15)

1. A display control method for a virtual character, comprising:
predicting the behavior action of the virtual character by using a policy model according to pre-generated background information of the virtual character, to obtain a target behavior action of the virtual character; the behavior action includes a first limb action and a language behavior of the virtual character;
using a description text corresponding to the background information of the virtual character as input, and generating a reply text of the virtual character by using a pre-trained generative pre-training language model;
controlling the virtual character to perform the target behavior action and display the reply text to generate a character reaction of the virtual character.
2. The method of claim 1, wherein after the step of generating the reply text of the virtual character by using the pre-trained generative pre-training language model with the description text corresponding to the background information of the virtual character as input, the method further comprises:
predicting the state information of the virtual character by using a policy model according to the pre-generated background information of the virtual character and the reply text of the virtual character, and determining a subsequent state of the virtual character; the subsequent state includes a second limb action of the virtual character.
3. The method according to claim 2, wherein after the step of controlling the virtual character to perform the target behavior action and display the reply text to generate the character reaction of the virtual character, the method further comprises:
and controlling the virtual character to execute the second limb action.
4. The display control method of a virtual character according to claim 1, wherein the policy model includes: a PPL strategy and a Seq2Seq model; the PPL strategy is generated based on a trained GPT model;
the step of predicting the behavior action of the virtual character by using the policy model according to the pre-generated background information of the virtual character to obtain the target behavior action of the virtual character comprises:
predicting by using a GPT (Generative Pre-trained Transformer) model according to the pre-generated background information of the virtual character, to generate a first prediction result of the behavior action;
predicting by using a Seq2Seq model according to the pre-generated background information of the virtual character, to generate a second prediction result of the behavior action;
and determining a target behavior action based on the behavior actions corresponding to the first prediction result and the second prediction result.
5. The method for controlling display of a virtual character according to claim 4, wherein the step of determining a target behavior action based on the behavior actions corresponding to the first prediction result and the second prediction result includes:
when the first prediction result is the same as the second prediction result, determining that the behavior action corresponding to the first prediction result or the second prediction result is the target behavior action;
when the first prediction result is different from the second prediction result and only one of the first prediction result and the second prediction result is not empty, determining the behavior action corresponding to the non-empty prediction result as the target behavior action;
and when the first prediction result is different from the second prediction result and neither the first prediction result nor the second prediction result is empty, determining that the behavior action corresponding to the first prediction result is the target behavior action.
6. The method for controlling display of a virtual character according to claim 1, further comprising:
generating the background information of the virtual character according to character information description texts of a plurality of dimensions obtained in advance; the background information of the virtual character includes: player information, environment information, and NPC information.
7. The method of claim 6, wherein the step of generating the background information of the virtual character according to the character information description texts of a plurality of dimensions obtained in advance comprises:
obtaining player information in a current game scene; the player information includes a player character table including player character names, professions, and labels; each label corresponds to at least one description text;
and concatenating the profession and the description texts of each player character name in the player character table to generate a final description text of the current player character.
8. The method for controlling display of a virtual character according to claim 6, wherein the step of generating the background information of the virtual character according to the character information description texts of a plurality of dimensions obtained in advance further comprises:
obtaining NPC information in a current game scene; the NPC information includes an NPC table, the NPC table including NPC names, professions, and labels; each label corresponds to at least one description text;
and concatenating the profession and the description texts of each NPC name in the NPC table to generate a final description text of the current NPC.
9. A display control apparatus for a virtual character, comprising:
the target behavior action determining module is used for predicting the behavior action of the virtual character by using a policy model according to pre-generated background information of the virtual character, to obtain a target behavior action of the virtual character; the behavior action includes a first limb action and a language behavior of the virtual character;
the reply text generation module is used for taking the description text corresponding to the background information of the virtual character as input and generating the reply text of the virtual character by utilizing a pre-trained generative pre-training language model;
and the display control module is used for controlling the virtual role to execute the target behavior action and displaying the reply text so as to generate the role reaction of the virtual role.
10. The display control apparatus of a virtual character according to claim 9, further comprising:
the subsequent state generation module is used for predicting the state information of the virtual character by using a policy model according to the pre-generated background information of the virtual character and the reply text of the virtual character, and determining a subsequent state of the virtual character; the subsequent state includes a second limb action of the virtual character.
11. The apparatus as claimed in claim 10, wherein the display control module is further configured to control the virtual character to perform the second limb action.
12. The display control apparatus of a virtual character according to claim 9, further comprising:
the background information generation module is used for generating the background information of the virtual character according to character information description texts of a plurality of dimensions obtained in advance; the background information of the virtual character includes: player information, environment information, and NPC information.
13. The apparatus for controlling display of a virtual character according to claim 12, wherein the background information generation module is further configured to: obtain player information in a current game scene, the player information including a player character table including player character names, professions, and labels, each label corresponding to at least one description text; and concatenate the profession and the description texts of each player character name in the player character table to generate a final description text of the current player character.
14. An electronic device comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and the processor executes the computer program to implement the steps of the method for controlling display of a virtual character according to any one of claims 1 to 8.
15. A computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the display control method of the virtual character according to any one of claims 1 to 8.
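
Editorial note: the three Python sketches below illustrate, in turn, the pipeline of claim 1, the prediction-merging logic of claim 5, and the description-text construction of claims 7 and 8. They are illustrations only, not the application's implementation; every name in them is a hypothetical stand-in, and the policy model and the generative pre-training language model are replaced by toy stubs.

# Illustrative sketch of the claim 1 pipeline. All names are hypothetical;
# a real system would back these stubs with a trained policy model
# (e.g. the GPT + Seq2Seq combination of claim 4) and a generative
# pre-training language model.
from dataclasses import dataclass

@dataclass
class BehaviorAction:
    limb_action: str        # the first limb action, e.g. "wave"
    language_behavior: str  # the language behavior, e.g. "greet"

def predict_behavior_action(background: str) -> BehaviorAction:
    # Stub for the policy-model prediction of claim 1; a toy rule
    # stands in for a learned policy.
    if "approaches" in background:
        return BehaviorAction(limb_action="wave", language_behavior="greet")
    return BehaviorAction(limb_action="idle", language_behavior="none")

def generate_reply(description_text: str) -> str:
    # Stub for the generative pre-training language model, which would
    # take the description text as input and return a reply text.
    return "Welcome, traveler."

def render_character_reaction(background: str, description_text: str) -> None:
    # Control the virtual character: perform the target behavior action
    # and display the reply text, producing the character reaction.
    action = predict_behavior_action(background)
    reply = generate_reply(description_text)
    print(f"[animate] {action.limb_action} / {action.language_behavior}")
    print(f"[say] {reply}")

render_character_reaction(
    background="a player approaches the blacksmith NPC",
    description_text="Brom is a blacksmith. He is gruff but kind.",
)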
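
Claim 5 fixes a precedence order for combining the two predictions of claim 4 (a GPT-based first result and a Seq2Seq-based second result). A minimal sketch of that decision logic, assuming an empty string encodes an empty prediction (the claim does not fix a representation):

def select_target_action(first: str, second: str) -> str:
    # Merge the two prediction results per claim 5.
    if first == second:
        return first            # identical results: take either one
    if not first or not second:
        return first or second  # exactly one non-empty result: take it
    return first                # both non-empty but different: prefer the first

assert select_target_action("wave", "wave") == "wave"
assert select_target_action("", "bow") == "bow"
assert select_target_action("wave", "bow") == "wave"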
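
Claims 7 and 8 build the final description text by concatenating, for each name in the player character table or NPC table, the profession with the description texts attached to its labels. A sketch under assumed data shapes; the table rows and the label-to-text mapping below are invented for illustration:

# Hypothetical mapping from labels to description texts; per claims 7
# and 8, each label corresponds to at least one description text.
LABEL_TEXTS = {
    "gruff": ["He speaks bluntly."],
    "kind": ["He helps strangers without being asked."],
    "greedy": ["She never passes up a chance at coin."],
}

def build_final_descriptions(table):
    # Concatenate the profession and the label description texts for
    # each row of a player character table or NPC table.
    results = []
    for row in table:
        pieces = [f"{row['name']} is a {row['profession']}."]
        for label in row["labels"]:
            pieces.extend(LABEL_TEXTS.get(label, []))
        results.append(" ".join(pieces))
    return results

npc_table = [
    {"name": "Brom", "profession": "blacksmith", "labels": ["gruff", "kind"]},
    {"name": "Sela", "profession": "merchant", "labels": ["greedy"]},
]
for text in build_final_descriptions(npc_table):
    print(text)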
CN202210850247.7A 2022-07-19 2022-07-19 Display control method and device of virtual role and electronic equipment Pending CN115463424A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210850247.7A CN115463424A (en) 2022-07-19 2022-07-19 Display control method and device of virtual role and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210850247.7A CN115463424A (en) 2022-07-19 2022-07-19 Display control method and device of virtual role and electronic equipment

Publications (1)

Publication Number Publication Date
CN115463424A true CN115463424A (en) 2022-12-13

Family

ID=84367717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210850247.7A Pending CN115463424A (en) 2022-07-19 2022-07-19 Display control method and device of virtual role and electronic equipment

Country Status (1)

Country Link
CN (1) CN115463424A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116059646A (en) * 2023-04-06 2023-05-05 深圳尚米网络技术有限公司 Interactive expert guidance system
CN116059646B (en) * 2023-04-06 2023-07-11 深圳尚米网络技术有限公司 Interactive expert guidance system
CN117018616A (en) * 2023-08-25 2023-11-10 广州市玄武无线科技股份有限公司 Role and environment interaction control method based on GPT

Similar Documents

Publication Publication Date Title
Alpaydin Machine learning
US20200137001A1 (en) Generating responses in automated chatting
CN115463424A (en) Display control method and device of virtual role and electronic equipment
CN110990543A (en) Intelligent conversation generation method and device, computer equipment and computer storage medium
CN109271493A (en) A kind of language text processing method, device and storage medium
CN111985243B (en) Emotion model training method, emotion analysis device and storage medium
CN114757176A (en) Method for obtaining target intention recognition model and intention recognition method
CN111858898A (en) Text processing method and device based on artificial intelligence and electronic equipment
CN116704085B (en) Avatar generation method, apparatus, electronic device, and storage medium
CN110675871A (en) Voice recognition method and device
CN109658931A (en) Voice interactive method, device, computer equipment and storage medium
CN112507124A (en) Chapter-level event causal relationship extraction method based on graph model
CN113617036A (en) Game dialogue processing method, device, equipment and storage medium
CN114298031A (en) Text processing method, computer device and storage medium
CN111428487B (en) Model training method, lyric generation method, device, electronic equipment and medium
CN110781327B (en) Image searching method and device, terminal equipment and storage medium
CN111767386A (en) Conversation processing method and device, electronic equipment and computer readable storage medium
CN115630152A (en) Virtual character live conversation mode, device, electronic equipment and storage medium
CN115936016A (en) Emotion theme recognition method, device, equipment and medium based on conversation
CN114461775A (en) Man-machine interaction method and device, electronic equipment and storage medium
CN114225428A (en) Interactive control method, device, equipment and storage medium for non-player character
CN111045836B (en) Search method, search device, electronic equipment and computer readable storage medium
CN114153948A (en) Question-answer knowledge base construction method, intelligent interaction method and device
JP7044245B2 (en) Dialogue system reinforcement device and computer program
Pathak et al. Artificial Intelligence for .NET: Speech, Language, and Search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination