CN117839224A - Interaction method and device for AI virtual persons - Google Patents


Info

Publication number
CN117839224A
Authority
CN
China
Prior art keywords
virtual object
user
neural network
virtual
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410041457.0A
Other languages
Chinese (zh)
Inventor
赵文俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Guanghe Future Technology Culture Media Co ltd
Original Assignee
Guangzhou Guanghe Future Technology Culture Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Guanghe Future Technology Culture Media Co ltd filed Critical Guangzhou Guanghe Future Technology Culture Media Co ltd
Priority to CN202410041457.0A
Publication of CN117839224A
Legal status: Pending

Links

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an interaction method and device for AI virtual persons, belonging to the technical field of neural networks, which enable the interaction provided by a neural network created for an NPC to match the needs of different users. The method comprises the following steps: acquiring interaction behavior data of a user, wherein the interaction behavior data represents the complexity of the user's interaction with virtual objects in a virtual scene; and determining, according to the user's behavior data, a neural network model for a first virtual object currently interacting with the user in the virtual scene, wherein the model complexity of the neural network model of the first virtual object is positively correlated with the complexity of the user's interaction with virtual objects in the virtual scene: the higher the complexity of the neural network model of the first virtual object, the stronger the first virtual object's ability to interact with the user through that neural network model.

Description

Interaction method and device for AI virtual persons
Technical Field
The application relates to the technical field of machine learning, and in particular to an interaction method and device for AI virtual persons.
Background
In games, NPCs (non-player characters) are the player's virtual companions in the game world, typically used to provide quests, information, or entertainment. With the development of artificial intelligence technology, a neural network can be created for an NPC that enables it to interact with players more naturally and flexibly.
Traditional NPC interactions are typically based on preset rules and behaviors, which are static and lack real intelligence. With a neural network, an NPC can learn and adapt to a player's behavior and preferences in real time, thereby providing richer, more personalized interactions. A neural network is an algorithm that simulates the workings of the human brain and can learn and extract patterns from vast amounts of data. In a game, the player's behavior and feedback can serve as training data, from which the neural network learns how to interact effectively with the player. In this way, the NPC can provide services and experiences that better match the player's expectations and preferences. In addition, a neural network can help the NPC understand and interpret the player's language and emotion: through natural language processing techniques, it can parse the player's voice or text input to better understand the player's intent and emotion, so that the NPC can provide more pertinent responses and advice based on the player's language and emotion. In other words, by applying neural networks to NPCs, game interactions become more natural, flexible, and interesting, providing players with a richer, more personalized game experience.
However, because users' needs are flexible and diverse, how to ensure that the interactions provided by a neural network created for an NPC can match the needs of different users is a hot topic of current research.
Disclosure of Invention
The embodiments of the present application provide an interaction method and device for AI virtual persons, which enable the interaction provided by a neural network created for an NPC to match the needs of different users.
To achieve the above purpose, the present application adopts the following technical solutions:
In a first aspect, an interaction method for AI virtual persons is provided, the method including: acquiring interaction behavior data of a user, wherein the interaction behavior data represents the complexity of the user's interaction with virtual objects in a virtual scene; and determining, according to the user's behavior data, a neural network model for a first virtual object currently interacting with the user in the virtual scene, wherein the model complexity of the neural network model of the first virtual object is positively correlated with the complexity of the user's interaction with virtual objects in the virtual scene: the higher the complexity of the neural network model of the first virtual object, the stronger the first virtual object's ability to interact with the user through that neural network model.
Optionally, acquiring the interaction behavior data of the user includes: in response to the user performing an interactive operation on the first virtual object, acquiring the interaction behavior data of the user according to the first virtual object, wherein the interactive operation performed by the user on the first virtual object includes at least one of the following: a dialogue interaction, a behavior interaction, or an instruction interaction.
Optionally, acquiring the interaction behavior data of the user according to the first virtual object includes: determining, according to the first virtual object, at least one second virtual object from among the virtual objects in the virtual scene that have historically interacted with the user, wherein the at least one second virtual object matches the first virtual object; and acquiring interaction behavior data of the user's interactions with the at least one second virtual object; wherein the at least one second virtual object matching the first virtual object means that the at least one second virtual object is a virtual object of the same type as the first virtual object, or that the at least one second virtual object has the same interaction type as the first virtual object. Alternatively, acquiring the interaction behavior data of the user according to the first virtual object includes: determining, according to the virtual scene in which the first virtual object is located, a historical virtual scene matching that virtual scene from among the historical virtual scenes in which the user has interacted; and acquiring interaction behavior data of the user's interactions with a second virtual object in the historical virtual scene; wherein the historical virtual scene matching the virtual scene in which the first virtual object is located means that the two scenes are virtual scenes of the same type, or that the two scenes can provide the same interaction type.
Optionally, the interaction behavior data of the user includes at least one of the following: the number of interactions between the user and the second virtual object, the length of an instruction issued by the user for the second virtual object, or the degree to which an instruction issued for the second virtual object matches the instructions acceptable to the second virtual object.
Optionally, the neural network model of the first virtual object is a deep neural network, and the complexity of the neural network model of the first virtual object refers to the number of neurons in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the greater the number of neurons in it.
Optionally, the complexity of the neural network model of the first virtual object further refers to the structural complexity of each neuron in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the more complex the structure of each neuron in it.
Optionally, each neuron in the neural network model has the structure of an M×M matrix, where M is an integer greater than 1. The M×M matrix includes first-type elements sk and second-type elements so: the first-type elements sk are located on the diagonal of the M×M matrix, and the second-type elements so are located at the positions of the M×M matrix other than the diagonal. The weights of the first-type elements sk at different positions in the M×M matrix are different, and the weights of the second-type elements so at different positions in the M×M matrix are different. The more complex the structure of each neuron in the neural network model, the larger the size of the M×M matrix, that is, the larger the value of M.
Optionally, the complexity of the neural network model of the first virtual object further refers to the structural heterogeneity of the neurons in the neural network model of the first virtual object, that is, the number of structurally distinct neurons it contains: the greater the number of neurons with mutually different structures in the neural network model of the first virtual object, the more complex the neural network model of the first virtual object.
Optionally, each neuron in the neural network model has the structure of an M×M matrix, where M is an integer greater than 1. The M×M matrix includes first-type elements sk and second-type elements so: the first-type elements sk are located on the diagonal of the M×M matrix, and the second-type elements so are located at the positions of the M×M matrix other than the diagonal. The weights of the first-type elements sk at different positions in the M×M matrix are different, and the weights of the second-type elements so at different positions in the M×M matrix are different. Neurons in the neural network model of the first virtual object having different structures means that the values of M for different neurons in the neural network model of the first virtual object are different.
In a second aspect, an interaction device for AI virtual persons is provided, the device being configured to: acquire interaction behavior data of a user, wherein the interaction behavior data represents the complexity of the user's interaction with virtual objects in a virtual scene; and determine, according to the user's behavior data, a neural network model for a first virtual object currently interacting with the user in the virtual scene, wherein the model complexity of the neural network model of the first virtual object is positively correlated with the complexity of the user's interaction with virtual objects in the virtual scene: the more complex the neural network model of the first virtual object, the stronger the first virtual object's ability to interact with the user through that neural network model.
Optionally, the device is configured to: in response to the user performing an interactive operation on the first virtual object, acquire the interaction behavior data of the user according to the first virtual object, wherein the interactive operation performed by the user on the first virtual object includes at least one of the following: a dialogue interaction, a behavior interaction, or an instruction interaction.
Optionally, the device is configured to: determine, according to the first virtual object, at least one second virtual object from among the virtual objects in the virtual scene that have historically interacted with the user, wherein the at least one second virtual object matches the first virtual object; and acquire interaction behavior data of the user's interactions with the at least one second virtual object; wherein the at least one second virtual object matching the first virtual object means that the at least one second virtual object is a virtual object of the same type as the first virtual object, or that the at least one second virtual object has the same interaction type as the first virtual object. Alternatively, the device is configured to: determine, according to the virtual scene in which the first virtual object is located, a historical virtual scene matching that virtual scene from among the historical virtual scenes in which the user has interacted; and acquire interaction behavior data of the user's interactions with a second virtual object in the historical virtual scene; wherein the historical virtual scene matching the virtual scene in which the first virtual object is located means that the two scenes are virtual scenes of the same type, or that the two scenes can provide the same interaction type.
Optionally, the interaction behavior data of the user includes at least one of the following: the number of interactions between the user and the second virtual object, the length of an instruction issued by the user for the second virtual object, or the degree to which an instruction issued for the second virtual object matches the instructions acceptable to the second virtual object.
Optionally, the neural network model of the first virtual object is a deep neural network, and the complexity of the neural network model of the first virtual object refers to the number of neurons in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the greater the number of neurons in it.
Optionally, the complexity of the neural network model of the first virtual object further refers to the structural complexity of each neuron in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the more complex the structure of each neuron in it.
Optionally, each neuron in the neural network model has the structure of an M×M matrix, where M is an integer greater than 1. The M×M matrix includes first-type elements sk and second-type elements so: the first-type elements sk are located on the diagonal of the M×M matrix, and the second-type elements so are located at the positions of the M×M matrix other than the diagonal. The weights of the first-type elements sk at different positions in the M×M matrix are different, and the weights of the second-type elements so at different positions in the M×M matrix are different. The more complex the structure of each neuron in the neural network model, the larger the size of the M×M matrix, that is, the larger the value of M.
Optionally, the complexity of the neural network model of the first virtual object further refers to the structural heterogeneity of the neurons in the neural network model of the first virtual object, that is, the number of structurally distinct neurons it contains: the greater the number of neurons with mutually different structures in the neural network model of the first virtual object, the more complex the model.
Optionally, each neuron in the neural network model has the structure of an M×M matrix, where M is an integer greater than 1. The M×M matrix includes first-type elements sk and second-type elements so: the first-type elements sk are located on the diagonal of the M×M matrix, and the second-type elements so are located at the positions of the M×M matrix other than the diagonal. The weights of the first-type elements sk at different positions in the M×M matrix are different, and the weights of the second-type elements so at different positions in the M×M matrix are different. Neurons in the neural network model of the first virtual object having different structures means that the values of M for different neurons in the neural network model of the first virtual object are different.
In a third aspect, a computer-readable storage medium is provided, comprising computer programs or instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
In summary, the present application has the following technical effects:
The virtual objects interacting in the virtual scene may be understood as NPCs. Because the complexity of the neural network model of the NPC currently interacting with the user (a model created for that NPC and used only for that NPC) is determined dynamically according to the complexity of the user's interaction with virtual objects in the virtual scene, it is dynamically adjustable. Thus, if the user's interactions are relatively simple, the complexity of the neural network is kept low, reducing computational overhead while still meeting the user's interaction needs; if the user's interactions are relatively complex, the complexity of the neural network is correspondingly high, providing more complex and diverse interactions to meet the user's needs.
Drawings
FIG. 1 is a schematic flowchart of the interaction method for AI virtual persons provided in an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
The present application presents various aspects, embodiments, or features around a system that may include multiple devices, components, modules, and the like. It is to be understood that the various systems may include additional devices, components, modules, etc., and/or may not include all of the devices, components, modules, etc. discussed in connection with the figures. Furthermore, combinations of these solutions may also be used.
In addition, in the embodiments of the present application, words such as "exemplary" and "for example" are used to indicate an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the word "exemplary" is used to present concepts in a concrete fashion.
In the embodiments of the present application, "information", "signal", "message", "channel", and "signaling" may be used interchangeably; when the distinction is not emphasized, their intended meanings are consistent. "Of" and "corresponding" are likewise sometimes used interchangeably; when the distinction is not emphasized, their intended meanings are consistent. Furthermore, "/" herein may indicate an "or" relationship.
The network architecture and service scenarios described in the embodiments of the present application are intended to describe the technical solutions of the embodiments more clearly, and do not constitute a limitation on the technical solutions provided herein. Those skilled in the art will appreciate that, as network architectures evolve and new service scenarios emerge, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
By way of example, FIG. 1 is a schematic flowchart of the interaction method for AI virtual persons provided in an embodiment of the present application. The method is applicable to interactions on an electronic device.
As shown in fig. 1, the flow of the interaction method of the AI virtual person is as follows:
s101, acquiring interactive behavior data of a user.
Wherein the interaction behavior data of the user can be used to represent the complexity of the user interaction with the virtual objects in the virtual scene. The virtual scene may be a 3-dimensional interactive scene in the game, i.e., a virtual 3-dimensional space in which the user-controlled character is able to play the game interaction link. The virtual object may be an NPC in a game created in 3-dimensional space for interactive interaction with a user-controlled character, such as a dialogue, limb interaction, a combat, etc.
The electronic device may respond to the user to execute the interaction operation for the first virtual object, and obtain interaction behavior data of the user according to the first virtual object, where the user executes the interaction operation for the first virtual object includes at least one of: a dialogue interoperation (e.g., a sentence input by a user through a voice or tablet), a behavioral interoperation (e.g., an attack behavior, a protection behavior, etc.), or an instruction interoperation (e.g., an instruction instructing an NPC to perform some operation).
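As an illustrative sketch only (the type and field names below are assumptions introduced here for clarity, not part of the disclosure), these interactive operations could be recorded as events:

```python
from dataclasses import dataclass
from enum import Enum, auto


class InteractionType(Enum):
    """The three interactive operations named in this embodiment."""
    DIALOGUE = auto()     # e.g., a sentence entered by voice or text
    BEHAVIOR = auto()     # e.g., an attack or protection action
    INSTRUCTION = auto()  # e.g., an order directing an NPC to perform an operation


@dataclass
class InteractionEvent:
    """One interactive operation performed by the user on a virtual object."""
    user_id: str
    object_id: str          # identifier of the virtual object (NPC)
    scene_id: str           # identifier of the virtual scene
    kind: InteractionType
    payload: str            # sentence text, action name, or instruction string
```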
In mode 1, the electronic device may acquire the interaction behavior data of the user based on the virtual object.
For example, the electronic device may determine, according to the first virtual object, at least one second virtual object from among the virtual objects in the virtual scene that have historically interacted with the user, where the at least one second virtual object matches the first virtual object. Matching here means, for example, that the at least one second virtual object is a virtual object of the same type as the first virtual object (e.g., both are pedestrian NPCs, or both are special-event NPCs), or that it has the same interaction type as the first virtual object (e.g., both are NPCs providing dialogue services, or both are NPCs providing auxiliary attacks). In this manner, the electronic device may obtain data related to the at least one second virtual object from the stored data according to the identifier of the at least one second virtual object, and extract from that data the interaction behavior data of the user's interactions with the at least one second virtual object. (A sketch covering both modes follows mode 2 below.)
In mode 2, the electronic device may also acquire the interaction behavior data of the user based on the virtual scene.
For example, the electronic device may determine, according to the virtual scene in which the first virtual object is located, a historical virtual scene matching that scene from among the historical virtual scenes in which the user has interacted. The historical virtual scene matching the virtual scene in which the first virtual object is located means that the two are virtual scenes of the same type (e.g., both combat scenes, or both dialogue scenes), or that the two can provide the same interaction type (e.g., both are scenes offering purchase and consumption services, or both are scenes offering mini-game services). The electronic device may then acquire the interaction behavior data of the user's interactions with a second virtual object in the historical virtual scene. For example, the electronic device may look up the identifier of the matching historical virtual scene according to the identifier of the virtual scene in which the first virtual object is located, obtain data related to the historical virtual scene from the stored data according to that identifier, and extract from it the interaction behavior data of the user's interactions with at least one second virtual object in the historical virtual scene.
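Building on the InteractionEvent sketch above, a minimal version of both retrieval modes might look as follows; the type-lookup dictionaries are assumptions standing in for whatever object/scene metadata the stored data actually carries:

```python
def history_by_object(history: list,
                      object_type: dict,
                      first_object_id: str) -> list:
    """Mode 1: keep historical events whose virtual object is of the same
    type as the first virtual object (object_type maps object_id -> type)."""
    target = object_type[first_object_id]
    return [e for e in history if object_type.get(e.object_id) == target]


def history_by_scene(history: list,
                     scene_type: dict,
                     current_scene_id: str) -> list:
    """Mode 2: keep historical events from virtual scenes of the same
    type as the current scene (scene_type maps scene_id -> type)."""
    target = scene_type[current_scene_id]
    return [e for e in history if scene_type.get(e.scene_id) == target]
```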
It will be appreciated that the user's interaction behavior data may include at least one of the following: the number of interactions between the user and the second virtual object, the length of an instruction issued by the user for the second virtual object, or the degree to which an instruction issued for the second virtual object matches the instructions acceptable to the second virtual object. The greater the number of interactions between the user and the second virtual object, the higher the complexity of the user's interaction with virtual objects in the virtual scene, and conversely, the lower. The longer the instruction issued by the user for the second virtual object, the higher the complexity, and conversely, the lower. The lower the degree to which the instruction issued by the user for the second virtual object matches the instructions acceptable to the second virtual object, the higher the complexity, and conversely, the lower. For example, if the second virtual object can accept an instruction to provide healing, but the instruction the user issues to it tells it to launch an attack, the matching degree of the two is low; whereas if the instruction the user issues tells it to provide healing, the matching degree of the two is high.
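Continuing the sketch, the three parameters could be computed from the retrieved history as below; the token-overlap matching measure is an assumption, since the disclosure does not define how the matching degree is calculated:

```python
def matching_degree(issued: str, acceptable: list) -> float:
    """Crude token-overlap score between an issued instruction and the
    instructions the second virtual object can accept (assumed measure)."""
    issued_tokens = set(issued.lower().split())
    best = 0.0
    for instruction in acceptable:
        tokens = set(instruction.lower().split())
        if tokens:
            best = max(best, len(issued_tokens & tokens) / len(tokens))
    return best


def interaction_features(events: list, acceptable: list) -> dict:
    """The three parameters named in this embodiment, averaged over history."""
    instructions = [e.payload for e in events
                    if e.kind is InteractionType.INSTRUCTION]
    n = len(instructions)
    return {
        "interaction_count": len(events),
        "avg_instruction_length": sum(len(i) for i in instructions) / n if n else 0.0,
        # A lower matching degree indicates *more* complex interaction (see text).
        "avg_matching_degree": sum(matching_degree(i, acceptable)
                                   for i in instructions) / n if n else 0.0,
    }
```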
S102: Determine, according to the user's behavior data, a neural network model for the first virtual object currently interacting with the user in the virtual scene.
The neural network model of the first virtual object is a model created for the first virtual object. It may be a trained model, and there may be several of them, i.e., neural network models of the first virtual object at different levels of complexity. The model complexity of the neural network model of the first virtual object is positively correlated with the complexity of the user's interaction with virtual objects in the virtual scene: the higher the complexity of the neural network model of the first virtual object, the stronger the first virtual object's ability to interact with the user through that model. That is, the first virtual object can carry out more complex interactions with the user through the neural network model; for example, it can understand and respond to more complex language or instructions from the user, and can sustain more interactions with the user, such as more rounds of dialogue.
The electronic device may determine which interval each parameter in the user's behavior data falls into. For example, the number of interactions between the user and the second virtual object is 12, which falls in the interval of 10-15 interactions; the instruction issued by the user for the second virtual object is 8 characters long, which falls in the interval of 5-10 characters; the degree to which the instruction issued by the user for the second virtual object matches the instructions acceptable to the second virtual object is a value such as 0.6, which falls in the interval of 0.5-0.7. From these intervals, the electronic device may determine a model-complexity level for the neural network model of the first virtual object, and select, according to that level, the neural network model of the first virtual object at that level from among the neural network models of the first virtual object at the various complexity levels.
Specifically, the neural network model of the first virtual object may be a deep neural network. In this case, the complexity of the neural network model of the first virtual object refers to the number of neurons in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the greater the number of neurons in the neural network model through which the first virtual object interacts. For example, the number of neurons in the neural network model may be 1000, 2000, 3000, and so on, increasing with the level.
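A minimal sketch of this interval-based selection follows; beyond the worked example's 10-15, 5-10, and 0.5-0.7 intervals and the 1000/2000/3000 neuron counts from the text, the interval boundaries and the averaging rule for combining the three indices are assumptions:

```python
import bisect


def bucket(value: float, boundaries: list) -> int:
    """0-based index of the interval a value falls into (boundaries sorted)."""
    return bisect.bisect_right(boundaries, value)


def complexity_level(features: dict) -> int:
    """Map the three interaction parameters to a model-complexity level 1..3."""
    count_idx = bucket(features["interaction_count"], [5, 10, 15])
    length_idx = bucket(features["avg_instruction_length"], [5, 10, 20])
    # Invert the matching degree: a *lower* match means *higher* complexity.
    mismatch_idx = bucket(1.0 - features["avg_matching_degree"], [0.3, 0.5, 0.7])
    mean_idx = (count_idx + length_idx + mismatch_idx) / 3
    return min(3, max(1, round(mean_idx)))


# One pre-trained model per level; neuron counts follow the example in the text.
NEURONS_PER_LEVEL = {1: 1000, 2: 2000, 3: 3000}
```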
Optionally, the complexity of the neural network model of the first virtual object may also refer to the structural complexity of each neuron in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the more complex the structure of each neuron in it.
For example, each neuron in the neural network model has the structure of an M×M matrix, where M is an integer greater than 1. The M×M matrix includes first-type elements sk and second-type elements so: the first-type elements sk are located on the diagonal of the M×M matrix, and the second-type elements so are located at the positions of the M×M matrix other than the diagonal. The weights of the first-type elements sk at different positions in the M×M matrix are different, and the weights of the second-type elements so at different positions in the M×M matrix are different. The more complex the structure of each neuron in the neural network model, the larger the size of the M×M matrix, that is, the larger the value of M.
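An illustrative construction of one such neuron; since the disclosure does not specify how the per-position weights of sk and so are chosen, random values stand in for them here:

```python
import numpy as np


def make_neuron(m: int, rng: np.random.Generator) -> np.ndarray:
    """Build an M×M neuron: first-type elements sk on the diagonal,
    second-type elements so elsewhere, each position with its own weight."""
    assert m > 1, "M must be an integer greater than 1"
    neuron = rng.normal(size=(m, m))               # second-type elements so (off-diagonal)
    np.fill_diagonal(neuron, rng.normal(size=m))   # first-type elements sk (diagonal)
    return neuron


rng = np.random.default_rng(0)
simple_neuron = make_neuron(3, rng)   # M = 3: simpler neuron structure
complex_neuron = make_neuron(6, rng)  # M = 6: more complex neuron structure
```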
Optionally, the complexity of the neural network model of the first virtual object further refers to the structural heterogeneity of the neurons in the neural network model of the first virtual object, that is, the number of structurally distinct neurons it contains: the greater the number of neurons with mutually different structures in the neural network model of the first virtual object, the more complex the neural network model of the first virtual object.
For another example, each neuron in the neural network model has the structure of an M×M matrix, where M is an integer greater than 1. The M×M matrix includes first-type elements sk and second-type elements so: the first-type elements sk are located on the diagonal of the M×M matrix, and the second-type elements so are located at the positions of the M×M matrix other than the diagonal. The weights of the first-type elements sk at different positions in the M×M matrix are different, and the weights of the second-type elements so at different positions in the M×M matrix are different. Neurons in the neural network model of the first virtual object having different structures means that the values of M for different neurons in the neural network model of the first virtual object are different. For example, when the complexity of the neural network model of the first virtual object is level 1, there are 200 neurons with M=3 and 600 neurons with M=4; when the complexity is level 2, there are 200 neurons with M=3, 800 neurons with M=4, 400 neurons with M=5, and 600 neurons with M=6, i.e., more neurons with more complex structures. Naturally, the higher the complexity of the neural network model, the stronger its ability to process information.
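Reusing make_neuron from the sketch above, the worked distribution could be written as a configuration table; the numbers below simply mirror the example in the text:

```python
# Per-level mix of neuron structures: {M: number of neurons with that M}.
NEURON_MIX_PER_LEVEL = {
    1: {3: 200, 4: 600},
    2: {3: 200, 4: 800, 5: 400, 6: 600},
}


def build_neuron_pool(level: int, rng: np.random.Generator) -> list:
    """Instantiate the heterogeneous neuron pool for a complexity level."""
    mix = NEURON_MIX_PER_LEVEL[level]
    return [make_neuron(m, rng) for m, count in mix.items() for _ in range(count)]


pool = build_neuron_pool(2, np.random.default_rng(0))   # 2000 neurons in total
```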
In summary:
The virtual objects interacting in the virtual scene may be understood as NPCs. Because the complexity of the neural network model of the NPC currently interacting with the user (a model created for that NPC and used only for that NPC) is determined dynamically according to the complexity of the user's interaction with virtual objects in the virtual scene, it is dynamically adjustable. Thus, if the user's interactions are relatively simple, the complexity of the neural network is kept low, reducing computational overhead while still meeting the user's interaction needs; if the user's interactions are relatively complex, the complexity of the neural network is correspondingly high, providing more complex and diverse interactions to meet the user's needs.
The interaction method for AI virtual persons provided in the embodiments of the present application has been described in detail above with reference to FIG. 1. The following describes in detail the interaction device for AI virtual persons that performs this method.
The interaction device for AI virtual persons is configured to: acquire interaction behavior data of a user, wherein the interaction behavior data represents the complexity of the user's interaction with virtual objects in a virtual scene; and determine, according to the user's behavior data, a neural network model for a first virtual object currently interacting with the user in the virtual scene, wherein the model complexity of the neural network model of the first virtual object is positively correlated with the complexity of the user's interaction with virtual objects in the virtual scene: the more complex the neural network model of the first virtual object, the stronger the first virtual object's ability to interact with the user through that neural network model.
Optionally, the device is configured to: in response to the user performing an interactive operation on the first virtual object, acquire the interaction behavior data of the user according to the first virtual object, wherein the interactive operation performed by the user on the first virtual object includes at least one of the following: a dialogue interaction, a behavior interaction, or an instruction interaction.
Optionally, the device is configured to: determine, according to the first virtual object, at least one second virtual object from among the virtual objects in the virtual scene that have historically interacted with the user, wherein the at least one second virtual object matches the first virtual object; and acquire interaction behavior data of the user's interactions with the at least one second virtual object; wherein the at least one second virtual object matching the first virtual object means that the at least one second virtual object is a virtual object of the same type as the first virtual object, or that the at least one second virtual object has the same interaction type as the first virtual object. Alternatively, the device is configured to: determine, according to the virtual scene in which the first virtual object is located, a historical virtual scene matching that virtual scene from among the historical virtual scenes in which the user has interacted; and acquire interaction behavior data of the user's interactions with a second virtual object in the historical virtual scene; wherein the historical virtual scene matching the virtual scene in which the first virtual object is located means that the two scenes are virtual scenes of the same type, or that the two scenes can provide the same interaction type.
Optionally, the interaction behavior data of the user includes at least one of the following: the number of interactions between the user and the second virtual object, the length of an instruction issued by the user for the second virtual object, or the degree to which an instruction issued for the second virtual object matches the instructions acceptable to the second virtual object.
Optionally, the neural network model of the first virtual object is a deep neural network, and the complexity of the neural network model of the first virtual object refers to the number of neurons in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the greater the number of neurons in it.
Optionally, the complexity of the neural network model of the first virtual object further refers to the structural complexity of each neuron in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the more complex the structure of each neuron in it.
Optionally, each neuron in the neural network model has the structure of an M×M matrix, where M is an integer greater than 1. The M×M matrix includes first-type elements sk and second-type elements so: the first-type elements sk are located on the diagonal of the M×M matrix, and the second-type elements so are located at the positions of the M×M matrix other than the diagonal. The weights of the first-type elements sk at different positions in the M×M matrix are different, and the weights of the second-type elements so at different positions in the M×M matrix are different. The more complex the structure of each neuron in the neural network model, the larger the size of the M×M matrix, that is, the larger the value of M.
Optionally, the complexity of the neural network model of the first virtual object further refers to the structural heterogeneity of the neurons in the neural network model of the first virtual object, that is, the number of structurally distinct neurons it contains: the greater the number of neurons with mutually different structures in the neural network model of the first virtual object, the more complex the model.
Optionally, each neuron in the neural network model has the structure of an M×M matrix, where M is an integer greater than 1. The M×M matrix includes first-type elements sk and second-type elements so: the first-type elements sk are located on the diagonal of the M×M matrix, and the second-type elements so are located at the positions of the M×M matrix other than the diagonal. The weights of the first-type elements sk at different positions in the M×M matrix are different, and the weights of the second-type elements so at different positions in the M×M matrix are different. Neurons in the neural network model of the first virtual object having different structures means that the values of M for different neurons in the neural network model of the first virtual object are different.
FIG. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device may be a terminal device, or may be a chip (or chip system) or other part or component that can be provided in a terminal device. As shown in FIG. 2, the electronic device 400 may include a processor 401. Optionally, the electronic device 400 may also include a memory 402 and/or a transceiver 403, with the processor 401 coupled to the memory 402 and the transceiver 403, for example via a communication bus. In addition, the electronic device 400 may itself be a chip, e.g., one including the processor 401; in that case, the transceiver may be an input/output interface of the chip.
The constituent components of the electronic device 400 in FIG. 2 are described in detail below:
The processor 401 is the control center of the electronic device 400 and may be a single processor or the collective name of multiple processing elements. For example, the processor 401 is one or more central processing units (CPUs), but may also be an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more digital signal processors (DSPs) or one or more field programmable gate arrays (FPGAs).
Optionally, the processor 401 may perform various functions of the electronic device 400, such as the interaction method for AI virtual persons shown in FIG. 1 above, by running or executing a software program stored in the memory 402 and invoking data stored in the memory 402.
In a particular implementation, as one embodiment, the processor 401 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 2.
In a particular implementation, as one embodiment, the electronic device 400 may also include multiple processors. Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer programs or instructions).
The memory 402 is configured to store the software program for executing the solutions of the present application, with execution controlled by the processor 401. For a specific implementation, refer to the method embodiment above, which is not repeated here.
Optionally, the memory 402 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 402 may be integrated with the processor 401 or may exist separately and be coupled to the processor 401 through an interface circuit (not shown in FIG. 2) of the electronic device 400; this is not specifically limited in the embodiments of the present application.
The transceiver 403 is configured to communicate with other electronic devices. For example, if the electronic device 400 is a terminal device, the transceiver 403 may be used to communicate with a network device or with another terminal device; if the electronic device 400 is a network device, the transceiver 403 may be used to communicate with a terminal device or with another network device.
Optionally, the transceiver 403 may include a receiver and a transmitter (not shown separately in FIG. 2). The receiver implements the receiving function, and the transmitter implements the transmitting function.
Optionally, the transceiver 403 may be integrated with the processor 401 or may exist separately and be coupled to the processor 401 through an interface circuit (not shown in FIG. 2) of the electronic device 400; this is not specifically limited in the embodiments of the present application.
It will be appreciated that the structure of the electronic device 400 shown in FIG. 2 does not constitute a limitation on the electronic device; an actual electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In addition, for the technical effects of the electronic device 400, refer to the technical effects of the method described in the foregoing method embodiments, which are not repeated here.
It should be appreciated that the processor in the embodiments of the present application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
It should also be appreciated that the memory in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center containing one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid-state drive).
It should be understood that the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects, but may also indicate an "and/or" relationship, as can be understood from the context.
In the present application, "at least one" means one or more, and "a plurality of" means two or more. "At least one of" and similar expressions refer to any combination of the listed items, including any combination of a single item or plural items. For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An interaction method for AI virtual persons, the method comprising:
acquiring interaction behavior data of a user, wherein the interaction behavior data of the user represents the complexity of the user's interaction with a virtual object in a virtual scene; and
determining, according to the behavior data of the user, a neural network model for a first virtual object currently interacting with the user in the virtual scene, wherein the model complexity of the neural network model of the first virtual object is positively correlated with the complexity of the user's interaction with the virtual object in the virtual scene, and the higher the complexity of the neural network model of the first virtual object, the stronger the ability of the first virtual object to interact with the user through the neural network model.
2. The method according to claim 1, wherein the acquiring interaction behavior data of the user comprises:
in response to the user performing an interactive operation on the first virtual object, acquiring the interaction behavior data of the user according to the first virtual object, wherein the interactive operation performed by the user on the first virtual object comprises at least one of the following: a dialogue interaction, a behavior interaction, or an instruction interaction.
3. The method according to claim 2, wherein the acquiring the interaction behavior data of the user according to the first virtual object comprises:
determining, according to the first virtual object, at least one second virtual object from among the virtual objects in the virtual scene that have historically interacted with the user, wherein the at least one second virtual object matches the first virtual object; and
acquiring interaction behavior data of the user's interactions with the at least one second virtual object;
wherein the at least one second virtual object matching the first virtual object means that: the at least one second virtual object is a virtual object of the same type as the first virtual object, or the at least one second virtual object has the same interaction type as the first virtual object;
or,
the acquiring the interaction behavior data of the user according to the first virtual object comprises:
determining, according to the virtual scene in which the first virtual object is located, a historical virtual scene matching the virtual scene in which the first virtual object is located from among the historical virtual scenes in which the user has interacted; and
acquiring interaction behavior data of the user's interactions with a second virtual object in the historical virtual scene;
wherein the historical virtual scene matching the virtual scene in which the first virtual object is located means that: the historical virtual scene and the virtual scene in which the first virtual object is located are virtual scenes of the same type, or the historical virtual scene and the virtual scene in which the first virtual object is located can provide the same interaction type.
4. The method according to claim 3, wherein the interaction behavior data of the user comprises at least one of the following: the number of interactions between the user and the second virtual object, the length of an instruction issued by the user for the second virtual object, or the degree to which an instruction issued by the user for the second virtual object matches the instructions acceptable to the second virtual object.
5. The method according to claim 1, wherein the neural network model of the first virtual object is a deep neural network, and the complexity of the neural network model of the first virtual object refers to the number of neurons in the neural network model of the first virtual object, and the higher the complexity of the neural network model of the first virtual object, the greater the number of neurons in the neural network model through which the first virtual object interacts.
6. The method of claim 5, wherein the complexity of the neural network model of the first virtual object further refers to the structural complexity of each neuron in the neural network model of the first virtual object: the higher the complexity of the neural network model of the first virtual object, the more complex the structure of each neuron in the neural network model used by the first virtual object.
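(Illustrative note: a toy sketch of claims 5 and 6, in which higher model complexity simply means a larger neuron budget. The layer layout, input dimension, and NumPy initialisation are assumptions; the claims do not prescribe them.)

```python
# Toy sketch of claims 5-6 (layout assumed): complexity ~ neuron count.
import numpy as np

def build_dnn(num_neurons: int, num_layers: int = 3, input_dim: int = 16):
    """Return weight matrices for a toy fully connected network whose total
    hidden-neuron budget grows with the requested complexity."""
    rng = np.random.default_rng(0)
    width = max(1, num_neurons // num_layers)  # neurons per hidden layer
    dims = [input_dim] + [width] * num_layers
    return [rng.standard_normal((dims[i], dims[i + 1]))
            for i in range(num_layers)]

simple_npc = build_dnn(num_neurons=128)     # low-complexity model
advanced_npc = build_dnn(num_neurons=2048)  # high-complexity model: more neurons
```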
7. The method according to claim 6, wherein each neuron in the neural network model has the structure of an M x M matrix, where M is an integer greater than 1; the M x M matrix comprises first-type elements sk and second-type elements so, the first-type elements sk being located on the diagonal of the M x M matrix and the second-type elements so being located at the positions of the M x M matrix other than the diagonal; the weights of the first-type elements sk at different positions in the M x M matrix differ from one another, and the weights of the second-type elements so at different positions in the M x M matrix differ from one another;
the more complex the structure of each neuron in the neural network model, the larger the size of the M x M matrix, that is, the larger the value of M.
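(Illustrative note: constructing the claim-7 neuron structure as an M x M matrix with distinct diagonal weights sk and distinct off-diagonal weights so. The concrete weight values are assumptions; the claim only requires distinctness within each element type.)

```python
# Sketch of the claim-7 neuron: an M x M matrix, sk on the diagonal,
# so elsewhere, with distinct weights within each element type (values assumed).
import numpy as np

def make_neuron_matrix(m: int, seed: int = 0) -> np.ndarray:
    if m <= 1:
        raise ValueError("claim 7 requires M to be an integer greater than 1")
    rng = np.random.default_rng(seed)
    # Distinct off-diagonal weights so: a shuffled range guarantees uniqueness.
    neuron = rng.permutation(m * m).reshape(m, m).astype(float)
    # Distinct diagonal weights sk, one per diagonal position.
    sk = np.arange(1, m + 1, dtype=float) * 100.0
    np.fill_diagonal(neuron, sk)
    return neuron

print(make_neuron_matrix(3))  # a larger M would mean a more complex neuron
```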
8. The method of claim 5, wherein the complexity of the neural network model of the first virtual object further refers to the number of neurons with different degrees of structural heterogeneity in the neural network model of the first virtual object: the more such structurally distinct neurons the model contains, the more complex the structure of the neurons in the neural network model used by the first virtual object.
9. The method according to claim 8, wherein each neuron in the neural network model has the structure of an M x M matrix, where M is an integer greater than 1; the M x M matrix comprises first-type elements sk and second-type elements so, the first-type elements sk being located on the diagonal of the M x M matrix and the second-type elements so being located at the positions of the M x M matrix other than the diagonal; the weights of the first-type elements sk at different positions in the M x M matrix differ from one another, and the weights of the second-type elements so at different positions in the M x M matrix differ from one another;
neurons in the neural network model of the first virtual object having different degrees of structural heterogeneity means that different neurons in the neural network model of the first virtual object have different values of M.
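(Illustrative note, continuing the claim-7 sketch above: claims 8 and 9 can be pictured as a model whose neurons use different M values. This reuses make_neuron_matrix from the previous sketch; the particular M values chosen are arbitrary assumptions.)

```python
# Sketch of claims 8-9: one matrix per neuron, with differing M values
# standing in for differing degrees of structural heterogeneity.
def make_heterogeneous_neurons(m_values):
    return [make_neuron_matrix(m, seed=i) for i, m in enumerate(m_values)]

neurons = make_heterogeneous_neurons([2, 3, 5, 8])
print([n.shape for n in neurons])  # more distinct sizes => higher complexity
```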
10. An interaction apparatus for AI virtual persons, characterized in that the apparatus is configured to:
acquire interaction behavior data of a user, wherein the interaction behavior data of the user is used to represent the complexity of the user's interactions with virtual objects in a virtual scene;
and determine, according to the interaction behavior data of the user, a neural network model of a first virtual object currently interacting with the user in the virtual scene, wherein the model complexity of the neural network model of the first virtual object is positively correlated with the complexity of the user's interactions with virtual objects in the virtual scene, and the more complex the neural network model of the first virtual object, the stronger the ability of the first virtual object to interact with the user through the neural network model.
CN202410041457.0A 2024-01-10 2024-01-10 Interaction method and device for AI virtual persons Pending CN117839224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410041457.0A CN117839224A (en) 2024-01-10 2024-01-10 Interaction method and device for AI virtual persons

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410041457.0A CN117839224A (en) 2024-01-10 2024-01-10 Interaction method and device for AI virtual persons

Publications (1)

Publication Number Publication Date
CN117839224A true CN117839224A (en) 2024-04-09

Family

ID=90540082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410041457.0A Pending CN117839224A (en) 2024-01-10 2024-01-10 Interaction method and device for AI virtual persons

Country Status (1)

Country Link
CN (1) CN117839224A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108499107A (en) * 2018-04-16 2018-09-07 网易(杭州)网络有限公司 The control method of virtual role, device and storage medium in virtual reality
CN109871940A (en) * 2019-01-31 2019-06-11 清华大学 A kind of multilayer training algorithm of impulsive neural networks
CN110009432A (en) * 2019-04-15 2019-07-12 武汉理工大学 A kind of personal consumption behavior prediction technique
CN111738294A (en) * 2020-05-21 2020-10-02 深圳海普参数科技有限公司 AI model training method, use method, computer device and storage medium
CN113952723A (en) * 2021-10-29 2022-01-21 北京市商汤科技开发有限公司 Interactive method and device in game, computer equipment and storage medium
US20220383078A1 (en) * 2020-02-12 2022-12-01 Huawei Technologies Co., Ltd. Data processing method and related device

Similar Documents

Publication Publication Date Title
US11938403B2 (en) Game character behavior control method and apparatus, storage medium, and electronic device
Babaeizadeh et al. GA3C: GPU-based A3C for deep reinforcement learning
CN108888958B (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN110443284B (en) Artificial intelligence AI model training method, calling method, server and readable storage medium
CN111282267B (en) Information processing method, information processing apparatus, information processing medium, and electronic device
CN112016704B (en) AI model training method, model using method, computer device and storage medium
CN111841018B (en) Model training method, model using method, computer device, and storage medium
CN111738294B (en) AI model training method, AI model using method, computer device, and storage medium
US20220280870A1 (en) Method, apparatus, device, and storage medium, and program product for displaying voting result
CN111228813B (en) Virtual object control method, device, equipment and storage medium
WO2015153878A1 (en) Modeling social identity in digital media with dynamic group membership
CN110555529B (en) Data processing method and related device
Wang et al. A novel deep residual network-based incomplete information competition strategy for four-players Mahjong games
CN111589120A (en) Object control method, computer device, and computer-readable storage medium
CN116680391A (en) Custom dialogue method, training method, device and equipment for custom dialogue model
CN117839224A (en) Interaction method and device for AI virtual persons
Wang et al. A new approach to compute deficiency number of Mahjong configurations
CN111330282A (en) Method and device for determining card-playing candidate items
CN110975294A (en) Game fighting implementation method and terminal
CN116943204A (en) Virtual object control method and device, storage medium and electronic equipment
CN117861220A (en) Interactive interaction method and device for virtual digital person
CN108837511A (en) The method and system interacted in online game with NPC artificial intelligence
CN114470787A (en) Service processing method, service processing device, electronic device, storage medium, and program product
US20240123347A1 (en) Game interactive control method and apparatus, storage medium and electronic device
US20220191159A1 (en) Device and method for generating an electronic card

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination