CN114768246A - Game man-machine interaction method and system - Google Patents

Game man-machine interaction method and system

Info

Publication number
CN114768246A
Authority
CN
China
Prior art keywords
data
behavior
interaction
action
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210700965.6A
Other languages
Chinese (zh)
Other versions
CN114768246B (en)
Inventor
连胜杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huanxi Times Shenzhen Technology Co ltd
Original Assignee
Huanxi Times Shenzhen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huanxi Times Shenzhen Technology Co ltd filed Critical Huanxi Times Shenzhen Technology Co ltd
Priority to CN202210700965.6A priority Critical patent/CN114768246B/en
Publication of CN114768246A publication Critical patent/CN114768246A/en
Application granted granted Critical
Publication of CN114768246B publication Critical patent/CN114768246B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/33 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
    • A63F13/35 Details of game servers
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/40 Features characterised by details of platform network
    • A63F2300/407 Data transfer via internet
    • A63F2300/50 Features characterized by details of game servers
    • A63F2300/53 Details of basic data processing
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6045 Methods for mapping control signals received from the input arrangement into game commands

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the field of electronic information products, and in particular to a game man-machine interaction method and system applied in game scenarios. By setting up a comparison mechanism, together with a basic database and correspondingly configured training models on a server, the intelligent agent can reproduce the operations of a real player in a simulated real environment during interaction, enhancing both the interaction process and the game experience the interaction brings.

Description

Game man-machine interaction method and system
Technical Field
The invention relates to the field of electronic information products, and in particular to a game man-machine interaction method and system applied in game scenarios.
Background
A real-time game (Real Time Game) is one in which play proceeds continuously during the session, as opposed to turn-based games such as Go and chess. Real-time games are generally characterized by complex rules, dynamic and changeable scenes, uncertain opponent and character behavior, incomplete behavior-decision information, and short behavior-decision windows. Typical real-time games include, but are not limited to, battle-type games, in which the player manipulates a virtual character to fight an opponent character, with the goal of exhausting the opponent character's life value.
In the human-machine battle mode of a real-time game, the virtual character controlled by a real player fights against a virtual character controlled by the game's intelligent system. When controlling virtual characters in battle, the intelligent system faces a huge action-decision space while also needing to decide in real time. How it selects and executes a game strategy under these conditions is the key to achieving a high level of anthropomorphic control over the virtual characters, and to a great extent determines the real player's game experience.
At present, for human-computer interaction driven mainly by human actions, voice, and typed text, no targeted interaction method exists. Most prior technologies interact primarily through text, yet most human actions suffer from sluggish response, and when actions, voice, and text coexist, the computer cannot give the corresponding optimal execution feedback for such a complex behavior pattern.
Disclosure of Invention
The embodiments of the present application provide a game man-machine interaction method that judges a player's behavior by fusing actions, voice, and text. By establishing corresponding models, behavior judgment is carried out through the model, improving the computer's judgment of behavior in complex scenes and its ability to give corresponding feedback actions based on the judgment result.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
In a first aspect, an embodiment of the present application provides a game man-machine interaction method applied to a server and a game end, where the server collects user interaction data and the user end feeds behavior events back to the game end based on that interaction data. A basic database is configured on the server; the basic database is configured with feature data, which are used to compare interaction data acquired through different interaction modes. The method comprises: collecting the interaction data; uploading the interaction data to the server and comparing them with the feature data in the basic database to obtain a comparison result; triggering the behavior event corresponding to the comparison result; and sending the behavior event to the game end to interact with the user. The feature data are multi-structure data comprising first, second, and third interaction data: the first interaction data are set as invalid-behavior data, used to characterize invalid behaviors and reduce the amount of data to compare; the second interaction data are set as basic-behavior data, used to characterize and determine basic behaviors; and the third interaction data are configured as key-behavior data, used to characterize and determine key behaviors. Comparing the interaction data with the feature data comprises comparing the first, second, and third interaction data against the collected interaction data to obtain the comparison result, specifically: check whether the collected interaction data contain first interaction data, and remove any first interaction data to retain the residual data; extract, store, and label the second interaction data contained in the residual data; and extract, store, and label the third interaction data contained in the residual data, yielding behavior data consisting of the second and third interaction data. Triggering the behavior event corresponding to the comparison result specifically comprises: comparing at least two groups of labels in the behavior data with the interaction-behavior labels in a stored label library, where each interaction-behavior label corresponds to a plurality of behavior events.
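The filter-then-label comparison described above can be sketched as follows. This is a minimal illustration only; the function and data names are hypothetical and not part of the claimed method.

```python
def compare_interaction(interaction, invalid_set, basic_set, key_set):
    """Compare collected interaction tokens against the three feature sets.

    invalid_set: first interaction data (invalid behaviors, discarded early).
    basic_set / key_set: second / third interaction data, mapped to labels.
    """
    # Step 1: remove invalid (first) interaction data to shrink the search space.
    residual = [tok for tok in interaction if tok not in invalid_set]
    behavior = []
    for tok in residual:
        # Step 2: extract and label basic (second) behaviors.
        if tok in basic_set:
            behavior.append((tok, "basic:" + basic_set[tok]))
        # Step 3: extract and label key (third) behaviors.
        elif tok in key_set:
            behavior.append((tok, "key:" + key_set[tok]))
    return behavior

invalid = {"idle", "noise"}
basic = {"step": "move", "wave": "greet"}
key = {"double_tap": "attack"}
print(compare_interaction(["idle", "step", "double_tap"], invalid, basic, key))
```

The resulting label pairs would then be matched against the stored label library to pick the behavior event to trigger.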
In a first possible implementation manner of the first aspect, the method further comprises selecting an optimal behavior event from the plurality of behavior events, including: determining the optimal behavior event according to the target game state information based on a layered action decision model. The layered action decision model comprises a strategy-selection submodel and a plurality of mutually independent strategy-execution submodels, with the corresponding behavior events configured in the strategy-selection submodel. The strategy-selection submodel selects, according to the game state information, which strategy-execution submodel to run; the selected strategy-execution submodel determines, according to the game state information, the action the virtual character should execute; the target virtual character is then controlled to execute the target action.
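One way to picture the strategy-selection / strategy-execution split described above is the toy sketch below. The class names, the health-based selector, and the two strategies are illustrative assumptions, not the patent's actual models.

```python
class PolicyExecutionSubmodel:
    """Maps game state to a concrete action for the virtual character."""
    def __init__(self, name, action_fn):
        self.name = name
        self.action_fn = action_fn

    def decide(self, state):
        return self.action_fn(state)

class LayeredActionDecisionModel:
    """Selector picks which independent execution submodel to run."""
    def __init__(self, submodels, selector_fn):
        self.submodels = submodels      # mutually independent executors
        self.selector_fn = selector_fn  # stands in for the selection submodel

    def decide(self, state):
        chosen = self.selector_fn(state)             # first pick a strategy
        return self.submodels[chosen].decide(state)  # then pick an action

# Toy selector: defend when the character's HP is low, otherwise attack.
selector = lambda s: "defend" if s["hp"] < 30 else "attack"
model = LayeredActionDecisionModel(
    {"attack": PolicyExecutionSubmodel("attack", lambda s: "strike"),
     "defend": PolicyExecutionSubmodel("defend", lambda s: "block")},
    selector,
)
print(model.decide({"hp": 80}))  # strike
print(model.decide({"hp": 10}))  # block
```

Keeping the executors independent of the selector, as here, is what lets each strategy be trained or replaced on its own.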
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, determining the optimal behavior event according to the target game state information comprises determining it according to a game running state preset by the user and the acquired program running state, where the user-preset game running state includes the game difficulty setting, the game running-environment setting, and other personalized settings, and the program running state includes real-time physical data of the running program.
With reference to the first aspect, in a third possible implementation manner, the collected interaction data comprise interaction data generated by user behaviors, where the user behaviors include action behaviors, voice behaviors, and text behaviors. Action behaviors are collected by external equipment to obtain action-behavior interaction data; the external equipment comprises hardware provided with sensors, which are physically bound to the user to capture the behaviors generated by the user's motion and which communicate with the server. Voice behaviors are collected by a voice device that captures the user's speech to obtain voice-behavior interaction data and communicates with the server. Text behaviors are collected by capturing the text interaction data the user generates on the interaction interface and communicating them to the server.
With reference to the first aspect, in a fourth possible implementation manner, the method further comprises constructing the basic database, including: obtaining existing user behavior data; performing classification training on the behavior data to construct a training set; extracting preset key information from the training set and processing it; introducing an off-policy batch reinforcement learning algorithm to construct a behavior-prejudgment model based on reinforcement learning; training the prejudgment model with the obtained data to obtain a trained behavior-prejudgment model; obtaining the behavior data to be processed and extracting preset information from it; and, through data processing, obtaining a first vector, a second vector, and a third vector with corresponding first, second, and third labels, where the first vector is the first interaction data, the second vector is the second interaction data, and the third vector is the third interaction data.
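As a rough sketch, the output of this database-construction step can be pictured as three groups of labeled vectors. The structure below is illustrative only; the trivial magnitude-based classifier merely stands in for the trained reinforcement-learning prejudgment model.

```python
from dataclasses import dataclass

@dataclass
class LabeledBehavior:
    vector: list   # processed feature vector extracted from raw behavior data
    label: str     # its label in the basic database

def build_feature_records(raw_records, classify):
    """Split processed behavior vectors into the three feature groups.

    classify(vector) -> 1, 2 or 3 stands in for the trained
    behavior-prejudgment model described above.
    """
    groups = {1: [], 2: [], 3: []}  # first / second / third interaction data
    for vec, label in raw_records:
        groups[classify(vec)].append(LabeledBehavior(vec, label))
    return groups

# Toy classifier: signal magnitude decides invalid / basic / key.
classify = lambda v: 1 if sum(v) < 1 else (2 if sum(v) < 5 else 3)
groups = build_feature_records(
    [([0.1, 0.2], "noise"), ([1, 2], "walk"), ([4, 3], "combo")], classify)
print([b.label for b in groups[3]])  # ['combo']
```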
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner, uploading the action-behavior interaction data to the server and comparing them with the feature data in the basic database to obtain the comparison result specifically comprises: uploading the action-behavior interaction data to the server and comparing them with the action-behavior feature data in the basic database, where the action-behavior feature data comprise first, second, and third action-behavior feature data, as follows: compare the first action-behavior feature data with the action-behavior interaction data to obtain a first result; compare the first result with the second action-behavior feature data to obtain a second result; compare the second result with the third action-behavior feature data to obtain a third result; and combine the second and third results into the action-behavior comparison result, which is the final comparison result. Comparing the first action-behavior feature data with the action-behavior interaction data to obtain the first result works as follows: the first action-behavior feature data comprise action feature data and corresponding action-intensity data, the action-intensity data being a threshold; the interaction data are compared with the action feature data to obtain the corresponding action-intensity data, and the portion of the interaction data falling within the threshold range of the action-intensity data is determined; that portion is the first result.
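The first comparison stage just described, matching an action feature and checking its intensity against a threshold range, might be sketched like this. The pattern names and threshold values are invented for illustration.

```python
def first_stage_compare(samples, feature_patterns):
    """First comparison stage: keep samples whose measured intensity falls
    inside the threshold range attached to a matching action feature.

    feature_patterns maps an action-feature name to (low, high) intensity
    thresholds; each sample is (feature_name, measured_intensity).
    """
    first_result = []
    for name, intensity in samples:
        bounds = feature_patterns.get(name)
        # Samples with no matching feature, or out-of-range intensity, drop out.
        if bounds and bounds[0] <= intensity <= bounds[1]:
            first_result.append((name, intensity))
    return first_result

patterns = {"swing": (0.5, 2.0), "jump": (1.0, 3.0)}
samples = [("swing", 1.2), ("swing", 4.0), ("jump", 2.5), ("shake", 0.9)]
print(first_stage_compare(samples, patterns))  # [('swing', 1.2), ('jump', 2.5)]
```

The surviving samples would then feed the second and third comparison stages against the basic- and key-behavior feature data.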
In a sixth possible implementation manner of the first aspect, comparing the first action-behavior feature data with the action-behavior interaction data to obtain the first result further comprises determining that the action-behavior interaction data are continuous behavior data, i.e., interaction data generated by continuous actions performed by the user within a preset time period.
With reference to the fourth possible implementation manner of the first aspect, a seventh possible implementation manner comprises the following specific steps: upload the voice-behavior interaction data to the server and compare them with the voice-behavior feature data in the basic database to obtain a comparison result, where the voice-behavior feature data comprise first, second, and third voice-behavior feature data, as follows: compare the first voice-behavior feature data with the voice-behavior interaction data to obtain a first result; compare the first result with the second voice-behavior feature data to obtain a second result; compare the second result with the third voice-behavior feature data to obtain a third result; and combine the second and third results into the voice-behavior comparison result, which is the final comparison result. To compare the first voice-behavior feature data with the voice-behavior interaction data and obtain the first result, the voice length of the voice-behavior interaction data is identified, and whether to compare the voice-behavior interaction data with the first voice-behavior feature data at all is decided based on that voice length.
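The voice-length gate described here can be sketched as a simple bounds check run before any feature comparison. The concrete millisecond bounds are assumptions for illustration; the patent does not specify values.

```python
MIN_VOICE_MS = 200    # assumed lower bound: shorter clips treated as noise
MAX_VOICE_MS = 10000  # assumed upper bound for a single voice command

def should_compare_voice(duration_ms):
    """Gate the first voice comparison on the clip's length, so clips that
    cannot contain a valid command are never compared at all."""
    return MIN_VOICE_MS <= duration_ms <= MAX_VOICE_MS

print(should_compare_voice(50))    # False
print(should_compare_voice(1500))  # True
```

Skipping the comparison for implausible lengths serves the same goal as the invalid-behavior filtering: it cuts the amount of data the server must match against the feature database.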
With reference to the fourth possible implementation manner of the first aspect, an eighth possible implementation manner comprises the following specific steps: upload the text-behavior interaction data to the server and compare them with the text-behavior feature data in the basic database to obtain a comparison result, where the text-behavior feature data comprise first, second, and third text-behavior feature data, as follows: compare the first text-behavior feature data with the text-behavior interaction data to obtain a first result; compare the first result with the second text-behavior feature data to obtain a second result; compare the second result with the third text-behavior feature data to obtain a third result; and combine the second and third results into the text-behavior comparison result, which is the final comparison result. To compare the first text-behavior feature data with the text-behavior interaction data and obtain the first result, the text length of the text-behavior interaction data is identified, and whether to compare the text-behavior interaction data with the first text-behavior feature data is decided based on that text length.
In a second aspect, an embodiment of the present application further provides a game human-computer interaction system for carrying out the above game human-computer interaction method, comprising a server, a game end, and at least one client, where the client is provided with external equipment for acquiring interaction data. A basic database is configured on the server; the basic database is configured with feature data, which are used to compare interaction data acquired through different interaction forms. The game end executes and realizes the human-computer interaction.
According to the above technical scheme, by setting up a comparison mechanism, and by configuring on the server a basic database and correspondingly configured training models, the intelligent agent can reproduce the operations of a real player in a simulated real environment during interaction, enhancing the interaction process and the game experience the interaction brings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
The methods, systems, and/or processes of the figures are further described in accordance with the exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. They are non-limiting exemplary embodiments in which like numerals represent similar structures throughout the several views of the drawings.
Fig. 1 is a schematic system architecture diagram of a communication system according to an embodiment of the present application.
Fig. 2 is a block diagram of a server provided in an embodiment of the present application.
FIG. 3 is a flow diagram illustrating a method of game human-machine interaction, according to some embodiments of the present application.
Detailed Description
In order to better understand the technical solutions, the technical solutions of the present application are described in detail below with reference to the drawings and specific embodiments, and it should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions of the present application, and are not limitations of the technical solutions of the present application, and the technical features in the embodiments and examples of the present application may be combined with each other without conflict.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant guidance. It will be apparent, however, to one skilled in the art that the present application may be practiced without these specific details. In other instances, well-known methods, procedures, systems, compositions, and/or circuits have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present application.
The present application uses flowcharts to illustrate the operations performed by a system according to embodiments of the present application. It should be expressly understood that the operations of a flowchart need not be performed in order: they may be performed in reverse order or simultaneously, at least one other operation may be added to the flowchart, and one or more operations may be removed from it.
Referring to fig. 1, an embodiment of the present disclosure provides a game human-machine interaction system 100, where the game human-machine interaction system 100 includes a test server 200, a user terminal 300, and a game terminal 400, which are in communication with each other.
In practice, the test server 200 may be a single server or a server cluster composed of a plurality of servers, and only a single server is illustrated in fig. 1.
In this embodiment, the game terminal 400 is an intelligent terminal loaded with a game application program, and may be any of various electronic devices with a display screen, including but not limited to a smartphone, a tablet computer, a laptop computer, a desktop computer, and the like; fig. 1 illustrates only the smartphone. Optionally, one or more pieces of game software may be installed on one electronic device, which is not limited here.
In this embodiment, the user terminal 300 is a hardware device cooperating with the game terminal 400, and its specific hardware form differs between scenarios. For action behaviors, the user terminal is hardware matched to the user's motion: sensors are arranged at the positions where key bodily actions occur, and each sensor captures an action through the positional deviation produced when the body moves, or through the electrical signal generated by the deviation of its level module, reconstructing the action from the changes in that signal; the sensors exchange data with the server. For voice behaviors, the user terminal is hardware matched to the user's speech, chiefly a voice input device, which may be built into the game terminal or connected to it by wire or wirelessly, and which exchanges data with the server. For text behaviors, the user terminal is a text input device, which, like the voice input device, may be hardware or software built into the game terminal, or hardware such as a keyboard connected to the game terminal by wire or wirelessly; it likewise exchanges data with the server.
In the embodiment of the present application, a basic database is configured in the test server 200, and feature data are configured in that database. Because the user behaviors in this embodiment comprise action, voice, and text behaviors, the server correspondingly stores action-behavior, voice-behavior, and text-behavior feature data. The server 200 first classifies the collected data into the corresponding action, voice, or text behavior, compares the collected data of each behavior with the corresponding feature data, and generates the corresponding behavior of the virtual character at the game end based on the comparison result.
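The classify-then-compare dispatch on the server side can be sketched as a routing table from modality to comparator. The handler names and stub comparators below are illustrative assumptions, not the patent's actual components.

```python
# Stub comparators standing in for the three feature-data comparisons.
compare_action = lambda p: ("action", p.upper())
compare_voice = lambda p: ("voice", len(p))
compare_text = lambda p: ("text", p)

def dispatch(collected):
    """Route each collected record to the comparator for its modality,
    mirroring the server-side split into action / voice / text behaviors."""
    feature_sets = {"action": compare_action,
                    "voice": compare_voice,
                    "text": compare_text}
    results = []
    for modality, payload in collected:
        handler = feature_sets.get(modality)
        if handler is not None:  # records of unknown modality are ignored
            results.append(handler(payload))
    return results

print(dispatch([("action", "wave"), ("voice", "hello"), ("text", "gg")]))
```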
In this embodiment, when the server receives the data collected by the user side, it compares the collected data against the configured feature data.
Referring to fig. 2, a block diagram of a test server 200 according to some embodiments of the present disclosure is shown. The test server 200 includes a human-machine interaction device 210 based on the game human-machine interaction method, a memory 220, a processor 230, and a communication unit 240. The memory 220, processor 230, and communication unit 240 are electrically connected to each other, directly or indirectly, to enable the transfer and interaction of data; for example, these components may be electrically connected to each other via one or more communication buses or signal lines. The human-machine interaction device 210 includes at least one software function module that may be stored in the memory 220 in the form of software or firmware, or solidified in the operating system (OS) of the electronic device. The processor 230 executes the executable modules stored in the memory 220, such as the software function modules and computer programs included in the human-machine interaction device 210.
The memory 220 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 220 stores programs, and the processor 230 executes a program after receiving an execution instruction. The communication unit 240 establishes a communication connection between the test server 200 and the game terminal 400 over a network and transmits and receives data through that network.
The processor may be an integrated circuit chip with signal-processing capability. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or any conventional processor. It will be appreciated that the configuration shown in fig. 2 is merely illustrative; in other embodiments the test server 200 may include more or fewer components than shown in fig. 2, or have a different configuration. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 3, a flowchart of a game human-computer interaction method according to some embodiments of the present application is shown. The main idea is to compare the collected user behavior data against the feature data configured in the basic database, trigger the behavior event corresponding to the comparison result, and send that behavior event to the game end to guide the behavior decisions of the virtual character there.
The method specifically comprises the following steps:
step 210, collecting interactive data, in this embodiment, the interactive data includes at least three behaviors according to game behaviors sent by an agent, i.e. a user, to a game end, the three behaviors are respectively motion behavior data, voice behavior data and text behavior data, which are respectively collected by corresponding hardware devices, for example, obtaining a sensing signal based on a sensor arranged at a position where a human motion behavior occurs for the motion behavior data, determining specific data based on a change of an electric signal inside the sensor, and determining that the sensor is not used as a key technical content in this embodiment based on an existing peripheral hardware device, for example, connecting a hardware sensing device to a hand joint, collecting a posture change of a hand by the hardware sensing device to form a corresponding analog signal, and converting the analog signal into a digital signal, the digital signal is the interaction data generated by the action behavior in this embodiment. For the voice interaction behavior, the collection is performed by a voice input device built in the game terminal or connected with the game terminal, and the same voice input device that can be used in the embodiment can be an existing hardware device with a voice input function. Aiming at the character interaction behavior, the output of characters is realized through a character input device such as a keyboard and the like arranged inside the game terminal or connected with the outside of the game terminal, so that the character behavior is formed.
Because the collection of the corresponding interaction data can be achieved with existing technology and is not an essential technical feature of the human-computer interaction method provided by this embodiment, it is not described in detail here; any existing technology capable of collecting the data may be used.
Step 220, uploading the collected interaction data to the server. In this embodiment, a basic database is configured in the server, and feature data corresponding to the interaction data is stored in the basic database. The basic database needs to be constructed in a specific manner: it is initially populated with data acquired at the factory-setting stage or obtained by simulation. To make the database data closer to data generated by human activity in a real environment, the basic database may be reinforced through model training on the data collected while the whole system is running. When training on daily collected data, not only is the data collected by a single client used individually, but information can also be stored, distributed, and shared among servers, so that large-scale data training and storage are realized through a distributed server arrangement. Increasing the data amount in this way, and adding servers for training data, also improves the accuracy of the subsequent comparison process.
For training of a single model, this embodiment provides a corresponding specific method:
the existing user behavior data is obtained, the data can be newly increased by iteration in the training process or can be based on the existing data, and the behavior data is subjected to classification training to construct a training set. The classification training in this embodiment is determined based on the difference of the interactive behaviors, and for the training set, the classification is performed based on the classified data, and a corresponding training set is established after classification under the same category. Preset key information is extracted from the established training set, and the extraction of the key information can be set according to specific interaction behaviors, for example, if the interaction behaviors are classified, specific motion generation digital signals can be used as the key information. The method comprises the steps of carrying out data processing on preset key information, introducing a non-strategic batch reinforcement learning algorithm, constructing a behavior pre-judgment model based on reinforcement learning, training the pre-judgment model by using obtained data to obtain a trained behavior pre-judgment model, obtaining behavior data information to be processed, extracting information from the behavior data information to be processed, and obtaining a vector and a corresponding label through data processing.
In this embodiment, training the prejudgment model specifically includes the following process: parameters to be optimized, including the Q function, are extracted from the prejudgment model; offline learning training is performed on the existing interaction data and the action network according to the batch reinforcement learning algorithm; over-estimated Q values are eliminated and the finite Q values are discretized; priors are combined into the strategy through relative-entropy control; and equalization training is performed to obtain an intelligent action network, completing the pre-optimization of the network. In this embodiment, the Q function is the following formula:
Q(s_t, a_t) = E[ Σ_{k=0}^{∞} γ^k · r(s_{t+k}, a_{t+k}) ]
in this embodiment, s_t represents the environment state, a_t represents the action performed by the agent during reinforcement learning, r(s_t, a_t) represents the reward function, and γ represents the discount factor.
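For illustration only, the following sketch shows a plain tabular Q-learning update using the symbols defined above (state s, action a, reward r, discount factor γ). It is a simplification: it does not implement the batch off-policy training, over-estimation elimination, or relative-entropy control described in this embodiment.

```python
def q_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((s_next, a2), 0.0) for a2 in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q[(s, a)]

q_table = {}
# Hypothetical transition: agent punches from an "idle" state and is rewarded.
q_update(q_table, "idle", "punch", 1.0, "hit", ["punch", "block"])
```

In a batch setting, updates like this would be replayed over a fixed dataset of logged interactions rather than over live experience.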
In this embodiment, the feature data is multi-structure data comprising first interaction data, second interaction data, and third interaction data. The first interaction data is set as invalid behavior data for characterizing invalid behaviors, so that invalid behaviors can be filtered out and the data amount of the comparison reduced; the second interaction data is set as basic behavior data for characterizing and determining basic behaviors; and the third interaction data is configured as key behavior data for characterizing and determining key behaviors.
Returning to the construction of the training model: based on the multi-structure nature of the feature data, preset information is extracted from the behavior data to be processed and the data is processed to obtain vectors. The training model is associated with the final basic database, and during training the data is given the corresponding structure; that is, the vectors comprise a first vector, a second vector, and a third vector, which correspond to the first, second, and third interaction data, respectively.
The acquired action behavior data is compared with the first, second, and third interaction data stored in the basic database to obtain a comparison result. The specific process is as follows:
and comparing the interactive data with the first interactive data, determining whether the interactive data contains the first interactive data, and reserving the residual data after the first interactive data is removed.
Next, the retained data is compared with the second interaction data. Unlike the comparison with the first interaction data, this step identifies the second interaction data within the retained data, extracts and stores it, and applies tag processing. Because the second interaction data is basic behavior data for characterizing basic behaviors, the tag indicates which basic behavior the data represents.
Finally, the interaction data remaining after the first interaction data has been removed and the second interaction data has been stored is compared with the third interaction data. As with the second interaction data, the third interaction data contained therein is extracted, stored, and tagged. The extracted interaction data corresponding to the second and third interaction data together form the overall behavior data.
With this comparison method, the second and third interaction data with characterization significance are not extracted directly from the interaction data; the first interaction data is filtered out first. The filtered data also has technical significance: it increases the data volume of the basic database and allows the training model to be further trained on the newly acquired interaction data, which directly improves the accuracy of data extraction.
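The three-stage comparison described above, filtering first (invalid) interaction data and then tagging second (basic) and third (key) interaction data, can be sketched as follows; the token and tag representations are illustrative assumptions:

```python
def compare_interaction(data, invalid_set, basic_map, key_map):
    """Three-stage comparison sketch:
    1) drop tokens matching first (invalid) interaction data,
    2) tag tokens matching second (basic) interaction data,
    3) tag tokens matching third (key) interaction data.
    basic_map / key_map map feature tokens to behavior tags."""
    retained = [t for t in data if t not in invalid_set]   # stage 1
    tagged = []
    for t in retained:
        if t in basic_map:
            tagged.append(("basic", basic_map[t]))         # stage 2
        elif t in key_map:
            tagged.append(("key", key_map[t]))             # stage 3
    return tagged

tags = compare_interaction(
    ["uh", "I", "chariot", "left"],
    invalid_set={"uh"},
    basic_map={"I": "subject"},
    key_map={"chariot": "piece", "left": "direction"},
)
```

The tagged basic and key entries together correspond to the "overall behavior data" that is later matched against the tag library.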
Step 230, triggering a behavior event corresponding to the comparison result based on the comparison result.
In step 220, the collected behavior is converted into data and the data is processed, but the final interaction at the game end is a behavior event. Therefore, in this step, the determined interaction data corresponding to the second and third interaction data, which characterize the behavior, is associated with a specific behavior event. The behavior event is a behavior event of an agent in the game end; the agent is determined according to the specific game settings and the specific executors corresponding to different game types.
The process mainly comprises the following steps:
At least two groups of tags in the interaction data are compared with the interaction behavior tags in the stored tag library; in this embodiment, the interaction behavior tags correspond to a plurality of behavior events.
Therefore, in order to optimize the final interaction effect of the agent, the plurality of behavior events need to be screened to determine an optimal behavior event. The method for selecting the optimal behavior event is as follows:
Selecting the optimal behavior event requires judgment by a corresponding model. In this embodiment, the optimal behavior event is determined from the target game state information by a hierarchical action decision model. The hierarchical decision model comprises a strategy selection submodel and a plurality of mutually independent strategy execution submodels; each strategy execution submodel is configured with a corresponding behavior event, and the strategy selection submodel selects, according to the game state information, which strategy execution submodel needs to run. In this embodiment, the strategy execution submodel determines, from the game state information, the action that the virtual character, i.e. the agent, needs to execute, and controls the target virtual character to execute the target action.
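A minimal sketch of the hierarchical decision model described above, with a strategy selection submodel choosing among independent strategy execution submodels; the health-based selection rule and the action names are purely illustrative assumptions:

```python
class StrategySelector:
    """Strategy selection submodel: picks which strategy execution submodel runs."""
    def __init__(self, executors):
        self.executors = executors  # name -> callable(game_state) -> action

    def select(self, game_state):
        # Illustrative selection rule: defend at low health, otherwise attack.
        return "defend" if game_state.get("health", 100) < 30 else "attack"

    def decide(self, game_state):
        submodel = self.executors[self.select(game_state)]
        return submodel(game_state)

executors = {
    "attack": lambda state: "punch",   # strategy execution submodels
    "defend": lambda state: "block",
}
model = StrategySelector(executors)
```

Keeping the execution submodels independent lets each be trained or replaced separately, while the selector alone reads the global game state.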
In addition, in this embodiment, determining the optimal behavior event according to the target game state information includes determining it according to the game running state preset by the user and the acquired program running state, where the game running state preset by the user includes the game difficulty setting, the game running environment setting, and other personalized operation settings, and the program running state includes real-time physical data of the running program.
To enhance the realism and playability of the game, the behavior decisions of the agents in the game need to differ for different game settings. For example, the feedback an agent gives to the collected player actions in a high-difficulty game differs from that under a low-difficulty setting, and so does the resulting game experience. In the prior art, for high-difficulty and low-difficulty settings, the feedback given by the agent differs only in physical parameters; for example, in a shooting game, the main difference of the NPC, i.e. the agent, under the high-difficulty setting is that its health value is higher than under the low-difficulty setting. The resulting game experience is merely that the player must deal more damage per unit time for the same scene and the same behavior action, while the nature and processing of the behavior remain the same. In this embodiment, the difference in the agent's interaction behavior caused by different game settings is not only a change at the level of physical data, but also a change in how the agent judges behaviors and acts after judgment, which provides more realism and playability than the prior art and enhances the game experience.
In addition, in this embodiment, the optimal behavior event can be determined according to the specific running environment of the game program. For example, when the running environment of the game-end program is poor, the behavior events of some agents cannot be fully displayed, and the behavior event best suited for display under the current running environment must be chosen. For example, at a low frame rate, some actions with high rendering requirements cannot be expressed, or their expression stutters, so an optimal choice must be made for the agent's behavior.
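The environment-dependent selection described above can be sketched as follows, assuming each candidate behavior event carries a minimum frame rate it needs for smooth display; the event fields and the scoring scheme are illustrative assumptions:

```python
def pick_displayable_event(candidates, fps):
    """Filter candidate behavior events by the client's frame rate, then
    pick the highest-scoring event the runtime can still render smoothly."""
    viable = [e for e in candidates if e["min_fps"] <= fps]
    if not viable:  # fall back to the least demanding event
        viable = [min(candidates, key=lambda e: e["min_fps"])]
    return max(viable, key=lambda e: e["score"])

events = [
    {"name": "combo_animation", "min_fps": 60, "score": 0.9},
    {"name": "simple_block", "min_fps": 24, "score": 0.6},
]
choice = pick_displayable_event(events, fps=30)
```

At 30 frames per second the high-rendering combo is skipped in favor of the simpler event, mirroring the stutter-avoidance rationale above.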
The program running environment of the game end can be assessed in a timely manner by calling running data in the background. When calling data in the background is not permitted for confidentiality or privacy reasons, the game environment needs to be determined through the player. In other implementations, undesirable events caused by running-environment factors, such as game stutters, may be recorded, and the running data may be adjusted automatically in real time through model training.
In this embodiment, the human-computer interaction behaviors carried out through the different behavior types are described in detail below.
For action behaviors, the embodiment provides a game man-machine interaction method, which specifically comprises the following processes:
Action behavior interaction data of the user is acquired through external devices and uploaded to the server. The action behavior interaction data is determined to be continuous action data, i.e. interaction data generated by continuous actions performed by the user within a preset time period. The determined action behavior interaction data is compared with the action behavior feature data in the basic database: the first action behavior feature data is compared together with the action intensity data corresponding to the action behavior feature data, the action intensity data serving as a threshold in this embodiment. The interaction data is compared with the action behavior feature data, the corresponding action intensity data is acquired, and the data of the interaction data falling within the threshold range of the action intensity data is determined to obtain a second result. The second result is compared with the third action behavior feature data to obtain a third result, and the second and third results are combined to obtain the action behavior comparison result. The behavior tags corresponding to the data in the second and third results are compared with the interaction behavior tags in the stored tag library to obtain a plurality of behavior events; the optimal behavior event is determined from the plurality of behavior events, and a behavior event command is sent to the game end to control the agent to interact with the user.
In this embodiment, the first action behavior feature data is meaningless data, and the judgment of meaningless data is based on the action intensity data, because action behaviors are collected by external sensors and the sensors are arranged mainly at the key points from which human behaviors are issued. In this embodiment, 17 skeletal key points are selected: head, right shoulder, right elbow, right wrist, right hand, left shoulder, left elbow, left wrist, left hand, right knee, right ankle, right foot, left knee, left ankle, left foot, right hip, and left hip. The first action behavior feature data is data whose action intensity does not reach the threshold when a behavior is issued at a certain key point, and it needs to be removed. For example, in a "boxing" fighting game, the "touching the face" behavior that a player performs unconsciously while punching is captured, via a sensor arranged on the upper arm, through the change in the angle between the upper arm and the lateral chest position.
The second and third action behavior feature data are determined according to the criticality of the action among the remaining continuous, meaningful actions. For example, in the "boxing" game, when a player throws a left hook and no first action behavior feature data is present, the second action behavior feature data is the swing angle of the forearm: through the change of this angle the agent determines that the action is an arm swing, and through the judgment of the angle, that the arm swings in a specific direction; this basic feature data determines the action. The third action behavior feature data is the key feature data, namely the specific speed determined by the sensor arranged on the forearm, which in turn determines the strength of the strike. This part is not described in detail; a speed-to-strength conversion may be performed by specific rules for each action, and this data is the key data.
Therefore, when a player performs a "left hook", the data of each key point is collected and compared to determine whether first action behavior feature data exists for that key point; then the second action behavior feature data of the key point is compared; and the result of the second comparison is combined with the third action behavior feature data, i.e. the punching strength of the hook, as the condition for the final behavior event.
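The per-key-point action comparison can be sketched as follows; the intensity measure and the speed-to-strength conversion rule are toy assumptions standing in for the per-action rules mentioned above:

```python
def classify_keypoint_motion(keypoint, angle_change, speed, thresholds):
    """Per-keypoint action comparison sketch.

    - below the intensity floor -> first (meaningless) feature data, dropped;
    - the angle change is treated as the basic (second) feature;
    - the speed is treated as the key (third) feature, mapped to strength.
    """
    floor = thresholds.get(keypoint, 0.0)
    intensity = abs(angle_change) * speed    # toy intensity measure
    if intensity < floor:
        return None                          # filtered as invalid behavior
    strength = round(speed * 10)             # toy speed->strength conversion
    return {"keypoint": keypoint, "basic": angle_change,
            "key_strength": strength}

hit = classify_keypoint_motion("left_forearm", angle_change=45.0, speed=2.0,
                               thresholds={"left_forearm": 10.0})
```

A strong forearm swing survives the filter and yields a strength value, while an incidental low-intensity motion (e.g. touching the face) returns `None` and triggers no event.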
Through this comparison mechanism, the false triggering rate of behavior events is reduced; the determination of basic and key behaviors improves the decision efficiency of the agent and the correspondence between behavior events and player behaviors, that is, the sensitivity is improved.
For voice behavior, the voice behavior interaction data is uploaded to the server and compared with the voice behavior feature data in the basic database. In this embodiment, the voice behavior feature data includes first, second, and third voice behavior feature data. The voice length of the voice behavior interaction data is recognized, and whether to compare the data with the first voice behavior feature data is determined based on that length. In some cases a player may spontaneously produce meaningless voice behaviors even in the interactive environment, and strictly comparing every utterance would increase the amount of meaningless comparison; it is therefore necessary to determine whether a voice behavior is meaningful before performing the data comparison. The voice length serves as the basis: the shorter the voice, the higher the probability that it is meaningless. Meaningless voice behaviors are likewise stored, increasing the amount of data in the basic database.
For voice data whose voice behavior is meaningful, the first voice behavior feature data is compared with the voice behavior interaction data to obtain a first result; the first result is compared with the second voice behavior feature data to obtain a second result; the second result is compared with the third voice behavior feature data to obtain a third result; and the second and third results are combined to obtain the voice behavior comparison result, which serves as the comparison result.
In this embodiment, the first voice behavior feature data is meaningless data, for example interjections without obvious practical meaning, including but not limited to "ah", "oh", "uh", and the like. These interjections may be preset in the corresponding database or obtained through later training, and their recognition is based on existing, relatively mature speech recognition algorithms. The final expression of a voice behavior is a phrase consisting of several words, each with a part of speech, and the words can be divided accordingly: subject and object words such as "I", "you", and "he" serve as basic behaviors indicating object and identity. The third voice behavior features are the key behaviors: verbs and nouns that do not belong to the subject-predicate-object parts of speech, as well as some directional nouns, for example "pig", "dog", "up", "down", and "knife".
The method provided by this embodiment can extract features from a long sentence. For example, in a chess game, when a player says "uh, my chariot moves left five squares", the "uh" merely indicates tone and belongs to the first voice behavior feature data, "my" serves as the second voice behavior feature data, and "chariot", "move left", and "five squares" serve as the third voice behavior feature data, so that the behavior action is extracted more quickly and the recognition sensitivity and accuracy are improved.
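The part-of-speech-based splitting of recognized words into the three classes of voice behavior feature data can be sketched as follows; the filler and pronoun word lists are illustrative assumptions:

```python
FILLERS = {"uh", "ah", "oh", "er"}            # first voice features (assumed set)
SUBJECT_OBJECT = {"i", "you", "he", "she"}    # second voice features

def split_voice_features(tokens):
    """Split recognized words into first (filler), second (subject/object),
    and third (key verbs/nouns/directions) voice behavior feature data."""
    first, second, third = [], [], []
    for w in tokens:
        lw = w.lower()
        if lw in FILLERS:
            first.append(w)
        elif lw in SUBJECT_OBJECT:
            second.append(w)
        else:
            third.append(w)
    return first, second, third

f, s, t = split_voice_features(["Er", "I", "move", "chariot", "left", "five"])
```

A production system would rely on a real part-of-speech tagger rather than fixed word lists, but the three-way split mirrors the feature classes above.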
The behavior tags corresponding to the data in the second and third results are compared with the interaction behavior tags in the stored tag library to obtain a plurality of behavior events; the optimal behavior event is determined from them, and a behavior event command is sent to the game end to control the agent to interact with the user.
For text behavior, the text behavior is uploaded to the server as interaction data and compared with the text behavior feature data in the basic database. In this embodiment, the text behavior feature data includes first, second, and third text behavior feature data.
Similarly to the voice and action behaviors, text behavior can be either meaningful or meaningless, so a meaningfulness judgment needs to be performed on the text behavior before data comparison. Owing to the particularity of text, shorter text is less likely to be meaningful, and a single character or a single word is even more likely to be meaningless. Therefore, the text length of the text behavior interaction data is recognized, and whether the text behavior is meaningful is determined within a certain threshold range. The determined data is then compared in a process similar to that for action and voice behaviors: the first text behavior feature data is compared with the text behavior interaction data to obtain a first result; the first result is compared with the second text behavior feature data to obtain a second result; the second result is compared with the third text behavior feature data to obtain a third result; and the second and third results are combined to obtain the text behavior comparison result, which serves as the primary comparison result.
The behavior tags corresponding to the data in the second and third results of the text behavior are compared with the interaction behavior tags in the stored tag library to obtain a plurality of behavior events; the optimal behavior event is determined from them, and a behavior event command is sent to the game end to control the agent to interact with the user.
However, unlike voice behavior, text contains punctuation marks, so a meaninglessness judgment for punctuation marks needs to be added: when a lone punctuation mark appears, it is determined to belong to the first text behavior feature data, and no subsequent behavior processing is performed.
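The meaningfulness pre-filter for text behavior, combining the length threshold with the lone-punctuation rule, can be sketched as follows; the length threshold and the punctuation set are illustrative assumptions:

```python
import string

# ASCII punctuation plus a few common fullwidth marks (assumed set).
PUNCTUATION = set(string.punctuation) | set("，。！？、；：")

def is_meaningful_text(text, min_len=2):
    """Pre-filter for text behavior: very short input or input consisting
    only of punctuation is treated as first (invalid) text behavior data."""
    stripped = text.strip()
    if len(stripped) < min_len:            # too short -> likely meaningless
        return False
    if all(ch in PUNCTUATION for ch in stripped):
        return False                       # lone punctuation -> first feature
    return True
```

Only text passing this filter would proceed to the first/second/third feature comparison described above.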
The embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed on a computer, the computer is caused to execute the game human-computer interaction method provided by the embodiments of the present application.
It should be understood that, for technical terms not explicitly defined above, a person skilled in the art can unambiguously determine their meaning from the above disclosure; for example, for certain thresholds and coefficients, the skilled person can derive them from the surrounding logical relationships, and their value ranges can be selected according to the actual situation, for example 0.1 to 1, 1 to 10, or 50 to 100, without limitation here.
The skilled person can likewise determine, without doubt, labels such as "preset", "reference", "predetermined", "set", and "preferred", e.g. a threshold, threshold interval, or threshold range, based on the above disclosure. For unexplained technical feature terms, the skilled person is fully capable of reasonably and unambiguously deriving the technical solution from the logical relations between the preceding and following text, so as to implement it clearly and completely. Prefixes of unexplained technical feature terms, such as "first", "second", "example", and "target", and suffixes such as "set" and "list", can likewise be unambiguously derived and determined from the context.
The above disclosure of the embodiments of the present application will thus be apparent to those skilled in the art. It should be understood that the process by which a skilled person derives and analyzes unexplained technical terms is based on the contents described in the present application; the above therefore does not require inventive judgment of the overall scheme.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered as illustrative and not restrictive of the application. Various modifications, adaptations, and alternatives may occur to one skilled in the art, though not explicitly described herein. Such alterations, modifications, and improvements are intended to be suggested herein and are intended to be within the spirit and scope of the exemplary embodiments of this application.
Also, this application uses specific terminology to describe embodiments of the application. Reference to "one embodiment", "an embodiment", and/or "some embodiments" means a feature, structure, or characteristic described in connection with at least one embodiment of the application. Therefore, it is emphasized that two or more references to "an embodiment", "one embodiment", or "an alternative embodiment" in various portions of this specification do not necessarily all refer to the same embodiment. Furthermore, certain features, structures, or characteristics of at least one embodiment of the present application may be combined as appropriate.
In addition, those skilled in the art will recognize that the various aspects of the application may be illustrated and described in terms of several patentable species or contexts, including any new and useful combination of procedures, machines, articles, or materials, or any new and useful modifications thereof. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "unit", "component", or "system". Furthermore, aspects of the present application may be embodied as a computer product, located in at least one computer readable medium, which includes computer readable program code.
A computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable signal medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the execution of aspects of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages. The program code may execute entirely on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any network format, such as a local area network (LAN) or a wide area network (WAN); the connection may also be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service such as software as a service (SaaS).
Additionally, unless explicitly stated in the claims, the order of processing elements and sequences, use of numerical letters, or use of other designations in this application is not intended to limit the order of the processes and methods in this application. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it should be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware means, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
It should also be appreciated that in the foregoing description of embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of at least one embodiment of the invention. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.

Claims (10)

1. A game man-machine interaction method applied to a server, the server being connected with a user side and a game side, the user side being used for collecting interaction data of a user, and the server feeding back behavior events to the game side based on the interaction data; a basic database is configured on the server, the basic database is configured with feature data, and the feature data is used for comparing the collected interaction data in different interaction modes; the method comprises the following steps:
collecting the interaction data;
uploading the interaction data to the server and comparing the interaction data with the characteristic data in the basic database to obtain a comparison result;
triggering a behavior event corresponding to the comparison result;
sending the behavior event to the game side to realize interaction with the user;
wherein the characteristic data is multi-structure data comprising first interaction data, second interaction data and third interaction data; the first interaction data is set as invalid behavior data, used for representing invalid behaviors and reducing the amount of data to be compared; the second interaction data is set as basic behavior data, used for representing and determining basic behaviors; and the third interaction data is configured as key behavior data, used for representing and determining key behaviors;
the comparing of the interaction data with the characteristic data comprises comparing the interaction data against the first interaction data, the second interaction data and the third interaction data to obtain the comparison result, and specifically comprises the following steps: determining whether the interaction data contains the first interaction data, and removing the first interaction data to retain the remaining data; extracting, storing and labeling the second interaction data contained in the remaining data; and extracting, storing and labeling the third interaction data contained in the remaining data, so as to obtain behavior data consisting of the second interaction data and the third interaction data;
the triggering of the behavior event corresponding to the comparison result specifically comprises: comparing at least two groups of labels in the behavior data with interaction behavior labels in a stored label library, wherein each interaction behavior label corresponds to a plurality of behavior events.
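As an illustrative sketch only (the claim specifies no implementation), the three-tier comparison and label matching described above might look as follows; the token sets, tag names, and tag library contents are invented for the example:

```python
# Hypothetical sketch of claim 1's three-tier comparison. INVALID, BASIC,
# KEY, and TAG_LIBRARY are illustrative assumptions, not the patent's data.

INVALID = {"idle", "noise"}                          # first interaction data: invalid behaviors
BASIC = {"walk": "tag_walk", "jump": "tag_jump"}     # second: basic behaviors -> labels
KEY = {"combo": "tag_combo", "cast": "tag_cast"}     # third: key behaviors -> labels

# Stored label library: a (basic, key) label pair maps to candidate behavior events.
TAG_LIBRARY = {
    ("tag_walk", "tag_cast"): ["event_move_cast"],
    ("tag_jump", "tag_combo"): ["event_air_combo"],
}

def compare(interaction):
    """Remove invalid tokens, label basic/key behaviors, match label pairs to events."""
    remaining = [t for t in interaction if t not in INVALID]  # drop first interaction data
    basic_tags = [BASIC[t] for t in remaining if t in BASIC]  # label basic behaviors
    key_tags = [KEY[t] for t in remaining if t in KEY]        # label key behaviors
    events = []
    for b in basic_tags:
        for k in key_tags:
            events.extend(TAG_LIBRARY.get((b, k), []))
    return events

print(compare(["idle", "walk", "cast"]))  # -> ['event_move_cast']
```

Filtering the invalid tier first, as the claim recites, keeps the later label lookups small: only tokens that survive the first pass are ever labeled or matched.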
2. The method of claim 1, further comprising selecting an optimal behavior event from the plurality of behavior events, comprising:
determining the optimal behavior event according to state information of the target game based on a hierarchical action decision model, the hierarchical action decision model comprising a strategy selection submodel and a plurality of mutually independent strategy execution submodels, with corresponding behavior events configured in the strategy selection submodel; the strategy selection submodel is used for selecting, according to the game state information, the strategy execution submodel required to run from among the strategy execution submodels; the strategy execution submodel is used for determining, according to the game state information, the action required to be executed by the virtual character; and controlling the target virtual character to execute the target action.
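A minimal sketch of the two-level structure of claim 2, assuming toy game-state fields (`self_hp`, `enemy_hp`) and two invented strategy execution submodels; the patent does not prescribe these policies or thresholds:

```python
# Illustrative hierarchical action decision model: a selection submodel picks
# which execution submodel runs; each execution submodel maps state -> action.
# All policies and state fields here are assumptions for the sketch.

def aggressive_policy(state):
    return "attack" if state["enemy_hp"] < 50 else "approach"

def defensive_policy(state):
    return "heal" if state["self_hp"] < 30 else "guard"

EXECUTION_SUBMODELS = {
    "aggressive": aggressive_policy,
    "defensive": defensive_policy,
}

def selection_submodel(state):
    # Strategy selection: choose the execution submodel required to run.
    return "defensive" if state["self_hp"] < 50 else "aggressive"

def decide_action(state):
    policy = EXECUTION_SUBMODELS[selection_submodel(state)]
    return policy(state)

print(decide_action({"self_hp": 80, "enemy_hp": 40}))  # -> attack
```

Keeping the execution submodels mutually independent, as the claim requires, means each can be trained or replaced without touching the selection layer.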
3. The game man-machine interaction method of claim 2, wherein the determining of the optimal behavior event according to the target game state information comprises determining the optimal behavior event according to a game running state preset by the user and a collected program running state; the game running state preset by the user comprises a game difficulty level setting, a game running environment setting and a personalized operation setting, and the program running state comprises real-time physical data of the running program.
4. The method of claim 1, wherein collecting the interaction data comprises generating interaction data based on user behaviors, the user behaviors comprising action behaviors, voice behaviors and character behaviors; the action behaviors are collected by an external device to obtain action behavior interaction data, the external device comprising hardware provided with a sensor, wherein the sensor is physically bound to the user so as to collect behaviors generated by the user's motion, and communicates with the server; the voice behaviors are collected by a voice device that records the user's voice to obtain voice behavior interaction data and communicates with the server; and the character behaviors are collected so as to obtain character interaction data generated by the user on an interaction interface and communicate it to the server.
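As an illustrative assumption (claim 4 names collection devices, not data formats), the three collection channels could tag each sample with its modality before upload, so the server can route it to the matching characteristic data:

```python
# Hypothetical envelope for the three modalities of claim 4; the field
# names and modality strings are invented for the example.
from dataclasses import dataclass
from typing import Any

@dataclass
class InteractionData:
    modality: str   # "action" | "voice" | "text"
    payload: Any    # sensor reading, audio bytes, or typed text

def collect_action(sensor_reading):
    # From the sensor-equipped external device bound to the user.
    return InteractionData("action", sensor_reading)

def collect_voice(audio_bytes):
    # From the voice device recording the user.
    return InteractionData("voice", audio_bytes)

def collect_text(text):
    # From the interaction interface.
    return InteractionData("text", text)

print(collect_text("attack now").modality)  # -> text
```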
5. The method of claim 4, further comprising building the basic database, comprising the following steps:
obtaining existing user behavior data, performing classification training on the behavior data, and constructing a training set; extracting preset key information from the training set and performing data processing on the preset key information; introducing an off-policy batch reinforcement learning algorithm and constructing a behavior pre-judgment model based on reinforcement learning; training the pre-judgment model with the obtained data to obtain a trained behavior pre-judgment model; and obtaining behavior data information to be processed, extracting preset information from the behavior data information to be processed, and obtaining, through data processing, a first vector, a second vector and a third vector together with a corresponding first label, second label and third label, wherein the first vector is the first interaction data, the second vector is the second interaction data, and the third vector is the third interaction data.
6. The game man-machine interaction method of claim 5, wherein uploading the action behavior interaction data to the server and comparing it with the characteristic data in the basic database to obtain a comparison result comprises the following specific steps:
uploading the action behavior interaction data to the server and comparing it with action behavior characteristic data in the basic database to obtain a comparison result, wherein the action behavior characteristic data comprises first action behavior characteristic data, second action behavior characteristic data and third action behavior characteristic data, and the comparison comprises the following steps:
comparing the first action behavior characteristic data with the action behavior interaction data to obtain a first result, comparing the first result with the second action behavior characteristic data to obtain a second result, comparing the second result with the third action behavior characteristic data to obtain a third result, and combining the second result with the third result to obtain an action behavior comparison result, the action behavior comparison result being the final comparison result;
wherein the comparing of the first action behavior characteristic data with the action behavior interaction data to obtain the first result comprises the following steps:
the first action behavior characteristic data comprises action characteristic data and action intensity data corresponding to the action characteristic data, the action intensity data being a threshold; the interaction data is compared with the action characteristic data to obtain the corresponding action intensity data, and the data of the interaction data falling within the threshold range of the action intensity data is determined as the first result.
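The intensity-threshold step of claim 6 might be sketched as follows; the feature table and the (min, max) threshold ranges are invented for illustration:

```python
# Hypothetical first-result filter: each action feature carries an intensity
# threshold range, and an interaction sample is kept only when its measured
# intensity falls inside that range.

ACTION_FEATURES = {
    "swing": (0.3, 1.0),  # (min, max) intensity thresholds, assumed units
    "step":  (0.1, 0.6),
}

def first_result(samples):
    """samples: list of (action, intensity); keep those within threshold."""
    kept = []
    for action, intensity in samples:
        bounds = ACTION_FEATURES.get(action)
        if bounds and bounds[0] <= intensity <= bounds[1]:
            kept.append((action, intensity))
    return kept

print(first_result([("swing", 0.5), ("step", 0.9), ("wave", 0.4)]))
```

Here "step" at 0.9 exceeds its range and "wave" has no feature entry, so only the in-range "swing" sample survives to the second comparison.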
7. The game man-machine interaction method of claim 6, wherein the comparing of the first action behavior characteristic data with the action behavior interaction data to obtain the first result further comprises: determining that the action behavior interaction data is continuous behavior data, the action behavior interaction data being interaction data generated by continuous actions performed by the user within a preset time period.
8. The game man-machine interaction method of claim 5, wherein uploading the voice behavior interaction data to the server and comparing it with the characteristic data in the basic database to obtain a comparison result comprises the following specific steps:
uploading the voice behavior interaction data to the server and comparing it with voice behavior characteristic data in the basic database to obtain a comparison result, wherein the voice behavior characteristic data comprises first voice behavior characteristic data, second voice behavior characteristic data and third voice behavior characteristic data, and the comparison comprises the following steps:
comparing the first voice behavior characteristic data with the voice behavior interaction data to obtain a first result, comparing the first result with the second voice behavior characteristic data to obtain a second result, comparing the second result with the third voice behavior characteristic data to obtain a third result, and combining the second result with the third result to obtain a voice behavior comparison result, the voice behavior comparison result being the final comparison result;
wherein the comparing of the first voice behavior characteristic data with the voice behavior interaction data to obtain the first result comprises: identifying the voice length of the voice behavior interaction data, and determining, based on the voice length, whether to compare the voice behavior interaction data with the first voice behavior characteristic data.
9. The game man-machine interaction method of claim 5, wherein uploading the character behavior interaction data to the server and comparing it with the characteristic data in the basic database to obtain a comparison result comprises the following specific steps:
uploading the character behavior interaction data to the server and comparing it with character behavior characteristic data in the basic database to obtain a comparison result, wherein the character behavior characteristic data comprises first character behavior characteristic data, second character behavior characteristic data and third character behavior characteristic data, and the comparison comprises the following steps:
comparing the first character behavior characteristic data with the character behavior interaction data to obtain a first result, comparing the first result with the second character behavior characteristic data to obtain a second result, comparing the second result with the third character behavior characteristic data to obtain a third result, and combining the second result with the third result to obtain a character behavior comparison result, the character behavior comparison result being the final comparison result;
wherein the comparing of the first character behavior characteristic data with the character behavior interaction data to obtain the first result comprises: identifying the character length of the character behavior interaction data, and determining, based on the character length, whether to compare the character behavior interaction data with the first character behavior characteristic data.
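Claims 8 and 9 share the same gating idea: the sample's length decides whether the first-tier comparison runs at all. A sketch under assumed units (seconds of audio, or characters of text) and an invented minimum length:

```python
# Hypothetical length gate for the voice (claim 8) and character (claim 9)
# channels: samples below MIN_LENGTH skip the first comparison entirely.

MIN_LENGTH = 2  # assumed minimum length; units depend on the channel

def should_compare(sample_length):
    return sample_length >= MIN_LENGTH

def gated_first_compare(sample, length, invalid_set):
    """Return None for samples too short to compare; otherwise drop the
    tokens matching the first (invalid) characteristic data."""
    if not should_compare(length):
        return None
    return [token for token in sample if token not in invalid_set]

print(gated_first_compare(["um", "attack"], 2, {"um"}))  # -> ['attack']
```

The gate is cheap relative to the comparison itself, which is presumably why both claims put it before the first-tier match rather than after.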
10. A game man-machine interaction system, used for executing the game man-machine interaction method of any one of claims 1 to 9, and comprising a server, a game end and at least one client, wherein the client is provided with an external device for acquiring interaction data; a basic database is configured on the server, the basic database is configured with characteristic data, and the characteristic data is used for comparison against interaction data acquired in different interaction forms; and the game end is used for executing the behavior events to realize man-machine interaction.
CN202210700965.6A 2022-06-21 2022-06-21 Game man-machine interaction method and system Active CN114768246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210700965.6A CN114768246B (en) 2022-06-21 2022-06-21 Game man-machine interaction method and system


Publications (2)

Publication Number Publication Date
CN114768246A (en) 2022-07-22
CN114768246B (en) 2022-08-30

Family

ID=82421540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210700965.6A Active CN114768246B (en) 2022-06-21 2022-06-21 Game man-machine interaction method and system

Country Status (1)

Country Link
CN (1) CN114768246B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090098920A1 (en) * 2007-10-10 2009-04-16 Waterleaf Limited Method and System for Auditing and Verifying User Spoken Instructions for an Electronic Casino Game
CN107773982A (en) * 2017-10-20 2018-03-09 科大讯飞股份有限公司 Game voice interactive method and device
US20180290058A1 (en) * 2016-04-08 2018-10-11 Tencent Technology (Shenzhen) Company Limited Method for controlling character movement in game, server, and client
CN108939533A (en) * 2018-06-14 2018-12-07 广州市点格网络科技有限公司 Somatic sensation television game interactive approach and system
US20200078682A1 (en) * 2018-09-11 2020-03-12 Ncsoft Corporation System, sever and method for controlling game character
CN111282267A (en) * 2020-02-11 2020-06-16 腾讯科技(深圳)有限公司 Information processing method, information processing apparatus, information processing medium, and electronic device
CN111729291A (en) * 2020-06-12 2020-10-02 网易(杭州)网络有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN112870721A (en) * 2021-03-16 2021-06-01 腾讯科技(深圳)有限公司 Game interaction method, device, equipment and storage medium
WO2021218440A1 (en) * 2020-04-28 2021-11-04 腾讯科技(深圳)有限公司 Game character behavior control method and apparatus, and storage medium and electronic device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FRASER ALLISON et al.: "Design Patterns for Voice Interaction in Games", Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play *
ZHANG Dongnan: "Research on an Intelligent Decision-Making Behavior Model for Non-Player Characters Supporting Voice Commands", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116672721A (en) * 2023-07-31 2023-09-01 欢喜时代(深圳)科技有限公司 Game popularization webpage real-time management method and system
CN116672721B (en) * 2023-07-31 2023-10-13 欢喜时代(深圳)科技有限公司 Game popularization webpage real-time management method and system

Also Published As

Publication number Publication date
CN114768246B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN110738211B (en) Object detection method, related device and equipment
CN109893857B (en) Operation information prediction method, model training method and related device
CN111282267B (en) Information processing method, information processing apparatus, information processing medium, and electronic device
CN110368690B (en) Game decision model training method, game strategy generation method and device
CN108897732B (en) Statement type identification method and device, storage medium and electronic device
CN112221139B (en) Information interaction method and device for game and computer readable storage medium
CN111414946A (en) Artificial intelligence-based medical image noise data identification method and related device
CN111222486A (en) Training method, device and equipment for hand gesture recognition model and storage medium
CN114768246B (en) Game man-machine interaction method and system
CN116975214A (en) Text generation method, device, storage medium and computer equipment
CN107817799B (en) Method and system for intelligent interaction by combining virtual maze
CN114967937B (en) Virtual human motion generation method and system
CN104536677A (en) Three-dimensional digital portrait with intelligent voice interaction function
CN110580897B (en) Audio verification method and device, storage medium and electronic equipment
KR101160868B1 (en) System for animal assisted therapy using gesture control and method for animal assisted therapy
CN113537122A (en) Motion recognition method and device, storage medium and electronic equipment
CN116821693B (en) Model training method and device for virtual scene, electronic equipment and storage medium
CN111773669B (en) Method and device for generating virtual object in virtual environment
CN116894242A (en) Identification method and device of track verification code, electronic equipment and storage medium
CN115712739B (en) Dance motion generation method, computer device and storage medium
CN104460962A (en) 4D somatosensory interaction system based on game engine
CN113476833B (en) Game action recognition method, game action recognition device, electronic equipment and storage medium
CN116630736A (en) Training method and system for user expression capturing model
CN114373098B (en) Image classification method, device, computer equipment and storage medium
Patel et al. Gesture Recognition Using MediaPipe for Online Realtime Gameplay

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant