CN112221139B - Information interaction method and device for game and computer readable storage medium


Info

Publication number
CN112221139B
CN112221139B (application CN202011136794.6A)
Authority
CN
China
Prior art keywords
information
interaction
game page
modal
voice
Prior art date
Legal status (assumption; not a legal conclusion)
Active
Application number
CN202011136794.6A
Other languages
Chinese (zh)
Other versions
CN112221139A (en)
Inventor
高波
Current Assignee (listing may be inaccurate)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (assumption; not a legal conclusion)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011136794.6A
Publication of CN112221139A
Application granted
Publication of CN112221139B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42: Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/837: Shooting of targets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an information interaction method and device for a game, and a computer-readable storage medium. After multi-modal input information received by a terminal in a current game page is obtained, where the multi-modal input information comprises input information of multiple interaction types, a conversion mode corresponding to each interaction type is determined, and the multi-modal input information is converted into operation instructions according to those conversion modes. A virtual object in the current game page is then operated according to the operation instructions to obtain an operation result, an updated game page and multi-modal output information are generated based on the operation result, and the multi-modal output information and the updated game page are sent to the terminal, so that the terminal can play dynamic effects on the updated game page according to the multi-modal output information. The scheme can greatly improve the interaction efficiency and interaction effect of in-game information interaction.

Description

Information interaction method and device for game and computer readable storage medium
Technical Field
The invention relates to the technical field of communication, in particular to an information interaction method and device of a game and a computer readable storage medium.
Background
With the rapid development of internet technology, online games, especially shooting games, have become an indispensable form of entertainment. In a shooting game on a mobile phone, a user inputs information on a game page to interact with the terminal and the server in the game. In the existing interaction mode, the user inputs information only through touch (tap) operations.
In the process of research and practice on the prior art, the inventor of the present invention found that the existing mode of information interaction between the user and the game is relatively limited, and the existing touch interaction is constrained by the terminal screen, which greatly reduces the interaction efficiency and interaction effect of in-game information interaction.
Disclosure of Invention
The embodiment of the invention provides an information interaction method and device for a game and a computer readable storage medium, which can greatly improve the interaction efficiency and interaction effect of information interaction of the game.
An information interaction method for a game comprises the following steps:
obtaining multi-modal input information received by a terminal in a current game page, wherein the multi-modal input information comprises input information of multiple interaction types;
determining a conversion mode corresponding to each interaction type, and respectively converting the multi-mode input information into operation instructions according to the conversion modes;
operating the virtual object in the current game page according to the operation instruction to obtain an operation result;
generating an updated game page and multi-modal output information based on the operation result, wherein the multi-modal output information comprises output information of a plurality of dynamic effect types;
and sending the multi-modal output information and the updated game page to the terminal, so that the terminal can play dynamic effects on the updated game page according to the multi-modal output information.
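The server-side steps above can be sketched end to end as follows. This is a hypothetical outline only; all function names, field names, and the stubbed operation step are invented for illustration and are not part of the claimed method.

```python
def handle_multimodal_input(multimodal_input, current_page):
    """End-to-end sketch of the claimed server-side method: convert each
    piece of input by its interaction type, operate the virtual object,
    then produce the updated page plus multi-modal output."""
    # One conversion mode per interaction type (step: determine conversion mode).
    converters = {
        "voice": lambda info: {"type": "voice_op", "payload": info},
        "sensing": lambda info: {"type": "sensing_op", "payload": info},
        "touch": lambda info: {"type": "touch_op", "payload": info},
    }
    # Convert each input item into an operation instruction.
    instructions = [
        converters[item["interaction_type"]](item["data"])
        for item in multimodal_input
    ]
    # Operating the virtual object is stubbed; a real server would apply
    # each instruction to game state here.
    operation_result = {"page": current_page, "instructions": instructions}
    updated_page = {"base": current_page,
                    "applied": len(operation_result["instructions"])}
    # Multi-modal output: several dynamic-effect types at once.
    multimodal_output = [{"type": "animation"}, {"type": "sound"}]
    return updated_page, multimodal_output
```

Both return values would then be sent to the terminal, which plays the dynamic effects on the updated page.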
Optionally, another information interaction method for a game may also be provided, including:
receiving input information of multiple interaction types input by a user on a current game page to obtain multi-modal input information;
sending the multi-modal input information to an interaction server, so that the interaction server generates an updated game page and multi-modal output information;
acquiring the updated game page and the multi-modal output information returned by the interaction server;
and playing dynamic effects on the updated game page according to the multi-modal output information.
Correspondingly, the embodiment of the invention provides an information interaction device for a game, which comprises:
a first acquisition unit, configured to acquire multi-modal input information received by the terminal in a current game page, wherein the multi-modal input information comprises input information of multiple interaction types;
the conversion unit is used for determining a conversion mode corresponding to each interaction type and respectively converting the multi-modal input information into an operation instruction according to the conversion mode;
the operation unit is used for operating the virtual object in the current game page according to the operation instruction to obtain an operation result;
the generating unit is used for generating an updated game page and multi-modal output information based on the operation result, wherein the multi-modal output information comprises output information of a plurality of dynamic effect types;
and the first sending unit is used for sending the multi-modal output information and the updated game page to the terminal, so that the terminal can play dynamic effects on the updated game page according to the multi-modal output information.
Optionally, an information interaction device for another game may also be provided, including:
the receiving unit is used for receiving input information of multiple interaction types input by a user on a current game page to obtain multi-modal input information;
the second sending unit is used for sending the multi-modal input information to an interaction server, so that the interaction server generates an updated game page and multi-modal output information;
the second acquisition unit is used for acquiring the updated game page and the multi-modal output information returned by the interaction server;
and the playing unit is used for playing dynamic effects on the updated game page according to the multi-modal output information.
Optionally, in some embodiments, the conversion unit may be specifically configured to determine that a conversion manner of the voice interaction is a voice conversion manner when the interaction type is voice interaction, where the voice conversion manner is a conversion manner of converting input information corresponding to the voice interaction into an operation instruction; and when the interaction type is sensing interaction, determining that a conversion mode of the sensing interaction is a sensing conversion mode, wherein the sensing interaction refers to interaction by changing the spatial position of the terminal, and the sensing conversion mode refers to a conversion mode of converting input information corresponding to the sensing interaction into an operation instruction.
Optionally, in some embodiments, the conversion unit may be specifically configured to, when voice information exists in the multi-modal input information, convert the voice information into a voice operation instruction according to the voice conversion manner, where the voice information is input information whose interaction type is voice interaction; and when the multi-modal input information contains spatial movement information of the mobile terminal, convert the spatial movement information into a sensing operation instruction according to the sensing conversion manner, where the spatial movement information is input information whose interaction type is sensing interaction.
Optionally, in some embodiments, the conversion unit may be specifically configured to translate the voice information into text information, and perform feature extraction on the text information to obtain text features; according to the text characteristics, identifying the operation intention of the user in the current game page to obtain intention information; and determining a voice operation instruction corresponding to the voice information based on the intention information.
Optionally, in some embodiments, the conversion unit may be specifically configured to calculate a feature similarity between the text feature and an intention feature in a preset intention feature set; screening target intention characteristics used for determining intention information from the preset intention characteristic set according to the characteristic similarity; and taking the target intention information corresponding to the target intention characteristics as the intention information of the user on the current game page.
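One plausible realization of the feature-similarity screening described above is cosine similarity between the extracted text feature vector and each preset intention feature. This is a minimal sketch under assumed data shapes (dense vectors, a name-to-vector mapping, an invented threshold); the patent does not specify the similarity measure.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def screen_intention(text_feature, intention_feature_set, threshold=0.7):
    """Screen the preset intention feature set: return the intention whose
    feature is most similar to the text feature, or None if no similarity
    reaches the threshold. `intention_feature_set` maps intention names
    to feature vectors."""
    best_name, best_sim = None, threshold
    for name, feature in intention_feature_set.items():
        sim = cosine_similarity(text_feature, feature)
        if sim >= best_sim:
            best_name, best_sim = name, sim
    return best_name
```

The returned intention name would then serve as the intention information of the user on the current game page.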
Optionally, in some embodiments, the conversion unit may be specifically configured to determine, when the intention information is preset marking intention information, a marking type corresponding to the intention information; identifying entity words used for marking in the text information according to the marking type to obtain marking information; acquiring attribute information of the current game page, and extracting player view angle information from the attribute information; and generating a voice operation instruction corresponding to the voice information based on the marking information and the player visual angle information.
Optionally, in some embodiments, the conversion unit may be specifically configured to identify a target object to be marked in the current game page according to the mark type; determining position information of the target object based on the player perspective information; and generating a voice operation instruction corresponding to the voice information according to the position information and the mark information.
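As an illustrative sketch of the marking step above, the mark position can be projected ahead of the player along the view direction and bundled with the mark information into a voice operation instruction. All field names and the fixed projection distance are hypothetical; the patent does not give a concrete data layout.

```python
import math

def generate_mark_instruction(mark_info, player_view, distance=50.0):
    """Project a mark point `distance` units ahead of the player along the
    view direction (yaw in degrees), then bundle it with the mark info.
    `player_view` holds the player's 2D position and yaw angle."""
    x, y = player_view["position"]
    yaw = math.radians(player_view["yaw"])
    target = (x + distance * math.cos(yaw), y + distance * math.sin(yaw))
    return {
        "type": "voice_mark",
        "mark": mark_info,  # e.g. {"mark_type": "supplies", "entity": "medkit"}
        "position": target,
    }
```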
Optionally, in some embodiments, the conversion unit may be specifically configured to acquire first scene information of the current game page, and determine a first scene type of a virtual scene of the current game page according to the first scene information; when the scene type is a preset interactive scene, extracting the direction acceleration of the mobile terminal in each direction from the space movement information; and when the direction acceleration in any direction exceeds a preset acceleration threshold value, generating a sensing operation instruction corresponding to the space movement information according to the direction acceleration.
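A minimal sketch of the directional-acceleration threshold check described above. The axis names, threshold value, and instruction fields are assumptions for illustration; the patent only states that an instruction is generated when the acceleration in any direction exceeds a preset threshold.

```python
def sensing_instruction(spatial_movement, threshold=9.8):
    """Generate a sensing operation instruction when the directional
    acceleration on any axis exceeds the preset threshold; otherwise
    return None. `spatial_movement` maps axis names to accelerations."""
    for axis, accel in spatial_movement.items():
        if abs(accel) > threshold:
            return {"type": "sensing", "axis": axis, "acceleration": accel}
    return None
```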
Optionally, in some embodiments, the operation unit may be specifically configured to, when the operation instruction is a voice operation instruction, mark a virtual object in the current game page to obtain the operation result; and when the operation instruction is a sensing operation instruction, interacting the virtual object in the current game page to obtain the operation result.
Optionally, in some embodiments, the generating unit may be specifically configured to generate page update data according to the operation result, and update the current game page according to the page update data to obtain the updated game page; acquiring second scene information of the updated game page, and determining a second scene type of a virtual scene of the updated game page according to the second scene information; and when the second scene type is a preset feedback scene, screening the multi-modal output information corresponding to the updated game page from a preset multi-modal output information set.
Optionally, in some embodiments, the playing unit may be specifically configured to generate multi-modal dynamic effect information according to the multi-modal output information, where the multi-modal dynamic effect information includes dynamic effect information of at least one dynamic effect type; determine playing information of the multi-modal dynamic effect information based on the dynamic effect type; and display the updated game page, and play the multi-modal dynamic effect information on the updated game page according to the playing information.
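The per-type determination of playing information in the unit above might look like the following. The effect types, channels, and default parameters are illustrative assumptions, not values from the patent.

```python
def playing_info(dynamic_effects):
    """Map each dynamic-effect item to playback parameters chosen by its
    dynamic effect type. Unknown types fall back to the screen channel."""
    channel_defaults = {
        "sound": {"channel": "audio", "volume": 0.8},
        "animation": {"channel": "screen", "fps": 30},
        "vibration": {"channel": "haptics", "duration_ms": 200},
    }
    plan = []
    for effect in dynamic_effects:
        params = dict(channel_defaults.get(effect["type"], {"channel": "screen"}))
        params["effect"] = effect
        plan.append(params)
    return plan
```

The terminal would then play each entry of the plan on its channel while displaying the updated game page.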
In addition, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores an application program, and the processor is configured to run the application program in the memory to implement the information interaction method for the game provided in the embodiment of the present invention.
In addition, the embodiment of the present invention further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are suitable for being loaded by a processor to perform the steps in the information interaction method of any game provided by the embodiment of the present invention.
According to the embodiment of the invention, after multi-modal input information received by a terminal in a current game page is acquired, where the multi-modal input information comprises input information of multiple interaction types, a conversion mode corresponding to each interaction type is determined, and the multi-modal input information is respectively converted into operation instructions according to the conversion modes. A virtual object in the current game page is then operated according to the operation instructions to obtain an operation result, an updated game page and multi-modal output information are generated based on the operation result, where the multi-modal output information comprises output information of multiple dynamic effect types, and the multi-modal output information and the updated game page are sent to the terminal, so that the terminal can play dynamic effects on the updated game page according to the multi-modal output information. In this scheme, the user can input information through multiple interaction types; the input information is converted into operation instructions according to the conversion mode corresponding to each interaction type, realizing information interaction; and after the virtual object is operated according to the operation instructions, multi-modal output information can be output and dynamic effects can be played on the updated game page, so that the interaction efficiency and interaction effect of in-game information interaction can be greatly improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic structural diagram of an information interaction system of a game provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of a scene of information interaction of a game provided by an embodiment of the invention;
FIG. 3 is a flow chart of information interaction of a game provided by an embodiment of the invention;
FIG. 4 is a diagram illustrating the translation of voice information into text information according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of intent recognition model training provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating intention information corresponding to text information according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a process of converting voice information into voice operation instructions according to an embodiment of the present invention;
FIG. 8 is a flow chart illustrating the generation of a sense operation instruction according to an embodiment of the present invention;
FIG. 9 is a schematic illustration of a page of a material tag provided by an embodiment of the invention;
FIG. 10 is a schematic illustration of a page of an enemy tab provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of a page of a destination tag provided by an embodiment of the invention;
FIG. 12 is a schematic diagram of a destination marker on a map provided by an embodiment of the present invention;
FIG. 13 is a schematic page view of a sensing prompt message displayed on a current game page according to an embodiment of the present invention;
FIG. 14 is a schematic diagram of generating multi-modal output information as provided by embodiments of the present invention;
FIG. 15 is a schematic diagram of a page of multi-modal attribute information for a grenade throwing scene provided by an embodiment of the invention;
FIG. 16 is another flow chart of information interaction for a game provided by an embodiment of the present invention;
FIG. 17 is a schematic diagram of a page showing a grenade throwing scene effect playing according to an embodiment of the present invention;
FIG. 18 is a schematic diagram of a page of an equipment recoil feedback scene dynamic effect play provided by an embodiment of the present invention;
FIG. 19 is a schematic view of a page of dynamic effect playing of a cold weapon knocking feedback scene according to an embodiment of the present invention;
FIG. 20 is a schematic diagram of a third flow chart of information interaction for a game according to an embodiment of the present invention;
FIG. 21 is a schematic structural diagram of an information interaction device of a first game according to an embodiment of the present invention;
FIG. 22 is a schematic structural diagram of an information interaction device of a second game according to an embodiment of the present invention;
fig. 23 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The embodiment of the invention provides an information interaction method and device for a game and a computer-readable storage medium. The information interaction device of the game may be integrated in an electronic device, and the electronic device may be a server or a terminal. Specifically, the embodiment of the present invention provides an information interaction apparatus suitable for a game of a first electronic device (which may be referred to as a first game for distinguishing), and an information interaction apparatus suitable for a game of a second electronic device (which may be referred to as a second game for distinguishing).
The first electronic device may be a server or the like. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, network acceleration service (CDN), big data and an artificial intelligence platform. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The second electronic device may be a terminal, and the terminal may be a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like, but is not limited thereto. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
The embodiment of the invention introduces the information interaction method of the game by taking the first electronic device as a server (specifically, an interaction server) and the second electronic device as a terminal as an example.
For example, referring to fig. 1, an information interaction system of a game provided by an embodiment of the present invention includes an interaction server 10 and a terminal 20, where the interaction server 10 and the terminal 20 are connected through a network, for example, a wired or wireless network connection may be provided.
The interaction server 10 may be configured to obtain multi-modal input information received by the terminal in the current game page, where the multi-modal input information includes input information of multiple interaction types; determine a conversion manner corresponding to each interaction type and respectively convert the multi-modal input information into operation instructions according to the conversion manners; operate a virtual object in the current game page according to the operation instructions to obtain an operation result; generate an updated game page and multi-modal output information based on the operation result, where the multi-modal output information includes output information of multiple dynamic effect types; and send the multi-modal output information and the updated game page to the terminal, so that the terminal plays dynamic effects on the updated game page according to the multi-modal output information, as specifically shown in fig. 2.
The multi-modal input information may be input information in a plurality of modalities, where a modality is a type of information; for example, image information and sound information belong to two different modalities. The multi-modal input information includes input information of multiple interaction types. For example, information input by a user through touch interaction, in which the user interacts with the terminal by clicking or triggering controls and input regions on the terminal screen, can be understood as input information of the touch modality; information input by the user through voice interaction, namely voice information, can be understood as input information of the voice modality; and information input by the user through sensing interaction, namely spatial movement information of the terminal, can be understood as input information of the sensing modality. The multi-modal input information includes input information of at least two of these interaction types.
The terminal 20 may be configured to receive input information of multiple interaction types input by a user on a current game page to obtain multi-modal input information, send the multi-modal input information to the interaction server so that the interaction server generates an updated game page and multi-modal output information, then obtain the updated game page and the multi-modal output information returned by the interaction server, and play dynamic effects on the updated game page according to the multi-modal output information.
The multi-modal output information corresponds to the multi-modal input information and includes output information of multiple dynamic effect types, which can be understood as information in multiple output forms; for example, output information of at least two dynamic effect types, such as sound, image, video, animation, or vibration, can be output at the same time. For example, when a user triggers an operation of throwing a grenade on the game page, multi-modal output can be played on the game page simultaneously, such as the stopwatch feedback sound of the grenade countdown, a grenade countdown image, and vibration of the mobile terminal that reminds the player of the grenade's position according to the throwing position. This can greatly improve the interaction effect of in-game information interaction and create a stronger sense of mobile game immersion for the user on the mobile terminal.
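For the grenade example above, the server-side screening of output from a preset multi-modal output set could be sketched as follows. The scene names and effect entries are invented for illustration; the patent only describes screening the output corresponding to a feedback scene from a preset set.

```python
# Hypothetical preset set: feedback scene -> dynamic effects to play together.
PRESET_OUTPUT_SET = {
    "grenade_throw": [
        {"type": "sound", "asset": "stopwatch_tick"},
        {"type": "animation", "asset": "grenade_countdown"},
        {"type": "vibration", "pattern": "distance_scaled"},
    ],
}

def screen_multimodal_output(scene_type):
    """Return the preset multi-modal output for a feedback scene,
    or an empty list if the scene is not a preset feedback scene."""
    return PRESET_OUTPUT_SET.get(scene_type, [])
```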
The following are detailed descriptions. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
The embodiment will be described in terms of an information interaction apparatus for a first game, which may be specifically integrated in an electronic device, and the electronic device may be a device such as a server.
An information interaction method of a game comprises the following steps:
the method comprises the steps of obtaining multi-modal input information received by a terminal in a current game page, wherein the multi-modal input information comprises input information of multiple interaction types, determining a conversion mode corresponding to each interaction type, respectively converting the multi-modal input information into operation instructions according to the conversion modes, operating a virtual object in the current game page according to the operation instructions to obtain an operation result, generating an updated game page and multi-modal output information based on the operation result, and sending the multi-modal output information and the updated game page to the terminal so that the multi-modal mobile terminal can effectively play the updated game page according to the multi-modal output information.
As shown in fig. 3, the specific flow of the information interaction method of the game is as follows:
101. and acquiring multi-modal input information received by the terminal in the current game page.
Wherein the multimodal input information includes input information of a plurality of interaction types. For example, the information input by point touch interaction, the information input by voice interaction and the information input by sensing interaction can be included, and the point touch interaction can be that the user clicks or triggers a control area or various controls of the current game page at the terminal. The voice interaction can be used for inputting voice information for a user on the current game page, for example, the voice information can be sent out in the game process, and the terminal receives the voice information sent out by the user through the voice input interface, so that the voice interaction is completed. The sensing interaction can be realized by changing the spatial position of the mobile terminal so as to interact with the mobile terminal, for example, a user can shake the mobile terminal, so that the mobile terminal generates spatial movement information such as acceleration in any direction in space, the mobile terminal generates or calculates the spatial movement information through an internal sensor, so that some specific operation actions of the user are converted into the spatial movement information which can be identified by the mobile terminal, and the sensing interaction is completed.
For example, the multi-modal input information received by the terminal on the current game page can be acquired directly: the user inputs information through multiple interaction types on the current game page of the terminal, the terminal receives the input information as the multi-modal input information and sends it directly to the information interaction device of the first game, and the information interaction device of the first game thereby acquires the multi-modal input information. Alternatively, the terminal may store the received input information as multi-modal input information in a memory or cache and transmit a storage address to the information interaction device of the first game; the information interaction device of the first game may then extract the multi-modal input information from the memory or cache of the terminal according to the storage address, and may send prompt information to the terminal when the multi-modal input information is successfully acquired.
102. And determining a conversion mode corresponding to each interaction type, and respectively converting the multi-mode input information into an operation instruction according to the conversion modes.
For example, the conversion mode corresponding to each interaction type in the multi-modal information may be determined, and the input information of each interaction type in the multi-modal input information may be converted into an operation instruction according to the determined conversion mode, as follows:
S1. Determine the conversion mode corresponding to each interaction type.
For example, when the interaction type is voice interaction, the conversion mode is determined to be a voice conversion mode, which converts the voice information input by the user into an operation instruction. For instance, if the user says "mark the material ahead", the voice conversion mode converts the input speech into a material-marking operation instruction, and the material is then marked on the current game page according to that instruction.
When the interaction type is sensing interaction, the conversion mode is determined to be a sensing conversion mode, which converts the input information corresponding to the sensing interaction into an operation instruction. Sensing interaction is interaction performed by changing the spatial position of the terminal: for example, the user shakes the terminal (such as a mobile phone or tablet computer), and the terminal acquires information such as its acceleration in each direction through a device such as a gyroscope. The input information here may therefore be acceleration and similar quantities, and the sensing conversion mode converts such information into an operation instruction; in a racing game, for instance, the acceleration produced when the user changes the terminal's spatial position is converted into a direction control instruction for the racing car.
S2. Convert the multi-modal input information into operation instructions according to the conversion modes.
For example, the input information in the multi-modal input information is converted into operation instructions according to the conversion mode of the corresponding interaction type. The multi-modal input information may include input information of at least two interaction types. For information input by touch interaction, an operation instruction can be generated directly, so no additional conversion mode is required. Input information of the other interaction types must be converted into operation instructions according to the conversion mode of the corresponding interaction type, as follows:
(1) When voice information exists in the multi-modal input information, convert the voice information into a voice operation instruction according to the voice conversion mode.
The voice information is input information whose interaction type is voice interaction, for example speech uttered by the user to control the current game page on the mobile terminal. The voice information may be produced by the user directly, for example by speaking to the mobile terminal, or by sound played from another audio device.
For example, the voice information may be translated into text information, features may be extracted from the text information to obtain text features, the user's operation intention in the current game page may be recognized from the text features to obtain intention information, and a voice operation instruction corresponding to the voice information may be determined based on the intention information, as follows:
C1. Translate the voice information into text information, and extract features from the text information to obtain text features.
For example, features may be extracted from the voice information to obtain voice features, which are then subjected to word matching and sentence-pattern matching against preset text features; the words and sentence patterns that match successfully are combined to obtain text information. Concretely, a speech recognition model may extract the voice features, an acoustic recognition network in the model performs word matching on them, and a language recognition network performs sentence-pattern matching on the matched words. The sentence-pattern matching also verifies the matched words in reverse, so the word matching and sentence-pattern matching are bidirectional; finally, the successfully matched words and sentence patterns are combined into the text information corresponding to the voice information.

The text information is then segmented to obtain its text words, and features are extracted from these words to obtain the text features of the text information. For example, a vector model based on the Word2Vec (word vector) algorithm may vectorize the text words, and the resulting text feature vectors of the words serve as the text features of the text information.
The acoustic recognition network of the speech recognition model can be trained by segmenting the words in a dictionary to obtain the voice features of each word and matching them against the voice features of input voice information samples. The language recognition network can be trained by processing the grammar semantically to obtain semantic features of the grammar and matching them against the semantic features of words. The trained acoustic recognition network and language recognition network then perform word matching and sentence-pattern matching on the voice features of the voice information to obtain the text information, as shown in fig. 4.
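The vectorization in step C1 can be illustrated with a minimal sketch: segment the text into words and average their word vectors to form a text feature vector. A real system would use a trained Word2Vec model; the tiny three-dimensional embedding table below is purely an illustrative assumption.

```python
# Hypothetical 3-dimensional word vectors; a real Word2Vec model would be
# trained on a corpus and have hundreds of dimensions.
TOY_EMBEDDINGS = {
    "mark":     [0.9, 0.1, 0.0],
    "material": [0.8, 0.2, 0.1],
    "ahead":    [0.1, 0.7, 0.2],
}


def text_features(text: str) -> list:
    """Average the embeddings of known words to get a text feature vector."""
    vectors = [TOY_EMBEDDINGS[w] for w in text.lower().split()
               if w in TOY_EMBEDDINGS]
    if not vectors:
        return [0.0, 0.0, 0.0]
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]


feature = text_features("mark material ahead")
print(feature)  # approximately [0.6, 0.333, 0.1]
```

Averaging is only one way to pool word vectors into a sentence feature; the patent does not fix the pooling method, so this choice is an assumption.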
C2. Recognize the user's operation intention in the current game page from the text features to obtain intention information.
For example, the feature similarity between the text feature and each intention feature in a preset intention feature set may be calculated; for instance, a vector matching-degree retrieval algorithm may compute the matching degree between the text feature vector and the feature vector of each intention feature, and this matching degree is taken as the feature similarity. Target intention features for determining the intention information are then screened from the preset intention feature set according to the feature similarity: at least one candidate intention feature whose similarity to the text feature exceeds a preset similarity threshold is selected from the set, an intention recognition model (such as an RNN deep-learning model) screens the target intention features from these candidates according to a recognition strategy, and the intention identifiers (IDs) of the target intention features are output. The target intention information corresponding to the target intention features is taken as the user's intention information on the current game page; for example, if the intention information corresponding to the intention ID is a material mark, the user's intention on the current game page can be determined to be marking material.
For offline training of the intention recognition model, the intention IDs are configured in advance and the intention information is expanded with similar questions; for example, an N-Gram algorithm (based on a statistical language model) can expand the intention with similar questions, which can also be understood as expanding the intention information with similar words. For instance, when the intention information is a material mark and the material is a sniper rifle, expanding "sniper rifle" with similar questions yields words such as "98k". Text information is screened from text logs of other player teams' voices, and the supervised corpora corresponding to business-related intention information are labeled against the similar questions of the corpora and the intention information in the text to obtain text information samples. These samples are fed into the Word2Vec vector model to generate text feature samples of the intention information, the samples are classified by intention, vectorized, and input into an RNN model for training, yielding a trained intention recognition model, as shown in fig. 5. For online intention recognition, text features are extracted from the text information with the Word2Vec vector model and input into the trained intention recognition model, which outputs the corresponding intention information, as shown in fig. 6.
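The screening in step C2 can be sketched as a similarity search over preset intention features: compute a similarity (cosine similarity is assumed here; the patent only says "matching degree"), keep candidates above a threshold, and pick the best. The intention table, threshold, and vectors below are illustrative assumptions.

```python
import math

# Hypothetical preset intention feature set: intention ID -> feature vector.
INTENT_FEATURES = {
    "material_mark":    [0.9, 0.2, 0.1],
    "enemy_mark":       [0.1, 0.9, 0.2],
    "destination_mark": [0.2, 0.1, 0.9],
}


def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def recognize_intent(text_feature, threshold=0.7):
    """Screen candidate intentions by similarity; return the best intent ID."""
    scores = {intent_id: cosine(text_feature, feat)
              for intent_id, feat in INTENT_FEATURES.items()}
    candidates = {k: s for k, s in scores.items() if s > threshold}
    return max(candidates, key=candidates.get) if candidates else None


print(recognize_intent([0.8, 0.3, 0.1]))  # material_mark
```

In the patent's scheme the final screening among candidates is done by a trained RNN model rather than a plain argmax; the argmax here stands in for that model.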
C3. Determine a voice operation instruction corresponding to the voice information based on the intention information.
For example, when the intention information is preset mark intention information, such as a material mark, an enemy mark, or a destination mark, the mark type corresponding to the intention information is determined; when the intention information is a material mark, the mark type is determined to be a material mark, and when it is an enemy mark, the mark type is determined to be an enemy mark. According to the mark type, entity words used for marking are recognized in the text information to obtain the mark information: the mark type determines the recognition strategy for entity words in the text. For example, when the mark type is a material mark, the entity words may be information such as a material name and a material direction, where a keyword recognition strategy can be used for the material name and a number-plus-keyword recognition strategy for the material direction.
Based on the recognition strategy, at least one entity word is recognized in the text information and its entity value is determined. For example, when the mark type is a material mark, entity words such as a material name, material class, or material direction can be recognized in the text information by material keywords; for text such as "does anyone want a sniper rifle?", the extracted entity word may be a material name whose entity value is "sniper rifle". When multiple entity words or entity values exist in the text information, they are disambiguated according to a preset priority to obtain the mark information. For example, for the text "does anyone want a sniper rifle? I have a Kar98K here", the extracted entity words are a material class with entity value "sniper rifle" and a material name with entity value "Kar98K"; disambiguating by the preset priority (material name over material class) leaves the material name with entity value "Kar98K". The entity type of each entity word is then determined from the entity words, and the entity word, the entity value, or the target text in the voice information containing either of them may be used as the mark information; alternatively, information corresponding to the entity type may be screened from a preset mark information set as the mark information.
When only one entity word exists in the text information and that entity word has only one entity value, the entity word, the entity value, or the target text in the voice information containing either of them may be used directly as the mark information, or information corresponding to the entity type may be screened from a preset mark information set as the mark information.
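The priority-based disambiguation described above can be sketched as a lookup over an ordered priority list. The priority order follows the example in the text (material name over material class); the third entry and the entity values are illustrative assumptions.

```python
# Preset priority, highest first; the ordering of the first two entries
# follows the example in the text, the third is an assumption.
ENTITY_PRIORITY = ["material_name", "material_class", "material_direction"]


def disambiguate(entities: dict) -> tuple:
    """Pick the recognized entity with the highest preset priority."""
    for entity_type in ENTITY_PRIORITY:
        if entity_type in entities:
            return entity_type, entities[entity_type]
    return None, None


# "Does anyone want a sniper rifle? I have a Kar98K here."
recognized = {"material_class": "sniper rifle", "material_name": "Kar98K"}
print(disambiguate(recognized))  # ('material_name', 'Kar98K')
```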
Attribute information of the current game page is then acquired, and player perspective information is extracted from it; for example, the position of the player character corresponding to the user and the field-of-view radar information are determined from the attribute information, and the information corresponding to the field-of-view radar area is taken as the player perspective information. A voice operation instruction corresponding to the voice information is then generated based on the mark information and the player perspective information: the target object to be marked is recognized in the current game page according to the mark type; for example, when the mark type is a material mark, the material to be marked in the current game page is recognized as the target object.
Optionally, before the target object is recognized, verification information for the intention information may be determined according to the mark type. When the verification information indicates that the intention information needs verification, the intention information is verified against the perspective information and the mark information; for example, when the mark information is a material mark, a game-play detection module is invoked to issue a query for the material in the current game page. If the material exists, the intention information can be determined to pass verification; if the verification fails, conversion of the voice information is stopped. When the verification information indicates that no verification is needed, the target object to be marked can be recognized directly in the current game page.
The position information of the target object is determined based on the player perspective information; for example, it may be calculated within the player's field-of-view radar area on the current game page. A voice operation instruction corresponding to the voice information is then generated from the position information and the mark information, for example by calling a game interface of the current game picture.
The overall generation of the voice operation instruction can be summarized as follows: the user inputs voice information on the current game page; the voice information is translated into text information; intention recognition is performed on the text to obtain intention information; the field-of-view radar information and the user's position in the current game page are acquired to obtain the player perspective information; meanwhile, entity recognition is performed on the intention information to obtain the entity type and entity value; and finally the entity type, entity value, player perspective information, and intention information are input into a central controller, which calls the game interface, as shown in fig. 7.
(2) When spatial movement information of the mobile terminal exists in the multi-modal input information, convert the spatial movement information into a sensing operation instruction according to the sensing conversion mode.
The spatial movement information is input information whose interaction type is sensing interaction, and may specifically be the position, velocity, acceleration, and similar quantities of the terminal in each direction.
For example, first scene information of the current game page is acquired, and the first scene type of the page's virtual scene is determined from it: the first scene information may be obtained from the attribute information of the current game page, picture information of the virtual scene may be extracted from it, and the first scene type of the virtual scene may be determined from the picture information. For instance, when the player character holds a cold weapon in the picture information of the virtual scene, the first scene type can be determined to be a cold-weapon interaction scene; when the player holds equipment, it can be determined to be an equipment interaction scene, and so on.
When the scene type is a preset interactive scene, which may be a cold-weapon interaction scene, the directional acceleration of the mobile terminal in each direction is extracted from the spatial movement information; the directional acceleration may be calculated by the mobile terminal itself or by the information interaction device of the first game from the spatial movement information the mobile terminal sends. When the directional acceleration in any direction exceeds a preset acceleration threshold, a sensing operation instruction corresponding to the spatial movement information is generated from the directional acceleration. For example, a cold-weapon strike instruction is generated from the directional acceleration and its direction information; the operation corresponding to this instruction makes the player character take up the cold weapon and strike, where the cold weapon may be any of various knives, a pan, a helmet, or similar equipment. The cold-weapon strike instruction serves as the sensing operation instruction corresponding to the spatial movement information, as shown in fig. 8.
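The sensing conversion can be sketched as a threshold check on the per-axis acceleration. The threshold value, scene label, and instruction format below are illustrative assumptions, not values from the patent.

```python
# Illustrative threshold: a real implementation would tune this so that
# normal handling of the phone does not trigger a strike.
STRIKE_THRESHOLD = 15.0  # m/s^2, assumed value


def convert_sensing(accel: dict, scene_type: str):
    """Map spatial movement information to a sensing operation instruction."""
    if scene_type != "cold_weapon":   # only in the preset interactive scene
        return None
    # Find the axis with the largest absolute directional acceleration.
    axis, value = max(accel.items(), key=lambda kv: abs(kv[1]))
    if abs(value) <= STRIKE_THRESHOLD:
        return None                   # too gentle: not a strike gesture
    return {"instruction": "cold_weapon_strike",
            "axis": axis,
            "direction": 1 if value > 0 else -1}


# A sharp shake along the x axis while a cold weapon is held:
print(convert_sensing({"ax": 22.5, "ay": 1.0, "az": 0.4}, "cold_weapon"))
# {'instruction': 'cold_weapon_strike', 'axis': 'ax', 'direction': 1}
```

Gating the conversion on the scene type mirrors the text: the same shake produces a strike instruction only when the first scene type is the preset interactive scene.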
103. Operate the virtual object in the current game page according to the operation instruction to obtain an operation result.
For example, when the operation instruction is a voice operation instruction, the virtual object in the current game page is marked to obtain the operation result; when it is a sensing operation instruction, the virtual object in the current game page is interacted with to obtain the operation result. Specifically:
(1) When the operation instruction is a voice operation instruction, mark the virtual object in the current game page to obtain the operation result.
For example, when the operation instruction is a voice operation instruction, the virtual object is recognized in the current game page. When the mark type corresponding to the voice operation instruction is a material mark, the material is recognized in the current game page according to the instruction, the mark position is determined from the recognized material, and all or part of the mark information is marked at that position; for instance, the mark "I have material over there" can be placed directly at the mark position according to the mark information, as shown in fig. 9. When the mark type is an enemy mark, the mark may be placed directly at the enemy's position or at a designated mark position, for example "there is an enemy 200 m ahead", as shown in fig. 10. When the mark type is a destination mark, the mark may be placed directly at the destination's position in the current game page or at a designated mark position; taking a house as the destination, the mark "let's go to the house ahead" can be placed at the house's position or a designated position, as shown in fig. 11, and the mark can also be placed on the game map of the current game page, as shown in fig. 12. The marked page information of the marked current game page is then acquired and taken as the operation result.
(2) When the operation instruction is a sensing operation instruction, interact with the virtual object in the current game page to obtain the operation result.
For example, when the operation instruction is a sensing operation instruction, the virtual object that the player character needs to interact with is recognized in the current game page, and the cold weapon held by the player character is operated to interact with that object according to the directional acceleration of the sensing input. For instance, the user's gesture of quickly swinging the mobile terminal serves as the input for a cold-weapon strike, similar to the swing of a game controller: the information interaction device of the first game makes the player character swing the held cold weapon (which may be a pan, a broadsword, a crowbar, or the like) at the enemy according to the directional acceleration generated by the swing, and the strike page information generated by the strike operation is taken as the operation result. Before the user generates the sensing operation instruction through sensing interaction, when it is detected that the user's virtual character holds a cold weapon in the current game page, sensing prompt information may be displayed, for example "swing the phone to strike", to prompt the user to input information by sensing interaction, as shown in fig. 13.
104. Generate an updated game page and multi-modal output information based on the operation result.
The multi-modal output information includes output information of multiple dynamic-effect types; for example, it may combine output information of any two or more of a sound effect, an image effect, and a vibration effect.
For example, page update data is generated from the operation result, and the current game page is updated with it to obtain the updated game page. When the operation result is marked page information, the marked page information is compared with the current page information of the current game page to generate the page update data of the marked page; the current game page is updated according to this data to obtain the marked game page, which is taken as the updated game page.
Second scene information of the updated game page is acquired, and the second scene type of the page's virtual scene is determined from it; for example, the second scene information may be obtained from the attribute information of the updated game page, the picture information of the virtual scene extracted from it, and the second scene type determined from the picture information. When the second scene type is a preset feedback scene, which may be a multi-modal game scene such as an equipment feedback scene (for example throwing a grenade, equipment recoil feedback, or a cold-weapon strike), the multi-modal output information corresponding to the updated game page is screened from a preset multi-modal output information set: according to the second scene type of the virtual scene of the updated game page, the corresponding multi-modal output information is selected, which may cover multiple modalities such as sound, image, and/or vibration, as shown in fig. 14.
Taking a grenade-throwing scene as an example, when an action or behavior of throwing a grenade is detected in the picture of the updated game page, the virtual scene type can be determined to be a grenade-throwing scene, and multi-modal output information such as sound, image, and vibration corresponding to that scene is screened from the preset multi-modal output information set. The sound output may be stopwatch feedback following the grenade countdown, the image information may be a countdown image, and the vibration information may provide directional vibration according to the grenade's position, mainly to indicate where the grenade is, as shown in fig. 15.
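The screening by scene type can be sketched as a lookup into the preset multi-modal output information set. The scene names and the output entries below are assumptions based on the grenade example above, not the patent's actual data.

```python
# Hypothetical preset multi-modal output information set:
# scene type -> output information per modality.
PRESET_MULTIMODAL_OUTPUT = {
    "grenade_throw": {
        "sound": "stopwatch_countdown",
        "image": "countdown_overlay",
        "vibration": "directional_pulse",
    },
    "recoil_feedback": {
        "sound": "gunshot",
        "vibration": "short_burst",
    },
}


def screen_multimodal_output(scene_type: str) -> dict:
    """Return the preset output information matching the scene type, if any."""
    return PRESET_MULTIMODAL_OUTPUT.get(scene_type, {})


output = screen_multimodal_output("grenade_throw")
print(sorted(output))  # ['image', 'sound', 'vibration']
```

An empty result corresponds to the case in step 105 where no multi-modal output information exists and only the updated game page is sent to the terminal.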
105. Send the multi-modal output information and the updated game page to the terminal, so that the terminal plays the dynamic effects on the updated game page according to the multi-modal output information.
Dynamic-effect playing means playing, on the updated game page, the various dynamic effects corresponding to the multi-modal output information; the effects can be animation effects, sound effects, and vibration effects.
For example, when multi-modal output information exists for the updated game page, both the updated game page and the multi-modal output information are sent to the terminal; when none exists, only the updated game page is sent. On receiving the updated game page and the multi-modal output information, the terminal can play the dynamic effects on the updated game page according to the multi-modal output information.
As can be seen from the above, in this embodiment of the present invention, after the multi-modal input information received by the terminal on the current game page is acquired, where the multi-modal input information includes input information of multiple interaction types, the conversion mode corresponding to each interaction type is determined and the multi-modal input information is converted into operation instructions accordingly; the virtual object in the current game page is then operated according to the operation instructions to obtain an operation result; an updated game page and multi-modal output information, the latter including output information of multiple dynamic-effect types, are generated from the operation result; and the multi-modal output information and the updated game page are sent to the terminal, which plays the dynamic effects on the updated game page according to the multi-modal output information. In this scheme, the user can input information through multiple interaction types, the input information is converted into operation instructions according to the conversion mode of each interaction type to realize the information interaction, and after the virtual object is operated according to the operation instructions, multi-modal output information can be output and played as dynamic effects on the updated game page, so the interaction efficiency and interaction effect of the game's information interaction can be greatly improved.
The method described in the above examples is further illustrated in detail below by way of example.
This embodiment is described from the perspective of an information interaction apparatus of a second game, which may be integrated in an electronic device such as a mobile terminal or another device; the mobile terminal may be a tablet computer, a notebook computer, a personal computer (PC), a wearable device, a virtual reality device, or another intelligent device that can access business resources.
An information interaction method for a game comprises the following steps:
the method comprises the steps of receiving input information of multiple interaction types input by a user on a current game page to obtain multi-mode input information, sending the multi-mode input information to an interaction server, enabling the interaction server to generate an updated game page and multi-mode output information, obtaining the updated game page and the multi-mode output information returned by the interaction server, and performing action effect playing on the updated game page according to the multi-mode output information.
As shown in fig. 16, the information interaction method of the game specifically includes the following steps:
201. Receive input information of multiple interaction types entered by the user on the current game page to obtain multi-modal input information.
For example, when the user inputs information by touch interaction on the current game page, such as by touching a designated area or control, the information interaction device of the second game can directly receive the input information whose interaction type is touch interaction. When the user inputs information by voice interaction, for example by speaking toward the mobile terminal or playing recorded sound on the current game page, the device receives the sound information through a sound-collecting module such as the mobile terminal's audio input interface, and this sound serves as input information of the voice interaction type. When the user inputs information by sensing interaction, for example by shaking the mobile terminal on the current game page to change its spatial position, the device collects the change information in one or more directions and calculates the directional acceleration from it; this directional acceleration is input information of the sensing interaction type. The input information of the multiple interaction types is then taken together as the multi-modal input information.
202. Send the multi-modal input information to the interaction server, so that the interaction server generates the updated game page and the multi-modal output information.
For example, the multi-modal input information is sent to the interaction server, so that the interaction server determines a conversion mode corresponding to each interaction type, converts the multi-modal input information into operation instructions according to the conversion modes respectively, operates the virtual object in the current game page according to the operation instructions to obtain operation results, and generates the updated game page and the multi-modal output information based on the operation results.
203. Obtain the updated game page and the multi-modal output information returned by the interaction server.
For example, the updated game page and the multi-modal output information returned by the interaction server may be obtained directly. They may also be obtained indirectly: the interaction server stores the updated game page and the multi-modal output information in a third-party database and sends the storage address to the information interaction device of the second game, which then retrieves them from the third-party database according to the storage address; after the retrieval succeeds, prompt information may also be sent back to the interaction server.
204. Play effects on the updated game page according to the multi-modal output information.
For example, as shown in fig. 17, the updated game page is displayed, one or more types of effect information corresponding to the multi-modal output information are generated according to the type of the virtual scene, and the effect information is played on the updated game page. For instance, when the virtual scene corresponding to the multi-modal output information is a grenade-throwing scene, an image effect and a sound effect of the grenade countdown are generated together with a vibration effect of slight vibrations in different directions, and the countdown image effect, the countdown sound effect, and the explosion vibration effect are then presented on the updated game page. The vibration effect may vary in intensity according to the distance between the virtual object and the explosive. During effect playback, and vibration playback in particular, every player feels the slight vibration from the direction of the grenade whether or not that player threw it, which helps avoid operations that accidentally injure players and teammates.
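The distance-dependent vibration intensity mentioned above can be sketched as a simple falloff. The linear falloff and the `MAX_DISTANCE` value are assumptions for illustration; the patent only states that intensity varies with distance.

```python
# Assumed range (in game-world units) beyond which no vibration is played.
MAX_DISTANCE = 50.0

def vibration_intensity(player_pos, grenade_pos):
    """Return a vibration strength in [0.0, 1.0]: closer means stronger."""
    dx = player_pos[0] - grenade_pos[0]
    dy = player_pos[1] - grenade_pos[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance >= MAX_DISTANCE:
        return 0.0
    # Linear falloff: full strength at the explosive, zero at MAX_DISTANCE.
    return 1.0 - distance / MAX_DISTANCE
```

A terminal could feed this value into its vibration API so that a player standing next to the grenade feels a much stronger pulse than one at the edge of the blast radius.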
When the virtual scene corresponding to the multi-modal output information is a recoil feedback scene, and it is detected that the user triggers a shooting operation with the equipment, screen vibration effects of different degrees may be fed back on the updated game page according to the recoil of the currently used equipment; the vibration amplitude is small and does not disturb the display of the updated game page. The player's view angle may also be shown moving upward on the updated game page to produce the image effect of recoil feedback, and a preset equipment shooting sound is played on the updated game page. As shown in fig. 18, the image effect of recoil feedback, in which the player's view angle moves upward, is 181, the sound effect is 182, and the vibration effect is 183. The vibration, image, and sound effects can improve the game immersion of the user or player.
When the virtual scene corresponding to the multi-modal output information is a cold-weapon striking feedback scene, the user inputs a cold-weapon striking operation instruction through touch interaction or sensing interaction. When it is detected that the cold weapon strikes another player, a striking image effect is displayed on the updated game page according to the amplitude of the striking action, together with a striking sound effect and a hit vibration effect. As shown in fig. 19, the striking image effect is 191, the sound effect is 192, and the vibration effect is 193, making a game that evolves the cold-weapon attack mode more interesting.
As can be seen from the above, in this embodiment, input information of multiple interaction types input by a user on the current game page is received to obtain multi-modal input information; the multi-modal input information is sent to the interaction server so that the interaction server generates an updated game page and multi-modal output information; the updated game page and the multi-modal output information returned by the interaction server are obtained; and effects are then played on the updated game page according to the multi-modal output information.
The method described in the above examples is further illustrated in detail below by way of example.
In this embodiment, an information interaction device of a first game is integrated in a server, the server is an interaction server, an information interaction device of a second game is integrated in a terminal, the terminal is a mobile terminal, and a game corresponding to a current game page is a shooting game.
As shown in fig. 20, an information interaction method for a game includes the following specific processes:
301. The mobile terminal receives input information of multiple interaction types input by a user on the current game page to obtain multi-modal input information.
For example, the user performs touch interaction on a designated area or control of the current game page to input information, and the mobile terminal can directly receive the input information whose interaction type is touch interaction. When the user speaks toward the mobile terminal or plays recorded sound information on the current game page, the mobile terminal can receive the sound information through a sound collection module such as its audio input interface, and use it as input information whose interaction type is voice interaction. The user may change the spatial position of the mobile terminal by shaking or flicking it on the current game page; the mobile terminal collects the change information in one or more directions and calculates a directional acceleration from it, which serves as input information of the sensing interaction type. The input information of the multiple interaction types is taken as the multi-modal input information.
302. The interaction server acquires multi-mode input information received by the mobile terminal in the current game page.
For example, the user inputs information through multiple interaction types on the current game page of the mobile terminal, the mobile terminal directly sends the received input information to the interaction server as multi-modal input information, and the interaction server acquires it directly. The interaction server may also read the multi-modal input information directly from the memory or cache of the mobile terminal. When the multi-modal input information occupies a large amount of memory, the mobile terminal may instead store it in a third-party database and send the storage address to the interaction server, which then obtains the multi-modal input information from the third-party database according to the storage address; after obtaining it, the interaction server may also send prompt information to the mobile terminal.
303. The interaction server determines a conversion mode corresponding to each interaction type and converts the multi-modal input information into operation instructions according to the conversion modes.
For example, the interaction server determines the conversion mode corresponding to each interaction type and converts the input information in the multi-modal input information into operation instructions accordingly. The multi-modal input information may include input information of at least two interaction types. For information whose interaction type is touch interaction, the operation instruction can be generated directly, so no additional conversion mode is required. Input information of the other interaction types needs to be converted into operation instructions according to the conversion mode corresponding to each type, specifically as follows:
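The per-type dispatch of step 303 might look like the following sketch, in which touch input passes through unchanged while voice and sensing input are routed to their respective conversion modes. The converter function names and instruction fields are hypothetical placeholders, not names from the patent.

```python
def convert_voice(info):
    """Placeholder for the voice conversion mode (A1 below)."""
    return {"kind": "voice_op", "payload": info}

def convert_sensing(info):
    """Placeholder for the sensing conversion mode (A2 below)."""
    return {"kind": "sensing_op", "payload": info}

# One converter per interaction type that needs conversion.
CONVERTERS = {"voice": convert_voice, "sensing": convert_sensing}

def to_operation_instructions(multimodal_inputs):
    instructions = []
    for item in multimodal_inputs:
        if item["type"] == "touch":
            # Touch input already maps to an operation instruction.
            instructions.append({"kind": "touch_op", "payload": item})
        else:
            instructions.append(CONVERTERS[item["type"]](item))
    return instructions
```

The table-driven dispatch keeps each conversion mode independent, so adding a new interaction type only means registering one more converter.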
A1. When voice information exists in the multi-modal input information, the interaction server converts the voice information into a voice operation instruction according to the voice conversion mode.
For example, the interaction server may translate the voice information into text information, perform feature extraction on the text information to obtain text features, recognize an operation intention of the user in the current game page according to the text features to obtain intention information, and determine a voice operation instruction corresponding to the voice information based on the intention information, which may specifically be as follows:
(1) The interaction server translates the voice information into text information and performs feature extraction on the text information to obtain text features.
For example, the interaction server may extract features from the voice information by using a speech recognition model, perform word matching on the speech features by using the acoustic recognition network of the model, perform sentence-pattern matching on the matched words by using the language recognition network of the model to verify them, and combine the successfully matched words and sentence patterns to obtain the text information corresponding to the voice information. The text information is then segmented into text words, the text words are vectorized by a vector model corresponding to the Word2Vec algorithm to obtain text feature vectors, and the text feature vectors of the text words are taken as the text features of the text information.
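The segmentation-and-vectorization step can be illustrated with a toy stand-in. A real system would use a trained Word2Vec model; the hashing-based `word_vector` below is purely a deterministic placeholder, and whitespace splitting stands in for real word segmentation.

```python
import hashlib

DIM = 8  # toy embedding dimension; real Word2Vec models use 100+

def word_vector(word):
    """Deterministic toy embedding standing in for a Word2Vec lookup."""
    digest = hashlib.md5(word.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:DIM]]

def text_features(text):
    """Segment the text into words and average their vectors."""
    words = text.lower().split()  # stand-in for word segmentation
    if not words:
        return [0.0] * DIM
    vecs = [word_vector(w) for w in words]
    # Average per dimension to get one feature vector for the text.
    return [sum(col) / len(vecs) for col in zip(*vecs)]
```

The averaged vector plays the role of the "text feature" that the next step matches against the preset intention feature set.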
(2) The interaction server identifies the operation intention of the user on the current game page according to the text features to obtain intention information.
For example, the interaction server may calculate a matching degree between the text feature vector and a feature vector of an intention feature in a preset intention feature set by using a vector matching degree retrieval algorithm, and use the matching degree as a feature similarity. Screening at least one candidate intention feature with feature similarity exceeding a preset similarity threshold from a preset intention feature set, screening a target intention feature for determining intention information from the candidate intention features by adopting an RNN deep learning model as an intention recognition model according to a recognition strategy, and further outputting an intention ID of the target intention feature. The intention information corresponding to the intention ID is used as the intention information of the user on the current game page, and the intention information may be a material mark, an enemy mark, a destination mark or the like.
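The matching-degree retrieval and threshold screening described above can be sketched with cosine similarity. The intent IDs, two-dimensional vectors, and 0.5 threshold are illustrative assumptions; the patent leaves the retrieval algorithm and threshold unspecified, and the final screening by an RNN recognition model is reduced here to picking the highest-scoring candidate.

```python
def cosine(a, b):
    """Cosine similarity, used here as the feature similarity measure."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def match_intention(text_vec, intent_features, threshold=0.5):
    """Screen candidates above the threshold and return the best intent ID."""
    candidates = [(cosine(text_vec, vec), intent_id)
                  for intent_id, vec in intent_features.items()]
    candidates = [c for c in candidates if c[0] > threshold]
    if not candidates:
        return None
    return max(candidates)[1]
```

Returning `None` when no candidate clears the threshold models the case where the voice input carries no recognizable marking intention.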
Training the intention recognition model mainly includes: configuring intention IDs in advance; expanding the intention information with similar questions by using an N-Gram algorithm; screening text information from the text logs of other player teams' voices; labeling, in that text information, the supervised or unsupervised corpora corresponding to the business-related intention information and the supervised corpora corresponding to the similar questions, to obtain text information samples; inputting the text information samples into the Word2Vec vector model to generate text feature samples of the intention information; classifying the text feature samples by intention; and vectorizing the classified text feature samples and inputting them into an RNN model for training, to obtain the trained intention recognition model.
(3) The interaction server determines the voice operation instruction corresponding to the voice information based on the intention information.
For example, when the intention information is preset marking intention information such as a material mark, an enemy mark, or a destination mark, the interaction server can determine the mark type corresponding to the intention information and, according to the mark type, determine the recognition strategy for entity words in the text information. For example, when the mark type is the material mark, the entity words in the text information can be determined to be information such as the material name and the material direction; a keyword recognition strategy can be used for the material name, and a number + keyword recognition strategy for the material direction. Taking the material mark as an example, recognition is performed in the text information according to the material keywords to obtain entity words such as the material name, material type, or material direction, together with the entity values corresponding to those entity words. When multiple entity words, or multiple entity values for one entity word, exist in the text information, disambiguation is performed according to a preset priority level so that the text information contains one entity word and each entity word corresponds to one entity value. The entity type of the entity word is then determined from the entity word; the entity word, the entity value, or a target text in the voice information containing the entity word or entity value can be used as the mark information, or information corresponding to the entity type can be screened out from a preset mark information set as the mark information.
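The two recognition strategies can be illustrated as follows. The keyword list, the "meters" pattern, and the first-match priority rule are assumptions for the sketch, not values from the patent.

```python
import re

# Hypothetical material keywords for the keyword recognition strategy.
MATERIAL_KEYWORDS = ("medkit", "ammo", "scope")

def extract_entities(text):
    """Extract entity words from recognized text for a material mark."""
    entities = {}
    # Keyword strategy for the material name; first match wins, which
    # models disambiguation by a preset priority level.
    for kw in MATERIAL_KEYWORDS:
        if kw in text:
            entities["material_name"] = kw
            break
    # Number + keyword strategy for the material direction/distance.
    m = re.search(r"(\d+)\s*meters?", text)
    if m:
        entities["material_direction"] = int(m.group(1))
    return entities
```

The resulting entity word/value pairs are what the server turns into the mark information for the voice operation instruction.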
In an embodiment, the interaction server may determine, in the attribute information, a position and view radar information of a player character corresponding to the user, acquire information corresponding to a view radar area, and use the information as player view information. And identifying a target object needing to be marked in the current game page according to the mark type, for example, identifying the material needing to be marked as the target object in the current game page when the mark type is the material mark.
Optionally, before the target object is identified, the verification requirement of the intention information may be determined according to the mark type. When the intention information needs to be verified, taking the material mark as an example, a query request is initiated for the material in the current game page by calling the game-play detection module to verify that the material exists in the current game page; if it exists, the intention information can be determined to pass verification, and if verification fails, the conversion of the voice information is stopped. When the intention information does not need to be verified, the target object to be marked can be identified directly in the current game page.
After identifying the target object and acquiring the player perspective information, the interaction server can also calculate the position information of the target object in the perspective radar area of the player of the current game page based on the perspective information of the player. And converting the position information and the mark information of the target object into a voice operation instruction corresponding to the voice information by calling a game interface of the current game picture.
A2. When the multi-modal input information contains spatial movement information of the mobile terminal, the interaction server converts the spatial movement information into a sensing operation instruction according to the sensing conversion mode.
For example, when the multi-modal input information includes spatial movement information of the mobile terminal, the interaction server acquires first scene information of the current game page from the attribute information of the current game page, extracts the picture information of the virtual scene from the first scene information, and determines the first scene type of the virtual scene according to the picture information. When the scene type is a preset interactive scene, such as a cold-weapon interactive scene, the directional acceleration of the mobile terminal in each direction is extracted from the spatial movement information and compared with a preset acceleration threshold. When the directional acceleration in any direction exceeds the preset acceleration threshold, a cold-weapon striking instruction is generated according to the directional acceleration and the direction information; the operation action corresponding to the instruction may be the player character swinging the cold weapon to strike, and the cold weapon may be equipment such as various knives, pans, or helmets. The cold-weapon striking instruction is taken as the sensing operation instruction corresponding to the spatial movement information.
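The sensing conversion mode A2 reduces to a per-direction threshold comparison, sketched below. The threshold value, scene-type string, and instruction fields are assumed for illustration.

```python
# Assumed preset acceleration threshold (m/s^2); not specified in the patent.
ACCELERATION_THRESHOLD = 12.0

def to_sensing_instruction(direction_accels, scene_type):
    """Convert spatial movement info into a cold-weapon hit instruction.

    direction_accels maps a direction label to its directional acceleration.
    Returns None outside the preset interactive scene or below threshold.
    """
    if scene_type != "cold_weapon_scene":
        return None
    for direction, accel in direction_accels.items():
        if abs(accel) > ACCELERATION_THRESHOLD:
            return {"op": "cold_weapon_hit",
                    "direction": direction,
                    "acceleration": accel}
    return None
```

Gating on the scene type first means an accidental shake outside a cold-weapon scene produces no operation instruction at all.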
304. The interaction server operates the virtual object in the current game page according to the operation instruction to obtain an operation result.
For example, when the operation instruction is a voice operation instruction, the interaction server marks the virtual object in the current game page to obtain an operation result; when the operation instruction is a sensing operation instruction, the interaction server interacts with the virtual object in the current game page to obtain an operation result. Specifically, this may be as follows:
(1) When the operation instruction is a voice operation instruction, the interaction server marks the virtual object in the current game page to obtain an operation result.
For example, when the operation instruction is a voice operation instruction, different marking methods are used according to the mark type corresponding to the instruction. When the mark type is the material mark, the material can be identified in the current game page according to the voice operation instruction, the mark position is determined from the identified material, and all or part of the mark information is marked at that position, for example marking "I have supplies here" directly according to the mark information. When the mark type is the enemy mark, the mark can be made directly at the position where the enemy is or at a designated mark position, for example "Enemy 200 meters ahead". When the mark type is the destination mark, the mark can be made directly at the position of the destination or at a designated mark position in the current game page; taking a house as the destination, "Let's go to the house ahead" can be marked at the position of the house or at the designated position, and the destination can also be marked on the game map of the current game page. The page information of the marked current game page is acquired and taken as the operation result.
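The marking operation might be modeled as stamping mark information onto page data, as in this sketch. The page structure is a hypothetical dict, and the mark texts mirror the examples above rather than any fixed strings from the patent.

```python
# Illustrative mark texts, one per mark type.
MARK_TEXTS = {
    "material": "I have supplies here",
    "enemy": "Enemy 200 meters ahead",
    "destination": "Let's go to the house ahead",
}

def apply_voice_mark(page, mark_type, position):
    """Stamp a mark on the page; the marked page is the operation result."""
    marks = list(page.get("marks", []))
    marks.append({"type": mark_type, "pos": position,
                  "text": MARK_TEXTS[mark_type]})
    # Return a new page dict rather than mutating the caller's page.
    return dict(page, marks=marks)
```

The returned dict corresponds to the "marked page information" that step 305 later diffs against the current page.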
(2) When the operation instruction is a sensing operation instruction, the interaction server interacts with the virtual object in the current game page to obtain an operation result.
For example, when the operation instruction is a sensing operation instruction, such as when the user quickly flicks the mobile terminal as the input mode for a cold-weapon strike, thereby generating the sensing operation instruction for the strike, the virtual object to be interacted with by the player character is identified in the current game page, the player character is operated to strike the virtual object (the enemy) with a pan, broadsword, or crowbar according to the directional acceleration generated by the flicking operation, and the page information generated by the striking operation is taken as the operation result.
305. The interaction server generates an updated game page and multimodal output information based on the operation result.
For example, when the operation result is the marked page information, the interaction server compares the marked page information with the current page information of the current game page to generate page update data, updates the current game page according to the page update data to obtain the marked game page, and takes the marked game page as the updated game page. When the operation result is the struck page information, the struck page information is compared with the current page information to generate page update data, the current game page is updated accordingly to obtain the struck game page, and the struck game page is taken as the updated game page.
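Generating the page update data by comparing the operated page with the current page can be sketched as a simple diff. Modeling pages as flat dicts is an assumption for illustration; a real game page would be a richer structure.

```python
def page_update_data(current_page, new_page):
    """Diff two page dicts; keep only the fields that changed."""
    return {key: value for key, value in new_page.items()
            if current_page.get(key) != value}

def apply_update(current_page, update):
    """Apply the update data to produce the updated game page."""
    updated = dict(current_page)
    updated.update(update)
    return updated
```

Sending only the changed fields keeps the payload to the terminal small while still letting it reconstruct the updated game page.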
The interaction server acquires second scene information from the attribute information of the updated game page, extracts the picture information of the virtual scene from the second scene information, and determines the second scene type of the virtual scene according to the picture information. When the second scene type is a multi-modal game scene, for example a feedback scene for equipment such as throwing a grenade, recoil feedback, or a cold-weapon strike, the multi-modal output information corresponding to the second scene type is screened from a preset multi-modal output information set; the multi-modal output information may include information of multiple modes such as sound, image, and/or vibration.
306. The interaction server returns the multi-modal output information and the updated game page to the mobile terminal.
For example, when multi-modal output information exists for the updated game page, the interaction server returns the updated game page together with the multi-modal output information to the mobile terminal. When no multi-modal output information exists for the updated game page, the interaction server returns only the updated game page.
307. The mobile terminal plays effects on the updated game page according to the multi-modal output information.
For example, the mobile terminal displays the updated game page, generates one or more types of effect information corresponding to the multi-modal output information according to the type of the virtual scene, and plays the effect information on the updated game page. When the virtual scene corresponding to the multi-modal output information is a grenade-throwing scene, an image effect and a sound effect of the grenade countdown are generated together with a vibration effect of slight vibrations in different directions, and the countdown image effect, the countdown sound effect, and the explosion vibration effect are then presented on the updated game page. When the virtual scene is a recoil feedback scene and it is detected that the user triggers a shooting operation with the equipment, screen vibration effects of different degrees can be fed back on the updated game page according to the recoil of the currently used equipment, the player's view angle can be shown moving upward to produce the image effect of recoil feedback, and the preset equipment shooting sound is played on the updated game page. When the virtual scene is a cold-weapon striking feedback scene, the user inputs a cold-weapon striking operation instruction through touch interaction or sensing interaction; when it is detected that the cold weapon strikes another player, a striking image effect is displayed on the updated game page according to the amplitude of the striking action, together with a striking sound effect and a hit vibration effect.
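The terminal-side playback of step 307 reduces to dispatching each mode in the multi-modal output information to the corresponding platform player, as in this sketch. The player callables stand in for real image, sound, and vibration APIs, which are platform-specific.

```python
def play_effects(multimodal_output, players):
    """Play each effect mode (e.g. image, sound, vibration) in order.

    players maps a mode name to a callable that drives the platform API.
    """
    played = []
    for effect in multimodal_output:
        players[effect]()   # invoke the platform-specific player
        played.append(effect)
    return played
```

A terminal would register its concrete handlers once, then call `play_effects` with whatever mode list the interaction server returned for the scene.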
As can be seen from the above, in this embodiment, after the interaction server acquires the multi-modal input information received by the terminal on the current game page, where the multi-modal input information includes input information of multiple interaction types, it determines the conversion mode corresponding to each interaction type and converts the multi-modal input information into operation instructions accordingly; it then operates the virtual object in the current game page according to the operation instructions to obtain an operation result, generates an updated game page and multi-modal output information based on the operation result, where the multi-modal output information includes output information of multiple effect types, and sends the multi-modal output information and the updated game page to the terminal so that the terminal plays effects on the updated game page according to the multi-modal output information. In this scheme, the user can input information through multiple interaction types; the input information is converted into operation instructions according to the conversion mode corresponding to each interaction type, realizing information interaction; and after the virtual object is operated according to the operation instructions, multi-modal output information can be output and played as effects on the updated game page, which can greatly improve the interaction efficiency and interaction effect of information interaction in the game.
In order to better implement the method, an embodiment of the present invention further provides an information interaction apparatus for a game (that is, an information interaction apparatus for a first game), where the information interaction apparatus for the first game may be integrated in a device such as a server, and the server may be a single server or a server cluster formed by multiple servers.
For example, as shown in fig. 21, the information interaction apparatus of the first game may include a first obtaining unit 401, a converting unit 402, an operating unit 403, a generating unit 404, and a first transmitting unit 405 as follows:
(1) A first acquisition unit 401;
the first obtaining unit 401 is configured to obtain multi-modal input information received by the mobile terminal in a current game page, where the multi-modal input information includes input information of multiple interaction types.
For example, the first obtaining unit 401 may be specifically configured as follows: the user inputs information through multiple interaction types on the current game page of the terminal, and the terminal either sends the received input information directly to the information interaction device of the first game as multi-modal input information, or stores the received input information as multi-modal input information in a memory or cache and sends the storage address to the information interaction device of the first game, which then extracts the multi-modal input information from the terminal's memory or cache according to the storage address.
(2) A conversion unit 402;
the conversion unit 402 is configured to determine a conversion manner corresponding to each interaction type, and respectively convert the multi-modal input information into an operation instruction according to the conversion manner.
For example, the conversion unit 402 may be specifically configured to determine that the conversion manner of the voice interaction is a voice conversion manner when the interaction type is voice interaction, determine that the conversion manner of the sensing interaction is a sensing conversion manner when the interaction type is sensing interaction, convert voice information into a voice operation instruction according to the voice conversion manner when voice information exists in the multi-modal input information, and convert spatial movement information into a sensing operation instruction according to the sensing conversion manner when spatial movement information of the mobile terminal exists in the multi-modal input information.
(3) An operation unit 403;
an operation unit 403, configured to operate the virtual object in the current game page according to the operation instruction, so as to obtain an operation result.
For example, the operation unit 403 may be specifically configured to mark a virtual object in the current game page to obtain an operation result when the operation instruction is a voice operation instruction, and interact with the virtual object in the current game page to obtain the operation result when the operation instruction is a sensing operation instruction.
(4) A generation unit 404;
a generating unit 404 for generating an updated game page and multi-modal output information including output information of a plurality of action types based on the operation result.
For example, the generating unit 404 may be specifically configured to generate page update data according to the operation result, update the current game page according to the page update data to obtain an updated game page, obtain second scene information of the updated game page, determine a second scene type of a virtual scene of the updated game page according to the second scene information, and screen multi-modal output information corresponding to the updated game page from the preset multi-modal output information set when the second scene type is a preset feedback scene.
(5) A first transmitting unit 405;
the first sending unit 405 is configured to send the multi-modal output information and the updated game page to the mobile terminal, so that the mobile terminal performs action-effect playing on the updated game page according to the multi-modal output information.
For example, the first sending unit 405 may be specifically configured to send the updated game page and the multi-modal output information to the terminal when the multi-modal output information exists in the updated game page. And when the multi-mode output information does not exist in the updated game page, directly sending the updated game page to the terminal.
In specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily, and implemented as the same or several entities, and specific implementations of the above units may refer to the foregoing method embodiment, which is not described herein again.
As can be seen from the above, in this embodiment, after the first obtaining unit 401 obtains the multi-modal input information received by the terminal in the current game page, where the multi-modal input information includes input information of multiple interaction types, the converting unit 402 determines a conversion mode corresponding to each interaction type and converts the multi-modal input information into operation instructions according to the conversion modes; the operation unit 403 then operates the virtual object in the current game page according to the operation instructions to obtain an operation result; the generating unit 404 generates an updated game page and multi-modal output information based on the operation result, where the multi-modal output information includes output information of multiple dynamic effect types; and the first sending unit 405 sends the multi-modal output information and the updated game page to the terminal, so that the terminal performs dynamic-effect playing on the updated game page according to the multi-modal output information. In this scheme, the user can input information through multiple interaction types, and the input information is converted into operation instructions according to the conversion mode corresponding to each interaction type, thereby realizing information interaction; after the virtual object is operated according to the operation instructions, multi-modal output information can be output and dynamic-effect playing can be performed on the updated game page, so that the interaction efficiency and interaction effect of information interaction of the game can be greatly improved.
In order to better implement the above method, the embodiment of the present invention further provides an information interaction device for a game (i.e., a second information interaction device for a game), where the second information interaction device may be integrated in a mobile terminal, and the mobile terminal may include a mobile phone, a tablet computer, a notebook computer, and/or a personal computer, etc.
For example, as shown in fig. 22, the second information interaction device may include a receiving unit 501, a second sending unit 502, a second obtaining unit 503, and a playing unit 504, as follows:
(1) A receiving unit 501;
the receiving unit 501 is configured to receive input information of multiple interaction types input by a user on a current game page, so as to obtain multi-modal input information.
For example, the receiving unit 501 may be specifically configured to receive input information of interaction types such as touch interaction, voice interaction, and sensing interaction input by a user on the current game page, so as to obtain multi-modal input information.
(2) A second transmitting unit 502;
the second sending unit 502 is configured to send the multi-modal input information to the interaction server, so that the interaction server generates an updated game page and multi-modal output information.
For example, the second sending unit 502 may be specifically configured to send multi-modal input information to the interaction server, so that the interaction server determines a conversion manner corresponding to each interaction type, converts the multi-modal input information into operation instructions according to the conversion manners, operates the virtual object in the current game page according to the operation instructions to obtain an operation result, and generates the updated game page and the multi-modal output information based on the operation result.
(3) A second acquisition unit 503;
the second obtaining unit 503 is configured to obtain the updated game page and the multimodal output information returned by the interaction server.
For example, the second obtaining unit 503 may be specifically configured to directly obtain the updated game page and the multi-modal output information returned by the interaction server, or indirectly obtain the updated game page and the multi-modal output information returned by the interaction server.
(4) A playback unit 504;
the playing unit 504 is configured to perform dynamic-effect playing on the updated game page according to the multi-modal output information.
For example, the playing unit 504 may be specifically configured to generate multi-modal dynamic effect information according to the multi-modal output information, where the multi-modal dynamic effect information includes dynamic effect information of at least one dynamic effect type, determine playing information of the multi-modal dynamic effect information based on the dynamic effect type, display the updated game page, and play the multi-modal dynamic effect information on the updated game page according to the playing information.
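A minimal sketch of the playing unit's step of attaching playing information per dynamic effect type; the per-type playback parameters (channel names, durations) are invented for illustration and are not specified by this application:

```python
# Assumed playback parameters keyed by dynamic effect type.
PLAY_PARAMS = {
    "animation": {"channel": "overlay", "duration_ms": 800},
    "sound":     {"channel": "audio",   "duration_ms": 1200},
    "vibration": {"channel": "haptics", "duration_ms": 300},
}

def build_playlist(multi_modal_output: list[dict]) -> list[dict]:
    """Attach playing information to each dynamic effect based on its type,
    producing the play list the terminal runs on the updated game page."""
    playlist = []
    for effect in multi_modal_output:
        params = PLAY_PARAMS.get(effect["type"])
        if params is None:
            continue  # skip effect types this terminal cannot render
        playlist.append({**effect, **params})
    return playlist
```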
As can be seen from the above, in this embodiment, after the receiving unit 501 receives input information of multiple interaction types input by a user on a current game page to obtain multi-modal input information, the second sending unit 502 sends the multi-modal input information to the interaction server, so that the interaction server generates an updated game page and multi-modal output information; then, the second obtaining unit 503 obtains the updated game page and the multi-modal output information returned by the interaction server, and the playing unit 504 performs dynamic-effect playing on the updated game page according to the multi-modal output information.
An electronic device according to an embodiment of the present invention is further provided, as shown in fig. 23, which shows a schematic structural diagram of the electronic device according to the embodiment of the present invention, specifically:
the electronic device may include components such as a processor 601 of one or more processing cores, memory 602 of one or more computer-readable storage media, a power supply 603, and an input unit 604. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 23 is not limiting of electronic devices and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 601 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 602 and calling data stored in the memory 602. Optionally, processor 601 may include one or more processing cores; preferably, the processor 601 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 601.
The memory 602 may be used to store software programs and modules, and the processor 601 executes various functional applications and data processing by operating the software programs and modules stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory 602 may also include a memory controller to provide the processor 601 with access to the memory 602.
The electronic device further comprises a power supply 603 for supplying power to the various components, and preferably, the power supply 603 is logically connected to the processor 601 through a power management system, so that functions of managing charging, discharging, power consumption, and the like are realized through the power management system. The power supply 603 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The electronic device may further include an input unit 604, and the input unit 604 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 601 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 602 according to the following instructions, and the processor 601 runs the application programs stored in the memory 602, thereby implementing various functions as follows:
the method comprises the steps of: obtaining multi-modal input information received by a terminal in a current game page, where the multi-modal input information comprises input information of multiple interaction types; determining a conversion mode corresponding to each interaction type, and converting the multi-modal input information into operation instructions according to the conversion modes; operating a virtual object in the current game page according to the operation instructions to obtain an operation result; generating an updated game page and multi-modal output information based on the operation result, where the multi-modal output information comprises output information of multiple dynamic effect types; and sending the multi-modal output information and the updated game page to the terminal, so that the terminal performs dynamic-effect playing on the updated game page according to the multi-modal output information.
Or
The method comprises the steps of: receiving input information of multiple interaction types input by a user on a current game page to obtain multi-modal input information; sending the multi-modal input information to an interaction server, so that the interaction server generates an updated game page and multi-modal output information; obtaining the updated game page and the multi-modal output information returned by the interaction server; and performing dynamic-effect playing on the updated game page according to the multi-modal output information.
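The sensing-interaction branch described elsewhere in this application (a quick shake of the terminal yielding a directional acceleration that must exceed a preset threshold before a sensing operation instruction is generated) can be sketched as follows; the threshold value and the instruction fields are assumptions for illustration:

```python
# Assumed preset acceleration threshold; the application does not fix a value.
ACCEL_THRESHOLD = 15.0  # m/s^2

def to_sensing_instruction(accel: tuple[float, float, float]):
    """Return a sensing operation instruction when the directional
    acceleration in any direction exceeds the preset threshold;
    otherwise return None (no quick shake detected)."""
    for axis, value in zip(("x", "y", "z"), accel):
        if abs(value) > ACCEL_THRESHOLD:
            return {"op": "sensing", "axis": axis, "acceleration": value}
    return None
```

On the terminal side, the raw readings would come from the device's accelerometer; only readings that clear the threshold produce an instruction, so ordinary handling of the terminal is ignored.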
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
As can be seen from the above, in the embodiment of the present invention, after multi-modal input information received by a terminal in a current game page is obtained, where the multi-modal input information includes input information of multiple interaction types, a conversion mode corresponding to each interaction type is determined and the multi-modal input information is converted into operation instructions according to the conversion modes; a virtual object in the current game page is then operated according to the operation instructions to obtain an operation result; an updated game page and multi-modal output information are generated based on the operation result, where the multi-modal output information includes output information of multiple dynamic effect types; and the multi-modal output information and the updated game page are sent to the terminal, so that the terminal performs dynamic-effect playing on the updated game page according to the multi-modal output information. In this scheme, the user can input information through multiple interaction types, and the input information is converted into operation instructions according to the conversion mode corresponding to each interaction type, thereby realizing information interaction; after the virtual object is operated according to the operation instructions, multi-modal output information can be output and dynamic-effect playing can be performed on the updated game page, so that the interaction efficiency and interaction effect of information interaction of the game can be greatly improved.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the embodiment of the present invention provides a computer-readable storage medium, in which a plurality of instructions are stored, where the instructions can be loaded by a processor to execute the steps in the information interaction method of any one of the games provided by the embodiment of the present invention. For example, the instructions may perform the steps of:
the method comprises the steps of: obtaining multi-modal input information received by a terminal in a current game page, where the multi-modal input information comprises input information of multiple interaction types; determining a conversion mode corresponding to each interaction type, and converting the multi-modal input information into operation instructions according to the conversion modes; operating a virtual object in the current game page according to the operation instructions to obtain an operation result; generating an updated game page and multi-modal output information based on the operation result, where the multi-modal output information comprises output information of multiple dynamic effect types; and sending the multi-modal output information and the updated game page to the terminal, so that the terminal performs dynamic-effect playing on the updated game page according to the multi-modal output information.
Or
The method comprises the steps of: receiving input information of multiple interaction types input by a user on a current game page to obtain multi-modal input information; sending the multi-modal input information to an interaction server, so that the interaction server generates an updated game page and multi-modal output information; obtaining the updated game page and the multi-modal output information returned by the interaction server; and performing dynamic-effect playing on the updated game page according to the multi-modal output information.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the computer-readable storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the computer-readable storage medium can execute the steps in the information interaction method for any game provided in the embodiment of the present invention, the beneficial effects that can be achieved by the information interaction method for any game provided in the embodiment of the present invention can be achieved, for details, see the foregoing embodiments, and are not described herein again.
According to one aspect of the application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the method provided in the various alternative implementations of the information interaction aspect of the game described above.
The information interaction method, device and computer-readable storage medium for a game provided by the embodiment of the present invention are described in detail above, and a specific example is applied in the present disclosure to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for those skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed, and in summary, the content of the present specification should not be construed as limiting the present invention.

Claims (14)

1. An information interaction method for a game, comprising:
obtaining multi-modal input information received by a terminal in a current game page, wherein the multi-modal input information comprises input information of various interaction types;
determining a conversion mode corresponding to each interaction type, and respectively converting the multi-modal input information into operation instructions according to the conversion modes;
operating the virtual object in the current game page according to the operation instruction to obtain an operation result;
generating an updated game page and multi-modal output information based on the operation result, wherein the multi-modal output information comprises output information of various dynamic effect types, and the multi-modal output information is determined based on a scene type corresponding to scene information of the updated game page;
sending the multi-modal output information and the updated game page to the terminal, so that the terminal performs dynamic-effect playing on the updated game page according to the multi-modal output information;
the operating the virtual object in the current game page according to the operating instruction to obtain an operating result includes:
when the operation instruction is a voice operation instruction, marking a virtual object in the current game page to obtain an operation result;
when the operation instruction is a sensing operation instruction corresponding to target input information of a user quickly shaking the terminal, identifying a virtual object needing to interact with a player character in the current game page, and identifying the directional acceleration of the target input information;
and controlling the player character to operate game equipment to interact with the virtual object according to the directional acceleration to obtain an operation result.
2. The method of claim 1, wherein the determining the conversion method corresponding to each interaction type comprises:
when the interaction type is voice interaction, determining that a conversion mode of the voice interaction is a voice conversion mode, wherein the voice conversion mode is a conversion mode for converting input information corresponding to the voice interaction into an operation instruction;
and when the interaction type is sensing interaction, determining that a conversion mode of the sensing interaction is a sensing conversion mode, wherein the sensing interaction refers to interaction by changing the spatial position of the terminal, and the sensing conversion mode refers to a conversion mode of converting input information corresponding to the sensing interaction into an operation instruction.
3. The information interaction method of a game according to claim 2, wherein the respectively converting the multi-modal input information into operation instructions according to the conversion modes comprises:
when voice information exists in the multi-modal input information, converting the voice information into a voice operation instruction according to the voice conversion mode, wherein the voice information is input information of which the interaction type is voice interaction;
and when the multi-modal input information has the spatial movement information of the terminal, converting the spatial movement information into a sensing operation instruction according to the sensing conversion mode, wherein the spatial movement information is input information of which the interaction type is sensing interaction.
4. The information interaction method of the game according to claim 3, wherein the converting the voice information into the voice operation instruction according to the voice conversion mode comprises:
translating the voice information into text information, and extracting the characteristics of the text information to obtain text characteristics;
according to the text characteristics, identifying the operation intention of the user in the current game page to obtain intention information;
and determining a voice operation instruction corresponding to the voice information based on the intention information.
5. The method for information interaction of a game according to claim 4, wherein the identifying the operation intention of the user in the current game page according to the text feature to obtain intention information comprises:
calculating the feature similarity of the text features and intention features in a preset intention feature set;
screening target intention characteristics used for determining intention information from the preset intention characteristic set according to the characteristic similarity;
and taking the target intention information corresponding to the target intention characteristics as the intention information of the user on the current game page.
6. The information interaction method of the game according to claim 4, wherein the determining, based on the intention information, a voice operation instruction corresponding to the voice information includes:
when the intention information is preset marking intention information, determining a marking type corresponding to the intention information;
identifying entity words used for marking in the text information according to the marking type to obtain marking information;
acquiring attribute information of the current game page, and extracting player view angle information from the attribute information;
and generating a voice operation instruction corresponding to the voice information based on the marking information and the player perspective information.
7. The method for information interaction of a game according to claim 6, wherein the generating of the voice operation instruction corresponding to the voice information based on the tag information and the player perspective information comprises:
identifying a target object needing to be marked in the current game page according to the mark type;
determining position information of the target object based on the player perspective information;
and generating a voice operation instruction corresponding to the voice information according to the position information and the mark information.
8. The information interaction method of claim 3, wherein the converting the spatial movement information into the sensing operation instruction according to the sensing conversion mode comprises:
acquiring first scene information of the current game page, and determining a first scene type of a virtual scene of the current game page according to the first scene information;
when the first scene type is a preset interactive scene, extracting the directional acceleration of the terminal in each direction from the spatial movement information;
and when the directional acceleration in any direction exceeds a preset acceleration threshold, generating a sensing operation instruction corresponding to the spatial movement information according to the directional acceleration.
9. The information interaction method of a game according to claim 8, wherein the generating of the updated game page and the multi-modal output information based on the operation result comprises:
generating page updating data according to the operation result, and updating the current game page according to the page updating data to obtain the updated game page;
acquiring second scene information of the updated game page, and determining a second scene type of a virtual scene of the updated game page according to the second scene information;
and when the second scene type is a preset feedback scene, screening out multi-modal output information corresponding to the updated game page from a preset multi-modal output information set.
10. An information interaction method for a game, comprising:
receiving input information of various interaction types input by a user on a current game page to obtain multi-modal input information;
sending the multi-modal input information to an interaction server, so that the interaction server generates an updated game page and multi-modal output information;
acquiring the updated game page and multi-modal output information returned by the interactive server, wherein the multi-modal output information comprises output information of various dynamic effect types, and the multi-modal output information is determined based on a scene type corresponding to scene information of the updated game page;
performing dynamic-effect playing on the updated game page according to the multi-modal output information;
the multi-modal input information comprises target input information of a user quickly shaking the terminal;
the sending the multi-modal input information to an interaction server, so that the interaction server generates an updated game page and multi-modal output information, includes:
and sending the multi-modal input information containing the target input information to an interaction server so that the interaction server identifies a virtual object needing to interact with a player character in the current game page, extracts directional acceleration from the target input information, and controls the player character to operate game equipment to interact with the virtual object according to the directional acceleration to obtain an updated game page and multi-modal output information.
11. The method for information interaction of a game according to claim 10, wherein the playing the updated game page dynamically according to the multi-modal output information comprises:
generating multi-modal dynamic effect information according to the multi-modal output information, wherein the multi-modal dynamic effect information comprises dynamic effect information of at least one dynamic effect type;
based on the dynamic effect type, determining the playing information of the multi-modal dynamic effect information;
and displaying the updated game page, and playing the multi-modal dynamic effect information on the updated game page according to the playing information.
12. An information interaction device for a game, comprising:
the terminal comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring multi-modal input information received by the terminal in a current game page, and the multi-modal input information comprises input information of various interaction types;
the conversion unit is used for determining a conversion mode corresponding to each interaction type and respectively converting the multi-modal input information into operation instructions according to the conversion modes;
the operation unit is used for operating the virtual object in the current game page according to the operation instruction to obtain an operation result;
the generating unit is used for generating an updated game page and multi-modal output information based on the operation result, the multi-modal output information comprises output information of a plurality of dynamic effect types, and the multi-modal output information is determined based on a scene type corresponding to scene information of the updated game page;
the first sending unit is used for sending the multi-modal output information and the updated game page to the terminal, so that the terminal performs dynamic-effect playing on the updated game page according to the multi-modal output information;
the operating the virtual object in the current game page according to the operating instruction to obtain an operating result includes:
when the operation instruction is a voice operation instruction, marking a virtual object in the current game page to obtain an operation result;
when the operation instruction is a sensing operation instruction corresponding to target input information of a user quickly shaking the terminal, identifying a virtual object needing to interact with a player character in the current game page, and identifying the directional acceleration of the target input information;
and controlling the player character to operate the game equipment to interact with the virtual object according to the directional acceleration to obtain an operation result.
13. An information interaction device for a game, comprising:
the receiving unit is used for receiving input information of various interaction types input by a user on a current game page to obtain multi-modal input information;
the second sending unit is used for sending the multi-modal input information to an interaction server, so that the interaction server generates an updated game page and multi-modal output information;
the second obtaining unit is used for obtaining the updated game page and multi-modal output information returned by the interaction server, the multi-modal output information comprises output information of various dynamic effect types, and the multi-modal output information is determined based on a scene type corresponding to scene information of the updated game page;
the playing unit is used for performing dynamic-effect playing on the updated game page according to the multi-modal output information;
the multi-modal input information comprises target input information of a user quickly shaking the terminal;
the sending of the multi-modal input information to an interaction server, so that the interaction server generates an updated game page and multi-modal output information, comprises:
and sending the multi-modal input information containing the target input information to an interaction server so that the interaction server identifies a virtual object needing to interact with a player character in the current game page, extracts directional acceleration from the target input information, and controls the player character to operate game equipment to interact with the virtual object according to the directional acceleration to obtain an updated game page and multi-modal output information.
14. A computer readable storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor to execute the steps of the information interaction method of the game according to any one of claims 1 to 11.
CN202011136794.6A 2020-10-22 2020-10-22 Information interaction method and device for game and computer readable storage medium Active CN112221139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011136794.6A CN112221139B (en) 2020-10-22 2020-10-22 Information interaction method and device for game and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN112221139A CN112221139A (en) 2021-01-15
CN112221139B true CN112221139B (en) 2023-02-24


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113018864B (en) * 2021-03-26 2024-02-13 网易(杭州)网络有限公司 Virtual object prompting method and device, storage medium and computer equipment
CN114168878B (en) * 2021-11-23 2022-11-04 上海鸿米信息科技有限责任公司 Dynamic effect playing method, device, equipment, storage medium and program product
CN117369633A (en) * 2023-10-07 2024-01-09 上海铱奇科技有限公司 AR-based information interaction method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000163178A (en) * 1998-11-26 2000-06-16 Hitachi Ltd Interaction device with virtual character and storage medium storing program generating video of virtual character
JP2002166049A (en) * 2000-12-01 2002-06-11 Taito Corp Video game apparatus having two or more control device
CN102968549A (en) * 2012-10-17 2013-03-13 北京大学 Multi-user on-line interaction method and system based on intelligent mobile terminal equipment
CN107773982A (en) * 2017-10-20 2018-03-09 科大讯飞股份有限公司 Game voice interactive method and device
CN108465238A (en) * 2018-02-12 2018-08-31 网易(杭州)网络有限公司 Information processing method, electronic equipment in game and storage medium
CN111672098A (en) * 2020-06-18 2020-09-18 腾讯科技(深圳)有限公司 Virtual object marking method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112221139A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
CN112221139B (en) Information interaction method and device for game and computer readable storage medium
CN110288077B (en) Method and related device for synthesizing speaking expression based on artificial intelligence
CN112131988B (en) Method, apparatus, device and computer storage medium for determining virtual character lip shape
LaViola Jr 3D gestural interaction: The state of the field
CN112040263A (en) Video processing method, video playing method, video processing device, video playing device, storage medium and equipment
CN111383631B (en) Voice interaction method, device and system
CN111672098A (en) Virtual object marking method and device, electronic equipment and storage medium
WO2023082703A1 (en) Voice control method and apparatus, electronic device, and readable storage medium
US11819764B2 (en) In-game resource surfacing platform
US10360775B1 (en) Systems and methods for designing haptics using speech commands
CN107608799B (en) It is a kind of for executing the method, equipment and storage medium of interactive instruction
KR20200040097A (en) Electronic apparatus and method for controlling the electronicy apparatus
US10175938B2 (en) Website navigation via a voice user interface
CN110955818A (en) Searching method, searching device, terminal equipment and storage medium
CN112562723A (en) Pronunciation accuracy determination method and device, storage medium and electronic equipment
CN111314771A (en) Video playing method and related equipment
KR20190094087A (en) User terminal including a user customized learning model associated with interactive ai agent system based on machine learning, and computer readable recording medium having the customized learning model thereon
CN112742024B (en) Virtual object control method, device, equipment and storage medium
JP2022020062A (en) Mining method for feature information, device and electronic apparatus
CN112527105B (en) Man-machine interaction method and device, electronic equipment and storage medium
JP5318016B2 (en) GAME SYSTEM, GAME SYSTEM CONTROL METHOD, AND PROGRAM
CN113474781A (en) Extensible dictionary for game events
CN113569043A (en) Text category determination method and related device
CN111723783A (en) Content identification method and related device
KR20200040396A (en) Apparatus and method for providing story

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant