CN111265851A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium

Info

Publication number
CN111265851A
CN111265851A (application CN202010080399.4A)
Authority
CN
China
Prior art keywords
target
player
voice
target player
display object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010080399.4A
Other languages
Chinese (zh)
Other versions
CN111265851B (en)
Inventor
张艳军
陈裕龙
陈明标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010080399.4A priority Critical patent/CN111265851B/en
Publication of CN111265851A publication Critical patent/CN111265851A/en
Application granted granted Critical
Publication of CN111265851B publication Critical patent/CN111265851B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/215 Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A63F13/86 Watching games played by other players
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781 Games
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1081 Input via voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/57 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of game services offered to the player
    • A63F2300/577 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of game services offered to the player for watching a game played by other players

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a data processing method, an apparatus, an electronic device, and a storage medium, wherein the method includes: obtaining voice instructions of at least one target player for controlling a state change of a display object located in a first area in a game in which at least two participating players exist, the at least two participating players including the at least one target player; respectively acquiring attribute representations of the voice instruction of the at least one target player on a preset voice attribute; respectively determining, based on the attribute representations, the target state changes that the voice instruction of the at least one target player can control the display object to perform; and controlling the display object to change state in the first area based on those target state changes. The disclosed embodiments can enrich the operations available to participating players when controlling the state change of a display object through voice instructions.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
Users engaged in human-computer interaction increasingly expect novel, interesting, and richer operations. In particular, with the rapid development of information technology, users are no longer limited to gesture operations and can interact with a machine through voice alone, for example, by controlling a display object to move. In the prior art, when a user controls the movement of a display object through voice, the available operations are limited and cannot effectively meet users' needs.
Disclosure of Invention
An object of the present disclosure is to provide a data processing method, apparatus, electronic device, and storage medium that can enrich the operations available to participating players when controlling a display object to change state through voice instructions.
According to an aspect of the disclosed embodiments, a data processing method is disclosed, the method comprising:
obtaining voice instructions of at least one target player for controlling a state change of a display object located in a first area in a game in which at least two participating players exist, the at least two participating players including the at least one target player;
respectively acquiring attribute representations of the voice instruction of the at least one target player on a preset voice attribute;
respectively determining, based on the attribute representations, the target state changes that the voice instruction of the at least one target player can control the display object to perform;
and controlling the display object to change state in the first area based on the target state changes that the voice instruction of the at least one target player can control.
According to an aspect of the disclosed embodiments, a data processing apparatus is disclosed, the apparatus comprising:
a first obtaining module configured to obtain a voice instruction of at least one target player currently capable of controlling a state change of a display object located in a first area in a game, where at least two participating players participate in the game, the at least two participating players including the at least one target player;
a second obtaining module configured to respectively acquire attribute representations of the voice instruction of the at least one target player on a preset voice attribute;
a determination module configured to respectively determine, based on the attribute representations, the target state changes that the at least one target player can control the display object to perform;
and a control module configured to control the display object to change state in the first area based on the at least one target state change.
According to an aspect of an embodiment of the present disclosure, a data processing electronic device is disclosed, including: a memory storing computer-readable instructions; and a processor that reads the computer-readable instructions stored in the memory to perform the method described above.
According to an aspect of an embodiment of the present disclosure, a computer program medium is disclosed, having computer-readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to perform the method described above.
In the embodiments of the disclosure, at least two participating players exist in the game, and each participating player can serve as a target player who controls, through voice instructions, the state change of the display object located in the first area. Specifically, the game server acquires a voice instruction of at least one target player; respectively determines, according to the attribute representation of each voice instruction on a preset voice attribute, the target state change that the voice instruction can control the display object to perform; and then controls the display object to change state in the first area on that basis. In this way, a participating player can interact not only with the machine but also with other participating players, which enriches the operations available when controlling the state change of the display object through voice instructions.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 illustrates the basic architecture components according to one embodiment of the present disclosure.
FIG. 2 illustrates the architecture components according to one embodiment of the present disclosure.
FIG. 3 shows a flow diagram of a data processing method according to one embodiment of the present disclosure.
Fig. 4 illustrates the structure of an LSTM network used in accordance with one embodiment of the present disclosure.
Fig. 5 illustrates a terminal interface diagram of a participating player before a session starts in a live scene of a session according to one embodiment of the present disclosure.
Fig. 6 shows a terminal interface diagram of a participating player in a game-play process in a live scene of game-play according to an embodiment of the present disclosure.
Fig. 7 is a diagram illustrating a terminal interface for a spectator in a game-play live scenario to transmit virtual resources to a target player during the game-play according to an embodiment of the present disclosure.
Fig. 8 shows a flow diagram of a complete session-alignment process in a live scene of a session-alignment according to one embodiment of the present disclosure.
FIG. 9 shows a flow diagram of a participant player matching in a live scene of a match, according to one embodiment of the present disclosure.
FIG. 10 shows a flow diagram of a participant player interacting with the background during a play session in a live scene of a play session according to one embodiment of the present disclosure.
FIG. 11 illustrates a flow diagram of an off-site user interacting with the background in a live scene of an on-site according to one embodiment of the disclosure.
FIG. 12 shows a block diagram of a data processing apparatus according to one embodiment of the present disclosure.
FIG. 13 shows a hardware diagram of a data processing electronic device according to one embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The disclosed embodiments relate to the field of artificial intelligence. Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the capabilities of perception, reasoning, and decision-making.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes computer vision, speech technology, natural language processing, and machine learning/deep learning. Specifically, the fields of artificial intelligence involved in the embodiments of the present disclosure are mainly speech technology and machine learning.
Key technologies of Speech Technology include automatic speech recognition (ASR), text-to-speech synthesis (TTS), and voiceprint recognition. Enabling computers to listen, see, speak, and feel is the development direction of future human-computer interaction, and voice is expected to become one of the most promising human-computer interaction modes.
Machine Learning (ML) is a multi-disciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how a computer can simulate or realize human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
A brief explanation of some concepts of the embodiments of the present disclosure follows.
The first area refers to the area in which the display object's range of movement lies. For example: after each participating player joins the same game, a game interface is displayed on each participating player's terminal (for example, the participating player's mobile phone or computer). If the range of movement of the display object in the game (for example, a circular display object of a preset size) is a square area with sides of 20 unit lengths at a preset position in the game interface, then that square area is the first area.
The voice instruction refers to information in a voice form for controlling the display object in the first area to make a state change. It should be noted that the voice command in the embodiment of the present disclosure may not include specific semantics. For example: the target player simply shouts and generates a voice command.
The preset voice attribute refers to a voice attribute that is set in advance as a measure of a state change. For example: if the preset voice attribute is pitch, determining the state change corresponding to the voice instruction by taking the pitch of the voice instruction as a measurement standard; and if the preset voice attribute is the volume, determining the state change corresponding to the voice instruction by taking the volume of the voice instruction as a measurement standard.
The attribute representation refers to the concrete representation of the preset voice attribute on the data level. For example: specific Hertz pitch; specific decibel magnitude of the volume.
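An attribute representation such as volume can be computed directly from the captured audio. A minimal sketch, assuming floating-point PCM samples normalized to [-1.0, 1.0] (the function name, the dB reference, and the silence floor are illustrative assumptions, not from the patent):

```python
import math

def volume_db(samples):
    """Attribute representation for the 'volume' voice attribute:
    root-mean-square level of PCM samples, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))  # floor avoids log(0) on silence
```

A pitch attribute representation (in hertz) could be obtained analogously with a pitch estimator in place of the RMS measurement.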
A participating player is a player who, when a certain condition is satisfied, can control the state change of the display object located in the first area through voice instructions. For example: the first area of a game is divided into a left half and a right half. When the display object is in the left half, the player Xiaoming can control its state change through voice instructions; when the display object is in the right half, the player Xiaohong can control its state change through voice instructions. Both Xiaoming and Xiaohong are then participating players of the game.
The target player is a player who can, at the current moment, control the state change of the display object located in the first area through voice instructions. A target player is necessarily a participating player. For example: Xiaoming and Xiaohong are both participating players of a game whose first area is divided into a left half and a right half. When the display object is in the left half, Xiaoming can control its state change through voice instructions; when it is in the right half, Xiaohong can. If the display object is currently in the left half, the target player is Xiaoming; if it is currently in the right half, the target player is Xiaohong.
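The split-region rule in this example reduces to a simple position test. A minimal hypothetical sketch (the function name and a field width of 20 unit lengths are assumptions for illustration, using the player names from the example above):

```python
def current_target_player(x, field_width=20):
    """Return the player who may currently issue voice instructions,
    based on which half of the first area the display object is in."""
    # Left half -> Xiaoming, right half -> Xiaohong, per the example.
    return "Xiaoming" if x < field_width / 2 else "Xiaohong"
```

In a real implementation the region-to-player assignment would come from the game configuration rather than being hard-coded.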
It should be noted that, depending on the particular application scenario, there may be more than one target player. For example: Xiaoming and Xiaohong are both participating players, and throughout the game both of them can control the state change of the display object. At the current moment, Xiaoming and Xiaohong are then both target players.
State change refers to a change in the state of the display object, including but not limited to: changes in its motion state (e.g., displacement, speed change, acceleration change) and changes in its appearance (e.g., shape change, color change).
The target state change refers to the state change that a target player's voice instruction can control the display object to perform; target state changes correspond one-to-one to target players. For example: at the current moment, Xiaoming, Xiaohong, and Xiaogang are all target players. Xiaoming's voice instruction can control the display object to move 5 unit lengths from its current position in the positive X direction, so the target state change controllable by Xiaoming's voice instruction is "move 5 unit lengths in the positive X direction". Xiaohong's voice instruction can control the display object to move 3 unit lengths from its current position in the negative Y direction, so the target state change controllable by Xiaohong's voice instruction is "move 3 unit lengths in the negative Y direction". Xiaogang's voice instruction can control the display object to move 7 unit lengths in the negative Y direction while increasing its chroma by 2, so the target state change controllable by Xiaogang's voice instruction is "move 7 unit lengths in the negative Y direction and increase chroma by 2". It is understood that when the state change is only a displacement, the target state change is a target displacement, i.e., the displacement of the display object controlled by the target player's voice instruction.
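The one-to-one mapping from a target player's voice attribute to a target state change might be sketched as follows. The per-player directions and the dB-to-distance scaling here are assumptions for illustration only; the patent does not prescribe any particular mapping:

```python
def target_state_change(player, volume_db_value):
    """Map a target player's volume attribute representation (in dB)
    to a target displacement (dx, dy) in unit lengths."""
    directions = {"Xiaoming": (1, 0),   # positive X direction
                  "Xiaohong": (0, -1)}  # negative Y direction
    dx, dy = directions[player]
    # Assumed scaling: a louder instruction yields a larger displacement,
    # with silence (below -60 dB) producing no movement at all.
    distance = max(0.0, volume_db_value + 60.0) / 10.0
    return (dx * distance, dy * distance)
```

With this assumed scaling, a voice instruction measured at -40 dB would move the display object 2 unit lengths in that player's direction.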
The state scalar refers to a directionless physical quantity used to measure the degree of the corresponding state change. For example: if the corresponding state change is a displacement, the state scalar is the distance measuring the degree of displacement; if the corresponding state change is a color change, the state scalar is the chromaticity difference measuring the degree of color change.
The state scalar weight refers to a weight used to measure how strongly the corresponding virtual resource affects the state scalar. For example: if the state change is a displacement and the state scalar is a distance, the state scalar weight is a distance weight measuring how strongly the corresponding virtual resource affects that distance; if the state change is a color change and the state scalar is a chroma, the state scalar weight is a chroma weight measuring how strongly the corresponding virtual resource affects that chroma.
The architecture of an embodiment of the present disclosure is described below with reference to fig. 1 and 2.
FIG. 1 shows the basic architecture components of one embodiment of the present disclosure: the game server 10 and the participating player terminals 20. After each participating player joins the same game created by the game server 10 through the corresponding participating player terminal 20, each participating player may send voice instructions to that terminal so as to control the state change of the display object located in the first area of the game. To control the display object's state change in the first area, the game server 10 acquires a voice instruction of at least one target player; respectively acquires attribute representations of each voice instruction on a preset voice attribute; respectively determines, based on the attribute representations, the target state change that each voice instruction can control the display object to perform; and controls the display object to change state in the first area based on the determined target state changes.
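The server-side steps above amount to one control tick over all current target players. A minimal sketch, in which `attribute_of` and `change_from` are placeholders for the attribute measurement and the attribute-to-state-change mapping described elsewhere in this disclosure:

```python
def process_voice_instructions(voices, attribute_of, change_from):
    """One control tick of the game server: for each target player's
    voice instruction, measure its attribute representation and map it
    to the target state change to apply to the display object."""
    return {player: change_from(player, attribute_of(audio))
            for player, audio in voices.items()}
```

The returned per-player changes would then be applied to the display object within the bounds of the first area.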
For example: the Xiaoming and the Xiaohong are added into the same game created by the game server through respective computer terminals, and game interfaces comprising a first area are displayed on the computer terminal of the Xiaoming and the computer terminal of the Xiaohong. In the game, the Xiaoming player and the Xiaohong player can control the display object to change the state in the first area by sending voice commands to the computer terminal. The first region is divided into a left half region and a right half region. When the display object is positioned in the left half area, the Xiaoming can control the display object to change the state through a voice instruction; when the display object is positioned in the right half area, the small red can control the display object to change the state through the voice instruction.
When the display object is in the right half, Xiaohong is the target player. The game server acquires Xiaohong's voice instruction; acquires its attribute representation on volume, i.e., the volume of Xiaohong's voice instruction; determines, based on that volume, the target state change that Xiaohong's voice instruction can control the display object to perform; and controls the display object to change state in the first area based on that target state change.
The advantage of this embodiment is that participating players can interact with one another by controlling the display object, which enriches the operations available to participating players when controlling the state change of the display object through voice instructions.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
FIG. 2 shows the architecture components of one embodiment of the present disclosure: a game server 10, participating player terminals 20, and spectator terminals 30. After each participating player joins the same game created by the game server 10 through the corresponding participating player terminal 20, each participating player may send voice instructions to that terminal so as to control the state change of the display object located in the first area of the game. To control the display object's state change in the first area, the game server 10 acquires a voice instruction of at least one target player; respectively acquires attribute representations of each voice instruction on a preset voice attribute; respectively determines, based on the attribute representations, the target state change that each voice instruction can control the display object to perform; and controls the display object to change state in the first area based on the determined target state changes.
During the game, each spectator receives the multimedia data stream of the game from the game server 10 through the corresponding spectator terminal 30, so as to watch the game; each spectator may also send virtual resources to the game server 10 for a particular target player through the corresponding spectator terminal 30, so as to alter the target state change that that target player's voice instruction can control the display object to perform.
For example: the game server creates a live broadcast room; Xiaoming and Xiaohong join the live broadcast room through their respective computer terminals and play the game, while each spectator can enter the live broadcast room through his or her own computer terminal to watch the live broadcast of the game. Specifically, after entering the live broadcast room, a spectator obtains the real-time multimedia data stream of the game sent by the game server through the spectator's own computer terminal, and can thereby watch the game live.
In the game, when Xiaohong is the target player, the game server acquires Xiaohong's voice instruction; acquires the attribute representation of Xiaohong's voice instruction on volume, namely the volume of the voice instruction; determines, based on the volume, the target state change that Xiaohong's voice instruction can control the display object to perform; and controls the display object to move in the first area based on the target state change. In this process, a spectator can send virtual resources to the game server for Xiaohong, so as to alter the target state change of the display object controlled by Xiaohong's voice instruction.
Specifically, when the target state change that Xiaohong's voice instruction can originally control the display object to perform is "move 3 unit lengths along the negative Y-axis direction", then if a spectator sends the first type of virtual resource to the game server for Xiaohong, the target state change that Xiaohong's voice instruction can control the display object to perform becomes "move 6 unit lengths along the negative Y-axis direction"; if a spectator sends the second type of virtual resource to the game server for Xiaohong, the target state change that Xiaohong's voice instruction can control the display object to perform becomes "move 1.5 unit lengths along the negative Y-axis direction".
This embodiment has the advantage that users outside the game who cannot directly control the display object through voice instructions (for example, the spectators in this embodiment) can participate in the game indirectly, so that the participating players can interact with users outside the game, further improving the richness of operations available to a participating player when controlling the display object to change state through voice instructions.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Specific implementations of embodiments of the present disclosure are described below.
It should be particularly noted that, in order to demonstrate briefly and intuitively the performance of the embodiments of the present disclosure in a specific scenario, the following embodiments are mainly described for the case in which the state change is a displacement. However, as is clear from the above description, the embodiments of the present disclosure are not limited to the case in which the state change is a displacement; the following embodiments are only exemplary illustrations, and should not limit the function and the scope of the present disclosure.
As shown in fig. 3, a data processing method includes:
step S410, obtaining a voice instruction of at least one target player for controlling the state change of a display object positioned in a first area in a game, wherein at least two participating players exist in the game, and the at least two participating players comprise the at least one target player;
step S420, respectively acquiring attribute expressions of the voice instruction of the at least one target player on preset voice attributes;
step S430, respectively determining, based on the attribute representations, the target state change that the voice instruction of the at least one target player can control the display object to perform;
step S440, controlling the display object to change the state in the first area based on the target state change that can be controlled by the voice command of the at least one target player.
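The four steps above can be sketched as a minimal server-side flow. This is only an illustrative sketch under the assumption that the preset voice attribute is volume and the state change is a displacement of volume/10 unit lengths along a preset direction; all helper names are hypothetical, not part of the disclosed implementation.

```python
# Illustrative sketch of steps S410-S440 (hypothetical helper names).
# Assumption: attribute = volume in dB, displacement = volume / 10 unit
# lengths along each target player's preset direction.

def attribute_of(instruction):
    """Step S420: read the attribute representation (here, volume in dB)."""
    return instruction["volume_db"]

def target_change(attribute, direction):
    """Step S430: map the attribute reading to a target displacement."""
    dx, dy = direction
    return (dx * attribute / 10, dy * attribute / 10)

def apply_changes(position, changes):
    """Step S440: control the display object's state change in the first area."""
    x, y = position
    for dx, dy in changes:
        x, y = x + dx, y + dy
    return (x, y)

# Step S410: suppose two target players' voice instructions were acquired.
instructions = [
    {"player": "Xiaoming", "volume_db": 80, "direction": (1, 0)},
    {"player": "Xiaohong", "volume_db": 40, "direction": (0, -1)},
]
changes = [target_change(attribute_of(i), i["direction"]) for i in instructions]
print(apply_changes((0, 0), changes))  # (8.0, -4.0)
```
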
In the embodiment of the disclosure, at least two participating players exist in the game, and each participating player can, as a target player, control the display object located in the first area to change state through voice instructions. Specifically, the game server acquires a voice instruction of at least one target player; respectively determines, according to the attribute representation of each voice instruction on the preset voice attribute, the target state change that the voice instruction of the at least one target player can control the display object to perform; and then controls the display object to change state in the first area on that basis. In this way, a participating player can interact not only with the machine but also with other participating players, which improves the richness of operations available to a participating player when controlling the display object to change state through voice instructions.
In an embodiment of the disclosure, the state change includes: a motion state change and an appearance state change. According to the application scenario, the game server may control the display object to perform a motion state change (e.g., displacement, speed change) in the first area, or control the display object to perform an appearance state change (e.g., color change, shape change) in the first area, or control the display object to perform an appearance state change while performing a motion state change in the first area (e.g., changing color while being displaced).
In step S410, a voice instruction of at least one target player for controlling a state change of a display object located in a first area of the game is acquired, wherein at least two participating players exist in the game, the at least two participating players including the at least one target player.
In the embodiment of the disclosure, at least two participating players exist in the game, and each participating player can be used as a target player to control the display object positioned in the first area to change the state through voice instructions. There is a difference in the process of determining at least one target player from at least two participating players, depending on the particular application scenario.
The process of determining at least one target player from at least two participating players is described below.
In one embodiment, before acquiring the voice instruction of at least one target player for controlling the state change of the display object located in the first area of the game, the method comprises: determining each participating player as a target player.
In this embodiment, each participating player can act as a target player throughout the game, controlling the display object in the first area to change state through voice instructions. For example: Xiaoming, Xiaohong, Xiaogang and Xiaotian all join the same game as participating players. At any moment of the game, Xiaoming, Xiaohong, Xiaogang and Xiaotian are all determined as target players by the game server; that is, at any time of the game, each of Xiaoming, Xiaohong, Xiaogang and Xiaotian can control the display object located in the first area to change state through his or her voice instruction.
This embodiment has the advantage that each participating player can control the display object to change state throughout the game, increasing the frequency of interaction between participating players.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, each of the at least two participating players has a corresponding second area located in the first area; before acquiring the voice instruction of at least one target player for controlling the state change of the display object located in the first area of the game, the method comprises: determining the participating player whose corresponding second area overlaps the display object as the target player.
In this embodiment, during the game, only participating players meeting a preset condition can act as target players and control the display object in the first area to change state through voice instructions. Specifically, in this embodiment, before the game starts, the game server allocates to each participating player a second area located in the first area; the preset condition for being determined as the target player is that the corresponding second area overlaps the display object. That is, during the game, a participating player is determined as the target player by the game server only when the display object overlaps his or her corresponding second area.
For example: xiaoming, Xiaohong, Xiaogang and Xiaotian are used as participating players to join in the same game; the first region in the pair is — "a square region of 20 unit length sides of the preset position". Before the game is started, the game server equally divides the first area into four second areas, namely a square area A with 10 unit length sides at the upper left part, a square area B with 10 unit length sides at the upper right part, a square area C with 10 unit length sides at the lower left part and a square area D with 10 unit length sides at the lower right part. And square area a is assigned to xiaoming, square area B is assigned to xiaohong, square area C is assigned to xiaohai, and square area D is assigned to xiatian.
Only when the display object overlaps square area A is Xiaoming determined as the target player by the game server, so that Xiaoming can control the display object to change state through voice instructions; only when the display object overlaps square area B is Xiaohong determined as the target player by the game server, so that Xiaohong can control the display object to change state through voice instructions. It can be understood that the cases in which Xiaogang and Xiaotian are determined as target players are analogous, and are therefore not repeated here.
This embodiment has the advantage that a participating player can control the display object to change state as the target player only when the display object overlaps his or her corresponding second area, ensuring the orderliness of the actions of the respective participating players in the game.
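The quadrant layout of the example above can be sketched as follows. The coordinate convention (origin at the lower-left corner of the first area) and the function name are assumptions for illustration only.

```python
# Sketch of target-player selection by second-area overlap, using the
# four-quadrant layout from the example. Assumption: the first area is a
# side x side square with its origin at the lower-left corner.

def target_player(pos, side=20):
    """Return the participating player whose second area contains `pos`."""
    x, y = pos
    half = side / 2
    upper = y >= half
    right = x >= half
    if upper and not right:
        return "Xiaoming"   # square area A (upper-left)
    if upper and right:
        return "Xiaohong"   # square area B (upper-right)
    if not upper and not right:
        return "Xiaogang"   # square area C (lower-left)
    return "Xiaotian"       # square area D (lower-right)

print(target_player((5, 15)))   # upper-left  -> Xiaoming
print(target_player((15, 3)))   # lower-right -> Xiaotian
```
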
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, the first area further includes a preset initial area where the display object is located when the game starts; the method further comprises the following steps: at the start of the game, each of the participating players is determined to be a target player.
In this embodiment, the first area further includes a preset initial area, and the initial area is an area where the display object is located when the game starts. When the game is started and the display object is located in the initial area, each participating player can be used as a target player, and the display object located in the first area is controlled to change the state through a voice instruction.
For example: the first area in the game is a square area with sides of 20 unit lengths at a preset position, and the preset initial area is a circular area with a radius of 5 unit lengths centered at the center of the square area. Xiaoming, Xiaohong, Xiaogang and Xiaotian all join the game as participating players; when the game starts, the display object is located in the initial area, and the game server determines Xiaoming, Xiaohong, Xiaogang and Xiaotian as target players, so that each of them can control the display object to change state through voice instructions.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure. It can be understood that, in another embodiment, when the display object is located in the preset initial area at the start of the game, none of the participating players is the target player, i.e., the display object moves freely, until the display object moves into a second area and the participating player corresponding to that second area is determined as the target player; in yet another embodiment, the first area may also lack an initial area in which the display object is located at the start of the game: when the game starts, the display object is randomly generated at any position in the first area, and the corresponding target player is determined according to the second area in which the randomly generated position of the display object falls.
In one embodiment, each of the at least two participating players has a corresponding second area located in the first area; the method further comprises: sending alert information to the participating player whose second area overlaps the display object.
In this embodiment, before the game starts, the game server allocates to each participating player a second area located in the first area. When the display object overlaps a second area, the game server sends alert information (e.g., alert text or alert audio) to the participating player corresponding to that second area, so as to alert the participating player that the display object has moved into his or her second area; further, the participating player can be alerted that he or she has been determined as the target player and can control the display object to change state in the first area through voice instructions.
This embodiment has the advantage that, by sending alert information to the participating player whose second area overlaps the display object, that participating player can prepare in a timely manner to control the display object to change state through voice instructions.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, acquiring the voice instruction of at least one target player for controlling the state change of a preset display object located in the first area of the game comprises:
respectively acquiring the audio of the at least one target player;
and respectively carrying out voice detection on the audio of the at least one target player so as to obtain the voice instruction of the at least one target player.
In this embodiment, the audio of a target player generally includes background noise in addition to the target player's voice instruction. Therefore, after respectively acquiring the audio of each target player, the game server respectively performs voice detection on the acquired audio and filters out the noise in it, so as to obtain the voice instruction of each target player. Voice detection may be performed on the audio using VAD (Voice Activity Detection) technology to filter out noise in the audio.
In one embodiment, the VAD technique used to perform voice detection on the audio is based on an LSTM (Long Short-Term Memory) network and DNN (Deep Neural Networks). Specifically, in this embodiment, a feature vector is extracted by the DNN for each speech frame in the speech frame sequence to obtain a corresponding feature data set (each element of the feature data set is the feature vector of the corresponding speech frame); the feature data set is then input into the LSTM, which performs time-sequence analysis on the speech frame sequence so as to classify each speech frame, i.e., to judge whether each speech frame is noise, and the noise in the speech frames is then filtered out according to the obtained classification results.
Fig. 4 shows the structure of the LSTM network used in this embodiment. For a speech frame sequence X = (x_1, x_2, ..., x_T) of time length T, the LSTM network computes the following formulas in time order t = 1 to T:

g_t^I = σ(W_I^l x_t + W_I^h h_{t-1} + W_I^t c_{t-1} + b_I)

g_t^F = σ(W_F^l x_t + W_F^h h_{t-1} + W_F^t c_{t-1} + b_F)

c̃_t = f(W_C^l x_t + W_C^h h_{t-1} + b_C)

c_t = g_t^F ⊙ c_{t-1} + g_t^I ⊙ c̃_t

g_t^O = σ(W_O^l x_t + W_O^h h_{t-1} + W_O^t c_t + b_O)

h_t = g_t^O ⊙ f(c_t)

The LSTM network comprises a memory unit C, an input gate I, an output gate O and a forgetting gate F. Here: x represents the input to the LSTM network; g represents the output of each gate; h represents the output of the LSTM network; W^l and W^h represent the input weight matrix and the recurrent weight matrix, respectively; W^t is the connection matrix between the memory cell and the gates, known as the peephole technique; f and σ are the activation functions used by the different gates, and σ is typically a sigmoid function.
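As an illustration of the gate formulas above, the following is a minimal single-unit (scalar) sketch of one peephole-LSTM time step, assuming f = tanh and σ = the logistic sigmoid; the weight values are arbitrary and the biases are omitted for brevity, so this is a toy instance rather than the network of this embodiment.

```python
import math

# One scalar peephole-LSTM step: input, forget and output gates each use
# an input weight (wl), a recurrent weight (wh) and a peephole weight (wt),
# matching the formulas above with f = tanh and sigma = logistic sigmoid.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w):
    g_i = sigmoid(w["wl_i"] * x_t + w["wh_i"] * h_prev + w["wt_i"] * c_prev)
    g_f = sigmoid(w["wl_f"] * x_t + w["wh_f"] * h_prev + w["wt_f"] * c_prev)
    c_tilde = math.tanh(w["wl_c"] * x_t + w["wh_c"] * h_prev)  # cell candidate
    c_t = g_f * c_prev + g_i * c_tilde                         # memory update
    g_o = sigmoid(w["wl_o"] * x_t + w["wh_o"] * h_prev + w["wt_o"] * c_t)
    h_t = g_o * math.tanh(c_t)                                 # network output
    return h_t, c_t

# Arbitrary illustrative weights; a real network learns these.
w = {k: 0.5 for k in ("wl_i", "wh_i", "wt_i", "wl_f", "wh_f", "wt_f",
                      "wl_c", "wh_c", "wl_o", "wh_o", "wt_o")}
h, c = 0.0, 0.0
for x_t in (1.0, -0.5, 0.8):   # a short stand-in feature sequence
    h, c = lstm_step(x_t, h, c, w)
print(-1.0 < h < 1.0)  # the output stays within the tanh range
```
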
In this embodiment, when applied to voice detection, the time length analyzed by the LSTM is T = 2k + 1. The feature data set is extracted each time according to a fixed time window T, so as to obtain the corresponding feature sequence X̂^t to be input into the LSTM network; the time window moves with step length u (1 ≤ u ≤ T). After the time-sequence analysis of the LSTM network, the feature sequence X̂^t yields the corresponding output marker sequence Ŷ^t. Each element of the marker sequence Ŷ^t is a classification result indicating whether the corresponding speech frame is noise, generally represented as 0 or 1 (e.g., 0 indicates that the corresponding speech frame is noise, and 1 indicates that it is not).

Specifically, the feature sequence input at time t is expressed as

X̂^t = (x_{t-k}, ..., x_t, ..., x_{t+k})

and the corresponding marker sequence is expressed as

Ŷ^t = (y_{t-k}, ..., y_t, ..., y_{t+k})

where t = t_0 + n × u (n = 1, 2, ...), n is a positive integer, and t_0 is a reference time.
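The sliding-window extraction described above can be sketched as follows; the list-based representation of per-frame features and the choice of the first reference center t_0 = k (the earliest time with a full window) are illustrative assumptions.

```python
# Sketch of the sliding feature window: windows of length T = 2k + 1
# centred at t = t0 + n*u, moving with step length u (1 <= u <= T).
# Only full windows are emitted.

def feature_windows(frames, k, u, t0=None):
    """Yield (centre index, window) pairs over a list of per-frame features."""
    if t0 is None:
        t0 = k                      # first centre with a complete window
    t = t0
    while t + k < len(frames):      # window [t-k, t+k] must fit
        yield t, frames[t - k : t + k + 1]
        t += u

frames = list(range(10))            # stand-in for 10 frame feature vectors
print(list(feature_windows(frames, k=2, u=3)))
# [(2, [0, 1, 2, 3, 4]), (5, [3, 4, 5, 6, 7])]
```
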
This embodiment has the advantage that, since DNN is good at feature extraction, lower-level acoustic feature information can be mapped to feature information better suited to voice detection, while LSTM is good at sequence analysis and can more accurately mine the information between speech frames. Combining DNN and LSTM for voice detection therefore improves the accuracy of voice detection.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In step S420, attribute representations of the voice commands of the at least one target player on preset voice attributes are respectively obtained.
In step S430, the target state change that the voice instruction of the at least one target player can control the display object to perform is determined based on the attribute representations.
In the embodiment of the present disclosure, after the voice instruction of each target player is acquired, the game server acquires the attribute representation of each target player's voice instruction on the preset voice attribute, and further determines, according to the attribute representation, the target state change that each target player's voice instruction can control the display object to perform.
In an embodiment, the preset voice attribute is a volume attribute, and the obtaining of the attribute representation of the voice instruction of the at least one target player on the preset voice attribute respectively includes: and respectively acquiring attribute representation of the voice instruction of the at least one target player on the volume attribute.
In this embodiment, each target player controls the state change of the display object by the volume level of the voice instruction. Specifically, after the game server acquires the voice command of each target player, the volume of the voice command of each target player is detected and acquired, and then the target state change of the display object controlled by the voice command of each target player can be determined according to the volume of the voice command.
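As a sketch of how the volume attribute might be detected (the disclosure does not specify a measurement method), the following computes an RMS level in decibels from 16-bit PCM samples; the full-scale reference value and function name are assumptions.

```python
import math

# Sketch of volume detection for a voice instruction, assuming 16-bit PCM
# samples; the volume is reported as an RMS level in dB relative to full
# scale (a common convention, assumed here for illustration).

def volume_db(samples, full_scale=32768.0):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")        # silence
    return 20.0 * math.log10(rms / full_scale)

quiet = [100, -120, 90, -80]
loud = [s * 100 for s in quiet]     # same waveform, 100x the amplitude
print(volume_db(loud) > volume_db(quiet))  # louder audio -> higher reading
```

A 100-fold amplitude increase raises the reading by 20·log10(100) = 40 dB, so relative loudness between instructions is preserved by this measure.
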
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In an embodiment, the preset voice attribute is a pitch attribute, and the obtaining of the attribute representation of the voice instruction of the at least one target player on the preset voice attribute respectively includes: attribute representations of the voice instructions of the at least one target player on pitch attributes are obtained, respectively.
In this embodiment, each target player controls the state change of the display object by the pitch size of the voice instruction. Specifically, after the game server acquires the voice command of each target player, the pitch of the voice command of each target player is detected and acquired, and then the target state change of the display object which can be controlled by the voice command of each target player can be determined according to the pitch of the voice command.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure. It will be appreciated that in addition to determining the target state change based on volume level, and determining the target state change based on pitch level, the volume level may be combined with pitch level to determine the target state change.
In one embodiment, determining the target state change that the voice command of the at least one target player can control the display to perform based on the attribute performance respectively comprises:
for each target player, determining, based on the attribute representation corresponding to the target player, the state scalar of the target state change that the target player can control the display object to perform;

and for each target player, determining, based on the preset direction and the state scalar, the target state change that the voice instruction of the target player can control the display object to perform.
In this embodiment, for each target player, the direction of the target state change corresponding to the target player is preset; meanwhile, the attribute expression of the voice instruction of the target player on the preset voice attribute is mainly used for determining a state scalar quantity of the target state change corresponding to the target player. Therefore, after the game server respectively obtains the attribute expression of the voice command of each target player on the preset voice attribute, the state scalar quantity of the target state change corresponding to each target player can be respectively determined, and the target state change of the display object which can be controlled by each target player is further respectively determined by combining the direction of the corresponding target state change.
For example: the participating players in the game are Xiaoming, Xiaohong, Xiaogang and Xiaotian; the direction of the target displacement preset for Xiaoming is "along the positive X-axis direction"; for Xiaohong, "along the negative Y-axis direction"; for Xiaogang, "along the negative X-axis direction"; and for Xiaotian, "along the positive Y-axis direction".
At the current moment of the game, the target player is Xiaoming. After the game server acquires Xiaoming's voice instruction, if the detected volume of the instruction is 80 decibels, the distance of the target displacement corresponding to Xiaoming is determined to be 80/10 = 8 unit lengths, and the target displacement that Xiaoming's voice instruction can control the display object to perform is accordingly determined to be "move 8 unit lengths along the positive X-axis direction"; if the detected volume of the instruction is 90 decibels, the distance of the target displacement corresponding to Xiaoming is determined to be 90/10 = 9 unit lengths, and the target displacement is accordingly determined to be "move 9 unit lengths along the positive X-axis direction".
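The per-player direction presets and the volume/10 rule of this example can be sketched as follows; the function name and table layout are hypothetical.

```python
# Sketch of the example: each player has a preset displacement direction,
# and the state scalar is volume / 10 unit lengths.

DIRECTIONS = {                      # preset directions from the example
    "Xiaoming": (1, 0),             # positive X-axis
    "Xiaohong": (0, -1),            # negative Y-axis
    "Xiaogang": (-1, 0),            # negative X-axis
    "Xiaotian": (0, 1),             # positive Y-axis
}

def target_displacement(player, volume_db):
    dx, dy = DIRECTIONS[player]
    distance = volume_db / 10       # 80 dB -> 8 unit lengths
    return (dx * distance, dy * distance)

print(target_displacement("Xiaoming", 80))  # (8.0, 0.0)
print(target_displacement("Xiaoming", 90))  # (9.0, 0.0)
```
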
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In an embodiment, the method further comprises: respectively acquiring virtual resources received by the at least one target player;
respectively determining target state changes which can be controlled by the voice command of the at least one target player to the display object based on the attribute performances, wherein the target state changes comprise: and respectively determining the target state change of the display object which can be controlled by the voice command of the at least one target player based on the attribute representation and the virtual resource.
In this embodiment, when determining the target state change that the voice instruction of each target player can control the display object to perform, the game server considers not only the attribute representation of the corresponding target player's voice instruction on the preset voice attribute, but also the virtual resources received by the corresponding target player.
Specifically, during the game, a target player may receive virtual resources that can affect the target state change. The receipt of a virtual resource mainly affects the distance of the target state change, and the effect differs according to the type of the virtual resource. For example: the target player is Xiaohong, and the target state change that Xiaohong's voice instruction can originally control the display object to perform is "move 3 unit lengths along the negative Y-axis direction from the current position". If Xiaohong receives the first type of virtual resource, the target state change that Xiaohong's voice instruction can control the display object to perform becomes "move 6 unit lengths along the negative Y-axis direction from the current position"; if Xiaohong receives the second type of virtual resource, it becomes "move 1.5 unit lengths along the negative Y-axis direction from the current position".
In one embodiment, the sources of the virtual resources include: a participating player in the game, and a user outside the game.
In this embodiment, the virtual resources received by the target player may originate from participating players in the game (including the target player himself or herself) or from users outside the game (e.g., spectators watching the live broadcast of the game).
For example: the game server creates a live broadcast room; Xiaoming, Xiaohong, Xiaogang and Xiaotian all join the live broadcast room as participating players and play the game, while each spectator enters the live broadcast room to watch the live broadcast of the game.
In the game, when Xiaohong, as the target player, controls the display object to move in the first area through voice instructions, Xiaoming can send the "first type prop" to Xiaohong (i.e., Xiaohong receives the first type of virtual resource), so that the distance of the target displacement that Xiaohong's voice instruction controls the display object to perform becomes farther than the original target displacement; Xiaogang can send the "second type prop" to Xiaohong (i.e., Xiaohong receives the second type of virtual resource), so that the distance of the target displacement that Xiaohong's voice instruction controls the display object to perform becomes closer than the original target displacement; a spectator may also send the "first type prop" to Xiaohong (i.e., Xiaohong receives the first type of virtual resource), so that the distance of the target displacement that Xiaohong's voice instruction controls the display object to perform becomes farther than the original target displacement.
This embodiment has the advantage that the state change of the display object can be controlled directly by the participating player through voice instructions, and can also be influenced indirectly by virtual resources used by the participating player or by others, which improves the richness of operations available to the participating player.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, respectively determining, based on the attribute representations and the virtual resources, the target state change that the voice instruction of the at least one target player can control the display object to perform comprises:
for each target player, determining a state scalar weight pre-allocated to the virtual resource received by the target player;
for each target player, determining a state scalar quantity of the target state change which can be controlled by the target player and is carried out by the display object based on the attribute expression and the state scalar weight corresponding to the target player;
and for each target player, determining, based on the preset direction and the state scalar, the target state change that the target player can control the display object to perform.
In this embodiment, the receipt of a virtual resource mainly affects the state scalar of the target state change; a corresponding state scalar weight is allocated in advance to each type of virtual resource; the direction of the target state change corresponding to each target player is preset; and the attribute representation of the target player's voice instruction on the preset voice attribute is mainly used to determine the state scalar of the target state change corresponding to that target player. Therefore, after the game server respectively acquires the attribute representation of each target player's voice instruction on the preset voice attribute and the virtual resources correspondingly received, it can respectively determine the state scalar of the target state change corresponding to each target player from the attribute representation and the corresponding state scalar weight, and then, combined with the direction of the corresponding target state change, respectively determine the target state change that each target player can control the display object to perform.
For example: the participating players in the game are Xiaoming, Xiaohong, Xiaogang and Xiaotian; the direction of the target displacement preset for Xiaoming is "along the positive X-axis direction"; for Xiaohong, "along the negative Y-axis direction"; for Xiaogang, "along the negative X-axis direction"; and for Xiaotian, "along the positive Y-axis direction". The distance weight allocated to the first type of prop is 2, and the distance weight allocated to the second type of prop is 0.5.
At the current moment of the game, the target player is Xiaoming. While controlling the display object in the first area to move through voice instructions, Xiaoming receives a first-type prop from a spectator. When the game-play server acquires Xiaoming's voice instruction and detects that its volume is 80 decibels, it determines that the distance of the target displacement corresponding to Xiaoming is (80/10) × 2 = 16 unit lengths, and thus determines that the target displacement Xiaoming's voice instruction can control the display object to perform is "move 16 unit lengths along the positive direction of the X axis".
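The worked example above can be sketched in Python. The function name, the division of the volume by 10, and the tuple representation of the preset directions are assumptions made for illustration, not part of the claims:

```python
# Hypothetical sketch of the state-scalar rule illustrated above:
# scalar = (volume / 10) * prop weight, applied along a preset direction.

PROP_WEIGHTS = {"first_type": 2.0, "second_type": 0.5, None: 1.0}

PRESET_DIRECTIONS = {           # unit direction preset per target player
    "Xiaoming": (1, 0),         # positive X axis
    "Xiaohong": (0, -1),        # negative Y axis
    "Xiaogang": (-1, 0),        # negative X axis
    "Xiaotian": (0, 1),         # positive Y axis
}

def target_displacement(player, volume_db, prop=None):
    """Preset direction scaled by the state scalar derived from volume."""
    scalar = (volume_db / 10) * PROP_WEIGHTS[prop]
    dx, dy = PRESET_DIRECTIONS[player]
    return (dx * scalar, dy * scalar)

# Xiaoming speaks at 80 decibels while holding a first-type prop:
print(target_displacement("Xiaoming", 80, "first_type"))  # (16.0, 0.0)
```

Under these assumptions, Xiaoming's 80-decibel instruction yields the 16-unit displacement computed above.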
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In step S440, the display object is controlled to perform a state change in the first area based on the target state change that the voice instruction of the at least one target player can control the display object to perform.
In one embodiment, the controlling the display object to perform the state change in the first area based on the target state change that the voice command of the at least one target player can control the display object to perform includes:
superposing the target state change which can be controlled by the display object by the voice instruction of the at least one target player to acquire the corresponding superposed state change;
and controlling the display object to change the state in the first area based on the superposition state change.
In this embodiment, there are a plurality of target players who can simultaneously control, through voice instructions, the state change of the display object in the first area. The game-play server superposes the target state changes corresponding to each target player to acquire the corresponding superposed state change, and then controls the display object to change its state in the first area on that basis.
For example: at the current moment of the game, the target players are Xiaoming and Xiaogang. The target displacement that Xiaoming's voice instruction can control the display object to perform is "move 8 unit lengths along the positive direction of the X axis"; the target displacement that Xiaogang's voice instruction can control the display object to perform is "move 5 unit lengths along the negative direction of the X axis". The two target displacements are superposed to obtain the superposed displacement "move 3 unit lengths along the positive direction of the X axis", and the display object is then controlled to move 3 unit lengths along the positive direction of the X axis in the first area.
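The superposition step described above amounts to component-wise vector addition; a minimal sketch, with displacements represented as (x, y) tuples as an assumption:

```python
# Minimal sketch of superposing the per-player target displacements.
# The tuple representation in unit lengths is an illustrative assumption.

def superpose(*displacements):
    """Sum the target displacements into one superposed displacement."""
    return tuple(sum(axis) for axis in zip(*displacements))

xiaoming = (8, 0)    # 8 unit lengths along the positive X axis
xiaogang = (-5, 0)   # 5 unit lengths along the negative X axis
print(superpose(xiaoming, xiaogang))  # (3, 0)
```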
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
In one embodiment, the at least two participating players each have a corresponding camera area, and each camera area displays the video picture of the corresponding participating player in real time; the method further includes: if a preset gesture on a camera area is detected, capturing the video picture of the corresponding participating player.
In this embodiment, the game-play interface of each terminal (whether the terminal of a participating player or that of a spectator) shows, in addition to the first region containing the moving range of the display object, camera areas that display the participating players' video pictures in real time. A corresponding user (a participating player or a spectator) can therefore capture the video picture of the participating player corresponding to a specific camera area by making a preset gesture on that area (for example, double-clicking it).
The advantage of this embodiment is that participating players and spectators can capture and save video pictures during the game, which increases the frequency of interaction among participating players and between participating players and spectators.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
The product-side game-play process in an embodiment of the present disclosure is described below with reference to figs. 5 to 8.
Fig. 5 is a diagram illustrating the terminal interface of a participating player before a game play starts in a live game-play scene according to an embodiment of the present disclosure. In this embodiment, the participating player is Xiaogang, who acts as an anchor. Referring to fig. 5, right after the live broadcast starts, the video picture is displayed in the upper half of the terminal interface; the lower half of the terminal interface displays the chat records of the spectators in the live broadcast room; to the left of the chat records is a spherical display object labeled "match entry". The game match starts once Xiaogang clicks the spherical display object; if the matching succeeds, Xiaogang plays against the other matched participating players.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Fig. 6 is a diagram illustrating the terminal interface of a participating player during a game play in a live game-play scene according to an embodiment of the present disclosure. In this embodiment, Xiaogang is a participating player and plays a match with Xiaoming, Xiaohong and Xiaotian. The middle of the upper half of the terminal interface is a large rectangular area containing 8 semicircles: this is the first area, the moving range of the spherical display object. The first area is divided evenly into four small rectangular areas, i.e. four second areas, each of which contains one large semicircle and one small semicircle. Xiaoming's second area is the small rectangular area at the upper left of the first area; Xiaohong's second area is the small rectangular area at the upper right of the first area; Xiaogang's second area is the small rectangular area at the lower right of the first area; Xiaotian's second area is the small rectangular area at the lower left of the first area.
The two semicircles within each second area divide the scoring limits: the small semicircle is worth 5 points, and the region between the large semicircle and the small semicircle is worth 2 points. That is, if the target player controls the spherical display object through a voice instruction to move into a small semicircle in another participating player's second area, the target player scores 5 points; if the target player controls the spherical display object through a voice instruction to move between the large semicircle and the small semicircle in another participating player's second area, the target player scores 2 points. For example: at the current moment, Xiaoming is the target player. If Xiaoming controls the spherical display object through a voice instruction to move into the small semicircle in Xiaohong's second area (namely, the small semicircle in the small rectangular area at the upper right of the first area), Xiaoming scores 5 points; if Xiaoming controls the spherical display object through a voice instruction to move between the large semicircle and the small semicircle in Xiaohong's second area (namely, between the large semicircle and the small semicircle in the small rectangular area at the upper right of the first area), Xiaoming scores 2 points.
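The two-ring scoring rule can be illustrated with a short sketch; the radii, the distance-from-center test, and the function name are assumptions chosen for illustration — the patent text only fixes the point values (5 and 2):

```python
import math

# Hedged sketch of the two-ring scoring rule. The radii below are
# illustrative assumptions; only the 5-point and 2-point values come
# from the description above.

SMALL_RADIUS = 1.0   # inside the small semicircle: 5 points
LARGE_RADIUS = 2.0   # between the small and large semicircles: 2 points

def score_for_position(ball_xy, ring_center_xy):
    """Points awarded when the ball stops in an opponent's second area."""
    d = math.dist(ball_xy, ring_center_xy)
    if d <= SMALL_RADIUS:
        return 5
    if d <= LARGE_RADIUS:
        return 2
    return 0

print(score_for_position((0.5, 0.0), (0.0, 0.0)))  # 5
print(score_for_position((1.5, 0.0), (0.0, 0.0)))  # 2
```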
In the upper half of the terminal interface, camera areas of the participating players are displayed on the two sides of the first area: Xiaoming's camera area and Xiaotian's camera area on the left side of the upper half, and Xiaohong's camera area and Xiaogang's camera area on the right side of the upper half. Each camera area displays the video picture of the corresponding participating player's live broadcast; the corner of each camera area also displays the score that the corresponding participating player has obtained in the game: Xiaoming has 2 points; Xiaohong has 12 points; Xiaogang has 20 points; Xiaotian has 8 points.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Fig. 7 is a terminal interface diagram illustrating a spectator sending a virtual resource to a target player during a live game-play session according to an embodiment of the present disclosure. In this embodiment, while Xiaoming is the target player during the game, spectator A clicks the "gift" button in the lower right corner of spectator A's terminal interface and selects the gift "acceleration ball" to send to Xiaoming, so that the displacement distance Xiaoming can control the spherical display object to move through voice becomes larger. After the "acceleration ball" is sent to Xiaoming, the visual special effect of the spherical display object controlled by Xiaoming changes, and the distance Xiaoming can subsequently control the spherical display object to be displaced through voice becomes larger; meanwhile, the chat interface in the lower half of the terminal interface displays the text "spectator A sent Xiaoming an acceleration ball" together with an image of the "acceleration ball".
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
Fig. 8 shows a flowchart of a complete game play in a live game-play scene according to an embodiment of the present disclosure. In this embodiment, the display object in the game is a spherical display object; the participating players are the successfully matched anchors; the off-game users are the spectators watching the live broadcast.
Specifically, an anchor clicks the match entry to request a game match. If the matching fails, the anchor's terminal displays a text prompt asking the anchor to continue matching or close the match entry; if the matching succeeds, the game-play system matches opponents (i.e., the other participating players) for the anchor.
After the game play begins, whenever the anchor acts as the target player, the anchor controls the ball through the volume of a voice instruction. If the ball moves into the scoring areas of other participating players (such as the large and small semicircular areas shown in fig. 6), the anchor correspondingly scores 5 or 2 points; if the ball does not move into the scoring area of another participating player, the anchor does not score. During this process, a spectator may send a gift to the currently acting participating player (i.e., the target player), thereby indirectly affecting the opponents.
After the game play ends, the background ranks the participating players according to their scores.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
The technical-side game-play process in an embodiment of the present disclosure is described below with reference to figs. 9 to 11.
FIG. 9 shows a flowchart of the matching of participating players in a live game-play scene according to an embodiment of the present disclosure. In this embodiment, the participating players are anchors who broadcast the game play live. An anchor sends a matching request to the background through the anchor's terminal; the background verifies whether the anchor meets the matching conditions. If so, the background establishes communication with the game-play system, creates a live broadcast room for the game, and adds the anchor and the other participating players (who are also anchors) to the live broadcast room; the anchor is then successfully matched.
After the anchor is successfully matched, the background sends a notification of the start of the game to all anchors in the live broadcast room and to all spectators of each anchor. When a spectator receives the notification, an entrance interface for entering the live broadcast room of the game pops up. When the spectator clicks the entrance interface, the spectator's terminal requests the background to enter the live broadcast room, whereupon the background adds the spectator to the live broadcast room and provides the corresponding live game-play service.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
FIG. 10 shows a flowchart of a participating player interacting with the background during a game play in a live game-play scene according to an embodiment of the present disclosure. In this embodiment, the participating player is a successfully matched anchor, and the display object in the game is a spherical display object. After the game play starts, the background issues prompt information indicating that it is the anchor's turn to act (for example, by highlighting the second area owned by the anchor). After receiving the prompt, the anchor's terminal starts audio acquisition to collect the anchor's audio; the anchor's terminal judges through voice detection whether the collected audio is a human voice; it then detects the volume of the audio judged to be a human voice and uploads the volume to the background; the background correspondingly controls the movement of the ball displayed in the game according to the volume.
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure. It can be understood that, after collecting the anchor's audio, the anchor's terminal may instead upload the audio to the background, where the background judges through voice detection whether the collected audio is a human voice and then performs volume detection on the audio judged to be a human voice to obtain its volume.
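The client-side volume-detection step in fig. 10 could, for instance, estimate a frame's level from its RMS amplitude before uploading it; the dB formula and the list-of-floats sample representation are assumptions, since the patent does not specify how volume is measured:

```python
import math

# Illustrative sketch of one possible volume-detection step: estimating
# the decibel level of an audio frame from its RMS amplitude. The
# formula and reference level are assumptions, not the patented method.

def frame_volume_db(samples, ref=1.0):
    """Rough decibel level of one audio frame from its RMS amplitude."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")  # silence
    return 20 * math.log10(rms / ref)

print(frame_volume_db([0.1, 0.1, 0.1, 0.1]))  # -20.0
```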
FIG. 11 shows a flowchart of an off-game user interacting with the background in a live game-play scene according to an embodiment of the present disclosure. In this embodiment, the off-game user is a spectator watching the live game play in its live broadcast room, and the display object in the game is a spherical display object. After the game play starts, the background sends a game-start notification to the spectators; upon receiving the notification, a spectator's terminal pops up an entrance interface for entering the live broadcast room, so that the spectator can enter it by clicking the entrance interface. While watching the live broadcast in the live broadcast room, the spectator can send gifts. After receiving a gift, the background attributes it to the currently acting anchor (i.e., the target player) and modifies the state of the ball (e.g., its shape or displacement distance) according to the type of the gift; the updated state of the ball and the anchors' scores are then synchronized to each terminal (including each anchor terminal and each spectator terminal).
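The backend gift flow in fig. 11 can be sketched as follows; the gift name, its doubling effect on the ball's displacement distance, and the dictionary-based ball state are hypothetical:

```python
# Hedged sketch of the backend gift flow: credit the gift to the
# currently acting anchor and adjust the ball's state by gift type.
# "acceleration_ball" and its effect are illustrative assumptions.

GIFT_EFFECTS = {
    # each effect mutates the ball state in place
    "acceleration_ball": lambda s: s.update(
        distance_weight=s["distance_weight"] * 2
    ),
}

def receive_gift(gift, acting_anchor, ball_state):
    """Apply the gift's effect and report the anchor it was credited to."""
    GIFT_EFFECTS[gift](ball_state)
    return acting_anchor, ball_state

ball = {"shape": "sphere", "distance_weight": 1.0}
anchor, ball = receive_gift("acceleration_ball", "Xiaoming", ball)
print(anchor, ball["distance_weight"])  # Xiaoming 2.0
```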
It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.
According to an embodiment of the present disclosure, as shown in fig. 12, there is also provided a data processing apparatus including:
a first obtaining module 510, configured to obtain the voice of at least one target player currently capable of controlling the state change of a display object located in a first area in a game play, in which at least two participating players participate, the at least two participating players including the at least one target player;
a second obtaining module 520, configured to obtain attribute expressions of the voices of the at least one target player on preset voice attributes, respectively;
a determining module 530, configured to respectively determine, based on the attribute expressions, the target state changes that the at least one target player can control the display object to perform;
a control module 540 configured to control the display object to perform a state change in the first area based on the at least one target state change.
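A minimal runnable sketch of how the four modules shown in fig. 12 might be chained; the stub bodies (fixed 80-decibel volume, divide-by-10 rule, summation) are placeholders for illustration, not the patented logic:

```python
# Minimal runnable sketch of the module pipeline of fig. 12.
# Stub implementations are illustrative placeholders only.

class DataProcessingApparatus:
    def acquire_voice(self, player):         # first obtaining module 510
        return {"player": player, "volume_db": 80}

    def extract_attribute(self, voice):      # second obtaining module 520
        return voice["volume_db"]

    def determine_change(self, attribute):   # determining module 530
        return attribute / 10                # displacement in unit lengths

    def control_display(self, changes):      # control module 540
        return sum(changes)                  # superposed state change

    def process(self, target_players):
        voices = [self.acquire_voice(p) for p in target_players]
        attributes = [self.extract_attribute(v) for v in voices]
        changes = [self.determine_change(a) for a in attributes]
        return self.control_display(changes)

print(DataProcessingApparatus().process(["Xiaoming", "Xiaogang"]))  # 16.0
```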
In an exemplary embodiment of the present disclosure, the state change includes: motion state change, appearance state change.
In an exemplary embodiment of the disclosure, the apparatus is configured to: determining each of the participating players as one of the target players.
In an exemplary embodiment of the disclosure, the at least two participating players each have a corresponding second area in the first area, and the apparatus is configured to: determine a participating player whose corresponding second area overlaps the display object as the target player.
In an exemplary embodiment of the disclosure, the first area further includes a preset initial area where the display object is located when the game starts, and the apparatus is configured to: at the start of the game, each of the participating players is determined to be one of the target players.
In an exemplary embodiment of the disclosure, the at least two participating players each have a corresponding second area in the first area, and the apparatus is configured to: send warning information to the participating player whose corresponding second area overlaps with the display object.
In an exemplary embodiment of the present disclosure, the first obtaining module 510 is configured to:
respectively acquiring the audio of the at least one target player;
and respectively carrying out voice detection on the audio of the at least one target player so as to obtain the voice instruction of the at least one target player.
In an exemplary embodiment of the present disclosure, the determining module 530 is configured to:
for each target player, determining a state scalar of the target state change that the target player's voice instruction can control the display object to perform, based on the attribute expression corresponding to the target player;
and for each target player, determining the target state change that the target player's voice instruction can control the display object to perform, based on a preset direction and the state scalar.
In an exemplary embodiment of the disclosure, the apparatus is configured to:
respectively acquiring virtual resources received by the at least one target player;
and respectively determining target state change which can be controlled by the voice instruction of the at least one target player and is performed by the display object based on the attribute representation and the virtual resource.
In an exemplary embodiment of the present disclosure, the determining module 530 is configured to:
for each of the target players, determining a state scalar weight pre-assigned to a virtual resource received by the target player;
for each target player, determining a state scalar of the target state change that the target player's voice instruction can control the display object to perform, based on the attribute expression and the state scalar weight corresponding to the target player;
and for each target player, determining the target state change that the target player's voice instruction can control the display object to perform, based on a preset direction and the state scalar.
In an exemplary embodiment of the present disclosure, the source of the virtual resource includes: the participating players in the game, the users outside the game.
In an exemplary embodiment of the present disclosure, the control module 540 is configured to:
superposing the target state changes that the voice instructions of the at least one target player can control the display object to perform, to acquire the corresponding superposed state change;
and controlling the display object to change the state in the first area based on the superposition state change.
In an exemplary embodiment of the disclosure, the at least two participating players each have a corresponding camera area, each camera area displaying the video picture of the corresponding participating player in real time, and the apparatus is configured to: if a preset gesture on a camera area is detected, capture the video picture of the corresponding participating player.
Data processing electronics 60 according to an embodiment of the present disclosure is described below with reference to fig. 13. The data processing electronics 60 shown in fig. 13 is only an example and should not impose any limitations on the functionality or scope of use of embodiments of the disclosure.
As shown in fig. 13, the data processing electronics 60 is embodied in the form of a general purpose computing device. The components of the data processing electronics 60 may include, but are not limited to: the at least one processing unit 610, the at least one memory unit 620, and a bus 630 that couples the various system components including the memory unit 620 and the processing unit 610.
Wherein the storage unit stores program code executable by the processing unit 610 to cause the processing unit 610 to perform steps according to various exemplary embodiments of the present invention described in the description part of the above exemplary methods of the present specification. For example, the processing unit 610 may perform various steps as shown in fig. 3.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The data processing electronic device 60 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the data processing electronic device 60, and/or with any device (e.g., router, modem, etc.) that enables the data processing electronic device 60 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. An input/output (I/O) interface 650 is connected to the display unit 640. Also, the data processing electronics 60 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 660. As shown, the network adapter 660 communicates with the other modules of the data processing electronics 60 via the bus 630. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the data processing electronics 60, including but not limited to: microcode, device controllers, redundant processing units, external disk control arrays, RAID systems, tape controllers, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.
According to an embodiment of the present disclosure, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (15)

1. A method of data processing, the method comprising:
obtaining a voice instruction of at least one target player for controlling a state change of a display object located in a first area in a game play, in which at least two participating players exist, the at least two participating players including the at least one target player;
respectively acquiring attribute expressions of the voice instruction of the at least one target player on preset voice attributes;
respectively determining target state changes which can be controlled by the voice instructions of the at least one target player to the display object based on the attribute performances;
and controlling the display object to change the state in the first area based on the target state change which can be controlled by the voice instruction of the at least one target player.
2. The method of claim 1, wherein the state change comprises: motion state change, appearance state change.
3. The method of claim 1, wherein before obtaining the voice instruction of the at least one target player for controlling the state change of the display object located in the first area in the game play, the method comprises: determining each of the participating players as one of the target players.
4. The method of claim 1, wherein the at least two participating players each have a corresponding second area in the first area; and
before obtaining the voice instruction of the at least one target player for controlling the state change of the display object located in the first area of the game, the method comprises: determining a participating player whose corresponding second area overlaps the display object as the target player.
5. The method of claim 4, wherein the first area further comprises a preset initial area in which the display object is located when the game starts; and
the method further comprises: when the game starts, determining each of the participating players as a target player.
6. The method of claim 1, wherein the at least two participating players each have a corresponding second area in the first area; and
the method further comprises: sending warning information to a participating player whose second area overlaps the display object.
7. The method of claim 1, wherein respectively determining, based on the attribute representation, the target state change of the display object that the voice instruction of the at least one target player can control comprises:
for each target player, determining, based on the attribute representation corresponding to the target player, a state scalar of the target state change of the display object that the voice instruction of the target player can control; and
for each target player, determining, based on a preset direction and the state scalar, the target state change of the display object that the voice instruction of the target player can control.
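The two steps of claim 7 can be sketched as follows: first a state scalar is derived from the attribute representation, then the scalar is combined with a preset direction to give the target state change. The volume-to-scalar gain and the 2-D direction vector are illustrative assumptions, not values from the patent.

```python
# Sketch of claim 7; the gain constant and 2-D direction are assumptions.

def state_scalar(volume: float, gain: float = 0.05) -> float:
    """State scalar of the target state change, derived from the
    attribute representation (assumed: volume)."""
    return volume * gain

def target_state_change(volume: float, direction: tuple) -> tuple:
    """Combine the preset direction with the state scalar to obtain the
    target state change of the display object."""
    s = state_scalar(volume)
    return (direction[0] * s, direction[1] * s)

# Preset direction (0, 1): the display object is pushed straight "up"
# by an amount that grows with the target player's volume.
print(target_state_change(40.0, (0.0, 1.0)))
```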
8. The method of claim 1, further comprising: respectively acquiring a virtual resource received by the at least one target player;
wherein respectively determining, based on the attribute representation, the target state change of the display object that the voice instruction of the at least one target player can control comprises: respectively determining, based on the attribute representation and the virtual resource, the target state change of the display object that the voice instruction of the at least one target player can control.
9. The method of claim 8, wherein respectively determining, based on the attribute representation and the virtual resource, the target state change of the display object that the voice instruction of the at least one target player can control comprises:
for each target player, determining a state scalar weight pre-assigned to the virtual resource received by the target player;
for each target player, determining, based on the attribute representation and the state scalar weight corresponding to the target player, a state scalar of the target state change of the display object that the voice instruction of the target player can control; and
for each target player, determining, based on a preset direction and the state scalar, the target state change of the display object that the voice instruction of the target player can control.
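Claims 8 and 9 add virtual resources to the scalar computation: each resource a target player receives carries a pre-assigned state scalar weight that scales the scalar derived from the voice attribute. The weight table, the gift names, and the gain below are illustrative assumptions only.

```python
# Sketch of claims 8-9; weight values, resource names, and the gain
# are assumptions for illustration, not values from the patent.

RESOURCE_WEIGHTS = {"small_gift": 1.1, "big_gift": 1.5}  # pre-assigned weights

def weighted_state_scalar(volume: float, resources: list, gain: float = 0.05) -> float:
    """State scalar derived from the attribute representation, scaled by
    the weights of the virtual resources the target player received."""
    scalar = volume * gain
    for resource in resources:
        scalar *= RESOURCE_WEIGHTS.get(resource, 1.0)  # unknown resources carry no weight
    return scalar

# A target player speaking at volume 40 who received one "big_gift"
# exerts a 1.5x stronger effect on the display object.
print(weighted_state_scalar(40.0, ["big_gift"]))
```

Per claim 10, the resources could come from other participating players or from users outside the game (e.g. spectators), which is why the weights live in a shared table rather than per-player state.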
10. The method of claim 8, wherein a source of the virtual resource comprises: a participating player in the game and a user outside the game.
11. The method of claim 1, wherein controlling the display object to change state in the first area based on the target state change that the voice instruction of the at least one target player can control comprises:
superposing the target state changes of the display object that the voice instructions of the at least one target player can control, to obtain a corresponding superposed state change; and
controlling the display object to change state in the first area based on the superposed state change.
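The superposition in claim 11 can be sketched as a plain vector sum of per-player state changes; the tug-of-war arrangement with opposing directions is an assumption chosen to make the effect visible, not a detail from the claim.

```python
# Sketch of claim 11; superposition modeled as a 2-D vector sum.

def superpose(changes):
    """Superpose the per-player target state changes into one combined
    state change applied to the display object."""
    return (sum(c[0] for c in changes), sum(c[1] for c in changes))

# Two target players push the display object in opposite directions;
# the net change favours the side whose voice commands a larger scalar.
combined = superpose([(3.0, 0.0), (-2.0, 0.0)])
print(combined)  # (1.0, 0.0)
```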
12. The method of claim 1, wherein the at least two participating players each have a corresponding camera area that displays a video picture of the corresponding participating player in real time; and
the method further comprises: if a preset gesture on one camera area is detected, capturing the video picture of the corresponding participating player.
13. A data processing apparatus, characterized in that the apparatus comprises:
a first obtaining module configured to obtain a voice instruction of at least one target player for controlling a state change of a display object located in a first area of a game, wherein at least two participating players take part in the game, the at least two participating players including the at least one target player;
a second obtaining module configured to respectively acquire an attribute representation of the voice instruction of the at least one target player on a preset voice attribute;
a determining module configured to respectively determine, based on the attribute representation, a target state change of the display object that the voice instruction of the at least one target player can control; and
a control module configured to control the display object to change state in the first area based on the target state change that the voice instruction of the at least one target player can control.
14. An electronic device for data processing, comprising:
a memory storing computer-readable instructions; and
a processor configured to read the computer-readable instructions stored in the memory to perform the method of any one of claims 1 to 12.
15. A computer-readable storage medium having computer-readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1 to 12.
CN202010080399.4A 2020-02-05 2020-02-05 Data processing method, device, electronic equipment and storage medium Active CN111265851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010080399.4A CN111265851B (en) 2020-02-05 2020-02-05 Data processing method, device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111265851A true CN111265851A (en) 2020-06-12
CN111265851B CN111265851B (en) 2023-07-04

Family

ID=70992205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010080399.4A Active CN111265851B (en) 2020-02-05 2020-02-05 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111265851B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110207532A1 (en) * 2008-08-22 2011-08-25 Konami Digital Entertainment Co.,Ltd. Game device, method for controlling game device, program, and information storage medium
US20110230258A1 (en) * 2010-03-16 2011-09-22 Andrew Van Luchene Computer Controlled Video Game Incorporating Constraints
CN107148614A (en) * 2014-12-02 2017-09-08 索尼公司 Message processing device, information processing method and program
CN109395376A (en) * 2018-11-06 2019-03-01 网易(杭州)网络有限公司 Exchange method, device and system based on game live streaming
CN110211585A (en) * 2019-06-05 2019-09-06 广州小鹏汽车科技有限公司 In-car entertainment interactive approach, device, vehicle and machine readable media


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489638A (en) * 2020-11-13 2021-03-12 北京捷通华声科技股份有限公司 Voice recognition method, device, equipment and storage medium
CN112489638B (en) * 2020-11-13 2023-12-29 北京捷通华声科技股份有限公司 Voice recognition method, device, equipment and storage medium
CN114697685A (en) * 2020-12-25 2022-07-01 腾讯科技(深圳)有限公司 Comment video generation method, comment video generation device, server and storage medium
CN114697685B (en) * 2020-12-25 2023-05-23 腾讯科技(深圳)有限公司 Method, device, server and storage medium for generating comment video

Also Published As

Publication number Publication date
CN111265851B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
WO2021114881A1 (en) Intelligent commentary generation method, apparatus and device, intelligent commentary playback method, apparatus and device, and computer storage medium
CN110446115B (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN111683263B (en) Live broadcast guiding method, device, equipment and computer readable storage medium
CN112040263A (en) Video processing method, video playing method, video processing device, video playing device, storage medium and equipment
KR102037419B1 (en) Image display apparatus and operating method thereof
US20220044693A1 (en) Internet calling method and apparatus, computer device, and storage medium
JP2022505718A (en) Systems and methods for domain adaptation in neural networks using domain classifiers
CN111615002B (en) Video background playing control method, device and system and electronic equipment
CN113301358B (en) Content providing and displaying method and device, electronic equipment and storage medium
US20230182028A1 (en) Game live broadcast interaction method and apparatus
US11418848B2 (en) Device and method for interactive video presentation
CN111265851B (en) Data processing method, device, electronic equipment and storage medium
CN113537056A (en) Avatar driving method, apparatus, device, and medium
US20230335121A1 (en) Real-time video conference chat filtering using machine learning models
CN111667728B (en) Voice post-processing module training method and device
CN113750523A (en) Motion generation method, device, equipment and storage medium for three-dimensional virtual object
CN114501064B (en) Video generation method, device, equipment, medium and product
CN114333774B (en) Speech recognition method, device, computer equipment and storage medium
JP2023059937A (en) Data interaction method and device, electronic apparatus, storage medium and program
CN112785669B (en) Virtual image synthesis method, device, equipment and storage medium
CN116229311B (en) Video processing method, device and storage medium
US20230030502A1 (en) Information play control method and apparatus, electronic device, computer-readable storage medium and computer program product
CN116614665A (en) Video interactive play system for interacting with personas in video
CN113571063A (en) Voice signal recognition method and device, electronic equipment and storage medium
CN112533009A (en) User interaction method, system, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant