CN110598542A - Game interaction method, system, server and storage device based on expression recognition - Google Patents


Info

Publication number
CN110598542A
CN110598542A (application CN201910718528.5A)
Authority
CN
China
Prior art keywords
information
game
preset
expression
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910718528.5A
Other languages
Chinese (zh)
Inventor
陈成
王天旸
王啸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201910718528.5A priority Critical patent/CN110598542A/en
Publication of CN110598542A publication Critical patent/CN110598542A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21: Server components or server architectures
    • H04N21/218: Source of audio or video content, e.g. local disk arrays
    • H04N21/2187: Live feed
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781: Games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a game interaction method, system, server and storage device based on expression recognition. The game interaction method is applied to a live broadcast system that includes at least an anchor terminal and a server, and comprises the following steps: the server acquires game request information from the anchor terminal; the server generates game interface information according to the game request information and sends it to the anchor terminal, the game interface information including a plurality of pieces of preset expression information; the server acquires the anchor's face information from the anchor terminal and matches it against the preset expression information; and if the face information is successfully matched with a piece of preset expression information, the server eliminates the corresponding preset expression from the game interface. In this way, live game interaction based on expression recognition can be realized.

Description

Game interaction method, system, server and storage device based on expression recognition
Technical Field
The application relates to the technical field of live broadcasting, and in particular to a game interaction method, system, server and storage device based on expression recognition.
Background
Facial expression recognition is generally divided into recognition of facial actions and recognition of emotions. For example, some researchers identify single and mixed action units from facial expressions based on the Facial Action Coding System, while most researchers identify emotions such as happiness, surprise, sadness and fear from facial expressions. Because facial expression changes are non-rigid motion and are affected by individual differences, viewing-angle changes, illumination and the like, facial expression recognition is a difficult task, and at present few facial expression recognition systems can be applied in real environments.
With the development of internet technology and intelligent devices, live broadcast platforms offer diversified content, such as online entertainment or game live broadcasts. However, current live broadcast technology does not provide game interaction based on expression recognition.
Disclosure of Invention
The present application mainly solves the technical problem of providing a game interaction method, system, server and storage device based on expression recognition, which can realize live game interaction based on expression recognition.
In order to solve the above technical problem, the application provides a game interaction method based on expression recognition. The game interaction method is applied to a live broadcast system that includes at least an anchor terminal and a server, and comprises the following steps: the server acquires game request information from the anchor terminal; the server generates game interface information according to the game request information and sends it to the anchor terminal, the game interface information including a plurality of pieces of preset expression information; the server acquires the anchor's face information from the anchor terminal and matches it against the preset expression information; and if the face information is successfully matched with a piece of preset expression information, the server eliminates the corresponding preset expression from the game interface.
The game interface information includes a display rule for the plurality of preset expressions on the game interface, and the display rule includes: dividing the game interface into a plurality of display areas extending along the longitudinal axis of the game interface; and displaying the plurality of pieces of expression information in non-adjacent display areas at a preset time and in a preset sequence.
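The display rule above can be sketched as follows. This is a minimal illustration, not the patented implementation: the column-picking strategy and the one-expression-per-interval schedule are assumptions drawn from the example embodiments later in the description.

```python
import random

def pick_columns(n_columns: int, n_expressions: int) -> list[int]:
    """Assign each expression a display column so that consecutive
    expressions never appear in adjacent columns."""
    columns: list[int] = []
    prev = None
    for _ in range(n_expressions):
        # Candidate columns are those at least two positions away
        # from the previously used column.
        candidates = [c for c in range(n_columns)
                      if prev is None or abs(c - prev) > 1]
        prev = random.choice(candidates)
        columns.append(prev)
    return columns

def display_schedule(columns: list[int], interval: float = 1.0):
    """Pair each column with its display time; one expression appears
    per `interval` seconds (the 1-second default is an assumption)."""
    return [(i * interval, col) for i, col in enumerate(columns)]
```

With 4 columns this always leaves at least one legal candidate, so the picker never stalls.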
The step of acquiring the face information of the anchor from the anchor terminal and matching it with the preset expression information comprises: acquiring the face information of the anchor from the anchor terminal, and acquiring a plurality of pieces of key point information from the face information according to a convolutional neural network algorithm; acquiring relative position information of the plurality of pieces of key point information; and comparing the relative position information with the relative position information corresponding to the preset expression information.
The game interaction method further comprises: accumulating the number of eliminated preset expressions in the game interface; judging whether the number within a preset time is greater than or equal to a preset number; if so, acquiring another piece of game interface information corresponding to the game and sending it to the anchor terminal; if not, terminating the game.
The live broadcast system further includes a viewer terminal, and the step of acquiring the preset expression information comprises: acquiring reward information from the viewer terminal; acquiring preset expression information corresponding to the reward information; and updating the preset expression information corresponding to the reward information into the game interface information, so that the preset expression corresponding to the reward information is displayed on the game interface.
In order to solve the above technical problem, the application provides a live broadcast system. The live broadcast system includes at least an anchor terminal and a server. The anchor terminal sends game request information to the server; the server acquires the game request information, generates game interface information according to it, and sends the game interface information to the anchor terminal, the game interface information including a plurality of pieces of preset expression information; the anchor terminal obtains the game interface information and obtains the anchor's face information; the server acquires the face information and matches it against the preset expression information; and if the face information is successfully matched with a piece of preset expression information, the server eliminates the corresponding preset expression from the game interface.
In order to solve the above technical problem, the application provides a server. The server comprises a receiving module, a processing module and a sending module. The receiving module is used for acquiring game request information from the anchor terminal; the processing module is used for generating game interface information according to the game request information, the game interface information including a plurality of pieces of preset expression information; the sending module is used for sending the game interface information to the anchor terminal; the receiving module is further used for acquiring the anchor's face information from the anchor terminal; and the processing module is further used for matching the face information with the preset expression information and, when the matching succeeds, eliminating the corresponding preset expression from the game interface.
In order to solve the technical problem, the application provides a server. The server includes a processor and a communication circuit, the processor coupled with the communication circuit; the communication circuit is used for acquiring game request information from the anchor terminal; the processor is used for generating game interface information according to the game request information, wherein the game interface information comprises a plurality of pieces of preset expression information; the communication circuit is further used for sending the game interface information to the anchor terminal and acquiring the face information of the anchor from the anchor terminal; the processor is further used for matching the face information with the preset expression information and eliminating the preset expression corresponding to the successfully matched preset expression information from the game interface when the face information is successfully matched with the preset expression information.
In order to solve the technical problem, the application provides an electronic device. The electronic equipment comprises a processor and an image sensor, wherein the processor is used for realizing the game interaction method based on expression recognition.
In order to solve the above technical problem, the present application provides a device with a storage function. The device with the storage function stores program data which can be executed to realize the game interaction method based on expression recognition.
Compared with the prior art, the beneficial effects of the present application are as follows. The game interaction method based on expression recognition is applied to a live broadcast system that includes at least an anchor terminal and a server, and comprises the following steps: the server acquires game request information from the anchor terminal; the server generates game interface information according to the game request information and sends it to the anchor terminal, the game interface information including a plurality of pieces of preset expression information; the server acquires the anchor's face information from the anchor terminal and matches it against the preset expression information; and if the face information is successfully matched with a piece of preset expression information, the server eliminates the corresponding preset expression from the game interface. In this way, the anchor can play an expression recognition game through the live broadcast system, realizing live game interaction based on expression recognition.
Drawings
FIG. 1 is a schematic structural diagram of an embodiment of a live broadcast system of the present application;
FIG. 2 is a schematic flowchart illustrating an embodiment of a game interaction method based on expression recognition according to the present application;
FIG. 3 is a game interface diagram of the live system of the embodiment of FIG. 1;
FIG. 4 is a game interface diagram of the live system of the embodiment of FIG. 1;
FIG. 5 is a specific flowchart of step S203 in the embodiment of FIG. 2;
FIG. 6 is a schematic flowchart of an embodiment of a game interaction method based on expression recognition according to the present application;
FIG. 7 is a game interface diagram of the live system of the embodiment of FIG. 1;
FIG. 8 is a game interface diagram of the live system of the embodiment of FIG. 1;
FIG. 9 is a detailed flowchart of step S606 in the embodiment of FIG. 6;
FIG. 10 is a flowchart illustrating an embodiment of a game interaction method based on expression recognition according to the present application;
FIG. 11 is a specific flowchart of step S1003 in the embodiment of FIG. 10;
FIG. 12 is a schematic diagram of information interaction of the embodiment of FIG. 1;
FIG. 13 is a block diagram of an embodiment of a server of the present application;
FIG. 14 is a block diagram of an embodiment of a server of the present application;
FIG. 15 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 16 is a schematic structural diagram of an embodiment of an apparatus for implementing a storage function.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application first provides a game interaction method based on expression recognition, which can be applied to the live broadcast system 10 shown in fig. 1. The live broadcast system 10 includes at least an anchor terminal 11 and a server 13; during live broadcasting, the anchor terminal 11 establishes a connection with the server 13, so that the anchor terminal 11 performs live broadcasting through the server 13.
Further, the live broadcast system 10 of this embodiment also includes a viewer terminal 12, which establishes a connection with the server 13 so that viewers can watch the live broadcast through the server 13. The terminal device corresponding to the anchor terminal 11 or the viewer terminal 12 may be an electronic device such as a smartphone, tablet computer, notebook computer, desktop computer, or wearable device.
The anchor terminal 11 and the viewer terminal 12 may establish wireless connections with the server 13, such as Wi-Fi, Bluetooth, or ZigBee.
The live broadcast system 10 may include a plurality of viewer terminals 12, and the device types of the anchor terminal 11 and the various viewer terminals 12 may be the same or different.
Referring to fig. 2, fig. 2 is a flow chart illustrating an embodiment of a game interaction method based on expression recognition according to the present application. The game interaction method comprises the following steps:
step S201: the server 13 acquires game request information from the anchor terminal 11.
The game request information at least includes game related information, such as a game name or other identification information.
The anchor terminal 11 has, for example, corresponding live broadcast software installed, i.e., a live broadcast application or APP, hereinafter referred to as the live program. The anchor clicks the live program to start it and enters the game interface (i.e., the live interface corresponding to the live broadcast room) to begin broadcasting, and the anchor terminal 11 generates live process data during the broadcast. The viewer terminal 12 likewise has the live program installed and watches the broadcast by clicking it, also generating live process data. The live process data may include video stream data, voice data, text data, picture data, live broadcast ID, viewer ID, and the like.
A game icon is provided on the live interface in the non-game state, and the anchor terminal 11 can send game request information to the server 13 by clicking the game icon. The viewer terminal 12 can enter the live broadcast room of the anchor terminal 11 by clicking the game icon, watch the game interface during the live broadcast, and participate in the ongoing game interaction of the anchor terminal 11.
Step S202: the server 13 generates game interface information according to the game request information, and sends the game interface information to the anchor terminal 11, wherein the game interface information includes a plurality of pieces of preset emotion information.
Further, the game interface information is also sent to the viewer terminal 12, so that viewers can watch the game live broadcast.
After receiving the game interface information, the anchor terminal 11 and the viewer terminal 12 update it to the game interface.
For example, in the above manner, the anchor terminal 11 may start the expression recognition game by clicking the game icon of the expression recognition game "Xiao Le", so that the live interface switches to the expression recognition game interface, as shown in fig. 3.
Step S203: the server 13 acquires the face information of the anchor from the anchor terminal 11, and matches the face information with the preset expression information.
The terminal device corresponding to the anchor terminal 11 is provided with an image sensor (not shown), such as a camera. After receiving the game interface information sent by the server 13, the anchor terminal 11 starts the camera and captures the anchor's face information, as shown in fig. 4; the anchor terminal 11 then sends the face information to the server 13, which matches it against the preset expression information.
Alternatively, the present embodiment may implement step S203 by the method as shown in fig. 5. The method of the present embodiment includes steps S501 to S503.
Step S501: the method comprises the steps of obtaining the face information of a anchor from an anchor terminal 11, and obtaining a plurality of key point information of the face information according to a convolutional neural network algorithm.
For example, the server 13 acquires the human face information of the anchor from the anchor terminal 11, and respectively acquires the coordinate information of the human face information at the key points according to the convolutional neural network algorithm.
The key points are points with obvious changes when the facial expression changes, such as the positions of the corners of the mouth, the lips, the corners of the eyes, eyelids, eyebrows and the like.
The method for acquiring the key point information according to the convolutional neural network algorithm is not described here.
Step S502: and acquiring relative position information of a plurality of key point information.
The server 13 acquires the relative position information of the key points according to the coordinate information of the key points. The relative positional relationship may include an abstract positional relationship, such as up, down, left, right, left-up, right-down, etc., and the relative positional relationship may also include a concrete positional relationship, such as up-left 30 degrees, etc.
Step S503: and comparing the relative position information with the relative position information of the preset expression information.
The server 13 compares the relative position information with the relative position information corresponding to the preset expression information. If the relative position information of the face key points of the anchor is the same as or similar to the relative position information corresponding to the preset expression information, or the number of the same or similar relative position information exceeds the preset number, the face information of the anchor is considered to be matched with the preset expression information; and if not, determining that the face information of the anchor is not matched with the preset expression information.
The preset expression information includes expression information such as happy, sad, joyful and melancholy.
Of course, in other embodiments, other face recognition algorithms and matching methods may be used to implement step S203, such as an image similarity algorithm.
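The relative-position comparison in steps S501 to S503 can be sketched as follows. This is a minimal illustration under assumptions: key points are reduced to coarse left/right and above/below relations, and the 80% agreement threshold is a placeholder, not a value from the patent.

```python
def relative_positions(keypoints: dict) -> dict:
    """Derive coarse pairwise relations from key point coordinates,
    e.g. whether one key point lies left of / above another."""
    relations = {}
    names = sorted(keypoints)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay = keypoints[a]
            bx, by = keypoints[b]
            relations[(a, b)] = ("left" if ax < bx else "right",
                                 "above" if ay < by else "below")
    return relations

def matches_preset(face_kp: dict, preset_kp: dict,
                   min_agreement: float = 0.8) -> bool:
    """Match if the fraction of identical pairwise relations between
    the anchor's key points and the preset expression's key points
    meets the threshold (threshold value is an assumption)."""
    face_rel = relative_positions(face_kp)
    preset_rel = relative_positions(preset_kp)
    shared = set(face_rel) & set(preset_rel)
    if not shared:
        return False
    agree = sum(face_rel[k] == preset_rel[k] for k in shared)
    return agree / len(shared) >= min_agreement
```

The key point names and coordinates would in practice come from the convolutional neural network mentioned in step S501.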
Step S204: if the face information is successfully matched with the preset expression information, the server 13 eliminates the preset expression corresponding to the successfully matched preset expression information from the game interface.
And if the matching of the face information and the preset expression information fails, no processing is performed.
When the face information is successfully matched with a piece of preset expression information, the server 13 deletes that preset expression information from the game interface information to update it, and sends the updated game interface information to the anchor terminal 11 and the viewer terminal 12, so that the matched preset expression disappears from the game interfaces of both.
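The elimination step can be sketched as a server-side state update that is then pushed to the anchor and viewer terminals. The dictionary layout is a hypothetical representation of the game interface information, not the patent's actual data format.

```python
def eliminate_expression(interface: dict, matched_id: int) -> dict:
    """Remove the matched preset expression from the interface state
    and bump the eliminated counter; the updated state would then be
    sent to the anchor terminal and viewer terminals."""
    interface["expressions"] = [
        e for e in interface["expressions"] if e["id"] != matched_id]
    interface["eliminated"] = interface.get("eliminated", 0) + 1
    return interface
```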
Different from the prior art, the server of the embodiment can acquire the face information of the anchor from the anchor terminal, match the face information with the preset expression information, and eliminate the preset expression corresponding to the successfully matched preset expression information from the game interface when the face information is successfully matched with the preset expression information, so that the anchor can perform expression recognition game through a live broadcast system, and the live broadcast game interaction based on expression recognition can be realized.
The game interface information of the embodiment includes a display rule of a plurality of preset expressions on the game interface.
The server 13 obtains the plurality of pieces of preset expression information, and the display rule for the corresponding preset expressions on the game interface, from a memory (not shown). The display rule may include the initial position, moving direction, presentation mode and the like of the preset expressions presented on the game interface. For example, the preset expressions may be rendered from the bottom of the game interface upward over a preset period, producing a drifting effect (as shown in fig. 8).
The preset expressions may be randomly selected by the server 13, or may be paid or free expressions selected by the viewer terminal 12 (as shown in fig. 7).
In one embodiment, as shown in fig. 6, the display rule includes step S601 and step S602.
Step S601: the game interface is divided into a plurality of display areas extending along a longitudinal axis of the game interface.
For example, the game interface is divided into 4 display areas.
Step S602: and displaying a plurality of expression information on non-adjacent display areas according to the preset time and the preset sequence.
For example, after the game starts, a first expression is randomly displayed in one of the display areas, and a second expression is displayed in a display area separated from the first by at least one area. The server 13 may control the time interval between the appearance of preset expressions, for example one expression per second.
The present application further proposes another embodiment of the game interaction method based on expression recognition. As shown in fig. 9, this embodiment further includes steps S901 to S903 on the basis of the embodiment of fig. 2.
Step S901: accumulating the number of the eliminated preset expressions in the game interface;
If the server 13 determines that the anchor's face information is successfully matched with a piece of preset expression information, it deletes that preset expression information from the game interface information and accumulates the number of deleted preset expressions.
To enhance the interest of the game, different sound information may be loaded to the anchor terminal 11 (and the viewer terminal 12) depending on whether the expression matching succeeds or fails.
Step S902: update the accumulated number to the game interface information.
Step S903: send the updated game interface information to the anchor terminal 11.
Further, the updated game interface information is also sent to the viewer terminal 12.
The server 13 updates the number of eliminated preset expressions into the game interface information and sends the updated game interface information to the anchor terminal 11 and the viewer terminal 12, so that the anchor and the audience can view the game progress in real time on the live interface.
Further, the method of this embodiment may also include: acquiring treasure box information corresponding to the number; updating the treasure box information into the game interface information; and sending the updated game interface information to the anchor terminal 11 and the viewer terminal 12. The treasure box information may include expressions the anchor terminal 11 failed to recognize, or opportunities and content for interaction between the viewer terminal 12 and the anchor terminal 11, for example the right to select a preset expression; different numbers correspond to different treasure box information.
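Since different elimination counts correspond to different treasure box information, a tier lookup is one natural sketch. The thresholds and box names below are purely hypothetical placeholders; the patent does not specify concrete values.

```python
# Hypothetical tier table: higher eliminated counts unlock richer boxes.
TIERS = [(5, "bronze box"), (10, "silver box"), (20, "gold box")]

def treasure_box_for(eliminated_count: int):
    """Return the richest treasure box whose threshold the eliminated
    count meets, or None if no tier is reached."""
    box = None
    for threshold, name in TIERS:
        if eliminated_count >= threshold:
            box = name
    return box
```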
The present application further proposes another embodiment of the game interaction method based on expression recognition, which can be used in the above live broadcast system 10, as shown in fig. 10 and 12. The method of this embodiment further includes steps S1001 to S1004 on the basis of the embodiment of fig. 2.
Step S1001: and accumulating the number of the eliminated preset expressions in the game interface.
Step S1001 is similar to step S901 and is not described here.
Step S1002: and judging whether the number in the preset time is greater than or equal to the preset number.
The preset time may be, for example, 15 s or 45 s, and the preset number may be 5 or 15.
The anchor terminal 11 or the viewer terminal 12 may set or select the preset time and the preset number.
Step S1003: if the number is greater than or equal to the preset number, another game interface information corresponding to the game is acquired, and the another game interface information is sent to the anchor terminal 11.
Further transmits another game interface information to the spectator terminal 12.
If the server 13 determines that the number of deleted preset expressions, that is, the number of expressions eliminated from the live interface, is greater than or equal to the preset number, it determines that the level has been passed successfully. The server 13 then acquires the game interface information of the next level of the expression recognition game and sends it to the anchor terminal 11 and the viewer terminal 12, so that the anchor plays the next level.
In the next level, the server 13 may increase the number of preset expressions on the game interface or shorten the preset time.
Step S1004: if the number is less than the preset number, the game is terminated.
If the server 13 determines that the number of eliminated preset expressions is smaller than the preset number, it determines that the game has failed and terminates the game.
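The pass/fail decision of steps S1002 to S1004, together with the next-level difficulty adjustment, could be sketched as follows; the exact increments (+5 expressions, -5 s) are assumptions:

```python
def check_level_result(eliminated_count, preset_count):
    """Steps S1002-S1004: once the preset time has elapsed, advance to
    the next level when enough expressions were eliminated, otherwise
    terminate the game."""
    return "advance" if eliminated_count >= preset_count else "terminate"

def next_level_params(preset_count, preset_time_s):
    """The next level may raise the expression count or shorten the time
    window; the step sizes used here are illustrative assumptions."""
    return preset_count + 5, max(5, preset_time_s - 5)
```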
Optionally, the method of the present embodiment includes steps S1101 to S1103.
Step S1101: obtaining the reward information from the spectator terminal 12;
Step S1102: acquiring preset expression information corresponding to the reward information;
Step S1103: updating the preset expression information corresponding to the reward information into the game interface information, so that the preset expression corresponding to the reward information is displayed on the game interface.
In this way, the interactive experience between the anchor and the audience can be enhanced.
For example, the audience terminal 12 may receive gift information from a viewer; when the server 13 receives the gift information from the audience terminal 12, it adds the corresponding paid expression information or free expression information to the game interface information according to the gift information.
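Steps S1101 to S1103 could be sketched as a small server-side handler; the dictionary layout and the `gift_id` field are assumptions for illustration:

```python
def on_reward(server_state, reward_info):
    """S1101-S1103: look up the preset expression bound to the gift and
    append it to the game interface information, so that the expression
    (with the gifting viewer's nickname) is displayed on the interface."""
    expression = server_state["reward_to_expression"].get(reward_info["gift_id"])
    if expression is None:
        return server_state["interface_info"]  # unknown gift: no change
    server_state["interface_info"]["expressions"].append({
        "expression": expression,
        "nickname": reward_info.get("nickname"),  # shown below the expression
        "paid": reward_info.get("paid", False),
    })
    return server_state["interface_info"]
```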
In an application scenario, the server 13 may divide the expression recognition game into a first stage and a second stage. In the first stage, the server 13 loads expressions into the live broadcast interface at a low frequency so that the anchor and the audience can become familiar with the game. The server 13 renders individual expressions floating into the live broadcast interface from bottom to top, controlling each expression to enter the recognition circle at 1.5 s and to leave it at 2.5 s. If an expression is recognized close to the center point of the recognition circle at the 2 s mark, the recognition is judged "perfect" and 15 points are added; other recognitions inside the circle are judged "good" and 10 points are added; expressions outside the recognition circle are not recognized and score nothing. When an expression is recognized in the "perfect" state, an elimination sound effect and a filter-shake effect are played. The server 13 may judge whether the next rank can be entered according to the total score within the preset time.
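One way to read the first-stage scoring is spatially, by the expression's distance from the recognition circle's center at the moment of recognition; this is an interpretation, and the "perfect" radius is an assumed parameter:

```python
import math

def score_recognition(expr_xy, circle_xy, circle_radius, perfect_radius):
    """'Perfect' (15 pts) near the circle center, 'good' (10 pts) anywhere
    else inside the recognition circle, no score outside it."""
    d = math.dist(expr_xy, circle_xy)  # distance to the circle center
    if d <= perfect_radius:
        return "perfect", 15
    if d <= circle_radius:
        return "good", 10
    return "miss", 0
```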
In the second stage, the server 13 may render, at the 15 s mark, an explosion effect covering the area of the anchor's face and the recognition circle on the live interface, while the game background remains unchanged. The server 13 may also control the plurality of expressions to enter the live interface at a plurality of vertical coordinates, where the difference between each vertical coordinate and the boundary coordinate of the live interface is greater than a preset difference, so that the user's attention is not drawn away from the live interface.
In an application scenario, the preset expressions may include system expressions, expressions freely presented by the audience, and expressions paid for by the audience. The server 13 may control the appearance of the plurality of preset expressions as follows: in the first 15 s of the game, single system expressions slowly drift into the live broadcast interface; after 15 s, the falling quantity is dynamically adjusted according to the current number of viewers in the live broadcast channel. The server 13 may control the positions where the preset expressions appear: the appearance area is divided into 4 runways (i.e., display areas); after the formal game starts, the first expression drifts out of a random runway, and the second expression drifts out of a position separated from the first by one runway. The server 13 may control the time interval between appearances: one expression appears every second.
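The appearance schedule above (a single system expression during the first 15 s, then a quantity scaled to the audience, one expression per second) might be sketched like this; the audience scaling law and the cap are assumptions, as the text only says the quantity is "dynamically adjusted":

```python
APPEAR_INTERVAL_S = 1.0  # one expression appears every second

def wave_size(viewer_count, elapsed_s, warmup_s=15):
    """Expressions to drop in the next wave: one at a time during the
    warm-up, afterwards scaled to the current viewer count (assumed law,
    capped at 4, one per runway)."""
    if elapsed_s < warmup_s:
        return 1
    return min(4, 1 + viewer_count // 50)
```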
A free expression carries the presenting viewer's nickname below it. Its appearance position is determined as follows: the appearance area is divided into 4 runways; the first free expression appears from a random runway but is mutually exclusive with the appearance position of the system expressions, that is, the first free expression and the first system expression cannot appear from the same runway; the second expression appears separated from the first by one runway, and so on. After an expression is presented, the free-expression button is grayed out and a countdown appears; no further free expression can be presented until the cooldown ends, and the cooldown duration is determined according to the number of people in the anchor's room.
A paid expression likewise carries the viewer's nickname below it. When expressions are presented in a combo, they appear singly one after another. Their appearance positions: the recognition area is divided into 4 runways; the first expression appears from a random runway, the second is separated from the first by one runway, and so on; however, the runway of the first paid expression is mutually exclusive with those of the other two expression kinds, that is, their positions cannot coincide.
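The runway rules shared by the three expression kinds (random start, next expression of the same kind separated by one runway, mutual exclusion across kinds) could be sketched as:

```python
import random

def pick_runway(prev_runway, excluded=(), n_runways=4):
    """Pick a runway index: at least one runway away from the previous
    expression of the same kind, and not among the runways occupied by
    the other expression kinds (mutual exclusion)."""
    candidates = [r for r in range(n_runways)
                  if r not in excluded
                  and (prev_runway is None or abs(r - prev_runway) >= 2)]
    if not candidates:
        # fall back to honouring only the mutual-exclusion constraint
        candidates = [r for r in range(n_runways) if r not in excluded]
    return random.choice(candidates)
```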
Further, the server 13 may control the appearance speed of the three kinds of expressions: each expression takes 5 seconds in total to travel from bottom to top; alternatively, paid expressions move faster than free expressions, which in turn move faster than system expressions. When the number of people in the live broadcast room exceeds 100, system expressions no longer appear. The server 13 may control the queuing logic of the three expression kinds so that, while the anchor has not completed the recognition task, expressions the anchor has not yet recognized appear first. When the total number of the three kinds of expressions on the same screen reaches 4 × 4 = 16 (that is, at most 4 expressions per runway at the same time), further gifted expressions are held in the background; when the number of background expressions exceeds a preset number, the server 13 accelerates the appearance speed on each runway. Viewers cannot gift expressions after the game ends, but the server 13 reserves 5 seconds to recognize expressions that were sent by the audience but not yet recognized.
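The 16-expression on-screen cap and background queue described above might look like this; the speed-up factor and backlog threshold are assumptions:

```python
from collections import deque

class ExpressionQueue:
    """At most 4 expressions per runway (16 on screen); extra gifted
    expressions wait in the background, and runway speed is raised once
    the backlog exceeds a threshold (threshold and factor are assumed)."""

    def __init__(self, per_runway_cap=4, n_runways=4, speedup_backlog=8):
        self.cap = per_runway_cap * n_runways  # 4 x 4 = 16
        self.on_screen = []
        self.backlog = deque()
        self.speedup_backlog = speedup_backlog

    def offer(self, expression):
        if len(self.on_screen) < self.cap:
            self.on_screen.append(expression)
        else:
            self.backlog.append(expression)  # hold in the background

    def on_eliminated(self, expression):
        self.on_screen.remove(expression)
        if self.backlog:  # promote a queued expression onto a runway
            self.on_screen.append(self.backlog.popleft())

    def speed_factor(self):
        return 1.5 if len(self.backlog) > self.speedup_backlog else 1.0
```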
Further, the server 13 controls the expressions in the live interface to be semi-transparent; the server 13 sets a filter-shake sound effect, a countdown sound effect, and the like for expressions recognized in the "perfect" state in the first stage; and the server 13 provides the viewer with functions such as swiping right to clear the screen.
As can be seen from fig. 1, the live system 10 includes an anchor terminal 11 and a server 13. The anchor terminal 11 sends game request information to the server 13; the server 13 acquires the game request information, generates game interface information according to the game request information, and sends the game interface information to the anchor terminal 11, the game interface information including a plurality of pieces of preset expression information. The anchor terminal 11 obtains the game interface information and acquires the face information of the anchor; the server 13 acquires the face information and matches it with the preset expression information; if the face information is successfully matched with the preset expression information, the server 13 eliminates the preset expression corresponding to the successfully matched preset expression information from the game interface.
The live broadcast system 10 of this embodiment is further configured to implement the above game interaction method based on expression recognition.
As shown in fig. 13, the server 20 of this embodiment includes a receiving module 210, a processing module 220, and a sending module 230. The receiving module 210 is configured to obtain game request information from an anchor terminal; the processing module 220 is configured to generate game interface information according to the game request information; the sending module 230 is configured to send the game interface information to the anchor terminal, where the game interface information includes a plurality of pieces of preset expression information. The receiving module 210 is further configured to obtain the face information of the anchor from the anchor terminal; the processing module 220 is further configured to match the face information with the preset expression information, and, when the face information is successfully matched with the preset expression information, eliminate the preset expression corresponding to the successfully matched preset expression information from the game interface.
The server 20 of the present embodiment is further configured to implement the above game interaction method based on expression recognition.
The present application further proposes a server according to another embodiment. As shown in fig. 14, the server 30 includes a processor 310 and a communication circuit 320, the processor 310 being coupled to the communication circuit 320. The communication circuit 320 is used for obtaining game request information from the anchor terminal; the processor 310 is configured to generate game interface information according to the game request information, where the game interface information includes a plurality of pieces of preset expression information; the communication circuit 320 is further configured to send the game interface information to the anchor terminal and to obtain the face information of the anchor from the anchor terminal; the processor 310 is further configured to match the face information with the preset expression information and, when the face information is successfully matched with the preset expression information, eliminate the preset expression corresponding to the successfully matched preset expression information from the game interface.
The server 30 of the present embodiment is further configured to implement the above game interaction method based on expression recognition.
The processor 310 may also be referred to as a Central Processing Unit (CPU). The processor 310 may be an integrated circuit chip having signal processing capabilities. The processor 310 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 310 may be any conventional processor or the like.
The present application further provides an electronic device. As shown in fig. 15, the electronic device 50 includes a processor 51 and an image sensor 52. The processor 51 is configured to obtain a game start instruction of the anchor, generate game request information according to the game start instruction, and send the game request information to a server, so that the server generates game interface information according to the game request information, the game interface information including a plurality of pieces of preset expression information and display rules for the plurality of preset expressions corresponding to the plurality of pieces of preset expression information. The processor 51 is further configured to divide the game interface into a plurality of display areas extending along a longitudinal axis of the game interface according to the display rules, and to display the plurality of preset expressions on non-adjacent display areas at a preset time and in a preset sequence. The image sensor 52 is used for acquiring the face information of the anchor, so that the server can match the face information with the preset expression information. The processor 51 is further configured to, when the face information is successfully matched with the preset expression information, eliminate the preset expression corresponding to the successfully matched preset expression information from the game interface.
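The display-area division performed by the processor 51 could be sketched as follows; only the four-runway split and the non-adjacency rule come from the text, while the coordinate convention is an assumption:

```python
def runway_bounds(interface_width, n_runways=4):
    """Divide the game interface into n display areas (runways) extending
    along its longitudinal axis; returns (left, right) x-ranges."""
    w = interface_width / n_runways
    return [(i * w, (i + 1) * w) for i in range(n_runways)]

def non_adjacent(a, b):
    """True when two runway indices satisfy the 'non-adjacent display
    areas' requirement of the display rule."""
    return abs(a - b) >= 2
```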
The electronic device 50 of the present embodiment is further configured to implement the above game interaction method based on expression recognition.
As shown in fig. 16, the storage device 40 of this embodiment is configured to store the related data 420 and the program data 410 of the foregoing embodiments, where the related data 420 at least includes the game interface information, the preset expression information, and the like, and the program data 410 can be executed to implement the methods of the foregoing method embodiments. The related data 420 and the program data 410 have been described in detail in the above method embodiments and are not described here again.
The storage device 40 of the present embodiment may be, but is not limited to, a USB flash drive, an SD card, a PD optical drive, a mobile hard disk, a large-capacity floppy drive, a flash memory, a multimedia memory card, a server, etc.
Different from the prior art, the game interaction method based on expression recognition in the embodiments of the present application is applied to a live broadcast system, the live broadcast system at least comprising an anchor terminal and a server, and the game interaction method comprises the following steps: the server acquires game request information from the anchor terminal; the server generates game interface information according to the game request information and sends the game interface information to the anchor terminal, the game interface information comprising a plurality of pieces of preset expression information; the server acquires the face information of the anchor from the anchor terminal and matches the face information with the preset expression information; and if the face information is successfully matched with the preset expression information, the server eliminates the preset expression corresponding to the successfully matched preset expression information from the game interface. In this way, the server can acquire the face information of the anchor from the anchor terminal, match it with the preset expression information, and, when the match succeeds, eliminate the corresponding preset expression from the game interface, so that the anchor can play an expression recognition game through the live broadcast system, realizing live broadcast game interaction based on expression recognition.
In addition, if the above functions are implemented in the form of software functional units and sold or used as standalone products, they may be stored in a storage medium readable by a mobile terminal. That is, the present application also provides a storage device storing program data that can be executed to implement the methods of the above embodiments; the storage device may be, for example, a USB flash drive, an optical disk, or a server. In other words, the present application may be embodied as a software product that includes several instructions for causing an intelligent terminal to perform all or part of the steps of the methods described in the embodiments.
In the description of the present application, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be viewed as implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (e.g., a personal computer, server, network device, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory. 
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A game interaction method based on expression recognition, characterized in that the game interaction method is applied to a live broadcast system, the live broadcast system at least comprises an anchor terminal and a server, and the game interaction method comprises the following steps:
the server acquires game request information from the anchor terminal;
the server generates game interface information according to the game request information and sends the game interface information to the anchor terminal, wherein the game interface information comprises a plurality of pieces of preset expression information;
the server acquires the face information of the anchor from the anchor terminal and matches the face information with the preset expression information;
and if the face information is successfully matched with the preset expression information, the server eliminates the preset expression corresponding to the successfully matched preset expression information from a game interface.
2. The game interaction method of claim 1, wherein the game interface information includes display rules of the plurality of preset expressions on the game interface, and the display rules include:
dividing the game interface into a plurality of display areas extending along a longitudinal axis of the game interface;
and displaying the expression information on the non-adjacent display areas according to preset time and a preset sequence.
3. The game interaction method of claim 1, wherein the step of obtaining the face information of the anchor from the anchor terminal and matching the face information with preset expression information comprises:
acquiring the face information of the anchor from the anchor terminal, and acquiring a plurality of key point information of the face information according to a convolutional neural network algorithm;
acquiring relative position information of the plurality of key point information;
and comparing the relative position information with relative position information corresponding to the preset expression information.
4. A game interaction method as recited in claim 1, further comprising:
accumulating the number of the eliminated preset expressions in the game interface;
judging whether the number is greater than or equal to a preset number within preset time;
if the number is larger than or equal to the preset number, acquiring another game interface information corresponding to the game, and sending the another game interface information to the anchor terminal;
and if the number is smaller than the preset number, terminating the game.
5. A game interaction method as recited in claim 1, wherein the live broadcast system further comprises a spectator terminal, the game interaction method further comprising:
obtaining reward information from the spectator terminal;
acquiring preset expression information corresponding to the reward information;
and updating the preset expression information corresponding to the reward information to the game interface information so as to display the preset expression corresponding to the reward information on the game interface.
6. A live broadcast system, characterized in that the live broadcast system at least comprises an anchor terminal and a server;
the anchor terminal sends game request information to the server;
the server acquires the game request information, generates game interface information according to the game request information, and sends the game interface information to the anchor terminal, wherein the game interface information comprises a plurality of pieces of preset expression information;
the anchor terminal acquires the game interface information and acquires the face information of the anchor;
the server acquires the face information and matches the face information with preset expression information;
and if the face information is successfully matched with the preset expression information, the server eliminates the preset expression corresponding to the successfully matched preset expression information from the game interface.
7. A server, comprising:
the receiving module is used for acquiring game request information from the anchor terminal;
the processing module is used for generating game interface information according to the game request information, wherein the game interface information comprises a plurality of pieces of preset expression information;
the sending module is used for sending the game interface information to the anchor terminal;
the receiving module is further used for acquiring the face information of the anchor from the anchor terminal; the processing module is further used for matching the face information with preset expression information and eliminating a preset expression corresponding to the successfully matched preset expression information from a game interface when the face information is successfully matched with the preset expression information.
8. A server comprising a processor and communication circuitry, the processor coupled with the communication circuitry;
the communication circuit is used for acquiring game request information from the anchor terminal;
the processor is used for generating game interface information according to the game request information, wherein the game interface information comprises a plurality of pieces of preset expression information;
the communication circuit is further used for sending the game interface information to the anchor terminal and acquiring the face information of the anchor from the anchor terminal;
the processor is further used for matching the face information with preset expression information and eliminating a preset expression corresponding to the successfully matched preset expression information from a game interface when the face information is successfully matched with the preset expression information.
9. An electronic device, characterized in that the electronic device comprises a processor and an image sensor, wherein the processor is used for implementing the game interaction method based on expression recognition in any one of claims 1-5.
10. An apparatus having a storage function, characterized in that program data is stored, the program data being executable to implement the expression recognition-based game interaction method of any one of claims 1 to 5.
CN201910718528.5A 2019-08-05 2019-08-05 Game interaction method, system, server and storage device based on expression recognition Pending CN110598542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910718528.5A CN110598542A (en) 2019-08-05 2019-08-05 Game interaction method, system, server and storage device based on expression recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910718528.5A CN110598542A (en) 2019-08-05 2019-08-05 Game interaction method, system, server and storage device based on expression recognition

Publications (1)

Publication Number Publication Date
CN110598542A true CN110598542A (en) 2019-12-20

Family

ID=68853487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910718528.5A Pending CN110598542A (en) 2019-08-05 2019-08-05 Game interaction method, system, server and storage device based on expression recognition

Country Status (1)

Country Link
CN (1) CN110598542A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113453032A (en) * 2021-06-28 2021-09-28 广州虎牙科技有限公司 Gesture interaction method, device, system, server and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109040850A (en) * 2018-08-06 2018-12-18 广州华多网络科技有限公司 Exchange method, system, electronic equipment and device is broadcast live in game
CN109120945A (en) * 2018-08-06 2019-01-01 广州华多网络科技有限公司 Game matching process, game interaction system and server based on live streaming
CN109173272A (en) * 2018-08-06 2019-01-11 广州华多网络科技有限公司 Game interaction method, system, server and device based on live streaming
CN109173271A (en) * 2018-08-06 2019-01-11 广州华多网络科技有限公司 A method, the game interaction system based on live streaming and server are robbed in direct broadcasting room game



Similar Documents

Publication Publication Date Title
CN111698523B (en) Method, device, equipment and storage medium for presenting text virtual gift
CN107911724B (en) Live broadcast interaction method, device and system
CN109005417B (en) Live broadcast room entering method, system, terminal and device for playing game based on live broadcast
CN107911736B (en) Live broadcast interaction method and system
CN111857923B (en) Special effect display method and device, electronic equipment and computer readable medium
CN113905251A (en) Virtual object control method and device, electronic equipment and readable storage medium
CN112637622A (en) Live broadcasting singing method, device, equipment and medium
WO2019223354A1 (en) Animation display method and apparatus, electronic device, and storage medium
CN112905074B (en) Interactive interface display method, interactive interface generation method and device and electronic equipment
CN113453034B (en) Data display method, device, electronic equipment and computer readable storage medium
EP4300431A1 (en) Action processing method and apparatus for virtual object, and storage medium
CN112367528B (en) Live broadcast interaction method and computer equipment
CN109495427B (en) Multimedia data display method and device, storage medium and computer equipment
CN113490004B (en) Live broadcast interaction method and related device
CN110602511B (en) Interaction method, live broadcast system, electronic equipment and storage device
US20170289633A1 (en) Information processing device
CN113301358A (en) Content providing and displaying method and device, electronic equipment and storage medium
CN113487709A (en) Special effect display method and device, computer equipment and storage medium
CN110598542A (en) Game interaction method, system, server and storage device based on expression recognition
CN110572686B (en) Interaction method, live broadcast system, electronic equipment and storage device
KR102384182B1 (en) Method, apparatus and computer program for providing bidirectional interaction broadcasting service with viewer participation
CN116708853A (en) Interaction method and device in live broadcast and electronic equipment
JP7429930B2 (en) Computer program, method and server device
KR20200028830A (en) Real-time computer graphics video broadcasting service system
CN115228091A (en) Game recommendation method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210115

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511449 28th floor, block B1, Wanda Plaza, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20191220