CN107911724B - Live broadcast interaction method, device and system - Google Patents

Live broadcast interaction method, device and system

Info

Publication number
CN107911724B
Authority
CN
China
Prior art keywords
client
target
data
virtual gift
interactive scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711168742.5A
Other languages
Chinese (zh)
Other versions
CN107911724A (en)
Inventor
吴震
王京京
黄锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201711168742.5A priority Critical patent/CN107911724B/en
Publication of CN107911724A publication Critical patent/CN107911724A/en
Application granted
Publication of CN107911724B publication Critical patent/CN107911724B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26291Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for providing content or additional data updates, e.g. updating software modules, stored at the client
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/482End-user interface for program selection
    • H04N21/4826End-user interface for program selection using recommendation lists, e.g. of programs or channels sorted out according to their score
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Abstract

The application provides a live broadcast interaction method, device, and system. In the method, a first client and a second client establish a connection via co-hosting (lianmai), and a first interactive scene and a second interactive scene are displayed in the co-hosting live broadcast room. After receiving an instruction to present a virtual gift to the first client, a server acquires target data corresponding to the virtual gift and sends the target data to a target client, where the target client is the first client and/or the second client, and the virtual gift is shown in at least one interactive scene. The target client acquires image frames captured by the camera of its device, recognizes feature data, and judges whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first and/or second interactive scene according to the matching result and an interaction rule. The scheme provided by the application aims to increase interactivity in live broadcasting.

Description

Live broadcast interaction method, device and system
Technical Field
The application relates to the field of live broadcasting, in particular to a live broadcasting interaction method, device and system.
Background
Interactivity is one of the defining features of live webcasting, occurring mainly between viewers and an anchor, and between anchors. Presenting virtual gifts is an important way to increase interaction between viewers and an anchor. In the prior art, however, after a viewer presents a virtual gift, the anchor client in the same live broadcast room simply receives the gift directly, and a picture of the virtual gift is displayed, or a flash animation is played, at some position in the live broadcast room where the viewer and the anchor are located.
Disclosure of Invention
In view of this, the present application provides a live broadcast interaction method, device, and system, which aim to increase interactivity in live broadcasting, make live broadcasts more engaging, and improve user stickiness.
Specifically, the method is realized through the following technical scheme:
A live broadcast interaction method comprises the following steps:
a first client and a second client establish a connection via co-hosting (lianmai), and a first interactive scene and a second interactive scene are displayed in the co-hosting live broadcast room;
after receiving an instruction to present a virtual gift to the first client, a server acquires target data corresponding to the virtual gift and sends the target data to a target client, where the target client is the first client and/or the second client, and the virtual gift is shown in at least one interactive scene;
the target client acquires image frames captured by the camera of its device, recognizes feature data, and judges whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first and/or second interactive scene according to the matching result and an interaction rule.
In some examples, the feature data includes at least one of: facial feature data and motion feature data.
In some examples, when the target clients are the first client and the second client, the interaction rule includes: determining the interactive scene to be updated according to the order in which the server receives the matching results of the first client and the second client;
the method further comprises: the server notifies the clients in the live broadcast room to display the virtual gift in the first interactive scene and the second interactive scene respectively;
the step of the clients in the co-hosting live broadcast room updating the first and/or second interactive scene comprises:
the server receives the matching results sent by the first client and the second client, and notifies the clients in the live broadcast room to update the first interactive scene and the second interactive scene according to the interaction rule.
In some examples, the virtual gift moves along a predetermined trajectory;
the step of the clients in the co-hosting live broadcast room updating the first and/or second interactive scene comprises:
the clients in the live broadcast room change the motion trajectory or the display state of the virtual gift according to the matching result, where the display state includes a visible state or an invisible state of the virtual gift.
In some examples, the interaction rule includes: if the target client successfully matches the target data, setting the virtual gift to the invisible state; or
if the target client successfully matches the target data, reducing the difficulty of the target client's next match against target data, and if the match is unsuccessful, increasing that difficulty; or
if the target client successfully matches the target data, increasing the score of the target client, and if not, reducing or leaving unchanged the score of the target client; or
if the target client successfully matches the target data, the target client obtains the value corresponding to the virtual gift.
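The alternative interaction rules above (hiding the gift, adjusting difficulty, adjusting score) can be illustrated with a small sketch. The state fields, the adjustment amounts, and the rule names below are assumptions for illustration only; the patent does not prescribe a data model.

```python
def apply_interaction_rule(state: dict, matched: bool, rule: str) -> dict:
    """Apply one of the alternative interaction rules to a target client's state.

    `state` holds hypothetical keys: "gift_visible", "difficulty", "score".
    `rule` selects which alternative from the disclosure is in effect.
    """
    if rule == "hide_gift":
        if matched:
            state["gift_visible"] = False   # successful match hides the gift
    elif rule == "adjust_difficulty":
        # Success lowers the next match's difficulty; failure raises it.
        state["difficulty"] += -1 if matched else 1
        state["difficulty"] = max(1, state["difficulty"])
    elif rule == "adjust_score":
        if matched:
            state["score"] += 1             # success increases the score
        # failure leaves the score unchanged in this variant
    return state
```

In practice the server would select the rule per gift or per room configuration; here it is simply a parameter.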
In some examples, the method further comprises:
the display effect of the virtual gift on the client that presented it differs from that on other clients; the display effect on the presenting client includes any of: highlighted display, enlarged display, and special-effect display.
In some examples, the feature data includes facial feature data, the virtual gift includes a virtual expression, and the target data describes the facial features of the virtual expression; the facial features include any of: the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face; the facial feature data likewise includes the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face;
the step of judging whether the feature data matches the target data comprises:
judging whether the matching degree between the facial feature data and the target data falls within a preset range; if so, the result is a match, otherwise the result is a mismatch.
In some examples, recognizing the feature data includes:
recognizing the number of human faces; when the number of faces is greater than 1, determining a target object according to a preset judgment rule, and recognizing the feature data of the target object;
where the preset judgment rule comprises at least one of the following:
taking the most centered face as the target object;
taking the face with the largest area as the target object;
taking the face in the earliest acquired image frame as the target object;
determining the target object according to an externally input instruction;
taking the face that matches the face of the account logged in on the client as the target object.
In some examples, the method further comprises:
the server receives the matching results sent by the target client, computes a score based on the matching results, and displays the score in the corresponding interactive scene;
when the interaction ends, adding a special effect corresponding to the score in the first and/or second interactive scene.
In some examples, the method further comprises:
updating a score ranking list according to the score of the target client;
recommending co-hosting clients according to the score ranking list.
A live broadcast interaction method, the method comprising:
establishing a connection with another client via co-hosting, and displaying a first interactive scene and a second interactive scene in the co-hosting live broadcast room;
after the server receives an instruction to present a virtual gift, receiving the target data corresponding to the virtual gift sent by the server, where the virtual gift is shown in at least one interactive scene;
acquiring image frames captured by the camera of the device, recognizing feature data, and judging whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first and/or second interactive scene according to the matching result and an interaction rule.
In some examples, the feature data includes at least one of: facial feature data and motion feature data.
In some examples, the interaction rule includes: determining the interactive scene to be updated according to the order in which the server receives the matching results sent by the clients in the co-hosting live broadcast room;
after receiving an update notification sent by the server, updating the first and second interactive scenes of the live broadcast room, where the update notification is sent based on the interaction rule.
In some examples, the virtual gift moves along a predetermined trajectory;
the step of the clients in the co-hosting live broadcast room updating the first and/or second interactive scene comprises:
the clients in the live broadcast room change the motion trajectory or the display state of the virtual gift according to the matching result, where the display state includes a visible state or an invisible state of the virtual gift.
In some examples, the interaction rule includes: if the feature data successfully matches the target data, setting the virtual gift to the invisible state; or
if the feature data successfully matches the target data, reducing the difficulty of the target client's next match against target data, and if the match is unsuccessful, increasing that difficulty; or
if the target client successfully matches the target data, increasing the score of the target client, and if not, reducing or leaving unchanged the score of the target client; or
if the target client successfully matches the target data, the target client obtains the value corresponding to the virtual gift.
In some examples, the feature data includes facial feature data, the virtual gift includes a virtual expression, and the target data describes the facial features of the virtual expression; the facial features include any of: the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face; the facial feature data likewise includes the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face;
the step of judging whether the feature data matches the target data comprises:
judging whether the matching degree between the facial feature data and the target data falls within a preset range; if so, the result is a match, otherwise the result is a mismatch.
In some examples, recognizing the feature data includes:
recognizing the number of human faces; when the number of faces is greater than 1, determining a target object according to a preset judgment rule, and recognizing the feature data of the target object;
where the preset judgment rule comprises at least one of the following:
taking the most centered face as the target object;
taking the face with the largest area as the target object;
taking the face in the earliest acquired image frame as the target object;
determining the target object according to an externally input instruction;
taking the face that matches the face of the account logged in on the client as the target object.
A live broadcast device, comprising:
a connection module, configured to establish a connection with another client via co-hosting and display a first interactive scene and a second interactive scene in the co-hosting live broadcast room;
a processing module, configured to receive target data corresponding to a virtual gift sent by the server after the server receives an instruction to present the virtual gift, where the virtual gift is shown in at least one interactive scene; and to acquire image frames captured by the camera of the device, recognize feature data, and judge whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first and/or second interactive scene according to the matching result and an interaction rule.
A live broadcast system, comprising co-hosting clients and a server;
the server is configured to, after receiving an instruction to present a virtual gift to one of the co-hosting clients, acquire target data corresponding to the virtual gift and send the target data to a target client, where the target client is at least one of the co-hosting clients, and the virtual gift is shown in at least one interactive scene;
each co-hosting client is configured to establish a connection with another client via co-hosting, display a first interactive scene and a second interactive scene in the co-hosting live broadcast room, and, after receiving the target data, acquire image frames captured by the camera of its device, recognize feature data, and judge whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first and/or second interactive scene according to the matching result and an interaction rule.
An electronic device, comprising:
a memory storing processor-executable program instructions; and a processor coupled to the memory, configured to read the program instructions stored in the memory and, in response, perform the following operations:
establishing a connection with another client via co-hosting, and displaying a first interactive scene and a second interactive scene in the co-hosting live broadcast room;
when the server receives an instruction to present a virtual gift, obtaining target data corresponding to the virtual gift, where the virtual gift is shown in at least one interactive scene;
acquiring image frames captured by the camera of the device, recognizing feature data, and judging whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first and/or second interactive scene according to the matching result and an interaction rule.
In the present application, two anchor clients establish a connection via co-hosting, and a first interactive scene and a second interactive scene are displayed in the co-hosting live broadcast room. After receiving an instruction to present a virtual gift to one of the anchor clients, the server acquires target data corresponding to the virtual gift and sends it to a target client, where the target client is one or both of the two anchor clients; the virtual gift is then shown in at least one interactive scene. The target client acquires image frames captured by the camera of its device, recognizes feature data, and judges whether the feature data matches the target data, so that the clients in the live broadcast room update the first and/or second interactive scene according to the matching result and the interaction rule. This increases interactivity in live broadcasting, makes live broadcasts more engaging, and improves user stickiness.
Drawings
Fig. 1 is a partial flow diagram illustrating a method of live interaction in accordance with an exemplary embodiment of the present application;
fig. 2 is a schematic diagram of a live system shown in an exemplary embodiment of the present application;
fig. 3 is a diagram illustrating a live interaction method according to an exemplary embodiment of the present application;
FIG. 4a is an interface schematic diagram of a first client shown in an exemplary embodiment of the present application;
FIG. 4b is an interface diagram of a first client shown in an exemplary embodiment of the present application;
fig. 5 is a diagram illustrating a method for interaction between clients in a live broadcast according to an exemplary embodiment of the present application;
fig. 6 is a diagram illustrating a live interaction method according to an exemplary embodiment of the present application;
fig. 7 is a diagram illustrating a live interaction method according to an exemplary embodiment of the present application;
fig. 8 is a diagram illustrating a live interaction method according to an exemplary embodiment of the present application;
fig. 9 is a diagram illustrating a live interaction method according to an exemplary embodiment of the present application;
FIG. 10 is a partial flow diagram illustrating a method of live interaction in accordance with an exemplary embodiment of the present application;
FIG. 11 is a logical block diagram of an electronic device shown in an exemplary embodiment of the present application;
fig. 12 is a logical block diagram of a live device according to an exemplary embodiment of the present application;
fig. 13 is a logical block diagram of a live system according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Interactivity is one of the defining features of live webcasting, occurring mainly between viewers and an anchor, and between anchors. Presenting virtual gifts is an important way to increase interaction between viewers and an anchor. In the prior art, however, after a viewer presents a virtual gift, the anchor client in the same live broadcast room simply obtains the value corresponding to the virtual gift directly, and a picture of the virtual gift is displayed, or a flash animation is played, at some position in the live broadcast room where the viewer and the anchor are located.
On this basis, the present application provides a brand-new scheme for interaction between clients in a live broadcast room. Referring to fig. 1, a flowchart of a live broadcast interaction method provided in an embodiment of the present application, some of the steps are as follows:
S110: a first client and a second client establish a connection via co-hosting, and a first interactive scene and a second interactive scene are displayed in the co-hosting live broadcast room;
S120: the server receives an instruction to present a virtual gift to the first client;
S130: acquiring target data corresponding to the virtual gift;
S140: sending the target data to a target client, where the target client is the first client and/or the second client, and the virtual gift is shown in at least one interactive scene;
S150: the target client acquires image frames captured by the camera of its device, recognizes feature data, and judges whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first and/or second interactive scene according to the matching result and an interaction rule.
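Steps S110–S150 can be traced end to end with in-process stand-ins for the server and a client. The gift catalog, the direct method calls standing in for network messages, and the feature encoding are all illustrative assumptions.

```python
# Hypothetical gift catalog mapping a gift id to its target data (S130).
GIFT_TARGET_DATA = {
    "smile_bomb": {"eyes_open": 1.0, "mouth_open": 0.9},
}

class Client:
    def __init__(self):
        self.target_data = None

    def receive_target_data(self, data):
        # S140, client side: store the target data pushed by the server.
        self.target_data = data

    def process_frame(self, features, threshold=0.8):
        """S150: recognize feature data from a frame and match it against
        the target data; returns the matching result."""
        keys = self.target_data.keys()
        degree = 1.0 - sum(abs(features[k] - self.target_data[k])
                           for k in keys) / len(keys)
        return degree >= threshold

class Server:
    def on_gift_instruction(self, gift_id, target_client):
        # S120-S140: on a gift instruction, acquire the corresponding
        # target data and send it to the target client.
        target_data = GIFT_TARGET_DATA[gift_id]
        target_client.receive_target_data(target_data)
```

In the real system the server would also notify every client in the room so the interactive scenes update according to the interaction rule; that fan-out is omitted here.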
This embodiment applies to co-hosting scenarios in live broadcasting. As shown in fig. 2, a schematic diagram of a live broadcast system provided in an embodiment of the present application, a first client 211, a second client 221, and a third client 231 are installed on electronic devices 210, 220, and 230 respectively. The first client 211 and the second client 221 establish a co-hosting connection over the network through the server 200, and the third client 231 is in the same live broadcast room as either the first client 211 or the second client 221; at least one of the first client 211 and the second client 221 is an anchor client. When the third client selects one of the first client 211 and the second client 221 as the recipient and triggers an instruction to present a virtual gift, the instruction is sent to the server 200; the server 200 retrieves the target data of the virtual gift corresponding to the instruction and determines the target client based on the interaction rule, where the target client may be the first client, the second client, or both.
After the target client is determined, it obtains the target data and displays the virtual gift in its corresponding interactive scene. The target client acquires image frames captured by the camera of the device it runs on; this acquisition may begin as soon as the target client enters the live broadcast room, and the application does not limit when the target client starts acquiring image frames. Feature data is then recognized from the image frames, and whether the feature data matches the target data is judged, so that the clients in the co-hosting live broadcast room update the first and/or second interactive scene according to the matching result and the interaction rule.
The electronic device may be any device with a networking function, such as a smartphone, computer, laptop, or smart tablet; the application does not limit the type of electronic device. The first, second, and third clients provided in this embodiment may each be anchor clients or viewer clients; for convenience of description, the first and second clients are both anchor clients and the third client is a viewer client. The server and clients provided in this embodiment may be software installed on electronic devices; for example, the server may be software installed on a server device.
During co-hosting, the identities of anchor and viewer become those of initiator and participant. When the initiator sends a co-hosting request to a participant and the participant accepts it, a connection is established between the clients of the initiator and the participant, and the live broadcast picture is provided by the two clients together. In general, the live broadcast picture can be displayed picture-in-picture, with the initiator's picture in a large window and the participant's picture in a small window; of course, the display mode can be adjusted freely by the initiator or the participant. In some examples, in the live broadcast pictures of the clients in the live broadcast rooms of the initiator and the participant, the display effects of the participant's picture and the initiator's picture may be the same or different. In some examples, the co-hosting session may also involve more than two participants. In this application, the first client and the second client are connected via co-hosting; the first client may be the initiator and the second client the participant, or vice versa. The live broadcast picture of the first client is the first interactive scene, and the live broadcast picture of the second client is the second interactive scene.
An anchor client in the live broadcast (for example, the first client or the second client) produces live broadcast data, which includes image frames captured by the anchor client's camera, and sends the live broadcast data to the server; the server forwards the live broadcast data to the corresponding clients for display in the interactive scenes. When an audience client (for example, the third client) presents a virtual gift to an anchor client, the virtual gift can be displayed in the interactive scene of each corresponding client; the virtual gift can be displayed as a picture, a layer, an animation, or the like, and can also be combined with target image frames of the live broadcast data. Of course, the present application does not limit the way the interactive scene is displayed at each client. In some examples, the interactive scene may further include a score bar displayed according to the matching result and the interaction rule. Other content can also be displayed in the interactive scene, for example, after the anchor client's feature data is matched with the target data successfully, the text "you are excellent", a "thumbs-up" picture, a special effect, or the like. For example, referring to fig. 3, the first interactive scene 310 includes a first image frame 311 captured by the first client's camera, a virtual gift 312, and a score bar 319, and the second interactive scene 320 includes a second image frame 321 captured by the second client's camera and a score bar 329.
In some examples, the virtual gift displayed in the interactive scene may be still or may move along a predetermined track, for example, from the upper end to the lower end of the interactive scene, or from the lower end to the upper end; the present application does not limit the form of the predetermined track. In some examples, the clients in the mic-link live broadcast room update the first and/or second interactive scenes according to the matching result and the interaction rule, which may mean changing the motion track or the display state of the virtual gift in the first and/or second interactive scenes according to the interaction rule. The interaction rule provided in the embodiments of the present application may be: if the target client matches the target data successfully, set the virtual gift to an invisible state, or increase the target client's score; if not, reduce the target client's score or leave it unchanged. For example, referring to fig. 3, suppose the virtual gift 312 moves from the upper end to the lower end of the interactive scene 310. When an audience client triggers an instruction to present the virtual gift 312 to the first client, the virtual gift 312 is displayed in the interactive scene 310 corresponding to the target client: it appears from the upper end of the interactive scene 310 and disappears at the lower end. During the movement of the virtual gift 312, if the feature data in the image frame 311 of the target client is recognized as matching the target data corresponding to the virtual gift 312, the display state of the virtual gift 312 in the interactive scene 310 is set to invisible, the score value in the score bar 319 of the target client is updated, and all clients displaying the interactive scene 310 in the live broadcast room are notified to set the display state of the virtual gift 312 to invisible and update the score value in the score bar 319. If the matching has not succeeded by the time the virtual gift 312 reaches the lower end of the interactive scene 310, the display state of the virtual gift 312 is set to invisible, and the score value in the score bar 319 is reduced or left unchanged. Of course, in some examples, the timing of matching the target data corresponding to the virtual gift against the feature data of the first image frame may be arbitrary. In some examples, the interaction rule provided in the embodiments of the present application may also be: if the target client matches the target data successfully, the difficulty of matching the target data next time is reduced; if not, the difficulty of matching the target data next time is increased.
For example, taking a virtual gift that moves from the upper end to the lower end of the interactive scene: if the target client matches the target data successfully, the speed at which the virtual gift moves is slowed down, giving the anchor more time to match the next virtual gift and thereby reducing the difficulty of matching the target data; if not, the speed of the virtual gift's movement is increased to raise the difficulty. In some examples, the target client may obtain the value corresponding to the virtual gift when it matches the target data successfully: for example, when an audience client triggers an instruction to present a virtual gift to the first client, the server deducts the value corresponding to the virtual gift from the account of the audience client, and when the target client matches the target data corresponding to the virtual gift successfully, the deducted value is added to the account of the target client.
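The interaction rule above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the names `GiftState` and `on_match_result` and the specific speed factors and score deltas are assumptions made for the example.

```python
# Hypothetical sketch of the interaction rule: a successful match hides the
# gift, slows the next gift (easier), and raises the score; a miss speeds the
# next gift up (harder) and lowers the score. All names/values are illustrative.
from dataclasses import dataclass

@dataclass
class GiftState:
    speed: float          # movement speed along the predetermined track
    visible: bool = True  # display state in the interactive scene

def on_match_result(gift: GiftState, score: int, matched: bool,
                    slow_factor: float = 0.5, fast_factor: float = 1.5,
                    reward: int = 10, penalty: int = 5) -> int:
    """Update one gift and the target client's score after a match attempt."""
    if matched:
        gift.visible = False        # hide the gift on a successful match
        gift.speed *= slow_factor   # next gift moves slower: lower difficulty
        score += reward
    else:
        gift.speed *= fast_factor   # next gift moves faster: higher difficulty
        score -= penalty
    return score
```

A real client would apply the returned speed when animating the next gift along its track.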
In some examples, the first client and the second client may each receive virtual gifts given by audience clients to carry out an interactive PK. Taking the target client as the first client, in one example, referring to fig. 4a, the specific steps are as follows: after entering a live broadcast room, the first client captures a first image frame 311 through the camera 301 of its device as the live broadcast picture. When the first client, acting as the initiator, requests a mic-link with the second client and the second client agrees, the first client obtains a second image frame 321 from the second client; the first image frame 311 and the second image frame 321 are displayed in picture-in-picture mode, and the first client and the second client enter interactive PK mode, at which point score bars 319 and 329 are displayed in the first interactive scene 310 and the second interactive scene 320. When a client in the live broadcast room triggers an instruction to present a virtual gift 312 to the first client, the first client receives the target data, displays the virtual gift 312 in the first interactive scene 310, obtains the target data corresponding to the virtual gift 312, identifies feature data from the first image frame 311, and judges whether the feature data matches the target data. If so, the first interactive scene 310 is updated in the corresponding clients; if not, the first interactive scene 310 is updated in the corresponding clients when the virtual gift 312 reaches its preset display time.
In some examples, the target client is the second client: after the third client triggers an instruction to give a virtual gift, the server sends the target data corresponding to the virtual gift to the second client, and the virtual gift is displayed in the second interactive scene. If the second client matches the target data successfully, the first and/or second interactive scenes are updated, which may mean changing the display state of the virtual gift at the second client, storing the value corresponding to the virtual gift into the account of the first client, or increasing the score value in the score bar of the first client.
In some examples, the first client and the second client may receive the same virtual gift given by an audience client to carry out an interactive PK, in which case the target clients are both the first client and the second client. In some examples, referring to fig. 4b, which is an interface schematic diagram of the first client exemplarily shown in this embodiment of the present application, the first client and the second client both capture image frames through the cameras of their devices, obtaining a first image frame 411 and a second image frame 421 respectively; both receive the target data corresponding to the virtual gift 412, identify feature data from the first image frame 411 and the second image frame 421 respectively, match the feature data against the target data, and send the matching result to the server if the match succeeds. The server assigns the value of the virtual gift 412 to whichever client's matching result it receives first, and updates the first and second interactive scenes of the corresponding clients. For example, if the first client matches successfully, the score in score bar 419 of the first interactive scene is updated, and the virtual gift 412 disappears from both the first interactive scene 410 and the second interactive scene 420. A special effect 413, such as the text "you are awesome", may also be displayed at the first client.
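The first-result-wins arbitration described above can be sketched on the server side as follows; the class and method names (`GiftArbiter`, `report_match`) are illustrative placeholders, not from the original disclosure.

```python
# Illustrative sketch: the server awards the gift's value to whichever
# client's successful matching result arrives first; later results are ignored.
class GiftArbiter:
    def __init__(self, gift_value: int):
        self.gift_value = gift_value
        self.winner = None          # client id of the first successful match

    def report_match(self, client_id: str) -> bool:
        """Called when a client's successful matching result arrives.
        Returns True only for the first result received."""
        if self.winner is None:
            self.winner = client_id
            return True             # award the gift's value to this client
        return False                # a later result: no award
```

In practice one arbiter instance would exist per presented gift, and the winning client's account and score bar would be updated when `report_match` returns True.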
Because there are many users on the live broadcast platform as a whole, there can be a ranking list covering the scores of all users, and the scores of the users in the ranking list can be updated synchronously every time a PK interaction completes. The ranking list may have multiple categories, such as win rate, single-session score, number of virtual gifts received, number of viewers, and the like. PK opponents can therefore be recommended to users according to the ranking list: for example, opponents with similar scores can be recommended to popular anchors with few friends, or opponents with larger audiences can be recommended, so that small and medium anchors can interact with more people, increasing the exposure of anchors, especially small and medium anchors, and raising their popularity.
In some examples, in order to let the audience client that gives a virtual gift view the dynamics of the given gift clearly and intuitively, refer to fig. 5, which is a schematic diagram of an interaction method between clients in a live broadcast shown in an exemplary embodiment of the present application. For ease of understanding, the target client is the first client; the first client 510 and the second client 520 are anchor clients, the third client 530 and the fourth client 540 are audience clients, the first client 510 and the second client 520 are connected through a mic-link, and all four clients are in the same live broadcast room. After the third client 530 triggers an instruction to give a virtual gift to the first client 510, the virtual gift can be displayed in the first interactive scene 511. As shown in fig. 5, in the first interactive scene 511 of the first client 510, the second client 520, and the fourth client 540, the display effect of the virtual gift 512 is the same; the third client 530, as the client giving the gift, displays the virtual gift 532 specially in its interactive scene 511, with a display effect different from that at the other clients. For example, in fig. 5 the virtual gift 532 is highlighted on the third client 530. Of course, the display effect of the virtual gift on the third client may take many forms, such as an enlarged display or a special-effect display, but is not limited to the above.
It should be understood that the above embodiment is only one way of implementing the special display of the virtual gift at the client giving it; other ways of making the display effect of the virtual gift differ between the giving client and the other clients in the live broadcast room also fall within the scope of the present application. It can be seen that in this embodiment the client giving the virtual gift can see the dynamics of the gift it gave, which improves the experience of the audience member giving the gift.
In some examples, the feature data proposed in the present application may include facial feature data or motion feature data; a motion feature may be a gesture or a body posture, such as one person forming an "S" shape, or several people together spelling out a Chinese character, a letter, or a word. The target data proposed in the embodiments of the present application may be facial expression data or motion data; in some examples, when the target data is facial expression data the feature data is facial feature data, and when the target data is motion data the feature data is motion feature data.
Taking facial feature data as an example, the target data mentioned in the present application may be data describing facial expressions, such as joy, anger, sorrow, or happiness, and the expression feature data may be facial expression data recognized from image frames by image recognition techniques. Judging whether the feature data matches the target data, as mentioned in the embodiments of the present application, may mean that the matching is considered successful when the matching degree between the target data and the expression feature data reaches a certain threshold.
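The threshold test can be sketched as follows. The encoding of expressions as lists of attribute values and the default 0.8 threshold are assumptions made for this example; the disclosure only requires that some matching degree reach some threshold.

```python
# Hedged sketch of threshold-based expression matching; attribute lists and
# the default threshold are illustrative, not from the original disclosure.
def match_degree(feature: list, target: list) -> float:
    """Fraction of expression attributes on which the recognized feature
    data agrees with the gift's target data."""
    hits = sum(1 for f, t in zip(feature, target) if f == t)
    return hits / len(target)

def is_match(feature: list, target: list, threshold: float = 0.8) -> bool:
    # The match is considered successful once the degree reaches the threshold.
    return match_degree(feature, target) >= threshold
```
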
Expressions are varied; smiles alone, for example, can be divided into several kinds, such as laughing with the head thrown back, a hearty laugh, a grin, and a smile. Laughing with the head thrown back means the head faces upward, the mouth is open with teeth exposed, and the eyes are squinted into downward lines; a hearty laugh means the mouth is open with teeth exposed and the eyes are squinted into downward lines; a grin means the mouth is open with teeth exposed and the eyes form crescent shapes; a smile means the eyes are open, the mouth is closed, and the corners of the mouth are slightly raised. When the target data corresponding to virtual gifts covers many expression types, identifying the feature data in an image frame requires very fine-grained image recognition, making the computation load of the target client very heavy, raising the configuration requirements on the device, and lengthening the matching time. In order to reduce the computation load of the device where the target client is located, lower the device-configuration requirements, and shorten the matching time, in one example the virtual gift includes a virtual expression; the target data describes the features of the five sense organs in the virtual expression; the features of the five sense organs include any of: the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face; and the facial feature data likewise includes the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face;
the step of judging whether the feature data is matched with the target data comprises the following steps:
and judging whether the matching degree of the facial feature data and the target data is in a preset range, if so, judging that the result is matching, and otherwise, judging that the result is mismatching.
In a specific example, an audience client triggers an instruction to present a virtual gift, for example a smiling face with open eyes and a closed mouth. After receiving the instruction, the server retrieves the target data corresponding to that virtual gift, namely that the eyes are open, the mouth is closed, and the face is oriented to the front, and sends the target data to the first client.
referring to fig. 6, the target client acquires an image frame 620 captured by a camera where the target client is located, and identifies feature data from the image frame 620, and the specific method may be: the facial feature points 630 of the face of the image frame 620 are acquired, for example, 106 feature points are acquired on the face, five sense organs are recognized, then the opening and closing condition of the eyes is determined according to the relative distance of the coordinates of the feature points of the upper eyelid and the lower eyelid of the eyes, the opening and closing condition of the mouth is determined according to the relative distance of the lower edge of the upper lip and the upper edge of the lower lip of the mouth, the orientation of the face is determined according to the symmetric relation of the facial coordinates, and as shown in fig. 6, the feature data of the image frame is that the eyes are in an open state, and the mouth is in a closed state, and the orientation is a right face. The feature data is then matched to the target data, which is considered to match when all three state parameters of eye, mouth and face orientation match.
In practical applications, there may be a case where a plurality of viewer clients give a virtual gift to a first client at the same time in a live broadcast room.
In some examples, referring to fig. 7, taking the target client as the first client for illustration, after multiple audience clients trigger instructions to give virtual gifts, the virtual gifts 731-733 are displayed in the first interactive scene 720. When the feature data of the image frame 721 matches the target data of virtual gifts displayed in the first interactive scene, all virtual gifts matching the image frame 721 disappear from the first interactive scene, and the score in the anchor's score bar 701 is updated. For example, in fig. 7, the first image frame 721 matches the expressions with open eyes and closed mouth, corresponding to the virtual gifts 733-735; the virtual gifts 733-735 disappear from the first interactive scene 720, and the anchor's score in the score bar 701 changes accordingly.
In other examples, as shown in fig. 8, taking the target client as the first client and the virtual gifts moving from the lower end to the upper end of the interactive scene as an example, multiple movement tracks 810 may be laid out in the interactive scene; in fig. 8, for example, three tracks are laid out. After the server receives instructions sent by multiple audience clients to give virtual gifts to the first client, say two virtual gifts, it is determined whether each track is already showing a virtual gift. If two tracks are free, the two virtual gifts are displayed simultaneously, starting from the lower ends of the free tracks and moving along them toward the upper part of the interactive scene. If only one track is free, one virtual gift starts from the lower end of that track and moves upward along it, and when a track becomes free again, the other virtual gift is displayed on it. If a virtual gift reaches the finish line 840 without being matched successfully, that virtual gift disappears and its track returns to the free state; if a virtual gift is matched successfully at any point between the start and the end of its movement, it disappears at the moment of the successful match and its track likewise returns to the free state.
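The track-allocation scheme above can be sketched as follows; the class name `TrackScheduler` and its methods are illustrative assumptions, not the disclosed implementation.

```python
# Illustrative sketch: gifts are shown on free tracks, queued when all tracks
# are busy, and a track returns to the free state when its gift is matched
# or reaches the finish line; the next queued gift then takes the track.
from collections import deque

class TrackScheduler:
    def __init__(self, num_tracks: int = 3):
        self.free = deque(range(num_tracks))   # indices of idle tracks
        self.pending = deque()                 # gifts waiting for a track
        self.active = {}                       # track index -> gift id

    def add_gift(self, gift_id: str):
        """Show the gift on a free track, or queue it until one frees up."""
        if self.free:
            track = self.free.popleft()
            self.active[track] = gift_id
        else:
            self.pending.append(gift_id)

    def finish(self, track: int):
        """Gift matched or reached the finish line: free the track and
        start the next pending gift, if any."""
        del self.active[track]
        self.free.append(track)
        if self.pending:
            self.add_gift(self.pending.popleft())
```
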
Taking the feature data as gesture data as an example, referring to fig. 9, an audience client gives a virtual gift, for example a "love heart" 930 formed with two hands. The server retrieves the target data corresponding to the "love heart" and sends it to the target client; the target client obtains the image frame 910, identifies the feature data of the first image frame, for example, in fig. 9, the coordinate data of the hands in the recognition frame 920, and matches it against the target data; if the matching degree reaches a certain threshold, the match is considered successful.
In an actual application scene, there may be multiple anchors in the image frames captured by the target client's camera, while in some cases only the feature data of a specific number of anchors can be identified. If the feature data is face data and only one face is allowed to be identified, the target client can obtain the image frame captured by the camera, recognize the number of faces, and, when the number of faces is greater than 1, determine a target object according to a preset judgment rule and identify the feature data of the target object; wherein the preset judgment rule comprises at least one of the following:
taking the face with the centered position as a target object;
taking the face with the largest area as the target object; usually the anchor's face is located in the middle of the picture and closest to the camera, so its area is the largest;
taking the earliest-detected face as the target object; usually the person first captured by the camera is the anchor, and other people only enter the frame later while the anchor is doing interactive live broadcast, so the earliest-detected face is taken as the target object;
taking the face matched with the user's identity information as the target object; for example, an anchor registers an account and, in particular, must verify an identity card and face information for real-name authentication, so the registered user's face can be matched from among multiple faces as the target object according to the photo used at registration;
the above manner is that the system automatically matches the target object, and may be used alone or in combination, and of course, the user may also directly specify the target object, for example, when a plurality of faces are detected, a selection box pops up on each face, and which selection box is pointed, the face is considered as the target object, that is, the target object is determined according to an externally input instruction.
The application also discloses a live broadcast interaction method, and with reference to fig. 10, the method comprises the following steps:
s1010: establishing a connection with another client through a mic-link, and displaying a first interactive scene and a second interactive scene in the mic-link live broadcast room;
s1020: receiving target data, sent by the server, corresponding to a virtual gift after the server receives an instruction to present the virtual gift; wherein the virtual gift is shown in at least one interactive scene;
s1030: obtaining an image frame captured by the camera of the device, identifying feature data, and judging whether the feature data matches the target data, so that the clients in the mic-link live broadcast room update the first interactive scene and/or the second interactive scene according to the matching result and the interaction rule.
The execution body of this interaction method between clients in a live broadcast may be an anchor client participating in the interaction, i.e. the first and/or second client; for the specific implementation of each step in the flow shown in fig. 10, refer to the description of the foregoing embodiments, which is not repeated here.
Corresponding to the embodiment of the live broadcast interaction method, the application also provides an embodiment of a live broadcast device.
The embodiment of the live broadcast apparatus can be applied to the electronic device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, as a logical apparatus, it is formed by the processor of the electronic device where it is located reading the corresponding computer program instructions from non-volatile memory into memory and running them. In terms of hardware, fig. 11 shows a hardware structure diagram of an electronic device where a live broadcast apparatus is located; in addition to the processor, memory, network interface, and non-volatile memory shown in fig. 11, the electronic device in the embodiment may also include other hardware, such as a camera, according to the actual function of the live broadcast apparatus, which is not described again.
Referring to fig. 12, the present application further discloses a live broadcasting apparatus 1200, including:
the connection module 1210 is configured to establish a connection with another client through a mic-link and to display a first interactive scene and a second interactive scene in the mic-link live broadcast room;
the processing module 1220 is configured to receive target data, sent by a server, corresponding to a virtual gift after the server receives an instruction to present the virtual gift, wherein the virtual gift is shown in at least one interactive scene; and to obtain an image frame captured by the camera of the device, identify feature data, and judge whether the feature data matches the target data, so that the clients in the mic-link live broadcast room update the first interactive scene and/or the second interactive scene according to the matching result and the interaction rule.
In some examples, the characteristic data includes at least any one of: facial feature data and motion feature data.
In some examples, the interaction rules include: determining the interactive scene to be updated according to the order in which the server receives the matching results sent by the clients in the mic-link live broadcast room;
after receiving an update notification sent by the server, updating the first interactive scene and the second interactive scene of the live broadcast room, the update notification being sent based on the interaction rules.
In some examples, the virtual gift moves along a predetermined trajectory;
the clients in the mic-link live broadcast room updating the first and/or second interactive scenes comprises:
the clients in the live broadcast room changing the motion track or the display state of the virtual gift according to the matching result, wherein the display state comprises the virtual gift being visible or invisible.
In some examples, the interaction rules include: if the characteristic data is successfully matched with the target data, setting the virtual gift in an invisible state; or
If the feature data is matched with the target data successfully, the difficulty of the target client matching the target data next time is reduced; if not, the difficulty of the target client matching the target data next time is increased; or
If the target client successfully matches the target data, increasing the score of the target client; if not, reducing or not changing the score of the target client; or
And if the target client-side is successfully matched with the target data, the target client-side acquires the value corresponding to the virtual gift.
In some examples, the feature data includes facial feature data, the virtual gift includes a virtual expression, and the target data is used to describe features of five sense organs in the virtual expression; the features of the five sense organs include any of: opening and closing of eyes, opening and closing of mouth and orientation of face; the facial feature data includes opening and closing of eyes, opening and closing of mouth and orientation of face;
the judging whether the feature data is matched with the target data comprises:
and judging whether the matching degree of the facial feature data and the target data is in a preset range, if so, judging that the result is matching, and otherwise, judging that the result is mismatching.
In some examples, the identifying feature data includes:
identifying the number of human faces, determining a target object according to a preset judgment rule when the number of the human faces is more than 1, and identifying the characteristic data of the target object;
wherein the preset judgment rule comprises at least one of the following:
taking the face with the centered position as a target object;
taking the face with the largest area as a target object;
taking a face in an image frame acquired firstly as a target object;
determining a target object according to an externally input instruction;
and taking the face matched with the face of the login account of the client as a target object.
Referring to fig. 13, a live broadcasting system 1300 includes mic-link clients 1320 and a server 1310;
the server 1310 is configured to obtain target data corresponding to a virtual gift after receiving an instruction to present the virtual gift to one party of the mic-link clients, and to send the target data to a target client, where the target client is at least one party among the mic-link clients; wherein the virtual gift is shown in at least one interactive scene;
the mic-link client 1320 is configured to establish a connection with another client through a mic-link and to display a first interactive scene and a second interactive scene in the mic-link live broadcast room; and, after receiving the target data, to obtain an image frame captured by the camera of its device, identify feature data, and judge whether the feature data matches the target data, so that the clients in the live broadcast room update the first interactive scene and/or the second interactive scene according to the matching result and the interaction rule.
Referring to fig. 11, the present application further discloses an electronic device, including:
a memory storing processor-executable instructions; wherein the processor is coupled to the memory for reading program instructions stored by the memory and, in response, performing the following:
establishing a connection with another client through a mic-link, and displaying a first interactive scene and a second interactive scene in the mic-link live broadcast room;
when the server receives an instruction of presenting the virtual gift, target data corresponding to the virtual gift is obtained; wherein the virtual gift is shown in at least one interactive scene;
obtaining an image frame captured by the camera of the device, identifying feature data, and judging whether the feature data matches the target data, so that the clients in the mic-link live broadcast room update the first interactive scene and/or the second interactive scene according to the matching result and the interaction rule.
In the embodiments of the present application, the computer-readable storage medium may take various forms, such as, in different examples: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof. In particular, the computer-readable medium may even be paper or another suitable medium on which the program is printed; the program can then be captured electronically (e.g., by optical scanning), compiled, interpreted, and processed in a suitable manner, and stored in a computer medium.
The implementation of the functions and roles of each unit in the above apparatus is described in detail in the implementation of the corresponding steps of the above method, and is not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the relevant parts of the method embodiments. The apparatus embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present application. A person of ordinary skill in the art can understand and implement the solution without inventive effort.
The above description covers only preferred embodiments of the present application and is not intended to limit the present application; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (17)

1. A live broadcast interaction method, comprising:
establishing a co-hosting connection between a first client and a second client, and displaying a first interactive scene and a second interactive scene in the co-hosting live broadcast room;
after receiving an instruction to present a virtual gift to the first client, a server acquiring target data corresponding to the virtual gift and sending the target data to target clients, wherein the target clients are the first client and the second client, and wherein the virtual gift is displayed in the first and second interactive scenes;
each target client acquiring an image frame captured by the camera of the device on which it runs, identifying feature data, and judging whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first interactive scene and the second interactive scene according to the matching result and an interaction rule;
wherein the feature data comprises at least one of: facial feature data and motion feature data.
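The server-side step of claim 1 (receive a gift instruction, look up the gift's target data, push it to both target clients) can be sketched as below; the `target_data_store` mapping and `send` callback are assumptions for illustration, not structures named in the claim.

```python
def on_gift_instruction(gift_id, first_client, second_client,
                        target_data_store, send):
    """Server-side handling of a 'present virtual gift' instruction.

    Looks up the target data registered for the gift and sends it to
    both target clients, i.e. the first and second co-hosting clients.
    """
    target_data = target_data_store[gift_id]
    for client in (first_client, second_client):
        send(client, target_data)
    return target_data
```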
2. The method of claim 1, wherein the interaction rule comprises: determining the interactive scene to be updated according to the order in which the server receives the matching results of the first client and the second client;
and wherein the step of the clients in the co-hosting live broadcast room updating the first interactive scene and the second interactive scene comprises:
the server receiving the matching results sent by the first client and the second client, and notifying the clients in the live broadcast room to update the first interactive scene and the second interactive scene according to the interaction rule.
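One way to realize the order-based rule of claim 2, assuming each interactive scene is bound to the client it belongs to, is to update the scene of whichever client's successful match the server received first; this sketch and its tie-breaking policy are assumptions, not part of the claim.

```python
def scene_to_update(received_results):
    """received_results: (client_id, matched) pairs in arrival order.

    Returns the client whose interactive scene should be updated: the
    first client from which the server received a successful matching
    result, or None if no successful match has arrived yet.
    """
    for client_id, matched in received_results:
        if matched:
            return client_id
    return None
```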
3. The method of claim 1, wherein the virtual gift moves along a predetermined trajectory;
and wherein the step of the clients in the co-hosting live broadcast room updating the first interactive scene and the second interactive scene comprises:
the clients in the live broadcast room changing the motion trajectory or the display state of the virtual gift according to the matching result, wherein the display state comprises a visible state or an invisible state of the virtual gift.
4. The method of claim 1, wherein the interaction rule comprises:
if a target client successfully matches the target data, setting the virtual gift to an invisible state; or
if a target client successfully matches the target data, reducing the difficulty for that client to match the target data next time, and if the match is unsuccessful, increasing that difficulty; or
if a target client successfully matches the target data, increasing the score of that client, and if not, reducing or leaving unchanged the score of that client; or
if a target client successfully matches the target data, the target client acquiring the value corresponding to the virtual gift.
5. The method of claim 1, further comprising:
displaying the virtual gift on the client that gives it with an effect different from that on the other clients, the display effect on the giving client comprising any of: highlighted display, enlarged display, and special-effect display.
6. The method of claim 1, wherein the feature data comprises facial feature data, the virtual gift comprises a virtual expression, and the target data describes the facial features of the virtual expression; the facial features comprise any of: the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face; the facial feature data likewise comprises the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face;
and wherein the step of judging whether the feature data matches the target data comprises:
judging whether the matching degree between the facial feature data and the target data falls within a preset range; if so, the result is a match, otherwise a mismatch.
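A minimal sketch of the matching-degree judgment in claim 6, assuming both the facial feature data and the target data are dictionaries over the three attributes the claim names, and that the preset range is [threshold, 1]; the 2/3 threshold is an arbitrary example, not a value from the application.

```python
ATTRIBUTES = ("eyes_open", "mouth_open", "face_orientation")

def matching_degree(facial_features: dict, target_data: dict) -> float:
    """Fraction of the three attributes on which the recognized facial
    feature data agrees with the target data."""
    hits = sum(facial_features.get(k) == target_data.get(k) for k in ATTRIBUTES)
    return hits / len(ATTRIBUTES)

def judge_match(facial_features: dict, target_data: dict,
                threshold: float = 2 / 3) -> bool:
    """Match when the matching degree falls within the preset range."""
    return matching_degree(facial_features, target_data) >= threshold
```

A production matcher would work on continuous recognizer outputs (e.g. eye-openness and mouth-openness coefficients) rather than booleans, but the thresholded-degree structure is the same.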
7. The method of claim 1, wherein identifying the feature data comprises:
recognizing the number of human faces and, when the number of faces is greater than 1, determining a target object according to a preset judgment rule and identifying the feature data of the target object;
wherein the preset judgment rule comprises at least one of the following:
taking the face closest to the center as the target object;
taking the face with the largest area as the target object;
taking the face in the earliest acquired image frame as the target object;
determining the target object according to an externally input instruction; and
taking the face that matches the face of the account logged in to the client as the target object.
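Two of the preset judgment rules above (largest area, closest to the center) can be sketched as below; representing each recognized face as an (x, y, w, h) bounding box is an assumption for illustration, not something the claim specifies.

```python
def pick_target_face(faces, frame_width=None, rule="largest_area"):
    """Select the target object when more than one face is recognized.

    faces: list of (x, y, w, h) bounding boxes.
    rule "largest_area": face with the largest box area.
    rule "centered": face whose horizontal center is nearest the frame
    center (requires frame_width).
    """
    if not faces:
        return None
    if rule == "largest_area":
        return max(faces, key=lambda f: f[2] * f[3])
    if rule == "centered":
        return min(faces, key=lambda f: abs(f[0] + f[2] / 2 - frame_width / 2))
    raise ValueError(f"unknown rule: {rule}")
```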
8. The method of claim 1, further comprising:
the server receiving the matching result sent by a target client, counting a score based on the matching result, and displaying the score in the corresponding interactive scene;
and, at the end of the interaction, adding special effects corresponding to the scores in the first interactive scene and the second interactive scene.
9. The method of claim 8, further comprising:
updating a score ranking list according to the score of the target client;
and recommending co-hosting clients according to the score ranking list.
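A sketch of the ranking-list update in claim 9 and a recommendation built on it; interpreting "recommending co-hosting clients according to the score ranking list" as pairing a client with the other client whose score is closest is an assumption, since the claim does not fix the recommendation policy.

```python
def update_ranking(scores: dict, client_id: str, delta: int):
    """Accumulate a client's score and return the score ranking list,
    highest score first."""
    scores[client_id] = scores.get(client_id, 0) + delta
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def recommend_cohost(ranking_list, client_id):
    """Recommend the other client whose score is closest to client_id's."""
    my_score = dict(ranking_list)[client_id]
    others = [(cid, s) for cid, s in ranking_list if cid != client_id]
    if not others:
        return None
    return min(others, key=lambda kv: abs(kv[1] - my_score))[0]
```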
10. A live broadcast interaction method, comprising:
establishing a connection with other clients through co-hosting, and displaying a first interactive scene and a second interactive scene in the co-hosting live broadcast room;
after the server receives an instruction to present a virtual gift, receiving the target data, sent by the server, that corresponds to the virtual gift, wherein the virtual gift is displayed in the first and second interactive scenes;
acquiring an image frame captured by the camera of the device, identifying feature data, and judging whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room can update the first interactive scene and the second interactive scene according to the matching result and an interaction rule;
wherein the feature data comprises at least one of: facial feature data and motion feature data.
11. The method of claim 10, wherein the interaction rule comprises: determining the interactive scene to be updated according to the order in which the server receives the matching results sent by the clients in the co-hosting live broadcast room;
and the method further comprises: updating the first interactive scene and the second interactive scene of the live broadcast room after receiving an update notification sent by the server, the update notification being sent based on the interaction rule.
12. The method of claim 10, wherein the virtual gift moves along a predetermined trajectory;
and wherein the step of the clients in the co-hosting live broadcast room updating the first interactive scene and the second interactive scene comprises:
the clients in the live broadcast room changing the motion trajectory or the display state of the virtual gift according to the matching result, wherein the display state comprises a visible state or an invisible state of the virtual gift.
13. The method of claim 10, wherein the interaction rule comprises:
if the feature data successfully matches the target data, setting the virtual gift to an invisible state; or
if the feature data successfully matches the target data, reducing the difficulty for the target client to match the target data next time, and if the match is unsuccessful, increasing that difficulty; or
if the target client successfully matches the target data, increasing the score of the target client, and if not, reducing or leaving unchanged the score of the target client.
14. The method of claim 10, wherein the feature data comprises facial feature data, the virtual gift comprises a virtual expression, and the target data describes the facial features of the virtual expression; the facial features comprise any of: the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face; the facial feature data likewise comprises the opening and closing of the eyes, the opening and closing of the mouth, and the orientation of the face;
and wherein the step of judging whether the feature data matches the target data comprises:
judging whether the matching degree between the facial feature data and the target data falls within a preset range; if so, the result is a match, otherwise a mismatch.
15. The method of claim 10, wherein identifying the feature data comprises:
recognizing the number of human faces and, when the number of faces is greater than 1, determining a target object according to a preset judgment rule and identifying the feature data of the target object;
wherein the preset judgment rule comprises at least one of the following:
taking the face closest to the center as the target object;
taking the face with the largest area as the target object;
taking the face in the earliest acquired image frame as the target object;
determining the target object according to an externally input instruction; and
taking the face that matches the face of the account logged in to the client as the target object.
16. A live broadcast apparatus, comprising:
a connection module, configured to establish a connection with other clients through co-hosting and to display a first interactive scene and a second interactive scene in the co-hosting live broadcast room;
a processing module, configured to receive, after the server receives an instruction to present a virtual gift, the target data, sent by the server, that corresponds to the virtual gift, wherein the virtual gift is displayed in the first and second interactive scenes; and to acquire an image frame captured by the camera of the device, identify feature data, and judge whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room can update the first interactive scene and the second interactive scene according to the matching result and an interaction rule;
wherein the feature data comprises at least one of: facial feature data and motion feature data.
17. A live broadcast system, comprising co-hosting clients and a server;
the server being configured to acquire, after receiving an instruction to present a virtual gift to one of the co-hosting clients, target data corresponding to the virtual gift and to send the target data to target clients, wherein the target clients are at least two of the co-hosting clients, and wherein the virtual gift is displayed in the first and second interactive scenes;
each co-hosting client being configured to establish a connection with other clients through co-hosting, to display a first interactive scene and a second interactive scene in the co-hosting live broadcast room, and, after receiving the target data, to acquire an image frame captured by the camera of the device on which it runs, identify feature data, and judge whether the feature data matches the target data, so that the clients in the co-hosting live broadcast room update the first interactive scene and the second interactive scene according to the matching result and the interaction rule;
wherein the feature data comprises at least one of: facial feature data and motion feature data.
CN201711168742.5A 2017-11-21 2017-11-21 Live broadcast interaction method, device and system Active CN107911724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711168742.5A CN107911724B (en) 2017-11-21 2017-11-21 Live broadcast interaction method, device and system


Publications (2)

Publication Number Publication Date
CN107911724A CN107911724A (en) 2018-04-13
CN107911724B true CN107911724B (en) 2020-07-07

Family

ID=61846852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711168742.5A Active CN107911724B (en) 2017-11-21 2017-11-21 Live broadcast interaction method, device and system

Country Status (1)

Country Link
CN (1) CN107911724B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109195001A (en) * 2018-07-02 2019-01-11 广州虎牙信息科技有限公司 Methods of exhibiting, device, storage medium and the terminal of present is broadcast live
CN109040849B (en) * 2018-07-20 2021-08-31 广州虎牙信息科技有限公司 Live broadcast platform interaction method, device, equipment and storage medium
CN109151592B (en) * 2018-09-21 2021-12-28 广州方硅信息技术有限公司 Cross-channel microphone connection interaction method and device and server
CN109525883B (en) * 2018-10-16 2022-12-27 北京达佳互联信息技术有限公司 Interactive special effect display method and device, electronic equipment, server and storage medium
CN109587509A (en) * 2018-11-27 2019-04-05 广州市百果园信息技术有限公司 Live-broadcast control method, device, computer readable storage medium and terminal
CN109711263B (en) * 2018-11-29 2021-06-04 国政通科技有限公司 Examination system and processing method thereof
CN109766473B (en) * 2018-11-30 2019-12-24 北京达佳互联信息技术有限公司 Information interaction method and device, electronic equipment and storage medium
CN109729411B (en) * 2019-01-09 2021-07-09 广州酷狗计算机科技有限公司 Live broadcast interaction method and device
CN109756504B (en) * 2019-01-16 2021-11-09 武汉斗鱼鱼乐网络科技有限公司 Communication method based on live broadcast platform and related device
CN110149332B (en) * 2019-05-22 2022-04-22 北京达佳互联信息技术有限公司 Live broadcast method, device, equipment and storage medium
CN110446064A (en) * 2019-07-31 2019-11-12 广州华多网络科技有限公司 Living broadcast interactive method, server, living broadcast interactive system and storage medium
CN110856008B (en) * 2019-11-25 2021-12-03 广州虎牙科技有限公司 Live broadcast interaction method, device and system, electronic equipment and storage medium
CN111586427B (en) * 2020-04-30 2022-04-12 广州方硅信息技术有限公司 Anchor identification method and device for live broadcast platform, electronic equipment and storage medium
CN111866535B (en) * 2020-07-24 2022-09-02 北京达佳互联信息技术有限公司 Live somatosensory item interaction method, device, equipment and storage medium
CN112616063B (en) * 2020-12-11 2022-10-28 北京字跳网络技术有限公司 Live broadcast interaction method, device, equipment and medium
CN112672182B (en) * 2020-12-25 2023-08-04 北京城市网邻信息技术有限公司 Live broadcast interface display method, device, electronic equipment and computer readable medium
CN112738544B (en) * 2020-12-26 2022-10-04 北京达佳互联信息技术有限公司 Live broadcast room interaction method and device, electronic equipment and storage medium
CN113038229A (en) * 2021-02-26 2021-06-25 广州方硅信息技术有限公司 Virtual gift broadcasting control method, virtual gift broadcasting control device, virtual gift broadcasting control equipment and virtual gift broadcasting control medium
CN114501041B (en) * 2021-04-06 2023-07-14 抖音视界有限公司 Special effect display method, device, equipment and storage medium
WO2023019982A1 (en) * 2021-08-17 2023-02-23 广州博冠信息科技有限公司 Same-screen interaction control method and apparatus, and electronic device and storage medium
CN113727131B (en) * 2021-08-31 2023-03-14 北京达佳互联信息技术有限公司 Interaction method, system, device, electronic equipment and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106060597A (en) * 2016-06-30 2016-10-26 广州华多网络科技有限公司 Method and system for carrying out anchor competition
CN106303662A (en) * 2016-08-29 2017-01-04 网易(杭州)网络有限公司 Image processing method in net cast and device
CN106658038A (en) * 2016-12-19 2017-05-10 广州虎牙信息科技有限公司 Live broadcast interaction method based on video stream and corresponding device thereof
CN106878411A (en) * 2017-02-13 2017-06-20 北京奇虎科技有限公司 Provide control method, device and the server of electronics prize
CN106981015A (en) * 2017-03-29 2017-07-25 武汉斗鱼网络科技有限公司 The implementation method of interactive present
CN107172497A (en) * 2017-04-21 2017-09-15 北京小米移动软件有限公司 Live broadcasting method, apparatus and system


Also Published As

Publication number Publication date
CN107911724A (en) 2018-04-13

Similar Documents

Publication Publication Date Title
CN107911724B (en) Live broadcast interaction method, device and system
CN107911736B (en) Live broadcast interaction method and system
CN107680157B (en) Live broadcast-based interaction method, live broadcast system and electronic equipment
JP6700463B2 (en) Filtering and parental control methods for limiting visual effects on head mounted displays
CN108108012B (en) Information interaction method and device
US10545339B2 (en) Information processing method and information processing system
US20090202114A1 (en) Live-Action Image Capture
US11778263B2 (en) Live streaming video interaction method and apparatus, and computer device
CN113453034B (en) Data display method, device, electronic equipment and computer readable storage medium
CN110868554B (en) Method, device and equipment for changing faces in real time in live broadcast and storage medium
WO2022213727A1 (en) Live broadcast interaction method and apparatus, and electronic device and storage medium
US10499097B2 (en) Methods, systems, and media for detecting abusive stereoscopic videos by generating fingerprints for multiple portions of a video frame
US20220270302A1 (en) Content distribution system, content distribution method, and content distribution program
CN109068181B (en) Football game interaction method, system, terminal and device based on live video
CN113301358A (en) Content providing and displaying method and device, electronic equipment and storage medium
JP6941245B1 (en) Information processing system, information processing method and computer program
CN114900738A (en) Film viewing interaction method and device and computer readable storage medium
Yang Media Evolution,“Double-edged Sword” Technology and Active Spectatorship: investigating “Desktop Film” from media ecology perspective
JP7455300B2 (en) Information processing system, information processing method and computer program
Pettersson et al. A perceptual evaluation of social interaction with emotes and real-time facial motion capture
US11704854B2 (en) Information processing system, information processing method, and computer program
CN111659114B (en) Interactive game generation method and device, interactive game processing method and device and electronic equipment
Tasli et al. Real-time facial character animation
CN116076075A (en) Live interaction method, device, equipment, storage medium and program product
CN113542844A (en) Video data processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210112

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511442 24 floors, B-1 Building, Wanda Commercial Square North District, Wanbo Business District, 79 Wanbo Second Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.