CN111147880A - Interaction method, device and system for live video, electronic equipment and storage medium - Google Patents


Publication number
CN111147880A
Authority
CN
China
Prior art keywords
live broadcast
target object
live
broadcast end
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911395712.7A
Other languages
Chinese (zh)
Inventor
陈华
翁国川
庄楚斌
Current Assignee
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd filed Critical Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201911395712.7A priority Critical patent/CN111147880A/en
Publication of CN111147880A publication Critical patent/CN111147880A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The embodiment of the application discloses an interaction method, device and system for live video, an electronic device and a storage medium, and relates to the field of computer technologies. The method is applied to a first live broadcast end of a video live broadcast system, where the video live broadcast system further includes a second live broadcast end communicating with the first live broadcast end. The method includes: acquiring a video stream of the second live broadcast end; when a target object is detected in the video stream, displaying the target object in a first live broadcast picture of the first live broadcast end according to the video stream, and extracting feature data of the target object; obtaining drawing data for the target object, and generating drawing information according to the drawing data and the feature data; and sending the drawing information to the second live broadcast end to instruct the second live broadcast end, when displaying a second live broadcast picture including the target object, to render the target object based on the drawing information. The method and device expand the interaction modes available during live video broadcasting and improve the user experience.

Description

Interaction method, device and system for live video, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a live video interaction method, device, system, electronic device, and storage medium.
Background
With the continuous development of internet technology, live webcasting has become a very common form of entertainment in people's daily life.
In order to make watching live broadcasts more interesting, real-time interaction can be carried out between anchors, or between an anchor and viewers, during live video broadcasting. However, current interaction modes are limited, and interactivity needs to be improved.
Disclosure of Invention
In view of the foregoing problems, the present application provides an interactive method, an interactive device, an interactive system, an electronic device, and a storage medium for live video, so as to solve the foregoing problems.
In a first aspect, an embodiment of the present application provides an interaction method for live video, where the method is applied to a first live broadcast end of a live video system, the live video system further includes a second live broadcast end communicating with the first live broadcast end, and the method includes: acquiring a video stream of the second live broadcast end; when a target object is detected in the video stream, displaying the target object in a first live broadcast picture of the first live broadcast end according to the video stream, and extracting feature data of the target object; obtaining drawing data for the target object, and generating drawing information according to the drawing data and the feature data; and sending the drawing information to the second live broadcast end to instruct the second live broadcast end to render the target object based on the drawing information when displaying a second live broadcast picture including the target object.
In a second aspect, an embodiment of the present application provides an interaction method for live video, where the method is applied to a second live broadcast end of a live video system, the live video system further includes a first live broadcast end communicating with the second live broadcast end, and the method includes: obtaining drawing information sent by the first live broadcast end, where the drawing information is obtained based on feature data of a target object and drawing data for the target object, and the target object is detected in a video stream sent by the second live broadcast end to the first live broadcast end; and when the second live broadcast end displays a second live broadcast picture including the target object, rendering the target object based on the drawing information.
In a third aspect, an embodiment of the present application provides an interaction device for live video, where the device is applied to a first live broadcast end of a live video system, the live video system further includes a second live broadcast end communicating with the first live broadcast end, and the device includes: a video stream acquisition module, used for acquiring the video stream of the second live broadcast end; a feature data extraction module, used for displaying the target object in a first live broadcast picture of the first live broadcast end according to the video stream when a target object is detected in the video stream, and extracting feature data of the target object; a drawing information generation module, used for acquiring drawing data for the target object and generating drawing information according to the drawing data and the feature data; and a sending module, used for sending the drawing information to the second live broadcast end to instruct the second live broadcast end to render the target object based on the drawing information when displaying a second live broadcast picture including the target object.
In a fourth aspect, an embodiment of the present application provides an interaction device for live video, where the device is applied to a second live broadcast end of a live video system, the live video system further includes a first live broadcast end communicating with the second live broadcast end, and the device includes: a drawing information acquisition module, used for acquiring drawing information sent by the first live broadcast end, where the drawing information is obtained based on feature data of a target object and drawing data for the target object, and the target object is detected in a video stream sent by the second live broadcast end to the first live broadcast end; and a rendering module, used for rendering the target object based on the drawing information when the second live broadcast end displays a second live broadcast picture including the target object.
In a fifth aspect, an embodiment of the present application provides an interaction system for live video, where the system includes a first live broadcast end and a second live broadcast end, where: the first live broadcast end is used for acquiring the video stream of the second live broadcast end; the first live broadcast end is used for displaying a target object in a first live broadcast picture of the first live broadcast end according to the video stream, and extracting feature data of the target object, when the target object is detected in the video stream; the first live broadcast end is used for acquiring drawing data for the target object and generating drawing information according to the drawing data and the feature data; the first live broadcast end is used for sending the drawing information to the second live broadcast end; the second live broadcast end is used for acquiring the drawing information sent by the first live broadcast end; and the second live broadcast end is used for rendering the target object based on the drawing information when displaying a second live broadcast picture including the target object.
In a sixth aspect, embodiments of the present application provide an electronic device, which includes one or more processors, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the above-mentioned live video interaction method.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the above interaction method for live video.
According to the interaction method, device, system, electronic device and storage medium for live video provided by the embodiments of the application, the video stream of the second live broadcast end is acquired; when a target object is detected in the video stream, the target object is displayed in the first live broadcast picture of the first live broadcast end according to the video stream, and feature data of the target object is extracted. Drawing data for the target object is then obtained, where the drawing data may be generated based on a drawing operation performed by a user on the target object, for example, the user doodling, through the touch screen of a mobile phone, on the picture in which the anchor is displayed. Drawing information is then generated according to the drawing data and the feature data, so that the drawing data and the target object can be bound. The drawing information is then sent to the second live broadcast end to instruct the second live broadcast end, when displaying a second live broadcast picture including the target object, to render the target object based on the drawing information. In this way, the user of the first live broadcast end can interact with the user of the second live broadcast end through drawing operations, which expands the interaction modes and improves the user experience; and because the drawing data is bound to the target object, the drawn image generated based on the drawing data can move along with the movement of the target object, which enhances the display effect of the drawing data.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 shows a schematic diagram of an application environment according to an embodiment of the present application.
Fig. 2 shows a flow chart of an interactive method of live video according to an embodiment of the present application.
Fig. 3 shows a flow chart of an interactive method of live video according to another embodiment of the present application.
Fig. 4 shows a flowchart of a method according to an embodiment of step S220 in the interactive method of live video shown in fig. 3 of the present application.
FIG. 5 illustrates a drawing area schematic according to one embodiment of the present application.
Fig. 6 shows a flow chart of an interactive method of live video according to another embodiment of the present application.
Fig. 7 is a flowchart illustrating a method according to an embodiment of step S320 in the interactive method for live video shown in fig. 6 of the present application.
Fig. 8 is a flow chart illustrating an interactive method of live video according to still another embodiment of the present application.
FIG. 9 shows a schematic view of a live broadcast picture in which the target object is rendered, according to an embodiment of the present application.
Fig. 10 shows a flow diagram of an interactive method of live video according to yet another embodiment of the present application.
Fig. 11 is a functional block diagram of an interactive apparatus for live video according to an embodiment of the present application.
Fig. 12 is a functional block diagram of an interactive apparatus for live video according to another embodiment of the present application.
Fig. 13 shows a block diagram of an electronic device according to an embodiment of the present application.
Fig. 14 shows a storage medium storing or carrying program code for implementing an interactive method for live video according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
With the rapid development of internet technology, live video has penetrated people's daily life and has become one of the most common entertainment modes. In order to make live video more interesting and improve the experience of users who broadcast or watch live broadcasts, interaction modes between an anchor and the audience, or between anchors, have been added to the current live broadcast process. For example, during a live broadcast, viewers give likes, gifts and the like to the anchor in the anchor's live broadcast room, and accordingly, the like patterns, gift patterns and the like are displayed in the anchor's live broadcast picture. For another example, through a microphone connection between a viewer and the anchor, or among a plurality of anchors, the voice of the viewer or of another anchor can be heard in the anchor's live broadcast room, thereby achieving an interactive effect.
However, the above interaction modes are severely homogenized: most live broadcast platforms can realize interaction between the anchor and the audience through the above methods. The interaction modes currently available during live broadcast are therefore limited and cannot meet users' requirements, resulting in a poor user experience.
The inventor found that if, during live video broadcast, a user can perform a doodle operation on the anchor in the live broadcast picture, drawing data of the doodle can be generated, and a corresponding doodle pattern can be generated based on that drawing data, so that the doodle pattern is displayed together with the anchor in real time in the live broadcast picture. This effectively enhances the interaction between the user and the anchor and makes the interaction more interesting.
However, the inventor also found in research that if the doodle pattern is simply overlaid on the anchor's face in the live broadcast picture, the doodle pattern cannot move: when the anchor's posture changes, the doodle pattern remains unchanged, so the doodle pattern appears rigid in the live broadcast picture, which degrades both the live broadcast effect and the user experience.
Therefore, in order to solve the above problems, the inventor proposes the interaction method, device, system, electronic device and storage medium for live video of the embodiments of the present application. Feature data of a target object in a live broadcast picture and drawing data for the target object are obtained, and drawing information is generated according to the drawing data and the feature data, so that the drawing data and the target object are bound. After binding, when the motion of the target object in the live broadcast picture changes, the pattern corresponding to the drawing data changes along with it, so that the pattern is displayed more smoothly and the display effect is enhanced. This also expands the interaction modes of live video and improves the user experience.
An application environment of the interaction method for live video provided by the embodiment of the present application is described below.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment of the interaction method for live video. The application environment may include a first live broadcast end 1 and a second live broadcast end 2, where the first live broadcast end 1 and the second live broadcast end 2 may be communicatively connected, either wirelessly or by wire. Optionally, the first live broadcast end 1 and the second live broadcast end 2 may be electronic devices having a display function, a touch function, an image capture function, an audio capture/playback function, and the like; such electronic devices include, but are not limited to, smartphones, notebook computers, tablet computers, personal computers, and the like. It can be understood that, in an actual live broadcast, the second live broadcast end 2 may be the live broadcast end on which a first microphone user broadcasts, and the first live broadcast end 1 may be the live broadcast end on which a second microphone user broadcasts; when the first microphone user and the second microphone user establish a microphone connection, the first live broadcast end 1 and the second live broadcast end 2 are communicatively connected.
Referring to fig. 2, fig. 2 is a flowchart illustrating an interaction method for live video according to an embodiment of the present application, where the method is applicable to a first live broadcast end of a live video system, and the live video system further includes a second live broadcast end communicating with the first live broadcast end.
The video live broadcast interaction method can comprise the following steps:
S110, the video stream of the second live broadcast end is obtained.
In some embodiments, the first live end may receive a video stream sent by the second live end. The second live broadcast terminal can acquire image information of a user of the second live broadcast terminal and the surrounding environment in real time through an image acquisition device (such as a camera), acquire audio of the second live broadcast terminal through an audio acquisition device (such as a microphone), and generate a video stream according to the acquired image information and the audio information.
S120, when the target object is detected in the video stream, the target object is displayed in a first live broadcast picture of the first live broadcast end according to the video stream, and feature data of the target object is extracted.
The feature data may be facial features, limb features, or the like of the target object; the facial features may be represented by points, defined by lines or boundaries, or defined by regions.
In some embodiments, after the first live broadcast end receives the video stream, it may identify each image frame of the video stream to determine whether a target object exists in the image, where the target object may include the face, limbs, and the like of the user of the second live broadcast end (hereinafter referred to as the first microphone user). Optionally, a face recognition technique, an iris recognition technique, or the like may be employed to recognize the target object.
In some embodiments, when the first live broadcast end detects the target object, a first live broadcast picture may be played at the first live broadcast end according to the video stream, where the first live broadcast picture includes the target object. As an example, the first live broadcast end and the second live broadcast end play the same live broadcast picture at the same time, and the live broadcast picture includes the face, limbs, and the like of the first microphone user. The first live broadcast end may then extract features such as facial features and limb features of the first microphone user from the live broadcast picture being played. Taking facial features as an example, the extracted feature data of the target object may be key points of different facial organs on the user's face.
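As an illustrative sketch (not part of the patent text), the key points produced by a typical face detector are often normalized to the [0, 1] range and must be mapped into the pixel coordinates of the live broadcast picture before they can serve as feature data; the function and key point names below are assumptions:

```python
def landmarks_to_pixels(landmarks, frame_w, frame_h):
    """Map normalized [0, 1] landmark coordinates, as many face
    detectors emit them, to pixel coordinates of the live frame.
    `landmarks` is {keypoint_name: (nx, ny)}; names are illustrative."""
    return {name: (nx * frame_w, ny * frame_h)
            for name, (nx, ny) in landmarks.items()}

# Hypothetical detector output for one frame of the video stream
features = landmarks_to_pixels(
    {"left_eye": (0.25, 0.5), "nose_tip": (0.5, 0.625)}, 640, 480)
```

The resulting pixel coordinates can then be stored as the feature data of the target object for the subsequent binding step.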
S130, drawing data for the target object is obtained, and drawing information is generated according to the drawing data and the feature data.
In some embodiments, the first live broadcast end may receive a drawing operation input by the user of the first live broadcast end (hereinafter referred to as the second microphone user) and generate drawing data based on the drawing operation. The drawing data may include the position coordinates of a drawing track input by the user through a touch device, texture information, and the like, and a drawing pattern may be displayed at the first live broadcast end based on the drawing data. As an example, while a live broadcast picture including the target object is displayed on the touch screen of the first live broadcast end, the second microphone user may doodle on the displayed target object, specifically by making a sliding motion over some distance on the touch screen relative to the target object. Accordingly, a drawing pattern corresponding to the sliding motion may be displayed on the touch screen of the first live broadcast end, and parameters such as the sliding distance and coordinates may be used as the drawing data for generating the drawing pattern. Optionally, the user may also select, through the touch device, the texture, color, size, transparency, and other information of the drawing pattern, and this information may also be used as drawing data.
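A minimal sketch of how the drawing data described above might be structured; the field names and default values are assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DrawingData:
    """Drawing data for one doodle stroke on the target object."""
    track: List[Tuple[float, float]]   # touch coordinates of the sliding gesture
    color: str = "#FF3B30"             # selected through the touch device
    width: int = 4                     # stroke width in pixels
    texture: str = "solid"             # texture information
    opacity: float = 1.0               # transparency setting

# One stroke recorded as the second microphone user slides on the screen
stroke = DrawingData(track=[(100.0, 200.0), (105.0, 203.0), (112.0, 208.0)])
```

Each stroke bundles the track coordinates with the optional texture, color, size and transparency selections mentioned above.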
In some embodiments, the drawing information may be generated from the drawing data and the feature data as follows: a relative positional relationship between a key point and the drawing track is generated from one or more key point coordinates in the feature data and the drawing track coordinates in the drawing data (this can also be understood as a relative positional relationship between the key point and the drawing data), and this relative positional relationship is then taken as the drawing information. As an example, suppose the key point coordinates are the coordinates of key points corresponding to the eye region of the target object's face, and the drawing track coordinates are the coordinates of the pixel points of the drawing pattern generated from the drawing track. The coordinates of each pixel point and of the eye key points are first placed in the same coordinate system, and the distance information and direction information between the eye key points and the pixel points are then calculated. The resulting set of distances and directions can be used to represent the relative positional relationship between the drawing track and the key points, that is, the drawing information.
It should be noted that the coordinates of the pixel points and the key points may be two-dimensional plane coordinates or three-dimensional space coordinates.
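The relative positional relationship described above can be sketched, for the two-dimensional case, as storing the distance and direction from a key point to each pixel of the drawing track (a hypothetical illustration; the patent does not prescribe a formula):

```python
import math

def bind_stroke_to_keypoint(keypoint, track):
    """For each drawing-track point, compute (distance, direction)
    relative to the key point; this list of offsets plays the role of
    the drawing information binding the doodle to the target object."""
    kx, ky = keypoint
    return [(math.hypot(px - kx, py - ky), math.atan2(py - ky, px - kx))
            for px, py in track]

# Eye key point at (120, 88); two track pixels in the same coordinate system
info = bind_stroke_to_keypoint((120.0, 88.0), [(123.0, 92.0), (120.0, 98.0)])
```

The three-dimensional case would add a second angle or simply store an (dx, dy, dz) offset vector per pixel.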
S140, the drawing information is sent to the second live broadcast end to instruct the second live broadcast end to render the target object based on the drawing information when displaying a second live broadcast picture including the target object.
As an example, when the second live broadcast end receives the drawing information sent by the first live broadcast end while playing a second live broadcast picture including the target object, the drawing pattern generated based on the drawing data may be rendered and displayed on the second live broadcast picture according to the drawing information. Because the drawing information includes the relative positional relationship between the key points of the target object and the drawing data, the displayed drawing pattern always maintains a fixed positional relationship with the key points of the target object, so the drawing pattern can move along with the key points of the target object, forming a dynamic doodle on the target object.
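Re-anchoring on the second live broadcast end can be sketched as the inverse step: recover the track pixels from the stored (distance, direction) offsets and the key point's current position, so the doodle follows the key point frame by frame (again an assumed formulation, matching the binding sketch above):

```python
import math

def reproject_stroke(keypoint, offsets):
    """Given the key point's position in the current frame and the stored
    (distance, direction) offsets, recompute where each track pixel of
    the doodle should be rendered."""
    kx, ky = keypoint
    return [(kx + d * math.cos(a), ky + d * math.sin(a)) for d, a in offsets]

# The key point has moved; the doodle pixels move with it
pts = reproject_stroke((130.0, 90.0), [(5.0, 0.0), (10.0, math.pi)])
```

Calling this once per decoded frame, with the freshly detected key point position, keeps the doodle pattern attached to the moving face.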
In this embodiment, the video stream of the second live broadcast end is acquired; when a target object is detected in the video stream, the target object is displayed in the first live broadcast picture of the first live broadcast end according to the video stream, feature data of the target object is extracted, and drawing data for the target object is acquired, where the drawing data may be generated based on a drawing operation performed by the user on the target object, for example, the user doodling, through the touch screen of a mobile phone, on the picture in which the anchor is displayed. Drawing information is then generated according to the drawing data and the feature data, so that the drawing data and the target object can be bound. The drawing information is then sent to the second live broadcast end to instruct the second live broadcast end, when displaying a second live broadcast picture including the target object, to render the target object based on the drawing information. In this way, the user of the first live broadcast end can interact with the user of the second live broadcast end through drawing operations, which expands the interaction modes; and because the drawing data is bound to the target object, the drawn image generated based on the drawing data can move along with the movement of the target object, which enhances the display effect and improves the user experience.
Referring to fig. 3, fig. 3 is a flowchart illustrating an interactive method for live video according to another embodiment of the present application, where the method may include:
S210, the video stream of the second live broadcast end is obtained.
S220, when the target object is detected in the video stream, the target object is displayed in a first live broadcast picture of the first live broadcast end according to the video stream, and feature data of the target object is extracted.
In some embodiments, before S220, the first live broadcast end may further determine whether a target object exists in the video stream; specifically, it may detect the target object by identifying whether a human face or limbs exist in the video frames of the video stream. When no target object exists, the first live broadcast end may issue reminding information to remind the user of the first live broadcast end that no target object exists in the video stream; optionally, the reminding information may include voice reminding information, text reminding information, vibration reminding information, and the like. Optionally, when no target object exists, the first live broadcast end may also send the reminding information to the second live broadcast end to remind the user of the second live broadcast end.
As shown in fig. 4, in some embodiments, S220 may include the following steps:
S221. A plurality of image frames are extracted from the video stream.
In some embodiments, the first live broadcast end may extract one image frame from the video stream every preset time period, so that a plurality of image frames are extracted over a period of time.
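The periodic extraction described above can be sketched as follows; the function name and the sampling interval are illustrative assumptions, not taken from the embodiment:

```python
# Hypothetical sketch of periodic frame extraction. `frames` stands in for a
# decoded video stream; sampling every `interval`-th frame approximates
# extracting one frame per preset time period.
def sample_frames(frames, interval):
    """Keep every `interval`-th frame of the stream."""
    return frames[::interval]

stream = list(range(100))          # placeholder frame identifiers
print(sample_frames(stream, 25))   # → [0, 25, 50, 75]
```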
S222, when there are image frames conforming to the specification among the plurality of image frames, displaying the target object in the drawing area of the first live view according to the image frames conforming to the specification.
In some embodiments, the first live broadcast terminal may perform compliance detection on each of the plurality of image frames, where the compliance detection on the image frame may be to detect the content of the image frame to determine whether the content displayed by the image frame meets the specification, for example, to determine whether the image frame contains the illegal content. It is also possible to detect the image quality of the image frame to determine whether the image quality thereof is normal.
As one mode, when detecting the image frame attributes, attribute parameters of the image frame to be detected may be extracted; optionally, the attribute parameters may include resolution, color depth, image distortion, and the like. It is then determined whether the attribute parameters satisfy a preset condition, and if not, the image frame to be detected is determined not to conform to the specification. For example, if the resolution of the image frame to be detected is lower than a preset resolution, it may be determined that the attribute parameters do not satisfy the preset condition. In this embodiment, detecting image frame attributes to determine whether an image frame meets the specification helps ensure a high-quality live broadcast picture.
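A minimal sketch of such an attribute check, assuming illustrative field names and thresholds:

```python
# Illustrative attribute compliance check; the field names ("resolution",
# "color_depth") and the threshold values are assumptions for this sketch.
def attributes_compliant(attrs, min_resolution=(640, 360), min_color_depth=8):
    """Return True when resolution and color depth meet the preset conditions."""
    width, height = attrs["resolution"]
    min_w, min_h = min_resolution
    if width < min_w or height < min_h:
        return False                       # resolution below the preset threshold
    return attrs["color_depth"] >= min_color_depth

print(attributes_compliant({"resolution": (1280, 720), "color_depth": 24}))  # → True
print(attributes_compliant({"resolution": (320, 240), "color_depth": 24}))   # → False
```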
As one mode, when detecting the content of an image frame, the image frame to be detected may be compared for similarity against a plurality of images known not to conform to the specification. When the similarity between the image frame and at least one of those images is greater than a similarity threshold, the image frame is determined not to conform to the specification; otherwise, it is determined to conform. In this embodiment, comparing an image frame against known non-compliant images makes it possible to accurately determine whether the content of the image frame is compliant.
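The similarity-based content check might look like the following sketch; the toy similarity measure and all names are assumptions (a real system would use perceptual hashing or feature embeddings rather than pixel equality):

```python
# Toy similarity comparison against a library of non-compliant images.
def similarity(a, b):
    """Fraction of positions where two equal-length pixel sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def content_compliant(frame, banned_images, threshold=0.9):
    """Non-compliant if the frame is too similar to any banned image."""
    return all(similarity(frame, img) <= threshold for img in banned_images)

banned = [[0, 0, 0, 0], [1, 1, 1, 1]]
print(content_compliant([1, 1, 1, 0], banned))  # similarity at most 0.75 → True
print(content_compliant([1, 1, 1, 1], banned))  # similarity 1.0 to banned[1] → False
```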
Alternatively, when detecting the content of an image frame, a machine learning model for detecting image frame compliance may be trained in advance on a plurality of non-compliant sample pictures. During detection, each image frame is input into the machine learning model, which outputs a detection result for that frame. In this embodiment, a machine learning model trained on non-compliant sample images can quickly and efficiently determine whether the content of an image frame is compliant.
Alternatively, the first live broadcast end may detect only the content of the image frame to determine whether it meets the specification, or may detect both the content and the quality of the image frame; for example, the image frame is determined to meet the specification only when both its content and its attribute parameters satisfy their respective preset conditions.
As shown in fig. 5, in some embodiments, when there are image frames that conform to the specification, a target object in those image frames may be displayed in a drawing area of the first live broadcast picture, where the drawing area may offer tools for drawing, such as a virtual pencil or a virtual crayon, so that the user of the first live broadcast end can doodle on the target object.
and S223, extracting characteristic data of the target object from the image frame which meets the specification.
As one mode, when the first live broadcast end detects that all of the plurality of image frames conform to the specification, the feature data of the target object may be extracted directly from at least one of those image frames. As another mode, when the first live broadcast end detects that more than a preset number of the image frames do not conform to the specification, it may leave the plurality of image frames unprocessed.
In the embodiment, the compliance detection is carried out on the image frames, so that the live broadcast pictures displayed in the live video broadcast have high quality and healthy content.
S230. Drawing data for the target object is obtained, where the drawing data includes position coordinates of pixel points of a drawn image, and the feature data includes three-dimensional space coordinates of at least three key points.
As an example, when the target object is a human face, the feature data may include the three-dimensional space coordinates of two key points corresponding to the eyes and of one key point at the tip of the nose. The drawn image may be generated from a drawing operation performed by the user of the first live broadcast end on its touch screen. Specifically, the drawn image may be generated in the same way as handwriting input on a smartphone: when the user slides within the handwriting input area, a pattern is automatically generated in that area according to the sliding track. It can be understood that the handwriting input area corresponds to the drawing area of the first live broadcast picture in this embodiment; since the target object is displayed in the drawing area, the drawn image and the target object share the same three-dimensional coordinate system, so the position coordinates of the pixel points of the drawn image can be obtained in that coordinate system. Optionally, the drawing operation may include sliding, double clicking, single clicking, long pressing, and the like, and the drawn image may include line pixel points, texture information, and the like.
S240. Relative position information between the drawing data and the feature data is constructed as the drawing information according to the three-dimensional space coordinates of the at least three key points and the position coordinates of the pixel points.
As an example, a three-dimensional vector relationship between the key points and the drawn image may be constructed from the three-dimensional space coordinates of the three key points at the eyes and the nose tip and the position coordinates of the pixel points of the drawn image. After construction, each pixel point corresponds to a three-dimensional vector with respect to each key point; the vector includes a distance and a direction and can represent the relative position of the pixel point and that key point, that is, the drawing information. Alternatively, the relative position information between the key points and the pixel points may be constructed using the three-point positioning principle.
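A minimal sketch of binding drawn pixels to key points through offset vectors; the coordinates and function name are purely illustrative assumptions:

```python
# Sketch: for each drawn pixel, record a 3D offset vector to every key point.
# These offsets are one possible form of the "relative position information".
def build_drawing_info(key_points, pixel_points):
    """Return, per pixel, the list of offset vectors (pixel minus key point)."""
    return [
        [tuple(p - k for p, k in zip(px, kp)) for kp in key_points]
        for px in pixel_points
    ]

# Illustrative key points for the two eyes and the nose tip:
eyes_and_nose = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, -1.0, 1.0)]
pixels = [(1.0, 1.0, 0.0)]         # one doodled pixel
info = build_drawing_info(eyes_and_nose, pixels)
print(info[0][0])                  # → (1.0, 1.0, 0.0)
```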
Optionally, the key points of the feature data include, but are not limited to, key points of the organs of a human face, and may also include key points of limbs such as the shoulders and hands. Gesture actions or other limb movements of the microphone-connected user can be determined through the limb key points.
S250. The drawing information is sent to the second live broadcast end to instruct the second live broadcast end, when displaying a second live broadcast picture including the target object, to render the target object based on the drawing information.
The specific implementation of S250 can refer to S140, and therefore is not described herein.
Considering that at least three points are required to establish a three-dimensional coordinate system, this embodiment uses the three-dimensional space coordinates of at least three key points of the target object as the feature data and the position coordinates of the pixel points of the drawn image as the drawing data, so that the relative position information between the feature data and the drawing data can be constructed quickly and accurately in the three-dimensional coordinate system.
Referring to fig. 6, fig. 6 is a flowchart illustrating an interactive method for live video according to another embodiment of the present application, where the method can be applied to a second live broadcast end of a live video system, and the live video system further includes a first live broadcast end communicating with the second live broadcast end.
The method can comprise the following steps:
and S310, obtaining drawing information sent by the first live broadcast end, wherein the drawing information is obtained based on the characteristic data of the target object and the drawing data aiming at the target object, and the target object is detected from the video stream sent by the second live broadcast end to the first live broadcast end.
In some embodiments, the second live broadcast terminal may receive the drawing information sent by the first live broadcast terminal after determining the authority of the first live broadcast terminal. The drawing information may be obtained in a manner of S220-S240.
S320, when the second live broadcast terminal displays the second live broadcast screen comprising the target object, rendering the target object based on the drawing information.
As an example, the second live broadcast picture played by the second live broadcast end displays the target object in real time, that is, the user of the second live broadcast end. When the target object is displayed, the second live broadcast end renders it based on the drawing information, so that the doodle made at the first live broadcast end on the microphone-connected anchor is displayed on that anchor's face or limbs in the second live broadcast picture, achieving interaction between anchors through doodling.
Alternatively, when rendering the target object based on the drawing information, the Open Graphics Library (OpenGL) or the Metal framework may be used for rendering.
As shown in fig. 7, in some embodiments, S320 may include:
S321. It is determined whether the drawing information is valid.
As one mode, a lifetime may be set for the drawing information when it is generated, and the second live broadcast end may determine that the drawing information is valid when the current time is within that lifetime. As an example, if the drawing information is generated at 14:00 with a corresponding lifetime of 14:00 to 14:30, then the drawing information is determined to be valid when the second live broadcast end detects that the current time is 14:15, and invalid when the current time is 14:35.
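The lifetime check can be sketched as follows, reusing the 14:00 to 14:30 example from the text; the function name and the 30-minute default are assumptions drawn from that example:

```python
# Illustrative validity check for drawing information with a fixed lifetime.
from datetime import datetime, timedelta

def drawing_valid(created_at, now, lifetime=timedelta(minutes=30)):
    """Drawing information is valid while `now` lies within its lifetime."""
    return created_at <= now <= created_at + lifetime

created = datetime(2020, 1, 1, 14, 0)
print(drawing_valid(created, datetime(2020, 1, 1, 14, 15)))  # → True
print(drawing_valid(created, datetime(2020, 1, 1, 14, 35)))  # → False
```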
S322, when it is determined that the drawing information is valid, rendering the target object based on the drawing information.
As one way, when the second live broadcast terminal determines that the drawing information is invalid, the drawing information may be deleted from the second live broadcast terminal.
In this embodiment, the second live broadcast end receives the drawing information sent by the first live broadcast end, and renders the target object based on the drawing information when displaying the second live broadcast picture including the target object, so that the live broadcast picture of the second live broadcast end can show the target object with the drawn pattern. Doodle interaction between the first live broadcast end and the second live broadcast end is thus achieved, making the interaction more engaging. Moreover, the drawn pattern is bound to the target object according to the drawing information; after binding, when the target object moves or turns, the drawn pattern follows it in real time, which enhances the display effect of the doodle and improves the user experience.
Referring to fig. 8, fig. 8 is a flowchart illustrating an interactive method for live video according to another embodiment of the present application, where the method may include:
and S410, obtaining drawing information sent by the first live broadcast end, wherein the drawing information is obtained based on the characteristic data of the target object and the drawing data aiming at the target object, and the target object is detected from the video stream sent by the second live broadcast end to the first live broadcast end.
The detailed implementation of S410 can refer to S310, and therefore is not described herein.
S420, when the second live broadcast terminal displays a second live broadcast picture comprising the target object, current feature data of the target object are obtained, wherein the current feature data comprise current three-dimensional space coordinates of at least three key points.
As an example, when the second live broadcast end plays a second live broadcast picture including the microphone-connected user, it may first obtain the changes in position, size, and rotation angle of the three key points through a matching algorithm and calculate a normal vector, so as to establish the current three-dimensional coordinate system, and then obtain the current three-dimensional space coordinates of the user's three key points in real time based on that coordinate system, thereby tracking the movement of the key points in real time.
And S430, calculating current drawing data corresponding to the current characteristic data based on the drawing information, wherein the current drawing data comprises current position coordinates of pixel points of the drawn image.
As an example, since the drawing information includes the relative position information between the feature data and the drawing data, when the feature data of the target object changes, for example when the three-dimensional space coordinates of the three key points at the eyes and the nose tip change, the current position coordinates of the pixel points of the drawn image can be calculated from the changed three-dimensional space coordinates and the relative position information.
S440, rendering the target object based on the current drawing data.
As shown in fig. 9, as an example, since the current drawing data includes the current position coordinates of the pixel points of the drawn image, after the target object is rendered, the pixel points of the drawn image move from their original positions to the current position coordinates in the live broadcast picture, so that the drawn image tracks the key points of the target object in real time.
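A sketch of re-anchoring the drawn pixels when the key points move, assuming each pixel stores an offset vector to every key point (one illustrative form of the relative position information). Note this toy version handles translation only; a real implementation would also apply the rotation and scale recovered from the key points' pose change:

```python
# Sketch: estimate each pixel's new position from every key point plus its
# stored offset vector, then average the estimates.
from statistics import mean

def current_pixel_positions(drawing_info, current_key_points):
    """drawing_info: per pixel, one offset vector per key point."""
    positions = []
    for vectors in drawing_info:
        estimates = [
            tuple(k + v for k, v in zip(kp, vec))
            for kp, vec in zip(current_key_points, vectors)
        ]
        positions.append(tuple(mean(c) for c in zip(*estimates)))
    return positions

# One pixel bound to three key points that all shift by (1, 0, 0):
info = [[(1.0, 1.0, 0.0), (-1.0, 1.0, 0.0), (0.0, 2.0, -1.0)]]
moved = [(1.0, 0.0, 0.0), (3.0, 0.0, 0.0), (2.0, -1.0, 1.0)]
print(current_pixel_positions(info, moved))  # → [(2.0, 1.0, 0.0)]
```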
In the embodiment, the current feature data of the target object is obtained, the current feature data comprises current three-dimensional space coordinates of at least three key points, current drawing data corresponding to the current feature data is calculated based on the drawing information, the current drawing data comprises current position coordinates of pixel points of the drawing image, and the target object is rendered according to the current drawing data, so that the synchronous movement of the drawing image and the target object can be effectively realized, and the display effect of the drawing image is enhanced.
Referring to fig. 10, fig. 10 is a flowchart illustrating an interactive method for live video according to another embodiment of the present application, where the method may include:
and S510, obtaining drawing information sent by the first live broadcast end, wherein the drawing information is obtained based on the characteristic data of the target object and the drawing data aiming at the target object, and the target object is detected from a video stream sent by the second live broadcast end to the first live broadcast end.
S520, when the second live broadcast terminal displays a second live broadcast screen comprising the target object, rendering the target object based on the drawing information.
The specific implementation of S510-S520 can refer to S310-S320, and therefore is not described herein.
S530, generating a live video based on the rendered target object.
As an example, when the second live broadcast end collects live images of the microphone-connected user in real time, it renders that user in the live images using the drawing information, so that image frames containing both the user and the drawn pattern are generated in real time, and the live video is obtained from the continuously updated image frames.
And S540, playing the live video.
The second live broadcast end can play the live broadcast video on a display device configured by the second live broadcast end.
And S550, sending the live video to the first live end to indicate the first live end to play the live video.
The second live broadcast end sends the live video to the first live broadcast end as a video stream. Optionally, the user of the first live broadcast end may be an anchor or a viewer, and there may be one or more first live broadcast ends.
S560. The current time and the valid period of the drawing information are obtained.
As one mode, the valid period of the drawing information may be set to 15 minutes from its generation time, so the valid period can be derived from the generation time. For example, if the drawing information is generated at 14:00, the valid period is 14:00 to 14:15.
S570, when the current time is not within the valid period, stops rendering the target object based on the rendering information.
As an example, when it is determined that the current time is not within the valid period, for example, when the current time is 14:20, rendering of the target object based on the rendering information may be stopped.
In this embodiment, live broadcast pictures with the drawn pattern are displayed at both the first live broadcast end and the second live broadcast end, which makes the live broadcast more engaging. In addition, by checking the valid period of the drawing information, rendering of the target object stops once the drawing information expires, which avoids the large amount of data that would be generated by rendering the target object indefinitely.
An interactive system for live video provided in an embodiment of the present application is shown in fig. 1, and the system includes a first live broadcast terminal 1 and a second live broadcast terminal 2, where:
the first live broadcast terminal 1 is configured to obtain a video stream of the second live broadcast terminal 2.
The first live broadcast terminal 1 is configured to, when a target object is detected from a video stream, display the target object in a first live broadcast picture of the first live broadcast terminal 1 according to the video stream, and extract feature data of the target object.
And the first live broadcast terminal 1 is used for acquiring drawing data for the target object and generating drawing information according to the drawing data and the characteristic data.
And the first live broadcast terminal 1 is used for sending the drawing information to the second live broadcast terminal 2.
And the second live broadcast terminal 2 is configured to obtain the drawing information sent by the first live broadcast terminal 1.
And the second live broadcast terminal 2 is configured to render the target object based on the drawing information when the second live broadcast terminal 2 displays a second live broadcast screen including the target object.
Referring to fig. 11, which illustrates an interactive apparatus 600 for live video according to an embodiment of the present application, where the apparatus 600 may be applied to a first live end of a live video system, the live video system further includes a second live end communicating with the first live end, and the interactive apparatus 600 for live video includes: a video stream acquisition module 610, a feature data extraction module 620, a drawing information generation module 630 and a sending module 640.
And a video stream obtaining module 610, configured to obtain a video stream of the second live broadcast end.
And the feature data extraction module 620 is configured to, when a target object is detected from the video stream, display the target object in a first live broadcast picture of the first live broadcast end according to the video stream, and extract feature data of the target object.
A drawing information generating module 630, configured to obtain drawing data for the target object, and generate drawing information according to the drawing data and the feature data.
The sending module 640 is configured to send the drawing information to the second live broadcast end to instruct the second live broadcast end to render the target object based on the drawing information when displaying the second live broadcast screen including the target object.
Further, the drawing data includes position coordinates of pixel points of the drawing image, the feature data includes three-dimensional space coordinates of at least three key points, and the drawing information generating module 630 includes:
and the known information construction unit is used for constructing the relative position information of the drawing data and the characteristic data as the drawing information according to the three-dimensional space coordinates of at least three keys and the position coordinates of the pixel points.
Further, the feature data extraction module 620 includes:
and the image frame extraction unit is used for extracting a plurality of image frames from the video stream.
And the display unit is used for displaying the target object in the drawing area of the first live broadcast picture according to the image frames which conform to the specification when such image frames exist among the plurality of image frames.
And the characteristic data extraction unit is used for extracting the characteristic data of the target object from the image frame which conforms to the specification.
Referring to fig. 12, a live video interaction apparatus 700 according to another embodiment of the present application is shown, where the live video interaction apparatus 700 can be applied to a second live broadcast end of a live video system, and the live video system further includes a first live broadcast end communicating with the second live broadcast end. The live video interaction apparatus 700 includes: an obtaining module 710 and a rendering module 720. Wherein:
The obtaining module 710 is configured to obtain drawing information sent by the first live broadcast end, where the drawing information is obtained based on feature data of a target object and drawing data for the target object, and the target object is detected in the video stream sent by the second live broadcast end to the first live broadcast end.
And a rendering module 720, configured to render the target object based on the drawing information when the second live broadcast end displays the second live broadcast screen including the target object.
Further, the rendering module 720 includes:
and the current characteristic data acquisition unit is used for acquiring current characteristic data of the target object, wherein the current characteristic data comprises current three-dimensional space coordinates of at least three key points.
And the current characteristic data generating unit is used for calculating current drawing data corresponding to the current characteristic data based on the drawing information, wherein the current drawing data comprises current position coordinates of pixel points of the drawing image.
A first rendering unit to render the target object based on the current drawing data.
Further, the rendering module 720 further includes:
and a validity judging unit for determining whether the drawing information is valid.
A second rendering unit that renders the target object based on the drawing information when it is determined that the drawing information is valid.
Further, the live video interaction apparatus 700 further includes:
and the video generation module is used for generating a live video based on the rendered target object.
And the playing module is used for playing the live video.
And the video sending module is used for sending the live video to the first live end so as to indicate the first live end to play the live video.
Further, the apparatus 700 further comprises:
and the valid period obtaining module is used for obtaining the current time and the valid period of the drawing information.
And a rendering control module which stops rendering the target object based on the rendering information when the current time is not within the valid period.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 13, a block diagram of an electronic device according to an embodiment of the present application is shown. The electronic device 800 may be the electronic device 800 capable of running the program in the foregoing embodiments. The electronic device 800 in the present application may include one or more of the following components: a processor 810, a memory 820, and one or more programs, wherein the one or more programs may be stored in the memory 820 and configured to be executed by the one or more processors 810, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 810 may include one or more processing cores. The processor 810 connects the various parts of the electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 820 and invoking data stored in the memory 820. Alternatively, the processor 810 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 810 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 810 and may instead be implemented by a separate communication chip.
The memory 820 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 820 may be used to store instructions, programs, code sets, or instruction sets. The memory 820 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like. The data storage area may store data created by the terminal in use, such as a phonebook, audio and video data, and chat log data.
The touch screen 830 is used to display information input by the user, information provided to the user, and the various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the touch screen 830 may be a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) display, which is not limited herein.
Referring to fig. 14, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer readable medium 900 has stored therein a program code 910, and the program code 910 can be called by a processor to execute the method described in the above method embodiments.
The computer-readable storage medium 900 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium includes a non-transitory computer-readable storage medium. The computer readable storage medium has a storage space for program code for performing any of the method steps of the above-described method. The program code can be read from or written to one or more computer program products. The program code may be compressed, for example, in a suitable form.
To sum up, according to the live video interaction method, apparatus, system, electronic device, and storage medium provided by the embodiments of the present application, a video stream of a second live broadcast end is obtained; when a target object is detected from the video stream, the target object is displayed in a first live broadcast picture of a first live broadcast end according to the video stream and feature data of the target object is extracted; drawing data for the target object is then obtained, where the drawing data may be generated from a drawing operation performed by a user on the target object, for example, the user doodling through the touch screen of a mobile phone on the picture in which the anchor is displayed; and drawing information is generated from the drawing data and the feature data, so that the drawing data and the target object can be bound. The drawing information is then sent to the second live broadcast end to instruct the second live broadcast end, when displaying a second live broadcast picture including the target object, to render the target object based on the drawing information. In this way, the user of the first live broadcast end can interact with the user of the second live broadcast end through drawing operations, which expands the interaction modes and improves the user experience; and because the drawing data is bound to the target object, the drawn image generated from the drawing data moves together with the target object, which enhances the display effect of the drawing data.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A video live broadcast interaction method, applied to a first live broadcast end of a video live broadcast system, wherein the video live broadcast system further comprises a second live broadcast end in communication with the first live broadcast end, and the method comprises:
acquiring a video stream of the second live broadcast end;
when a target object is detected from the video stream, displaying the target object in a first live broadcast picture of the first live broadcast end according to the video stream, and extracting feature data of the target object;
obtaining drawing data aiming at the target object, and generating drawing information according to the drawing data and the characteristic data;
and sending the drawing information to the second live broadcast end to indicate the second live broadcast end to render the target object based on the drawing information when displaying a second live broadcast picture comprising the target object.
2. The method of claim 1, wherein the drawing data comprises position coordinates of pixel points of a drawn image, the feature data comprises three-dimensional space coordinates of at least three key points, and the generating drawing information according to the drawing data and the feature data comprises:
and constructing relative position information between the drawing data and the feature data as the drawing information according to the three-dimensional space coordinates of the at least three key points and the position coordinates of the pixel points.
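Claim 2 constructs relative position information from the key-point coordinates and the pixel coordinates without specifying the construction. One illustrative reading, simplified to the image plane (the claim's key points are three-dimensional), expresses each drawn pixel in the affine frame spanned by three key points, so the stored coefficients stay valid as the key points move:

```python
def bind_pixel(p0, p1, p2, pixel):
    """Express a drawn pixel relative to three key points.

    Solves pixel = p0 + a*(p1 - p0) + b*(p2 - p0) for (a, b), a 2x2
    linear system. An illustrative sketch of claim 2's relative
    position information, not the application's actual construction.
    """
    (x0, y0), (x1, y1), (x2, y2), (px, py) = p0, p1, p2, pixel
    u = (x1 - x0, y1 - y0)   # first basis vector of the key-point frame
    v = (x2 - x0, y2 - y0)   # second basis vector
    det = u[0] * v[1] - u[1] * v[0]
    if det == 0:
        raise ValueError("key points are collinear")
    dx, dy = px - x0, py - y0
    a = (dx * v[1] - dy * v[0]) / det
    b = (u[0] * dy - u[1] * dx) / det
    return a, b
```

With key points at (0, 0), (1, 0), (0, 1), the pixel (0.5, 0.5) yields relative coordinates (0.5, 0.5), which remain meaningful under any affine motion of the three key points.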
3. The method according to claim 1, wherein the displaying the target object in a first live broadcast picture of the first live broadcast end according to the video stream and extracting feature data of the target object comprises:
extracting a plurality of image frames from the video stream;
when there is an image frame conforming to a specification among the plurality of image frames, displaying the target object in a drawing area of the first live broadcast picture according to the image frame conforming to the specification;
and extracting the feature data of the target object from the image frame conforming to the specification.
4. A video live broadcast interaction method, applied to a second live broadcast end of a video live broadcast system, wherein the video live broadcast system further comprises a first live broadcast end in communication with the second live broadcast end, and the method comprises:
obtaining drawing information sent by the first live broadcast end, wherein the drawing information is obtained based on feature data of a target object and drawing data aiming at the target object, and the target object is detected from a video stream sent by the second live broadcast end to the first live broadcast end;
and when the second live broadcast terminal displays a second live broadcast picture comprising the target object, rendering the target object based on the drawing information.
5. The method of claim 4, wherein the rendering the target object based on the drawing information comprises:
acquiring current feature data of the target object, wherein the current feature data comprises current three-dimensional space coordinates of at least three key points;
calculating, based on the drawing information, current drawing data corresponding to the current feature data, wherein the current drawing data comprises current position coordinates of pixel points of a drawn image;
rendering the target object based on the current drawing data.
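Claim 5 recomputes the drawing data from the target object's current feature data. Assuming the drawing information stores affine coordinates (a, b) of each pixel relative to three key points (an illustrative choice, not stated in the application), the current pixel positions follow directly from the current key-point positions:

```python
def current_pixel(p0, p1, p2, rel):
    """Recompute a drawn pixel's position from the target object's
    current key points and stored relative coordinates (a, b).

    Inverse of the binding step; an image-plane sketch of claim 5's
    calculation of current drawing data, not the application's method.
    """
    a, b = rel
    x = p0[0] + a * (p1[0] - p0[0]) + b * (p2[0] - p0[0])
    y = p0[1] + a * (p1[1] - p0[1]) + b * (p2[1] - p0[1])
    return x, y
```

If the key points translate from (0, 0), (1, 0), (0, 1) to (2, 3), (3, 3), (2, 4), a pixel bound at relative coordinates (0.5, 0.5) is re-rendered at (2.5, 3.5), so the drawn image follows the target object's movement.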
6. The method of claim 4, wherein the rendering the target object based on the drawing information comprises:
determining whether the drawing information is valid;
rendering the target object based on the drawing information when it is determined that the drawing information is valid.
7. The method according to any of claims 4-6, further comprising, after said rendering said target object based on said drawing information:
generating a live video based on the rendered target object;
playing the live video;
and sending the live video to the first live end to indicate the first live end to play the live video.
8. The method of claim 7, further comprising, after said playing said live video:
acquiring the current time and the valid period of the drawing information;
when the current time is not within the valid period, stopping rendering the target object based on the drawing information.
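Claim 8's validity-period check reduces to a single comparison of the current time against the drawing information's valid period (the `expires_at` field is a hypothetical representation of that period):

```python
def should_render(drawing_info, now):
    """Claim 8 sketch: keep rendering only while the drawing
    information's validity period has not elapsed. `expires_at` is a
    hypothetical field holding the end of the valid period."""
    return now <= drawing_info["expires_at"]
```

The second live broadcast end would call this on each frame and stop rendering the drawn image once it returns False.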
9. A video live broadcast interaction device, applied to a first live broadcast end of a video live broadcast system, wherein the video live broadcast system further comprises a second live broadcast end in communication with the first live broadcast end, and the device comprises:
the video stream acquisition module is used for acquiring the video stream of the second live broadcast end;
the feature data extraction module is used for displaying the target object in a first live broadcast picture of the first live broadcast end according to the video stream and extracting feature data of the target object when the target object is detected from the video stream;
the drawing information generation module is used for acquiring drawing data for the target object and generating drawing information according to the drawing data and the feature data;
and the sending module is used for sending the drawing information to the second live broadcast end so as to indicate the second live broadcast end to render the target object based on the drawing information when a second live broadcast picture comprising the target object is displayed.
10. A video live broadcast interaction device, applied to a second live broadcast end of a video live broadcast system, wherein the video live broadcast system further comprises a first live broadcast end in communication with the second live broadcast end, and the device comprises:
a drawing information obtaining module, configured to obtain drawing information sent by the first live broadcast end, where the drawing information is obtained based on feature data of a target object and drawing data for the target object, and the target object is detected in a video stream sent by the second live broadcast end to the first live broadcast end;
and the rendering module is used for rendering the target object based on the drawing information when the second live broadcast end displays a second live broadcast picture comprising the target object.
11. A video live broadcast interaction system, comprising a first live broadcast end and a second live broadcast end, wherein:
the first live broadcast end is used for acquiring a video stream of the second live broadcast end;
the first live broadcast end is used for displaying a target object in a first live broadcast picture of the first live broadcast end according to the video stream and extracting feature data of the target object when the target object is detected from the video stream;
the first live broadcast end is used for acquiring drawing data for the target object and generating drawing information according to the drawing data and the feature data;
the first live broadcast end is used for sending the drawing information to the second live broadcast end;
the second live broadcast end is used for acquiring the drawing information sent by the first live broadcast end;
and the second live broadcast end is used for rendering the target object based on the drawing information when the second live broadcast end displays a second live broadcast picture comprising the target object.
12. An electronic device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-3, or the method of any of claims 4-8.
13. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 3, or the method according to any one of claims 4 to 8.
CN201911395712.7A 2019-12-30 2019-12-30 Interaction method, device and system for live video, electronic equipment and storage medium Pending CN111147880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911395712.7A CN111147880A (en) 2019-12-30 2019-12-30 Interaction method, device and system for live video, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111147880A true CN111147880A (en) 2020-05-12

Family

ID=70521968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911395712.7A Pending CN111147880A (en) 2019-12-30 2019-12-30 Interaction method, device and system for live video, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111147880A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112073819A (en) * 2020-09-17 2020-12-11 网易(杭州)网络有限公司 Live broadcast interaction method, system, server, live broadcast end and storage medium
CN112218108A (en) * 2020-09-18 2021-01-12 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN112218069A (en) * 2020-09-28 2021-01-12 北京达佳互联信息技术有限公司 Live broadcast interface detection method and device
CN112383793A (en) * 2020-11-12 2021-02-19 咪咕视讯科技有限公司 Picture synthesis method and device, electronic equipment and storage medium
CN113329260A (en) * 2021-06-15 2021-08-31 北京沃东天骏信息技术有限公司 Live broadcast processing method and device, storage medium and electronic equipment
CN113573088A (en) * 2021-07-23 2021-10-29 上海芯翌智能科技有限公司 Method and equipment for synchronously drawing identification object for live video stream
WO2022073409A1 (en) * 2020-10-10 2022-04-14 腾讯科技(深圳)有限公司 Video processing method and apparatus, computer device, and storage medium
CN114466224A (en) * 2022-01-26 2022-05-10 广州繁星互娱信息科技有限公司 Video data encoding and decoding method and device, storage medium and electronic equipment
CN114501102A (en) * 2022-01-25 2022-05-13 广州繁星互娱信息科技有限公司 Live broadcast object display method and device, storage medium and electronic device
CN114679618A (en) * 2022-05-27 2022-06-28 成都有为财商教育科技有限公司 Method and system for receiving streaming media data

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105187930A (en) * 2015-09-18 2015-12-23 广州酷狗计算机科技有限公司 Video live broadcasting-based interaction method and device
CN105279251A (en) * 2015-09-30 2016-01-27 广州酷狗计算机科技有限公司 Virtual gift display method and device
CN106846040A (en) * 2016-12-22 2017-06-13 武汉斗鱼网络科技有限公司 Virtual present display methods and system in a kind of direct broadcasting room
US20170293826A1 (en) * 2015-01-20 2017-10-12 Eiji Kemmochi Electronic information board apparatus, information processing method, and computer program product
CN107438200A (en) * 2017-09-08 2017-12-05 广州酷狗计算机科技有限公司 The method and apparatus of direct broadcasting room present displaying
CN107454433A (en) * 2017-08-09 2017-12-08 广州视源电子科技股份有限公司 Live annotation method and device, terminal and live broadcast system
CN107820132A (en) * 2017-11-21 2018-03-20 广州华多网络科技有限公司 Living broadcast interactive method, apparatus and system
CN107888845A (en) * 2017-11-14 2018-04-06 腾讯数码(天津)有限公司 A kind of method of video image processing, device and terminal
CN108965977A (en) * 2018-06-13 2018-12-07 广州虎牙信息科技有限公司 Methods of exhibiting, device, storage medium, terminal and the system of present is broadcast live



Similar Documents

Publication Publication Date Title
CN111147880A (en) Interaction method, device and system for live video, electronic equipment and storage medium
CN112351302B (en) Live broadcast interaction method and device based on cloud game and storage medium
CN112379812B (en) Simulation 3D digital human interaction method and device, electronic equipment and storage medium
US9762855B2 (en) Sharing physical whiteboard content in electronic conference
CN110119700B (en) Avatar control method, avatar control device and electronic equipment
CN110868635B (en) Video processing method and device, electronic equipment and storage medium
CN110942501B (en) Virtual image switching method and device, electronic equipment and storage medium
CN113099298B (en) Method and device for changing virtual image and terminal equipment
CN110969682B (en) Virtual image switching method and device, electronic equipment and storage medium
CN111491208B (en) Video processing method and device, electronic equipment and computer readable medium
CN112752116A (en) Display method, device, terminal and storage medium of live video picture
CN111050023A (en) Video detection method and device, terminal equipment and storage medium
CN112750186B (en) Virtual image switching method, device, electronic equipment and storage medium
CN111580652A (en) Control method and device for video playing, augmented reality equipment and storage medium
CN112516589A (en) Game commodity interaction method and device in live broadcast, computer equipment and storage medium
US11758217B2 (en) Integrating overlaid digital content into displayed data via graphics processing circuitry
WO2022218042A1 (en) Video processing method and apparatus, and video player, electronic device and readable medium
CN112989112B (en) Online classroom content acquisition method and device
CN112866577A (en) Image processing method and device, computer readable medium and electronic equipment
CN111510769A (en) Video image processing method and device and electronic equipment
CN111507139A (en) Image effect generation method and device and electronic equipment
CN112702625B (en) Video processing method, device, electronic equipment and storage medium
CN115643445A (en) Interaction processing method and device, electronic equipment and storage medium
CN109218803B (en) Video enhancement control method and device and electronic equipment
CN113766167A (en) Panoramic video conference enhancement method, system and network equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210113

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511400 24th floor, building B-1, North District, Wanda Commercial Plaza, Wanbo business district, No.79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200512