CN106791893B - Video live broadcasting method and device - Google Patents

Video live broadcasting method and device

Info

Publication number
CN106791893B
Authority
CN
China
Prior art keywords
target
video
live broadcast
image data
background image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611009523.8A
Other languages
Chinese (zh)
Other versions
CN106791893A (en)
Inventor
赵子龙
汤晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority claimed from application CN201611009523.8A
Publication of CN106791893A
Application granted
Publication of CN106791893B
Legal status: Active (Current)
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure relates to a video live broadcast method and device. The method comprises the following steps: in the video live broadcast process, acquiring target background image data; generating target live broadcast video data according to the target background image data and the original live broadcast video data; and sending the target live broadcast video data to a server. With this technical scheme, the anchor does not need to manually switch the background image in the original video live broadcast interface; instead, target live broadcast video data is generated automatically from the target background image data, so that viewers can intuitively learn the anchor's current live broadcast situation, such as the geographic position of the current location, the weather conditions, and the anchor's mood. This enriches the anchor's live broadcast room, lets viewers grasp the anchor's current situation at first glance, promotes live broadcast interaction, and improves the viewers' live broadcast experience.

Description

Video live broadcasting method and device
Technical Field
The disclosure relates to the technical field of videos, in particular to a live video broadcasting method and device.
Background
At present, with the growing popularity of live broadcasting, more and more users are beginning to broadcast. However, a live broadcast interface in the related art often displays only the picture the anchor is currently broadcasting, the anchor's head portrait, and the like, so the interface is fixed and monotonous. The anchor often cannot change the live broadcast interface at all; even when the interface can be changed, it must be changed manually, which is very inconvenient for the anchor. It also leaves viewers unable to learn the anchor's current live broadcast situation, which is unfavorable to live broadcast interaction.
Disclosure of Invention
The embodiment of the disclosure provides a video live broadcast method and device. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video live broadcast method, including:
in the video live broadcast process, acquiring target background image data;
generating target live broadcast video data according to the target background image data and the original live broadcast video data;
and sending the target live video data to a server, so that the server forwards the target live video data to each viewer.
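The three steps above can be sketched as a minimal, hypothetical client-side pipeline. The `Frame` type, the function names, and the stubbed server call are illustrative assumptions for this example, not part of the patented method:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    """One frame of live video; bytes stand in for real image data."""
    pixels: bytes
    background: Optional[bytes] = None

def acquire_target_background() -> bytes:
    # Step 1: obtain target background image data matching the anchor's
    # current situation (location, weather, mood); stubbed here.
    return b"sunny-beach.png"

def generate_target_video(background: bytes, original: Frame) -> Frame:
    # Step 2: combine the target background with the original live frame.
    return Frame(pixels=original.pixels, background=background)

def send_to_server(frame: Frame) -> dict:
    # Step 3: hand the target live video data to the live server, which
    # forwards it to each viewer. A real client would stream over
    # RTMP or WebRTC; the returned dict is purely illustrative.
    return {"status": "forwarded", "has_background": frame.background is not None}

original = Frame(pixels=b"raw-camera-frame")
target = generate_target_video(acquire_target_background(), original)
result = send_to_server(target)
print(result["status"])
```

In a real client, each step would run per frame inside the capture loop, with the server connection established once at the start of the broadcast.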
In one embodiment, the acquiring target background image data during the video live broadcast includes:
acquiring a current background parameter in a video live broadcast process;
and determining the target background image data matched with the current background parameter according to the current background parameter.
In an embodiment, the obtaining the current background parameter during the video live broadcast includes:
and receiving the input current background parameters in the video live broadcasting process.
In one embodiment, the current background parameters include at least one of: the current position of the anchor terminal, the weather conditions corresponding to the current position, and an emotion parameter of the anchor-terminal user.
In one embodiment, when the current context parameter includes an emotion parameter, the obtaining the current context parameter during the live video includes:
acquiring facial feature data of a main broadcasting end user in the original live video data;
performing expression analysis on the facial feature data to obtain the emotion parameters.
In one embodiment, when the current context parameter comprises the mood parameter, prior to generating the target live video data, the method further comprises:
and adjusting the target background image data according to the emotion parameters.
In one embodiment, the raw live video data includes: an original video live broadcast interface;
generating target live broadcast video data according to the target background image data and the original live broadcast video data, wherein the generating of the target live broadcast video data comprises the following steps:
acquiring preset display parameters of the target background image data;
displaying the target background image data in the original video live broadcast interface according to the preset display parameters, wherein the preset display parameters comprise:
at least one parameter of preset transparency of the target background image data, size of the target background image data, display frame of the target background image data, and position of the target background image data in the original video live broadcast interface.
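As a hedged illustration of these preset display parameters, the sketch below models them as a small record and computes where the background image lands inside the live interface. The field names and the clamping rule are assumptions made for the example only:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DisplayParams:
    transparency: float          # preset transparency, 0.0 (0%) to 1.0 (100%)
    size: Tuple[int, int]        # (width, height) of the background image
    position: Tuple[int, int]    # top-left corner inside the live interface
    frame_style: str = "none"    # display frame drawn around the image

def placement_rect(params: DisplayParams, interface_size: Tuple[int, int]):
    """Return the (left, top, right, bottom) box for the background image,
    clamped so it never extends past the live interface edges."""
    iw, ih = interface_size
    x, y = params.position
    w, h = params.size
    return (x, y, min(x + w, iw), min(y + h, ih))

params = DisplayParams(transparency=0.4, size=(400, 300), position=(100, 50))
print(placement_rect(params, (1280, 720)))
```

A renderer would then draw the background image into this rectangle at the given transparency before compositing the rest of the interface.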
In one embodiment, the raw live video data includes: an original video live broadcast interface;
generating target live broadcast video data according to the target background image data and the original live broadcast video data, wherein the generating of the target live broadcast video data comprises the following steps:
acquiring user information of a main broadcasting end;
and displaying the user information and the target background image data in the original video live broadcast interface.
According to a second aspect of the embodiments of the present disclosure, there is provided a video live broadcasting device, including:
the acquisition module is used for acquiring target background image data in the video live broadcast process;
the generating module is used for generating target live broadcast video data according to the target background image data and the original live broadcast video data;
and the sending module is used for sending the target live video data to a server, so that the server forwards the target live video data to each viewer.
In one embodiment, the obtaining module comprises:
the first obtaining submodule is used for obtaining current background parameters in the video live broadcast process;
and the determining submodule is used for determining the target background image data matched with the current background parameter according to the current background parameter.
In one embodiment, the first obtaining sub-module includes:
and the receiving unit is used for receiving the input current background parameters in the video live broadcast process.
In one embodiment, the current background parameters include at least one of: the current position of the anchor terminal, the weather conditions corresponding to the current position, and an emotion parameter of the anchor-terminal user.
In one embodiment, the first obtaining sub-module includes:
the acquisition unit is used for acquiring facial feature data of a main broadcast end user in the original live video data when the current background parameters comprise emotion parameters;
and the analysis unit is used for performing expression analysis on the facial feature data to obtain the emotion parameters.
In one embodiment, the apparatus further comprises:
and the adjusting module is used for adjusting the color of the target background image data according to the emotion parameter before generating the target live broadcast video data when the current background parameter comprises the emotion parameter.
In one embodiment, the raw live video data includes: an original video live broadcast interface;
the generation module comprises:
the second obtaining submodule is used for obtaining preset display parameters of the target background image data;
the first display sub-module is configured to display the target background image data in the original video live broadcast interface according to the preset display parameters, where the preset display parameters include:
at least one parameter of preset transparency of the target background image data, size of the target background image data, display frame of the target background image data, and position of the target background image data in the original video live broadcast interface.
In one embodiment, the raw live video data includes: an original video live broadcast interface;
the generation module comprises:
the third acquisition submodule is used for acquiring the user information of the anchor terminal;
and the second display sub-module is used for displaying the user information and the target background image data in the original video live broadcast interface.
According to a third aspect of the embodiments of the present disclosure, there is provided a video live broadcasting apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
in the video live broadcast process, acquiring target background image data;
generating target live broadcast video data according to the target background image data and the original live broadcast video data;
and sending the target live video data to a server, so that the server forwards the target live video data to each viewer.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the technical scheme, when video live broadcasting is carried out, target background image data are automatically acquired, and then the target live video data are generated according to the target background image data and the original live video data, so that the aim that audiences can vividly and intuitively know the current live broadcasting condition of the anchor and the current live broadcasting data can be automatically generated according to the target background image data on the basis that the anchor does not need to manually switch background images in an original video live broadcasting interface, the room of the anchor is enriched, the audiences can more clearly know the current live broadcasting condition of the anchor through the target live broadcasting video data at the first time, such as the current geographic position, weather condition, mood and the like, and therefore live broadcasting interaction is facilitated, and the live broadcasting experience of the audiences is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow diagram illustrating a method of video live broadcast in accordance with an exemplary embodiment.
Fig. 2A is a flow diagram illustrating another video live method according to an example embodiment.
Fig. 2B is a flow diagram illustrating yet another video live method in accordance with an example embodiment.
Fig. 3 is a flow chart illustrating yet another method of video live broadcasting according to an example embodiment.
Fig. 4 is a flow chart illustrating yet another method of video live broadcasting in accordance with an exemplary embodiment.
Fig. 5 is a flow chart illustrating yet another method of video live broadcasting in accordance with an exemplary embodiment.
Fig. 6 is a block diagram illustrating a video live device according to an example embodiment.
Fig. 7A is a block diagram illustrating another video live device according to an example embodiment.
Fig. 7B is a block diagram illustrating yet another video live device according to an example embodiment.
Fig. 8 is a block diagram illustrating yet another video live device according to an example embodiment.
Fig. 9 is a block diagram illustrating yet another video live device according to an example embodiment.
Fig. 10 is a block diagram illustrating a device adapted for video live broadcast according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
At present, with the growing popularity of live broadcasting, more and more users are beginning to broadcast. However, a live broadcast interface in the related art often displays only the picture the anchor is currently broadcasting, the anchor's head portrait, and the like, so the interface is fixed and monotonous. The anchor often cannot change the live broadcast interface at all; even when the interface can be changed, it must be changed manually, which is very inconvenient for the anchor. It also leaves viewers unable to learn the anchor's current live broadcast situation, which is unfavorable to live broadcast interaction.
In order to solve the foregoing technical problem, an embodiment of the present disclosure provides a video live broadcasting method. The method may be used in a video live broadcasting program, system, or device, and the execution subject of the method may be a terminal such as a mobile phone, a personal computer (PC), or a tablet.
Fig. 1 is a flow diagram illustrating a method of video live broadcast in accordance with an exemplary embodiment.
As shown in fig. 1, the method includes steps S101 to S103:
in step S101, in the process of live video, target background image data is acquired;
the target background image data enables the audience to visually and intuitively know the current live broadcast condition of the user at the anchor end (namely the anchor), such as the current position of the anchor, the weather condition of the current position of the anchor, the mood of the anchor and the like.
In step S102, target live broadcast video data is generated according to the target background image data and the original live broadcast video data. The original live broadcast video data includes the original video live broadcast interface, which shows the specific content currently being broadcast: for example, during a live travel broadcast, the interface shows the environment the anchor is currently touring, and during a live game broadcast, it shows the operation interface of the game being played.
In step S103, the target live broadcast video data is sent to a server so that the server forwards it to each viewer, i.e., to each member of the audience in the anchor's live broadcast room during the current video live broadcast; the server may be a live broadcast server.
When video live broadcasting is carried out, target background image data is acquired automatically, and target live broadcast video data is then generated from the target background image data and the original live broadcast video data. On this basis, the anchor does not need to manually switch the background image in the original video live broadcast interface, and target live broadcast video data that lets viewers vividly and intuitively learn the anchor's current live broadcast situation is generated automatically. This enriches the anchor's live broadcast room, and viewers can immediately and clearly learn the anchor's current situation, such as the current geographic position, weather conditions, and mood, which promotes live broadcast interaction and improves the viewers' live broadcast experience.
Fig. 2A is a flow diagram illustrating another video live method according to an example embodiment.
As shown in FIG. 2A, in one embodiment, step S101 shown in FIG. 1 above may include step A1 and step A2:
in step a1, in the process of live video, obtaining a current background parameter;
the current background parameter can reflect the current live broadcast condition of the anchor, such as the current geographic position, weather condition, mood and the like.
In step a2, target background image data matching the current background parameters is determined based on the current background parameters.
The target background image data matched with the current background parameters enables the audience to visually and intuitively know the current live broadcast condition of the anchor user (i.e. the anchor).
By obtaining the current background parameter during the live video broadcast, the target background image data matching that parameter can be determined automatically and intelligently. Target live broadcast video data that lets viewers intuitively and vividly learn the anchor's current live broadcast situation can then be generated, improving the efficiency of live broadcast interaction.
Fig. 2B is a flow diagram illustrating yet another video live method in accordance with an example embodiment.
As shown in fig. 2B, in one embodiment, step S101 in fig. 1 may include step A3:
in step a3, receiving an input current background parameter during the video live broadcast; the current context parameter may be manually entered by a user.
In one embodiment, the current background parameters include at least one of: the current position of the anchor terminal, the weather conditions corresponding to the current position, and an emotion parameter of the anchor-terminal user. The emotion parameter may be a picture, an emoticon, or a similar item that the anchor inputs in the original video live broadcast interface to express the anchor's current mood, or an interaction record that reflects the anchor's current mood (such as a chat record between the anchor terminal and a viewer terminal).
The current background parameters include, but are not limited to, the current position of the anchor terminal, the weather conditions corresponding to the current position, and the emotion parameter of the anchor-terminal user; for example, a recent-activity parameter of the anchor-terminal user may also be included.
When the current background parameter is obtained, the target background image data which is matched with the current background parameter and can enable the audience to know the current live broadcast condition of the anchor in time can be automatically determined, for example:
when the acquired weather condition parameters are rain, sunny, cloudy, fog and the like, the target background image data can be a background image with weather characteristics, such as a background image with characteristics of rain, sunny, cloudy, fog and the like, so that when a viewer enters a main broadcasting room, the current weather of the current position of the main broadcasting room can be intuitively known by watching the target live broadcast video data generated by the target background image data, and the viewer can chat with the main broadcasting room; another example is:
when the position of the anchor is obtained, the target background image data can be automatically changed along with the geographical position, if the target background image data can be automatically changed into a landmark building of the current position, and the like, the audience can directly see where the anchor plays by watching the target live broadcast video data, so that local play strategies or local famous snacks and the like can be recommended to the anchor, and the interaction between the anchor and the audience is improved.
Fig. 3 is a flow chart illustrating yet another method of video live broadcasting according to an example embodiment.
As shown in fig. 3, in one embodiment, when the current background parameter includes an emotion parameter, the step S101 shown in fig. 1 may include the step B1 and the step B2:
in step B1, facial feature data of the anchor user in the original live video data is acquired;
the facial feature data may be the avatar, eye-gaze, mouth shape, eyebrow features, etc. of the anchor user.
In step B2, facial feature data is subjected to expression analysis to obtain mood parameters.
Different facial features can correspond to different emotion parameters. Therefore, when the current background parameters include an emotion parameter, expression analysis can be performed automatically on the facial feature data of the anchor-terminal user in the original live broadcast video data to obtain the emotion parameter, so that the anchor does not have to input it manually.
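A real implementation would run a trained facial-expression model over the facial feature data. Purely as a stand-in, the sketch below maps two hypothetical facial measurements to an emotion label with hand-written thresholds; the feature names and thresholds are assumptions, not part of the disclosure:

```python
def analyze_expression(features: dict) -> str:
    """Toy rule-based expression analysis.

    `mouth_curve` (positive means smiling) and `brow_furrow` (0..1) are
    assumed feature names; the thresholds below are arbitrary.
    """
    curve = features.get("mouth_curve", 0.0)
    furrow = features.get("brow_furrow", 0.0)
    if curve > 0.3:
        return "happy"
    if curve < -0.3 or furrow > 0.5:
        return "sad"
    return "neutral"

print(analyze_expression({"mouth_curve": 0.6}))
```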
In one embodiment, when the current background parameter includes an emotion parameter, before performing step S102, the method may further include:
and adjusting the target background image data according to the emotion parameters.
When the current background parameter includes an emotion parameter, the target background image data may be adaptively adjusted according to that parameter before the target live broadcast video data is generated, for example by adjusting the size, transparency, or color of the target background image data. Adjusting these parameters of the same background image can represent the anchor's different emotions and distinguish the anchor's mood, so viewers can better understand the anchor's current state. For example, the emotion parameter may come from a picture or emoticon that the anchor inputs or selects to reflect the current mood, from the interaction records between the anchor and the audience, or from automatic analysis of the facial feature data of the anchor-terminal user in the original live broadcast video data. According to the emotion parameter, the size, transparency, or color of the whole target background image data can be adjusted, and even information in the target background image data such as bullet-screen fonts, gifts, and lighting effects can be adjusted. When viewers enter the anchor's room, they can then learn the anchor's mood intuitively from the generated target live broadcast video data: when the anchor is in a bad mood the viewers can offer consolation, and when the anchor is in a good mood the viewers can share in it, improving the interactive experience between the anchor and the audience.
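One way to adjust the background image by emotion, sketched under the assumption that the adjustment is a per-channel colour tint. The tint values and emotion labels are invented for the example:

```python
# Hypothetical emotion-to-tint table (RGB multipliers out of 255).
EMOTION_TINTS = {
    "happy": (255, 230, 150),   # warm, yellowish tint
    "sad": (150, 170, 220),     # cool, bluish tint
    "neutral": (255, 255, 255), # leave the image unchanged
}

def tint_pixel(pixel, emotion):
    # Scale each RGB channel toward the emotion's tint colour.
    tint = EMOTION_TINTS.get(emotion, (255, 255, 255))
    return tuple(p * t // 255 for p, t in zip(pixel, tint))

def adjust_background(pixels, emotion):
    # Apply the tint to every pixel of the target background image.
    return [tint_pixel(p, emotion) for p in pixels]

print(adjust_background([(200, 200, 200)], "sad"))
```

Size or transparency adjustments mentioned in the text would be handled analogously, by scaling the image dimensions or the alpha value rather than the colour channels.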
Fig. 4 is a flow chart illustrating yet another method of video live broadcasting in accordance with an exemplary embodiment.
As shown in fig. 4, in one embodiment, the original live video data includes an original video live broadcast interface in which the content the anchor is currently broadcasting is displayed, and step S102 in fig. 1 may include step C1 and step C2:
in step C1, acquiring preset display parameters of the target background image data;
in step C2, the target background image data is displayed in the original video live interface according to preset display parameters, where the preset display parameters include: at least one parameter of preset transparency of the target background image data, size of the target background image data, display frame of the target background image data, and position of the target background image data in the original video live broadcast interface.
The preset display parameters include, but are not limited to, at least one of the preset transparency of the target background image data, its size, its display frame, and its position in the original video live broadcast interface. They may even include the display frequency of the target background image data: the data may be displayed continuously in the original video live broadcast interface, or hidden periodically so that it is displayed intermittently.
When the target background image data is displayed in the original video live broadcast interface, its preset display parameters can be acquired and the data then displayed automatically according to them, so that viewers can intuitively see the anchor's current live broadcast situation, which helps improve the efficiency of live broadcast interaction.
In one embodiment, when the preset display parameter includes a preset transparency, the step C2 in fig. 4 may be performed as follows:
setting the transparency of the target background image data as a preset transparency, wherein the preset transparency can be 0-100%;
and suspending the target background image data with the preset transparency on the original video live broadcast interface.
When the preset display parameters include a preset transparency, the transparency of the target background image data can be set to that value and the data then suspended over the original video live broadcast interface. Viewers can thus learn the anchor's current live broadcast situation from the target background image data, which promotes interaction between the anchor and the audience and improves the live broadcast experience of both.
In addition, when the preset transparency is greater than 0% and less than 100%, the target background image data does not completely cover part of the original video live broadcast interface, avoiding a visual obstruction for viewers.
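Suspending a semi-transparent background over the interface amounts to alpha compositing. A minimal per-pixel sketch, where `opacity` plays the role of the preset transparency expressed in the range [0, 1]:

```python
def blend_pixel(interface_px, background_px, opacity):
    """Composite one background-image pixel over the live-interface pixel.

    opacity 0.0 leaves the interface untouched; 1.0 fully covers it;
    values strictly in between keep the interface partially visible.
    """
    return tuple(
        round(opacity * b + (1 - opacity) * i)
        for b, i in zip(background_px, interface_px)
    )

print(blend_pixel((0, 0, 0), (255, 255, 255), 0.5))
```

A renderer would apply this blend to every pixel in the background image's placement rectangle, typically via a GPU compositor rather than per-pixel Python.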
Fig. 5 is a flow chart illustrating yet another method of video live broadcasting in accordance with an exemplary embodiment.
As shown in fig. 5, in one embodiment, the original live video data includes an original video live broadcast interface, and step S102 in fig. 1 may include step D1 and step D2:
in step D1, user information of the anchor terminal is obtained, where the user information may be information identifying the anchor, such as the anchor's head portrait, nickname, blood type, zodiac sign, and address;
in step D2, the user information and the target background image data are displayed in the original video live interface.
When the target background image data is displayed in the original video live broadcast interface, user information of the anchor can also be obtained and displayed together with the target background image data. The audience can then learn the current live broadcast content by viewing the original video live broadcast interface, learn the anchor's current live broadcast situation through the target background image data, and further learn the identity of the current anchor through the displayed user information.
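Steps D1 and D2 above can be sketched as follows. The field names (avatar, nickname, address) follow the examples given in the text, but the data structures, function names, and the idea of filtering out non-identity fields are illustrative assumptions.

```python
# Hypothetical sketch of steps D1 and D2: fetch the anchor's user information
# and attach it, with the target background image data, to the interface.

def get_user_info(anchor_id, directory):
    """Step D1: look up information identifying the anchor (stubbed lookup)."""
    return directory.get(anchor_id, {})

def display_with_user_info(interface, background, user_info):
    """Step D2: describe the target interface holding user info + background."""
    return {
        "interface": interface,
        "background": background,
        # keep only identity fields; anything else is not shown to viewers
        "user_info": {k: v for k, v in user_info.items()
                      if k in ("avatar", "nickname", "address")},
    }

directory = {"anchor_1": {"avatar": "a.png", "nickname": "Rain", "password": "x"}}
info = get_user_info("anchor_1", directory)
target = display_with_user_info("original_interface", "sunset.png", info)
print(target["user_info"])   # identity fields only; unrelated fields dropped
```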
Corresponding to the video live broadcasting method provided by the embodiment of the disclosure, the embodiment of the disclosure also provides a video live broadcasting device.
Fig. 6 is a block diagram illustrating a video live device according to an example embodiment.
As shown in fig. 6, the apparatus includes an obtaining module 601, a generating module 602, and a sending module 603:
an obtaining module 601, configured to obtain target background image data in a video live broadcast process;
a generating module 602 configured to generate target live video data according to the target background image data and the original live video data;
a sending module 603 configured to send the target live video data to the server, so that the server forwards the target live video data to each viewer.
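The three modules above (obtaining 601 → generating 602 → sending 603) form one pipeline, which can be sketched as below. The server call is stubbed with a callback, and all class and method names are illustrative assumptions.

```python
# Sketch of the obtain -> generate -> send pipeline from fig. 6.

class LiveBroadcastDevice:
    def __init__(self, send_fn):
        self.send_fn = send_fn                 # stands in for sending module 603

    def obtain_background(self, background_params):
        """Obtaining module 601: pick target background image data."""
        return {"image": f"bg_for_{background_params}"}

    def generate_target(self, background, original_video):
        """Generating module 602: combine background with original video data."""
        return {"original": original_video, "background": background}

    def broadcast(self, background_params, original_video):
        """End-to-end flow; the server then forwards to each viewer."""
        background = self.obtain_background(background_params)
        target = self.generate_target(background, original_video)
        self.send_fn(target)
        return target

sent = []                                      # "server" just records uploads
device = LiveBroadcastDevice(sent.append)
target = device.broadcast("rain", "frame_0")
print(sent[0] == target)                       # the generated data is what is sent
```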
Fig. 7A is a block diagram illustrating another video live device according to an example embodiment.
As shown in fig. 7A, in an embodiment, the obtaining module 601 shown in fig. 6 may include a first obtaining submodule 6011 and a determining submodule 6012:
a first obtaining submodule 6011, configured to obtain a current background parameter during a live video broadcast process;
a determining sub-module 6012 configured to determine, according to the current background parameter, target background image data matching the current background parameter.
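The determining sub-module 6012 can be sketched as a lookup from current background parameters (position, weather, mood) to matching target background image data. The lookup table, file names, and fallback value are purely illustrative assumptions.

```python
# Sketch of sub-module 6012: match current background parameters to an image.

BACKGROUND_LIBRARY = {
    ("weather", "rain"): "rainy_window.png",
    ("weather", "sunny"): "beach.png",
    ("mood", "happy"): "confetti.png",
    ("position", "Beijing"): "beijing_skyline.png",
}

def determine_background(current_params, library=BACKGROUND_LIBRARY):
    """Return the first background image matching the current parameters."""
    for kind, value in current_params:
        match = library.get((kind, value))
        if match is not None:
            return match
    return "default.png"                       # fallback when nothing matches

print(determine_background([("weather", "rain"), ("mood", "happy")]))
```

Earlier parameters in the list take priority here; a real implementation might instead score or combine several matching backgrounds.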
Fig. 7B is a block diagram illustrating yet another video live device according to an example embodiment.
As shown in fig. 7B, in an embodiment, the first obtaining sub-module 6011 shown in fig. 7A may include a receiving unit 60111:
the receiving unit 60111 is configured to receive the input current background parameters during the live video.
In one embodiment, the current background parameters include at least one of: the current position of the anchor terminal, the weather condition corresponding to the current position, and an emotion parameter of the anchor terminal's user.
In one embodiment, the first acquisition submodule may include an acquisition unit and an analysis unit:
an acquisition unit configured to acquire facial feature data of the anchor-side user in the original live video data when the current background parameters include an emotion parameter;
an analysis unit configured to perform expression analysis on the facial feature data to obtain an emotional parameter.
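A toy sketch of the analysis unit follows: performing expression analysis on facial feature data to obtain an emotion parameter. A real implementation would use a trained model; the two features and their thresholds here are purely illustrative assumptions.

```python
# Sketch: map facial feature data to an emotion parameter (a text label).

def analyze_expression(features):
    """Very rough expression analysis on hand-picked facial features."""
    mouth_curve = features.get("mouth_curve", 0.0)    # > 0: corners raised
    eye_openness = features.get("eye_openness", 0.5)  # 0 closed .. 1 wide open
    if mouth_curve > 0.3:
        return "happy"
    if mouth_curve < -0.3:
        return "sad"
    if eye_openness > 0.8:
        return "surprised"
    return "neutral"

print(analyze_expression({"mouth_curve": 0.6, "eye_openness": 0.5}))  # happy
```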
In one embodiment, the apparatus further comprises:
an adjusting module configured to adjust the color of the target background image data according to the emotion parameter before generating the target live video data, when the current background parameter includes the emotion parameter.
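The adjusting module can be sketched as a per-emotion color shift applied to the background image before the target live video data is generated. The warm/cool tints chosen per emotion are illustrative assumptions, not values from the patent.

```python
# Sketch: shift the color of background pixels according to the emotion.

EMOTION_TINT = {
    "happy": (20, 10, -10),    # warm shift: more red/green, less blue
    "sad": (-15, -10, 20),     # cool shift: less red/green, more blue
}

def _clamp(value):
    return max(0, min(255, value))

def adjust_color(pixels, emotion):
    """Apply a per-emotion (R, G, B) shift to every pixel, clamped to 0-255."""
    shift = EMOTION_TINT.get(emotion, (0, 0, 0))   # unknown emotion: no change
    return [tuple(_clamp(c + s) for c, s in zip(px, shift)) for px in pixels]

print(adjust_color([(250, 100, 5)], "happy"))      # [(255, 110, 0)]
```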
Fig. 8 is a block diagram illustrating yet another video live device according to an example embodiment.
As shown in fig. 8, in one embodiment, the original live video data includes: an original video live broadcast interface;
the generation module 602 shown in fig. 6 described above may include a second acquisition sub-module 6021 and a first display sub-module 6022:
a second obtaining sub-module 6021 configured to obtain preset display parameters of the target background image data;
the first display sub-module 6022 is configured to display the target background image data in the original video live broadcast interface according to preset display parameters, where the preset display parameters include:
at least one of: the preset transparency of the target background image data, the size of the target background image data, the display frame of the target background image data, and the position of the target background image data in the original video live broadcast interface.
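The four preset display parameters listed above can be grouped into one structure with simple validation. The defaults, field names, and the percent convention for transparency are illustrative assumptions.

```python
# Sketch: one structure holding the preset display parameters.

from dataclasses import dataclass

@dataclass
class DisplayParams:
    transparency: float = 50.0        # preset transparency, 0-100 (percent)
    size: tuple = (160, 90)           # width, height of the background image
    frame_style: str = "none"         # display frame drawn around the image
    position: tuple = (0, 0)          # top-left corner inside the interface

    def validate(self):
        if not 0.0 <= self.transparency <= 100.0:
            raise ValueError("transparency must be between 0 and 100 percent")
        return self

params = DisplayParams(transparency=30.0, position=(10, 10)).validate()
print(params.transparency, params.position)
```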
Fig. 9 is a block diagram illustrating yet another video live device according to an example embodiment.
As shown in fig. 9, in one embodiment, the original live video data includes: an original video live broadcast interface;
the generation module 602 shown in fig. 6 described above may include a third acquisition sub-module 6023 and a second display sub-module 6024:
a third obtaining sub-module 6023 configured to obtain user information of the anchor terminal;
a second display sub-module 6024 configured to display the user information and the target background image data in the original video live interface.
According to a third aspect of the embodiments of the present disclosure, there is provided a video live broadcasting apparatus, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
in the video live broadcast process, acquiring target background image data;
generating target live broadcast video data according to the target background image data and the original live broadcast video data;
and sending the target live video data to a server so that the server forwards the target live video data to each viewer.
The processor may be further configured to:
in the process of live video, acquiring target background image data, including:
acquiring a current background parameter in a video live broadcast process;
and determining the target background image data matched with the current background parameter according to the current background parameter.
The processor may be further configured to:
in the process of live video, acquiring a current background parameter includes:
and receiving the input current background parameters in the video live broadcasting process.
The processor may be further configured to:
the current background parameters include at least one of: the current position of the anchor terminal, the weather condition corresponding to the current position, and an emotion parameter of the anchor terminal's user.
The processor may be further configured to:
when the current background parameter includes an emotion parameter, the obtaining of the current background parameter in the video live broadcast process includes:
acquiring facial feature data of a main broadcasting end user in the original live video data;
performing expression analysis on the facial feature data to obtain the emotion parameters.
The processor may be further configured to:
when the current background parameter includes the emotion parameter, prior to generating the target live video data, the method further comprises:
and adjusting the target background image data according to the emotion parameters.
The processor may be further configured to:
the original live video data includes: an original video live broadcast interface;
generating target live broadcast video data according to the target background image data and the original live broadcast video data, wherein the generating of the target live broadcast video data comprises the following steps:
acquiring preset display parameters of the target background image data;
displaying the target background image data in the original video live broadcast interface according to the preset display parameters, wherein the preset display parameters comprise:
at least one of: the preset transparency of the target background image data, the size of the target background image data, the display frame of the target background image data, and the position of the target background image data in the original video live broadcast interface.
The processor may be further configured to:
the original live video data includes: an original video live broadcast interface;
generating target live broadcast video data according to the target background image data and the original live broadcast video data, wherein the generating of the target live broadcast video data comprises the following steps:
acquiring user information of a main broadcasting end;
and displaying the user information and the target background image data in the original video live broadcast interface.
Fig. 10 is a block diagram illustrating an apparatus 1000 for live video according to an exemplary embodiment, which is suitable for a terminal device. For example, the apparatus 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 10, the apparatus 1000 may include one or more of the following components: processing component 1002, memory 1004, power component 1006, multimedia component 1008, audio component 1010, input/output (I/O) interface 1012, sensor component 1014, and communications component 1016.
The processing component 1002 generally controls the overall operation of the device 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1002 may include one or more processors 1020 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1002 may include one or more modules that facilitate interaction between the processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operations at the apparatus 1000. Examples of such data include instructions for any applications or methods operated on the device 1000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1004 may be implemented by any type or combination of volatile or non-volatile memory devices such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1006 provides power to the various components of the device 1000. The power supply component 1006 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 1000.
The multimedia component 1008 includes a screen that provides an output interface between the device 1000 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1008 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 1000 is in an operating mode, such as a shooting mode or a video mode. Each of the front and rear cameras may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, audio component 1010 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, audio component 1010 also includes a speaker for outputting audio signals.
I/O interface 1012 provides an interface between processing component 1002 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1014 includes one or more sensors for providing various aspects of status assessment for the device 1000. For example, the sensor assembly 1014 may detect an open/closed state of the device 1000, the relative positioning of components, such as the display and keypad of the device 1000, a change in position of the device 1000 or a component of the device 1000, the presence or absence of user contact with the device 1000, the orientation or acceleration/deceleration of the device 1000, and a change in temperature of the device 1000. The sensor assembly 1014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate communications between the apparatus 1000 and other devices in a wired or wireless manner. The device 1000 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1016 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1000 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1004 comprising instructions, executable by the processor 1020 of the device 1000 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium has instructions stored thereon that, when executed by a processor of the apparatus 1000, enable the apparatus 1000 to perform a video live broadcast method, the method comprising:
in the video live broadcast process, acquiring target background image data;
generating target live broadcast video data according to the target background image data and the original live broadcast video data;
and sending the target live video data to a server so that the server forwards the target live video data to each viewer.
In one embodiment, the acquiring target background image data during the video live broadcast includes:
acquiring a current background parameter in a video live broadcast process;
and determining the target background image data matched with the current background parameter according to the current background parameter.
In an embodiment, the obtaining the current background parameter during the video live broadcast includes:
and receiving the input current background parameters in the video live broadcasting process.
In one embodiment, the current background parameters include at least one of: the current position of the anchor terminal, the weather condition corresponding to the current position, and an emotion parameter of the anchor terminal's user.
In one embodiment, when the current context parameter includes an emotion parameter, the obtaining the current context parameter during the live video includes:
acquiring facial feature data of a main broadcasting end user in the original live video data;
performing expression analysis on the facial feature data to obtain the emotion parameters.
In one embodiment, when the current background parameter includes the emotion parameter, before generating the target live video data, the method further comprises:
and adjusting the target background image data according to the emotion parameters.
In one embodiment, the raw live video data includes: an original video live broadcast interface;
generating target live broadcast video data according to the target background image data and the original live broadcast video data, wherein the generating of the target live broadcast video data comprises the following steps:
acquiring preset display parameters of the target background image data;
displaying the target background image data in the original video live broadcast interface according to the preset display parameters, wherein the preset display parameters comprise:
at least one of: the preset transparency of the target background image data, the size of the target background image data, the display frame of the target background image data, and the position of the target background image data in the original video live broadcast interface.
In one embodiment, the raw live video data includes: an original video live broadcast interface;
generating target live broadcast video data according to the target background image data and the original live broadcast video data, wherein the generating of the target live broadcast video data comprises the following steps:
acquiring user information of a main broadcasting end;
and displaying the user information and the target background image data in the original video live broadcast interface.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A method for live video, comprising:
in the video live broadcast process, acquiring target background image data;
generating target live broadcast video data according to the target background image data and the original live broadcast video data;
sending the target live video data to a server so that the server forwards the target live video data to each viewer;
in the process of live video, acquiring target background image data, including:
acquiring a current background parameter in a video live broadcast process;
determining the target background image data matched with the current background parameter according to the current background parameter;
the current background parameters include at least one of: the current position of the anchor terminal, the weather condition corresponding to the current position, and an emotion parameter of the anchor terminal's user.
2. The method of claim 1,
in the process of live video, acquiring a current background parameter includes:
and receiving the input current background parameters in the video live broadcasting process.
3. The method of claim 1,
when the current background parameter includes an emotion parameter, the obtaining of the current background parameter in the video live broadcast process includes:
acquiring facial feature data of a main broadcasting end user in the original live video data;
performing expression analysis on the facial feature data to obtain the emotion parameters.
4. The method of claim 3,
when the current background parameter includes the emotion parameter, before generating the target live video data, the method further comprises:
and adjusting the target background image data according to the emotion parameters.
5. The method according to any one of claims 1 to 4,
the original live video data includes: an original video live broadcast interface;
generating target live broadcast video data according to the target background image data and the original live broadcast video data, wherein the generating of the target live broadcast video data comprises the following steps:
acquiring preset display parameters of the target background image data;
displaying the target background image data in the original video live broadcast interface according to the preset display parameters, wherein the preset display parameters comprise:
at least one of: the preset transparency of the target background image data, the size of the target background image data, the display frame of the target background image data, and the position of the target background image data in the original video live broadcast interface.
6. The method according to any one of claims 1 to 4,
the original live video data includes: an original video live broadcast interface;
generating target live broadcast video data according to the target background image data and the original live broadcast video data, wherein the generating of the target live broadcast video data comprises the following steps:
acquiring user information of a main broadcasting end;
and displaying the user information and the target background image data in the original video live broadcast interface.
7. A video live broadcast apparatus, comprising:
the acquisition module is used for acquiring target background image data in the video live broadcast process;
the generating module is used for generating target live broadcast video data according to the target background image data and the original live broadcast video data;
the sending module is used for sending the target live video data to a server so that the server forwards the target live video data to each viewer;
the acquisition module includes:
the first obtaining submodule is used for obtaining current background parameters in the video live broadcast process;
the determining submodule is used for determining the target background image data matched with the current background parameter according to the current background parameter;
the current background parameters include at least one of: the current position of the anchor terminal, the weather condition corresponding to the current position, and an emotion parameter of the anchor terminal's user.
8. The apparatus of claim 7,
the first acquisition sub-module includes:
and the receiving unit is used for receiving the input current background parameters in the video live broadcast process.
9. The apparatus of claim 7,
the first acquisition sub-module includes:
the acquisition unit is used for acquiring facial feature data of a main broadcast end user in the original live video data when the current background parameters comprise emotion parameters;
and the analysis unit is used for performing expression analysis on the facial feature data to obtain the emotion parameters.
10. The apparatus of claim 9, further comprising:
and the adjusting module is used for adjusting the color of the target background image data according to the emotion parameter before generating the target live broadcast video data when the current background parameter comprises the emotion parameter.
11. The apparatus according to any one of claims 7 to 10,
the original live video data includes: an original video live broadcast interface;
the generation module comprises:
the second obtaining submodule is used for obtaining preset display parameters of the target background image data;
the first display sub-module is configured to display the target background image data in the original video live broadcast interface according to the preset display parameters, where the preset display parameters include:
at least one of: the preset transparency of the target background image data, the size of the target background image data, the display frame of the target background image data, and the position of the target background image data in the original video live broadcast interface.
12. The apparatus according to any one of claims 7 to 10,
the original live video data includes: an original video live broadcast interface;
the generation module comprises:
the third acquisition submodule is used for acquiring the user information of the anchor terminal;
and the second display sub-module is used for displaying the user information and the target background image data in the original video live broadcast interface.
13. A video live broadcast apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
in the video live broadcast process, acquiring target background image data;
generating target live broadcast video data according to the target background image data and the original live broadcast video data;
sending the target live video data to a server so that the server forwards the target live video data to each viewer;
in the process of live video, acquiring target background image data, including:
acquiring a current background parameter in a video live broadcast process;
determining the target background image data matched with the current background parameter according to the current background parameter;
the current background parameters include at least one of: the current position of the anchor terminal, the weather condition corresponding to the current position, and an emotion parameter of the anchor terminal's user.
14. A non-transitory computer readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201611009523.8A 2016-11-14 2016-11-14 Video live broadcasting method and device Active CN106791893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611009523.8A CN106791893B (en) 2016-11-14 2016-11-14 Video live broadcasting method and device


Publications (2)

Publication Number Publication Date
CN106791893A CN106791893A (en) 2017-05-31
CN106791893B true CN106791893B (en) 2020-09-11

Family

ID=58968526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611009523.8A Active CN106791893B (en) 2016-11-14 2016-11-14 Video live broadcasting method and device

Country Status (1)

Country Link
CN (1) CN106791893B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306468A (en) * 2015-10-30 2016-02-03 广州华多网络科技有限公司 Method for real-time sharing of synthetic video data and anchor client side
CN105791958A (en) * 2016-04-22 2016-07-20 北京小米移动软件有限公司 Method and device for live broadcasting game

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
KR20150099154A (en) * 2014-02-21 2015-08-31 삼성전자주식회사 User Interface for Layers Displayed on Device
CN105208458B (en) * 2015-09-24 2018-10-02 广州酷狗计算机科技有限公司 Virtual screen methods of exhibiting and device
CN105956059A (en) * 2016-04-27 2016-09-21 乐视控股(北京)有限公司 Emotion recognition-based information recommendation method and apparatus
CN105933738B (en) * 2016-06-27 2019-01-04 徐文波 Net cast methods, devices and systems

Also Published As

Publication number Publication date
CN106791893A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106791893B (en) Video live broadcasting method and device
CN111970533B (en) Interaction method and device for live broadcast room and electronic equipment
EP3125530B1 (en) Video recording method and device
WO2020057327A1 (en) Information list display method and apparatus, and storage medium
CN106506448B (en) Live broadcast display method and device and terminal
CN106941624B (en) Processing method and device for network video trial viewing
CN109168062B (en) Video playing display method and device, terminal equipment and storage medium
CN109451341B (en) Video playing method, video playing device, electronic equipment and storage medium
CN110677734B (en) Video synthesis method and device, electronic equipment and storage medium
CN107743244B (en) Video live broadcasting method and device
CN109151565B (en) Method and device for playing voice, electronic equipment and storage medium
CN109039872B (en) Real-time voice information interaction method and device, electronic equipment and storage medium
CN109660873B (en) Video-based interaction method, interaction device and computer-readable storage medium
CN107566892B (en) Video file processing method and device and computer readable storage medium
CN112788354A (en) Live broadcast interaction method and device, electronic equipment, storage medium and program product
CN110719530A (en) Video playing method and device, electronic equipment and storage medium
CN106254939B (en) Information prompting method and device
CN112291631A (en) Information acquisition method, device, terminal and storage medium
CN111866531A (en) Live video processing method and device, electronic equipment and storage medium
CN108174269B (en) Visual audio playing method and device
CN112188230A (en) Virtual resource processing method and device, terminal equipment and server
CN107566878B (en) Method and device for displaying pictures in live broadcast
CN110620956A (en) Live broadcast virtual resource notification method and device, electronic equipment and storage medium
CN109756783B (en) Poster generation method and device
CN107247794B (en) Topic guiding method in live broadcast, live broadcast device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant