CN113873272B - Method, device and storage medium for controlling background image of live video - Google Patents


Info

Publication number
CN113873272B
CN113873272B (Application CN202111056258.XA)
Authority
CN
China
Prior art keywords
background
video
video image
information source
live video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111056258.XA
Other languages
Chinese (zh)
Other versions
CN113873272A (en)
Inventor
张焱
李娟
邸文华
林铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dushi Technology Co ltd
Original Assignee
Beijing Dushi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dushi Technology Co ltd
Priority claimed from application CN202111056258.XA
Publication of application CN113873272A
Application granted
Publication of granted patent CN113873272B

Classifications

    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE › H04N PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof › H04N21/21 Server components or server architectures › H04N21/218 Source of audio or video content, e.g. local disk arrays › H04N21/2187 Live feed
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof › H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware › H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering › H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/40 › H04N21/43 › H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs › H04N21/4402 Reformatting operations of video signals for household redistribution, storage or real-time display › H04N21/440227 Reformatting by decomposing into layers, e.g. base layer and one or more enhancement layers

Abstract

The application discloses a method, an apparatus, and a storage medium for controlling the background of a live video. The method includes: determining, according to a user operation of setting the background of the live video, a background source to serve as the background of the live video; generating a background video image of the live video from the background source; and fusing the background video image with the video image of the live video to generate a corresponding target live video.

Description

Method, device and storage medium for controlling background image of live video
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a storage medium for controlling a background image of a live video.
Background
Network live streaming, an entertainment format in which real-time video is played over the Internet, is a representative form of new media and, with the accelerating development of network construction and deployment, is being accepted by more and more people. Because it offers high real-time performance, strong interactivity, and similar characteristics, network live streaming has also become an important way for major online audio and video platforms to expand their influence and attract users.
Network live streaming requires the broadcaster to set up audio and video capture equipment at the live site, upload the signals to a server over the network, and then publish the address of the live room through the streaming platform to attract users to come and watch. Consequently, a live broadcast is constrained by its physical environment: the background of the broadcast cannot be flexibly changed, that is, the background image in the live picture cannot be replaced, and it is difficult to achieve the effect desired by the host or the viewers.
For the technical problem in the prior art that the background image in a live video cannot be flexibly set, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the present disclosure provide a method, an apparatus, and a storage medium for controlling the background image of a live video, which at least solve the technical problem in the prior art that the background image in a live video cannot be flexibly set.
According to one aspect of the embodiments of the present disclosure, there is provided a method of controlling the background of a live video, including: determining, according to a user operation of setting the background of the live video, a background source to serve as the background of the live video; generating a background video image of the live video from the background source; and fusing the background video image with the video image of the live video to generate a corresponding target live video.
According to another aspect of the embodiments of the present disclosure, there is also provided a storage medium including a stored program, wherein, when the program runs, a processor performs any one of the methods described above.
According to another aspect of the embodiments of the present disclosure, there is also provided an apparatus for controlling the background image of a live video, including: a background source determining module, configured to determine a background source to serve as the background of the live video according to a user operation of setting the background of the live video; a background video image generating module, configured to generate a background video image of the live video from the background source; and a fusing module, configured to fuse the background video image with the video image of the live video to generate a corresponding target live video.
According to another aspect of the embodiments of the present disclosure, there is also provided an apparatus for controlling the background image of a live video, including: a processor; and a memory, coupled to the processor, for providing the processor with instructions for the following steps: determining, according to a user operation of setting the background of the live video, a background source to serve as the background of the live video; generating a background video image of the live video from the background source; and fusing the background video image with the video image of the live video to generate a corresponding target live video.
In the embodiments of the present disclosure, during a live broadcast, a user may set the background image of the live video according to actual requirements through an operation interface displayed on a terminal device communicatively connected to a live host. In response to the user's operation of setting the background image of the live video, the live host first obtains the background source the user has designated as the background image of the live video, then generates a background video image of the live video from that background source, and finally fuses the background video image with the video image of the live video to generate the corresponding target live video. In this way, the technical effect that the user can flexibly set the background image in the live video is achieved, which in turn solves the technical problem in the prior art that the background image in a live video cannot be flexibly set.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the present disclosure, and together with the description serve to explain the present disclosure. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a computing device for implementing a method according to embodiment 1 of the present disclosure;
Fig. 2A is a schematic diagram of a live control system according to embodiment 1 of the present disclosure;
fig. 2B is a schematic diagram of an operation interface of a control panel for live control according to embodiment 1 of the present disclosure;
FIG. 2C is a schematic diagram of a source control window in an operator interface of a control panel according to embodiment 1 of the present disclosure;
fig. 2D is a schematic diagram of a video fusion control panel in a control panel for live control according to embodiment 1 of the present disclosure;
fig. 3 is a flow diagram of a method of controlling the background of live video according to a first aspect of embodiment 1 of the present disclosure;
fig. 4A is a schematic diagram illustrating an arrangement of a background layer, a source window video layer, and a foreground layer of a live video according to embodiment 1 of the present disclosure;
fig. 4B is a schematic diagram illustrating an arrangement of a background layer, a source window video layer, and a foreground layer of a live video according to embodiment 1 of the present disclosure, where a video image displayed by a source display window is generated by fusing a matting layer and a source background layer;
fig. 5A is a schematic diagram of a live video with no background according to embodiment 1 of the present disclosure;
fig. 5B is a schematic diagram of a live video after setting an underlying background image according to embodiment 1 of the present disclosure;
Fig. 5C is a schematic diagram of a live video with a source background layer set in the matting process according to embodiment 1 of the present disclosure;
fig. 5D is a schematic diagram of a live video with both an underlying background layer and a source background layer (set in the matting process) according to embodiment 1 of the present disclosure;
fig. 6 is a schematic diagram of a "picture" tab of a video fusion control panel in a control panel for live control according to embodiment 1 of the present disclosure;
fig. 7 is a schematic diagram of an apparatus for controlling the background of live video according to embodiment 2 of the present disclosure; and
fig. 8 is a schematic diagram of an apparatus for controlling the background of live video according to embodiment 3 of the present disclosure.
Detailed Description
In order to better understand the technical solutions of the present disclosure, the following clearly and completely describes the technical solutions of the embodiments of the present disclosure with reference to the accompanying drawings. It is apparent that the described embodiments are merely some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art based on the embodiments of the present disclosure without inventive effort shall fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to the present embodiment, there is provided an embodiment of a method of controlling the background image of a live video. It should be noted that the steps shown in the flowchart of the drawings may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order.
The method embodiments provided by this embodiment may be performed on a mobile terminal, a computer terminal, a server, or a similar computing device. Fig. 1 shows a block diagram of the hardware architecture of a computing device for implementing the method of controlling background images of live video. As shown in fig. 1, the computing device may include one or more processors (which may include, but are not limited to, processing means such as a microcontroller (MCU) or a field-programmable gate array (FPGA)), memory for storing data, and transmission means for communication functions. In addition, it may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computing device may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors and/or other data processing circuits described above may be referred to herein generally as "data processing circuits". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuitry may be a single stand-alone processing module, or may be incorporated in whole or in part into any of the other elements in the computing device. As referred to in the embodiments of the present disclosure, the data processing circuitry serves as a kind of processor control (e.g., selection of a variable-resistance termination path to interface with).
The memory may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the method of controlling background images of live video in the embodiments of the present disclosure. The processor executes the software programs and modules stored in the memory, thereby performing various functional applications and data processing, that is, implementing the above-described method of controlling background images of live video. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, which may be connected to the computing device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of the computing device. In one example, the transmission means comprises a network adapter (Network Interface Controller, NIC) connectable to other network devices via the base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computing device.
It should be noted herein that in some alternative embodiments, the computing device shown in FIG. 1 described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a particular specific example and is intended to illustrate the types of components that may be present in the computing devices described above.
Fig. 2A is a schematic diagram of a live control system according to the present embodiment. Referring to fig. 2A, the system includes a terminal device 100 of a user 300 and a live host 200. The live host 200 is provided with a plurality of video input interfaces so that captured video can be received from a plurality of cameras. Moreover, an AP hotspot device is provided in the live host 200, so that the terminal device 100 of the user 300 (such as a mobile phone, tablet, or notebook computer) can interact with the AP hotspot of the live host 200 over a WLAN, thereby controlling the live host 200 and receiving video data from it. In addition, the live host 200 is provided with a video output interface for outputting a video stream to a monitor screen, so that the broadcaster can view information such as the composited video and the prompter lines on the monitor screen.
Thus, the user 300 may establish communication with the live host 200 through the terminal device 100 (i.e., a device such as a mobile phone, tablet, or notebook computer); for example, the user may find the address of the live host 200 by searching the network and then enter a password to log in to the live host 200 (similar to joining a Wi-Fi network). Once communication with the live host 200 is established, the terminal device 100 of the user 300 enters a control panel as shown in fig. 2B, and the live host 200 can be controlled by performing the corresponding operations on that control panel.
Referring to fig. 2B, the operation interface 400 displayed on the control panel of the terminal device 100 includes the following four parts: a live push video display window 410, a source monitor panel 420, a video fusion control panel 430, and a volume control panel 440. The live push video display window 410 is used to display the live video pushed to the live network (i.e., the target live video); the source monitor panel 420 includes a plurality of source monitor windows 421 to 428, which respectively display the video images of the sources associated with the live host 200; the video fusion control panel 430 is used to control the fusion of the live video according to different functional requirements, for example, but not limited to, the arrangement of source display windows in the live video, matting processing, setting of media sources, and setting of subtitles and the prompter; and the volume control panel 440 is used to control the volume of each source, including the microphone, the HDMI input video, the background video played by the live host, and so on.
Further, referring to fig. 2B, the source monitor panel 420 includes a plurality of source monitor windows 421 to 428, which display video images of three different types of sources. The source monitor windows 421 to 424 are camera-position source monitor windows, used to display the video images captured by the camera at each camera position in the live-broadcast room. In this embodiment, four camera positions are set up in the live-broadcast room, so the windows 421 to 424 respectively display the video images they capture: window 421 displays the video image captured at camera position 1, window 422 that of camera position 2, window 423 that of camera position 3, and window 424 that of camera position 4. The source monitor windows 427 and 428 are media source monitor windows, used to display video images from media sources: window 427 displays a preset video file or the video image of a network video source, and window 428 displays the video image of a preset PPT. The source monitor windows 425 and 426 are interface source monitor windows, used to display video images received through the interfaces of the live host: window 425 displays video images received through the HDMI interface, and window 426 displays video images received through the USB interface. The user 300 may select the source associated with a source display window in the live video by clicking a different source monitor window (the details will be described below in conjunction with the "template" tab).
Fig. 2C illustrates a schematic diagram of the source monitor window 421 as an example. Referring to fig. 2C, the source monitor window 421 is provided with a zoom button, a display adjustment button, and a background setting button. The zoom button is used to zoom the video image of the source monitor window 421. When clicked, the display adjustment button pops up a panel through which the video image of the source monitor window 421 can be adjusted; for example, the brightness, contrast, white balance, and other attributes of the video image can be adjusted through this panel. The background setting button is used to set the video image of the source monitor window 421 as the background in the live video 500, and will be described in detail below. Although fig. 2C takes the source monitor window 421 as an example, the above description also applies to the other source monitor windows 422 to 428 and will not be repeated here.
In addition, fig. 2D shows a schematic diagram of the video fusion control panel 430. Referring to fig. 2D, the video fusion control panel 430 includes tabs such as "template", "picture", "video", "PPT", "subtitle", "prompter", and "matting". The "template" tab is used to set the source display windows in the live video; the "picture" tab is used to set pictures fused into the live video; the "video" tab is used to set the video media sources fused into the live video, and the source monitor window 427 displays the video media source set by the user 300 in this tab; the "PPT" tab is used to set the PPT media source fused into the live video, and the source monitor window 428 displays the PPT media source set by the user 300 in this tab; the "subtitle" tab is used to set subtitles fused into the live video; the "prompter" tab is used to set a prompter fused into the auxiliary video corresponding to the live video; and the "matting" tab is used to configure matting-related processing.
In addition, the user 300 can control the live host 200 and implement the director functions in the following ways: 1) through a keyboard; 2) through a WeChat applet; 3) through an app on a mobile phone or tablet; and 4) through an H5 page on a desktop or notebook computer. It should be noted that the hardware structure described above may be applied to both the terminal device 100 and the live host 200 in the system.
In the above-described operation environment, according to the first aspect of the present embodiment, there is provided a method of controlling a background image of a live video, which is implemented by the live host 200 shown in fig. 2A. Fig. 3 shows a schematic flow chart of the method, and referring to fig. 3, the method includes:
s302: according to the operation of setting the background of the live video by a user, determining a background information source serving as the background of the live video;
s304: generating a background video image of the live video according to the video image of the background information source; and
s306: and fusing the background video image with the video image of the live video to generate a corresponding target live video.
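Steps S302 to S306 can be sketched in miniature as follows (an illustrative Python sketch, not part of the patent; the function names, the solid-colour "sources", and the use of (R, G, B) tuples with black as the transparent key are all assumptions made for the example):

```python
# Hypothetical sketch of the three-step flow S302 -> S304 -> S306.
# A frame is a list of rows of (R, G, B) tuples.

def determine_background_source(user_selection, available_sources):
    """S302: resolve the user's choice to a concrete background source."""
    if user_selection not in available_sources:
        raise KeyError(f"unknown source: {user_selection}")
    return available_sources[user_selection]

def generate_background_image(source, size):
    """S304: render one background frame at the live-video resolution.
    Here the 'source' is just a solid colour; a real source would be a
    decoded video/PPT frame scaled to `size`."""
    w, h = size
    return [[source for _ in range(w)] for _ in range(h)]

def fuse(background, foreground, transparent=(0, 0, 0)):
    """S306: overlay the live-video frame on the background; pixels equal
    to `transparent` in the foreground let the background show through."""
    return [
        [bg if fg == transparent else fg
         for bg, fg in zip(bg_row, fg_row)]
        for bg_row, fg_row in zip(background, foreground)
    ]

sources = {"ppt": (255, 255, 255), "hdmi": (0, 0, 255)}   # illustrative
bg_src = determine_background_source("ppt", sources)
background = generate_background_image(bg_src, (2, 2))
live = [[(0, 0, 0), (200, 50, 50)], [(0, 0, 0), (0, 0, 0)]]
target = fuse(background, live)   # target live video frame
```

The key design point mirrored from the method is that the background is generated from the chosen source independently of the live frame, and only the final fusing step combines the two.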
Referring to fig. 2A, a user (e.g., user 300) may establish communication with the live host 200 through the terminal device 100 (e.g., a mobile phone, tablet, or notebook device) before the program starts. Once communication with the live host 200 is established, the terminal device 100 of the user 300 displays the operation interface 400 shown in fig. 2B, so that the user 300 can set the background image in the live video by performing the corresponding operations on the operation interface 400.
Specifically, fig. 4A shows a schematic diagram of the hierarchy of a live video 500 generated according to the method described in this embodiment. Referring to fig. 4A, the live video 500 may include an underlying background layer 510, a source window video layer 520, and a foreground layer 530. The underlying background layer 510 is the video image layer at the very bottom of the live video 500; the source window video layer 520 sits above the underlying background layer 510 and is used to display the source display windows 521 and 522, each of which displays the video image of its associated source; and the foreground layer 530 sits above the source window video layer 520 and is used to display video images that float over it. The live host 200 fuses this hierarchy to generate the live video.
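The bottom-up layering of Fig. 4A can be illustrated as follows (a hypothetical Python sketch; the sparse pixel-dictionary representation and all names are assumptions, with later layers simply painting over earlier ones):

```python
# Hypothetical sketch: composing the three layers of Fig. 4A bottom-up.
# Each layer is a dict mapping (x, y) -> (R, G, B); absent keys are transparent.

def compose(canvas_size, *layers):
    """Paint layers in order; later layers cover earlier ones."""
    w, h = canvas_size
    frame = {(x, y): (0, 0, 0) for x in range(w) for y in range(h)}
    for layer in layers:
        for pos, pixel in layer.items():
            if pos in frame:          # ignore pixels outside the canvas
                frame[pos] = pixel
    return frame

# Illustrative 4x3 canvas: a full-coverage underlying background layer,
# a small source display window, and one floating foreground pixel.
underlying_background = {(x, y): (10, 10, 10) for x in range(4) for y in range(3)}
source_window = {(x, 1): (0, 200, 0) for x in range(1, 3)}   # e.g. window 521
foreground = {(0, 0): (255, 255, 255)}                        # floating overlay

frame = compose((4, 3), underlying_background, source_window, foreground)
```

The ordering of the arguments to `compose` encodes exactly the stacking of layers 510, 520, and 530 described above.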
In addition, referring to fig. 4B, according to the method of this embodiment, not only may the live video 500 be generated by fusing the hierarchy of the underlying background layer 510, the source window video layer 520, and the foreground layer 530, but the video image displayed in a source display window of the source window video layer 520 may itself be generated by fusing a hierarchy of a source background layer and a matting layer. For example, in fig. 4B, the video image displayed by the source display window 521 may be generated by hierarchically fusing the source background layer 521b and the matting layer 521a.
In addition, although the source window video layer 520 is shown in figs. 4A and 4B as including two source display windows 521 and 522, the number and layout of the source display windows in the source window video layer 520 are not limited to this. The user 300 may set the number and layout of the source display windows through the "template" tab of the video fusion control panel 430, which is not described in detail here.
In addition, by clicking on the operation interface 400, the user 300 can associate the sources corresponding to the source monitor windows 421 to 428 with the source display windows 521 and 522, that is, select from the source monitor windows 421 to 428 the associated source for the source display window 521 and the associated source for the source display window 522, so that the source display windows 521 and 522 display the video images of their associated sources. Referring to fig. 5A, the current associated source of the source display window 521 is camera position 1, associated with the source monitor window 421, so the source display window 521 displays the video image captured at camera position 1. The current associated source of the source display window 522 is camera position 4, associated with the source monitor window 424, so the source display window 522 displays the video image captured at camera position 4. Of course, the source display windows 521 and 522 may also be associated with the other sources corresponding to the source monitor windows 421 to 428, which is not described in detail here.
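The association between source monitor windows and source display windows described above can be sketched as follows (an illustrative Python sketch; the `SourceRouter` class and its method names are hypothetical, not from the patent):

```python
# Hypothetical sketch: binding a source monitor window's source to a
# source display window, as the user does by clicking on interface 400.

class SourceRouter:
    def __init__(self, sources):
        self.sources = sources   # monitor-window id -> source name
        self.bindings = {}       # display-window id -> monitor-window id

    def associate(self, display_window, monitor_window):
        """Bind a display window to the source of a monitor window."""
        if monitor_window not in self.sources:
            raise KeyError(f"unknown monitor window: {monitor_window}")
        self.bindings[display_window] = monitor_window

    def frame_source(self, display_window):
        """Which source currently feeds this display window."""
        return self.sources[self.bindings[display_window]]

# Illustrative window/source ids echoing the figures.
router = SourceRouter({421: "camera 1", 424: "camera 4", 425: "HDMI in"})
router.associate(521, 421)   # window 521 shows camera position 1
router.associate(522, 424)   # window 522 shows camera position 4
```

Re-associating a display window is just another `associate` call, which matches the click-to-switch behaviour described in the text.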
In this way, the user 300 can designate, through the operation interface 400 shown in fig. 2B, the source to serve as the underlying background layer 510 and/or the source background layer 521b shown in figs. 4A and 4B (i.e., the background source). The live host 200 thus determines the background source serving as the background of the live video according to the operation of the user 300 (S302).
Then, the live host 200 generates a background video image of the live video from the background source (S304). Finally, the live host 200 fuses the background video image with the video image of the live video to generate the corresponding target live video (S306).
For example, fig. 5A shows the live video before the user 300 has set any background image: the background image of the underlying background layer of the live video is blank, and the background of the video image displayed by the source display window 521 associated with camera position 1 is a green screen (i.e., the backdrop in the field of view of camera position 1 is a green cloth).
At this point, the user may choose, for example but not limited to, the PPT media source corresponding to the media source monitoring window 428 as the bottom background image of the bottom background layer of the live video. After the user 300 performs the corresponding setting operation, the live host 200 acquires the corresponding PPT media source and generates from it the background video image serving as the bottom background layer of the live video. Finally, the live host 200 fuses the generated bottom background image with the video image of the live video, generating the live video 500 (i.e., the target live video) shown in fig. 5B.
In addition, the user 300 may further perform a matting operation on the video image captured by machine position 1 corresponding to the source monitoring window 421 through the operation interface 400 shown in fig. 2B, so as to segment the image area of the anchor person as the matting layer 521a of the video image displayed by the source display window 521. The user 300 may set the HDMI interface source corresponding to the interface source monitoring window 425 as the source background layer 521b serving as the background of the matting process. The live host 200 thus determines the HDMI interface source as the background source for generating the source background layer according to the user's setting operation, and generates the background video image serving as the source background layer 521b from the video image of the HDMI interface source for display in the source display window 521. Finally, the live host 200 fuses the generated background video image with the video image of the live video, generating the target live video shown in fig. 5C.
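The matting-then-background flow just described is essentially chroma keying followed by compositing. A minimal NumPy sketch, assuming a pure-green curtain and a simple channel-dominance test (the document does not specify the live host's actual keying algorithm):

```python
import numpy as np

def chroma_key_mask(frame: np.ndarray) -> np.ndarray:
    """True where a pixel belongs to the green curtain (simple dominance test)."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (g > 100) & (g > r + 40) & (g > b + 40)

def composite_over_source_background(frame: np.ndarray,
                                     background: np.ndarray) -> np.ndarray:
    """Replace curtain pixels with the source-background video image
    (e.g. the frame received over the HDMI interface)."""
    mask = chroma_key_mask(frame)
    out = frame.copy()
    out[mask] = background[mask]
    return out
```

Production keyers additionally soften mask edges and suppress green spill, which this sketch omits.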
In addition, the user 300 may set both the underlying background of the underlying background layer 510 and the source background of the source window video layer 520 through the operation interface shown in fig. 2B. For example, the user 300 may set the PPT media source as the underlying background layer of the live video, and set the video image received from the HDMI interface as the source background of the video image displayed by the source display window 521 associated with machine position 1. Accordingly, the live host 200 generates the corresponding background video image and source background video image according to the operation of the user 300, and fuses them with the video image of the live video to generate the target live video shown in fig. 5D.
The operation of setting the background by the user 300 through the operation interface 400 will be described in detail below.
As described in the foregoing background section, network live broadcast requires the anchor to set up audio and video signal acquisition devices on the live broadcast site, upload the signals to a server through the network, and then publish the address of the live broadcast room through the live broadcast platform to attract users to the live broadcast room for viewing. In this process, live broadcast is constrained by the live broadcast environment: the live broadcast background cannot be flexibly changed, i.e., the background image in the live broadcast picture cannot be replaced, so the effect desired by the anchor or the viewers is difficult to achieve.
In view of this, during the live broadcast process, the user 300 of the present embodiment may set the background image of the live video according to actual requirements in the operation interface displayed by the terminal device 100 communicatively connected to the live host 200. Accordingly, in response to the user's operation of setting the background image of the live video, the live host 200 first obtains the background information source determined by the user as the background image of the live video, then generates the background video image of the live video according to the background information source, and finally fuses the background video image with the video image of the live video to generate the corresponding target live video. In this way, the technical effect that a user can flexibly set the background image in the live video is achieved, which further solves the technical problem in the prior art that the background image in the live video cannot be flexibly set.
Optionally, according to an operation of setting a background of the live video by a user, the operation of determining a background source serving as the background of the live video includes: according to the operation of setting the bottom background layer of the live video by a user, determining a first background information source serving as the bottom background layer of the live video, wherein the bottom background layer is a video layer positioned at the bottommost background of the live video; the operation of generating the background video image of the live video according to the video image of the background information source comprises the following steps: generating a video image serving as a bottom background layer according to the first background information source; and fusing the background video image with the video image of the live video to generate a corresponding target live video, comprising: and fusing the video image of the background layer at the bottom layer with the video image of the live video to generate the target live video.
Specifically, during live control, the user 300 may set a picture as the underlying background layer in, for example, the "picture" tab of the video fusion control panel 430 of the operation interface 400 displayed by the terminal device 100. Fig. 6 shows a schematic diagram of the "picture" tab. The user 300 may load pictures through the "picture" tab, which provides a corresponding foreground/background setting button for each picture loaded by the user 300. The foreground/background setting button switches the corresponding picture among four states: "empty state", "foreground", "background", and "source background". Thus, when the user 300 sets the state of a certain picture (e.g., picture 1) to "background", the video image of the underlying background layer 510 of the live video 500 may be generated from that picture.
In addition, referring to fig. 2C, background setting buttons are also provided on the source monitoring windows 421 to 428, so that the video image of the corresponding source can be switched between the "empty state" and the "bottom background" state by clicking the background setting buttons. Thus, the user 300 may set the video image of the source corresponding to a certain source monitoring window as the video image of the bottom background layer by clicking the "background setting button". For example, the user 300 may set the status of the video image of the PPT media source to "background" by clicking a "background set button" on the media source monitor window 428 so that the video image of the background layer 510 may be generated from the video image of the source.
Preferably, in this embodiment, the source used to generate the underlying background layer 510 can only be specified exclusively, i.e., only one source can serve as the underlying background at a time. For example, when the user 300 sets picture 1 as the underlying background of the live video 500 in the "picture" tab, the video image of a source corresponding to one of the source monitoring windows 421 to 428 cannot also be set as the underlying background of the live video 500 by clicking its "background setting button". Conversely, when the user 300 sets the video image of the PPT media source as the underlying background of the live video 500 through the source monitoring window 428, a picture cannot be set as the underlying background of the live video 500 through the "picture" tab, nor can the sources of the other source monitoring windows be set as the underlying background of the live video 500 through their "background setting buttons".
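The exclusive selection described above can be modeled as a single-slot setting: choosing a new bottom-background source implicitly clears the previous one, and clicking the same source again returns to the "empty state". A sketch with hypothetical names:

```python
class BackgroundSelector:
    """Holds at most one bottom-background source at a time (exclusive choice)."""

    def __init__(self):
        self.bottom_background = None  # None corresponds to the "empty state"

    def toggle(self, source_id: str) -> None:
        """Clicking a window's background setting button toggles that source;
        selecting a different source replaces whatever was set before,
        which enforces the exclusivity described in the text."""
        if self.bottom_background == source_id:
            self.bottom_background = None       # back to the empty state
        else:
            self.bottom_background = source_id  # replaces picture or other source
```

Under this model the "picture" tab and the source monitoring windows all write to the same slot, so they can never both hold the underlying background.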
Thus, when the user 300 sets the picture loaded in the "picture" tag or the source corresponding to the source monitoring window (i.e., the first background source) as the bottom background of the live video, according to the operation of the user 300, the live host 200 determines that the source serving as the bottom background layer is the source corresponding to the picture or the source monitoring window. The live host 200 then generates a video image as an underlying background layer from the determined source and fuses the video image of the underlying background layer with the video image of the live video, generating a live video 500 (i.e., a target live video), as shown in fig. 5B and 5D.
In this way, the user 300 can replace the video image of the bottom background layer in the live video process, so that the audience can view more visual information in the live video picture, and the viewing experience of the audience can be improved.
Optionally, generating the video image serving as the underlying background layer according to the first background information source determined by the user includes: generating a video image serving as a bottom background layer according to the machine position video image serving as the bottom background layer determined by the user; generating a video image serving as a bottom background layer according to the local video image serving as the bottom background layer determined by the user; generating a video image serving as a bottom background layer according to the network video image resource serving as the bottom background layer determined by the user; generating a video image serving as a bottom background layer according to the video image received by the data interface determined by the user; generating a video image serving as a bottom background layer according to the picture serving as the bottom background layer determined by the user; or generating the video image serving as the bottom background layer according to the PPT serving as the bottom background layer determined by the user.
With reference to the above, the user 300 may set the video media source or PPT media source associated with the media source monitor window 427 or 428 to the underlying background layer via a background set button in the media source monitor window 427 or 428. Further, as described above, the user 300 may set the designated picture as the underlying background through the "picture" tab of the video fusion control panel 430. Thus, the live host 200 may generate a video image as the underlying background layer 510 from a PPT, video, or picture media source specified by the user 300. Further, although not shown, the user 300 may load a local video image or a network video image via a "video" tab of the video fusion control panel 430 to associate the loaded video media source with the media source monitoring window 427. Thus, the user 300 may set the local video image or the network video image as the underlying background layer of the live video 500 through the operation interface 400.
In addition, the user 300 may set the video image collected by the designated machine as the bottom background layer through the machine source monitoring windows 421 to 424, so that the live host 200 may generate the video image as the bottom background layer 510 according to the video image collected by the machine designated by the user 300. The user 300 may also set the video image received by the designated interface to the underlying background layer through the interface source monitor window 425 or 426, so that the live host 200 may generate a video image as the underlying background layer 510 from the video image received by the interface designated by the user 300.
Therefore, according to the technical solution of the present embodiment, the user 300 can set the bottom background layer 510 of the live video according to the needs of the user, so that the content displayed in the background of the live video can be flexibly set.
Optionally, according to an operation of setting a background of the live video by a user, determining a background source serving as the background of the live video includes: according to the operation that a user performs the matting processing on the video image of the machine position information source and sets the background of the matting processing, a second background information source serving as the information source background of the matting processing is determined; based on the background information source, generating a background video image of the live video, including: generating a video image serving as a signal source background of the matting processing according to the second background signal source; and fusing the background video image with the video image of the live video to generate a corresponding target live video, comprising: and fusing the video image of the information source background with the video image of the target object area processed by the matting, and generating a video image displayed by an information source display window in the target live video, wherein the video image of the target object area is the video image of the target object extracted from the video image of the machine position information source through the matting operation, and the information source display window is associated with the machine position information source.
Specifically, referring to fig. 4B, according to the method of the present embodiment, not only live video 500 may be generated by fusing the hierarchical structures of the underlying background layer 510, the source window video layer 520, and the foreground layer 530, but also video images displayed on the source display window in the source window video layer 520 may be generated by fusing the hierarchical structures of the source background layer and the matting layer. For example, in fig. 4B, the video image displayed by the source display window 521 may be generated by fusing the source background layer 521B and the matting layer 521a in a hierarchical structure. In this way, the image of the anchor person can be fused with the required background through the matting process. Thereby realizing richer and more flexible content display.
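The two-level hierarchy just described (the live frame fused from the bottom background layer 510, the source window video layer 520, and the foreground layer 530; a source window itself fused from a source background layer and a matting layer) amounts to back-to-front compositing. A minimal sketch, assuming each layer carries a boolean coverage mask:

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front fusion: later layers (e.g. the foreground layer) cover
    earlier ones (e.g. the bottom background layer) wherever their mask is set.
    Each entry is an (image, mask) pair with image HxWx3 and mask HxW bool."""
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for image, mask in layers:  # bottom background -> source windows -> foreground
        out[mask] = image[mask]
    return out
```

Because the operation is the same at both levels, a source display window can first be composited from its source background layer and matting layer, and the result then used as one entry in the full-frame stack.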
Fig. 7 shows a schematic diagram of the "matting" tab. When performing the matting process, the user 300 may select the source to be matted (i.e., the target source) through the "machine position selection" control in the "matting" tab. For example, referring to fig. 5A, in this embodiment the background wall in the field of view of machine position 1 is a green curtain and can therefore be used for the matting process. Thus, the user 300 may perform the matting process on the video image captured by machine position 1 (i.e., the video image displayed by the machine position source monitoring window 421) by selecting "machine position 1" as the target source in the "machine position selection" control. In addition, the user 300 may select the source serving as the background of the matting process (i.e., the second background source) from the sources corresponding to the source monitoring windows 421 to 428 through the "fixed background machine position selection" control. For example, referring to fig. 5C and 5D, in this embodiment the user 300 selects the HDMI interface associated with the interface source monitoring window 425 as the background source (i.e., the second background source) of the matting process.
In addition, as described above with reference to fig. 6, the user 300 may also switch the state of the designated picture to "source background" in the "picture" tab, thereby setting the picture as a background source (i.e., a second background source) for the matting process.
Thus, after the user 300 clicks the "matting on" control, in response to the instruction of the user 300 to perform the matting process on machine position 1 (i.e., the target source), the live host 200 performs the matting process on the video image captured by machine position 1, thereby segmenting the image of the anchor person (i.e., the target object area), and generates the video image corresponding to the matting layer 521a from the image of the anchor person. In addition, in response to the instruction of the user 300 to determine the HDMI interface associated with the interface source monitoring window 425 (i.e., the second background source) as the background source of the matting process, the live host 200 generates the video image serving as the source background layer 521b from the video image received by the HDMI interface.
Since the user 300 has previously associated machine 1 with the source display window 521, the live host 200 fuses the video images of the matting layer 521a and the source background layer 521B to generate a video image displayed by the source display window 521, as shown with reference to fig. 4B, 5C, and 5D.
In addition, when the video image of the source background layer needs to be replaced, the user 300 can switch the source background in the "matting" label through the "fixed background machine position selection" control. For example, the user 300 may select the video image captured by the machine 2 associated with the machine source monitor window 422 as the video image of the source background layer through the "fixed background machine select" control. Thus, in response to an instruction of the user 300 to switch the source background layer, the live host 200 generates a video image as the source background layer 521b according to the video image acquired by the machine position 2, and fuses with the video image of the matting layer 521a, and displays the video image in the source display window 521.
Thus, the live host 200 can switch the video image of the source background layer according to the operation of the user 300. In this way, the user 300 can replace the machine position background image of the machine position video of the designated machine position in the live broadcast process, so that the audience terminal can watch the live broadcast content of the anchor in a plurality of different scenes, and the watching experience of the audience terminal is improved.
Optionally, the operation of generating the video image serving as the machine position background image of the source window video layer according to the second background information source determined by the user includes: generating a machine position background image serving as an information source window video layer according to the local video image serving as the machine position background image determined by the user; generating a machine position background image serving as an information source window video layer according to the network video image resource serving as the machine position background image determined by the user; generating a machine position background image serving as an information source window video layer according to the video image received by the data interface determined by the user; generating a machine position background image serving as an information source window video layer according to the picture serving as the machine position background image determined by the user; or generating the machine position background image serving as the information source window video layer according to the PPT serving as the machine position background image determined by the user.
Referring to the above, the user 300 may select the source serving as the source background layer from the sources associated with the source monitoring windows 421 to 428 through the "fixed background machine position selection" control in the "matting" tab of the video fusion control panel 430. Thus, the user 300 may set the PPT media source associated with the source monitoring window 428, or the video media source associated with the source monitoring window 427, as the source background. The user 300 may also set a designated picture as the source background through the "picture" tab of the video fusion control panel 430. According to the present embodiment, therefore, the live host 200 may generate the video image serving as the source background layer 521b from the PPT, video, or picture media source specified by the user 300. Further, although not shown, the user 300 may load a local video image or a network video image through the "video" tab of the video fusion control panel 430 to associate the loaded video media source with the media source monitoring window 427. Thus, the user 300 may set the local video image or the network video image as the source background of the matting process through the operation interface 400.
In addition, the user 300 may set the video image captured at a designated machine position (e.g., a machine position associated with the machine position source monitoring windows 421 to 424) as the source background layer through the "fixed background machine position selection" control in the "matting" tab of the video fusion control panel 430, so that the live host 200 may generate the video image serving as the source background layer 521b from the video image captured at the machine position designated by the user 300. The user 300 may also set the video image received by a designated interface (e.g., an interface associated with the interface source monitoring windows 425 and 426) as the source background layer through the same control, so that the live host 200 may generate the video image serving as the source background layer 521b from the video image received by the interface designated by the user 300.
The live host 200 thus determines the video image of the corresponding source as a source background layer according to the operation of the user 300. By the mode, the user can flexibly set the information source background of the matting processing, so that images of the anchor personnel are fused with various video image resources.
In addition, through the operation interface 400, the user 300 can not only switch the video image serving as the underlying background layer 510 and/or the source background layer 521b as needed, but can also switch the same source between the underlying background layer 510 and the source background layer 521b. For example, referring to fig. 5B, the video image of the PPT media source is shown as the underlying background layer. In this case, however, the user 300 may set the PPT media source as the source background layer of the matting operation in the "matting" tab. Thus, according to the operation of the user 300, the live host 200 generates the video image of the source background layer shown in fig. 4B from the PPT media source and frees the underlying background layer. The background displayed in the source display window 521 is then the video image of the PPT media source, while the underlying background layer of the live video 500 is blank.
In contrast, referring to fig. 5C, the HDMI interface source is associated with the source background layer 521b, so that the background displayed in the source display window 521 is the video image received by the HDMI interface. In this case, the user 300 may instead associate this source with the underlying background layer of the live video 500 by clicking the background setting button of the interface source monitoring window 425. Thus, according to the user's operation, the live host 200 generates the video image of the underlying background layer from the video image received by the HDMI interface and frees the source background layer 521b shown in fig. 4B. The bottom background layer displayed in the live push video display window 410 of the operation interface 400 is then the video image received by the HDMI interface, while the background displayed in the source display window 521 is blank.
In this way, the user 300 can switch the information source between different background layers according to the need, thereby enriching the expression form of the live video.
Optionally, according to an operation of setting a background of the live video by a user, determining a background source serving as the background of the live video includes: receiving a background setting request from a terminal device of the user, wherein the background setting request is generated by the terminal device in response to the user's setting operation on a background information source at a background setting interface; and in response to the background setting request, determining the background information source set by the user as the background information source serving as the background of the live video.
Specifically, referring to fig. 2A, a user (e.g., user 300) may establish communication with the live host 200 through the terminal device 100 (e.g., a device such as a cell phone, tablet, or notebook computer) prior to the start of a program. In case of successful communication with the live host 200, the terminal device 100 of the user 300 displays a control panel as shown in fig. 2B, so that the user 300 can set a background image in a live video by performing a corresponding operation on the control panel. Thus, during the live broadcast process, the user may determine the background source used as the background image of the live video, for example, but not limited to, by clicking on the source monitor windows 421-428 of the operator interface 400, the "scratch" tab of the video fusion control panel 430, the "picture" tab of the video fusion control panel 430, and so on. Therefore, after the user 300 performs the corresponding clicking operation, the terminal device 100 of the user 300 sends a corresponding background image setting request to the live host 200. Accordingly, the live host 200 receives the background image setting request from the terminal device 100 of the user 300, and acquires the background source triggered by the user 300 as the background source of the background image of the live video in response to the background image setting request. In this way, the live host 200 can quickly and accurately determine the background source as the background image of the live video according to the setting operation of the user.
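The terminal-to-host exchange described above can be sketched as a small message handler. The JSON fields below are illustrative assumptions, not the actual protocol between the terminal device 100 and the live host 200:

```python
import json

def handle_background_request(raw: str, known_sources: set) -> dict:
    """Parse a background-setting request sent by the terminal device and
    resolve the background source it names (S302 on the host side)."""
    req = json.loads(raw)
    source_id = req.get("source_id")
    if req.get("action") != "set_background" or source_id not in known_sources:
        return {"ok": False, "error": "unknown source or action"}
    # "layer" distinguishes the bottom background layer from a source background
    return {"ok": True, "background_source": source_id,
            "layer": req.get("layer", "bottom")}
```

Validating against the set of known sources mirrors the host acquiring only a background source that the user actually triggered through the operation interface.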
Optionally, the method further comprises: setting playing parameters of the background video image, wherein the playing parameters comprise a playing mode, a playing speed, a playing duration and whether the background video image is hidden after playing; and performing play control on the background video image according to the set play parameters.
Specifically, the user 300 of the present embodiment may also perform play control on the background video image that has been set, in which case the live host 200 sets the play parameters of the background video image according to the relevant information input by the user 300. The play parameters include, for example but not limited to, the play mode, play speed, play duration, and whether to hide the background video image after playing. The live host 200 then performs play control on the background video image according to the set play parameters. In this way, the user 300 can control the playing of the background video image according to actual requirements, which effectively improves the user experience.
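The playback parameters listed above can be grouped into a small settings object; the field names and the duration rule below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class PlaybackParams:
    """Play-control settings for a background video image."""
    mode: str = "loop"           # play mode, e.g. "loop" or "once"
    speed: float = 1.0           # playback speed multiplier
    duration_s: float = 0.0      # requested play duration; 0 means full length
    hide_after_play: bool = False

def effective_duration(clip_length_s: float, p: PlaybackParams) -> float:
    """Wall-clock time one pass of the background plays, capped by the clip
    length and scaled by the playback speed."""
    play_len = p.duration_s if p.duration_s > 0 else clip_length_s
    return min(play_len, clip_length_s) / p.speed
```

The host's play-control loop would then consult `mode` and `hide_after_play` once this duration elapses.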
Further, referring to fig. 1, according to a second aspect of the present embodiment, there is provided a storage medium. The storage medium includes a stored program, wherein the method of any of the above is performed by a processor when the program is run.
Therefore, according to the embodiment, in the live broadcast process, the user can set the background image of the live video according to the actual requirement in the operation interface displayed by the terminal device in communication connection with the live host. The live host responds to the operation of setting the background image of the live video by the user, firstly, a background information source which is determined by the user and serves as the background image of the live video is obtained, then, a background video image of the live video is generated according to the background information source, and finally, the background video image and the video image of the live video are fused to generate a corresponding target live video. By the method, the technical effect that a user can flexibly set the background image in the live video is achieved. And further solves the technical problem that the background image in the live video cannot be flexibly set in the prior art.
Furthermore, although in the present embodiment the terminal device and the live host are two separate devices, the method of the present embodiment is also applicable to the case where the terminal device and the live host are integrated into one device. For example, the live device itself may display the operation interface, so that the method described in this embodiment can be implemented directly according to the operations of the user.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
Example 2
Fig. 7 shows an apparatus 700 for controlling a background image of a live video according to the present embodiment, which apparatus 700 corresponds to the method according to the first aspect of embodiment 1. Referring to fig. 7, the apparatus 700 includes: the background information source determining module 710 is configured to determine a background information source serving as a background of the live video according to an operation of setting the background of the live video by a user; the background video image generating module 720 is configured to generate a background video image of the live video according to the background information source; and a fusion module 730, configured to fuse the background video image with the video image of the live video, and generate a corresponding target live video.
Optionally, the background source determining module 710 includes a first background source determining sub-module, configured to determine, according to a user setting an underlying background layer of the live video, a first background source serving as an underlying background layer of the live video, where the underlying background layer is a video layer located in a bottommost background of the live video. The background video image generation module 720 includes a first background video generation sub-module for generating a video image as an underlying background layer according to a first background source. The fusion module 730 includes a first fusion sub-module, configured to fuse a video image of a background layer of a bottom layer with a video image of a live video, and generate a target live video.
Optionally, the first background video generation sub-module includes: a first generating unit, configured to generate the video image serving as the underlying background layer from a machine-position (camera) video image determined by the user as the underlying background layer; a second generating unit, configured to generate it from a local video image determined by the user; a third generating unit, configured to generate it from a network video image resource determined by the user; a fourth generating unit, configured to generate it from a video image received through a data interface determined by the user; a fifth generating unit, configured to generate it from a picture determined by the user; or a sixth generating unit, configured to generate it from a PPT determined by the user as the underlying background layer.
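The six generating units amount to a dispatch over source kinds. A minimal sketch, in which all function names and dictionary keys are assumptions rather than anything the patent specifies:

```python
# Hypothetical dispatch over the six source kinds handled by the first
# through sixth generating units. A real generator would decode frames;
# each one here just tags the source for illustration.

def frames_from_camera(src): return ("camera", src)                  # first unit
def frames_from_local_video(src): return ("local_video", src)        # second unit
def frames_from_network(src): return ("network_video", src)          # third unit
def frames_from_data_interface(src): return ("data_interface", src)  # fourth unit
def frames_from_picture(src): return ("picture", src)                # fifth unit
def frames_from_ppt(src): return ("ppt", src)                        # sixth unit

UNDERLYING_LAYER_GENERATORS = {
    "camera": frames_from_camera,
    "local_video": frames_from_local_video,
    "network_video": frames_from_network,
    "data_interface": frames_from_data_interface,
    "picture": frames_from_picture,
    "ppt": frames_from_ppt,
}

def generate_underlying_layer(kind: str, src: str):
    """Select the generating unit matching the user's chosen source kind."""
    return UNDERLYING_LAYER_GENERATORS[kind](src)
```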
Optionally, the background information source determining module 710 includes a second background information source determining sub-module, configured to determine a second background information source serving as the source background of the matting processing, according to the user's operation of matting the video image of a machine-position information source and setting the matting background. The background video image generation module 720 includes a second background video generation sub-module, configured to generate a video image serving as the source background of the matting processing. The fusion module 730 includes a second fusion sub-module, configured to fuse the video image of the source background with the video image of the matted target object area, and to generate the video image displayed in an information source display window of the target live video, where the video image of the target object area is the video image of the target object extracted by matting from the video image of the machine-position information source, and the information source display window is associated with the machine-position information source.
Optionally, the second background video generation sub-module includes: a seventh generating unit, configured to generate the video image serving as the matting source background from a local video image determined by the user as the machine-position background image; an eighth generating unit, configured to generate it from a network video image resource determined by the user; a ninth generating unit, configured to generate it from a video image received through a data interface determined by the user; a tenth generating unit, configured to generate it from a picture determined by the user; or an eleventh generating unit, configured to generate it from a PPT determined by the user as the machine-position background image.
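The matting-and-fusion step can be illustrated with a minimal green-screen sketch. The chroma-key rule and all names are assumptions; the patent does not specify the matting algorithm.

```python
# Minimal green-screen matting sketch: pixels that are "mostly green" in the
# machine-position (camera) frame are treated as removable background and
# replaced by the user-chosen source background; all other pixels belong to
# the target object. Frames are lists of rows of (r, g, b) tuples.

def is_target_object(pixel, margin=60):
    """A pixel belongs to the target object unless it is dominantly green."""
    r, g, b = pixel
    return not (g > r + margin and g > b + margin)

def matte_and_fuse(camera_frame, source_background):
    """Extract the target-object area and fuse it over the source background."""
    return [
        [cam if is_target_object(cam) else bg for cam, bg in zip(cam_row, bg_row)]
        for cam_row, bg_row in zip(camera_frame, source_background)
    ]
```

Production systems would use a soft alpha matte rather than this hard per-pixel decision.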
Optionally, the background information source determination module 710 includes: a request receiving sub-module, configured to receive a background setting request from the user's terminal device, where the background setting request is generated by the terminal device in response to the user's setting operation on a background information source in a background setting interface; and a background information source determining sub-module, configured to determine, in response to the background setting request, the background information source selected by the user as the information source of the background image of the live video.
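The request-receiving sub-module can be sketched as a small validator applied to the terminal's background-setting request; the JSON field names and the set of allowed kinds are assumptions for illustration.

```python
import json

# Source kinds enumerated in the description; the literal keys are illustrative.
ALLOWED_KINDS = {"camera", "local_video", "network_video",
                 "data_interface", "picture", "ppt"}

def parse_background_setting_request(raw: str) -> dict:
    """Resolve a background-setting request sent by the terminal device
    into the background information source for the live video."""
    req = json.loads(raw)
    kind = req.get("source_kind")
    if kind not in ALLOWED_KINDS:
        raise ValueError(f"unsupported background source kind: {kind!r}")
    return {"kind": kind, "location": req["source_location"]}
```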
Optionally, the apparatus 700 further comprises: a setting module, configured to set play parameters of the background video image, where the play parameters include the play mode, play speed, play duration, and whether the background video image is hidden after playback finishes; and a play control module, configured to control playback of the background video image according to the set play parameters.
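A sketch of how the listed play parameters could drive playback control; the class and field names are assumptions, not the patent's:

```python
from dataclasses import dataclass

@dataclass
class PlayParams:
    mode: str = "loop"          # play mode: "loop" or "once"
    speed: float = 1.0          # play speed multiplier
    duration_s: float = 10.0    # play duration in seconds
    hide_after_play: bool = False  # hide the background image when playback ends

class BackgroundPlayer:
    """Advances playback of the background video image per the set parameters."""
    def __init__(self, params: PlayParams):
        self.params = params
        self.visible = True
        self.elapsed = 0.0

    def tick(self, dt: float) -> None:
        if not self.visible:
            return
        self.elapsed += dt * self.params.speed
        if self.elapsed >= self.params.duration_s:
            if self.params.mode == "loop":
                self.elapsed = 0.0          # restart from the beginning
            elif self.params.hide_after_play:
                self.visible = False        # hide once playback finishes
```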
Therefore, according to this embodiment, during live broadcasting the user can set the background image of the live video as needed, in an operation interface displayed by a terminal device communicatively connected to the live broadcast host. In response to the user's operation, the live broadcast host first obtains the background information source that the user has designated as the background image of the live video, then generates a background video image of the live video from that information source, and finally fuses the background video image with the video image of the live video to generate the corresponding target live video. In this way, the user can flexibly set the background image of the live video, which solves the technical problem in the prior art that the background image of a live video cannot be flexibly set.
Example 3
Fig. 8 shows an apparatus 800 for controlling a background image of a live video according to the present embodiment; the apparatus 800 corresponds to the method according to the first aspect of embodiment 1. Referring to Fig. 8, the apparatus 800 includes: a processor 810; and a memory 820, coupled to the processor 810 and configured to provide the processor 810 with instructions for the following processing steps: determining a background information source serving as the background of the live video according to the user's operation of setting the background of the live video; generating a background video image of the live video according to the background information source; and fusing the background video image with the video image of the live video to generate a corresponding target live video.
Optionally, according to an operation of setting a background of a live video by a user, determining a background source serving as the background of the live video includes: according to the operation of setting the bottom background layer of the live video by the user, determining a first background information source serving as the bottom background layer of the live video, wherein the bottom background layer is a video layer positioned at the bottommost background of the live video, and according to the background information source, generating a background video image of the live video comprises the following steps: generating a video image serving as the bottom background layer according to the first background information source; and fusing the background video image with the video image of the live video to generate a corresponding target live video, comprising: and fusing the video image of the bottom background layer with the video image of the live video to generate the target live video.
Optionally, generating the video image serving as the underlying background layer according to the first background information source includes one of: generating it from the machine-position video image determined by the user as the underlying background layer; from a local video image determined by the user; from a network video image resource determined by the user; from a video image received through a data interface determined by the user; from a picture determined by the user; or from a PPT determined by the user as the underlying background layer.
Optionally, according to an operation of setting a background of a live video by a user, determining a background source serving as the background of the live video includes: according to the operation that the user performs the matting processing on the video image of the machine position information source and sets the background of the matting processing, a second background information source serving as the information source background of the matting processing is determined; and generating a background video image of the live video according to the background information source, wherein the operation comprises the following steps: generating a video image serving as the information source background of the matting processing according to the second background information source; and fusing the background video image with the video image of the live video to generate a corresponding target live video, comprising: and fusing the video image of the information source background with the video image of the target object area processed by the matting, and generating a video image displayed by an information source display window in the target live video, wherein the video image of the target object area is a video image of a target object extracted from the video image of the machine position information source through matting operation, and the information source display window is associated with the machine position information source.
Optionally, generating the video image serving as the source background of the matting processing according to the second background information source includes one of: generating it from a local video image determined by the user as the machine-position background image; from a network video image resource determined by the user; from a video image received through a data interface determined by the user; from a picture determined by the user; or from a PPT determined by the user as the machine-position background image.
Optionally, according to an operation of setting a background of a live video by a user, determining a background source serving as the background of the live video includes: receiving a background setting request from a terminal device of the user, wherein the background setting request is generated by the terminal device in response to the user performing setting operation on a background information source on a background setting interface; and responding to the background setting request, and determining the background information source set by the user as the background information source of the background of the live video.
Optionally, the memory 820 is further configured to provide the processor 810 with instructions for the following processing steps: setting play parameters of the background video image, where the play parameters include the play mode, play speed, play duration, and whether the background video image is hidden after playback finishes; and controlling playback of the background video image according to the set play parameters.
Therefore, according to this embodiment, during live broadcasting the user can set the background image of the live video as needed, in an operation interface displayed by a terminal device communicatively connected to the live broadcast host. In response to the user's operation, the live broadcast host first obtains the background information source that the user has designated as the background image of the live video, then generates a background video image of the live video from that information source, and finally fuses the background video image with the video image of the live video to generate the corresponding target live video. In this way, the user can flexibly set the background image of the live video, which solves the technical problem in the prior art that the background image of a live video cannot be flexibly set.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present invention that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that a person skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (8)

1. A method of controlling the background of live video, comprising:
according to the operation of setting the background of the live video by a user, determining a background information source serving as the background of the live video, wherein the live video comprises a bottom background layer, an information source window video layer and a foreground layer;
generating a background video image of the live video according to the background information source; and
fusing the background video image and the video image of the live video to generate a corresponding target live video, wherein
According to the operation of setting the background of the live video by a user, determining the background information source serving as the background of the live video comprises the following steps: according to the operation of setting the bottom background layer of the live video by the user, determining a first background information source serving as the bottom background layer of the live video, wherein the bottom background layer is a video layer positioned at the bottommost background of the live video, and
And generating a background video image of the live video according to the background information source, wherein the operation comprises the following steps: generating a video image serving as the bottom background layer according to the first background information source; and
fusing the background video image with the video image of the live video to generate a corresponding target live video, including: fusing the video image of the bottom background layer with the video image of the live video to generate the target live video, and wherein
According to the operation of setting the background of the live video by a user, determining the background information source serving as the background of the live video comprises the following steps: according to the operation that the user performs the matting processing on the video image of the machine position information source and sets the background of the matting processing, a second background information source serving as the information source background of the matting processing is determined;
and generating a background video image of the live video according to the background information source, wherein the operation comprises the following steps: generating a video image serving as the information source background of the matting processing according to the second background information source; and
fusing the background video image with the video image of the live video to generate a corresponding target live video, including: and fusing the video image of the information source background with the video image of the target object area processed by the matting, and generating a video image displayed by an information source display window in the target live video, wherein the video image of the target object area is a video image of a target object extracted from the video image of the machine position information source through matting operation, and the information source display window is associated with the machine position information source.
2. The method of claim 1, wherein generating a video image as the underlying background layer from the first background source comprises:
generating a video image serving as a bottom background layer according to the machine position video image serving as the bottom background layer determined by the user;
generating a video image serving as a bottom background layer according to the local video image serving as the bottom background layer determined by the user;
generating a video image serving as a bottom background layer according to the network video image resource serving as the bottom background layer determined by the user;
generating a video image serving as the bottom background layer according to the video image received by the data interface determined by the user;
generating a video image serving as a bottom background layer according to the picture serving as the bottom background layer determined by the user; or
And generating a video image serving as the bottom background layer according to the PPT serving as the bottom background layer determined by the user.
3. A method as recited in claim 1, wherein generating a video image of a source background for the matting process based on the second background source comprises:
Generating a video image serving as a source background of the matting processing according to the local video image serving as the machine position background image determined by the user;
generating a video image serving as a source background of the matting processing according to the network video image resource serving as the machine position background image determined by the user;
generating a video image serving as a source background of the matting processing according to the video image received by the data interface determined by the user;
generating a video image serving as a source background of the matting processing according to the picture serving as the machine position background image determined by the user; or
And generating a video image serving as the information source background of the matting processing according to the PPT serving as the machine position background image determined by the user.
4. The method according to claim 1, wherein the operation of determining a background source as a background of the live video according to an operation of setting a background of the live video by a user comprises:
receiving a background setting request from a terminal device of the user, wherein the background setting request is generated by the terminal device in response to the user performing setting operation on a background information source on a background setting interface; and
And responding to the background setting request, and determining the background information source set by the user as the background information source of the background of the live video.
5. The method as recited in claim 1, further comprising:
setting play parameters of the background video image, wherein the play parameters comprise a play mode, a play speed, a play duration and whether the background video image is hidden after being played; and
and performing play control on the background video image according to the set play parameters.
6. A storage medium comprising a stored program, wherein the method of any one of claims 1 to 5 is performed by a processor when the program is run.
7. An apparatus (700) for controlling the background of live video, comprising:
the background information source determining module (710) is used for determining a background information source serving as a background of the live video according to the operation of setting the background of the live video by a user, wherein the live video comprises a bottom background layer, an information source window video layer and a foreground layer;
a background video image generating module (720) for generating a background video image of the live video according to the background information source; and
A fusion module (730) for fusing the background video image and the video image of the live video to generate a corresponding target live video, wherein
The background source determination module (710) includes: the first background information source determining submodule is used for determining a first background information source serving as a bottom background layer of the live video according to the operation of setting the bottom background layer of the live video by a user, wherein the bottom background layer is a video layer positioned at the bottommost background of the live video;
the background video image generation module (720) comprises: the first background video generation sub-module is used for generating a video image serving as the bottom background layer according to the first background information source; and
the fusion module (730) includes: a first fusion sub-module, configured to fuse the video image of the bottom background layer with the video image of the live video, generate the target live video, and wherein
The background source determination module (710) includes: the second background information source determining submodule is used for determining a second background information source serving as an information source background of the matting processing according to the operation of the user for matting processing the video image of the machine information source and setting the background of the matting processing;
The background video image generation module (720) comprises: the second background video generation sub-module is used for generating a video image serving as a source background of the matting processing; and
the fusion module (730) includes: and the second fusion sub-module is used for fusing the video image of the information source background with the video image of the target object area processed by the matting, and generating a video image displayed by an information source display window in the target live video, wherein the video image of the target object area is a video image of a target object extracted from the video image of the machine position information source through the matting operation, and the information source display window is associated with the machine position information source.
8. An apparatus (800) for controlling the background of live video, comprising:
a processor (810); and
a memory (820) coupled to the processor for providing instructions to the processor for processing the following processing steps:
according to the operation of setting the background of the live video by a user, determining a background information source serving as the background of the live video, wherein the live video comprises a bottom background layer, an information source window video layer and a foreground layer;
Generating a background video image of the live video according to the background information source; and
fusing the background video image and the video image of the live video to generate a corresponding target live video, wherein
According to the operation of setting the background of the live video by a user, determining the background information source serving as the background of the live video comprises the following steps: according to the operation of setting the bottom background layer of the live video by the user, determining a first background information source serving as the bottom background layer of the live video, wherein the bottom background layer is a video layer positioned at the bottommost background of the live video, and
and generating a background video image of the live video according to the background information source, wherein the operation comprises the following steps: generating a video image serving as the bottom background layer according to the first background information source; and
fusing the background video image with the video image of the live video to generate a corresponding target live video, including: fusing the video image of the bottom background layer with the video image of the live video to generate the target live video, and wherein
According to the operation of setting the background of the live video by a user, determining the background information source serving as the background of the live video comprises the following steps: according to the operation that the user performs the matting processing on the video image of the machine position information source and sets the background of the matting processing, a second background information source serving as the information source background of the matting processing is determined;
and generating a background video image of the live video according to the background information source, wherein the operation comprises the following steps: generating a video image serving as the information source background of the matting processing according to the second background information source; and
fusing the background video image with the video image of the live video to generate a corresponding target live video, including: and fusing the video image of the information source background with the video image of the target object area processed by the matting, and generating a video image displayed by an information source display window in the target live video, wherein the video image of the target object area is a video image of a target object extracted from the video image of the machine position information source through matting operation, and the information source display window is associated with the machine position information source.
CN202111056258.XA 2021-09-09 2021-09-09 Method, device and storage medium for controlling background image of live video Active CN113873272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111056258.XA CN113873272B (en) 2021-09-09 2021-09-09 Method, device and storage medium for controlling background image of live video


Publications (2)

Publication Number Publication Date
CN113873272A CN113873272A (en) 2021-12-31
CN113873272B true CN113873272B (en) 2023-12-15

Family

ID=78995134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111056258.XA Active CN113873272B (en) 2021-09-09 2021-09-09 Method, device and storage medium for controlling background image of live video

Country Status (1)

Country Link
CN (1) CN113873272B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466207A (en) * 2022-01-18 2022-05-10 阿里巴巴(中国)有限公司 Live broadcast control method and computer storage medium
CN114584797A (en) * 2022-02-28 2022-06-03 北京字节跳动网络技术有限公司 Display method and device of live broadcast picture, electronic equipment and storage medium
CN114710703A (en) * 2022-03-29 2022-07-05 稿定(厦门)科技有限公司 Live broadcast method and device with variable scenes

Citations (9)

Publication number Priority date Publication date Assignee Title
CN1543203A (en) * 2003-03-28 2004-11-03 伊斯曼柯达公司 Method and system for modifying digital cinema frame content
CN106713942A (en) * 2016-12-27 2017-05-24 广州华多网络科技有限公司 Video processing method and video processing device
CN110012264A (en) * 2017-11-13 2019-07-12 广州必威易微播科技有限责任公司 A kind of video handles synthesis system and method in real time
CN111447389A (en) * 2020-04-22 2020-07-24 广州酷狗计算机科技有限公司 Video generation method, device, terminal and storage medium
CN111698224A (en) * 2020-05-22 2020-09-22 张焱 Water quality monitoring terminal user verification method and system and water quality monitoring internet of things terminal
WO2021047430A1 (en) * 2019-09-11 2021-03-18 广州华多网络科技有限公司 Virtual gift special effect synthesis method and apparatus, and live streaming system
CN112601106A (en) * 2020-11-16 2021-04-02 北京都是科技有限公司 Video image processing method and device and storage medium
CN112822542A (en) * 2020-08-27 2021-05-18 腾讯科技(深圳)有限公司 Video synthesis method and device, computer equipment and storage medium
CN113099265A (en) * 2021-04-27 2021-07-09 北京大米科技有限公司 Interaction method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gu Duan; De Minhui; "Background large-screen system of the Beijing TV high-definition news studio"; Modern Television Technology (Issue 04); full text *
Gu Duan et al. "Background large-screen system of the Beijing TV high-definition news studio". Modern Television Technology. 2013, (Issue 04), full text. *

Also Published As

Publication number Publication date
CN113873272A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN113873272B (en) Method, device and storage medium for controlling background image of live video
CN110636353B (en) Display device
US10417742B2 (en) System and apparatus for editing preview images
EP2690550A1 (en) Method and apparatus for displaying a multi-task interface
CN108600818B (en) Method and device for displaying multimedia resources
CN113873311B (en) Live broadcast control method, device and storage medium
CN111405339B (en) Split screen display method, electronic equipment and storage medium
CN112073798B (en) Data transmission method and equipment
CN110798622B (en) Shared shooting method and electronic equipment
CN113014972B (en) Screen projection method, device and system
US20190261026A1 (en) Multimedia information playing method and system, standardized server and live broadcast terminal
EP4171046A1 (en) Video processing method, and device, storage medium and program product
CN112637515A (en) Shooting method and device and electronic equipment
CN108111897A (en) Method and device for displaying presentation information in a video
CN113596555B (en) Video playing method and device and electronic equipment
CN113518257B (en) Multisystem screen projection processing method and equipment
CN113873273B (en) Method, device and storage medium for generating live video
CN105578204B (en) Method and device for displaying multiple video data
CN112616078A (en) Screen projection processing method and device, electronic equipment and storage medium
CN112399220A (en) Camera physical switch locking state display method and display equipment
CN109999490B (en) Method and system for reducing networking cloud application delay
CN112399235A (en) Method for enhancing photographing effect of camera of smart television and display device
CN108989874B (en) Intelligent display with projection function, implementation method thereof and intelligent television
CN114938430B (en) Picture display method and device under full-true scene, electronic equipment and readable storage medium
CN116233511A (en) Video synchronous playing method, spliced screen system, television and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant