CN111432235A - Live video generation method and device, computer readable medium and electronic equipment - Google Patents

Live video generation method and device, computer readable medium and electronic equipment

Info

Publication number
CN111432235A
CN111432235A
Authority
CN
China
Prior art keywords
live
video
live broadcast
anchor
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010250228.1A
Other languages
Chinese (zh)
Inventor
程梦影
王毅
孙静
陈健生
冯里千
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010250228.1A
Publication of CN111432235A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a live video generation method, a live video generation apparatus, a computer-readable medium, and an electronic device, and relates to the technical field of live streaming. The live video generation method includes: capturing a live picture at the live anchor client; in response to a live background switching instruction, removing the live background from the live picture to obtain the anchor image in the live picture; and compositing the anchor image and a target video file into a live video, so as to push the live video to live viewer clients. The live video generation method can, to a certain extent, overcome the limitations that the live environment imposes on broadcasting, thereby improving the live streaming effect.

Description

Live video generation method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of live broadcast technologies, and in particular, to a live video generation method, a live video generation apparatus, a computer-readable medium, and an electronic device.
Background
As a representative of new media, webcasting is an emerging industry. With the further accelerated construction and deployment of networks, webcasting, a form of entertainment in which live images are publicly played over the Internet, is being accepted by more and more people. Because webcasting is highly real-time and highly interactive, it has also become an important way for major online video platforms to expand their influence and attract users.
Webcasting requires the anchor to set up audio and video acquisition equipment at the live site; the captured stream is then uploaded to a server over the network and published by the live platform to attract users to the live room. During a live broadcast, real-time constraints prevent the live background from being changed flexibly, so the effect desired by the anchor or the audience cannot be achieved.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide a live video generation method, a live video generation apparatus, a computer-readable medium, and an electronic device, so as to overcome, at least to a certain extent, the limitations of the live environment on live streaming and to improve the generation efficiency of live video.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a live video generation method, including:
capturing a live picture at the live anchor client;
in response to a live background switching instruction, removing the live background from the live picture to obtain the anchor image in the live picture;
and compositing the anchor image and a target video file into a live video, so as to push the live video to live viewer clients.
In an exemplary embodiment of the present disclosure, before responding to the live background switching instruction, the method further includes:
acquiring interaction information between the live broadcast anchor terminal and the live broadcast audience terminal;
and if the interactive information meets the preset condition, displaying a live broadcast background switching control at the live broadcast anchor terminal so as to receive the live broadcast background switching instruction through the live broadcast background switching control.
In an exemplary embodiment of the present disclosure, the interaction information includes a virtual gift sent by a live viewer to the live anchor.
In an exemplary embodiment of the present disclosure, the removing, in response to a background switching instruction, a live background from the live view to obtain an anchor image in the live view includes:
and if the confirmation operation of the live broadcast background switching control is detected, removing the live broadcast background from the live broadcast picture to acquire the anchor image in the live broadcast picture.
In an exemplary embodiment of the present disclosure, the removing a live background from the live picture to obtain an anchor image in the live picture includes:
identifying the live picture through a neural network to determine the live background in the live picture;
and deleting the live background from the live picture to obtain the anchor image in the live picture.
In an exemplary embodiment of the present disclosure, before identifying the live picture through the neural network model, the method further includes:
acquiring a plurality of sample pictures;
and training on the plurality of sample pictures with a deep learning algorithm to obtain the neural network model.
In an exemplary embodiment of the present disclosure, the synthesizing the anchor image and the target video file into a live video includes:
displaying a plurality of video files saved at the live anchor client;
and acquiring a target video file selected by the anchor from the plurality of video files, so as to composite the anchor image and the target video file into a live video.
In an exemplary embodiment of the present disclosure, the synthesizing the anchor image and the target video file into a live video includes:
acquiring a first interaction type to which the interaction information belongs;
acquiring, from a server, a plurality of video files corresponding to the first interaction type for the anchor to select from;
and determining a target video file selected by the anchor from the plurality of video files, so as to composite the anchor image and the target video file into a live video.
In an exemplary embodiment of the present disclosure, before obtaining, from a server, a plurality of video files corresponding to the first interaction type, the method further includes:
generating a plurality of video files through special-effect production techniques;
and determining the interaction type corresponding to each video file.
In an exemplary embodiment of the present disclosure, the synthesizing the anchor image and the target video file into a live video includes:
extracting video frames contained in the target video file;
and performing image superposition processing on the anchor image and the video frame to synthesize the live video.
According to a second aspect of the present disclosure, a live video generation apparatus is provided, including a picture acquisition module, a background segmentation module, and a video synthesis module, wherein:
the picture acquisition module is used for acquiring a live broadcast picture of a live broadcast anchor terminal;
the background segmentation module is used for responding to a live broadcast background switching instruction and removing a live broadcast background from the live broadcast picture so as to acquire a main broadcast image in the live broadcast picture;
and the video synthesis module is used for synthesizing the anchor image and the target video file into a live video so as to push the live video to a live audience.
In an exemplary embodiment of the present disclosure, the apparatus may further include an interaction obtaining module, and a control display module, wherein:
the interaction obtaining module is used for acquiring interaction information between the live anchor client and the live viewer clients.
And the control display module is used for displaying a live background switching control at the live anchor client if the interaction information meets a preset condition, so as to receive the live background switching instruction through the live background switching control.
In an exemplary embodiment of the present disclosure, the background segmentation module may specifically include a picture recognition unit, and a background removal unit, wherein:
and the picture identification unit is used for identifying the live broadcast picture through a neural network so as to determine a live broadcast background in the live broadcast picture.
And the background removing unit is used for deleting the live background from the live picture so as to obtain the anchor image in the live picture.
In an exemplary embodiment of the present disclosure, the video composition module may specifically include a file display unit, and a file selection unit, wherein:
and the file display unit is used for displaying a plurality of video files stored by the live anchor terminal.
And the file selection unit is used for acquiring a target video file selected by a main broadcast in the video files so as to synthesize the main broadcast image and the target video file into a live broadcast video.
In an exemplary embodiment of the present disclosure, the video composition module may specifically include a type obtaining unit, a file obtaining unit, and a target file determining unit, wherein:
and the type acquisition unit is used for acquiring the first interaction type to which the interaction information belongs.
And the file acquisition unit is used for acquiring, from a server, a plurality of video files corresponding to the first interaction type for the anchor to select from.
And the target file determining unit is used for determining a target video file selected by the anchor from the plurality of video files, so as to composite the anchor image and the target video file into a live video.
In an exemplary embodiment of the present disclosure, the live video generation apparatus may further include a video production module, and an interaction type determination module, wherein:
and the video production module is used for generating a plurality of video files through a special effect production technology.
And the interaction type determining module is used for determining the interaction type corresponding to each video file.
In an exemplary embodiment of the present disclosure, the video composition module may specifically include a video frame extraction unit, and an image superimposition unit, wherein:
and the video frame extraction unit is used for extracting the video frames contained in the target video file.
And the image superposition unit is used for carrying out image superposition processing on the anchor image and the video frame so as to synthesize the live video.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any one of the above via execution of the executable instructions.
According to a fourth aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above.
Exemplary embodiments of the present disclosure may have some or all of the following benefits:
in the live video generation method provided by an exemplary embodiment of the present disclosure, first, by removing the live background from the live picture and compositing the anchor image with a target video file, live video can be generated flexibly, without being limited by the live scene, thereby improving the live effect; second, during a broadcast, the live video the anchor wants can be generated from a target video file, which meets user needs and improves the user experience; third, the background can be switched directly within the live stream, so the anchor does not need to move the acquisition equipment, saving labor and improving video generation efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 schematically shows a system architecture diagram for implementing a live video generation method according to one embodiment of the present disclosure;
fig. 2 schematically shows a flow diagram of a live video generation method according to an embodiment of the present disclosure;
FIG. 3 schematically shows a live view diagram according to an embodiment of the disclosure;
FIG. 4 schematically shows a flow diagram of a live video generation method according to one embodiment of the present disclosure;
fig. 5 schematically shows a flow diagram of a live video generation method according to another embodiment of the present disclosure;
FIG. 6 schematically shows a flow diagram of a live video generation method according to one embodiment of the present disclosure;
fig. 7 schematically shows a flow diagram of a live video generation method according to another embodiment of the present disclosure;
FIG. 8 schematically shows a display effect diagram of a video file according to one embodiment of the present disclosure;
FIG. 9 schematically shows an effect diagram of a live video according to one embodiment of the present disclosure;
FIG. 10 schematically shows a flow diagram of a live video generation method according to one embodiment of the present disclosure;
fig. 11 schematically shows a block diagram of a live video generating apparatus according to an embodiment of the present disclosure;
FIG. 12 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The technical solution of the embodiment of the present disclosure is explained in detail below:
the virtual background is a technology of removing the original picture background and then synthesizing the original picture background, so that the characters in the synthesized picture are as if the characters are in the replaced background, and the virtual background is widely applied to the post-processing of movies and television series to achieve the expected effect. However, because live webcast has the characteristic of high real-time performance, live webcast video cannot be processed by means of post-processing, and the existing post-processing has a high requirement for video acquisition, and requires a green screen as a background, and the picture effect after synthesis is poor due to influence factors such as light in the environment and the flatness of the green screen.
In view of one or more of the above problems, an exemplary embodiment of the present disclosure first provides a system architecture for implementing a live video generation method. Referring to fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send request instructions or the like. Various client applications, such as shopping applications, live platform applications, web browser applications, search applications, instant messaging tools, mailbox clients, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a video data transmission service for providing a live address requested by a user using the terminal device 101, 102, 103, a background management service for providing support for a shopping website browsed by the user using the terminal device, and the like (for example only). The background management server may analyze and perform other processing on the received data such as the information query request, and feed back a processing result (for example, target push information and product information — just an example) to the terminal device.
Note that the live video generation method provided in the embodiment of the present disclosure is generally executed by the server 105, and accordingly, the live video generation apparatus is generally provided in the server 105. However, it can be understood by those skilled in the art that the live video generation method of the present disclosure may also be executed by the terminal device 101, and accordingly, a live video generation apparatus may be provided in the terminal device 101, which is not particularly limited in this embodiment.
Based on the system architecture, the present exemplary embodiment provides a live video generation method. Referring to fig. 2, the live video generation method may include step S210, step S220, and step S230, in which:
step S210: and collecting the live broadcast picture of the live broadcast anchor terminal.
Step S220: and responding to a live broadcast background switching instruction, and removing a live broadcast background from the live broadcast picture to acquire a main broadcast image in the live broadcast picture.
Step S230: and synthesizing the anchor image and the target video file into a live video so as to push the live video to a live audience.
In the live video generation method provided by an exemplary embodiment of the present disclosure, first, by removing the live background from the live picture and compositing the anchor image with a target video file, live video can be generated flexibly, without being limited by the live scene, thereby improving the live effect; second, during a broadcast, the live video the anchor wants can be generated from a target video file, which meets user needs and improves the user experience; third, the background can be switched directly within the live stream, so the anchor does not need to move the acquisition equipment, saving labor and improving video generation efficiency.
The above steps of the present exemplary embodiment will be described in more detail below.
In step S210, a live view of the live anchor is captured.
In this embodiment, the live anchor client may refer to a client or user that provides the live video. The anchor client collects the audio and video stream and uploads it to the server, which provides it at a designated address for each viewer client to watch. The audio and video acquisition devices at the anchor client, such as a camera and a microphone, can capture the live picture of the anchor, and the captured picture can be cached at the anchor client, for example in the cache of the live anchor client. For example, the captured live picture may be as shown in fig. 3.
In step S220, in response to a live background switching instruction, removing a live background from the live view to obtain a main view image in the live view.
The live background switching instruction may be an instruction triggered by an operation of the anchor, such as a click or drag operation; it may also be an instruction triggered by a specific rule, for example when the live duration meets a certain condition, when the number of viewers in the live room meets a certain threshold, or when the number of barrage messages at the live anchor client meets a certain condition; this embodiment is not particularly limited in this regard. When the live background switching instruction is triggered, the live anchor client detects it and responds by performing background removal on the live picture. For example, the anchor client may determine that a live background switching instruction has been received when it detects that a specific button is clicked, and may then remove the live background from the currently captured live picture in response.
In an exemplary embodiment, before responding to the live background switching instruction, the present embodiment may further include the following step S401 and step S402, as shown in fig. 4, specifically:
in step S401, the interactive information between the live broadcasting anchor and the live audience is obtained. In this embodiment, the interactive information may be transmission data between a live viewer and a live anchor, for example, a barrage message, a virtual gift, and the like sent by the live viewer to the live anchor. The background server may store data of each live broadcast anchor, and may obtain the interaction information of the live broadcast anchor by requesting the background server, for example, may obtain a total amount of gifts sent by live viewers.
In step S402, if the interaction information meets a preset condition, a live background switching control is displayed at the live anchor client, so as to receive the live background switching instruction through the control. The preset condition may be determined according to the actual situation; for example, it may be that the number of live viewers connected to the anchor's live address reaches a certain value, or that the total value of gifts sent to the anchor by live viewers reaches a certain amount; this embodiment is not particularly limited in this regard. For example, if the interaction information is gift information, the condition can be considered met when the total value of gifts sent to the anchor exceeds a preset threshold; likewise, if the interaction information is barrage information, the condition can be considered met when the number of barrage messages sent to the anchor exceeds a certain value. When the interaction information meets the preset condition, the live background switching control is displayed at the anchor client. The control's display state can initially be set to hidden, and switched to visible when the interaction information meets the condition, so that the control appears in the user interface of the live anchor client. For example, the control may be a virtual button that is displayed when the total value of gifts sent by live viewers meets the condition.
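As a minimal sketch of the preset-condition check described above (the threshold values and field names are illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch of the "preset condition" check; thresholds and
# field names are illustrative assumptions, not from the patent.

GIFT_TOTAL_THRESHOLD = 100    # assumed minimum total gift value
BARRAGE_COUNT_THRESHOLD = 50  # assumed minimum barrage (bullet-screen) count

def should_show_switch_control(interaction: dict) -> bool:
    """Decide whether to display the live background switching control."""
    if interaction.get("gift_total", 0) >= GIFT_TOTAL_THRESHOLD:
        return True
    if interaction.get("barrage_count", 0) >= BARRAGE_COUNT_THRESHOLD:
        return True
    return False
```

In a real anchor client, this decision would drive the control's display state from hidden to visible.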
After the live background switching control is displayed, the anchor can freely choose whether to switch the live background, that is, whether to trigger the control; if the anchor confirms the control (for example through a click, press, long press, or slide initiated by the anchor), the anchor client can determine that a live background switching instruction has been received. In other embodiments, the live background switching control may instead take the form of a tab, an input box, a voice control, or the like, and it may also be displayed in other ways, for example in a separate pop-up window when the preset condition is met; these variations also fall within the protection scope of the present disclosure.
After the live background switching instruction is received, background switching can be performed on the live picture in response. The currently captured live picture can be divided into a foreground and a background, where the foreground is the anchor image and the background is the live background. The live background can be removed from the currently captured picture by image segmentation: the picture is analyzed to recognize foreground and background, the live background is identified and segmented out of the original picture, and the foreground, that is the anchor image, is obtained. For example, the live picture may be identified through a neural network to determine the live background in it, and the live background may then be deleted from the picture to obtain the anchor image. Specifically:
the collected live broadcast pictures need to be uploaded to a server as video frames in real time, so that the live broadcast pictures cannot be processed and segmented in a post-production mode. In the embodiment, live broadcast pictures can be identified through the machine learning model, so that live broadcast backgrounds can be separated. For example, a certain number of images can be obtained in advance as sample training data, a U-shaped neural network can be trained to obtain a neural network model, and a deep learning algorithm can be adopted to further train the neural network, so that the accuracy of the model can be improved. The neural network model obtained through training can identify the live broadcast picture, and live broadcast backgrounds in the live broadcast picture are removed and live broadcast portrait elements are reserved, so that a main broadcast image of a foreground is obtained. In addition, the adopted machine learning algorithm may also include other algorithms such as a clustering algorithm, a random forest, etc., active learning, etc., and the embodiment is not limited thereto.
Next, referring to fig. 2, in step S230, the anchor image and the target video file are synthesized into a live video, so as to push the live video to a live viewer.
The target video file may be a video file containing a specific scene, of which there may be various kinds, for example an indoor scene video, an outdoor scene video, or a stage scene video. The target video may be a virtual video generated by animation techniques or a real scene video actually shot; this embodiment is not particularly limited in this respect. Taking the anchor image as the foreground and a frame of the target video as the background, the two can be synthesized into a new picture using an image superposition algorithm; by continuously collecting new live broadcast pictures and synthesizing each of them, one by one, with the video frames of the target video in sequence, a video stream is formed and used as the live video.
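The superposition step above can be sketched as follows, assuming the anchor foreground and its binary mask come from the segmentation step, and that the background video simply loops when the live stream is longer (an assumption; the patent only says the frames are combined one by one in sequence):

```python
import numpy as np

def composite_frame(anchor_fg, mask, bg_frame):
    """Overlay the segmented anchor onto one frame of the target video.

    anchor_fg: H x W x 3 anchor pixels (background already zeroed)
    mask:      H x W binary map, 1 where the anchor is
    bg_frame:  H x W x 3 frame taken from the target video file
    """
    m = mask[:, :, None]                     # expand so the mask covers all channels
    return anchor_fg * m + bg_frame * (1 - m)

def composite_stream(anchor_frames, masks, bg_frames):
    """Pair live frames with target-video frames in order, looping the
    background video when the live stream is longer (assumed behavior)."""
    out = []
    for i, (fg, mask) in enumerate(zip(anchor_frames, masks)):
        bg = bg_frames[i % len(bg_frames)]   # loop the background video
        out.append(composite_frame(fg, mask, bg))
    return out
```

Each composited frame is one frame of the resulting live video stream; a production system would encode the stream and push it to the server rather than collect frames in a list.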
In an exemplary embodiment, a plurality of video files may be obtained in advance, and the target video file may be determined from among them so as to synthesize the live video. As shown in fig. 5, this may include step S501 and step S502, specifically:
in step S501, a plurality of video files saved by the live anchor terminal are displayed. For example, the anchor may store a plurality of video files locally at the live anchor terminal in advance, or save them in a specific file directory; after the live background switching instruction is received, the saved video files can be obtained from local storage or from that directory. Alternatively, after receiving the background switching instruction, the live anchor terminal may request the video files from the server, which then returns the files themselves or link addresses where they are stored. The acquired video files are then displayed in the user interface of the live anchor terminal; for example, identification information such as names and icons of the video files can be shown for the anchor to view and select from.
In step S502, the target video file selected by the anchor from among the plurality of video files is acquired, so that the anchor image and the target video file can be synthesized into the live video. After viewing the plurality of video files, the anchor can select one of them, and the selected file is determined to be the target video file by detecting the anchor's operation. For example, the video files may be displayed as a list and selected through a radio-button component; an icon corresponding to a video file may be selected by clicking; or identification information of the desired target video may be entered in an input box. The anchor can choose, according to the scene types contained in the different video files, the target video file that best fits the current live content, thereby better meeting the audience's needs and providing a better live broadcast effect.
In an exemplary embodiment, the method for determining a target video file and synthesizing the target video file and the anchor image into a live video may specifically include the following steps S601, S602, and S603, as shown in fig. 6, where:
in step S601, the first interaction type to which the interaction information belongs is obtained. For example, whether the first interaction type is satisfied may be determined from the content of the interaction information: if the interaction information is a bullet screen, it may be determined to belong to the first interaction type. Alternatively, the determination may be based on a quantity reached by the interaction information: for example, if the number of bullet-screen messages reaches a certain threshold, the interaction information may be determined to belong to the first interaction type. Other criteria are also possible, for example the total value of gifts received by the live anchor terminal: the interaction information may be determined to belong to the first interaction type when the total gift value reaches 100,000, for example. This embodiment is not limited thereto.
In an exemplary embodiment, the interactions between the live anchor terminal and the audience may be divided in advance into multiple interaction types. For example, the interaction information may be divided into a first, second, third interaction type, and so on, according to the quantity conditions it reaches; or, by content, bullet screens may be taken as the first interaction type, gifts as the second, messages as the third, forwarding and sharing as the fourth, and so on. Other classification schemes can also be used: for example, if the number of bullet-screen messages in the interaction information reaches a certain value, the information belongs to the first interaction type; if the total value of gifts sent reaches a certain value, it satisfies the second interaction type; and so on. This embodiment is not limited in this respect.
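One of the classification schemes described above (thresholds on the bullet-screen count and the total gift value) could be sketched as below; the concrete thresholds are illustrative only, since the patent leaves the quantity conditions open:

```python
def classify_interaction(barrage_count, gift_total):
    """Map aggregate viewer interaction to an interaction type label.

    barrage_count: number of bullet-screen messages received
    gift_total:    total value of gifts sent by viewers
    The thresholds (100,000 and 1,000) are invented for illustration.
    """
    if gift_total >= 100_000:    # total gift value condition -> second type
        return "second_type"
    if barrage_count >= 1_000:   # bullet-screen count condition -> first type
        return "first_type"
    return "none"                # no condition met, no background switch offered
```

A deployment could just as well classify by content (barrage vs. gift vs. message) instead of by quantity; the lookup that follows would be unchanged.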
In step S602, a plurality of video files corresponding to the first interaction type are obtained from the server for the anchor to select from. The server can store information on a plurality of video files, each of which may have an associated interaction type; based on the interaction type corresponding to each video file and the first interaction type to which the interaction information belongs, the video files matching the first interaction type are found among the stored files and offered to the anchor for selection. Moreover, several video files may share the same interaction type, so more than one file may match the first interaction type.
Before acquiring a plurality of video files, the present embodiment may include step S701 and step S702, as shown in fig. 7, specifically:
in step S701, a plurality of video files are generated by a special effect making technique. Optionally, a producer can draw a virtual scene model through an art production tool to generate a video; alternatively, a video file may be generated by shooting a real scene. The image content contained in each video file may be different, for example, the playback effect of the video file may be as shown in fig. 8.
In step S702, the interaction type corresponding to each video file is determined. Optionally, the corresponding type may be determined from the scene content of the video file; for example, the files may be divided into a stage scene type, an indoor scene type, an outdoor scene type, and so on, or into other categories such as a seascape type or a desert type. Each interaction type can correspond to one kind of interaction information or to a preset condition, and the interaction type of given interaction information can be determined from the condition it satisfies. In this way the interaction information is put in correspondence with the video files, and the video files the live anchor terminal is authorized to use are determined according to the interaction information.
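The server-side matching of video files to the first interaction type might be sketched as a simple tagged-catalog lookup; the file names and type labels below are made up for illustration:

```python
def videos_for_interaction(catalog, interaction_type):
    """Return the video files whose tagged interaction type matches the
    first interaction type of the received interaction information.

    catalog: mapping of file name -> interaction type, as stored server-side
    """
    return [name for name, itype in catalog.items() if itype == interaction_type]

# hypothetical server-side catalog built in steps S701-S702
catalog = {
    "stage_lights.mp4": "first_type",
    "beach_sunset.mp4": "second_type",
    "concert_hall.mp4": "first_type",
}
matches = videos_for_interaction(catalog, "first_type")
```

The matched list is what would be sent to the live anchor terminal for display and selection in step S603.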
Next, in step S603, the target video file selected by the anchor from among the plurality of video files is determined, so that the anchor image and the target video file can be synthesized into the live video. For example, if eight video files correspond to the first interaction type, the identification information of those eight files may be displayed for the anchor to choose from. By detecting the selection operation at the anchor terminal, the video file corresponding to that operation is determined and used as the target video file, which is then synthesized with the anchor image extracted from the collected live broadcast picture to form the live video. For example, as shown in fig. 8, a frame of the target video file is synthesized with the anchor image to form a frame of the live video, as shown in fig. 9. In this way live videos of multiple styles can be formed, breaking the limitation that the physical site imposes on network live broadcast: the anchor does not need to move to a different live scene, and the efficiency of live video generation is improved.
After the live video is generated, it can be uploaded to the server corresponding to the live anchor terminal, and each live viewer can request the live room address distributed by the server so as to watch the live video, thereby meeting the audience's needs.
In an exemplary embodiment, the method can also be applied to other scenarios, for example generating other media forms such as an MV (music video) at the live anchor terminal. To this end, as shown in fig. 10, the method may further include the following steps S1001 to S1006, wherein:
in step S1001, training data is acquired; the training data can be a plurality of labeled sample images, in which the labels mark the portrait and the background, and can be used to train a machine learning model. In step S1002, the training data is used to train a background segmentation model; for example, the model may be a U-shaped neural network. In step S1003, an MV shooting request is received; for example, when the gifts from live viewers reach a value of 50,000, the corresponding MV shooting request can be triggered. The MV shooting request may trigger a background switching instruction, or may trigger the display of a background switching control. In step S1004, in response to the MV shooting request, background segmentation is performed on the live broadcast picture using the background segmentation model to obtain the anchor image. In step S1005, the segmented anchor image and the target video file are synthesized into a virtual MV. In step S1006, the generated virtual MV is pushed to each live viewer for viewing.
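Steps S1004 and S1005 can be tied together in one sketch; `fg_probs` stands in for the per-frame output of the background segmentation model trained in steps S1001 and S1002, which is assumed here rather than implemented:

```python
import numpy as np

def make_virtual_mv(live_frames, fg_probs, mv_bg_frames, threshold=0.5):
    """Segment the anchor out of each live frame (step S1004), then composite
    it over the target video's frames in order to form the MV (step S1005).

    live_frames:  list of H x W x 3 uint8 live broadcast pictures
    fg_probs:     list of H x W foreground-probability maps from the trained
                  segmentation model (assumed interface)
    mv_bg_frames: frames of the target video file; looped if shorter
    """
    mv = []
    for i, (frame, prob) in enumerate(zip(live_frames, fg_probs)):
        mask = (prob >= threshold).astype(frame.dtype)[:, :, None]  # 1 = anchor
        bg = mv_bg_frames[i % len(mv_bg_frames)]                    # loop background
        mv.append(frame * mask + bg * (1 - mask))                   # superpose
    return mv
```

The resulting frame list would then be encoded and pushed to the viewers in step S1006; encoding and push protocols are outside the scope of this sketch.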
It should be noted that, the implementation of each step shown in fig. 10 is already described in the above specific embodiment, and is not described herein again.
Further, in this exemplary embodiment, a live video generating apparatus is further provided, which is configured to execute the live video generating method of the present disclosure. The device can be applied to a server or terminal equipment.
Referring to fig. 11, the live video generation apparatus 1100 may include: a picture capture module 1110, a background segmentation module 1120, and a video composition module 1130, wherein:
a picture collecting module 1110, configured to collect a live broadcast picture of a live broadcast anchor;
a background segmentation module 1120, configured to remove a live broadcast background from the live broadcast frame in response to a live broadcast background switching instruction, so as to obtain an anchor image in the live broadcast frame;
a video composition module 1130, configured to combine the anchor image and the target video file into a live video, so as to push the live video to a live viewer.
In an exemplary embodiment of the present disclosure, the apparatus 1100 may further include an interaction obtaining module, and a control display module, wherein:
and acquiring interactive information between the live broadcast anchor terminal and the live broadcast audience terminal.
And if the interactive information meets the preset condition, displaying a live broadcast background switching control at the live broadcast anchor terminal so as to receive the live broadcast background switching instruction through the live broadcast background switching control.
In an exemplary embodiment of the present disclosure, the background segmentation module 1120 may specifically include a picture recognition unit, and a background removal unit, wherein:
and the picture identification unit is used for identifying the live broadcast picture through a neural network so as to determine a live broadcast background in the live broadcast picture.
And the background removing unit is used for deleting the live background from the live picture so as to acquire the live image in the live picture.
In an exemplary embodiment of the present disclosure, the video composition module 1130 may specifically include a file display unit, and a file selection unit, wherein:
and the file display unit is used for displaying a plurality of video files stored by the live anchor terminal.
And the file selection unit is used for acquiring a target video file selected by a main broadcast in the video files so as to synthesize the main broadcast image and the target video file into a live broadcast video.
In an exemplary embodiment of the present disclosure, the video composition module 1130 may specifically include a type obtaining unit, a file obtaining unit, and a target file determining unit, wherein:
and the type acquisition unit is used for acquiring the first interaction type to which the interaction information belongs.
And the file acquisition unit is used for acquiring a plurality of video files corresponding to the first interaction type from a server so as to be selected by a main broadcast.
And the target file determining unit is used for determining a target video file selected by a main broadcast in the plurality of video files so as to synthesize the main broadcast image and the target video file into a live video.
In an exemplary embodiment of the present disclosure, the live video generation apparatus 1100 may further include a video production module, and an interaction type determination module, wherein:
and the video production module is used for generating a plurality of video files through a special effect production technology.
And the interaction type determining module is used for determining the interaction type corresponding to each video file.
In an exemplary embodiment of the present disclosure, the video composition module 1130 may specifically include a video frame extraction unit, and an image superposition unit, wherein:
and the video frame extraction unit is used for extracting the video frames contained in the target video file.
And the image superposition unit is used for carrying out image superposition processing on the anchor image and the video frame so as to synthesize the live video.
As each functional module of the live video generation apparatus in the exemplary embodiment of the present disclosure corresponds to the step of the exemplary embodiment of the live video generation method, please refer to the embodiment of the live video generation method in the present disclosure for details that are not disclosed in the embodiment of the apparatus of the present disclosure.
Referring now to FIG. 12, shown is a block diagram of a computer system 1200 suitable for implementing the electronic device of an embodiment of the present invention.
It should be noted that the computer system 1200 of the electronic device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiment of the present invention.
As shown in fig. 12, the computer system 1200 includes a Central Processing Unit (CPU)1201, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)1202 or a program loaded from a storage section 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data necessary for system operation are also stored. The CPU 1201, ROM 1202, and RAM 1203 are connected to each other by a bus 1204. An input/output (I/O) interface 1205 is also connected to bus 1204.
The following components are connected to the I/O interface 1205: an input section 1206 including a keyboard, a mouse, and the like; an output section 1207 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 1208 including a hard disk and the like; and a communication section 1209 including a network interface card such as a LAN card or a modem. The communication section 1209 performs communication processing via a network such as the Internet. A drive 1210 is also connected to the I/O interface 1205 as necessary. A removable medium 1211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1210 as necessary, so that a computer program read out therefrom is installed into the storage section 1208 as necessary.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1209, and/or installed from the removable medium 1211. The computer program performs the above-described functions defined in the system of the present application when executed by the Central Processing Unit (CPU) 1201.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 2 to 10, and the like.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A live video generation method is characterized by comprising the following steps:
collecting a live broadcast picture of a live broadcast anchor terminal;
responding to a live broadcast background switching instruction, and removing a live broadcast background from the live broadcast picture to acquire a main broadcast image in the live broadcast picture;
and synthesizing the anchor image and the target video file into a live video so as to push the live video to a live audience.
2. The method of claim 1, wherein before responding to the live background switch command, further comprising:
acquiring interaction information between the live broadcast anchor terminal and the live broadcast audience terminal;
and if the interactive information meets the preset condition, displaying a live broadcast background switching control at the live broadcast anchor terminal so as to receive the live broadcast background switching instruction through the live broadcast background switching control.
3. The method of claim 2, wherein the interactive information comprises a virtual gift sent by the live viewer to the live host.
4. The method of claim 2, wherein removing live background from the live view in response to the background switching instruction to obtain the anchor image in the live view comprises:
and if the confirmation operation of the live broadcast background switching control is detected, removing the live broadcast background from the live broadcast picture to acquire the anchor image in the live broadcast picture.
5. The method of claim 1, wherein removing live background from the live picture to obtain an anchor image in the live picture comprises:
identifying the live broadcast picture through a neural network model to determine a live broadcast background in the live broadcast picture;
and deleting the live background from the live picture to acquire a live image in the live picture.
6. The method of claim 5, wherein prior to identifying the live view via the neural network model, further comprising:
acquiring a plurality of sample pictures;
and training the plurality of sample pictures by adopting a deep learning algorithm to obtain the neural network model.
7. The method of claim 1, wherein said compositing the anchor image with a target video file into a live video comprises:
displaying a plurality of video files stored by the live anchor;
and acquiring a target video file selected by a main broadcasting in the plurality of video files so as to synthesize the main broadcasting image and the target video file into a live video.
8. The method of claim 2, wherein said compositing the anchor image with a target video file into a live video comprises:
acquiring a first interaction type to which the interaction information belongs;
acquiring a plurality of video files corresponding to the first interaction type from a server for selection by a main broadcast;
determining a target video file selected by an anchor in the plurality of video files, and synthesizing the anchor image and the target video file into a live video.
9. The method of claim 8, wherein before obtaining the plurality of video files corresponding to the first interaction type from the server, the method further comprises:
generating a plurality of video files by a special effect manufacturing technology;
and determining the interaction type corresponding to each video file.
10. The method of claim 1, wherein said compositing the anchor image with the target video file into a live video comprises:
extracting video frames contained in the target video file;
and performing image superposition processing on the anchor image and the video frame to synthesize the live video.
11. A live video generation apparatus, comprising:
the picture acquisition module is used for acquiring a live broadcast picture of a live broadcast anchor terminal;
the background segmentation module is used for responding to a live broadcast background switching instruction and removing a live broadcast background from the live broadcast picture so as to acquire a main broadcast image in the live broadcast picture;
and the video synthesis module is used for synthesizing the anchor image and the target video file into a live video so as to push the live video to a live audience.
12. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out a live video generation method according to any one of claims 1 to 10.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a live video generation method as claimed in any one of claims 1 to 10.
CN202010250228.1A 2020-04-01 2020-04-01 Live video generation method and device, computer readable medium and electronic equipment Pending CN111432235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010250228.1A CN111432235A (en) 2020-04-01 2020-04-01 Live video generation method and device, computer readable medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN111432235A true CN111432235A (en) 2020-07-17

Family

ID=71550440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010250228.1A Pending CN111432235A (en) 2020-04-01 2020-04-01 Live video generation method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111432235A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112118397A (en) * 2020-09-23 2020-12-22 腾讯科技(深圳)有限公司 Video synthesis method, related device, equipment and storage medium
CN112188228A (en) * 2020-09-30 2021-01-05 网易(杭州)网络有限公司 Live broadcast method and device, computer readable storage medium and electronic equipment
CN112351291A (en) * 2020-09-30 2021-02-09 深圳点猫科技有限公司 Teaching interaction method, device and equipment based on AI portrait segmentation
CN112367534A (en) * 2020-11-11 2021-02-12 成都威爱新经济技术研究院有限公司 Virtual-real mixed digital live broadcast platform and implementation method
CN112423006A (en) * 2020-11-09 2021-02-26 珠海格力电器股份有限公司 Live broadcast scene switching method, device, equipment and medium
CN112712575A (en) * 2020-12-28 2021-04-27 广州虎牙科技有限公司 Sticker template image generation method and device, anchor terminal equipment and storage medium
CN112770173A (en) * 2021-01-28 2021-05-07 腾讯科技(深圳)有限公司 Live broadcast picture processing method and device, computer equipment and storage medium
CN113076790A (en) * 2020-12-06 2021-07-06 泰州市朗嘉馨网络科技有限公司 Service information big data supervision platform and method
CN113225572A (en) * 2021-03-31 2021-08-06 北京达佳互联信息技术有限公司 Method, device and system for displaying page elements in live broadcast room
CN113315987A (en) * 2021-05-27 2021-08-27 北京达佳互联信息技术有限公司 Video live broadcast method and video live broadcast device
CN113965665A (en) * 2021-11-22 2022-01-21 上海掌门科技有限公司 Method and equipment for determining virtual live broadcast image
CN114765692A (en) * 2021-01-13 2022-07-19 北京字节跳动网络技术有限公司 Live broadcast data processing method, device, equipment and medium
CN115022668A (en) * 2022-07-21 2022-09-06 中国平安人寿保险股份有限公司 Video generation method and device based on live broadcast, equipment and medium
CN115134616A (en) * 2021-03-29 2022-09-30 阿里巴巴新加坡控股有限公司 Live broadcast background control method, device, electronic equipment, medium and program product
WO2023279705A1 (en) * 2021-07-07 2023-01-12 上海商汤智能科技有限公司 Live streaming method, apparatus, and system, computer device, storage medium, and program
WO2023160573A1 (en) * 2022-02-28 2023-08-31 北京字节跳动网络技术有限公司 Live broadcast picture display method and apparatus, electronic device and storage medium
CN117596418A (en) * 2023-10-11 2024-02-23 书行科技(北京)有限公司 Live broadcasting room UI display control method and device, electronic equipment and storage medium
CN113837978B (en) * 2021-09-28 2024-04-05 北京奇艺世纪科技有限公司 Image synthesis method, device, terminal equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110249090A1 (en) * 2010-04-12 2011-10-13 Moore John S System and Method for Generating Three Dimensional Presentations
CN102625129A (en) * 2012-03-31 2012-08-01 Fuzhou Yidiantong Advertising Decoration Co Ltd Method for realizing remote-reality three-dimensional virtual simulated scene interaction
CN105608715A (en) * 2015-12-17 2016-05-25 Guangzhou Huaduo Network Technology Co Ltd Online group photo method and system
CN106204426A (en) * 2016-06-30 2016-12-07 Guangzhou Huaduo Network Technology Co Ltd Video image processing method and device
CN110493630A (en) * 2019-09-11 2019-11-22 Guangzhou Huaduo Network Technology Co Ltd Virtual gift special effect processing method and apparatus, and live broadcast system
CN110719533A (en) * 2019-10-18 2020-01-21 Guangzhou Huya Technology Co Ltd Virtual image live broadcast method and device, server and storage medium

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112118397B (en) * 2020-09-23 2021-06-22 Tencent Technology (Shenzhen) Co Ltd Video synthesis method, related device, equipment and storage medium
CN112118397A (en) * 2020-09-23 2020-12-22 Tencent Technology (Shenzhen) Co Ltd Video synthesis method, related device, equipment and storage medium
CN112188228A (en) * 2020-09-30 2021-01-05 Netease Hangzhou Network Co Ltd Live broadcast method and device, computer readable storage medium and electronic equipment
CN112351291A (en) * 2020-09-30 2021-02-09 Shenzhen Dianmao Technology Co Ltd Teaching interaction method, device and equipment based on AI portrait segmentation
CN112423006A (en) * 2020-11-09 2021-02-26 Gree Electric Appliances Inc of Zhuhai Live broadcast scene switching method, device, equipment and medium
CN112367534A (en) * 2020-11-11 2021-02-12 Chengdu Weiai New Economic Technology Research Institute Co Ltd Virtual-real mixed digital live broadcast platform and implementation method
CN112367534B (en) * 2020-11-11 2023-04-11 Chengdu Weiai New Economic Technology Research Institute Co Ltd Virtual-real mixed digital live broadcast platform and implementation method
CN113076790B (en) * 2020-12-06 2021-09-28 Shanghai Zhenke Information Technology Service Co Ltd Service information big data supervision platform and method
CN113076790A (en) * 2020-12-06 2021-07-06 Taizhou Langjiaxin Network Technology Co Ltd Service information big data supervision platform and method
CN112712575A (en) * 2020-12-28 2021-04-27 Guangzhou Huya Technology Co Ltd Sticker template image generation method and device, anchor terminal equipment and storage medium
CN114765692B (en) * 2021-01-13 2024-01-09 Beijing ByteDance Network Technology Co Ltd Live broadcast data processing method, device, equipment and medium
CN114765692A (en) * 2021-01-13 2022-07-19 Beijing ByteDance Network Technology Co Ltd Live broadcast data processing method, device, equipment and medium
CN112770173A (en) * 2021-01-28 2021-05-07 Tencent Technology (Shenzhen) Co Ltd Live broadcast picture processing method and device, computer equipment and storage medium
CN115134616B (en) * 2021-03-29 2024-01-02 Alibaba Singapore Holdings Pte Ltd Live broadcast background control method, device, electronic equipment, medium and program product
CN115134616A (en) * 2021-03-29 2022-09-30 Alibaba Singapore Holdings Pte Ltd Live broadcast background control method, device, electronic equipment, medium and program product
CN113225572B (en) * 2021-03-31 2023-08-08 Beijing Dajia Internet Information Technology Co Ltd Method, device and system for displaying page elements in live broadcast room
CN113225572A (en) * 2021-03-31 2021-08-06 Beijing Dajia Internet Information Technology Co Ltd Method, device and system for displaying page elements in live broadcast room
CN113315987A (en) * 2021-05-27 2021-08-27 Beijing Dajia Internet Information Technology Co Ltd Video live broadcast method and video live broadcast device
WO2023279705A1 (en) * 2021-07-07 2023-01-12 Shanghai SenseTime Intelligent Technology Co Ltd Live streaming method, apparatus, and system, computer device, storage medium, and program
CN113837978B (en) * 2021-09-28 2024-04-05 Beijing QIYI Century Science and Technology Co Ltd Image synthesis method, device, terminal equipment and readable storage medium
CN113965665A (en) * 2021-11-22 2022-01-21 Shanghai Zhangmen Science and Technology Co Ltd Method and equipment for determining virtual live broadcast image
CN113965665B (en) * 2021-11-22 2024-09-13 Shanghai Zhangmen Science and Technology Co Ltd Method and equipment for determining virtual live broadcast image
WO2023160573A1 (en) * 2022-02-28 2023-08-31 Beijing ByteDance Network Technology Co Ltd Live broadcast picture display method and apparatus, electronic device and storage medium
CN115022668A (en) * 2022-07-21 2022-09-06 China Ping An Life Insurance Co Ltd Live broadcast-based video generation method and device, equipment and medium
CN115022668B (en) * 2022-07-21 2023-08-11 China Ping An Life Insurance Co Ltd Live broadcast-based video generation method and device, equipment and medium
CN117596418A (en) * 2023-10-11 2024-02-23 Shuhang Technology (Beijing) Co Ltd Live broadcast room UI display control method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111432235A (en) Live video generation method and device, computer readable medium and electronic equipment
CN109413483B (en) Live content preview method, device, equipment and medium
JP5795580B2 (en) Estimating and displaying social interests in time-based media
US9357242B2 (en) Method and system for automatic tagging in television using crowd sourcing technique
CN111447489A (en) Video processing method and device, readable medium and electronic equipment
CN113365133B (en) Video sharing method, device, equipment and medium
CN106658200A (en) Live video sharing and obtaining methods and devices, and terminal equipment thereof
CN109905782A (en) Control method and device
CN105095480A (en) Providing link to portion of media object in real time in social networking update
KR20060025518A (en) Methods and apparatus for interactive point-of-view authoring of digital video content
CN108476344B (en) Content selection for networked media devices
CN105872717A (en) Video processing method and system, video player and cloud server
CN109474843A (en) Method for controlling a terminal by speech, client and server
KR20150083355A (en) Augmented media service providing method, apparatus thereof, and system thereof
CN108810580B (en) Media content pushing method and device
CN114845149B (en) Video clip method, video recommendation method, device, equipment and medium
CN112543344B (en) Live broadcast control method and device, computer readable medium and electronic equipment
Tao et al. Real-time personalized content catering via viewer sentiment feedback: a QoE perspective
CN113033677A (en) Video classification method and device, electronic equipment and storage medium
CN111274449A (en) Video playing method and device, electronic equipment and storage medium
US10153003B2 (en) Method, system, and apparatus for generating video content
CN114780792A (en) Video abstract generation method, device, equipment and medium
CN113315987A (en) Video live broadcast method and video live broadcast device
CN113282770A (en) Multimedia recommendation system and method
CN109640119B (en) Method and device for pushing information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-07-17