CN116112702A - Live broadcast method, live broadcast device, electronic equipment and storage medium

Info

Publication number
CN116112702A
Authority
CN
China
Prior art keywords
live
video
editing
target
live broadcast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310085373.2A
Other languages
Chinese (zh)
Inventor
沈海洋
林杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority application: CN202310085373.2A
Publication: CN116112702A
Legal status: Pending

Classifications

    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD] (parent class of the entries below)
    • H04N21/2187 Live feed (source of audio or video content)
    • H04N21/233 Processing of audio elementary streams (server)
    • H04N21/23412 Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • H04N21/4312 Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/439 Processing of audio elementary streams (client)
    • H04N21/44016 Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

The disclosure relates to a live broadcast method, a live broadcast device, an electronic device, and a storage medium, and belongs to the technical field of live broadcasting. The method comprises the following steps: by playing the non-live video of a live recorded video, relevant personnel can promptly find the target video clips in the non-live video that meet a live content filtering condition; further, by displaying an editing track for the non-live video, the relevant personnel can edit those target video clips in time, so that delayed live broadcast is performed based on the edited non-live video. In this way, a lightweight live broadcast method is provided that can effectively improve human-computer interaction efficiency while ensuring the compliance of live content, and greatly reduces live broadcast cost.

Description

Live broadcast method, live broadcast device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of live broadcasting technology, and in particular relates to a live broadcasting method, a live broadcasting device, electronic equipment and a storage medium.
Background
With the rapid development of Internet technology, people can watch, participate in, or initiate various types of live broadcasts through live broadcast platforms, such as conference live broadcasts, evening gala live broadcasts, and sports event live broadcasts.
At present, in order to prevent non-compliant content from appearing during a live broadcast, risk control processing is generally performed on the live content. For example, the live signal from the live venue is delayed by dedicated broadcast equipment, and when non-compliant content is found, the corresponding portion of the live signal is deleted in advance through the editing function of that equipment, so that the non-compliant content is skipped.
However, such broadcast equipment is often expensive, requires professional operators, and is cumbersome to operate, so the cost of live broadcasting is high.
Disclosure of Invention
The present disclosure provides a live broadcast method, apparatus, electronic device, and storage medium, which can effectively improve human-computer interaction efficiency and greatly reduce live broadcast cost while ensuring live content compliance. The technical scheme of the present disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a live broadcast method, the method comprising:
playing a non-live video of a live recorded video, wherein the non-live video comprises the video segments of the live recorded video that have not yet been broadcast with delay;
displaying an editing track for the non-live video;
generating an edited non-live video in response to an editing operation on a target video clip on the editing track, wherein the target video clip is a video clip that meets a live content filtering condition; and
performing delayed live broadcast based on the edited non-live video.
By playing the non-live video of the live recorded video, relevant personnel can promptly find the target video clips in the non-live video that meet the live content filtering condition; further, by displaying the editing track for the non-live video, the relevant personnel can edit those target video clips in time, so that delayed live broadcast is performed based on the edited non-live video. In this way, a lightweight live broadcast method is provided that can effectively improve human-computer interaction efficiency while ensuring the compliance of live content, and greatly reduces live broadcast cost.
In some embodiments, generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
deleting the target video clip in response to a deletion operation on the target video clip on the editing track, to obtain the edited non-live video.
In some embodiments, generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
displaying a plurality of video clip materials in response to a replacement operation on the target video clip on the editing track; and
replacing the target video clip with a target video clip material, in response to a selection operation on the target video clip material among the plurality of video clip materials, to obtain the edited non-live video.
In some embodiments, displaying the plurality of video clip materials in response to the replacement operation on the target video clip on the editing track comprises:
performing scene recognition on the non-live video in response to the replacement operation, to obtain scene information of the non-live video;
determining, based on the scene information, the plurality of video clip materials that match the scene information from a video clip material library; and
displaying the plurality of video clip materials based on the degree of matching between the scene information and each video clip material.
In some embodiments, generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
displaying a volume editing track in response to a volume editing operation on the target video clip on the editing track; and
editing the volume of the target video clip in response to an editing operation on the volume editing track, to obtain the edited non-live video.
In some embodiments, the method further comprises:
performing content identification on the non-live video, and, in a case where the target video clip exists in the non-live video, displaying a prompt message based on the position of the target video clip on the editing track, wherein the prompt message indicates that the content of the target video clip meets the live content filtering condition.
In some embodiments, the method further comprises:
displaying a target option based on the position of the target video clip on the editing track, wherein the target option indicates that target editing is to be performed on the target video clip to obtain the edited non-live video;
and generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
performing the target editing on the target video clip in response to a confirmation operation on the target option, to obtain the edited non-live video.
In some embodiments, generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
editing at least one video frame in the target video clip in response to an editing operation on the at least one video frame, to generate the edited non-live video.
In some embodiments, playing the non-live video of the live recorded video comprises:
playing the live recorded video, the live video that is broadcast with delay based on the live recorded video, and the non-live video, wherein the video segments of the non-live video follow the currently playing segment of the live video.
By synchronously playing these three videos, relevant personnel can conveniently and clearly see the live recorded video, the live video already being broadcast, and the non-live video not yet broadcast, thereby improving human-computer interaction efficiency and user experience.
According to a second aspect of embodiments of the present disclosure, there is provided a live broadcast apparatus, the apparatus comprising:
a playing unit configured to play a non-live video of a live recorded video, wherein the non-live video comprises the video segments of the live recorded video that have not yet been broadcast with delay;
a display unit configured to display an editing track for the non-live video;
an editing unit configured to generate an edited non-live video in response to an editing operation on a target video clip on the editing track, the target video clip being a video clip that meets a live content filtering condition; and
a live broadcast unit configured to perform delayed live broadcast based on the edited non-live video.
In some embodiments, the editing unit is configured to:
delete the target video clip in response to a deletion operation on the target video clip on the editing track, to obtain the edited non-live video.
In some embodiments, the editing unit is configured to:
display a plurality of video clip materials in response to a replacement operation on the target video clip on the editing track; and
replace the target video clip with a target video clip material, in response to a selection operation on the target video clip material among the plurality of video clip materials, to obtain the edited non-live video.
In some embodiments, the editing unit is configured to:
perform scene recognition on the non-live video in response to the replacement operation, to obtain scene information of the non-live video;
determine, based on the scene information, the plurality of video clip materials that match the scene information from a video clip material library; and
display the plurality of video clip materials based on the degree of matching between the scene information and each video clip material.
In some embodiments, the editing unit is configured to:
display a volume editing track in response to a volume editing operation on the target video clip on the editing track; and
edit the volume of the target video clip in response to an editing operation on the volume editing track, to obtain the edited non-live video.
In some embodiments, the apparatus further comprises an identification unit configured to:
perform content identification on the non-live video, and, in a case where the target video clip exists in the non-live video, display a prompt message based on the position of the target video clip on the editing track, wherein the prompt message indicates that the content of the target video clip meets the live content filtering condition.
In some embodiments, the display unit is further configured to:
display a target option based on the position of the target video clip on the editing track, wherein the target option indicates that target editing is to be performed on the target video clip to obtain the edited non-live video;
and the editing unit is configured to:
perform the target editing on the target video clip in response to a confirmation operation on the target option, to obtain the edited non-live video.
In some embodiments, the editing unit is configured to:
edit at least one video frame in the target video clip in response to an editing operation on the at least one video frame, to generate the edited non-live video.
In some embodiments, the playing unit is configured to play the live recorded video, the live video that is broadcast with delay based on the live recorded video, and the non-live video, wherein the video segments of the non-live video follow the currently playing segment of the live video.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising:
one or more processors; and
a memory for storing program code executable by the one or more processors;
wherein the one or more processors are configured to execute the program code to implement the live broadcast method described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein program code in the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to perform the live broadcast method described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the live broadcast method described above.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a schematic diagram of an implementation environment of a live broadcast method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of a live broadcast method provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of another live broadcast method provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of an application interface provided by an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of an editing operation provided by an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of another editing operation provided by an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of another editing operation provided by an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of a live broadcast method provided by an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of displaying a prompt message provided by an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of displaying a target option provided by an embodiment of the present disclosure;
Fig. 11 is a block diagram of a live broadcast device provided by an embodiment of the present disclosure;
Fig. 12 is a block diagram of a terminal provided by an embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present disclosure are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, live recorded video and the like referred to in the embodiments of the present disclosure are all acquired with sufficient authorization.
Fig. 1 is a schematic diagram of an implementation environment of a live broadcast method according to an embodiment of the disclosure. As shown in Fig. 1, the implementation environment includes a terminal 101 and a server 102, which are directly or indirectly connected through wired or wireless communication; this is not limited.
The terminal 101 is at least one of a smart phone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, and a wireless terminal. The terminal 101 may be generally referred to as one of a plurality of terminals; embodiments of the present disclosure are illustrated only with the terminal 101, and those skilled in the art will recognize that the number of terminals may be greater or smaller. Illustratively, the terminal 101 can be provided with an application program that provides a delayed live broadcast function, such as a live broadcast application (also referred to as a live broadcast platform) or a video application, which is not limited.
The server 102 may be an independent physical server, a server cluster or distributed file system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms. The number of servers 102 may be greater or smaller, and embodiments of the present disclosure are not limited in this regard. Illustratively, the server 102 is used to provide background services for applications running on the terminal 101. Taking a live broadcast application running on the terminal 101 as an example, the server 102 provides a background service for that application: a target object (such as a person responsible for the live broadcast) can trigger the terminal, through the application, to receive the live signal (i.e., the audio/video signal) of the live venue, record the live signal to obtain a live recorded video, and broadcast the live recorded video with delay through the server 102, so that viewers can watch the live video. Of course, the server 102 may also include other functional servers to provide more comprehensive and diverse services, which is not limited.
Based on the above implementation environment, the live broadcast method provided by the embodiment of the present disclosure is described below.
Fig. 2 is a flowchart of a live broadcast method provided in an embodiment of the present disclosure. As shown in Fig. 2, the live broadcast method is performed by a terminal and includes the following steps 201 to 204.
In step 201, the terminal plays a non-live video of a live recorded video, where the non-live video includes the video segments of the live recorded video that have not yet been broadcast with delay.
In the embodiment of the present disclosure, a live recorded video is a video obtained by recording the live signal (i.e., the audio/video signal) of a live venue, for example a concert venue, a conference venue, an evening gala venue, or a sports event venue, which is not limited. The terminal is provided with a target application program that provides a delayed live broadcast function; relevant personnel can broadcast the live recorded video with delay through the target application, and accordingly the terminal plays the non-live video on an application interface of the target application for the relevant personnel to check. It should be understood that the delay duration used by the terminal when broadcasting the live recorded video can be set as required and is not limited. For example, with a delay of 5 minutes, when the real time at the live venue is 18:05, the currently playing segment of the broadcast video corresponds to the real time 18:00, and the video segments corresponding to real times between 18:00 and 18:05 have not yet been broadcast, i.e., they are the segments of the live recorded video that have not been broadcast with delay.
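To make the delay relationship concrete, delayed playout can be modeled as a simple time-shift buffer. The following minimal Python sketch is illustrative only; the class, its names, and the fixed 5-minute delay are assumptions rather than part of this disclosure:

```python
import collections
import time

DELAY_SECONDS = 5 * 60  # assumed 5-minute broadcast delay


class DelayBuffer:
    """Holds recorded segments until they are old enough to broadcast."""

    def __init__(self, delay: float = DELAY_SECONDS):
        self.delay = delay
        self.segments = collections.deque()  # (capture_time, segment) pairs

    def record(self, segment) -> None:
        # Called as live-signal segments arrive; this is the recording side.
        self.segments.append((time.time(), segment))

    def pop_ready(self) -> list:
        # Segments older than the delay are released for broadcast; whatever
        # remains in the deque is exactly the "non-live video".
        ready = []
        now = time.time()
        while self.segments and now - self.segments[0][0] >= self.delay:
            ready.append(self.segments.popleft()[1])
        return ready
```

Everything still inside the buffer corresponds to the 18:00 to 18:05 window in the example above: recorded, but not yet broadcast.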
In step 202, the terminal displays an edit track for the non-live video.
In an embodiment of the present disclosure, the editing track is used to edit video clips in the non-live video. The terminal displays the editing track for the non-live video on the application interface of the target application; for example, the terminal displays the editing track below the playing area of the non-live video, which is not limited. The terminal can display the editing track for the non-live video synchronously while playing the live video and the non-live video, or can display the editing track in response to a triggering operation on a video editing control of the application interface while playing the live video and the non-live video; this is not limited.
In step 203, the terminal generates the edited non-live video in response to an editing operation on a target video clip on the editing track, where the target video clip is a video clip that meets the live content filtering condition.
In the embodiment of the present disclosure, the editing operations include a deletion operation, a replacement operation, and a volume editing operation, which are described in detail in the embodiment shown in Fig. 3 and are not repeated here. A video clip meets the live content filtering condition when its content does not meet the live broadcast requirements or is non-compliant, where the content includes both sound content and image content. For example, if the sound content of a video clip includes a sensitive word, or its image content includes a sensitive image, the video clip meets the live content filtering condition. It should be understood that the specific form of the live content filtering condition can be set according to actual requirements, which is not limited.
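As a purely illustrative sketch (the word list, function name, and inputs are hypothetical), one concrete form of such a filtering condition is a predicate over a clip's audio transcript and the count of image frames flagged by a recognition model:

```python
SENSITIVE_WORDS = {"banned_word_1", "banned_word_2"}  # hypothetical list


def meets_filter_condition(transcript: str, flagged_frame_count: int) -> bool:
    """A clip meets the live content filtering condition if its sound
    content contains a sensitive word or its image content was flagged."""
    has_sensitive_word = any(word in transcript for word in SENSITIVE_WORDS)
    return has_sensitive_word or flagged_frame_count > 0
```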
In step 204, the terminal performs delayed live broadcast based on the edited non-live video.
In the embodiment of the disclosure, after the target video clip meeting the live content filtering condition has been edited, delayed live broadcast is performed based on the edited non-live video, so that the video finally broadcast meets the live broadcast requirements and the safety of the live content is ensured.
In summary, in the live broadcast method provided by the embodiment of the disclosure, by playing the non-live video of the live recorded video, relevant personnel can promptly find the target video clips in the non-live video that meet the live content filtering condition; further, by displaying the editing track for the non-live video, the relevant personnel can edit those target video clips in time, so that delayed live broadcast is performed based on the edited non-live video. In this way, a lightweight live broadcast method is provided that can effectively improve human-computer interaction efficiency while ensuring the compliance of live content, and greatly reduces live broadcast cost.
Fig. 2 above presents a simplified flow of the live broadcast method provided by an embodiment of the present disclosure. The live broadcast method is described in detail below based on the embodiment shown in Fig. 3.
Fig. 3 is a flowchart of another live broadcast method provided by an embodiment of the present disclosure. As shown in Fig. 3, the live broadcast method is performed by a terminal and includes the following steps 301 to 306.
In step 301, the terminal records the received live signal from the live venue to obtain a live recorded video.
In the embodiment of the disclosure, the terminal is communicatively connected, in a wired or wireless manner, to the live broadcast device at the live venue, and the live signal from the venue is sent by that device; for example, the live broadcast device is a director's console, which is not limited. Illustratively, the terminal is provided with and runs a target application program that provides a delayed live broadcast function, and a target object (such as a person responsible for the live broadcast) can trigger the terminal, through the target application, to record the received live signal from the venue to obtain the live recorded video.
In some embodiments, the terminal is configured with a capture card (a capture device that converts external analog signals such as optical, video, and audio signals into digital form for processing by a computer; capture cards include image capture cards, video capture cards, audio capture cards, and so on). The live broadcast device processes the real-time signal (i.e., the audio/video signal) of the live venue to obtain the program video signal (Program, PGM, also referred to as the program bus) of the venue, and sends the PGM signal to the terminal through a Serial Digital Interface (SDI). The terminal records the PGM signal through the capture card to obtain the live recorded video. It should be understood that the terminal's recording of the live signal is simply the process of storing the live signal from the venue.
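The recording loop itself can be sketched as follows, reusing the DelayBuffer from the earlier sketch; `capture_card` stands in for a vendor capture-card SDK, whose real API will differ:

```python
import threading


def record_pgm(capture_card, delay_buffer,
               stop_event: threading.Event) -> None:
    """Continuously persist the PGM signal arriving over SDI: each frame read
    from the (hypothetical) capture card is stored in the time-shift buffer."""
    while not stop_event.is_set():
        frame = capture_card.read_frame()  # one audio/video frame, blocking
        delay_buffer.record(frame)
```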
In step 302, the terminal broadcasts the live recorded video with delay to obtain the live video.
In the embodiment of the disclosure, the terminal is communicatively connected to a server in a wired or wireless manner, and the server provides background services for the target application running on the terminal. Illustratively, the terminal, in response to a delayed live broadcast operation on the live recorded video, generates the live video based on the delay duration corresponding to that operation, and sends the live video to the server (i.e., live push streaming) to realize the delayed live broadcast. The delay duration can be set as required and is not described again here.
In step 303, the terminal plays the live recorded video, the live video, and the non-live video.
In the disclosed embodiments, the video segments of the non-live video follow the currently playing segment of the live video. The terminal plays the live recorded video, the live video, and the non-live video on the application interface of the target application. By synchronously playing these videos on the application interface, relevant personnel can conveniently and clearly see the real-time video of the live venue, the live video already being broadcast, and the non-live video not yet broadcast, thereby improving human-computer interaction efficiency and user experience.
In step 304, the terminal displays an edit track for the non-live video.
In the embodiment of the present disclosure, the step 304 is the same as the step 202 in the embodiment shown in fig. 2, and thus will not be described again.
In some embodiments, the terminal also displays a recording track of the live recorded video and a broadcast track of the live video. By displaying the recording track and the broadcast track, relevant personnel can clearly see the difference between what has been recorded and what has been broadcast, which improves human-computer interaction efficiency. It should be noted that the broadcast track of the live video and the editing track of the non-live video may be displayed on the same track or displayed separately; this is not limited. For example, referring to Fig. 4, a schematic diagram of an application interface provided in an embodiment of the present disclosure, the two are displayed on the same track: the playing area 401 of the application interface 400 plays the live recorded video (the real-time signal), the live video (the broadcast signal), and the non-live video (the editing signal), and the editing area 402 displays the recording track of the live recorded video and a target track, where the target track combines the broadcast track of the live video and the editing track of the non-live video (a broadcast+edit track). On the target track, the video segments before the broadcast time axis belong to the live video, and the video segments after it belong to the non-live video.
It should be understood that, during delayed live broadcasting, steps 301 to 304 are performed synchronously: while receiving the live signal, the terminal records the live recorded video, performs the delayed broadcast, and synchronously plays the live recorded video, the live video, and the non-live video on the application interface, so that relevant personnel can clearly see the overall state of the delayed broadcast. In this way, when a target video clip meeting the live content filtering condition appears in the non-live video, the relevant personnel can edit it in time so that the edited non-live video meets the broadcast requirements. The editing process for the target video clip is described in detail in step 305.
In step 305, the terminal generates the edited non-live video in response to an editing operation on a target video clip on the editing track, where the target video clip is a video clip that meets the live content filtering condition.
In the embodiment of the disclosure, the target video clip includes at least one video frame, and the terminal edits the at least one video frame in response to an editing operation on that frame or frames, generating the edited non-live video. In this way, real-time editing in units of frames can be realized, improving the accuracy of video editing. In addition, as noted in step 203 above, the editing operations include a deletion operation, a replacement operation, a volume editing operation, and the like, which are described below in turn.
First, the deletion operation.
Illustratively, the terminal deletes the target video clip in response to a deletion operation on the target video clip on the editing track, obtaining the edited non-live video. In some embodiments, the terminal displays a plurality of editing options in response to a positioning operation on the target video clip on the editing track, and deletes the target video clip in response to a triggering operation on the deletion option among them, obtaining the edited non-live video. For example, referring to Fig. 5, a schematic diagram of an editing operation provided by an embodiment of the present disclosure: the terminal displays the editing track of the non-live video in the editing area 402 of the application interface 400, displays a plurality of editing options 502 in response to a positioning operation on the target video clip 501, and deletes the target video clip 501 in response to a triggering operation on the "delete" option, obtaining the edited non-live video.
In this way, for video clips meeting the live content filtering condition, the terminal can directly delete the corresponding clip in response to the deletion operation, which effectively improves video editing efficiency.
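A minimal sketch of the deletion itself, assuming the non-live video is held as a list of buffered segments and the positioning operation has resolved to an index range (all names are illustrative):

```python
def delete_clip(segments: list, start: int, end: int) -> list:
    """Remove the target video clip (segments[start:end]) from the buffered
    non-live video; the remaining segments close the gap, so the deleted
    content is simply skipped at broadcast time."""
    return segments[:start] + segments[end:]
```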
Second, the replacement operation.
Illustratively, the terminal displays a plurality of video clip materials in response to a replacement operation on the target video clip on the editing track, and replaces the target video clip with a target video clip material in response to a selection operation on that material among the plurality of video clip materials, obtaining the edited non-live video.
In some embodiments, following a procedure similar to the deletion operation above, the terminal displays a plurality of editing options in response to a positioning operation on the target video clip, and displays a plurality of video clip materials in response to a triggering operation on the replacement option among them, so as to replace the target video clip with the target video clip material. For example, referring to Fig. 6, a schematic diagram of another editing operation provided by an embodiment of the present disclosure: the terminal displays the editing track of the non-live video in the editing area 402 of the application interface 400, displays a plurality of editing options 502 in response to a positioning operation on the target video clip 501, displays a plurality of video clip materials 600 in response to a triggering operation on the "replace" option, and replaces the target video clip with video clip material A in response to a selection operation on that material, thereby obtaining the edited non-live video.
In some embodiments, the degree of matching between the plurality of video clip materials and the scene of the non-live video meets a target condition. Illustratively, the process by which the terminal displays the plurality of video clip materials includes the following steps:
Step A1: the terminal performs scene recognition on the non-live video in response to the replacement operation, obtaining scene information of the non-live video. Illustratively, the terminal, in response to the replacement operation, invokes a first artificial intelligence model to perform scene recognition on the non-live video and obtain its scene information; for example, the scene information indicates that the non-live video is a conference scene. Of course, the terminal can also perform scene recognition on the live recorded video in advance, so that the scene information can be obtained directly when the replacement operation occurs. Alternatively, the terminal can obtain the scene information from live broadcast tag information: for example, before the broadcast, relevant personnel enter tag information for the broadcast (such as "sports event live broadcast") through the target application, and the terminal then derives the scene information from that tag information.
Step A2: the terminal determines, based on the scene information, the plurality of video clip materials that match the scene information from a video clip material library. Illustratively, the terminal uses the scene information as an index and queries the video clip material library for material tag information matching the scene information, obtaining the plurality of matching video clip materials. For example, if the scene information indicates a sports event scene, the plurality of video clip materials are all materials related to sports events; if the scene information indicates an evening gala scene, they are all performance-related materials; and so on, to which the embodiments of the disclosure are not limited.
Step A3: the terminal displays the plurality of video clip materials based on the degree of matching between the scene information and each video clip material. Illustratively, the terminal determines the degree of matching between the scene information and each video clip material based on the scene information and the material tag information of each material, and displays the materials in descending order of matching degree.
Through steps A1 to A3, when the target video clip is edited through the replacement operation, the terminal displays video clip materials that match the scene information of the non-live video. The replacement clip therefore fits the non-live video well, its content does not appear abrupt, and relevant personnel are highly likely to be able to select a material that fits the non-live video.
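The tag-matching described in steps A1 to A3 can be sketched as follows; the keyword-overlap score merely stands in for whatever matching degree the model actually computes, and all names are illustrative:

```python
def rank_materials(scene_info: str, material_library: list) -> list:
    """Return the materials whose tags match the scene, best match first.
    Each library item is a (material_id, tags) pair; the overlap between
    scene keywords and material tags is used as the matching degree."""
    scene_keywords = set(scene_info.lower().split())

    def match_degree(item) -> int:
        _, tags = item
        return len(scene_keywords & {tag.lower() for tag in tags})

    matched = [item for item in material_library if match_degree(item) > 0]
    return sorted(matched, key=match_degree, reverse=True)


# Example: a sports-event scene ranks sports-tagged materials first.
library = [("material_a", ["sports", "stadium"]),
           ("material_b", ["concert", "stage"]),
           ("material_c", ["sports", "crowd", "event"])]
print(rank_materials("sports event scene", library))
```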
Third, the volume editing operation.
Illustratively, in a case where the sound content of the target video clip meets the live content filtering condition, the terminal displays a volume editing track in response to a volume editing operation on the target video clip on the editing track, and edits the volume of the target video clip in response to an editing operation on the volume editing track, obtaining the edited non-live video.
In some embodiments, following a procedure similar to the deletion operation above, the terminal displays a plurality of editing options in response to a positioning operation on the target video clip, and displays the volume editing track in response to a triggering operation on the volume editing option among them, so as to edit the volume of the target video clip. For example, referring to Fig. 7, a schematic diagram of another editing operation provided by an embodiment of the present disclosure: the terminal displays the editing track of the non-live video in the editing area 402 of the application interface 400, displays a plurality of editing options 502 in response to a positioning operation on the target video clip 501, displays a volume editing track 700 (also referred to as a mixing console) in response to a triggering operation on the "volume editing" option, and edits the volume of the target video clip 501 (e.g., mutes it) in response to an editing operation on the volume editing track 700, obtaining the edited non-live video.
It should be noted that, in the above process, the volume editing track is displayed by the terminal in response to the volume editing operation. In other embodiments, the volume editing track is always displayed on the application interface; that is, the terminal displays the volume of the target video clip on the volume editing track in response to a positioning operation on the target video clip on the editing track, and edits that volume in response to an editing operation on the volume editing track, obtaining the edited non-live video. The embodiments of the present disclosure are not limited in this regard.
In this way, for a video clip whose sound content meets the live content filtering condition, the terminal can edit the clip's volume based on the volume editing operation to ensure the safety of the live content.
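The volume edit itself reduces to scaling the clip's audio samples. A minimal sketch, assuming each buffered segment is an (audio_samples, video_frames) pair (this layout is an assumption of the sketch, not of the disclosure):

```python
def edit_volume(segments: list, start: int, end: int, gain: float) -> None:
    """Scale the audio of the target clip (segments[start:end]) in place;
    gain=0.0 mutes the clip, gain=0.5 halves its volume."""
    for i in range(start, end):
        audio_samples, video_frames = segments[i]
        segments[i] = ([sample * gain for sample in audio_samples],
                       video_frames)
```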
In addition, the editing operations described above are only a few schematic examples provided by the embodiments of the present disclosure. In some embodiments, the terminal may also provide other forms of editing operations; for example, where part of the image content of the target video clip meets the live content filtering condition, the terminal provides a region-masking operation, i.e., the target object masks part of the region of the target video clip by performing the region-masking operation, such as adding a sticker or a mosaic, which is not limited.
Through step 305, when a target video clip meeting the live content filtering condition appears in the non-live video, the terminal can edit it in time in response to the editing operation on the target video clip, so that the edited non-live video meets the live content requirements and the safety of the live content is ensured.
In step 306, the terminal performs delayed live broadcast based on the edited non-live video.
In the embodiment of the disclosure, the terminal sends the edited non-live video to the server to realize the delayed live broadcast.
Steps 301 to 306 above are described schematically with reference to Fig. 8, a schematic diagram of a live broadcast method provided in an embodiment of the present disclosure. As shown in Fig. 8, the live broadcast method includes: the director's console processes the real-time signal of the live venue to obtain the PGM signal and sends it to the terminal through the SDI interface; the target application running on the terminal records the PGM signal to obtain the live recorded video, broadcasts the live recorded video with delay, and plays the live recorded video, the live video, and the non-live video; target video clips in the non-live video that meet the live content filtering condition are edited through the editing track to obtain the edited non-live video; and the edited non-live video is push-streamed, i.e., its PGM signal is sent to the server, and the server realizes the delayed live broadcast and reaches the audience.
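On the push-stream side, the broadcast half of this pipeline can be sketched as a loop that drains the delay buffer from the earlier sketch; `push_client` stands in for whatever live push-stream client is actually used (a hypothetical placeholder):

```python
import threading


def playout_loop(delay_buffer, push_client,
                 stop_event: threading.Event) -> None:
    """Send segments that have aged past the delay (and survived editing)
    to the server, realizing the delayed live broadcast."""
    while not stop_event.is_set():
        for segment in delay_buffer.pop_ready():
            push_client.send(segment)  # live push stream to the server
```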
In summary, in the live broadcast method provided by the embodiment of the disclosure, by playing the non-live video of the live recorded video, relevant personnel can promptly find the target video clips in the non-live video that meet the live content filtering condition; further, by displaying the editing track for the non-live video, the relevant personnel can edit those target video clips in time, so that delayed live broadcast is performed based on the edited non-live video. In this way, a lightweight live broadcast method is provided that can effectively improve human-computer interaction efficiency while ensuring the compliance of live content, and greatly reduces live broadcast cost.
In the embodiments shown in Figs. 2 to 8, the target video clips can be understood as being spotted by relevant personnel while the terminal plays the non-live video. In other embodiments, the terminal can automatically identify the video clips in the non-live video that meet the live content filtering condition by means of an artificial intelligence model, assisting the relevant personnel, further improving human-computer interaction efficiency, and improving the editing efficiency for target video clips. This process is described below.
Illustratively, the terminal performs content identification on the non-live video and, when a target video clip exists in the non-live video, displays a prompt message based on the position of the target video clip on the editing track, where the prompt message indicates that the content of the target video clip meets the live content filtering condition. Illustratively, the terminal invokes a second artificial intelligence model to identify the content of the non-live video, and displays the prompt message based on the identification result when a target video clip is identified. For example, referring to Fig. 9, a schematic diagram of displaying a prompt message provided by an embodiment of the present disclosure: the terminal displays the editing track of the non-live video in the editing area 402 of the application interface 400 and, when a target video clip is identified, displays a prompt message 900 based on the position of the target video clip on the editing track, for example "the above video clip contains the sensitive word XXX" or "the above video clip contains a sensitive image XXX", which is not limited. In this way, the terminal can automatically identify the video clips in the non-live video that meet the live content filtering condition and display a prompt message at the corresponding position to alert relevant personnel, thereby improving human-computer interaction efficiency.
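A minimal sketch of this identification pass over the buffered segments; `recognize` stands in for the second artificial intelligence model and `on_hit` for the prompt-message display, both placeholders of the sketch:

```python
def scan_non_live(segments: list, recognize, on_hit) -> None:
    """Run content recognition over each buffered segment and surface a
    prompt at every hit, keyed to the clip's position on the editing track."""
    for position, segment in enumerate(segments):
        result = recognize(segment)  # e.g. {"sensitive_word": "XXX"} or None
        if result:
            on_hit(position, result)
```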
In some embodiments, the terminal can further display, based on the identification result for the target video clip, a target option for editing the target video clip, so as to realize automatic editing. Illustratively, this process includes: the terminal displays a target option based on the position of the target video clip on the editing track, where the target option indicates that target editing is to be performed on the target video clip to obtain the edited non-live video. Accordingly, the foregoing process of "generating the edited non-live video in response to the editing operation on the target video clip on the editing track" includes: performing the target editing on the target video clip in response to a confirmation operation on the target option, obtaining the edited non-live video. Illustratively, the terminal determines the target editing for the target video clip based on its identification result, and displays the target option based on the position of the target video clip on the editing track for relevant personnel to confirm. For example, referring to Fig. 10, a schematic diagram of displaying a target option provided in an embodiment of the present disclosure: the terminal displays the editing track of the non-live video in the editing area 402 of the application interface 400 and, when a target video clip is identified, displays, based on the position of the target video clip on the editing track, the prompt message "the above video clip contains a sensitive image XXX" and the target option "please confirm whether to delete"; in response to a confirmation operation on the target option, the terminal deletes the target video clip 501, obtaining the edited non-live video. In this way, the terminal can provide an automatic editing function for identified target video clips, and displaying the target option makes it convenient for relevant personnel to confirm whether to perform the target editing, improving human-computer interaction efficiency and user experience. Of course, if the relevant personnel perform a cancel operation on the target option, the terminal can still edit the target video clip in response to other editing operations on it (i.e., step 305 in the embodiment shown in Fig. 3 above).
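A sketch of that confirm-then-edit flow, with `ask_user` modeling the "please confirm whether to delete" dialog (all names illustrative):

```python
def confirm_target_edit(segments: list, position: int, ask_user) -> bool:
    """Apply the suggested target editing (deletion here) only after the
    operator confirms the displayed target option; a cancel leaves the
    clip in place for manual editing instead."""
    if ask_user(f"Clip at position {position} is flagged. Confirm delete?"):
        del segments[position]
        return True
    return False
```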
It should be noted that Figs. 9 and 10 above are only illustrative; in some embodiments, the terminal may also display other forms of prompt messages and target options, and the number of target video clips may be one or more, which is not limited by the embodiments of the disclosure.
Fig. 11 is a block diagram of a live broadcast device according to an embodiment of the present disclosure. As shown in Fig. 11, the device includes a playing unit 1101, a display unit 1102, an editing unit 1103, and a live broadcast unit 1104.
a playing unit 1101 configured to play a non-live video of a live recorded video, where the non-live video includes the video clips of the live recorded video that have not yet been broadcast with delay;
a display unit 1102 configured to display an editing track for the non-live video;
an editing unit 1103 configured to generate the edited non-live video in response to an editing operation on a target video clip on the editing track, the target video clip being a video clip that meets the live content filtering condition; and
a live broadcast unit 1104 configured to perform delayed live broadcast based on the edited non-live video.
In some embodiments, the editing unit 1103 is configured to perform:
And deleting the target video segment in response to the deleting operation of the target video segment on the editing track, so as to obtain the edited non-live video.
In some embodiments, the editing unit 1103 is configured to perform:
displaying a plurality of video clip materials in response to a replacement operation for the target video clip on the editing track;
and in response to the selection operation of the target video segment material in the video segment materials, replacing the target video segment with the target video segment material to obtain the edited non-live video.
In some embodiments, the editing unit 1103 is configured to perform:
responding to the replacement operation, and carrying out scene recognition on the non-live video to obtain scene information of the non-live video;
determining the plurality of video clip materials matched with the scene information from a video clip material library based on the scene information;
and displaying the plurality of video clip materials based on the matching degree between the scene information and each video clip material.
In some embodiments, the editing unit 1103 is configured to perform:
responding to the volume editing operation of the target video clip on the editing track, and displaying a volume editing track;
And responding to the editing operation of the volume editing track, editing the volume of the target video clip, and obtaining the edited non-live video.
In some embodiments, the apparatus further comprises an identification unit configured to perform:
performing content identification on the non-live video, and, in the case that the target video clip exists in the non-live video, displaying a prompt message based on the position of the target video clip on the editing track, where the prompt message indicates that the content of the target video clip meets the live content filtering condition.
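As an illustration of this identification pass, the sketch below scans frames against a stand-in classifier (violates) representing the live content filtering condition and returns index ranges where prompt messages can be anchored; the classifier and the frame granularity are assumptions:

```python
def find_target_segments(num_frames, violates, min_len=1):
    """Scan the non-live video frame by frame and return (start, end) index
    ranges of consecutive frames meeting the live content filtering
    condition, so prompts can be anchored on the editing track."""
    segments, start = [], None
    for i in range(num_frames):
        if violates(i):
            if start is None:
                start = i                 # a target segment begins here
        elif start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and num_frames - start >= min_len:
        segments.append((start, num_frames))  # segment runs to the end
    return segments

flags = [False, False, False, True, True, True, False]
print(find_target_segments(len(flags), violates=lambda i: flags[i]))  # [(3, 6)]
```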
In some embodiments, the display unit 1102 is further configured to perform:
displaying a target option based on the position of the target video clip on the editing track, where the target option indicates that target editing is to be performed on the target video clip to obtain the edited non-live video;
the editing unit 1103 is configured to perform:
and performing target editing on the target video clip in response to a confirmation operation on the target option, so as to obtain the edited non-live video.
In some embodiments, the editing unit 1103 is configured to perform:
editing at least one video frame in the target video clip in response to an editing operation on the at least one video frame, and generating the edited non-live video.
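A sketch of such frame-level editing, where edit stands in for any per-frame transform such as pixelation or an overlay (all names are illustrative, not from the disclosure):

```python
def edit_frames(clip, frame_indices, edit):
    """Apply an edit to the selected frames of the target video clip,
    leaving all other frames untouched."""
    chosen = set(frame_indices)
    return [edit(frame) if i in chosen else frame for i, frame in enumerate(clip)]

clip = ["f0", "f1", "f2", "f3"]
edited = edit_frames(clip, [1, 2], edit=lambda f: f + "_blurred")
assert edited == ["f0", "f1_blurred", "f2_blurred", "f3"]
```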
In some embodiments, the playing unit 1101 is configured to perform: playing, based on the live recorded video, both the live video of the delayed live broadcast and the non-live video; where the video clips of the non-live video come after the currently playing clip of the live video.
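The editable span follows from the delay itself: whatever has been recorded but not yet aired is the non-live video. A small sketch under that reading (the helper and its parameters are assumptions):

```python
def non_live_window(record_head_s, delay_s):
    """Return the (start, end) span, in recording time, of the non-live
    video: the part already recorded but not yet aired by the delayed
    live broadcast, and therefore still editable."""
    live_playhead_s = max(record_head_s - delay_s, 0)
    return live_playhead_s, record_head_s

# Recording is 600 s in with a 300 s delay: the stream is airing the
# 300 s mark, so the 300 s-600 s span is the editable non-live video.
start, end = non_live_window(record_head_s=600, delay_s=300)
assert (start, end) == (300, 600)
```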
By playing the non-live video of the live recorded video, relevant personnel can promptly find target video clips in the non-live video that meet the live content filtering condition; further, by displaying the editing track for the non-live video, relevant personnel can edit those target video clips in time, so that the delayed live broadcast is performed based on the edited non-live video. In this way, a lightweight live broadcast device is provided, which effectively improves human-computer interaction efficiency while ensuring the compliance of live content, and greatly reduces live broadcast costs.
It should be noted that the division into the above functional modules is merely illustrative for the live broadcast device provided in the foregoing embodiment; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the live broadcast device provided in the above embodiment and the live broadcast method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not described herein again.
In an exemplary embodiment, an electronic device is also provided, comprising a processor and a memory for storing at least one computer program that is loaded and executed by the processor to implement the live broadcast method in the embodiments of the present disclosure.
Taking the electronic device being a terminal as an example, fig. 12 is a structural block diagram of a terminal according to an embodiment of the present disclosure. The terminal 1200 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1200 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, or the like.
In general, the terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1201 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1202 may include one or more computer-readable storage media, which may be non-transitory. The memory 1202 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1202 is used to store at least one program code, which is executed by the processor 1201 to implement the processes performed by the terminal in the live broadcast method provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 1200 may optionally further include a peripheral interface 1203 and at least one peripheral device. The processor 1201, the memory 1202, and the peripheral interface 1203 may be connected by a bus or signal lines. Each peripheral device may be connected to the peripheral interface 1203 via a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 1204, a display 1205, a camera assembly 1206, an audio circuit 1207, a positioning assembly 1208, and a power supply 1209.
The peripheral interface 1203 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, the memory 1202, and the peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202, and the peripheral interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1204 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1204 may communicate with other terminals via at least one wireless communication protocol, including but not limited to metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1204 may also include NFC (Near Field Communication) related circuitry, which is not limited in the present disclosure.
The display 1205 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 1205 is a touch display, it can also collect touch signals on or above its surface; the touch signal may be input to the processor 1201 as a control signal for processing. In this case, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1205, disposed on the front panel of the terminal 1200; in other embodiments, there may be at least two displays 1205, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved or folded surface of the terminal 1200. The display 1205 may even be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display 1205 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1206 is used to capture images or video. Optionally, the camera assembly 1206 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting functions by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 1206 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1201 for processing, or inputting the electric signals to the radio frequency circuit 1204 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 1200. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is used to locate the current geographic position of the terminal 1200 to enable navigation or LBS (Location Based Service).
The power supply 1209 is used to supply power to the various components in the terminal 1200. The power supply 1209 may be an alternating current power supply, a direct current power supply, a disposable battery, or a rechargeable battery. When the power supply 1209 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast charging technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, optical sensor 1214, and proximity sensor 1215.
The acceleration sensor 1211 may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1201 may control the display 1205 to display a user interface in either a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. The processor 1201 may implement the following functions based on the data collected by the gyro sensor 1212: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1213 may be disposed at a side frame of the terminal 1200 and/or at a lower layer of the display 1205. When the pressure sensor 1213 is provided at a side frame of the terminal 1200, a grip signal of the terminal 1200 by a user may be detected, and the processor 1201 performs a left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at the lower layer of the display 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the display 1205. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1214 is used to collect the ambient light intensity. In one embodiment, processor 1201 may control the display brightness of display 1205 based on the intensity of ambient light collected by optical sensor 1214. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1205 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1205 is turned down. In another embodiment, processor 1201 may also dynamically adjust the shooting parameters of camera assembly 1206 based on the intensity of ambient light collected by optical sensor 1214.
The proximity sensor 1215, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1200 and is used to collect the distance between the user and the front of the terminal 1200. In one embodiment, when the proximity sensor 1215 detects that the distance between the user and the front of the terminal 1200 gradually decreases, the processor 1201 controls the display 1205 to switch from the screen-on state to the screen-off state; when the proximity sensor 1215 detects that the distance gradually increases, the processor 1201 controls the display 1205 to switch from the screen-off state to the screen-on state.
It will be appreciated by those skilled in the art that the structure shown in fig. 12 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
In an exemplary embodiment, a computer-readable storage medium including program code is also provided, for example a memory; the program code is executable by a processor of an electronic device to perform the above live broadcast method. Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by a processor, implements the above-described live method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. The present disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A live broadcast method, the method comprising:
playing a non-live video of a live recorded video, wherein the non-live video comprises a video clip of the live recorded video that has not been subjected to delayed live broadcast;
displaying an editing track for the non-live video;
generating an edited non-live video in response to an editing operation on a target video clip on the editing track, wherein the target video clip is a video clip that meets a live content filtering condition;
and performing delayed live broadcast based on the edited non-live video.
2. The live broadcast method of claim 1, wherein the generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
deleting the target video clip in response to a deletion operation on the target video clip on the editing track, so as to obtain the edited non-live video.
3. The live broadcast method of claim 1, wherein the generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
displaying a plurality of video clip materials in response to a replacement operation on the target video clip on the editing track;
and replacing the target video clip with a target video clip material in response to a selection operation on the target video clip material among the plurality of video clip materials, so as to obtain the edited non-live video.
4. The live broadcast method of claim 3, wherein the displaying a plurality of video clip materials in response to a replacement operation on the target video clip on the editing track comprises:
performing scene recognition on the non-live video in response to the replacement operation, so as to obtain scene information of the non-live video;
determining, from a video clip material library based on the scene information, the plurality of video clip materials matching the scene information;
and displaying the plurality of video clip materials based on the degree of matching between the scene information and each video clip material.
5. The live broadcast method of claim 1, wherein the generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
displaying a volume editing track in response to a volume editing operation on the target video clip on the editing track;
and editing the volume of the target video clip in response to an editing operation on the volume editing track, so as to obtain the edited non-live video.
6. The live broadcast method of claim 1, wherein the method further comprises:
performing content identification on the non-live video, and, in the case that the target video clip exists in the non-live video, displaying a prompt message based on the position of the target video clip on the editing track, wherein the prompt message indicates that the content of the target video clip meets the live content filtering condition.
7. The live broadcast method of claim 6, wherein the method further comprises:
displaying a target option based on the position of the target video clip on the editing track, wherein the target option indicates that target editing is to be performed on the target video clip to obtain the edited non-live video;
and the generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
performing target editing on the target video clip in response to a confirmation operation on the target option, so as to obtain the edited non-live video.
8. The live broadcast method of any of claims 1 to 7, wherein the generating the edited non-live video in response to the editing operation on the target video clip on the editing track comprises:
editing at least one video frame in the target video clip in response to an editing operation on the at least one video frame, and generating the edited non-live video.
9. A live broadcast device, the device comprising:
a playing unit configured to play a non-live video of a live recorded video, wherein the non-live video comprises a video clip of the live recorded video that has not been subjected to delayed live broadcast;
a display unit configured to display an editing track for the non-live video;
an editing unit configured to generate an edited non-live video in response to an editing operation on a target video clip on the editing track, the target video clip being a video clip that meets a live content filtering condition;
and a live broadcast unit configured to perform delayed live broadcast based on the edited non-live video.
10. An electronic device, the electronic device comprising:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the live broadcast method of any of claims 1 to 8.
11. A computer-readable storage medium, characterized in that, when program code in the computer-readable storage medium is executed by a processor of an electronic device, the electronic device is enabled to perform the live broadcast method of any of claims 1 to 8.