CN117499690A - Playback video generation method, playback video play device, electronic equipment and medium - Google Patents

Playback video generation method, playback video play device, electronic equipment and medium

Info

Publication number
CN117499690A
CN117499690A
Authority
CN
China
Prior art keywords
interaction
information
identifier
playback video
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311368877.1A
Other languages
Chinese (zh)
Inventor
路清波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lebai Software Development Co ltd
Original Assignee
Beijing Lebai Software Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lebai Software Development Co ltd filed Critical Beijing Lebai Software Development Co ltd
Priority to CN202311368877.1A priority Critical patent/CN117499690A/en
Publication of CN117499690A publication Critical patent/CN117499690A/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309 Reformatting by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2368 Multiplexing of audio and video streams
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • H04N5/93 Regeneration of the television signal or of selected parts thereof
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a playback video generation method, a playback video playing method and device, an electronic device, and a medium. The playback video generation method includes: in response to receiving an interaction start dotting event sent by the main speaking terminal, returning interaction information corresponding to the interaction start dotting event to the main speaking terminal; recording an interaction start identifier and an interaction start timestamp corresponding to the interaction information; in response to receiving an interaction end dotting event sent by the main speaking terminal, recording an interaction end identifier and an interaction end timestamp corresponding to the interaction information; in response to receiving a live-broadcast-end dotting event sent by the main speaking terminal, acquiring the recorded video after recording is completed; and generating a playback video based on the recorded video, the interaction information, and the interaction start identifier, interaction start timestamp, interaction end identifier and interaction end timestamp corresponding to the interaction information. The scheme achieves complete reproduction of the live content and improves user participation and the interest of the playback video.

Description

Playback video generation method, playback video play device, electronic equipment and medium
Technical Field
The present disclosure relates to the technical field of live broadcasting, and in particular to a playback video generation method, a playback video playing device, an electronic device, and a medium.
Background
With the development of computer and network technology, network live broadcasting has become widely popular, for example live singing performances and live classrooms. If a user misses the live broadcast, the corresponding content can still be viewed by watching a playback video.
In the related art, a playback video is usually obtained by recording the live stream, and a user can watch the live audio and video content by watching the playback video. However, a user watching the playback video cannot participate in interactive content initiated during the live broadcast, so the user lacks a sense of participation.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, embodiments of the present disclosure provide a playback video generation method, a playback video playing device, an electronic device, and a medium.
According to an aspect of the present disclosure, there is provided a playback video generation method including:
in response to receiving an interaction start dotting event sent by a main speaking terminal, returning interaction information corresponding to the interaction start dotting event to the main speaking terminal;
recording an interaction start identifier and an interaction start timestamp corresponding to the interaction information;
in response to receiving an interaction end dotting event sent by the main speaking terminal, recording an interaction end identifier and an interaction end timestamp corresponding to the interaction information;
in response to receiving a live-broadcast-end dotting event sent by the main speaking terminal, acquiring the recorded video after recording is completed;
and generating a playback video based on the recorded video, the interaction information, and the interaction start identifier, interaction start timestamp, interaction end identifier and interaction end timestamp corresponding to the interaction information, so that a playing terminal displays the interaction information at the playing time corresponding to the interaction start timestamp while playing the playback video.
According to another aspect of the present disclosure, there is provided a playback video playing method, the playback video being generated by the playback video generating method described in the foregoing aspect, the playing method including:
acquiring, from a server, supplemental enhancement information and interaction dotting information corresponding to the playback video, wherein the interaction dotting information includes interaction information and the interaction start identifier, interaction start timestamp, interaction end identifier and interaction end timestamp corresponding to the interaction information;
in the process of playing the playback video, performing interaction point matching based on the current timestamp of the supplemental enhancement information and the interaction dotting information, and determining the target interaction identifier closest to the current timestamp;
in response to the target interaction identifier being an interaction start identifier, determining target interaction information corresponding to the target interaction identifier;
and in response to the playback video being played to the playing time corresponding to the interaction start timestamp of the target interaction information, displaying the target interaction information.
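The matching step above (finding the target interaction identifier closest to the current supplemental-enhancement-information timestamp) can be sketched as a binary search over the dotting points sorted by timestamp. This is an illustrative assumption, not the claimed implementation: the tuple layout and the "-1"/"-2" identifier suffixes follow the "A-1"/"A-2" naming example given later in the description, and all function names are hypothetical.

```python
from bisect import bisect_right

def find_target_interaction(points, current_ts):
    """Given dotting points sorted by timestamp, return the identifier of the
    point at or immediately before current_ts, or None if none is reached yet.

    `points` is a list of (timestamp_ms, identifier) tuples, e.g.
    [(1000, "A-1"), (1060, "A-2"), (1200, "B-1")].
    """
    timestamps = [ts for ts, _ in points]
    idx = bisect_right(timestamps, current_ts)
    if idx == 0:
        return None              # playback has not reached the first point yet
    return points[idx - 1][1]

def should_display(points, current_ts):
    """Display interaction info only when the nearest preceding point is a
    start identifier (suffix "-1"); an end identifier means the interaction
    is already over at this playback position."""
    target = find_target_interaction(points, current_ts)
    return target is not None and target.endswith("-1")
```

In this sketch, seeking to a position between a start and an end point yields the start identifier, so the interaction is shown; seeking past the end point yields the end identifier, so it is not.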
According to another aspect of the present disclosure, there is provided a playback video generating apparatus, the apparatus including:
the information sending module is used for responding to the received interaction starting dotting event sent by the main speaking terminal and returning interaction information corresponding to the interaction starting dotting event to the main speaking terminal;
the recording module is used for recording an interaction starting identifier and an interaction starting time stamp corresponding to the interaction information; responding to the received interaction ending dotting event sent by the main speaking terminal, and recording an interaction ending identifier and an interaction ending time stamp corresponding to the interaction information;
the video acquisition module is used for responding to the live broadcast ending dotting event sent by the main speaking terminal to acquire recorded video after recording;
the video generation module is used for generating a playback video based on the recorded video, the interaction information, and the interaction start identifier, interaction start timestamp, interaction end identifier and interaction end timestamp corresponding to the interaction information, so that the playing terminal displays the interaction information at the playing time corresponding to the interaction start timestamp while playing the playback video.
According to another aspect of the present disclosure, there is provided a playback video playing device, the playback video being generated by the playback video generation method described in the foregoing aspect, the device including:
the information acquisition module is used for acquiring the supplementary enhancement information and the interactive dotting information corresponding to the playback video from the server, wherein the interactive dotting information comprises interactive information, and an interactive start identifier, an interactive start timestamp, an interactive end identifier and an interactive end timestamp corresponding to the interactive information;
the interaction point matching module is used for carrying out interaction point matching based on the current time stamp of the supplemental enhancement information and the interaction dotting information in the process of playing the playback video, and determining a target interaction identifier closest to the current time stamp;
the information determining module is used for determining target interaction information corresponding to the target interaction identifier in response to the target interaction identifier being an interaction start identifier;
and the interaction display module is used for responding to the playback video to be played to the playing time corresponding to the interaction starting time stamp corresponding to the target interaction information, and displaying the target interaction information.
According to another aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory in which a program is stored,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the playback video generation method according to the preceding aspect, or to perform the playback video play method according to the preceding aspect.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the playback video generation method according to the foregoing aspect, or to perform the playback video play method according to the foregoing aspect.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the playback video generation method according to the preceding aspect, or performs the playback video play method according to the preceding aspect.
According to the technical solutions provided by the embodiments of the present disclosure, in response to receiving an interaction start dotting event sent by the main speaking terminal, interaction information corresponding to the interaction start dotting event is returned to the main speaking terminal, and an interaction start identifier and an interaction start timestamp corresponding to the interaction information are recorded; in response to receiving an interaction end dotting event sent by the main speaking terminal, an interaction end identifier and an interaction end timestamp corresponding to the interaction information are recorded; in response to receiving a live-broadcast-end dotting event sent by the main speaking terminal, the recorded video is acquired after recording is completed; a playback video is then generated based on the recorded video, the interaction information, and the interaction start identifier, interaction start timestamp, interaction end identifier and interaction end timestamp corresponding to the interaction information, so that the playing terminal displays the interaction information at the playing time corresponding to the interaction start timestamp while playing the playback video. With this scheme, the interaction information initiated during the live broadcast can be recorded in the playback video, so that a user watching the playback video can also participate in the interaction and obtain the same viewing experience as a user watching the live broadcast, achieving complete reproduction of the live content and improving user participation and the interest of the playback video.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 illustrates a flowchart of a playback video generation method according to an exemplary embodiment of the present disclosure;
fig. 2 illustrates a flowchart of a playback video generation method according to another exemplary embodiment of the present disclosure;
FIG. 3 illustrates a flowchart of a playback video playing method according to an exemplary embodiment of the present disclosure;
fig. 4 illustrates a flowchart of a playback video playing method according to another exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating searching for a target interaction identifier in different scenarios according to an exemplary embodiment of the present disclosure;
FIG. 6 illustrates an overall architecture diagram of playback video generation and playback in a live classroom scene in accordance with an exemplary embodiment of the present disclosure;
Fig. 7 illustrates a main speaking terminal recording timing diagram according to an exemplary embodiment of the present disclosure;
FIG. 8 illustrates a student-side playback timing diagram of an exemplary embodiment of the present disclosure;
fig. 9 shows a schematic block diagram of a playback video generating apparatus according to an exemplary embodiment of the present disclosure;
fig. 10 shows a schematic block diagram of a playback video playback device according to an exemplary embodiment of the present disclosure;
Fig. 11 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below. It should be noted that the terms "first", "second", and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of the functions performed by these devices, modules, or units.
It should be noted that references to "a" and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that "a" should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before explaining the playback video generation scheme and the playback scheme provided by the present disclosure, the terms and english abbreviations that may be involved in the present disclosure are explained as follows:
RTC: real Time Communication, chinese meaning is real-time communication, is a short name of real-time audio and video;
RTMP: real-Time Messaging Protocol, the Chinese meaning is a Real-time message transmission protocol, is the main protocol of live broadcast at present, and is an application layer private protocol designed by Adobe company for providing audio and video data transmission service between a Flash player and a server;
IRC: internet Relay Chat, chinese meaning internet relay chat;
crontab, a timed task for a user to execute a program at a fixed time or interval;
S3: cloud storage, amazon Simple Storage Service, is an object storage service that provides industry-leading scalability, data availability, security, and performance;
DB: dataBase, dataBase;
SEI: supplemental Enhancement Information the Chinese meaning is supplementary enhancement information, belongs to the code stream category, provides the method for adding additional information into the video code stream, and is one of the characteristics of the H.264 standard.
The playback video generation method, the playback device, the electronic equipment and the medium provided by the present disclosure are described below with reference to the accompanying drawings.
Fig. 1 illustrates a flowchart of a playback video generation method according to an exemplary embodiment of the present disclosure, which may be performed by a playback video generation apparatus of an embodiment of the present disclosure, wherein the apparatus may be implemented in software and/or hardware, and may be generally integrated in an electronic device, including a mobile phone, a Personal Computer (PC), a tablet (e.g., iPad), and the like.
As shown in fig. 1, the playback video generation method may include the steps of:
and step 101, responding to the received interaction starting dotting event sent by the main speaking terminal, and returning interaction information corresponding to the interaction starting dotting event to the main speaking terminal.
In practical applications, the host can enter the live broadcast room through an application program with a live broadcast function installed on the main speaking terminal. The live broadcast can be, for example but not limited to, a talent-show live broadcast, a teaching live broadcast, or a game live broadcast; correspondingly, the host can be a performer, a teacher, a game player, and so on.
In the embodiment of the disclosure, after the live broadcast starts, the main speaking terminal can initiate an interaction whenever interaction is required: the main speaking terminal sends an interaction start dotting event to the server, and when the server receives the interaction start dotting event, it returns the corresponding interaction information to the main speaking terminal.
For example, suppose the live broadcast is a teacher's live teaching. Before the live broadcast begins, the teacher may upload the courseware required for teaching to the server and set interactive content for the courseware, such as the text content of an interactive question following a page that requires interaction; a corresponding interaction id is set for the interaction, the page id and the interaction id are bound, and each piece of interactive content corresponding to the courseware is stored in the server in advance. During the live broadcast, when the teacher turns a page, an interaction start dotting event is triggered; for example, the interaction start dotting event may include a page-turn dot (dot 5) and the page id of the page displayed before turning, so as to inform the server that the main speaking terminal has turned the page. After the server receives the interaction start dotting event, it queries whether an interaction id corresponding to the page id exists among the pieces of interactive content stored in advance for the courseware; if so, it acquires the corresponding interactive content according to the interaction id, renders it to generate interaction information, and returns the interaction information to the main speaking terminal, so that the main speaking terminal displays the interaction information during the live broadcast, realizing interaction with the live audience.
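The page-turn lookup described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the dictionaries stand in for the server's pre-stored courseware bindings (the real system would query the DB mentioned later), and all names and the event layout are hypothetical.

```python
# In-memory stand-ins for the server's pre-stored courseware bindings.
PAGE_TO_INTERACTION = {"page-12": "A"}               # page id -> interaction id
INTERACTION_CONTENT = {"A": {"type": "quiz",
                             "question": "2 + 2 = ?",
                             "options": ["3", "4", "5"]}}

def handle_interaction_start_event(event: dict):
    """Handle a page-turn dotting event, e.g. {"dot": 5, "page_id": "page-12"}.

    Returns rendered interaction info if an interaction id is bound to the
    page id carried by the event, otherwise None (no interaction on that page).
    """
    interaction_id = PAGE_TO_INTERACTION.get(event["page_id"])
    if interaction_id is None:
        return None                  # no interaction bound to this page
    content = INTERACTION_CONTENT[interaction_id]
    # "Rendering" here is just merging the id into the stored content.
    return {"interaction_id": interaction_id, **content}
```

The returned dictionary plays the role of the interaction information sent back to the main speaking terminal for display.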
It can be appreciated that different interaction start dotting events can be agreed in different live scenes, which is not limited by the present disclosure.
Step 102: record an interaction start identifier and an interaction start timestamp corresponding to the interaction information.
In the embodiment of the disclosure, when the server side returns the interaction information to the main speaking terminal, the interaction start identifier and the interaction start time stamp corresponding to the interaction information may be recorded, where the interaction start time stamp may be the current time when the server side returns the interaction information to the main speaking terminal. The interaction start identifier is used for indicating the start of the interaction, and the interaction end identifier can be set for indicating the end of the interaction.
For example, the start of an interaction may be indicated by "1" and its end by "2", so that the interaction start identifier takes the form "interaction id-1" and the interaction end identifier the form "interaction id-2". Assuming the interaction identifier of the interaction information returned by the server to the main speaking terminal is A, the interaction start identifier corresponding to the interaction information may be recorded as "A-1" and the interaction end identifier as "A-2".
Step 103: in response to receiving an interaction end dotting event sent by the main speaking terminal, record an interaction end identifier and an interaction end timestamp corresponding to the interaction information.
In the embodiment of the disclosure, when the interaction ends, the main speaking terminal may send an interaction end dotting event to the server to inform the server that the interaction ends. After the server receives the interaction ending dotting event sent by the main speaking terminal, the moment when the interaction ending dotting event is received can be obtained and used as an interaction ending time stamp corresponding to the interaction information which is returned to the main speaking terminal last time, and the interaction ending identification and the interaction ending time stamp corresponding to the interaction information are recorded.
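The recording in steps 102 and 103 can be sketched as a small recorder that applies the "interaction id-1"/"interaction id-2" convention and ties each end event to the most recently returned interaction information. A minimal illustration with hypothetical names; the disclosure does not prescribe this data structure.

```python
import time

class DottingRecorder:
    """Records interaction start/end marks using the "<id>-1" / "<id>-2"
    identifier convention, pairing each end event with the interaction
    information most recently returned to the main speaking terminal."""

    def __init__(self):
        self.records = []        # list of (identifier, timestamp_ms)
        self._last_id = None     # interaction id of the last returned info

    def record_start(self, interaction_id, now_ms=None):
        ts = now_ms if now_ms is not None else int(time.time() * 1000)
        self._last_id = interaction_id
        self.records.append((f"{interaction_id}-1", ts))

    def record_end(self, now_ms=None):
        # The end dotting event carries no id of its own; it refers to the
        # interaction information returned last.
        ts = now_ms if now_ms is not None else int(time.time() * 1000)
        self.records.append((f"{self._last_id}-2", ts))
```

With the identifier example above, a start for interaction A at 1000 ms followed by an end at 1060 ms yields the records ("A-1", 1000) and ("A-2", 1060).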
It should be noted that, in the embodiment of the present disclosure, the interaction start dotting event and the interaction end dotting event may be predefined, and the interaction start dotting event and the interaction end dotting event may be the same or different, which is not limited in the present disclosure.
When the defined interaction start dotting event and interaction end dotting event are the same, the dotting event received first during the live broadcast can be treated as an interaction start dotting event. If the server returned interaction information in response to that interaction start dotting event, the next received dotting event is treated as the interaction end dotting event corresponding to the interaction information, and the corresponding interaction end identifier and interaction end timestamp are recorded; if the server did not return interaction information for the interaction start dotting event, the next received dotting event is still treated as an interaction start dotting event, and so on.
When the defined interaction start dotting event and interaction end dotting event are different, the start and end of an interaction can be determined from the event type. The interaction start dotting event and interaction end dotting event of the same interaction information are adjacent, so when the server receives an interaction end dotting event, it can determine that display of the most recently returned interaction information has ended, and record the interaction end identifier and interaction end timestamp corresponding to that interaction information.
Step 104: in response to receiving a live-broadcast-end dotting event sent by the main speaking terminal, acquire the recorded video after recording is completed.
In the embodiment of the disclosure, when the live broadcasting of the main speaking terminal is finished, the main speaking terminal can send a live broadcasting finishing dotting event to the server to inform the server that the live broadcasting is finished.
Taking a teacher's live teaching as an example, when the teacher clicks to end the class, a live-broadcast-end dotting event is triggered; for example, dot 7 may serve as the live-broadcast-end dotting event, and when the server receives dot 7 it determines that the live broadcast has ended.
In the embodiment of the disclosure, when the server receives the live broadcast ending dotting event, the server determines that the live broadcast is ended, and the server can acquire the recorded video after the recording is completed.
For example, recording of the live video may be completed by a recording terminal (e.g., a cloud live recording service). When the live broadcast starts, the main speaking terminal pushes an RTC stream to the Agora RTC service; after the Agora RTC service mixes and transcodes the audio and video, an RTMP stream is pushed to the recording terminal, and the recording terminal records the video. When the live broadcast ends, the server can acquire the completed recorded video from the recording terminal.
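The fetch in step 104 might look like the following polling loop. This is a hedged sketch: `recording_client`, its `status` and `file_url` methods, and the status values are all assumptions, since the disclosure does not specify the recording service's API.

```python
import time

def fetch_recorded_video(recording_client, live_id, poll_s=5, timeout_s=600):
    """After the live-broadcast-end dotting event, poll the recording service
    until the recorded file is ready, then return its storage URL.

    `recording_client` is a hypothetical client for the cloud recording
    service; only status(live_id) -> str and file_url(live_id) -> str are
    assumed to exist.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if recording_client.status(live_id) == "done":
            return recording_client.file_url(live_id)
        time.sleep(poll_s)       # recording/transcoding still in progress
    raise TimeoutError(f"recording for live {live_id} not ready")
```

In practice the service might instead push a completion callback or be checked by the crontab task mentioned in the glossary; polling is just the simplest form to illustrate.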
Step 105: generate a playback video based on the recorded video, the interaction information, and the interaction start identifier, interaction start timestamp, interaction end identifier and interaction end timestamp corresponding to the interaction information, so that the playing terminal displays the interaction information at the playing time corresponding to the interaction start timestamp while playing the playback video.
In the embodiment of the disclosure, the server may generate the playback video based on the acquired recorded video, all the interaction information returned to the presenter terminal during the live broadcast, and the interaction start identifier, interaction start timestamp, interaction end identifier, and interaction end timestamp corresponding to each piece of interaction information. In this way, while playing the playback video, the playing terminal can display each piece of interaction information when playback reaches the playing time corresponding to its interaction start timestamp, and stop displaying it when playback reaches the playing time corresponding to its interaction end timestamp.
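The bundling described in step 105 can be sketched as follows; all type and function names here are illustrative stand-ins, not part of the disclosure:

```python
# Sketch of step 105: bundle the recorded video with the interaction metadata
# so a playing terminal can overlay each interaction during playback.
# InteractionEntry / PlaybackVideo are hypothetical names for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class InteractionEntry:
    interaction_id: str  # e.g. "a"
    info: dict           # interaction payload returned to the presenter terminal
    start_id: str        # interaction start identifier, e.g. "a-1"
    start_ts: int        # interaction start timestamp (ms)
    end_id: str          # interaction end identifier, e.g. "a-2"
    end_ts: int          # interaction end timestamp (ms)


@dataclass
class PlaybackVideo:
    recorded_video_url: str
    interactions: List[InteractionEntry] = field(default_factory=list)


def generate_playback_video(url: str, entries: List[InteractionEntry]) -> PlaybackVideo:
    # Sort by start timestamp so the player can match entries in playback order.
    return PlaybackVideo(url, sorted(entries, key=lambda e: e.start_ts))
```

The entries are sorted by start timestamp so that the playing terminal can match them against the playing time in order.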
According to the playback video generation method of the present disclosure, in response to an interaction-start dotting event sent by the presenter terminal, the interaction information corresponding to that event is returned to the presenter terminal, and the interaction start identifier and interaction start timestamp corresponding to the interaction information are recorded; in response to receiving an interaction-end dotting event sent by the presenter terminal, the interaction end identifier and interaction end timestamp corresponding to the interaction information are recorded; in response to receiving the live-end dotting event sent by the presenter terminal, the completed recorded video is acquired; and a playback video is then generated based on the recorded video, the interaction information, and the corresponding interaction start identifier, interaction start timestamp, interaction end identifier, and interaction end timestamp, so that the playing terminal displays the interaction information at the playing time corresponding to the interaction start timestamp while playing the playback video. With this scheme, the interactions initiated during the live broadcast are recorded in the playback video, so that users watching the playback can also participate in them and obtain the same viewing experience as users watching the live broadcast. This achieves a complete reproduction of the live content and improves user participation and the appeal of the playback video.
In an optional embodiment of the present disclosure, when generating the playback video, an interaction file may first be acquired. The interaction file stores each piece of interaction information returned by the server to the presenter terminal during the live broadcast, together with the interaction start identifier, interaction start timestamp, interaction end identifier, and interaction end timestamp corresponding to each piece of interaction information. The recorded video and the interaction file are then stored in association to obtain the playback video.
For example, when the presenter terminal starts a live broadcast, it may send a live-start dotting event to the server (for example, dotting point 6 may represent the start of the live broadcast), and the server may create an empty interaction file for storing the content of the interactions initiated during the live broadcast. Whenever the server returns interaction information to the presenter terminal, it records the interaction information and the corresponding interaction start identifier, interaction start timestamp, interaction end identifier, and interaction end timestamp in the interaction file. After the live broadcast ends, the interaction file therefore contains the content of all interactions initiated during the live broadcast, and the server can store the acquired recorded video in association with the interaction file to obtain the playback video.
Illustratively, the format of the dotting data in the interaction file may be as follows:
{
"id":0,
"Category":5,
"StreamName":"",
"Info":"***",
"ActionTs":1668063736,
"ActionTsOffset":2665,
"ModifyTime":"1668063736",
"LogTime":0
}
wherein, info content data format: < IrcKey >: key of signaling, < IrcData >: original signaling format
{
"<IrcKey>":{
"actionDuration":0,
"endTime":1668061070347,
"beginTime":1668061070347,
"<IrcKey>":<IrcData>
}
}.
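As a hedged illustration, the nested Info payload above (a JSON string keyed by <IrcKey>) could be unpacked as follows; the concrete key `quiz_key` and the payload contents are made-up stand-ins for <IrcKey> and <IrcData>:

```python
import json

# Sample dotting record in the format shown above; the Info field is itself
# JSON text whose outer key is <IrcKey> and whose inner <IrcKey> entry holds
# the original signaling (<IrcData>). Values here are illustrative.
record = {
    "id": 0,
    "Category": 5,
    "StreamName": "",
    "Info": json.dumps({
        "quiz_key": {
            "actionDuration": 0,
            "endTime": 1668061070347,
            "beginTime": 1668061070347,
            "quiz_key": {"type": "single_choice"},  # <IrcData>: original signaling
        }
    }),
    "ActionTs": 1668063736,
    "ActionTsOffset": 2665,
    "ModifyTime": "1668063736",
    "LogTime": 0,
}


def parse_dotting_record(rec: dict) -> dict:
    """Return {irc_key: (begin_ms, end_ms, irc_data)} from one dotting record."""
    info = json.loads(rec["Info"])
    out = {}
    for irc_key, payload in info.items():
        irc_data = payload.get(irc_key)  # original signaling keyed by <IrcKey>
        out[irc_key] = (payload["beginTime"], payload["endTime"], irc_data)
    return out
```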
In the embodiment of the disclosure, the interactions generated during the live broadcast are stored in association with the recorded video in the form of the interaction file, providing data support for displaying interaction information during playback of the recorded video.
In a practical application scenario, the host may also write or doodle during the live broadcast; for example, a teacher may write on or doodle over courseware while teaching. In an optional embodiment of the present disclosure, in order to restore such writing and doodling during playback, the server may acquire the blackboard writing data generated during the live broadcast and store it in association with the recorded video, so that the host's blackboard writing content is reproduced when the video is played back. Thus, as shown in fig. 2, the playback video generation method of the present disclosure may, on the basis of the embodiment shown in fig. 1, further include the following steps:
and step 201, inquiring whether the blackboard writing data corresponding to the recorded video exists in a database.
The database may be an external database, and is used for storing board writing data generated in the live broadcast process, where the board writing data includes, but is not limited to, board writing content, graffiti content, and the like.
In the embodiment of the disclosure, when the live broadcast starts, the server sends a timestamp to the presenter terminal; this timestamp may be the current time of the server. The presenter terminal starts the live broadcast and initiates video recording based on this timestamp, so that the recorded video is aligned with the server's clock. During the live broadcast, whenever the host performs a writing or doodling operation, the resulting blackboard writing data is written into the database; this data includes the blackboard writing content and the corresponding timestamp (i.e., the blackboard writing timestamp). If the live broadcast is a teacher's live teaching and courseware is displayed in the presenter terminal, the blackboard writing data may also include the page id of the courseware page currently displayed, so that during playback, when the page corresponding to that page id is displayed, the blackboard writing content is added to the page at the playing time corresponding to the blackboard writing timestamp. After the live broadcast ends, the server can query the database for blackboard writing data corresponding to the recorded video.
When the blackboard writing data is written into the database, the host's live account may be used as the identifier of the blackboard writing data; that is, the correspondence between the live account and the blackboard writing data is written into the database, so that the server can use the live account as the query key and check whether blackboard writing data corresponding to that account exists. Only the blackboard writing data of the current live broadcast is stored in the database, which avoids historical blackboard writing data being mistaken for data generated during the current live broadcast and adversely affecting the playback of the video.
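A minimal sketch of the database interaction described above, with an in-memory list standing in for the external database; the schema, function names, and the explicit clearing step are assumptions for illustration:

```python
# In-memory stand-in for the external blackboard-writing database.
board_db = []


def write_board_data(live_account: str, content: str, ts_ms: int, page_id: str):
    # Each record is keyed by the host's live account and carries the writing
    # content, the blackboard writing timestamp, and the courseware page id.
    board_db.append({
        "live_account": live_account,
        "content": content,
        "ts": ts_ms,          # blackboard writing timestamp, aligned to server time
        "page_id": page_id,   # courseware page the writing belongs to
    })


def query_board_data(live_account: str):
    # Query keyed on the live account, as described in the text.
    return [r for r in board_db if r["live_account"] == live_account]


def clear_board_data(live_account: str):
    # Assumed housekeeping step: removing a session's records keeps only the
    # current live broadcast's blackboard writing data in the database.
    board_db[:] = [r for r in board_db if r["live_account"] != live_account]
```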
Step 202: in response to finding blackboard writing data corresponding to the recorded video in the database, acquire the blackboard writing data, which includes blackboard writing content and a blackboard writing timestamp.
Step 203: store the recorded video and the blackboard writing data in association.
In the embodiment of the disclosure, if the server finds blackboard writing data corresponding to the recorded video in the database, it acquires the data and stores it in association with the recorded video, so that the generated playback video contains the host's blackboard writing data from the live broadcast. When the playing terminal plays the playback video, the blackboard writing content is then displayed when playback reaches the blackboard writing moment.
According to the playback video generation method of the present disclosure, the database is queried for blackboard writing data corresponding to the recorded video, and when such data is found it is acquired and stored in association with the recorded video, so that the blackboard writing content can be displayed during playback, improving the appeal of the playback video.
During a live broadcast, an unstable network may interrupt the video being recorded by the recording terminal. To obtain a complete recorded video, in an alternative embodiment of the present disclosure, an interruption callback interface may be provided in the server. The interruption callback interface records the number of interruptions that occur during video recording, and the recording terminal may send an interruption callback request to the server, so that when interruptions did occur during recording, a splicing task is performed that joins the multiple recording segments produced by the interruptions into one complete recorded video. Thus, the playback video generation method provided by the present disclosure may further include: in response to an interruption callback request sent by the recording terminal, obtaining from the interruption callback interface the number of times the recording terminal was interrupted during video recording; and, in response to that number not being 0, calling the interruption callback interface to send a video splicing task to the recording terminal, so that the recording terminal, in response to the video splicing task, splices the multiple recording segments into a complete recorded video.
In the embodiment of the disclosure, the interruption count recorded by the interruption callback interface may be initialized to 0, and each time the recording terminal is interrupted while recording video, the interruption callback interface increments the recorded count by 1. After recording ends, the recording terminal may send an interruption callback request to the server. In response, the server queries the number of times the recording terminal was interrupted during recording; if that number is not 0 (i.e., greater than 0), the server determines that the recording was interrupted and calls the interruption callback interface to send a video splicing task to the recording terminal. On receiving the video splicing task, the recording terminal splices the recording segments produced during recording into the complete recorded video.
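The counting-and-splicing decision above can be sketched as follows (class and method names are assumed; the real interface belongs to the server):

```python
# Sketch of the interruption-callback logic: a counter initialized to 0,
# incremented once per stream interruption, and checked after recording ends.
class InterruptionCallback:
    def __init__(self):
        self.interruptions = 0  # initial value is 0

    def on_stream_interrupted(self):
        self.interruptions += 1  # incremented by 1 per interruption


def handle_interruption_callback(cb: InterruptionCallback) -> str:
    # If the count is non-zero, the server dispatches a splicing task so the
    # recording terminal joins the segments into one complete recorded video.
    if cb.interruptions > 0:
        return "dispatch_splice_task"
    return "no_action"
```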
In the embodiment of the disclosure, when the interruption callback request sent by the recording terminal is received, the number of times the recording terminal was interrupted during video recording is obtained from the interruption callback interface, and when that number is not 0, the interruption callback interface is called to send a video splicing task to the recording terminal so that it can splice the recording segments into a complete recorded video. This avoids incomplete recorded videos caused by interruptions due to network jitter and ensures the integrity of the finally obtained recorded video.
Fig. 3 illustrates a flowchart of a playback video playing method according to an exemplary embodiment of the present disclosure, which may be performed by a playback video playing apparatus according to an embodiment of the present disclosure, wherein the apparatus may be implemented in software and/or hardware, and may be generally integrated in an electronic device, where the electronic device includes a mobile phone, a Personal Computer (PC), a tablet computer (such as an iPad), and so on.
As shown in fig. 3, the playback video playing method may include the steps of:
step 301, obtaining, from a server, supplemental enhancement information and interaction dotting information corresponding to the playback video, where the interaction dotting information includes interaction information, and an interaction start identifier, an interaction start timestamp, an interaction end identifier, and an interaction end timestamp corresponding to the interaction information.
The playback video is generated by the playback video generation method described in the foregoing embodiment, and the generation process of the playback video in this embodiment is not described in detail.
In the embodiment of the disclosure, when a user watches a playback video through a playing terminal, the playback video can be obtained from the server for playing, together with the supplemental enhancement information (SEI) and the interaction dotting information corresponding to the playback video.
The playing terminal may call a metainfo interface of the server; when the server detects that the metainfo interface has been called, it obtains the corresponding playback video and its metadata, parses the metadata to obtain the SEI information and all the interaction dotting information corresponding to the playback video, and returns the SEI information and the interaction dotting information to the playing terminal.
Alternatively, the playing terminal may send a video playing request to the server, the request carrying the video identifier of the playback video to be played. In response, the server obtains the playback video corresponding to the video identifier and its metadata, parses the metadata to obtain the corresponding SEI information and all the interaction dotting information, and returns the SEI information and the interaction dotting information to the playing terminal.
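A hypothetical server-side handler for such a video playing request might look like this; the storage layout and field names are assumptions for illustration, not the patent's actual format:

```python
import json

# Toy store mapping video identifiers to a playback URL plus metadata text.
# The metadata bundles the SEI info and all interaction dotting info.
VIDEO_STORE = {
    "v123": {
        "url": "https://example.invalid/v123.mp4",
        "metadata": json.dumps({
            "sei": {"duration_ms": 3_600_000},
            "dotting": [{"start_id": "a-1", "start_ts": 10_000,
                         "end_id": "a-2", "end_ts": 40_000}],
        }),
    }
}


def handle_play_request(video_id: str):
    """Look up the playback video by id, parse its metadata, and return the
    playback URL together with the SEI info and interaction dotting info."""
    entry = VIDEO_STORE[video_id]
    meta = json.loads(entry["metadata"])
    return entry["url"], meta["sei"], meta["dotting"]
```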
Step 302, in the process of playing the playback video, performing interaction point matching based on the current time stamp of the supplemental enhancement information and the interaction dotting information, and determining a target interaction identifier closest to the current time stamp.
The interaction point may be an interaction start point or an interaction end point, and the corresponding interaction identifier may be an interaction start identifier or an interaction end identifier.
In the embodiment of the disclosure, while playing the playback video, the playing terminal may search for the target interaction identifier closest to the current timestamp, based on the current timestamp of the SEI and the timestamp (interaction start timestamp or interaction end timestamp) corresponding to each interaction identifier (interaction start identifier or interaction end identifier); the target interaction identifier thus determined may be either an interaction start identifier or an interaction end identifier.
The current timestamp of the SEI may be obtained in any of the following ways: by dividing the SEI duration into segments of a preset length and traversing the timestamp of each segmentation point in turn, the current timestamp being the timestamp currently traversed; by taking the SEI timestamp corresponding to the playing time reached after the progress bar of the playback video is dragged; or by taking the timestamp corresponding to the current playing time.
And step 303, determining target interaction information corresponding to the target interaction identifier in response to the target interaction identifier being the interaction start identifier.
In the embodiment of the disclosure, after determining the target interaction identifier, if the target interaction identifier is the interaction start identifier, the target interaction information corresponding to the target interaction identifier may be determined.
For example, assuming that the determined target interaction identifier is a-1, where "1" indicates the start of interaction and "a" indicates the interaction id, the interaction information corresponding to the interaction id "a" may be determined as the target interaction information.
Step 304: display the target interaction information in response to the playback video reaching the playing time corresponding to the interaction start timestamp of the target interaction information.
In the embodiment of the disclosure, while the playback video is playing, the target interaction information is displayed when playback reaches the playing time corresponding to the interaction start timestamp of the target interaction information.
The interaction type corresponding to the interaction information may include, but is not limited to, an interaction question, an interaction game, photographing, and the like, wherein the interaction question may include, but is not limited to, a judgment question, a single choice question, a multi-choice question, a gap filling question, a composite question, and the like.
In an optional embodiment of the disclosure, if the interaction type corresponding to the target interaction information is an interactive question, the target interaction information may be displayed in a popup window, and the interactive-answer popup window is closed when the playback video reaches the playing time corresponding to the interaction end timestamp of the target interaction information.
In an alternative embodiment of the present disclosure, if the interaction type corresponding to the target interaction information is an interactive game, the target interaction information is displayed full screen when playback reaches the moment corresponding to the game start timestamp (i.e., the game is loaded full screen for interaction); when the playback video reaches the playing time corresponding to the game end timestamp, the full-screen view is exited and the game interaction is closed.
In the embodiment of the disclosure, displaying interactive questions in a popup window or loading interactive games full screen ensures that the playback video continues to play normally while an interaction is loaded during playback, improving the user experience.
According to the playback video playing method of the present disclosure, the supplemental enhancement information and the interaction dotting information corresponding to the playback video are obtained from the server, the interaction dotting information including the interaction information and the interaction start identifier, interaction start timestamp, interaction end identifier, and interaction end timestamp corresponding to the interaction information. While playing the playback video, interaction point matching is performed based on the current timestamp of the supplemental enhancement information and the interaction dotting information, and the target interaction identifier closest to the current timestamp is determined. In response to the target interaction identifier being an interaction start identifier, the target interaction information corresponding to it is determined, and in response to the playback video reaching the playing time corresponding to the interaction start timestamp of the target interaction information, the target interaction information is displayed. Interactions can thus be loaded for the user to participate in during playback, achieving a complete reproduction of the live content, giving users watching the playback the same viewing experience as users watching the live broadcast, and improving user participation.
In an alternative implementation of the present disclosure, as shown in fig. 4, step 302 may include the following sub-steps, based on the example shown in fig. 3:
step 401, reading the current time stamp of the supplemental enhancement information according to a preset period.
The preset period may be set according to actual requirements, for example, 500 milliseconds (ms).
The current timestamp may be a timestamp corresponding to the current time when the video is played, or may also be a timestamp corresponding to the time when the user drags the video progress bar.
Step 402: based on the current timestamp and the interaction dotting information, search the preset duration to the left for an interaction point, where an interaction point corresponds to an interaction start identifier or an interaction end identifier.
The preset duration may be preset according to actual requirements, for example, the preset duration may be set to 2 seconds(s).
In the embodiment of the disclosure, the playing terminal may search leftwards from the current timestamp, based on the current timestamp and the interaction start and end timestamps corresponding to each piece of interaction information in the interaction dotting information, to find whether an interaction start point or an interaction end point exists within the preset duration.
Step 403, in response to finding the interaction point within the preset duration, determining a target interaction identifier corresponding to the interaction point closest to the current timestamp.
In the embodiment of the disclosure, if the interaction point is found within a preset time period from the current timestamp to the left, the interaction point closest to the current timestamp can be obtained, and an interaction identifier (an interaction start identifier or an interaction end identifier) corresponding to the interaction point is determined as a target interaction identifier.
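Steps 401 to 403 amount to a bounded leftward nearest-point search, which can be sketched as follows (function name and the 2-second window are illustrative):

```python
# Sketch of steps 401-403: each period the player reads the current SEI
# timestamp and looks leftwards within `window_ms` for the nearest
# interaction point (an interaction start or end identifier).
def find_target_identifier(current_ts: int, points: list, window_ms: int = 2000):
    """points: list of (timestamp_ms, identifier) pairs such as (10_000, "a-1").
    Returns the identifier of the point closest to current_ts within
    [current_ts - window_ms, current_ts], or None if the window is empty."""
    best = None
    for ts, ident in points:
        if current_ts - window_ms <= ts <= current_ts:
            # The point with the largest timestamp is the closest to current_ts.
            if best is None or ts > best[0]:
                best = (ts, ident)
    return best[1] if best else None
```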
Fig. 5 is a schematic diagram of searching for the target interaction identifier in different scenarios according to an exemplary embodiment of the present disclosure. In fig. 5, a-1 and a-2 are the interaction start and end identifiers of the interaction information with interaction id a, b-1 and b-2 those of interaction id b, and c-1 and c-2 those of interaction id c. Solid arrows represent current timestamps and dashed arrows represent previous timestamps; the search covers the preset duration to the left of the current timestamp. Assume that the interval between each solid arrow and the nearest interaction identifier on its left is smaller than the preset duration, i.e., at least one interaction identifier can be found within the preset duration to the left of the current timestamp of any solid arrow in fig. 5. For scenario 2 (case2), searching the preset duration leftwards from the current timestamp finds the target interaction identifier "a-1", the interaction start identifier of interaction id a. For case3, the user drags the progress bar of the playback video from the previous timestamp (dashed arrow) to the current timestamp (solid arrow); searching the preset duration leftwards from the current timestamp finds the target interaction identifier "b-1", the interaction start identifier of interaction id b.
Likewise, for case4 the target interaction identifier found is "c-2", the interaction end identifier of interaction id c; for case5 it is "a-2", the interaction end identifier of interaction id a; and for case6 it is "a-1", the interaction start identifier of interaction id a. For case1, no interaction point is found within the preset duration to the left.
In the embodiment of the disclosure, the current timestamp of the supplemental enhancement information is read at the preset period; based on the current timestamp and the interaction dotting information, the preset duration to the left is searched for an interaction point, and in response to an interaction point being found within that duration, the target interaction identifier corresponding to the interaction point closest to the current timestamp is determined. Performing interaction point matching by searching a preset duration leftwards from the current timestamp ensures that every interaction point in the playback video can be found and none is missed.
In an optional embodiment of the present disclosure, when the user drags the progress bar while watching the playback video, the playing terminal, in response to receiving the drag operation on the progress bar, acquires the target timestamp after the drag (i.e., the current timestamp corresponding to the playing time dragged to) and, based on the target timestamp and the interaction dotting information, determines the interaction identifier corresponding to the hit interaction point closest to the target timestamp (for ease of distinction, called the first interaction identifier). It then checks whether the first interaction identifier is consistent with the second interaction identifier corresponding to the current interaction point, where the current interaction point is the interaction point found, or played to, before the progress bar was dragged. If the first interaction identifier is inconsistent with the second interaction identifier and the second interaction identifier is an interaction start identifier, the target interaction end identifier corresponding to the second interaction identifier is determined, and signaling carrying that target interaction end identifier is distributed to stop displaying the interaction information corresponding to the current interaction point. It can be appreciated that if the first and second interaction identifiers are consistent, no processing is performed.
Therefore, when the user drags the progress bar during an interaction, if the dragged-to position matches an interaction point of a different interaction, the interaction information currently being displayed is closed directly; dragging the progress bar within the same interaction (i.e., when the post-drag timestamp is still smaller than the end timestamp of the interaction information currently displayed) does not close the interaction.
Further, in an optional embodiment of the present disclosure, if the first interaction identifier is inconsistent with the second interaction identifier corresponding to the current interaction point and the first interaction identifier is an interaction start identifier, it is further determined whether the hit interaction point meets the playing condition; in response to the hit interaction point meeting the playing condition, signaling carrying the first interaction identifier is distributed to display the interaction information corresponding to it. It can be understood that if the second interaction identifier is also an interaction start identifier, then before distributing the signaling carrying the first interaction identifier, signaling carrying the interaction end identifier corresponding to the second interaction identifier must first be distributed to close the currently displayed interaction information.
The playing condition may be set according to actual requirements. For example, it may be that the time difference between the interaction start timestamp of the hit interaction point and the target timestamp is smaller than a preset value; as another example, it may be that the time difference between the target timestamp and the interaction start timestamp of the hit interaction point is smaller than the time difference between the target timestamp and the interaction end timestamp of the interaction information corresponding to the hit interaction point.
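The second example playing condition above can be expressed as a simple predicate (a sketch; the actual condition is configurable):

```python
# Sketch of the second example playing condition: after a drag, the hit
# interaction is displayed only if the target timestamp is closer to the
# interaction's start timestamp than to its end timestamp.
def meets_playing_condition(target_ts: int, start_ts: int, end_ts: int) -> bool:
    return abs(target_ts - start_ts) < abs(target_ts - end_ts)
```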
In the embodiment of the disclosure, when the first interaction identifier corresponding to the hit interaction point is an interaction start identifier and is inconsistent with the second interaction identifier corresponding to the current interaction point, and the hit interaction point meets the playing condition, signaling carrying the first interaction identifier is distributed to display the corresponding interaction information; thus, when there is an interaction at the dragged-to position, its interaction information is displayed.
Still taking each scenario shown in fig. 5 as an example, for case1, the user drags (seek) the progress bar of playing back the video leftwards from the position corresponding to the dashed arrow to the position corresponding to the solid arrow, when searching leftwards, the interaction points cannot be searched from the timestamps corresponding to the dashed arrow and the implementing arrow respectively, i.e. the current interaction point and the hit interaction point are both empty, and no processing is performed at this time. For case2, the playback video is normally played from the position of the dotted arrow to the position of the solid arrow, the current interaction point cannot be found by searching leftwards from the timestamp corresponding to the dotted arrow, the second interaction mark is empty, the first interaction mark corresponding to the hit interaction point is found leftwards from the timestamp corresponding to the solid arrow, the first interaction mark is inconsistent with the second interaction mark, the hit interaction point meets the playing condition, and the a-1 is distributed to display the interaction information corresponding to the interaction id 'a'. 
For case3, a user drags a progress bar of playing back the video leftwards from a position corresponding to a dotted arrow to a position corresponding to a solid arrow, searches leftwards from a timestamp corresponding to the dotted arrow, searches for a second interaction identifier corresponding to a current interaction point to be a-1, searches leftwards from the timestamp corresponding to the solid arrow, searches for a first interaction identifier corresponding to a hit interaction point to be b-1, is inconsistent with the second interaction identifier, and distributes an interaction ending identifier a-2 corresponding to a-1 when the second interaction identifier is an interaction starting identifier, so as to close the interaction information corresponding to the displayed interaction id 'a'. For case4, a user drags a progress bar of playing back the video leftwards from a position corresponding to a dotted arrow to a position corresponding to a solid arrow, searches leftwards from a timestamp corresponding to the dotted arrow, searches for a second interaction identifier corresponding to a current interaction point to be b-1, searches leftwards from the timestamp corresponding to the solid arrow, searches for a first interaction identifier corresponding to a hit interaction point to be c-2, is inconsistent with the second interaction identifier, and distributes an interaction ending identifier b-2 corresponding to b-1 when the second interaction identifier is an interaction starting identifier, so as to close the interaction information corresponding to the displayed interaction id 'b'. 
In case4, the dragged-over portion of the progress bar contains both the interaction start point and the interaction end point of the interaction information corresponding to the interaction id "c", so the interaction information corresponding to the interaction id "c" is not loaded; it can be displayed normally only when the user plays through its start point and end point normally. For case5, the user drags the progress bar of the playback video leftwards from the position corresponding to the dashed arrow to the position corresponding to the solid arrow. Searching leftwards from the timestamp corresponding to the dashed arrow, the second interaction identifier corresponding to the current interaction point is found to be b-1; searching leftwards from the timestamp corresponding to the solid arrow, the first interaction identifier corresponding to the hit interaction point is found to be a-2, which is inconsistent with the second interaction identifier. Since the second interaction identifier is an interaction start identifier, a signaling carrying the interaction end identifier b-2 corresponding to b-1 is distributed, so as to close the displayed interaction information corresponding to the interaction id "b". For case6, the second interaction identifier corresponding to the found current interaction point is b-1, and the first interaction identifier corresponding to the found hit interaction point is a-1; the first interaction identifier is inconsistent with the second interaction identifier, and the hit interaction point meets the playing condition, so a signaling carrying the interaction end identifier b-2 corresponding to b-1 is distributed to close the displayed interaction information corresponding to the interaction id "b", and a signaling carrying a-1 is distributed to display the interaction information corresponding to the interaction id "a".
The hit interaction point determined this time serves as the current interaction point for the next matching.
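The six cases above reduce to one comparison rule: find the nearest identifier at the new position, and if it differs from the current one while the current one is a start identifier, first distribute the matching end identifier, then (if the hit point is a start identifier and meets the playing condition) distribute the start. A minimal sketch of that rule, assuming the "x-1"/"x-2" naming from the examples; `find_left`, `on_position_change`, and `playable` are hypothetical names, not the disclosure's API:

```python
def find_left(points, ts, window=2.0):
    """Nearest interaction identifier at or before ts within the search
    window, or None if no dotting point is found (hypothetical helper)."""
    hits = [(t, ident) for t, ident in points if ts - window <= t <= ts]
    return max(hits)[1] if hits else None

def on_position_change(points, current_id, new_ts, dispatch,
                       playable=lambda ident: True):
    """Compare the hit identifier at the new position with the current one
    and distribute the corresponding signaling, as in cases 1-6.
    points: (timestamp, identifier) pairs; "x-1" starts and "x-2" ends
    the interaction with id "x". `playable` stands in for the playing
    condition, which the disclosure leaves to the player."""
    hit_id = find_left(points, new_ts)
    if hit_id == current_id:              # case1: both empty (or unchanged) -> no processing
        return current_id
    if current_id and current_id.endswith("-1"):
        dispatch(current_id[:-1] + "2")   # close the still-open interaction, e.g. b-1 -> b-2
    if hit_id and hit_id.endswith("-1") and playable(hit_id):
        dispatch(hit_id)                  # start (display) the newly hit interaction
    return hit_id                         # the hit point becomes the next current point
```

The return value models the last sentence above: each call's hit interaction point is carried forward as the next call's current interaction point.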
The playback video generation scheme and the playing scheme provided by the present disclosure can be applied to a live-broadcast classroom application scenario, and fig. 6 shows an overall architecture diagram of playback video generation and playing in the live classroom scenario according to an exemplary embodiment of the present disclosure. As shown in fig. 6, in this scenario, the main speaking terminal (i.e., the terminal device used by the teacher, which may also be referred to as the teacher end) initiates a bypass push stream when the lesson starts (sending a 6 point to the server end to notify it of the start of the live broadcast), performs audio and video merging and transcoding through the audio/video network RTC service, and pushes an RTMP stream to the messenger cloud (i.e., the recording terminal). The messenger cloud performs video recording (all operations in the lesson, such as audio and video, interaction initiation/termination, and blackboard writing, are recorded; in this process, the teacher end sends a 5 point to the server end to notify it of page turning or interaction whenever a page is turned or an interaction is initiated), and stores the merged video. When the main speaking terminal clicks to end the lesson and exits the online classroom (sending a 7 point to the server end to notify it of the end of recording), the bypass push-stream recording ends, thereby completing the recording of the classroom video. After recording ends, the messenger cloud sends a cutoff callback request to the cutoff callback interface of the server; if an interruption occurred during recording, the cutoff callback interface initiates a video splicing task to the messenger cloud on-demand (VOD) service, so that the VOD service splices the plurality of recorded segments into a complete video.
The video-composition-completion interface of the server can actively poll the messenger cloud on-demand (VOD) service through a Crontab task to detect whether a completed splicing task exists; if so, the VOD service writes the path of the complete video into the video-composition-completion interface. The dotting information and the recorded video acquired by the server can be stored to S3 for backup, and the blackboard writing data generated during the live broadcast can be acquired from the database (DB). The student calls the server through a terminal device (i.e., the playing terminal, also called the student end) to obtain the playback address and can watch the playback video normally. When playback reaches the time point at which the teacher initiated an interaction, the interaction is initiated automatically; the student participates in the interactive answering and submits the answer normally, the system judges the student's answer, and when playback reaches the interaction end point, the interaction is ended. Therefore, according to this scheme, the interaction data initiated by the teacher end during live classroom teaching is recorded into the playback video stream. After the playback video is generated, students can, while watching the playback video, participate in real time in the various interactive answers initiated by the teacher in class (interactive operations such as multiple-choice questions, fill-in-the-blank questions, true/false questions, games, and compound questions), truly achieving a complete reproduction of the classroom: students watching the playback obtain the same classroom experience as in the live classroom, the playback becomes warmer and more interesting, and the students' learning experience is improved.
Fig. 7 shows a recording timing chart of the main speaking terminal of an exemplary embodiment of the present disclosure, applied to a live lecture scenario. As shown in fig. 7, when the lecture starts, the main speaking terminal obtains the courseware from the server and calls the pushmetadata interface for the lesson-start dotting; the server returns the courseware and the 6 point to the main speaking terminal. The main speaking terminal then starts the live lecture and sends the 6 point to the server to notify it of the start of the live broadcast, and the server returns a timestamp to the main speaking terminal. The main speaking terminal initiates a video bypass push stream to the RTC service; if the push fails, it initiates a new video bypass push stream until a push-success message fed back by the RTC service is received. After the push succeeds, the RTC service pushes the RTMP stream to the messenger cloud, and the messenger cloud starts video recording. If the main speaking terminal needs to initiate an interaction during the live lecture, it requests interaction information from the server (dotting the 5 point, i.e., the page-turning point). The server returns the corresponding interaction information (such as targeted gold coins, red packets, true/false questions, multiple-choice questions, photo-on-wall, etc.) and records the parameters required by the student end for this interaction during playback of the lesson (such as the interaction start identifier, the interaction start timestamp, etc.). The main speaking terminal sends the acquired interaction information to the IRC service through IRC signaling so as to display the interaction information at the live watching end, and the IRC service returns an interaction-sending monitoring result to the main speaking terminal to feed back whether the interaction information reached the live watching end.
If the main speaking terminal initiates blackboard writing data, the blackboard writing data is stored in the database (DB) and sent to the IRC service so as to display the blackboard writing content at the live watching end. When the live lecture ends, the teacher clicks to end the lesson, the main speaking terminal sends a live-broadcast-end dotting event (dotting the 7 point) to the server, and the server acquires the recorded video from the messenger cloud.
Fig. 8 shows a student-end playback timing diagram according to an exemplary embodiment of the present disclosure. As shown in fig. 8, when the student starts to watch the playback, the student end calls the metadata interface of the server; the server obtains and parses the metadata to get the supplemental enhancement information (SEI) and the interaction dotting information corresponding to the playback video, and returns them to the student end. The student end invokes the interaction interface through the interaction dotting information to trigger interactions, and the SEI callback is triggered continuously while the video plays: the SEI timestamp is read once every 500 ms and the nearest interaction dotting point within 2 s is searched leftwards. If an interaction start point is matched, the start of the interaction is triggered and the student end starts the corresponding interaction, such as interactive answering or a game; if an interaction end point is matched, the end of the interaction is triggered. When an interactive-answer start point is matched, the student starts the interactive answering, and the end point is matched according to the interactive-answer information. When playback of the video ends, playback is stopped.
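The 500 ms SEI polling with a 2 s leftward search described above can be sketched as a simple loop; `poll_sei` and its callback names are hypothetical, and a real player would read the SEI timestamp from the video stream rather than a function argument:

```python
import time

def poll_sei(read_sei_ts, dotting, on_start, on_end,
             period=0.5, window=2.0, stop=lambda: False):
    """Read the SEI timestamp every `period` seconds (500 ms) and search
    leftwards within `window` seconds (2 s) for the nearest interaction
    dotting point, triggering start/end callbacks accordingly. All names
    here are illustrative, not the disclosure's API."""
    last = None
    while not stop():
        ts = read_sei_ts()
        hits = [(t, ident) for t, ident in dotting if ts - window <= t <= ts]
        if hits:
            ident = max(hits)[1]
            if ident != last:             # fire each matched dotting point once
                if ident.endswith("-1"):
                    on_start(ident)       # interaction start point matched
                else:
                    on_end(ident)         # interaction end point matched
                last = ident
        time.sleep(period)
```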
In order to achieve the above embodiments, exemplary embodiments of the present disclosure further provide a playback video generating apparatus.
Fig. 9 shows a schematic block diagram of a playback video generating apparatus according to an exemplary embodiment of the present disclosure, as shown in fig. 9, the playback video generating apparatus 60 includes: an information sending module 610, a recording module 620, a video acquisition module 630 and a video generation module 640.
The information sending module 610 is configured to, in response to receiving an interaction start dotting event sent by a main speaking terminal, return, to the main speaking terminal, interaction information corresponding to the interaction start dotting event;
the recording module 620 is configured to record an interaction start identifier and an interaction start timestamp corresponding to the interaction information; responding to the received interaction ending dotting event sent by the main speaking terminal, and recording an interaction ending identifier and an interaction ending time stamp corresponding to the interaction information;
the video acquisition module 630 is configured to acquire a recorded video after recording is completed in response to receiving a live broadcast ending dotting event sent by the main speaking terminal;
the video generating module 640 is configured to generate a playback video based on the recorded video and the interaction information, an interaction start identifier corresponding to the interaction information, an interaction start timestamp, an interaction end identifier, and an interaction end timestamp, so that a playing terminal displays the interaction information at a playing time corresponding to the interaction start timestamp in a process of playing the playback video.
Optionally, the video generating module 640 is further configured to:
acquiring an interaction file, wherein each interaction information returned to the main speaking terminal in the live broadcast process, and an interaction start identifier, an interaction start time stamp, an interaction end identifier and an interaction end time stamp corresponding to each interaction information are stored in the interaction file;
And storing the recorded video and the interactive file in an associated mode to obtain the playback video.
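As a sketch of the associated storage above, the interaction file can hold one record per interaction with its start/end identifiers and timestamps; the JSON field names and the key-value `store` below are illustrative assumptions, not the disclosure's format:

```python
import json

def build_interaction_file(interactions):
    """interactions: iterable of (interaction_id, info, start_ts, end_ts).
    Produces the JSON interaction file stored alongside the recorded video;
    the "-1"/"-2" suffix convention follows the examples in the disclosure."""
    records = []
    for iid, info, start_ts, end_ts in interactions:
        records.append({
            "interaction_info": info,
            "start_id": f"{iid}-1",   # interaction start identifier
            "start_ts": start_ts,     # interaction start timestamp
            "end_id": f"{iid}-2",     # interaction end identifier
            "end_ts": end_ts,         # interaction end timestamp
        })
    return json.dumps({"interactions": records})

def store_playback(video_path, interaction_json, store):
    """Associate the recorded video with its interaction file to obtain the
    playback video (`store` is a hypothetical key-value backend)."""
    store[video_path] = {"video": video_path, "interactions": interaction_json}
    return store[video_path]
```

The playing terminal can then read the record for each interaction and display the interaction information when playback reaches `start_ts`.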
Optionally, the playback video generating apparatus 60 further includes:
the query module is used for querying whether the database contains the blackboard writing data corresponding to the recorded video;
the blackboard writing data acquisition module is used for acquiring the blackboard writing data corresponding to the recorded video in response to the inquiry of the blackboard writing data from the database, wherein the blackboard writing data comprises blackboard writing content and a blackboard writing time stamp;
and the storage module is used for storing the recorded video and the blackboard writing data in an associated mode.
Optionally, the playback video generating apparatus 60 further includes:
the interruption frequency acquisition module is used for responding to a cut-off callback request sent by the recording terminal and acquiring the interrupted frequency of the recording terminal in the video recording process from a cut-off callback interface;
and the task sending module is used for calling the cut-off callback interface to send a video splicing task to the recording terminal in response to the frequency not being 0, so that the recording terminal can splice a plurality of recording fragments into complete recorded video in response to the video splicing task.
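The interruption-count guard implemented by these two modules can be sketched as follows; `handle_cutoff_callback` and `splice` are hypothetical names standing in for the cutoff callback interface and the recording terminal's splicing:

```python
def handle_cutoff_callback(interrupt_count, send_splice_task, segments):
    """If recording was interrupted (count != 0) there are multiple recorded
    segments, so ask the recording terminal to splice them; otherwise the
    single recording is already complete."""
    if interrupt_count != 0:
        return send_splice_task(segments)   # recording terminal splices the segments
    return segments[0] if segments else None

def splice(segments):
    # stand-in for the recording terminal splicing segments into one video
    return "+".join(segments)
```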
The playback video generation device provided by the embodiment of the disclosure can execute any playback video generation method applicable to the electronic equipment, and has the corresponding functional modules and beneficial effects of the execution method. Details of the embodiments of the apparatus of the present disclosure that are not described in detail may refer to descriptions of any of the embodiments of the method of the present disclosure.
In order to achieve the above-described embodiments, exemplary embodiments of the present disclosure also provide a playback video playing device that generates playback video by the playback video generation method described in the foregoing embodiments.
Fig. 10 shows a schematic block diagram of a playback video playing device according to an exemplary embodiment of the present disclosure, as shown in fig. 10, the playback video playing device 70 includes: the system comprises an information acquisition module 710, an interaction point matching module 720, an information determination module 730 and an interaction display module 740.
The information obtaining module 710 is configured to obtain, from a server, supplemental enhancement information and interaction dotting information corresponding to the playback video, where the interaction dotting information includes interaction information, and an interaction start identifier, an interaction start timestamp, an interaction end identifier, and an interaction end timestamp corresponding to the interaction information;
the interaction point matching module 720 is configured to perform interaction point matching based on the current timestamp of the supplemental enhancement information and the interaction dotting information in the process of playing the playback video, and determine a target interaction identifier closest to the current timestamp;
the information determining module 730 is configured to determine, in response to the target interaction identifier being an interaction start identifier, target interaction information corresponding to the target interaction identifier;
The interactive display module 740 is configured to display the target interaction information in response to the playback video playing to a playing time corresponding to the interaction start time stamp corresponding to the target interaction information.
Optionally, the interaction point matching module 720 is further configured to:
reading the current time stamp of the supplementary enhancement information according to a preset period;
searching whether an interaction point exists in a preset time length leftwards based on the current time stamp and the interaction dotting information, wherein the interaction point corresponds to an interaction starting identifier or an interaction ending identifier;
and in response to finding the interaction point in the preset time, determining a target interaction identifier corresponding to the interaction point closest to the current time stamp.
Optionally, the playback video playing device 70 further includes:
the time stamp obtaining module is used for obtaining a target time stamp after dragging in response to receiving a dragging operation of a user on a progress bar of the playback video;
the first determining module is used for determining, based on the target time stamp and the interaction dotting information, a first interaction identifier corresponding to a hit interaction point closest to the target time stamp;
the second determining module is used for determining a target interaction ending identifier corresponding to the second interaction identifier in response to the fact that the first interaction identifier is inconsistent with the second interaction identifier corresponding to the current interaction point and the second interaction identifier is the interaction starting identifier;
The first distribution module is used for distributing signaling carrying the target interaction ending mark so as to stop displaying the interaction information corresponding to the current interaction point.
Optionally, the playback video playing device 70 further includes:
the judging module is used for responding to the fact that the first interaction identifier is inconsistent with the second interaction identifier, and the first interaction identifier is an interaction starting identifier, and judging whether the hit interaction point meets playing conditions or not;
and the second distributing module is used for distributing signaling carrying the first interaction identifier to display interaction information corresponding to the first interaction identifier in response to the hit interaction point meeting the playing condition.
Optionally, the interactive display module 740 is further configured to:
responding to the interaction type corresponding to the target interaction information as an interaction question, and displaying the target interaction information through a popup window;
and responding to the interaction type corresponding to the target interaction information as an interaction game, and displaying the target interaction information in a full screen mode.
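The type-dependent presentation above is a two-way dispatch; a minimal sketch, with the type names `"question"` and `"game"` assumed for illustration:

```python
def display_mode(interaction_type):
    """Choose how the playing terminal presents the target interaction
    information: a popup window for interactive questions, full screen
    for interactive games (type names are illustrative)."""
    if interaction_type == "question":
        return "popup"
    if interaction_type == "game":
        return "fullscreen"
    raise ValueError(f"unknown interaction type: {interaction_type}")
```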
The playback video playing device provided by the embodiment of the disclosure can execute any playback video playing method applicable to the electronic equipment, and has the corresponding functional modules and beneficial effects of the execution method. Details of the embodiments of the apparatus of the present disclosure that are not described in detail may refer to descriptions of any of the embodiments of the method of the present disclosure.
The exemplary embodiments of the present disclosure also provide an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor for causing the electronic device to perform a playback video generation method or a playback video play method according to embodiments of the present disclosure when executed by the at least one processor.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a playback video generation method or a playback video play method according to an embodiment of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a playback video generation method or a playback video play method according to an embodiment of the present disclosure.
Referring to fig. 11, a block diagram of an electronic device 1100 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the electronic device 1100 includes a computing unit 1101 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data required for the operation of the device 1100 can also be stored. The computing unit 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
A number of components in the electronic device 1100 are connected to the I/O interface 1105, including: an input unit 1106, an output unit 1107, a storage unit 1108, and a communication unit 1109. The input unit 1106 may be any type of device capable of inputting information to the electronic device 1100; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 1107 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 1108 may include, but is not limited to, magnetic disks and optical disks. The communication unit 1109 allows the electronic device 1100 to exchange information/data with other devices through computer networks such as the internet and/or various telecommunication networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth(TM) devices, Wi-Fi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1101 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1101 performs the respective methods and processes described above. For example, in some embodiments, the playback video generation method or playback video playing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1108. In some embodiments, some or all of the computer programs may be loaded and/or installed onto electronic device 1100 via ROM 1102 and/or communication unit 1109. In some embodiments, the computing unit 1101 may be configured to perform a playback video generation method or a playback video play method by any other suitable means (e.g., by means of firmware).
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The terms "machine-readable medium" and "computer-readable medium" as used in this disclosure refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) for providing machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a backend component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such backend, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Claims (13)

1. A playback video generation method, wherein the method comprises:
responding to receiving an interaction starting dotting event sent by a main speaking terminal, and returning interaction information corresponding to the interaction starting dotting event to the main speaking terminal;
recording an interaction starting identifier and an interaction starting time stamp corresponding to the interaction information;
responding to the received interaction ending dotting event sent by the main speaking terminal, and recording an interaction ending identifier and an interaction ending time stamp corresponding to the interaction information;
responding to the received live broadcast ending dotting event sent by the main speaking terminal, and acquiring recorded video after recording;
and generating a playback video based on the recorded video, the interaction information, an interaction starting identifier corresponding to the interaction information, an interaction starting time stamp, an interaction ending identifier and an interaction ending time stamp, so that a playing terminal displays the interaction information at a playing time corresponding to the interaction starting time stamp in the process of playing the playback video.
2. The playback video generation method of claim 1, wherein the generating playback video based on the recorded video and the interaction information, an interaction start identifier corresponding to the interaction information, an interaction start time stamp, an interaction end identifier, and an interaction end time stamp comprises:
acquiring an interaction file, wherein each interaction information returned to the main speaking terminal in the live broadcast process, and an interaction start identifier, an interaction start time stamp, an interaction end identifier and an interaction end time stamp corresponding to each interaction information are stored in the interaction file;
and storing the recorded video and the interactive file in an associated mode to obtain the playback video.
3. The playback video generation method of claim 2, wherein the method further comprises:
inquiring whether the database contains the blackboard writing data corresponding to the recorded video;
acquiring the blackboard writing data corresponding to the recorded video in response to the blackboard writing data being queried from the database, wherein the blackboard writing data comprises blackboard writing content and a blackboard writing time stamp;
and storing the recorded video and the blackboard writing data in an associated mode.
4. A playback video generation method as recited in any one of claims 1-3, wherein the method further comprises:
Responding to a cut-off callback request sent by a recording terminal, and acquiring the interrupted times of the recording terminal in the video recording process from a cut-off callback interface;
and calling the cut-off callback interface to send a video splicing task to the recording terminal in response to the number of times not being 0, so that the recording terminal splices a plurality of recording fragments into a complete recording video in response to the video splicing task.
5. A playback video playing method, wherein the playback video is generated by the playback video generation method according to any one of claims 1-4, the method comprising:
acquiring, from a server, supplemental enhancement information and interaction dotting information corresponding to the playback video, wherein the interaction dotting information comprises interaction information and an interaction start identifier, an interaction start timestamp, an interaction end identifier and an interaction end timestamp corresponding to the interaction information;
during playing of the playback video, performing interaction-point matching based on a current timestamp of the supplemental enhancement information and the interaction dotting information, and determining a target interaction identifier closest to the current timestamp;
in response to the target interaction identifier being an interaction start identifier, determining target interaction information corresponding to the target interaction identifier;
and in response to the playback video being played to the playing time corresponding to the interaction start timestamp of the target interaction information, displaying the target interaction information.
6. The playback video playing method of claim 5, wherein performing interaction-point matching based on the current timestamp of the supplemental enhancement information and the interaction dotting information and determining the target interaction identifier closest to the current timestamp comprises:
reading the current timestamp of the supplemental enhancement information at a preset period;
searching leftwards, within a preset duration from the current timestamp and based on the interaction dotting information, for an interaction point, wherein the interaction point corresponds to an interaction start identifier or an interaction end identifier;
and in response to an interaction point being found within the preset duration, determining the target interaction identifier corresponding to the interaction point closest to the current timestamp.
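The leftward search of claim 6 can be sketched as below. This is a simplified illustration, not the patented implementation: dotting points are modeled as (timestamp, identifier) pairs, and the function name and return convention are assumptions.

```python
def match_interaction_point(current_ts, dotting_points, window):
    """Find the interaction identifier closest to the current SEI timestamp.

    Searches leftwards within `window` seconds of `current_ts`, i.e. over
    [current_ts - window, current_ts]. `dotting_points` is a list of
    (timestamp, identifier) pairs, where each identifier is an interaction
    start or end identifier. Returns the identifier of the closest hit,
    or None when no point falls inside the window.
    """
    hits = [(ts, ident) for ts, ident in dotting_points
            if current_ts - window <= ts <= current_ts]
    if not hits:
        return None
    # Closest to the current timestamp = largest timestamp among the hits.
    return max(hits, key=lambda p: p[0])[1]
```

In use, a player would call this once per preset period with the timestamp read from the supplemental enhancement information.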
7. The playback video playing method as set forth in claim 6, wherein the method further comprises:
in response to receiving a drag operation performed by a user on a progress bar of the playback video, acquiring a target timestamp after the drag;
determining, based on the target timestamp and the interaction dotting information, a first interaction identifier corresponding to a hit interaction point closest to the target timestamp;
in response to the first interaction identifier being inconsistent with a second interaction identifier corresponding to a current interaction point, wherein the second interaction identifier is an interaction start identifier, determining a target interaction end identifier corresponding to the second interaction identifier;
and distributing signaling carrying the target interaction end identifier, so as to stop displaying the interaction information corresponding to the current interaction point.
8. The playback video playing method as set forth in claim 7, wherein the method further comprises:
in response to the first interaction identifier being inconsistent with the second interaction identifier, wherein the first interaction identifier is an interaction start identifier, judging whether the hit interaction point satisfies a playing condition;
and in response to the hit interaction point satisfying the playing condition, distributing signaling carrying the first interaction identifier, so as to display the interaction information corresponding to the first interaction identifier.
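The seek handling of claims 7 and 8 can be combined into one decision: when the identifier hit at the drag target differs from the currently active one, stop the current interaction (if one is showing) and start the newly hit one (if its playing condition holds). A sketch under stated assumptions — the "start"/"end" identifier prefixes, the `(action, identifier)` signaling tuples, and all names here are hypothetical:

```python
def handle_seek(first_id, second_id, end_id_of, playable):
    """Decide which signaling to distribute after a progress-bar drag.

    first_id:  identifier of the interaction point hit at the drag target
    second_id: identifier of the currently active interaction point (or None)
    end_id_of: maps an interaction start identifier to its end identifier
    playable:  predicate deciding whether the hit point meets the playing condition
    Returns signaling messages as (action, identifier) pairs.
    """
    signals = []
    if first_id != second_id:
        # Claim 7: the active interaction no longer matches -- stop showing it.
        if second_id is not None and second_id.startswith("start"):
            signals.append(("stop", end_id_of[second_id]))
        # Claim 8: the hit point starts an interaction that may be shown.
        if first_id is not None and first_id.startswith("start") and playable(first_id):
            signals.append(("show", first_id))
    return signals
```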
9. The playback video playing method as recited in any one of claims 5-8, wherein displaying the target interaction information comprises:
in response to the interaction type corresponding to the target interaction information being an interactive question, displaying the target interaction information through a popup window;
and in response to the interaction type corresponding to the target interaction information being an interactive game, displaying the target interaction information in full screen.
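Claim 9 reduces to a dispatch on the interaction type. A trivial sketch; the type strings and the fallback for other types are assumptions, not part of the claim:

```python
def display_mode(interaction_type):
    """Choose how the target interaction information is presented (per claim 9)."""
    if interaction_type == "question":
        return "popup"       # interactive questions appear in a popup window
    if interaction_type == "game":
        return "fullscreen"  # interactive games take over the full screen
    return "none"            # other types: no dedicated presentation assumed
```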
10. A playback video generation apparatus, wherein the apparatus comprises:
an information sending module, configured to, in response to receiving an interaction start dotting event sent by a main speaking terminal, return interaction information corresponding to the interaction start dotting event to the main speaking terminal;
a recording module, configured to record an interaction start identifier and an interaction start timestamp corresponding to the interaction information, and, in response to receiving an interaction end dotting event sent by the main speaking terminal, record an interaction end identifier and an interaction end timestamp corresponding to the interaction information;
a video acquisition module, configured to, in response to a live-broadcast end dotting event sent by the main speaking terminal, acquire a recorded video after recording is completed;
and a video generation module, configured to generate a playback video based on the recorded video, the interaction information, and the interaction start identifier, interaction start timestamp, interaction end identifier and interaction end timestamp corresponding to the interaction information, so that a playing terminal displays the interaction information at the playing time corresponding to the interaction start timestamp while playing the playback video.
11. A playback video playing apparatus, wherein the playback video is generated by the playback video generation method according to any one of claims 1-4, the apparatus comprising:
an information acquisition module, configured to acquire, from a server, supplemental enhancement information and interaction dotting information corresponding to the playback video, wherein the interaction dotting information comprises interaction information and an interaction start identifier, an interaction start timestamp, an interaction end identifier and an interaction end timestamp corresponding to the interaction information;
an interaction-point matching module, configured to perform interaction-point matching based on a current timestamp of the supplemental enhancement information and the interaction dotting information during playing of the playback video, and determine a target interaction identifier closest to the current timestamp;
an information determining module, configured to determine, in response to the target interaction identifier being an interaction start identifier, target interaction information corresponding to the target interaction identifier;
and an interaction display module, configured to display the target interaction information in response to the playback video being played to the playing time corresponding to the interaction start timestamp of the target interaction information.
12. An electronic device, comprising:
a processor; and
a memory in which a program is stored,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the playback video generation method according to any one of claims 1-4, or to perform the playback video play method according to any one of claims 5-9.
13. A non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the playback video generation method according to any one of claims 1-4 or to perform the playback video play method according to any one of claims 5-9.
CN202311368877.1A 2023-10-20 2023-10-20 Playback video generation method, playback video play device, electronic equipment and medium Pending CN117499690A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311368877.1A CN117499690A (en) 2023-10-20 2023-10-20 Playback video generation method, playback video play device, electronic equipment and medium


Publications (1)

Publication Number Publication Date
CN117499690A true CN117499690A (en) 2024-02-02

Family

ID=89675467


Country Status (1)

Country Link
CN (1) CN117499690A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination