CN114157877A - Playback data generation method and device, and playback method and device - Google Patents
- Publication number
- CN114157877A (application number CN202111171848.7A)
- Authority
- CN
- China
- Prior art keywords
- auxiliary
- content
- auxiliary content
- data
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
- H04N21/23614—Multiplexing of additional data and video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
One or more embodiments of the present specification provide a method and an apparatus for generating playback data, and a method and an apparatus for playback. The method may include: acquiring auxiliary data in a video recording process, the auxiliary data comprising auxiliary content input by a recording participant during the video recording process and a start input time corresponding to the auxiliary content; acquiring a video file obtained by the recording; and generating playback data based on the auxiliary data and the video file, the playback data being used to play the auxiliary content and the video file in association with each other according to the start input time.
Description
Technical Field
One or more embodiments of the present specification relate to the field of data processing technologies, and in particular, to a method and an apparatus for generating playback data, and a method and an apparatus for playback.
Background
Video recording and playback are widely used in many areas of daily life. In addition to a video file, video recording may generate associated auxiliary data. In the related art, the auxiliary data and the video file obtained by video recording are stored separately, so during playback the user can only play the auxiliary data or the video file individually, resulting in a poor user experience.
Disclosure of Invention
In view of this, one or more embodiments of the present specification provide a method and an apparatus for generating playback data, and a method and an apparatus for playback.
To achieve the above object, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments of the present specification, there is provided a playback data generation method including:
acquiring auxiliary data in a video recording process, wherein the auxiliary data comprises auxiliary content input by a recording participant during the video recording process and a start input time corresponding to the auxiliary content;
acquiring a video file obtained by recording;
generating playback data based on the auxiliary data and the video file, the playback data being used to play the auxiliary content and the video file in association with each other according to the start input time.
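As a rough illustration of this first aspect, the auxiliary data and playback data described above could be modeled as follows. This is a minimal Python sketch; the class and field names are illustrative assumptions, not part of the claims:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AuxiliaryContent:
    content: str        # e.g. text, a drawn shape, or an annotation
    start_time: float   # start input time, in seconds from the recording start

@dataclass
class PlaybackData:
    video_file: str                       # path or URL of the recorded video
    auxiliary: List[AuxiliaryContent] = field(default_factory=list)

def generate_playback_data(video_file: str,
                           auxiliary: List[AuxiliaryContent]) -> PlaybackData:
    # Sort by start input time so that playback can align each piece of
    # auxiliary content with the corresponding moment in the video.
    ordered = sorted(auxiliary, key=lambda a: a.start_time)
    return PlaybackData(video_file, ordered)
```

Keeping the auxiliary content sorted by start input time makes the later playback operations (seeking and lookup) straightforward.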
According to a second aspect of one or more embodiments of the present specification, there is provided a playback method of playback data, including:
acquiring playback data, wherein the playback data comprises a video file and auxiliary data corresponding to the video file, and the auxiliary data comprises auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content;
and in response to a trigger operation for the auxiliary content, adjusting the video file to the start input time to play it in association with the auxiliary content.
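This second aspect can be sketched as a trigger handler that seeks the video to the content's start input time. The `Player` class below is a hypothetical stand-in for a real video player API, used only to make the sketch self-contained:

```python
class Player:
    """Hypothetical stand-in for a video player with seek/play controls."""
    def __init__(self):
        self.position = 0.0
        self.playing = False

    def seek(self, t: float):
        self.position = t

    def play(self):
        self.playing = True

def on_auxiliary_triggered(player: Player, start_time: float):
    # Adjust the video to the content's start input time and resume playing,
    # so the video and the auxiliary content are played in association.
    player.seek(start_time)
    player.play()
```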
According to a third aspect of one or more embodiments of the present specification, there is provided a playback method of playback data, including:
acquiring playback data, wherein the playback data comprises a video file and auxiliary data corresponding to the video file, and the auxiliary data comprises auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content;
in response to a trigger operation for the video file, determining a specified time indicated by the trigger operation and target auxiliary content corresponding to the specified time, wherein the start input time corresponding to the target auxiliary content is not later than the specified time, and the start input time corresponding to the auxiliary content following the target auxiliary content is later than the specified time;
and displaying the target auxiliary content, and adjusting the video file to the specified time for associated playing.
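The lookup described in this third aspect — the target auxiliary content is the one whose start input time is the latest time not later than the specified time — amounts to a predecessor search over the sorted start input times. A minimal sketch using Python's `bisect` (the function name is an assumption for illustration):

```python
import bisect

def find_target_content(start_times, specified_time):
    """Return the index of the target auxiliary content: its start input time
    is not later than `specified_time`, and the next content (if any) starts
    later. `start_times` must be sorted in ascending order."""
    i = bisect.bisect_right(start_times, specified_time) - 1
    return i if i >= 0 else None  # None: no content has started yet
```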
According to a fourth aspect of one or more embodiments of the present specification, there is provided a method of generating assistance data, including:
generating auxiliary data corresponding to a video file according to auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content, the auxiliary data being used to be played in association with the video file according to the start input time.
According to a fifth aspect of one or more embodiments of the present specification, there is provided a method for playing auxiliary data, including:
acquiring auxiliary data, wherein the auxiliary data comprises auxiliary content input by a recording participant in the recording process of a video file and a starting input moment corresponding to the auxiliary content;
receiving a trigger operation aiming at the auxiliary content, wherein the trigger operation is used for calling the video file and adjusting the video file to the starting input moment so as to carry out associated playing with the auxiliary content.
According to a sixth aspect of one or more embodiments of the present specification, there is provided a playback data generation apparatus including:
the video recording device comprises a first auxiliary data acquisition unit, a second auxiliary data acquisition unit and a video recording unit, wherein the first auxiliary data acquisition unit is used for acquiring auxiliary data in a video recording process, and the auxiliary data comprises auxiliary content input by a recording participant in the video recording process and starting input time corresponding to the auxiliary content;
the video acquisition unit is used for acquiring a video file obtained by recording;
a playback data generation unit configured to generate playback data based on the auxiliary data and the video file, the playback data being used to perform associated playback of the auxiliary content and the video file according to the start input time.
According to a seventh aspect of one or more embodiments of the present specification, there is provided a playback apparatus that plays back data, including:
a first playback data acquisition unit, configured to acquire playback data, where the playback data includes a video file and auxiliary data corresponding to the video file, and the auxiliary data includes auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content;
a first playing unit, configured to adjust the video file to the start input time to perform associated playing with the auxiliary content in response to a trigger operation for the auxiliary content.
According to an eighth aspect of one or more embodiments of the present specification, there is provided a playback apparatus that plays back data, including:
a second playback data acquisition unit configured to acquire playback data including a video file and auxiliary data corresponding to the video file, the auxiliary data including auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content;
the determining unit is used for responding to the triggering operation of the video file, and determining the specified time indicated by the triggering operation and the target auxiliary content corresponding to the specified time; wherein the input starting time corresponding to the target auxiliary content is not later than the designated time, and the input starting time corresponding to the auxiliary content next to the target auxiliary content is later than the designated time;
and the second playing unit is used for displaying the target auxiliary content and adjusting the video file to the specified time for associated playing.
According to a ninth aspect of one or more embodiments of the present specification, there is provided an assistance data generation apparatus including:
the auxiliary data generating unit is configured to generate auxiliary data corresponding to the video file according to auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content, the auxiliary data being used to be played in association with the video file according to the start input time.
According to a tenth aspect of one or more embodiments of the present specification, there is provided an auxiliary data playback apparatus including:
the second auxiliary data acquisition unit is used for acquiring auxiliary data, wherein the auxiliary data comprises auxiliary content input by a recording participant in the recording process of the video file and input starting time corresponding to the auxiliary content;
a receiving unit, configured to receive a trigger operation for the auxiliary content, where the trigger operation is used to invoke the video file and adjust the video file to the start input time to perform associated playing with the auxiliary content.
According to an eleventh aspect of one or more embodiments of the present specification, there is provided an electronic device. The electronic device includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the first, second or third aspect by executing the executable instructions.
According to a twelfth aspect of one or more embodiments of the present description, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the first, second or third aspect.
According to a thirteenth aspect of one or more embodiments of the present specification, there is provided an electronic device. The electronic device includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the fourth aspect or the fifth aspect by executing the executable instructions.
According to a fourteenth aspect of one or more embodiments of the present specification, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the fourth or fifth aspect.
Drawings
Fig. 1 is an architectural diagram of a playback data generation and playback method provided in an exemplary embodiment of the present specification.
Fig. 2 is a flowchart of a method for generating playback data according to an exemplary embodiment of the present specification.
Fig. 3 is a flowchart of a playback method for playing back data according to an exemplary embodiment of the present specification.
Fig. 4 is a flowchart of another playback method for playing back data according to an exemplary embodiment of the present specification.
Fig. 5A-5B are flowcharts of a method for generating playback data of a network course and a playback method provided in an exemplary embodiment of the present specification.
Fig. 6 is a schematic diagram of a presentation interface of a network course in a live process according to an exemplary embodiment of the present specification.
Fig. 7 is a schematic diagram of a presentation interface of lecture data according to an exemplary embodiment of the present specification.
Fig. 8 is a schematic diagram of a presentation interface for playing back data according to an exemplary embodiment of the present specification.
Fig. 9 is a flowchart of a method for generating live playback data according to an exemplary embodiment of the present specification.
Fig. 10 is a flowchart of a playback method for live playback data according to an exemplary embodiment of the present specification.
Fig. 11 is a flowchart of another playback method for live playback data according to an exemplary embodiment of the present specification.
Fig. 12 is a schematic diagram of a display interface of a live shopping broadcast according to an exemplary embodiment of the present specification.
Fig. 13 is a flowchart of a method for generating auxiliary data according to an exemplary embodiment of the present specification.
Fig. 14 is a flowchart of a method for playing auxiliary data according to an exemplary embodiment of the present disclosure.
Fig. 15 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Fig. 16 is a block diagram of a playback data generation apparatus according to an exemplary embodiment of the present specification.
Fig. 17 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Fig. 18 is a block diagram of a playback apparatus that plays back data according to an exemplary embodiment of the present specification.
Fig. 19 is a block diagram of another playback apparatus that plays back data according to an exemplary embodiment of the present specification.
Fig. 20 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Fig. 21 is a block diagram of an auxiliary data generation apparatus according to an exemplary embodiment of the present specification.
Fig. 22 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Fig. 23 is a block diagram of an apparatus for playing auxiliary data according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
With the development of computer technology and network technology, live webcast and video recording have been widely popularized, and live webcast and video recording have been widely applied to various fields of people's lives.
In the related art, the auxiliary data input by the anchor of a webcast or by other related personnel during the live broadcast, and the live video recorded during the broadcast, are often stored separately. When playing back the webcast, the user can only play the auxiliary data or the live video individually, or must manually adjust both to the same time point, which makes the operation cumbersome and the user experience poor. Similarly, auxiliary data input by a recorder or other related personnel during recording and the recorded video file are often stored separately, so the user can only play the auxiliary data or the video file individually, again resulting in a poor user experience.
Accordingly, the present specification addresses the above technical problems in the related art by improving the generation and playback of playback data. The following embodiments are given for illustration.
Fig. 1 is an architecture diagram of a playback data generation and playback method shown in this specification. As shown in fig. 1, the architecture may include a server 11, a network 12, and an electronic device 13.
The server 11 may be a physical server comprising a separate host, or the server 11 may be a virtual server carried by a cluster of hosts. In operation, the server 11 may be configured with playback data generating means, which may be implemented in software and/or hardware, to generate playback data comprising auxiliary data and video files. Alternatively, the server 11 may be configured with a playback device for playing back the data, and the device may be implemented in software and/or hardware to perform associated playing of the auxiliary data and the video file in the playback data. Of course, the server 11 may be configured with both the generation device and the playback device of the playback data, which is not limited in this specification.
The electronic device 13 is one type of electronic device that a user can use, such as a mobile phone, a tablet device, a notebook computer, a PDA (Personal Digital Assistant), or a wearable device (such as smart glasses or a smart watch), which is not limited by one or more embodiments of the present specification. During operation, the electronic device may play auxiliary data and/or video files.
The network 12, used for interaction between the server 11 and the electronic device 13, may include various types of wired or wireless networks.
Of course, the auxiliary data and the video file may be obtained in a live broadcast process or in a recording process, which is not limited in this specification. The live broadcast process may be a game live broadcast, a shopping live broadcast, an education live broadcast, or the like, and the corresponding auxiliary data may be auxiliary content input by a live broadcast participant in the live broadcast process of the game live broadcast, the shopping live broadcast, the education live broadcast, or the like, and the auxiliary content may be text information input by the live broadcast participant, a drawn graphic or character, or a triggered animation effect, or the like, which is not limited in this specification.
Fig. 2 is a flowchart of a method for generating playback data according to an exemplary embodiment. As shown in fig. 2, taking the generation process of the playback data of the network lesson as an example, the method may be applied to a server (e.g., the server 11 shown in fig. 1, etc.); the method may comprise the steps of:
step 202, acquiring auxiliary data in a video recording process, where the auxiliary data includes auxiliary content input by a recording participant in the video recording process and a start input time corresponding to the auxiliary content.
In an embodiment, the video recording process may be a live broadcast of the network course or a recording process, the auxiliary data may be lecture data obtained in the course of giving lessons to the network course, the video file may be a video of giving lessons to the network course, and the recording participant is a course participant.
In one embodiment, the server may obtain lecture data of the network course, where the lecture data may include lecture content input by a course participant of the network course during a course of the network course and a start input time corresponding to the lecture content, and the lecture content may include text content input by the course participant and drawn graphics, logos, and other content. For example, the lecture data may include text information, a circled drawing, or a drawn picture, etc., which are input by the teacher of the network lesson through the electronic whiteboard. The participants of the course may include related personnel such as a lecturer and/or a lecturer of the network course, which is not limited in this specification.
In one embodiment, when a course participant inputs a plurality of pieces of lecture content, the lecture data may include each piece of lecture content and the start input time corresponding to it. The lecture data may also record the end input time corresponding to each piece of lecture content, which is not limited in this specification.
In one embodiment, when a course participant inputs a plurality of pieces of lecture content, the server may divide the acquired lecture content into corresponding lecture content groups and use all of the resulting groups as the lecture data. Each lecture content group may also be referred to as a "knowledge point", and "knowledge point" is used below. When a knowledge point contains only one piece of lecture content, the knowledge point may record that content and its corresponding start input time. When a knowledge point contains multiple pieces of lecture content, the knowledge point may record each piece of content together with only the start input time of the first piece; this avoids recording a start input time for every piece, reduces the amount of data in each knowledge point and the storage space occupied by the lecture data, and improves the efficiency of subsequently retrieving each knowledge point. Dividing the lecture content into knowledge points also improves the organization of the multiple pieces of lecture content contained in the lecture data.
In an embodiment, the same knowledge point may include one piece of lecture content, or multiple pieces whose adjacent intervals are each not greater than a preset duration; between adjacent knowledge points, the interval between the last piece of content in the previous knowledge point and the first piece in the next knowledge point is not less than the preset duration. The preset duration can be set according to actual requirements and is not limited in this specification. For example, suppose the preset duration is 10 seconds and a teacher of an online lesson inputs the numbers "1", "2", and "3" in order through an electronic whiteboard during a lecture, these being the first, second, and third pieces of lecture content. The server may detect the start input time at which the teacher begins drawing "1" and the end input time at which "1" is finished; since this is the first piece of content detected, it is placed in the first knowledge point. The server may likewise detect the start and end input times of "2"; if the interval between the start input time of "2" and the end input time of "1" is 4 seconds, which is less than the preset duration, "2" is also placed in the first knowledge point. The server may then detect the start and end input times of "3". If the interval between the start input time of "3" and the end input time of "2" is 12 seconds, which is greater than the preset duration, "3" does not belong to the same knowledge point as "2" and is placed in a new knowledge point.
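The gap-based grouping in the example above can be sketched as a single pass over the lecture contents sorted by start input time. This is a minimal illustration; the function name, the 10-second `max_gap` default, and the tuple layout are assumptions:

```python
def group_into_knowledge_points(contents, max_gap=10.0):
    """Group (label, start_time, end_time) tuples into knowledge points.

    A new knowledge point begins whenever the interval between one content's
    end input time and the next content's start input time exceeds `max_gap`.
    `contents` is assumed to be sorted by start input time."""
    groups = []
    for item in contents:
        if groups and item[1] - groups[-1][-1][2] <= max_gap:
            groups[-1].append(item)   # close enough: same knowledge point
        else:
            groups.append([item])     # gap too large: start a new one
    return groups
```

Running this on the "1", "2", "3" example (gaps of 4 s and 12 s against a 10-second threshold) yields two knowledge points, matching the behavior described above.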
In one embodiment, between adjacent knowledge points, a preset function may be triggered between the end input time of the last piece of lecture content in the previous knowledge point and the start input time of the first piece in the next knowledge point. The preset functions may include functions that can be triggered by the course participant, such as a shape input function, a line input function, and a text input function, which are not limited in this specification. For example, a teacher of an online lesson may input the numbers "1", "2", and "3" in sequence through an electronic whiteboard, these being the first, second, and third pieces of lecture content. The server may detect the start and end input times of "1"; as the first piece of content detected, it is placed in the first knowledge point. If, after the end input time of "1", the server detects that a preset function is triggered before it detects the start input time of "2", then "1" and "2" do not belong to the same knowledge point, and "2" is placed in a new knowledge point.
In one embodiment, the server may determine a plurality of selection areas formed on the lecture content input interface and divide the pieces of lecture content located in the same selection area into the same knowledge point. A selection area may be set according to actual requirements; it may be closed or unclosed, and its shape may be a circle, a square, a polygon, or the like, which is not limited in this specification. For example, suppose a teacher inputs the numbers "1", "2", and "3" in sequence through an electronic whiteboard and then draws a circle around "1" and "2". If the preset selection area is a circular area, the server may, after detecting the circle, divide "1" and "2" into the same knowledge point.
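For a circular selection area, this grouping could be sketched as a simple point-in-circle test over the positions at which each piece of content was input. The dictionary layout and function name below are illustrative assumptions:

```python
import math

def contents_in_circle(contents, center, radius):
    """Select the pieces of lecture content whose input position lies inside
    a circular selection area; these can then share one knowledge point."""
    cx, cy = center
    return [c for c in contents
            if math.hypot(c["x"] - cx, c["y"] - cy) <= radius]
```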
Step 204, acquiring the recorded video file.
In an embodiment, the network course may be recorded by a collection device to generate a teaching video, where the collection device may include an electronic device such as a camera or a video camera. The teaching video may include picture data of the teaching party and/or the listening party of the network course, and may also include picture data of an experimental site corresponding to the network course; of course, the teaching video may also include audio data captured during the network course, which is not limited in this specification.
It should be noted that step 202 and step 204 are independent of each other and do not necessarily occur in a fixed order. In some scenarios, step 202 may be performed first and then step 204, while in other scenarios, step 204 may be performed first and then step 202, which is not limited in this specification.
Step 206, generating playback data based on the auxiliary data and the video file, the playback data being used for playing the auxiliary content and the video file in association with each other according to the start input time.
In an embodiment, the playback data of the online lesson may be generated from the obtained lecture data and lecture video. The lecture video associated with a given lecture content can then be played according to the start input time of that lecture content recorded in the playback data, so that during playback the content played in the lecture video matches the lecture content being displayed. This helps a user better understand the related content, simplifies user operation, meets the user's requirement for playing back the lecture video together with the lecture data, and improves the user experience.
In an embodiment, the playback data may be a data set composed of lecture data and lecture video, or the playback data may be another data generated by processing the lecture data and lecture video, which is not limited in this specification.
In an embodiment, the server may further determine the termination input time corresponding to the last lecture content included in each knowledge point, and record that termination input time into the knowledge point; that is, each knowledge point may record the start input time corresponding to its first lecture content and the termination input time corresponding to its last lecture content. The server can determine the time interval corresponding to each knowledge point from the start input time and the termination input time, and can control the lecture content contained in any one knowledge point and the portion of the teaching video corresponding to that time interval to be played in association on their own. In this way, playing of the lecture content and the teaching video can be flexibly controlled, the playback requirement of the user for playback data corresponding to any knowledge point can be met, and the convenience of user operation can be improved.
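Deriving a knowledge point's time interval is a one-line computation once the per-content times are recorded. A minimal sketch, assuming each content is represented as a (start, end) pair in input order:

```python
def knowledge_point_interval(point):
    """point: list of (start, end) time pairs, one per lecture content,
    in input order. The playable interval runs from the first content's
    start input time to the last content's termination input time."""
    return point[0][0], point[-1][1]
```

The resulting interval is what the player would use to play just that knowledge point's segment of the teaching video.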
In one embodiment, the server may receive a deletion instruction for a specific knowledge point, and may delete all the lecture contents contained in the specific knowledge point from the lecture data according to the deletion instruction. Alternatively, the server may receive a deletion instruction for the specified lecture content in the specified knowledge point, and the specified lecture content may be deleted from the lecture data according to the deletion instruction.
In an embodiment, the server may obtain the position information of the lecture content in the lecture content input interface, and record the position information into the lecture data, so that in the process of playing back the playback data of the network course, the lecture content can be displayed at the corresponding position. This allows the lecture content to be displayed more accurately, better reproduces the lecture content input during the network course, and facilitates the user's understanding. The position information may include the coordinate position of the lecture content in the lecture content input interface or its relative position to a standard reference object, and the like, which is not limited in this specification.
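A hedged sketch of recording a lecture content together with its position; the dictionary schema is illustrative only, not the format defined by the specification:

```python
def record_content(aux_data, text, start_time, x, y):
    """Append a lecture content entry to the auxiliary data, keeping the
    coordinates at which it was drawn so that playback can redraw it at
    the same spot on the input interface."""
    aux_data.setdefault("contents", []).append({
        "text": text,
        "start_time": start_time,            # start input time
        "position": {"x": x, "y": y},        # coordinate in input interface
    })
    return aux_data
```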
In an embodiment, the server may further obtain a document opened while the course participant was inputting the lecture content, together with the page information of the relevant position in the document, and record the document and the page information into the lecture data, so that when the playback data of the network course is played back, the document can be called and the document content corresponding to the page information can be displayed. The page information may include the page number, paragraph number, line number, and the like corresponding to the document content in the document, which is not limited in this specification.
Fig. 3 is a flowchart of a playback method for playing back data according to an exemplary embodiment. As shown in fig. 3, taking a playback process of playback data of a network course as an example for illustration, the method may be applied to an electronic device (e.g., the electronic device 13 shown in fig. 1, etc.); the method may comprise the steps of:
In an embodiment, the video recording process may be a live broadcast or recording process of the network course, the auxiliary data may be the lecture data obtained during the teaching of the network course, the video file is the teaching video of the network course, and the recording participant is the course participant.
In one embodiment, the electronic device may obtain playback data of the network course, where the playback data may include lecture data and a lecture video corresponding to the network course, and the lecture data may include lecture content input by a course participant during the network course and the start input time corresponding to the lecture content. The lecture contents may include text contents input by the course participants, as well as drawn graphics, marks, and the like. For example, the lecture data may include text information, a circled drawing, or a drawn picture, etc., input by the teacher of the network lesson through the electronic whiteboard. The course participants may include related personnel such as a lecturer and/or a listener of the network course, which is not limited in this specification.
In an embodiment, the electronic device may further obtain a teaching video of the online course, where the teaching video may be generated by recording the online course by using a collection device, and the collection device may include an electronic device such as a camera or a video camera, and the teaching video may include a picture of a teaching party and/or a listening party of the online course, and may further include a picture of an experimental site corresponding to the online course, which is not limited in this specification.
Step 304, in response to a trigger operation for the auxiliary content, adjusting the video file to the start input time to perform associated playing with the auxiliary content.
In an embodiment, the lecture content contained in the lecture data may be displayed, and in response to a received trigger operation for the lecture content, the electronic device may adjust the lecture video to the start input time corresponding to that lecture content for associated playing. In this way, while the lecture content is displayed, the corresponding teaching video is adjusted to the portion related to that content, which helps a user better understand the related content, meets the user's requirement for associated playing of lecture content and teaching video, and improves the user experience. In the process of displaying the lecture content, the lecture content itself or identification information corresponding to the lecture content may be displayed, so the trigger operation may target either the lecture content or its identification information. The identification information may include the name of the lecture content or summary information of the lecture content, and may be displayed in the form of a play control or the like, which is not limited in this specification. The trigger operation for the lecture content or the identification information may take any form, such as a mouse click, a mouse hover, a touch operation, or a line-of-sight focusing operation, which is not limited in this specification.
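The trigger-to-seek behavior can be sketched as a small handler. The `Player` class below is a hypothetical stand-in for a real video player, introduced only for illustration:

```python
class Player:
    """Stand-in for a real video player; only the seek position is tracked."""
    def __init__(self):
        self.position = 0

    def seek(self, t):
        self.position = t

def on_content_triggered(content, player):
    """Handle a click/touch on a lecture content (or its identification
    information) by seeking the video to the content's start input time."""
    player.seek(content["start_time"])
```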
In one embodiment, the lecture data corresponding to the network lesson may include a plurality of lecture content groups, each hereinafter referred to as a "knowledge point". The lecture data corresponding to the network lesson may include a plurality of knowledge points, and each knowledge point may include at least one lecture content and the start input time corresponding to the first of those lecture contents. In response to a trigger operation for a lecture content, the electronic device may determine the target knowledge point indicated by the trigger operation, display all the lecture contents contained in the target knowledge point, and adjust the teaching video to the start input time corresponding to the first lecture content contained in the target knowledge point for associated playing.
In an embodiment, the same knowledge point may include a single lecture content, or multiple lecture contents whose adjacent intervals are not greater than a preset time length; that is, for adjacent lecture contents within the same knowledge point, the time interval between the termination input time of the previous lecture content and the start input time of the next lecture content is not greater than the preset time length. For adjacent knowledge points, the interval between the last lecture content contained in the previous knowledge point and the first lecture content contained in the next knowledge point is not less than the preset time length; that is, the time interval between the termination input time of the last lecture content contained in the previous knowledge point and the start input time of the first lecture content contained in the next knowledge point is not less than the preset time length. The preset time length can be set according to actual requirements, which is not limited in this specification.
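This gap-based grouping rule can be sketched as a single pass over the contents, sorted by start input time. A minimal illustration, assuming each content is a (start, end) pair in seconds:

```python
def split_by_gap(contents, max_gap):
    """contents: list of (start, end) pairs sorted by start input time.
    A gap greater than max_gap between one content's termination input
    time and the next content's start input time begins a new knowledge
    point."""
    points = []
    for c in contents:
        if points and c[0] - points[-1][-1][1] <= max_gap:
            points[-1].append(c)   # within the preset time length: same point
        else:
            points.append([c])     # gap too large (or first content): new point
    return points
```

With a 10-second threshold, contents ending at 10:04 and starting at 10:07 fall into the same knowledge point, matching the worked example later in this specification.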
In one embodiment, in the adjacent knowledge point, the preset function may be triggered between the termination input time of the last lecture content contained in the previous knowledge point and the start input time of the first lecture content contained in the next knowledge point. The preset functions may include functions that may be triggered by the classroom participant, such as a shape input function, a line input function, and a text input function, which are not limited in this specification.
In one embodiment, the same knowledge point may include a plurality of lecture contents corresponding to the same selected area on the lecture content input interface. The lecture content input interface may include an interface through which a classroom participant can input lecture content, such as an electronic whiteboard interface or an input interface of an electronic device, which is not limited in this specification. The selection area may be set according to actual requirements, the selection area may be a closed area or an unclosed area, and the shape of the selection area may be a circle, a square, or a polygon, which is not limited in this specification.
In an embodiment, the electronic device may display identification information corresponding to all knowledge points included in the lecture data corresponding to the network lesson, so that a trigger operation for the identification information of the target knowledge point may be received. The identification information of the knowledge points may be displayed in a display list, or the identification information of the knowledge points may be displayed at corresponding positions, the identification information may include names of the knowledge points or summary information of the knowledge points, and the display form of the identification information may be a play control, and the like, which is not limited in this specification. The server may automatically generate the identification information of each knowledge point when dividing each lecture content into the corresponding knowledge points, or the server may receive the identification information of each knowledge point input by the user when dividing each lecture content into the corresponding knowledge points.
In an embodiment, the electronic device may directly display identification information corresponding to all the lecture contents included in the lecture data corresponding to the network course, so as to receive a trigger operation for the identification information of the lecture contents, where the identification information may include a name of the lecture contents or summary information of the lecture contents, and the display form of the identification information may be a play control, which is not limited in this specification.
In an embodiment, each knowledge point may record the start input time corresponding to its first lecture content and the termination input time corresponding to its last lecture content. When the lecture video is adjusted to the start input time corresponding to the first lecture content contained in the target knowledge point for playing, the termination input time corresponding to the last lecture content contained in the target knowledge point can be determined, and the lecture video can be controlled to stop playing at that termination input time. In this way, the lecture content contained in any one knowledge point and the lecture video in the corresponding time interval can be played in association on their own, the user can flexibly control the playing of the lecture content and the lecture video, and the playback requirement of the user for the playback data corresponding to any one knowledge point can be met.
In an embodiment, each knowledge point may only record the start input time corresponding to the contained first lecture content, and then, when the lecture video is adjusted to the start input time corresponding to the first lecture content contained in the target knowledge point for playing, the start input time corresponding to the first lecture content contained in the next knowledge point adjacent to the target knowledge point may be used as the termination input time of the target knowledge point, so that the lecture video may be controlled to stop playing at the termination input time.
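When only start input times are recorded, the stop time falls out of the ordering of the knowledge points themselves. A minimal sketch (the video-duration fallback for the last knowledge point is an assumption for the example, not stated by the source):

```python
def termination_time(point_starts, i, video_duration):
    """point_starts: ascending start input times of each knowledge point.
    The next point's start serves as point i's stop time; the last point
    is assumed here to play to the end of the video."""
    if i + 1 < len(point_starts):
        return point_starts[i + 1]
    return video_duration
```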
In an embodiment, each knowledge point may record only the start input time corresponding to its first lecture content. In that case, when the lecture video is adjusted to the start input time corresponding to the first lecture content contained in the target knowledge point for playing, the lecture contents and the lecture video corresponding to the other knowledge points after the target knowledge point may also be played in association in turn, so that the lecture content and lecture video corresponding to each knowledge point are played in sequence, improving the continuity of playing the lecture content and the lecture video.
In an embodiment, the lecture data corresponding to the network course may further include the position information of the lecture content in the lecture content input interface, and the electronic device may determine, in response to a trigger operation for the lecture content, the target position information corresponding to that lecture content, so that the lecture content can be displayed at the position indicated by the target position information. The position information may include the coordinate position of the lecture content in the lecture content input interface or its relative position to a standard reference object, and the like, which is not limited in this specification. Since the position information of the lecture content in the lecture content input interface is recorded in the lecture data, the lecture content can be displayed at the corresponding position during playback of the playback data of the network course, so the lecture content can be displayed more accurately, the lecture content input during the network course can be better reproduced, and the user's understanding is facilitated.
In an embodiment, the lecture data corresponding to the network course may include a document opened while the course participant was inputting the lecture content, together with the page information of the relevant position in the document, where the page information may include the page number, paragraph number, line number, and the like corresponding to the document content. In response to a trigger operation for the lecture content, the electronic device may call the corresponding document and display the document content corresponding to the page information in association with the lecture content, so that matched display of the document content and the lecture content can be realized, helping the user understand the lecture content and improving the user experience.
Fig. 4 is a flowchart of a playback method for playing back data according to an exemplary embodiment. As shown in fig. 4, taking a playback process of playback data of a network course as an example for illustration, the method may be applied to an electronic device (e.g., the electronic device 13 shown in fig. 1, etc.); the method may comprise the steps of:
In an embodiment, the video recording process may be a live broadcast or recording process of the network course, the auxiliary data may be the lecture data obtained during the teaching of the network course, the video file is the teaching video of the network course, and the recording participant is the course participant.
In one embodiment, the electronic device may obtain playback data of the network course, where the playback data may include lecture data and a lecture video corresponding to the network course, and the lecture data may include lecture content input by a course participant during the network course and the start input time corresponding to the lecture content. The lecture contents may include text contents input by the course participant, as well as drawn graphics, marks, and the like. For example, the lecture data may include text information, a circled drawing, or a drawn picture, etc., input by the teacher of the network lesson through the electronic whiteboard. The course participants may include related personnel such as a lecturer and/or a listener of the network course, which is not limited in this specification.
In an embodiment, the electronic device may further obtain a teaching video of the online course, where the teaching video may be generated by recording the online course by using a collection device, and the collection device may include an electronic device such as a camera or a video camera, and the teaching video may include picture data of a teaching party and/or a listening party of the online course, and may also include picture data of an experimental site corresponding to the online course, and the like.
In an embodiment, the electronic device may receive a trigger operation for the teaching video, and may determine, in response to the trigger operation, the specified time indicated by the trigger operation and the target lecture content corresponding to that specified time. The trigger operation may be an operation of opening the teaching video, in which case the specified time indicated by the trigger operation is the start time of the teaching video; alternatively, the trigger operation may be a trigger on some position of the playing progress bar of the teaching video, in which case the specified time is the time corresponding to that position, which is not limited in this specification. The start input time corresponding to the target lecture content is not later than the specified time, and the start input time corresponding to the lecture content following the target lecture content is later than the specified time.
In an embodiment, the lecture data corresponding to the network course may include a plurality of lecture contents divided into corresponding knowledge points. The target lecture content then comprises all the lecture contents contained in the target knowledge point, and the start input time corresponding to the target lecture content is the start input time corresponding to the first lecture content contained in the target knowledge point. The lecture content following the target lecture content is all the lecture content contained in the knowledge point following the target knowledge point, and its start input time is the start input time corresponding to the first lecture content contained in that following knowledge point.
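Finding the knowledge point whose start input time is "not later than the specified time" is a standard sorted-search problem. A hedged sketch using the standard library's `bisect` (one of many possible implementations):

```python
import bisect

def target_point_index(point_starts, t):
    """point_starts: ascending start input times of each knowledge point.
    Returns the index of the latest point whose start input time is not
    later than t, or None when t precedes every point."""
    i = bisect.bisect_right(point_starts, t) - 1
    return i if i >= 0 else None
```

For starts [602, 615], a seek to second 610 resolves to the first knowledge point, and a seek to 615 resolves to the second.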
Step 406, displaying the target auxiliary content, and adjusting the video file to the specified time to perform associated playing.
In an embodiment, the electronic device may display the determined target lecture content and adjust the teaching video to the specified time for associated playing, so that when the teaching video is adjusted to the specified time, the target lecture content corresponding to that time is displayed. Associated playing of the teaching video and the target lecture content is thereby realized, helping the user better understand the related content, meeting the user's requirement for associated playing of the lecture content and the teaching video, and improving the user experience.
For convenience of understanding, the following describes the technical solutions of the description in detail by taking the generation and playback process of the playback data of the network lesson as an example.
Fig. 5A and 5B are flowcharts of a method for generating playback data of a network course and a playback method provided in an exemplary embodiment of the present specification. As shown in fig. 5A, taking the generation of the playback data N of the network lesson M as an example, the method may include the following steps:
In this embodiment, the teacher of the online lesson M may input the lecture content through the electronic whiteboard during the course of the lecture.
In this embodiment, the presentation interface of the network course M during live broadcast may be as shown in fig. 6. The presentation interface 600 can include a function area 601, a video area 602, and a lecture area 603. The function area 601 may be used to present preset functions that can be triggered, and may include a drawing function 6011, a rectangular input function 6012, a straight line input function 6013, a text input function 6014, and the like; the function area 601 may further include a document option, a whiteboard option, a shared screen option, and the like. The video area 602 may be used to display the live video picture of the teacher of the online lesson, and of course may also display other video pictures related to the online lesson, which is not limited in this specification. The lecture area 603 may be used to show the lecture contents input by a lecturer or a listener of the network course.
In the present embodiment, the teacher of the network lesson M inputs the first piece of lecture content 6031 in the lecture area 603 shown in fig. 6 during the course of the lecture. The server may acquire the start input time of the first lecture content 6031 as "10:02", that is, 10 minutes and 2 seconds, and its termination input time as "10:04". Since the first lecture content is the first detected lecture content, it may be divided into the first knowledge point. The first lecture content and its start input time may be recorded in the first knowledge point.
In the present embodiment, the teacher of the network lesson M inputs the second lecture content 6032 in the lecture area 603 shown in fig. 6 during the course of the lecture. The server may acquire the start input time of the second lecture content 6032 as "10:07", that is, 10 minutes and 7 seconds, and its termination input time as "10:10". Assume that the preset time period is 10 seconds, i.e., adjacent lecture contents belonging to the same knowledge point are set in advance to be separated by no more than 10 seconds. The server may determine that the time interval between the start input time of the second lecture content 6032 and the termination input time of the first lecture content 6031 is 3 seconds, which is less than the preset time period, and may therefore also divide the second lecture content 6032 into the first knowledge point.
In step 503A, it is determined that the preset function is triggered.
In step 504A, the input third lecture content is received.
In the present embodiment, after the termination input time "10:10" of the second lecture content 6032, the server may detect that the rectangular input function 6012 shown in fig. 6 is triggered. After the rectangular input function 6012 is triggered, the third lecture content 6033, input by the teacher of the online lesson M in the lecture area 603 shown in fig. 6 during the course of the lecture, may be received; the start input time of the third lecture content 6033 is "10:15", that is, 10 minutes and 15 seconds, and its termination input time is "10:19". Since the rectangular input function 6012 is triggered after the termination input time of the second lecture content and before the start input time of the third lecture content, the third lecture content 6033 and the second lecture content 6032 belong to different knowledge points, and the server may divide the third lecture content 6033 into the second knowledge point.
In step 505A, the generated lecture data X is acquired.
In this embodiment, after the teacher of the network lesson M has input all three lecture contents, the lecture data X corresponding to the network lesson M may be generated. The lecture data X may include a first knowledge point and a second knowledge point, wherein the first knowledge point may include the first lecture content 6031 and the second lecture content 6032, together with the start input time "10:02" of the first lecture content and the termination input time "10:10" of the second lecture content. The second knowledge point may include the third lecture content 6033, together with its start input time "10:15" and termination input time "10:19". Further, the lecture data X also records identification information "knowledge point 1" corresponding to the first knowledge point and identification information "knowledge point 2" corresponding to the second knowledge point.
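The lecture data X described above can be pictured as a simple nested structure. The dictionary layout below is an illustrative assumption (the specification does not define a serialization format); the labels and times mirror the worked example:

```python
lecture_data_x = {
    "knowledge_points": [
        {
            "label": "knowledge point 1",
            "contents": ["first lecture content 6031",
                         "second lecture content 6032"],
            "start_input_time": "10:02",       # first content's start
            "termination_input_time": "10:10",  # last content's end
        },
        {
            "label": "knowledge point 2",
            "contents": ["third lecture content 6033"],
            "start_input_time": "10:15",
            "termination_input_time": "10:19",
        },
    ],
}
```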
In this embodiment, the server may further obtain the teaching video Y displayed in the video area 602 shown in fig. 6, where the teaching video Y may include the video pictures and audio information of the teacher of the online course M during the course of teaching.
In step 507A, playback data N is generated.
In this embodiment, the server may generate the playback data N of the network course M according to the obtained lecture data X and teaching video Y, where the playback data N may be used to perform associated playing of the corresponding lecture content and teaching video according to each start input time recorded in the lecture data X.
As shown in fig. 5B, taking the example of playing back the playback data N of the network lesson M, the method may include the following steps:
In step 501B, the playback data N of the network course M is obtained.
In this embodiment, the electronic device may obtain the playback data N of the network course M generated as described above. The lecture data X may include a first knowledge point and a second knowledge point, wherein the first knowledge point may include the first lecture content 6031 and the second lecture content 6032, together with the start input time "10:02" of the first lecture content and the termination input time "10:10" of the second lecture content. The second knowledge point may include the third lecture content 6033, together with its start input time "10:15" and termination input time "10:19". Further, the lecture data X also records identification information "knowledge point 1" corresponding to the first knowledge point and identification information "knowledge point 2" corresponding to the second knowledge point.
In the present embodiment, it is assumed that the user opens the lecture data X in the playback data N and obtains the presentation interface shown in fig. 7. In the presentation interface, the lecture area 603 can be used to present each lecture content and the knowledge point list 701, where the knowledge point list 701 can present the identification information corresponding to all knowledge points in the lecture data X. The identification information corresponding to all knowledge points contained in the lecture data can thus be clearly presented, allowing a user to quickly learn how many knowledge points the lecture data contains. Of course, when the lecture data X is opened, only the lecture contents or only the knowledge point list may be displayed, which is not limited in this specification.
In this embodiment, assuming that a trigger operation of the user for the second lecture content 6032 is received, the electronic apparatus may determine that a knowledge point corresponding to the trigger operation is a first knowledge point.
In this embodiment, the electronic apparatus can present all lecture contents contained in the first knowledge point, i.e., the first lecture content 6031 and the second lecture content 6032, in response to the above-described trigger operation, and can determine that the start input time of the first lecture content contained in the first knowledge point is "10:02". The electronic device may adjust the playing progress bar of the teaching video Y to the start input time "10:02", as shown in fig. 8.
In this embodiment, while the first lecture content 6031 and the second lecture content 6032 are displayed, the corresponding teaching video Y can be adjusted to the part related to the lecture content for playing, which helps the user better understand the related content, meets the user's requirement for associated playing of the lecture content and the teaching video, and improves the user experience.
The technical solution of the present specification may also be applied to generating playback data for a live broadcast process, playing back such playback data, and the like. The live broadcast may be a game live broadcast, a shopping live broadcast, or an education live broadcast, and the corresponding auxiliary data may be text information input by a live broadcast participant during the live broadcast, drawn graphics or characters, triggered animation effects, or the like. The principle of this technical solution is similar to the above embodiments, and implementation details can be found in the above embodiments, so they are not repeated below.
Accordingly, fig. 9 is a flowchart of a method for generating live playback data according to an exemplary embodiment. As shown in fig. 9, a generation process of live playback data is exemplified; the method may comprise the steps of:
Step 902, obtaining associated data in a live broadcast process, where the associated data includes associated content input by a live broadcast participant in the live broadcast process and a start input time corresponding to the associated content.
Step 904, acquiring the live video obtained in the live broadcast process.
Step 906, generating playback data based on the associated data and the live video, wherein the playback data is used for performing associated playing on the associated content and the live video according to the starting input moment.
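Steps 902 to 906 can be sketched as follows; the `AssociatedContent` and `PlaybackData` structures, their field names, and the use of seconds for start input times are illustrative assumptions, since the specification fixes no particular storage format:

```python
from dataclasses import dataclass, field

@dataclass
class AssociatedContent:
    text: str          # typed text, a drawn graphic's label, an animation id, etc.
    start_time: float  # start input time, in seconds from the start of the live video

@dataclass
class PlaybackData:
    video_path: str                          # the live video obtained in step 904
    contents: list = field(default_factory=list)

def generate_playback_data(video_path: str, raw_items) -> PlaybackData:
    """Step 906: combine the associated data with the live video, keeping
    contents ordered by start input time for later associated playing."""
    data = PlaybackData(video_path)
    for text, start in sorted(raw_items, key=lambda item: item[1]):
        data.contents.append(AssociatedContent(text, start))
    return data
```

Sorting by start input time up front means a later trigger on any associated content only needs a lookup, not a scan.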
In an embodiment, the video recording process may be a live broadcast process of various network live broadcasts, the auxiliary data may be associated data obtained in the live broadcast process, the video file may be a live broadcast video, and the recording participant is a live broadcast participant. The associated data may be associated content input by a live broadcast participant in a live broadcast process such as a game live broadcast, a shopping live broadcast, or an education live broadcast, and the associated content may be text information input by the live broadcast participant, a drawn graphic or character, or a triggered animation effect, which is not limited in this specification.
The specific implementation process of steps 902 to 906 can refer to steps 202 to 206 described above, and will not be described herein.
Accordingly, fig. 10 is a flowchart of a playback method of live playback data according to an exemplary embodiment. As shown in fig. 10, a playback process of live playback data is exemplified; the method may comprise the steps of:
Step 1002, obtaining playback data, where the playback data includes a live video and associated data corresponding to the live video, and the associated data includes associated content input by a live broadcast participant in the live broadcast process and a start input time corresponding to the associated content.
Step 1004, in response to a trigger operation for the associated content, adjusting the live video to the start input time to perform associated playing with the associated content.
The specific implementation process of steps 1002 to 1004 can refer to steps 302 to 304 described above and is not repeated here.
Accordingly, fig. 11 is a flowchart of a playback method for live playback data according to an exemplary embodiment. As shown in fig. 11, a playback process of live playback data is exemplified; the method may comprise the steps of:
Step 1102, obtaining playback data, where the playback data includes a live video and associated data corresponding to the live video, and the associated data includes associated content input by a live broadcast participant in the live broadcast process and a start input time corresponding to the associated content.
Step 1104, in response to a trigger operation for the live video, determining a specified time indicated by the trigger operation and target associated content corresponding to the specified time.
Step 1106, displaying the target associated content, and adjusting the live video to the specified time for associated playing.
The specific implementation process of steps 1102-1106 can refer to steps 402-406, which are not described herein again.
For ease of understanding, a live shopping broadcast is taken as an example below. As shown in fig. 12, the display interface 1200 of the live shopping broadcast may include a video area 1201 and a product area 1202. The video area 1201 may be used to display the live video picture during the live broadcast; of course, other video pictures related to the live broadcast may also be displayed, which is not limited in this specification. The product area 1202 may be used to display a product image 12021, a product price 12022, comment information 12023, and the like, and the product image 12021, the product price 12022, and the comment information 12023 may be recorded as associated content in the associated data corresponding to the live broadcast.
Fig. 13 is a flowchart of a method for generating assistance data according to an exemplary embodiment. As shown in fig. 13, the method may include the steps of:
In this embodiment, the auxiliary data may be lecture data acquired during a course of an online course, and the video file may be a video of the course of the online course, or the auxiliary data may be associated data acquired during a live broadcast process, and the video file may be a live broadcast video acquired during the live broadcast process.
In the present embodiment, only the process in which the server generates auxiliary data based on auxiliary content and the start input time corresponding to the auxiliary content is involved; the acquisition of the video file is not involved. In fact, the video file may be recorded through a capture device by the server that generates the auxiliary data, or by another server different from the server that generates the auxiliary data, which is not limited in this specification. By associating the video file with the auxiliary data, the video file associated with the auxiliary content in the auxiliary data can be played. The video file and the auxiliary data may be associated by techniques in the related art, such as adding a mark corresponding to the start input time of the auxiliary content to the video file, which is not limited in this specification.
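The mark-based association mentioned above can be sketched as follows; representing the video's mark table as a mapping from start input time to content identifiers is an assumption for illustration only:

```python
def add_marks(video_marks: dict, aux_contents: list) -> dict:
    """Add, for each auxiliary content, a seek mark at its start input time,
    so playback can jump from the content to the matching video position."""
    for content in aux_contents:
        video_marks.setdefault(content["start_time"], []).append(content["id"])
    return video_marks
```

Contents sharing a start input time simply share a mark, so a single seek surfaces all of them.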
The specific implementation process of step 1302 may refer to step 202, which is not described herein again.
Fig. 14 is a flowchart of a method for playing auxiliary data according to an exemplary embodiment. As shown in fig. 14, the method may include the steps of:
Step 1402, obtaining auxiliary data, where the auxiliary data includes auxiliary content input by a recording participant in the recording process of a video file and a start input time corresponding to the auxiliary content.
Step 1404, receiving a trigger operation for the auxiliary content, where the trigger operation is used to call the video file and adjust the video file to the start input time to perform associated playing with the auxiliary content.
In this embodiment, the auxiliary data may be lecture data acquired during a course of an online course, and the video file may be a video of the course of the online course, or the auxiliary data may be associated data acquired during a live broadcast process, and the video file may be a live broadcast video acquired during the live broadcast process.
In the present embodiment, only the process in which the electronic device acquires the auxiliary data and receives the trigger operation for the auxiliary content is involved; the acquisition of the video file is not involved. In fact, the video file may be recorded through a capture device by the server that generates the auxiliary data, or by another server different from the server that generates the auxiliary data, which is not limited in this specification. The video file may be pre-associated with the auxiliary data so that the video file associated with the auxiliary content in the auxiliary data can be played. The video file and the auxiliary data may be associated by techniques in the related art, such as adding a mark corresponding to the start input time of the auxiliary content to the video file, which is not limited in this specification.
The specific implementation process of steps 1402-1404 can refer to steps 302-304 described above, and will not be described herein.
FIG. 15 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 15, at the hardware level, the apparatus includes a processor 1502, an internal bus 1504, a network interface 1506, a memory 1508, and a non-volatile memory 1510, and may also include hardware required for other functions. The processor 1502 reads a corresponding computer program from the non-volatile memory 1510 into the memory 1508 and runs it, forming, at the logical level, an apparatus for generating playback data. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to each logic unit and may also be hardware or logic devices.
Referring to fig. 16, in a software implementation, the playback data generation apparatus may include a first auxiliary data acquisition unit 1602, a video acquisition unit 1604, and a playback data generation unit 1606. Wherein:
a first auxiliary data obtaining unit 1602, configured to obtain auxiliary data in a video recording process, where the auxiliary data includes auxiliary content input by a recording participant in the video recording process and a start input time corresponding to the auxiliary content;
a video acquiring unit 1604, configured to acquire a video file obtained by recording;
a playback data generation unit 1606 configured to generate playback data based on the auxiliary data and the video file, the playback data being used for associated playback of the auxiliary content and the video file according to the start input time.
Optionally, when a plurality of auxiliary contents are input by the recording participant, the auxiliary data includes each auxiliary content and a start input time corresponding to each auxiliary content.
Optionally, when the recording participant inputs a plurality of auxiliary contents, the auxiliary contents are respectively divided into corresponding auxiliary content groups, and each auxiliary content group includes at least one auxiliary content and a start input time corresponding to a first auxiliary content of the at least one auxiliary content.
Optionally, the same auxiliary content group includes one piece of auxiliary content, or includes a plurality of pieces of auxiliary content whose adjacent intervals are not greater than a preset time length;
in the adjacent auxiliary content group, the adjacent interval between the last auxiliary content contained in the former group and the first auxiliary content contained in the latter group is not less than the preset time length; alternatively, in the adjacent auxiliary content group, the preset function is triggered between the termination input time of the last auxiliary content included in the previous group and the start input time of the first auxiliary content included in the next group.
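The interval-based grouping rule above can be sketched as follows. For simplicity, this sketch measures the adjacent interval between consecutive start input times; per the description, the interval could equally be measured from the previous content's termination input time, or a group boundary could be forced when the preset function is triggered:

```python
def group_by_interval(contents: list, max_gap: float) -> list:
    """Divide time-ordered auxiliary contents into groups: a content whose
    interval from the previous one exceeds max_gap starts a new group."""
    groups = []
    for content in sorted(contents, key=lambda c: c["start_time"]):
        if groups and content["start_time"] - groups[-1][-1]["start_time"] <= max_gap:
            groups[-1].append(content)   # within the preset time length: same group
        else:
            groups.append([content])     # interval too large: open a new group
    return groups
```

Each resulting group then carries, implicitly, the start input time of its first content, which is what the playback side seeks to.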
Optionally, the first auxiliary data obtaining unit 1602 is specifically configured to:
determining a plurality of selection areas formed on the auxiliary content input interface;
and dividing the auxiliary contents in the same selected area into the same auxiliary content group.
Optionally, the auxiliary data is lecture data acquired in a course of giving lessons of the network lesson, and the video file is a teaching video of the network lesson; or,
the auxiliary data is associated data obtained in a live broadcast process, and the video file is a live broadcast video obtained in the live broadcast process.
Optionally, the method further includes:
a time determining unit 1608, configured to determine a termination input time corresponding to each piece of auxiliary content and record the termination input time in the auxiliary data.
Optionally, the method further includes:
an information recording unit 1610, configured to record position information of the auxiliary content in an auxiliary content input interface into the auxiliary data, so as to display the auxiliary content according to the position information when the playback data is played back.
Optionally, the method further includes:
the document recording unit 1612 is configured to record a document opened in a process of inputting the auxiliary content by the recording participant and page information where the document is located into the auxiliary data, so as to call the document and display document content corresponding to the page information when the playback data is played back.
Optionally, the auxiliary content includes at least one of: text content, drawing content.
FIG. 17 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 17, at the hardware level, the apparatus includes a processor 1702, an internal bus 1704, a network interface 1706, a memory 1708, and a non-volatile memory 1710, and may also include hardware required for other functions. The processor 1702 reads a corresponding computer program from the non-volatile memory 1710 into the memory 1708 and runs it, forming, at the logical level, a playback apparatus for playback data. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to each logic unit and may also be hardware or logic devices.
Referring to fig. 18, in a software implementation, the playback apparatus of playback data may include a first playback data acquisition unit 1802 and a first play unit 1804. Wherein:
a first playback data obtaining unit 1802, configured to obtain playback data, where the playback data includes a video file and auxiliary data corresponding to the video file, and the auxiliary data includes auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content;
a first playing unit 1804 configured to adjust the video file to the start input time to play in association with the auxiliary content in response to a triggering operation for the auxiliary content.
Optionally, the auxiliary data includes a plurality of auxiliary content groups, each auxiliary content group includes at least one piece of auxiliary content and a start input time corresponding to a first piece of auxiliary content in the at least one piece of auxiliary content; the first playing unit 1804 is specifically configured to:
determining a target auxiliary content group triggered by the triggering operation;
and displaying the auxiliary content contained in the target auxiliary content group, and adjusting the video file to the starting input time corresponding to the first auxiliary content contained in the target auxiliary content group so as to perform associated playing with the auxiliary content.
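The trigger handling just described can be sketched as follows (dict-based content records are an illustrative assumption): display the target group's contents and seek the video to the start input time of the group's first auxiliary content:

```python
def on_group_trigger(groups: list, target_idx: int):
    """Handle a trigger on a content group: return the contents to display
    and the seek time, i.e. the start input time of the group's first
    auxiliary content."""
    group = sorted(groups[target_idx], key=lambda c: c["start_time"])
    return group, group[0]["start_time"]
```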
Optionally, the same auxiliary content group includes one piece of auxiliary content, or includes a plurality of pieces of auxiliary content whose adjacent intervals are not greater than a preset time length;
in the adjacent auxiliary content group, the adjacent interval between the last auxiliary content contained in the former group and the first auxiliary content contained in the latter group is not less than the preset time length; alternatively, in the adjacent auxiliary content group, the preset function is triggered between the termination input time of the last auxiliary content included in the previous group and the start input time of the first auxiliary content included in the next group.
Optionally, the same auxiliary content group includes a plurality of pieces of auxiliary content corresponding to the same selection area on the auxiliary content input interface.
Optionally, the first playing unit 1804 is specifically configured to:
displaying identification information respectively corresponding to all auxiliary content contained in the auxiliary data;
receiving a trigger operation for the identification information of the auxiliary content.
Optionally, the auxiliary data further includes a termination input time corresponding to the auxiliary content; further comprising:
a time determining unit 1806, configured to determine a termination input time corresponding to the auxiliary content triggered by the trigger operation;
a control unit 1808, configured to control the video file to stop playing at the termination input time.
Optionally, the auxiliary data further includes location information of the auxiliary content in an auxiliary content input interface; the first playing unit 1804 is specifically configured to:
determining target position information corresponding to the auxiliary content;
presenting the auxiliary content at the target location information.
Optionally, the auxiliary data further includes a document opened in the process of inputting the auxiliary content by the recording participant and page information where the document is located; the first playing unit 1804 is specifically configured to:
calling a corresponding document according to the auxiliary content;
and performing associated display on the document content corresponding to the page information in the document and the auxiliary content.
Referring to fig. 19, in a software implementation, the playback apparatus of the playback data may include a second playback data obtaining unit 1902, a determining unit 1904, and a second playing unit 1906. Wherein:
a second playback data obtaining unit 1902, configured to obtain playback data, where the playback data includes a video file and auxiliary data corresponding to the video file, and the auxiliary data includes auxiliary content input by a recording participant in a recording process of the video file and a start input time corresponding to the auxiliary content;
a determining unit 1904, configured to determine, in response to a trigger operation for the video file, a specified time indicated by the trigger operation and target auxiliary content corresponding to the specified time; wherein the input starting time corresponding to the target auxiliary content is not later than the designated time, and the input starting time corresponding to the auxiliary content next to the target auxiliary content is later than the designated time;
a second playing unit 1906, configured to display the target auxiliary content, and adjust the video file to the specified time for associated playing.
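The rule for locating the target auxiliary content — the latest content whose start input time is not later than the specified time, with the next content starting strictly later — is a predecessor lookup over the start input times. A sketch using binary search, assuming the contents are kept sorted by start input time:

```python
import bisect

def target_content(contents: list, specified_time: float):
    """Return the auxiliary content whose start input time is the latest one
    not later than specified_time, or None if every content starts later."""
    starts = [c["start_time"] for c in contents]
    idx = bisect.bisect_right(starts, specified_time) - 1
    return contents[idx] if idx >= 0 else None
```

`bisect_right` makes a content whose start input time equals the specified time its own target, matching the "not later than" wording.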
Fig. 20 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 20, at the hardware level, the apparatus includes a processor 2002, an internal bus 2004, a network interface 2006, a memory 2008, and a non-volatile memory 2010, and may also include hardware required for other functions. The processor 2002 reads the corresponding computer program from the non-volatile memory 2010 into the memory 2008 and runs it, forming, at the logical level, an apparatus for generating auxiliary data. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to each logic unit and may also be hardware or logic devices.
Referring to fig. 21, in a software embodiment, the auxiliary data generating means may comprise an auxiliary data generating unit 2102. Wherein:
an auxiliary data generating unit 2102 configured to generate auxiliary data corresponding to a video file according to auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content, where the auxiliary data is used for associated playback with the video file according to the start input time.
FIG. 22 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 22, at the hardware level, the apparatus includes a processor 2202, an internal bus 2204, a network interface 2206, a memory 2208, and a nonvolatile memory 2210, but may also include hardware required for other functions. The processor 2202 reads a corresponding computer program from the nonvolatile memory 2210 into the memory 2208 and then runs the computer program, thereby forming a playing device of the auxiliary data on a logical level. Of course, besides software implementation, the one or more embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware, and so on, that is, the execution subject of the following processing flow is not limited to each logic unit, and may also be hardware or logic devices.
Referring to fig. 23, in a software implementation, the auxiliary data playing apparatus may include a second auxiliary data obtaining unit 2302 and a receiving unit 2304. Wherein:
a second auxiliary data obtaining unit 2302, configured to obtain auxiliary data, where the auxiliary data includes auxiliary content input by a recording participant in a recording process of a video file and a start input time corresponding to the auxiliary content;
a receiving unit 2304, configured to receive a trigger operation for the auxiliary content, where the trigger operation is used to invoke the video file and adjust the video file to the start input time to perform associated playing with the auxiliary content.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "at the time of", "when", or "in response to a determination", depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.
Claims (30)
1. A method for generating playback data, comprising:
acquiring auxiliary data in a video recording process, wherein the auxiliary data comprises auxiliary content input by a recording participant in the video recording process and input starting time corresponding to the auxiliary content;
acquiring a video file obtained by recording;
generating playback data based on the auxiliary data and the video file, the playback data being used for the associated playing of the auxiliary content and the video file according to the start input moment.
2. The method of claim 1,
when a plurality of auxiliary contents are input by the recording participant, the auxiliary data includes each auxiliary content and a start input time corresponding to each auxiliary content.
3. The method of claim 1,
when the recording participant inputs a plurality of auxiliary contents, the auxiliary contents are respectively divided into corresponding auxiliary content groups, and each auxiliary content group comprises at least one auxiliary content and the input starting time corresponding to the first auxiliary content in the at least one auxiliary content.
4. The method of claim 3,
the same auxiliary content group comprises one piece of auxiliary content or a plurality of pieces of auxiliary content with adjacent intervals not more than preset time length;
in the adjacent auxiliary content group, the adjacent interval between the last auxiliary content contained in the former group and the first auxiliary content contained in the latter group is not less than the preset time length; alternatively, in the adjacent auxiliary content group, the preset function is triggered between the termination input time of the last auxiliary content included in the previous group and the start input time of the first auxiliary content included in the next group.
5. The method according to claim 3, wherein the dividing the auxiliary content into the corresponding auxiliary content groups comprises:
determining a plurality of selection areas formed on the auxiliary content input interface;
and dividing the auxiliary contents in the same selected area into the same auxiliary content group.
6. The method of claim 1,
the auxiliary data is lecture data acquired in the course of giving lessons of the network lessons, and the video file is a teaching video of the network lessons; or,
the auxiliary data is associated data obtained in a live broadcast process, and the video file is a live broadcast video obtained in the live broadcast process.
7. The method of claim 1, further comprising:
and determining the termination input time corresponding to each piece of auxiliary content, and recording the termination input time in the auxiliary data.
8. The method of claim 1, further comprising:
and recording the position information of the auxiliary content in an auxiliary content input interface into the auxiliary data so as to display the auxiliary content according to the position information when the playback data is played back.
9. The method of claim 1, further comprising:
recording a document opened in the process of inputting the auxiliary content by the recording participant and page information where the document is located into the auxiliary data so as to call the document and display the document content corresponding to the page information when the playback data is played back.
10. The method of claim 1, wherein the auxiliary content comprises at least one of: text content, drawing content.
11. A playback method of playback data, comprising:
acquiring playback data, wherein the playback data comprises a video file and auxiliary data corresponding to the video file, and the auxiliary data comprises auxiliary content input by a recording participant in the recording process of the video file and starting input time corresponding to the auxiliary content;
and in response to a trigger operation for the auxiliary content, adjusting the video file to the starting input moment to perform associated playing with the auxiliary content.
12. The method according to claim 11, wherein the auxiliary data comprises a plurality of auxiliary content groups, each auxiliary content group comprising at least one piece of auxiliary content and a start input time corresponding to a first piece of auxiliary content of the at least one piece of auxiliary content; in response to a trigger operation for the auxiliary content, adjusting the video file to the starting input time to play in association with the auxiliary content, including:
determining a target auxiliary content group triggered by the triggering operation;
and displaying the auxiliary content contained in the target auxiliary content group, and adjusting the video file to the starting input time corresponding to the first auxiliary content contained in the target auxiliary content group so as to perform associated playing with the auxiliary content.
13. The method of claim 12,
the same auxiliary content group comprises one piece of auxiliary content or a plurality of pieces of auxiliary content with adjacent intervals not more than preset time length;
in the adjacent auxiliary content group, the adjacent interval between the last auxiliary content contained in the former group and the first auxiliary content contained in the latter group is not less than the preset time length; alternatively, in the adjacent auxiliary content group, the preset function is triggered between the termination input time of the last auxiliary content included in the previous group and the start input time of the first auxiliary content included in the next group.
14. The method of claim 12, wherein the same auxiliary content group comprises a plurality of pieces of auxiliary content corresponding to the same selection area on the auxiliary content input interface.
15. The method of claim 11, wherein the triggering operation for the auxiliary content comprises:
displaying identification information respectively corresponding to all auxiliary content contained in the auxiliary data;
receiving a trigger operation for the identification information of the auxiliary content.
16. The method according to claim 11, wherein the auxiliary data further comprises a termination input time corresponding to the auxiliary content, and the method further comprises:
determining the termination input time corresponding to the auxiliary content triggered by the trigger operation; and
controlling the video file to stop playing at the termination input time.
17. The method according to claim 11, wherein the auxiliary data further comprises position information of the auxiliary content on an auxiliary content input interface, and the method further comprises:
determining target position information corresponding to the auxiliary content; and
presenting the auxiliary content at the position indicated by the target position information.
18. The method according to claim 11, wherein the auxiliary data further comprises a document opened while the recording participant input the auxiliary content, and page information identifying the page of the document being displayed, and the method further comprises:
invoking the corresponding document according to the auxiliary content; and
displaying, in association with the auxiliary content, the document content corresponding to the page information in the document.
19. A method for playing playback data, comprising:
acquiring playback data, wherein the playback data comprises a video file and auxiliary data corresponding to the video file, and the auxiliary data comprises auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content;
in response to a trigger operation for the video file, determining a specified time indicated by the trigger operation and target auxiliary content corresponding to the specified time, wherein the start input time corresponding to the target auxiliary content is not later than the specified time, and the start input time corresponding to the piece of auxiliary content next after the target auxiliary content is later than the specified time; and
displaying the target auxiliary content, and adjusting the video file to the specified time for associated playing.
20. A method for generating auxiliary data, comprising:
generating auxiliary data corresponding to a video file according to auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content, wherein the auxiliary data is used for associated playing with the video file according to the start input time.
21. A method for playing auxiliary data, comprising:
acquiring auxiliary data, wherein the auxiliary data comprises auxiliary content input by a recording participant during recording of a video file and a start input time corresponding to the auxiliary content; and
receiving a trigger operation for the auxiliary content, wherein the trigger operation is used to invoke the video file and adjust the video file to the start input time for associated playing with the auxiliary content.
22. An apparatus for generating playback data, comprising:
a first auxiliary data acquisition unit, configured to acquire auxiliary data during video recording, wherein the auxiliary data comprises auxiliary content input by a recording participant during the video recording and a start input time corresponding to the auxiliary content;
a video acquisition unit, configured to acquire the video file obtained by the recording; and
a playback data generation unit, configured to generate playback data based on the auxiliary data and the video file, the playback data being used for associated playing of the auxiliary content and the video file according to the start input time.
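The bundle assembled by the generation apparatus of claim 22 can be sketched as a video reference plus time-stamped auxiliary content. The JSON layout and all field names below are assumptions for illustration; the patent does not prescribe a serialization format.

```python
import json


def generate_playback_data(video_path: str, notes: list[dict]) -> str:
    """Serialize a video reference together with its auxiliary data.

    Sorting by start input time lets a player perform the associated
    playback of claims 11 and 19 with a simple ordered scan or bisection.
    """
    playback = {
        "video_file": video_path,
        "auxiliary_data": sorted(notes, key=lambda n: n["start_time"]),
    }
    return json.dumps(playback)


blob = generate_playback_data("lesson.mp4", [
    {"start_time": 95.0, "content": "key formula", "position": [0.4, 0.2]},
    {"start_time": 10.0, "content": "intro note", "position": [0.1, 0.1]},
])
```

The `position` field mirrors claim 17 (position of the content on the input interface); a `end_time` field could likewise carry the termination input time of claim 16.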
23. An apparatus for playing playback data, comprising:
a first playback data acquisition unit, configured to acquire playback data, wherein the playback data comprises a video file and auxiliary data corresponding to the video file, and the auxiliary data comprises auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content; and
a first playing unit, configured to adjust, in response to a trigger operation for the auxiliary content, the video file to the start input time for associated playing with the auxiliary content.
24. An apparatus for playing playback data, comprising:
a second playback data acquisition unit, configured to acquire playback data, wherein the playback data comprises a video file and auxiliary data corresponding to the video file, and the auxiliary data comprises auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content;
a determining unit, configured to determine, in response to a trigger operation for the video file, a specified time indicated by the trigger operation and target auxiliary content corresponding to the specified time, wherein the start input time corresponding to the target auxiliary content is not later than the specified time, and the start input time corresponding to the piece of auxiliary content next after the target auxiliary content is later than the specified time; and
a second playing unit, configured to display the target auxiliary content and adjust the video file to the specified time for associated playing.
25. An apparatus for generating auxiliary data, comprising:
an auxiliary data generation unit, configured to generate auxiliary data corresponding to a video file according to auxiliary content input by a recording participant during recording of the video file and a start input time corresponding to the auxiliary content, wherein the auxiliary data is used for associated playing with the video file according to the start input time.
26. An apparatus for playing auxiliary data, comprising:
a second auxiliary data acquisition unit, configured to acquire auxiliary data, wherein the auxiliary data comprises auxiliary content input by a recording participant during recording of a video file and a start input time corresponding to the auxiliary content; and
a receiving unit, configured to receive a trigger operation for the auxiliary content, wherein the trigger operation is used to invoke the video file and adjust the video file to the start input time for associated playing with the auxiliary content.
27. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-19 by executing the executable instructions.
28. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1-19.
29. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 20-21 by executing the executable instructions.
30. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 20-21.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111171848.7A CN114157877B (en) | 2021-10-08 | 2021-10-08 | Playback data generation method and device, playback method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114157877A true CN114157877A (en) | 2022-03-08 |
CN114157877B CN114157877B (en) | 2024-04-16 |
Family
ID=80462532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111171848.7A Active CN114157877B (en) | 2021-10-08 | 2021-10-08 | Playback data generation method and device, playback method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114157877B (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011223078A (en) * | 2010-04-05 | 2011-11-04 | Bug Inc | Video recording/playback device and method for controlling plural devices |
CN104581351A (en) * | 2015-01-28 | 2015-04-29 | 上海与德通讯技术有限公司 | Audio/video recording method, audio/video playing method and electronic device |
CN106203632A (en) * | 2016-07-12 | 2016-12-07 | 中国科学院科技政策与管理科学研究所 | A kind of limited knowledge collection recombinant is also distributed the study of extraction and application system method |
CN108028969A (en) * | 2015-09-25 | 2018-05-11 | 高通股份有限公司 | system and method for video processing |
CN109600678A (en) * | 2018-12-19 | 2019-04-09 | 北京达佳互联信息技术有限公司 | Information displaying method, apparatus and system, server, terminal, storage medium |
CN110300274A (en) * | 2018-03-21 | 2019-10-01 | 腾讯科技(深圳)有限公司 | Method for recording, device and the storage medium of video file |
CN110415569A (en) * | 2019-06-29 | 2019-11-05 | 嘉兴梦兰电子科技有限公司 | Share educational method and system in campus classroom |
US20190370283A1 (en) * | 2018-05-30 | 2019-12-05 | Baidu Usa Llc | Systems and methods for consolidating recorded content |
CN110717470A (en) * | 2019-10-16 | 2020-01-21 | 上海极链网络科技有限公司 | Scene recognition method and device, computer equipment and storage medium |
CN111523293A (en) * | 2020-04-08 | 2020-08-11 | 广东小天才科技有限公司 | Method and device for assisting user in information input in live broadcast teaching |
CN111726525A (en) * | 2020-06-19 | 2020-09-29 | 维沃移动通信有限公司 | Video recording method, video recording device, electronic equipment and storage medium |
CN112002184A (en) * | 2019-05-27 | 2020-11-27 | 广东小天才科技有限公司 | Learning record-based auxiliary learning method and system |
CN112040277A (en) * | 2020-09-11 | 2020-12-04 | 腾讯科技(深圳)有限公司 | Video-based data processing method and device, computer and readable storage medium |
CN112543368A (en) * | 2019-09-20 | 2021-03-23 | 北京小米移动软件有限公司 | Video processing method, video playing method, video processing device, video playing device and storage medium |
CN113038230A (en) * | 2021-03-10 | 2021-06-25 | 读书郎教育科技有限公司 | System and method for playing back videos and adding notes in intelligent classroom |
US20210266650A1 (en) * | 2020-02-26 | 2021-08-26 | The Toronto-Dominion Bank | Systems and methods for controlling display of supplementary data for video content |
Non-Patent Citations (1)
Title |
---|
CHI Yuchen: "Seewo Assistant Helps Generate Smart Interactive Teaching Resources for the Physics Classroom", Hunan Middle School Physics, no. 05, 15 May 2020 (2020-05-15), pages 18 - 20 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114745594A (en) * | 2022-04-11 | 2022-07-12 | 北京高途云集教育科技有限公司 | Method and device for generating live playback video, electronic equipment and storage medium |
CN114885187A (en) * | 2022-06-23 | 2022-08-09 | 深圳市必提教育科技有限公司 | Live broadcast playback method and system for online education |
CN114885187B (en) * | 2022-06-23 | 2023-08-08 | 深圳市必提教育科技有限公司 | Live broadcast playback method and system for online education |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40070773; Country of ref document: HK |
| GR01 | Patent grant | |