CN110035330B - Video generation method, system, device and storage medium based on online education - Google Patents

Video generation method, system, device and storage medium based on online education

Info

Publication number
CN110035330B
CN110035330B (application CN201910305029.3A)
Authority
CN
China
Prior art keywords
video
online education
paragraph
text
user
Prior art date
Legal status
Active
Application number
CN201910305029.3A
Other languages
Chinese (zh)
Other versions
CN110035330A (en)
Inventor
杨正大
Current Assignee
Ping An Zhitong Consulting Co Ltd
Original Assignee
Shanghai Ping An Education Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Ping An Education Technology Co ltd filed Critical Shanghai Ping An Education Technology Co ltd
Priority to CN201910305029.3A priority Critical patent/CN110035330B/en
Publication of CN110035330A publication Critical patent/CN110035330A/en
Priority to TW108134407A priority patent/TW202040498A/en
Application granted granted Critical
Publication of CN110035330B publication Critical patent/CN110035330B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405Generation or processing of descriptive data, e.g. content descriptors represented by keywords
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a video generation method, system, device and storage medium based on online education. The method comprises the following steps: dividing at least one recorded video of online education into a plurality of video paragraphs according to a preset time length; adding at least one video tag to each video paragraph according to the content information of the video paragraph; and screening and editing the video paragraphs carrying the video tags according to a preset first user tag to obtain a video file containing a plurality of video paragraphs. By dividing the whole video into video paragraphs and then adding tags, the invention makes it convenient to screen and optimally edit the whole video, increases the viewing value of video playback, and improves the humanized experience.

Description

Video generation method, system, device and storage medium based on online education
Technical Field
The present invention relates to the field of online education, and in particular, to a method, system, device, and storage medium for video generation based on online education.
Background
At present, the technology of learning in a virtual classroom over a network connection is quite mature, and courses of various kinds can be recorded as different data types. The simplest way to record a course is to record the users' pictures and store them as a video file; alternatively, multimedia (html tags, javascript, mp4/webm, xml data) assembled into a customized interface through various scripting languages can be played back, as is done for example by the BigBlueButton software. However, once (a) the number of classes starts to grow geometrically and (b) various contents in the video file need to be modified dynamically (for example, removing improper messages, manually retrieving a file when an individual student's audio fails to be recorded separately, or cutting out a partially recorded segment that users do not want to see), recording in the former way or re-assembling the recording through a scripting language wastes an enormous amount of computing resources and labor.
Generally, a teaching video is at least half an hour to two hours long and covers several knowledge points. However, existing videos lack search or browsing functions, so a student can only watch mechanically from beginning to end or fast-forward, and the utilization efficiency of the teaching video is very low; as a result, the usage scenarios of the teaching video are fixed and the ways a student can make use of it are limited. Moreover, part of a teaching video may consist of an opening, closing credits, the instructor's self-introduction, small talk, and so on. The video paragraphs in a teaching video therefore have different values, and it is difficult to provide each student with exactly the video paragraphs that student needs most.
Accordingly, the present invention provides a video generation method, system, device and storage medium based on online education.
Disclosure of Invention
In view of the problems in the prior art, the present invention aims to provide a video generation method, system, device and storage medium based on online education.
The embodiment of the invention provides a video generation method based on online education, which comprises the following steps:
S110, dividing at least one recorded video of online education into a plurality of video paragraphs according to a preset time length;
S120, adding at least one video tag to the video paragraph according to the content information of the video paragraph;
S130, screening and editing the video paragraphs according to the video tags to obtain at least one video file containing a plurality of video paragraphs.
Preferably, the step S120 includes the steps of:
S121, performing image-text recognition on at least one frame of picture in the video paragraph to obtain a first text of the video paragraph, and performing voice recognition on the video paragraph to obtain a second text of the video paragraph;
S122, taking at least one word with the highest occurrence frequency in the first text and the second text corresponding to the video paragraph as a video tag of the video paragraph.
Preferably, the step S122 includes taking at least one word with the highest total number of occurrences in the first text and the second text as the video tag of the video paragraph.
Preferably, the step S122 includes taking at least one word that appears in both the first text and the second text and has the highest occurrence frequency as the video tag of the video paragraph.
Preferably, the step S130 includes the steps of: searching for the video paragraphs whose video tags have the highest play counts and/or the video paragraphs whose video tags have been watched by users for the longest time, and editing them to obtain the video file.
Preferably, the step S130 includes the steps of: deleting the video paragraphs whose video tags have play counts below a preset threshold and/or the video paragraphs whose video tags have watch durations below a preset threshold, and editing the remaining paragraphs to obtain the video file.
Preferably, the step S130 is followed by:
S140, establishing at least one user tag according to personal information of the user, screening and editing the video files that meet the corresponding requirement, obtaining at least one video course and sending the video course to the user.
Preferably, the video tag of the video segment played by the user most is obtained from the historical database of the user as the user tag, the video segment with the user tag is searched, merged and edited to obtain the video course, and the video course is sent to the mobile terminal of the user.
Preferably, the step S140 includes the steps of:
S141, extracting keywords of wrong questions from a wrong-question set in the user's history database;
S142, generating editing tags according to the keywords of the wrong questions;
S143, editing the video paragraphs whose video tags hit the editing tags to obtain a video file of the user's wrong-question set; and
S144, sending the video file to the user.
Preferably, the recorded videos are teaching videos of all recorded instructors and trainees stored in the online education server, and each teaching video is divided into a plurality of video paragraphs according to preset time lengths.
Preferably, the recorded video is a real-time recorded video when the instructor and the student perform online education one-to-one, and the recorded video within the preset time length is generated into a video passage every time the preset time length passes.
Preferably, the preset time period is selected from the range of 10 seconds to 10 minutes.
Preferably, in step S120, at least one video tag is added to a video paragraph according to a trigger event in the content information of the video paragraph, where the trigger event includes at least one of the following events:
a teacher or student connected to a video classroom of the online education;
opening a dialog box in the video classroom by a teacher or a student to input characters;
a teacher or a student turns pages of the teaching materials in the video classroom;
a teacher or student uses a white board in the video classroom;
a teacher or a student plays audio and video files in the video classroom;
using animated special effects by instructors or trainees in the video classroom;
a teacher or student sending points in the video classroom; and
the instructor or student leaves the video classroom.
The embodiment of the present invention further provides a video generation system based on online education, which is used for implementing the above video generation method based on online education, and the video generation system based on online education includes:
the recorded video segmentation module is used for segmenting at least one segment of recorded video of online education into a plurality of video segments according to preset time length;
the video label adding module is used for adding at least one video label to the video paragraph according to the content information of the video paragraph;
and the video file editing module is used for screening and editing the video paragraphs according to the video tags to obtain at least one video file containing a plurality of video paragraphs.
Preferably, the video tag adding module performs image-text recognition on at least one frame of picture in the video paragraph to obtain a first text of the video paragraph; performing voice recognition on the video paragraph to obtain a second text of the video paragraph; and obtaining at least one word with the highest occurrence frequency according to the first text and the second text corresponding to the video paragraph as a video label of the video paragraph.
Preferably, the system further comprises a user video customization module, wherein at least one user label is established according to personal information of a user, the video file with the requirement is screened and edited, and at least one video course is obtained and sent to the user.
Preferably, the recorded videos are teaching videos of all recorded instructors and trainees stored in an online education server, and each teaching video is divided into a plurality of video paragraphs according to preset time length; or
The recorded video is a real-time recorded video when the instructor and the student perform one-to-one online education, and the recorded video in the preset time length is generated into a video session after each preset time length.
Preferably, the video tag adding module adds at least one video tag to a video paragraph according to a trigger event in content information of the video paragraph, where the trigger event includes at least one of the following events:
a teacher or student connected to a video classroom of the online education;
opening a dialog box in the video classroom by a teacher or a student to input characters;
a teacher or a student turns pages of the teaching materials in the video classroom;
a teacher or student uses a white board in the video classroom;
a teacher or a student plays audio and video files in the video classroom;
using animated special effects by instructors or trainees in the video classroom;
a teacher or student sending points in the video classroom; and
the instructor or student leaves the video classroom.
An embodiment of the present invention also provides a video generating apparatus based on online education, including:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the online education based video generation method described above via execution of the executable instructions.
Embodiments of the present invention also provide a computer-readable storage medium storing a program that, when executed, implements the steps of the above-described online education-based video generation method.
According to the video generation method, system, device and storage medium based on online education of the present invention, the whole video is divided into video paragraphs to which tags are then added, which makes it convenient to screen and optimally edit the whole video, increases the viewing value of video playback, and improves the humanized experience.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings.
FIG. 1 is a flow chart of a video generation method based on online education of the present invention;
FIG. 2 is a schematic diagram of a segmented recorded video in an embodiment of a video generation method based on online education in accordance with the present invention;
FIG. 3 is a frame of a picture in video paragraph A1 in FIG. 2;
FIG. 4 is a frame of a picture in video paragraph A2 in FIG. 2;
FIG. 5 is a frame of a picture in video paragraph A4 in FIG. 2;
FIG. 6 is a frame of a picture in video paragraph A5 in FIG. 2;
FIG. 7 is a frame of a picture in video paragraph A7 in FIG. 2;
FIG. 8 is a frame of a picture in video paragraph A9 in FIG. 2;
FIG. 9 is a diagram illustrating a video lesson sent to a user by creating user tags based on personal information of the user;
FIG. 10 is a diagram illustrating a video file obtained by screening and editing the video paragraphs;
FIG. 11 is a schematic diagram of all the video paragraphs contained in a video file;
FIG. 12 is a schematic diagram of creating user tags based on personal information of a plurality of users respectively, and sending video courses obtained respectively to the users respectively;
FIG. 13 is an architectural diagram of a video generation system based on online education of the present invention;
FIG. 14 is a schematic diagram of the structure of a video generating device based on online education of the present invention; and
fig. 15 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus their repetitive description will be omitted.
Fig. 1 is a flowchart of a video generation method based on online education of the present invention. As shown in fig. 1, the video generation method based on online education of the present invention includes the following steps:
and S110, dividing the recorded video of at least one section of online education into a plurality of video sections according to preset time length.
And S120, adding at least one video label to the video paragraph according to the content information of the video paragraph.
S130, screening and editing the video sections according to the video tags to obtain at least one video file containing a plurality of video sections.
S140, establishing at least one user label according to the personal information of the user, screening the video file with the requirement for editing, obtaining at least one video course and sending the video course to the user.
The video generation method based on online education of the present invention divides the whole video into video paragraphs and then adds tags, so that each video paragraph carries at least one tag and the whole video can be conveniently screened and optimally edited, for example by deleting unimportant video paragraphs or by editing together the video paragraphs associated with the same knowledge point.
An educational video generally has a fixed form that differs greatly from ordinary video: its picture typically contains at least a blackboard-writing part and an instructor who explains that blackboard-writing item by item. Even though methods for labeling videos exist in the prior art, the workload is extremely large and the time consumption is long.
Step S120 includes the steps of:
S121, performing image-text recognition on at least one frame of picture in the video paragraph to obtain a first text of the video paragraph, and performing voice recognition on the video paragraph to obtain a second text of the video paragraph.
S122, taking at least one word with the highest occurrence frequency in the first text and the second text corresponding to the video paragraph as a video tag of the video paragraph.
The present invention sets the tags of the video paragraphs of any educational video entirely by programmatic means. A certain number of picture frames are extracted from the video paragraph, and existing image-text recognition techniques are applied to the blackboard-writing part of the picture to obtain text about the blackboard-writing content as the first text; existing voice recognition techniques are likewise applied to the instructor's explanation of the blackboard-writing to obtain the text of the spoken content as the second text. The invention then takes at least one word that appears most frequently in the first text and the second text as the video tag describing the main content of the video paragraph. It should be noted that an educational video is a special video form: the instructor does not simply read the blackboard-writing aloud, but gives extended explanations or case introductions based on it, and the blackboard-writing itself is usually quite terse. Obtaining only the text of the blackboard-writing therefore cannot determine what the instructor actually explains in the video, so defining the video tags from the blackboard-writing alone has very low accuracy and does not match the instructor's teaching habits. Defining the video tags only by recognizing the instructor's voice, on the other hand, is disturbed by the instructor's personal expression habits, and the accuracy of such tags is also limited. The invention therefore combines the texts obtained in these two ways, which brings the video tag closer to the main content of the video paragraph, greatly increases the accuracy of defining the video tag and, while keeping the computation cost in mind, provides a tagging approach optimized for the unique properties of educational video.
In a preferred embodiment, the step S122 includes, but is not limited to, taking at least one word with the highest total number of occurrences in the first text and the second text as a video tag of the video paragraph. For example: in the first text, "engine" appears 4 times, "maintenance" appears 3 times, and every other word appears at most 2 times; in the second text, "engine" appears 8 times, "maintenance" appears 6 times, and every other word appears at most 2 times. The two words with the highest total number of occurrences across the first text and the second text are therefore "engine" and "maintenance", and these are taken as the two video tags of the video paragraph.
In a preferred embodiment, the step S122 includes, but is not limited to, taking at least one word that appears in both the first text and the second text and has the highest occurrence frequency as a video tag of the video paragraph. For example: in the first text, "Li Bai" appears 3 times, "Du Fu" appears 3 times, and every other word appears at most 2 times; in the second text, "Li Bai" appears 2 times, "ancient poetry" appears 6 times, "Du Fu" appears 4 times, and every other word appears at most 2 times. Although "ancient poetry" appears the most times in the second text, this embodiment gives more weight to the combined effect of the blackboard-writing and the instructor's narration, and "Li Bai", which appears in both the first text and the second text with a high frequency, is taken as the video tag of the video paragraph.
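To make the two tag-selection strategies above concrete, the following is a minimal Python sketch; the function names, the whitespace tokenizer and the toy word counts are illustrative assumptions (a real implementation for Chinese text would need a proper word segmenter rather than split()).

from collections import Counter

def tags_by_total_count(first_text, second_text, k=2):
    # Strategy 1: the k words with the highest combined count in both texts.
    counts = Counter(first_text.split()) + Counter(second_text.split())
    return [word for word, _ in counts.most_common(k)]

def tags_in_both_texts(first_text, second_text, k=1):
    # Strategy 2: only words appearing in BOTH texts, ranked by combined count.
    first = Counter(first_text.split())
    second = Counter(second_text.split())
    shared = {w: first[w] + second[w] for w in first if w in second}
    return sorted(shared, key=shared.get, reverse=True)[:k]

# Toy data following the "engine"/"maintenance" counts in the example above.
first = " ".join(["engine"] * 4 + ["maintenance"] * 3)
second = " ".join(["engine"] * 8 + ["maintenance"] * 6)
print(tags_by_total_count(first, second))   # ['engine', 'maintenance']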
In a preferred embodiment, step S130 includes the steps of: searching for the video paragraphs whose video tags have the highest play counts and/or the video paragraphs whose video tags have been watched by users for the longest time, and editing them to obtain the video file, but not limited to this. In this way the invention can screen out and edit together the video paragraphs watched most often by most users (such as highlights and core knowledge points) to obtain the video file, thereby optimizing the video file so that it better matches the viewing experience of most users.
In a preferred embodiment, step S130 includes the steps of: deleting the video paragraphs whose video tags have play counts below a preset threshold and/or the video paragraphs whose video tags have watch durations below a preset threshold, and editing the remaining paragraphs to obtain the video file, but not limited to this. In this way the invention can screen out and delete the video paragraphs most users are unwilling to watch (such as openings, closing credits and advertisements) and finally edit and integrate the rest into the video file, thereby simplifying and optimizing the video file and improving the humanized experience.
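A minimal sketch of this screening step follows; the VideoParagraph fields, the threshold values and the helper names are assumptions for illustration, not the patent's actual data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoParagraph:
    path: str                        # storage location of the clip
    tags: List[str] = field(default_factory=list)
    play_count: int = 0              # how many times this paragraph was played
    watch_seconds: float = 0.0       # cumulative time users spent watching it

def screen_paragraphs(paragraphs, min_plays=10, min_watch_seconds=30.0):
    # Drop paragraphs below either threshold (e.g. openings, credits, ads)
    # and keep the rest in their original order for editing into the video file.
    return [p for p in paragraphs
            if p.play_count >= min_plays and p.watch_seconds >= min_watch_seconds]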
In a preferred embodiment, the video tag of the video paragraphs played most often by a user is obtained from the user's history database as the user tag, the video paragraphs carrying that user tag are searched for, merged and edited to obtain a video course, and the video course is sent to the user's mobile terminal, but not limited to this. According to each user's viewing habits and preferences, the invention can thus obtain, through tag screening, a video file edited from the video paragraphs the user likes, improving each user's viewing experience according to that user's viewing needs.
In a preferred embodiment, step S140 includes the steps of:
and S141, extracting keywords of wrong questions from the wrong question set of the user history database.
And S142, generating an editing label according to the keyword of the wrong question.
And S143, the video section of the video tag hit editing tag is edited, and a video file of the error set of the user is obtained. And
and S144, sending the video file to the user.
Aimed at the student's learning process on the education website, the invention forms, from the wrong questions in each student's wrong-question set, video files about those wrong questions tailored to each student's individual learning situation, so that every student sees, in the video file provided to him or her, the explanation videos for his or her own historical wrong questions, which greatly improves the students' humanized experience and satisfaction.
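A sketch of steps S141 to S143 is given below; the dictionary shapes of the wrong questions and video paragraphs are assumptions made only for illustration.

def build_wrong_question_course(wrong_questions, paragraphs):
    # S141/S142: extract the wrong-question keywords and use them as editing tags.
    edit_tags = {kw for question in wrong_questions for kw in question["keywords"]}
    # S143: keep the paragraphs whose video tags hit any editing tag, in order;
    # the result would then be merged into one video file and sent to the user (S144).
    return [p for p in paragraphs if edit_tags & set(p["tags"])]

course = build_wrong_question_course(
    wrong_questions=[{"keywords": ["preposition", "as"]},
                     {"keywords": ["preposition", "in"]}],
    paragraphs=[{"id": "A2", "tags": ["preposition", "as"]},
                {"id": "A5", "tags": ["preposition", "in"]},
                {"id": "A9", "tags": ["preposition", "over"]}],
)
print([p["id"] for p in course])  # ['A2', 'A5']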
In a preferred embodiment, the recorded videos are teaching videos of all recorded instructors and trainees stored in the online education server, and each teaching video is divided into a plurality of video paragraphs according to a preset time length, but not limited thereto. The invention can carry out intelligent processing on massive ready-made teaching videos stored in the server of online education, respectively divide the teaching videos into video paragraphs and respectively add labels.
In a preferred embodiment, the recorded video is a real-time recorded video when the instructor and the student perform online education one-to-one, and the recorded video within the preset time length is generated into a video segment every time the preset time length passes, but not limited thereto. According to the invention, the video label is added to each video paragraph in the educational video recorded in real time, so that the timeliness of adding the video label is improved and the popularization of the whole scheme is facilitated.
In a preferred embodiment, the preset duration is selected from a range of 10 seconds to 10 minutes, for example, the length of each video segment is set to be 30 seconds, or 1 minute, etc. by the preset duration, but not limited thereto.
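One way to perform the fixed-length segmentation described above is to cut the recording with ffmpeg's segment muxer; the sketch below assumes ffmpeg is available, and the tool choice and file names are illustrative rather than part of the patent.

import subprocess

def split_recorded_video(input_path, segment_seconds=60,
                         output_pattern="paragraph_%03d.mp4"):
    # Cut the recorded lesson into fixed-length video paragraphs without re-encoding.
    # With stream copy the cuts land on keyframes, so paragraph lengths are approximate.
    subprocess.run(
        ["ffmpeg", "-i", input_path,
         "-c", "copy",
         "-f", "segment",
         "-segment_time", str(segment_seconds),
         "-reset_timestamps", "1",
         output_pattern],
        check=True,
    )

split_recorded_video("lesson_recording.mp4", segment_seconds=60)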
Preferably, in step S120, at least one video tag is added to the video paragraph according to a trigger event in the content information of the video paragraph (an illustrative mapping from trigger events to tags is sketched after the list below), where the trigger event includes at least one of the following events:
instructors or students connect to a video classroom of online education;
a teacher or a student opens a dialog box in a video classroom to input characters;
a teacher or a student turns pages of a teaching material in a video classroom;
a white board is used by a teacher or a trainee in a video classroom;
a teacher or a student plays audio and video files in a video classroom;
the instructor or student uses animated special effects in a video classroom;
the instructor or student sends points in the video classroom; and
the instructor or student leaves the video classroom.
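The following sketch shows one possible mapping from such trigger events to video tags; the event identifiers and tag strings are assumptions for illustration and not the patent's actual values.

# Hypothetical mapping from classroom trigger events to video tags.
TRIGGER_EVENT_TAGS = {
    "user_connected":  "classroom start",
    "chat_message":    "discussion",
    "page_turned":     "teaching material",
    "whiteboard_used": "whiteboard",
    "media_played":    "audio/video material",
    "effect_used":     "encouragement",
    "points_sent":     "encouragement",
    "user_left":       "classroom end",
}

def tags_from_trigger_events(events):
    # Collect the video tags for one video paragraph from the trigger events seen in it.
    return sorted({TRIGGER_EVENT_TAGS[e] for e in events if e in TRIGGER_EVENT_TAGS})

print(tags_from_trigger_events(["page_turned", "whiteboard_used", "points_sent"]))
# ['encouragement', 'teaching material', 'whiteboard']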
In a preferred embodiment of the present invention, an online course video playback mechanism is created for online courses. It relies on a self-defined text event format and stores the audio/video information, whiteboard actions and classroom operation records of a classroom into a text file and video/audio files according to that format. When the recording is watched in the same online course software, the data in the video files and the text file can be used to implement various functions (for example, playing back the whole course with the time differences removed by an algorithm). Every piece of data in the event text file can be edited through the back end, down to the detail of the characters written and the lines drawn on the whiteboard. This way of playing back an online course recording makes it possible to modify the recording, every event it contains and every audio/video playing paragraph in a simple way, namely by editing only the course's wb text file, and therefore offers high maintenance flexibility.
An object of the invention is to create an online course video recording and playback mechanism. The mechanism defines a set of text event formats. Events in this format are stored in the video stream file and are also stored as an editable UTF-8 file. The various events (such as handwriting events and page-turning events) allow a person to dynamically modify the contents of the recording and to decide the order in which the recording's events are triggered (for example, whether the instructor's video file or a student's video file is used), so the recording no longer needs to be assembled through a scripting language and can be played directly through the course program interface after modification. This lowers the technical threshold and the operating cost of editing a playable online classroom recording.
The present invention creates a text event format for recording the various whiteboard actions and audio/video actions triggered during an online course, and stores them, using existing techniques, as a text file (hereinafter the wb file) and an audio/video stream file (hereinafter the instructor video file). After the course ends, all behaviors and video in the course can be played back through the wb file and the instructor video file, and the playback can be modified without recompiling the computer program.
The text event recording mode of the invention is as follows (a minimal sketch of writing and reading such events is given after the list):
a. When any user (instructor, student or IT staff) of the online course enters the classroom, the video streaming event host starts the classroom entity, creates a wb file and writes a line with the event 'init4', representing the beginning of the online course (the event format is 'current time - event name - event parameters', with the fields separated by a delimiter character and each event terminated by the two bytes 'CRLF');
b. Every minute, the classroom entity writes a line with the event 't' in the wb file and in the instructor video file (if the instructor video file exists); the event parameter is the current Unix time (in milliseconds, consisting of 13 digits);
c. When an instructor or student logs in to the course, connects to the classroom entity and grants access to the local microphone and camera, the classroom entity writes a line with the event 'r' in the wb file and the instructor video file (if it exists), representing that recording of the user's video has started (event parameters: user name, user number, user identity, classroom category, video format).
d. When the instructor or a student leaves the course and disconnects from the classroom entity, the classroom entity writes a line with the event 'o' in the wb file and the instructor video file (if it exists), representing that recording of the user's video stream has stopped (event parameters: user name, user number, user identity).
e. When any user (instructor, student or IT staff) types in the dialog box during the course, the classroom entity writes a line with the event 'c' in the wb file and the instructor video file (if it exists), representing the content of that user's message in the classroom (event parameters: custom user header (including user name and conversation partner name), user name, user number, user identity, message content).
f. When any user with teaching-material operation authority (instructor, authorized student or IT staff) turns or presents a page of the teaching material during the course, the classroom entity writes a line with the event 'p' in the wb file and the instructor video file (if it exists), representing which page of the teaching material is currently shown in the classroom (event parameter: page index value, an integer starting from 0).
g. When any user with whiteboard operation authority (instructor, authorized student or IT staff) operates the whiteboard (adds/modifies/deletes a whiteboard event) during the course, the classroom entity writes a line with the event 's' in the wb file and the instructor video file (if it exists), representing the current operation on the whiteboard (such as adding a new freehand stroke or uploading a picture) (event parameters: drawing operation, drawing type, parameters required by the drawing type).
h. When the instructor uses a special effect and awards points to encourage a student during the course, the classroom entity writes a line with the event 'rw' in the wb file and the instructor video file (if it exists), representing the instructor's real-time encouragement of and point award to the student (event parameters: encouragement type, student name and number, the student's total points, and the points currently accumulated by the instructor in the classroom).
j. When the instructor plays an audio file of the teaching material during the course, the classroom entity writes one line with the event 'w' per second in the wb file and the instructor video file (if it exists) during playback, representing the time point and playing behavior of the audio file the instructor is currently playing in the course (event parameters: file name including the audio file path, current playing state, total length of the audio file in seconds, current playing position of the audio file in seconds).
k. When the instructor switches a user (such as a student) to be the main speaker during the course, the classroom entity writes a line with the event 'sw' in the wb file and the instructor video file (if it exists), representing that the user is switched to be the main speaker (the user's video is switched to the position originally occupied by the instructor's video) (event parameters: user name and number, user category).
Every second after it is established, the classroom entity writes the accumulated seconds as time codes into the video files of all online users; these time codes are used to keep all audio tracks synchronized during playback.
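As a minimal sketch of this recording mode, the Python snippet below appends and reads wb-style event lines. The single-character field delimiter is not reproduced in the text above, so '|' is used here purely as a placeholder; the event names and parameters follow the list ('init4', 't', 'r', 'o', 'c', 'p', 's', 'rw', 'w', 'sw').

import time

DELIM = "|"   # placeholder: the real wb files use an unspecified delimiter character

def append_event(wb_path, name, *params):
    # One line per event: "current time DELIM event name DELIM parameters", ending in CRLF.
    now_ms = int(time.time() * 1000)              # 13-digit Unix time in milliseconds
    line = DELIM.join([str(now_ms), name, *map(str, params)])
    with open(wb_path, "a", encoding="utf-8", newline="") as f:
        f.write(line + "\r\n")

def read_events(wb_path):
    # Parse the wb file back into (timestamp_ms, event_name, parameter_list) tuples.
    events = []
    with open(wb_path, encoding="utf-8") as f:
        for line in f:
            ts, name, *params = line.rstrip("\r\n").split(DELIM)
            events.append((int(ts), name, params))
    return events

append_event("lesson.wb", "init4")                                  # course begins
append_event("lesson.wb", "r", "Alice", "1001", "student", "1on1", "mp4")
append_event("lesson.wb", "p", 1)                                   # material now on page 2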
the video playback mechanism of the present invention is as follows:
the playing program obtains the course number and the verification code from the link parameter, and calls the API related to the video file to obtain the data required by the video file (such as the position of the host for storing the video audio/video file, the position of the folder for the teaching material picture file, the position of the wb file)
And the playing program end reads the wb file of the course. And storing the page event and the whiteboard event into a correlation hash and array structure, and acquiring the audio and video file name information of the instructor from the recording event of the instructor.
Confirm that the playback program does not carry the 'playback ═ 1' parameter. The time stamp of the first recorded event 'r' of the student is taken as the starting position of the playing (five minutes after the instructor enters the classroom, 300 seconds of time code), the total time of the whole classroom is calculated from the wb file, the total time zone in which the instructor logs out and logs in after the first 'r' event is cut off, and finally the calculation result is taken as the total time of the time axis.
Confirm the special event 'q' in wb file for manual modification, use all the events in wb file (filter all events recorded in instructor video file and triggered by prior art CuePoint while playing), establish a timer in program to confirm every second that the second and whether there are any events above can be triggered.
Recording the event 'r' from the instructor to preload the instructor video file and monitoring the callback message. If the callback is successful, executing the step h;
the data structure for user playing is established from the recorded events 'r' of other users.
The classroom category is taken from the instructor 'r' first recorded event to decide which video file interface framework to use.
After jumping the audio-video streaming of the instructor to the first login time point (5 minutes) of the instructor, pausing, and taking the position as the initial position of the time axis, jumping out of the inquiry interface, and allowing the user to select whether to start playing the video file. Selecting 'yes', starting to play from the play starting point;
every fifteen seconds, the instructor profile will be confirmed first, with the most recently triggered time code (315 seconds), and this value is used to compare the most recently triggered time code (314 seconds) in the trainee profile. Because the gap is 1 second (315-.
Confirming that the playing reaches 50 th second (time code 350) to trigger a page turning 'p-1' event, and updating the teaching material graph file on the program to be a second page of teaching material after emptying the canvas.
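The per-second trigger loop described above can be sketched as follows; the class name, the event tuple shape and the start time code are assumptions for illustration only.

class WbPlayback:
    # Per-second dispatcher for wb events during playback (a sketch, not the real player).

    def __init__(self, events, start_timecode=300):
        # events: iterable of (timecode_seconds, event_name, params) read from the wb file
        self.pending = sorted(events, key=lambda e: e[0])
        self.position = start_timecode        # e.g. 300 s = the student's first 'r' event

    def tick(self):
        # Called once per second by a timer; returns and removes the events that are now due.
        self.position += 1
        due = []
        while self.pending and self.pending[0][0] <= self.position:
            due.append(self.pending.pop(0))
        return due

player = WbPlayback([(350, "p", ["1"]), (360, "s", ["add", "line"])], start_timecode=349)
print(player.tick())  # [(350, 'p', ['1'])]: clear the canvas, show page 2 of the material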
Fig. 2 to Fig. 12 are schematic views of embodiments of the video generation method based on online education of the present invention. Fig. 2 is a schematic diagram of a segmented recorded video in an embodiment of the video generation method based on online education according to the present invention. As shown in Fig. 2, in another preferred embodiment of the present invention, a recorded video is divided, according to a preset time length of 1 minute, into a plurality of video paragraphs A1, A2, A3, A4, A5, A6, A7, A8, A9 ..., each with a duration of 1 minute.
Fig. 3 is a frame of a picture in video paragraph A1 in Fig. 2. As shown in Fig. 3, one frame of picture is extracted from video paragraph A1 (in which instructor 10 is mainly introducing the use of prepositions) for image-text recognition, and the first text about the blackboard-writing content is obtained:
preposition word
Like …. Because like's verb's and prepositions can both be placed before nouns, a distinction is to be made.
What is the she like? What is she? (preposition)
As/as.:
he was interested in playing chess for a short time.
By with/by/ride/by.
The bridge was buried by robots. "
Voice recognition is performed on video paragraph A1 to obtain the second text; in this embodiment, the two words "preposition" and "like", which appear in both the first text and the second text and appear most frequently, are taken as the video tags of video paragraph A1.
Fig. 4 is a frame of a picture in video paragraph A2 in Fig. 2. As shown in Fig. 4, one frame of picture is extracted from video paragraph A2 (in which instructor 10 is mainly introducing the usage of the preposition as) for image-text recognition, and a first text about the blackboard-writing content is obtained (the same as the first text in A1, not repeated here). Voice recognition is performed on video paragraph A2 to obtain the second text; in this embodiment, the two words "preposition" and "as", which appear in both texts and appear most frequently, are taken as the video tags of video paragraph A2.
In video paragraph A3 (in which instructor 10 is still introducing the usage of the preposition as), one frame of picture is extracted for image-text recognition, and a first text about the blackboard-writing content is obtained (the same as the first text in A1, not repeated here). Voice recognition is performed on video paragraph A3 to obtain the second text; in this embodiment, the two words "preposition" and "as", which appear in both texts and appear most frequently, are taken as the video tags of video paragraph A3.
Fig. 5 is a frame of a picture in video paragraph A4 in Fig. 2. As shown in Fig. 5, one frame of picture is extracted from video paragraph A4 (in which instructor 10 is mainly introducing the usage of the preposition by) for image-text recognition, and a first text about the blackboard-writing content is obtained (the same as the first text in A1, not repeated here). Voice recognition is performed on video paragraph A4 to obtain the second text; in this embodiment, the two words "preposition" and "by", which appear in both texts and appear most frequently, are taken as the video tags of video paragraph A4.
Fig. 6 is a frame of a picture in video paragraph A5 in Fig. 2. As shown in Fig. 6, one frame of picture is extracted from video paragraph A5 (in which instructor 10 is mainly introducing the usage of the preposition in) for image-text recognition, and the first text about the blackboard-writing content is obtained:
preposition word
In … (language):
what's this in Chinese? How is this said in chinese?
On ride/hike, through (radio/tv):
do you go hot on foot? Do you walk there?
Over pass (radio), span:
the key in touch over the radio while working. "
Voice recognition is performed on video paragraph A5 to obtain the second text; in this embodiment, the two words "preposition" and "in", which appear in both the first text and the second text and appear most frequently, are taken as the video tags of video paragraph A5.
In video paragraph A6 (in which instructor 10 is still introducing the usage of the preposition in), one frame of picture is extracted for image-text recognition, and a first text about the blackboard-writing content is obtained (the same as the first text in A5, not repeated here). Voice recognition is performed on video paragraph A6 to obtain the second text; in this embodiment, the two words "preposition" and "in", which appear in both texts and appear most frequently, are taken as the video tags of video paragraph A6.
Fig. 7 is a frame of a picture in video paragraph A7 in Fig. 2. As shown in Fig. 7, one frame of picture is extracted from video paragraph A7 (in which instructor 10 is mainly introducing the usage of the preposition on) for image-text recognition, and a first text about the blackboard-writing content is obtained (the same as the first text in A5, not repeated here). Voice recognition is performed on video paragraph A7 to obtain the second text; in this embodiment, the two words "preposition" and "on", which appear in both texts and appear most frequently, are taken as the video tags of video paragraph A7.
In video paragraph A8 (in which instructor 10 is still introducing the usage of the preposition on), one frame of picture is extracted for image-text recognition, and a first text about the blackboard-writing content is obtained (the same as the first text in A5, not repeated here). Voice recognition is performed on video paragraph A8 to obtain the second text; in this embodiment, the two words "preposition" and "on", which appear in both texts and appear most frequently, are taken as the video tags of video paragraph A8.
Fig. 8 is a frame of a picture in video paragraph A9 in Fig. 2. As shown in Fig. 8, one frame of picture is extracted from video paragraph A9 (in which instructor 10 is mainly introducing the usage of the preposition over) for image-text recognition, and a first text about the blackboard-writing content is obtained (the same as the first text in A5, not repeated here). Voice recognition is performed on video paragraph A9 to obtain the second text; in this embodiment, the two words "preposition" and "over", which appear in both texts and appear most frequently, are taken as the video tags of video paragraph A9.
Fig. 9 is a schematic diagram of a video course sent to a user after establishing user tags based on the user's personal information. Fig. 10 is a schematic diagram of a video file obtained by screening and editing the video paragraphs. Fig. 11 is a schematic diagram of all video paragraphs contained in a video file. As shown in Fig. 9, for student 11 the keywords of the wrong questions are extracted from the wrong-question set in the history database, and editing tags are generated from those keywords; in this embodiment, the editing tags generated from the wrong-question keywords of student 11 are "preposition", "as", "in" and "on". The video paragraphs A1, A2, A3, A4, A5, A6, A7, A8, A9 ... are screened for paragraphs whose video tags hit the editing tags "preposition", "as", "in" and "on", giving three video files B1, B2 and B3: video file B1 hits the editing tags "preposition" and "as"; video file B2 hits the editing tags "preposition" and "in"; video file B3 hits the editing tags "preposition" and "on". Video files B1, B2 and B3 are edited into one video file C (see Fig. 10), and video file C is sent to the mobile terminal 11A of student 11. Student 11 can then study video file C; playing video file C on mobile terminal 11A essentially plays video paragraphs A2, A3, A5, A6, A7 and A8 in sequence, so the knowledge points related to the questions the student most often gets wrong are reviewed in a highly targeted manner, greatly improving learning efficiency and the humanized experience.
Fig. 12 is a schematic diagram of establishing user tags based on the personal information of a plurality of users and sending the resulting video courses to the respective users. Referring to Figs. 9, 10, 11 and 12, the invention can divide the recorded teaching videos of all recorded instructors and students stored in the online education server into a plurality of video paragraphs, form different video files (C1, C2, C3) according to the different needs of each student (11, 12, 13) (such as wrong-question sets or viewing preferences), and send them to the mobile terminals (11A, 12B, 13C) of the respective students. Even for the video lessons of the same online class, after editing according to each student's different needs, every student obtains a video file customized or selected according to his or her personal situation. This improves teaching efficiency, better matches students with different learning schedules and knowledge backgrounds, maximizes the use of the existing recorded video resources of online education, and achieves a better information-matching effect.
Fig. 13 is an architecture diagram of the video generation system based on online education of the present invention. As shown in Fig. 13, an embodiment of the present invention further provides a video generation system based on online education for implementing the above video generation method based on online education, where the video generation system 5 based on online education includes:
the recorded video segmentation module 51, used for dividing at least one recorded video of online education into a plurality of video paragraphs according to a preset time length;
the video tag adding module 52, used for adding at least one video tag to the video paragraph according to the content information of the video paragraph;
the video file editing module 53, used for screening and editing the video paragraphs according to the video tags to obtain at least one video file containing a plurality of video paragraphs; and
the user video customizing module 54, which establishes at least one user tag according to the personal information of the user, screens and edits the video files that meet the corresponding requirement, obtains at least one video course and sends it to the user.
In a preferred embodiment, the video tag adding module performs image-text recognition on at least one frame of picture in the video paragraph to obtain a first text of the video paragraph; performing voice recognition on the video paragraph to obtain a second text of the video paragraph; and obtaining at least one word with the highest occurrence frequency according to the first text and the second text corresponding to the video paragraph as a video label of the video paragraph.
In a preferred embodiment, the recorded videos are teaching videos of all recorded instructors and trainees stored in the online education server, and each teaching video is divided into a plurality of video paragraphs according to preset time length; or
The recorded video is a real-time recorded video when the instructor and the student perform one-to-one online education, and the recorded video in the preset time length is generated into a video paragraph after the preset time length.
In a preferred embodiment, the video tag adding module adds at least one video tag to the video paragraph according to a trigger event in the content information of the video paragraph, where the trigger event includes at least one of the following events:
instructors or students connect to a video classroom of online education;
a teacher or a student opens a dialog box in a video classroom to input characters;
a teacher or a student turns pages of a teaching material in a video classroom;
a white board is used by a teacher or a trainee in a video classroom;
a teacher or a student plays audio and video files in a video classroom;
the instructor or student uses animated special effects in a video classroom;
the instructor or student sends points in the video classroom; and
the instructor or student leaves the video classroom.
According to the video generation system based on online education of the present invention, the whole video is divided into video paragraphs to which tags are then added, which makes it convenient to screen and optimally edit the whole video, increases the viewing value of video playback, and improves the humanized experience.
An embodiment of the present invention also provides a video generation device based on online education, comprising a processor and a memory in which executable instructions of the processor are stored, wherein the processor is configured to perform the steps of the above video generation method based on online education by executing the executable instructions.
As shown above, this embodiment divides the whole video into video paragraphs and then adds tags, which facilitates screening and optimized editing of the whole video, increases the viewing value of video playback, and improves the humanized experience.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit", "module" or "platform".
Fig. 14 is a schematic structural diagram of a video generating apparatus based on online education according to the present invention. An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 14. The electronic device 600 shown in fig. 14 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 14, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
The storage unit stores program code executable by the processing unit 610 to cause the processing unit 610 to perform the steps according to various exemplary embodiments of the present invention described in the above video generation method section of this specification. For example, the processing unit 610 may perform the steps shown in Fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
Embodiments of the present invention also provide a computer-readable storage medium for storing a program which, when executed, implements the steps of the video generation method based on online education. In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to perform the steps according to the various exemplary embodiments of the present invention described in the online education-based video generation method section above.
As described above, this embodiment divides the whole video into video paragraphs and attaches tags to them, which facilitates screening and optimized editing of the whole video, increases the viewing value of video playback, and improves the user experience.
Fig. 15 is a schematic structural diagram of a computer-readable storage medium of the present invention. Referring to fig. 15, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In summary, the present invention provides a video generation method, system, device, and storage medium based on online education, in which tags are added to the video paragraphs obtained by dividing the whole video, so that the whole video can be conveniently screened and edited, the viewing value of video playback is increased, and the user experience is improved.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be construed as limited to these specific details. Those skilled in the art to which the invention pertains may make several simple deductions or substitutions without departing from the spirit of the invention, and all such variations shall be considered as falling within the protection scope of the invention.

Claims (18)

1. A video generation method based on online education is characterized by comprising the following steps:
s110, dividing at least one recorded video of online education into a plurality of video paragraphs according to preset time length;
s120, adding at least one video label to the video paragraph according to the content information of the video paragraph;
s130, screening and editing the video paragraphs according to the video tags to obtain at least one video file containing a plurality of video paragraphs;
the step S120 includes the steps of:
s121, performing image-text recognition on at least one frame of picture in the video paragraph to obtain a first text of the video paragraph; performing voice recognition on the video paragraph to obtain a second text of the video paragraph;
and S122, taking at least one word with the highest occurrence frequency in the first text and the second text corresponding to the video paragraph as a video label of the video paragraph.
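For illustration only, the following Python sketch shows one way the tagging of steps S121-S122 could be realized, assuming the image-text (OCR) result and the speech-recognition result of a paragraph are already available as plain strings; whitespace tokenization and the function name are simplifying assumptions, not part of the claimed method.

```python
from collections import Counter

def tag_video_paragraph(first_text: str, second_text: str, top_n: int = 1) -> list:
    """S122 sketch: pick the word(s) with the highest occurrence frequency in the
    OCR text (first_text) and the speech-recognition text (second_text) as tags."""
    words = (first_text + " " + second_text).lower().split()
    counts = Counter(words)
    return [word for word, _ in counts.most_common(top_n)]

# One paragraph whose slide and audio both mention "fractions":
print(tag_video_paragraph("adding fractions examples", "today we practice fractions"))
# ['fractions']
```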
2. The method for generating video based on online education as claimed in claim 1, wherein the step S122 includes using at least one word with the highest total occurrence number in the first text and the second text as the video tag of the video passage.
3. The method for generating a video based on online education as claimed in claim 1, wherein the step S122 includes using at least one word in which the first text and the second text both appear and occur most frequently as the video tag of the video passage.
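A minimal sketch of the two selection variants in claims 2 and 3, under the same plain-text assumptions as above; real word segmentation (especially for Chinese text) would require a proper tokenizer.

```python
from collections import Counter

def tag_by_total_count(first_text: str, second_text: str) -> str:
    """Claim 2 variant: the word with the highest total count across both texts."""
    counts = Counter(first_text.lower().split()) + Counter(second_text.lower().split())
    return counts.most_common(1)[0][0] if counts else ""

def tag_by_intersection(first_text: str, second_text: str) -> str:
    """Claim 3 variant: the word must appear in both texts; among those, pick the
    one with the highest combined frequency."""
    c1, c2 = Counter(first_text.lower().split()), Counter(second_text.lower().split())
    common = {w: c1[w] + c2[w] for w in c1.keys() & c2.keys()}
    return max(common, key=common.get) if common else ""
```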
4. The video generation method based on online education as claimed in claim 1, wherein the step S130 includes the steps of: searching for the video paragraphs whose video labels have the highest play counts and/or the video paragraphs whose video labels have the longest watch time by users, and editing them to obtain the video file.
5. The video generation method based on online education as claimed in claim 1, wherein the step S130 includes the steps of: deleting the video paragraphs whose video tags have play counts below a preset threshold and/or the video paragraphs whose video tags have watch durations below a preset threshold, and editing the remaining video paragraphs to obtain the video file.
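The screening of claims 4 and 5 might look like the sketch below; the dictionary keys `play_count` and `watch_seconds` and the threshold values are assumptions made for illustration, since the claims only speak of a "preset threshold".

```python
def screen_paragraphs(paragraphs: list, min_plays: int = 5, min_watch_seconds: int = 30) -> list:
    """Claim 5 sketch: drop paragraphs whose tags fall below either threshold,
    then (claim 4) order the survivors so the most played / longest watched lead."""
    kept = [p for p in paragraphs
            if p["play_count"] >= min_plays and p["watch_seconds"] >= min_watch_seconds]
    return sorted(kept, key=lambda p: (p["play_count"], p["watch_seconds"]), reverse=True)

clips = [
    {"tag": "fractions", "play_count": 12, "watch_seconds": 240},
    {"tag": "greeting",  "play_count": 1,  "watch_seconds": 5},
]
print(screen_paragraphs(clips))  # only the "fractions" paragraph survives
```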
6. The method for generating a video based on online education as claimed in claim 1, wherein, after the step S130, the method further comprises:
s140, establishing at least one user label according to personal information of the user, screening the video file with the requirement for editing, obtaining at least one video course and sending the video course to the user.
7. The method as claimed in claim 6, wherein the video tag of the video paragraph played most by the user is obtained from the historical database of the user as the user tag, the video paragraphs with the user tag are searched for and edited to obtain the video lesson, and the video lesson is sent to the mobile terminal of the user.
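A sketch of the user-tag customization in claims 6-7, under the assumption that the user's history is a simple list of tags of the paragraphs they played; the data shapes are illustrative, not taken from the patent.

```python
from collections import Counter

def build_user_lesson(played_tags: list, paragraphs: list) -> list:
    """Claim 7 sketch: the tag the user played most often becomes the user tag,
    and every paragraph carrying that tag is collected into a video lesson."""
    if not played_tags:
        return []
    user_tag = Counter(played_tags).most_common(1)[0][0]
    return [p for p in paragraphs if user_tag in p["tags"]]
```

In practice the resulting lesson would then be rendered and pushed to the user's mobile terminal, a step omitted from this sketch.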
8. The video generating method based on online education as claimed in claim 6, wherein the step S140 includes the steps of:
s141, extracting keywords of wrong questions from a wrong question set of a historical database of a user;
s142, generating an editing label according to the keyword of the wrong question;
s143, the video label is hit to the video section of the editing label to be edited, and the video file of the wrong question set of the user is obtained; and
and S144, sending the video file to the user.
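The wrong-question workflow of claim 8 (S141-S144) could be sketched as below; the naive keyword extraction and the `tags` field are assumptions made only to keep the example self-contained, and the final sending step (S144) is omitted.

```python
def wrong_question_video(wrong_questions: list, paragraphs: list) -> list:
    """Claim 8 sketch: S141/S142 derive editing labels from the keywords of the
    user's wrong questions; S143 keeps the paragraphs whose video tags hit a label."""
    editing_labels = {w for q in wrong_questions for w in q.lower().split() if len(w) > 3}
    return [p for p in paragraphs if editing_labels & set(p["tags"])]

review = wrong_question_video(
    ["Simplify the fraction 6/8", "Add the fractions 1/3 and 1/6"],
    [{"tags": ["fractions"], "id": 7}, {"tags": ["greeting"], "id": 1}],
)
print([p["id"] for p in review])  # [7]
```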
9. The method for generating video based on online education as claimed in claim 1, wherein the recorded video is recorded teaching videos of all instructors and trainees stored in the online education server, and each teaching video is divided into a plurality of video segments according to a preset time length.
10. The method for generating a video based on online education as claimed in claim 1, wherein the recorded video is a real-time recorded video when the instructor and the trainee perform online education one-to-one, and the recorded video within a preset time period is generated as a video passage every time the preset time period elapses.
11. The video generating method based on online education as claimed in claim 9 or 10, wherein the preset time periods are each selected in the range of 10 seconds to 10 minutes.
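The segmentation of claims 9-11 reduces to computing cut points at a preset interval; a minimal sketch, assuming the preset duration is given in seconds and only the boundaries (not the actual media cutting) are needed.

```python
def segment_boundaries(total_seconds: float, preset_seconds: float = 300.0):
    """Claims 9-11 sketch: yield (start, end) pairs cutting a recording into
    paragraphs of a preset duration between 10 seconds and 10 minutes."""
    if not 10 <= preset_seconds <= 600:
        raise ValueError("preset duration must lie between 10 s and 10 min")
    start = 0.0
    while start < total_seconds:
        yield start, min(start + preset_seconds, total_seconds)
        start += preset_seconds

# A 25-minute lesson cut into 5-minute paragraphs:
print(list(segment_boundaries(1500, 300)))
# [(0.0, 300.0), (300.0, 600.0), (600.0, 900.0), (900.0, 1200.0), (1200.0, 1500.0)]
```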
12. The method for generating video based on online education as claimed in claim 1, wherein the step S120 adds at least one video tag to the video paragraph according to a triggering event in the content information of the video paragraph, and the triggering event includes at least one of the following events:
a teacher or a student connects to a video classroom of the online education;
a teacher or a student opens a dialog box in the video classroom to input text;
a teacher or a student turns a page of the teaching material in the video classroom;
a teacher or a student uses a whiteboard in the video classroom;
a teacher or a student plays an audio or video file in the video classroom;
a teacher or a student uses an animated special effect in the video classroom;
a teacher or a student sends points in the video classroom; and
a teacher or a student leaves the video classroom.
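Purely as an illustration of the trigger events listed in claim 12 above, the sketch below models them as an enumeration and maps each observed event to a tag; the event names are invented for the example and are not the patent's terminology.

```python
from enum import Enum, auto

class TriggerEvent(Enum):
    # Claim 12 sketch: classroom events that may attach a tag to the current paragraph.
    CONNECT_CLASSROOM = auto()
    OPEN_DIALOG_BOX = auto()
    TURN_PAGE = auto()
    USE_WHITEBOARD = auto()
    PLAY_MEDIA = auto()
    USE_ANIMATION = auto()
    SEND_POINTS = auto()
    LEAVE_CLASSROOM = auto()

def tags_from_events(events: list) -> list:
    """Turn each observed trigger event into a lowercase video tag string."""
    return [event.name.lower() for event in events]

print(tags_from_events([TriggerEvent.TURN_PAGE, TriggerEvent.USE_WHITEBOARD]))
# ['turn_page', 'use_whiteboard']
```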
13. A video generation system based on online education for implementing the video generation method based on online education claimed in claim 1, characterized by comprising:
the recorded video segmentation module is used for segmenting at least one segment of recorded video of online education into a plurality of video segments according to preset time length;
the video label adding module is used for adding at least one video label to the video paragraph according to the content information of the video paragraph, wherein the video label adding module performs image-text recognition on at least one frame of picture in the video paragraph to obtain a first text of the video paragraph, performs voice recognition on the video paragraph to obtain a second text of the video paragraph, and takes at least one word with the highest occurrence frequency in the first text and the second text corresponding to the video paragraph as a video label of the video paragraph;
and the video file editing module is used for screening and editing the video paragraphs according to the video tags to obtain at least one video file containing a plurality of video paragraphs.
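The three modules of claim 13 map naturally onto three cooperating components; the skeleton below is only an architectural sketch (class and method names are assumptions), showing how the segmenter's output would feed the tagger and then the editor.

```python
class RecordedVideoSegmentationModule:
    """Splits a recorded lesson into paragraphs of a preset duration."""
    def split(self, video_path: str, preset_seconds: float) -> list:
        raise NotImplementedError

class VideoTagAddingModule:
    """Runs OCR on a key frame and speech recognition on the audio, then derives tags."""
    def add_tags(self, paragraph: dict) -> dict:
        raise NotImplementedError

class VideoFileEditingModule:
    """Screens paragraphs by their tags and concatenates them into a video file."""
    def edit(self, paragraphs: list) -> str:
        raise NotImplementedError
```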
14. The video generation system based on online education as claimed in claim 13, further comprising a user video customization module for creating at least one user tag according to the personal information of the user, screening and editing the video files that meet the user's requirements, obtaining at least one video course, and sending the video course to the user.
15. The video generating system based on online education as claimed in claim 13, wherein the recorded video is recorded teaching videos of all instructors and trainees stored in the online education server, each of the teaching videos is divided into a plurality of video segments according to a preset time length; or
the recorded video is a real-time recorded video recorded while the instructor and the trainee conduct one-to-one online education, and the video recorded within each preset time length is generated as a video paragraph after that preset time length elapses.
16. The online education-based video generation system of claim 13, wherein the video tagging module adds at least one video tag to a video passage according to a triggering event in content information of the video passage, the triggering event including at least one of:
a teacher or a student connects to a video classroom of the online education;
a teacher or a student opens a dialog box in the video classroom to input text;
a teacher or a student turns a page of the teaching material in the video classroom;
a teacher or a student uses a whiteboard in the video classroom;
a teacher or a student plays an audio or video file in the video classroom;
a teacher or a student uses an animated special effect in the video classroom;
a teacher or a student sends points in the video classroom; and
a teacher or a student leaves the video classroom.
17. A video generating device based on online education, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the online education-based video generation method of any one of claims 1 to 12 via execution of the executable instructions.
18. A computer-readable storage medium storing a program which, when executed by a processor, implements the steps of the online education-based video generation method according to any one of claims 1 to 12.
CN201910305029.3A 2019-04-16 2019-04-16 Video generation method, system, device and storage medium based on online education Active CN110035330B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910305029.3A CN110035330B (en) 2019-04-16 2019-04-16 Video generation method, system, device and storage medium based on online education
TW108134407A TW202040498A (en) 2019-04-16 2019-09-24 Video generation method, system and equipment based on online education and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910305029.3A CN110035330B (en) 2019-04-16 2019-04-16 Video generation method, system, device and storage medium based on online education

Publications (2)

Publication Number Publication Date
CN110035330A CN110035330A (en) 2019-07-19
CN110035330B true CN110035330B (en) 2021-11-23

Family

ID=67238611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910305029.3A Active CN110035330B (en) 2019-04-16 2019-04-16 Video generation method, system, device and storage medium based on online education

Country Status (2)

Country Link
CN (1) CN110035330B (en)
TW (1) TW202040498A (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110505424B (en) * 2019-08-29 2022-08-02 维沃移动通信有限公司 Video processing method, video playing method, video processing device, video playing device and terminal equipment
CN111225235B (en) * 2020-01-16 2020-12-04 北京合众美华教育投资有限公司 Method for playing network teaching video
CN111277917A (en) * 2020-02-17 2020-06-12 北京文香信息技术有限公司 Media data generation method, media characteristic determination method and related equipment
CN111429768B (en) * 2020-03-17 2022-04-05 安徽爱学堂教育科技有限公司 Knowledge point splitting and integrating method and system based on teaching recording and broadcasting
CN111417014B (en) * 2020-03-20 2022-12-13 深圳市企鹅网络科技有限公司 Video generation method, system, device and storage medium based on online education
CN111563196A (en) * 2020-03-30 2020-08-21 威比网络科技(上海)有限公司 Online language course information pushing method, system, equipment and storage medium
CN111681142B (en) * 2020-04-20 2023-12-05 深圳市企鹅网络科技有限公司 Education video virtual teaching-based method, system, equipment and storage medium
CN111462554A (en) * 2020-04-22 2020-07-28 浙江蓝鸽科技有限公司 Online classroom video knowledge point identification method and device
CN111541912B (en) * 2020-04-30 2022-04-22 北京奇艺世纪科技有限公司 Video splitting method and device, electronic equipment and storage medium
CN111654749B (en) * 2020-06-24 2022-03-01 百度在线网络技术(北京)有限公司 Video data production method and device, electronic equipment and computer readable medium
CN111711849A (en) * 2020-06-30 2020-09-25 浙江同花顺智能科技有限公司 Method, device and storage medium for displaying multimedia data
CN112023377B (en) * 2020-09-14 2021-11-09 成都拟合未来科技有限公司 Real-time interaction method, system, terminal and medium for fitness exercise
CN112364068A (en) * 2021-01-14 2021-02-12 平安科技(深圳)有限公司 Course label generation method, device, equipment and medium
CN113079415B (en) * 2021-03-31 2023-07-28 维沃移动通信有限公司 Video processing method and device and electronic equipment
CN113259763B (en) * 2021-04-30 2023-04-07 作业帮教育科技(北京)有限公司 Teaching video processing method and device and electronic equipment
CN113709526B (en) * 2021-08-26 2023-10-20 北京高途云集教育科技有限公司 Teaching video generation method and device, computer equipment and storage medium
CN113840147B (en) * 2021-11-26 2022-04-05 浙江智慧视频安防创新中心有限公司 Video processing method and device based on intelligent digital retina
CN117082268B (en) * 2023-10-18 2024-01-30 成都有为财商教育科技有限公司 Video recording and broadcasting method and system for online live broadcast
CN117786233B (en) * 2024-02-26 2024-05-24 山东正禾大教育科技有限公司 Intelligent online education classroom recommendation method and system


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645990B2 (en) * 2006-12-22 2014-02-04 Ciena Corporation Dynamic advertising control

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284216A (en) * 2014-10-23 2015-01-14 Tcl集团股份有限公司 Method and system for generating video highlight clip
CN105657537A (en) * 2015-12-23 2016-06-08 小米科技有限责任公司 Video editing method and device
CN108769733A (en) * 2018-06-22 2018-11-06 三星电子(中国)研发中心 Video clipping method and video clipping device

Also Published As

Publication number Publication date
TW202040498A (en) 2020-11-01
CN110035330A (en) 2019-07-19

Similar Documents

Publication Publication Date Title
CN110035330B (en) Video generation method, system, device and storage medium based on online education
Pavel et al. Rescribe: Authoring and automatically editing audio descriptions
CN112399258B (en) Live playback video generation playing method and device, storage medium and electronic equipment
WO2017192851A1 (en) Automated generation and presentation of lessons via digital media content extraction
CN114339285B (en) Knowledge point processing method, video processing method, device and electronic equipment
US20100046911A1 (en) Video playing system and a controlling method thereof
CN111711834B (en) Recorded broadcast interactive course generation method and device, storage medium and terminal
CN111405381A (en) Online video playing method, electronic device and computer readable storage medium
US20150213793A1 (en) Methods and systems for converting text to video
KR101858204B1 (en) Method and apparatus for generating interactive multimedia contents
CN111885313A (en) Audio and video correction method, device, medium and computing equipment
CN111417014B (en) Video generation method, system, device and storage medium based on online education
CN109729418A (en) A kind of teaching programming interactive video recording and broadcasting system and method
CN111614986A (en) Bullet screen generation method, system, equipment and storage medium based on online education
CN113411674A (en) Video playing control method and device, electronic equipment and storage medium
TWI575457B (en) System and method for online editing and exchanging interactive three dimension multimedia, and computer-readable medium thereof
CN112131361A (en) Method and device for pushing answer content
He et al. Comparing presentation summaries: slides vs. reading vs. listening
JP4085015B2 (en) STREAM DATA GENERATION DEVICE, STREAM DATA GENERATION SYSTEM, STREAM DATA GENERATION METHOD, AND PROGRAM
KR100882857B1 (en) Method for reproducing contents by using discriminating code
CN115203469A (en) Exercise explanation video knowledge point labeling method and system based on multi-label prediction
JP3816901B2 (en) Stream data editing method, editing system, and program
CN112287160A (en) Audio data sorting method and device, computer equipment and storage medium
KR20200089417A (en) Method and apparatus of providing learning content based on moving pictures enabling interaction with users
CN115052194B (en) Learning report generation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201231

Address after: 200030 unit 01, room 801, 166 Kaibin Road, Xuhui District, Shanghai

Applicant after: Shanghai Ping An Education Technology Co.,Ltd.

Address before: 152, 86 Tianshui Road, Hongkou District, Shanghai

Applicant before: TUTORABC NETWORK TECHNOLOGY (SHANGHAI) Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221130

Address after: 4 / F, Times financial center, 4001 Shennan Avenue, Futian District, Shenzhen, Guangdong 518000

Patentee after: PING'AN ZHITONG CONSULTING Co.,Ltd.

Address before: 200030 unit 01, room 801, 166 Kaibin Road, Xuhui District, Shanghai

Patentee before: Shanghai Ping An Education Technology Co.,Ltd.