Disclosure of Invention
To address the problems in the prior art, the present invention provides a video generation method, system, device, and storage medium based on online education.
An embodiment of the invention provides a video generation method based on online education, comprising the following steps:
S110, dividing at least one recorded video of online education into a plurality of video segments according to a preset duration;
S120, adding at least one video tag to each video segment according to the content information of the video segment;
S130, screening and editing the video segments according to the video tags to obtain at least one video file containing a plurality of video segments.
Preferably, the step S120 comprises the steps of:
S121, performing image-text recognition on at least one frame of picture in the video segment to obtain a first text of the video segment, and performing voice recognition on the video segment to obtain a second text of the video segment; and
S122, taking at least one word with the highest occurrence frequency in the first text and the second text corresponding to the video segment as a video tag of the video segment.
Preferably, the step S122 comprises taking at least one word with the highest total occurrence count across the first text and the second text as the video tag of the video segment.
Preferably, the step S122 comprises taking at least one word that appears in both the first text and the second text with the highest occurrence frequency as the video tag of the video segment.
Preferably, the step S130 comprises: searching for the video segments whose video tags have the highest play counts and/or the longest user viewing times, and editing them to obtain the video file.
Preferably, the step S130 comprises: deleting the video segments whose video tags have play counts below a preset threshold and/or viewing durations below a preset threshold, and editing the remaining segments to obtain the video file.
Preferably, the step S130 is followed by:
S140, establishing at least one user tag according to the personal information of a user, screening and editing the video segments matching the user's needs to obtain at least one video course, and sending the video course to the user.
Preferably, the video tag of the video segments most played by the user is obtained from the user's history database as the user tag; the video segments bearing the user tag are searched, merged, and edited to obtain the video course, which is sent to the user's mobile terminal.
Preferably, the step S140 comprises the steps of:
S141, extracting keywords of wrong questions from a wrong-question set in the user's history database;
S142, generating an editing tag according to the keywords of the wrong questions;
S143, editing the video segments whose video tags hit the editing tag to obtain a video file of the user's wrong-question set; and
S144, sending the video file to the user.
Preferably, the recorded videos are all recorded teaching videos of instructors and students stored on the online education server, and each teaching video is divided into a plurality of video segments according to the preset duration.
Preferably, the recorded video is recorded in real time while an instructor and a student conduct one-to-one online education, and each time the preset duration elapses, the video recorded within that duration is generated as a video segment.
Preferably, the preset duration is selected from the range of 10 seconds to 10 minutes.
Preferably, in step S120, at least one video tag is added to a video segment according to a trigger event in the content information of the video segment, where the trigger event includes at least one of the following:
an instructor or student connects to a video classroom of the online education;
an instructor or student opens a dialog box in the video classroom to enter text;
an instructor or student turns a page of the teaching material in the video classroom;
an instructor or student uses the whiteboard in the video classroom;
an instructor or student plays an audio or video file in the video classroom;
an instructor or student uses an animated special effect in the video classroom;
an instructor or student awards points in the video classroom; and
an instructor or student leaves the video classroom.
An embodiment of the present invention further provides a video generation system based on online education for implementing the above method, the system comprising:
a recorded-video segmentation module for dividing at least one recorded video of online education into a plurality of video segments according to a preset duration;
a video tag adding module for adding at least one video tag to each video segment according to the content information of the video segment; and
a video file editing module for screening and editing the video segments according to the video tags to obtain at least one video file containing a plurality of video segments.
Preferably, the video tag adding module performs image-text recognition on at least one frame of picture in the video segment to obtain a first text of the video segment, performs voice recognition on the video segment to obtain a second text of the video segment, and takes at least one word with the highest occurrence frequency in the first text and the second text as a video tag of the video segment.
Preferably, the system further comprises a user video customization module, which establishes at least one user tag according to the personal information of a user, screens and edits the video segments matching the user's needs to obtain at least one video course, and sends the video course to the user.
Preferably, the recorded videos are all recorded teaching videos of instructors and students stored on an online education server, and each teaching video is divided into a plurality of video segments according to the preset duration; or
the recorded video is recorded in real time while an instructor and a student conduct one-to-one online education, and each time the preset duration elapses, the video recorded within that duration is generated as a video segment.
Preferably, the video tag adding module adds at least one video tag to a video segment according to a trigger event in the content information of the video segment, where the trigger event includes at least one of the following:
an instructor or student connects to a video classroom of the online education;
an instructor or student opens a dialog box in the video classroom to enter text;
an instructor or student turns a page of the teaching material in the video classroom;
an instructor or student uses the whiteboard in the video classroom;
an instructor or student plays an audio or video file in the video classroom;
an instructor or student uses an animated special effect in the video classroom;
an instructor or student awards points in the video classroom; and
an instructor or student leaves the video classroom.
An embodiment of the present invention also provides a video generating apparatus based on online education, including:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the online education based video generation method described above via execution of the executable instructions.
Embodiments of the present invention also provide a computer-readable storage medium storing a program that, when executed, implements the steps of the above-described online education-based video generation method.
According to the video generation method, system, device, and storage medium based on online education of the present invention, the whole video is divided into video segments and tags are then added to each segment, making it convenient to select, optimize, and edit the segments, increasing the viewing value of video playback, and improving the user experience.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus their repetitive description will be omitted.
Fig. 1 is a flowchart of a video generation method based on online education of the present invention. As shown in fig. 1, the video generation method based on online education of the present invention includes the following steps:
and S110, dividing the recorded video of at least one section of online education into a plurality of video sections according to preset time length.
And S120, adding at least one video label to the video paragraph according to the content information of the video paragraph.
S130, screening and editing the video sections according to the video tags to obtain at least one video file containing a plurality of video sections.
S140, establishing at least one user label according to the personal information of the user, screening the video file with the requirement for editing, obtaining at least one video course and sending the video course to the user.
In the video generation method based on online education of the present invention, tags are added after the whole video is divided into video segments, so that each video segment carries at least one tag, making it convenient to select, optimize, and edit the whole video, for example by deleting unimportant segments or collecting and editing together the segments associated with a knowledge point.
An educational video generally has a fixed form quite different from that of an ordinary video: its picture typically contains at least a blackboard-writing part and an instructor explaining that blackboard writing. Although methods for labeling videos exist in the prior art, labeling them manually involves an extremely large workload and is time-consuming.
Step S120 comprises the steps of:
S121, performing image-text recognition on at least one frame of picture in the video segment to obtain a first text of the video segment, and performing voice recognition on the video segment to obtain a second text of the video segment.
S122, taking at least one word with the highest occurrence frequency in the first text and the second text corresponding to the video segment as a video tag of the video segment.
The invention sets the tags of the video segments of any educational video entirely by program. A certain number of picture frames are extracted from a video segment: existing image-text recognition means obtain, from the blackboard-writing part of the picture, a first text related to the blackboard content, and existing voice recognition means obtain, from the instructor's explanation of that blackboard content, a second text. The invention takes at least one word that occurs most frequently in the first text and the second text as a video tag describing the main content of the video segment. It should be noted that an educational video is a special video form: the instructor does not simply read the blackboard writing aloud but gives extended explanations or examples around it, and the blackboard content is usually quite brief. Obtaining only the blackboard text therefore cannot establish what the instructor actually explains in that part of the video, so defining video tags from the blackboard writing alone has very low accuracy and does not match instructors' teaching habits. Defining video tags only from the instructor's recognized speech is likewise disturbed by the instructor's personal expression habits, and its accuracy is also low. The invention therefore combines the different texts obtained in these two ways, bringing the video tag closer to the main content of the video segment, greatly increasing the accuracy of the defined tags, and providing a tag-adding scheme optimized for the unique attributes of educational video while keeping the computation cost in mind.
In a preferred embodiment, the step S122 includes, but is not limited to, taking at least one word with the highest total occurrence count across the first text and the second text as a video tag of the video segment. For example: in the first text, "engine" appears 4 times, "maintenance" appears 3 times, and the remaining words appear at most 2 times; in the second text, "engine" appears 8 times, "maintenance" appears 6 times, and the remaining words appear at most 2 times. The two words with the highest total counts across the first and second texts are therefore "engine" and "maintenance", which are taken as the two video tags of the video segment.
In a preferred embodiment, the step S122 includes, but is not limited to, taking at least one word that appears in both the first text and the second text with the highest occurrence frequency as a video tag of the video segment. For example: in the first text, "Li Bai" appears 3 times, "Du Fu" appears 3 times, and the remaining words appear at most 2 times; in the second text, "Li Bai" appears 2 times, "ancient poetry" appears 6 times, "Du Fu" appears 4 times, and the remaining words appear at most 2 times. Although "ancient poetry" appears the most times overall, it does not appear in the first text; this embodiment considers the combined effect of the blackboard writing and the instructor's speech more comprehensively, and takes "Li Bai" and "Du Fu", which appear in both the first text and the second text with the highest frequencies, as the video tags of the video segment.
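The two tag-selection variants above reduce to simple word-frequency statistics over the OCR text and the speech-recognition text. The following is a minimal sketch, assuming the recognized texts are already available as whitespace-separated word strings (real use would add tokenization and stop-word filtering; the function names are illustrative, not from the patent):

```python
from collections import Counter

def tags_by_total_count(first_text, second_text, k=1):
    # Variant 1: the k words with the highest combined occurrence count
    # across the first (blackboard OCR) and second (speech ASR) texts.
    counts = Counter(first_text.split()) + Counter(second_text.split())
    return [word for word, _ in counts.most_common(k)]

def tags_in_both_texts(first_text, second_text, k=1):
    # Variant 2: the k words that appear in BOTH texts, ranked by combined count.
    c1, c2 = Counter(first_text.split()), Counter(second_text.split())
    shared = {w: c1[w] + c2[w] for w in c1.keys() & c2.keys()}
    return sorted(shared, key=shared.get, reverse=True)[:k]
```

With the engine/maintenance counts of the example above, both variants yield "engine" and "maintenance" for k=2; the variants differ only when a frequent word appears in just one of the two texts.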
In a preferred embodiment, step S130 includes, but is not limited to, the steps of: searching for the video segments whose video tags have the highest play counts and/or the longest user viewing times, and editing them to obtain the video file. The invention can thus screen and edit the video segments watched repeatedly by most users (such as highlights and core knowledge points) into a video file, optimizing it so that it better matches the viewing experience of most users.
In a preferred embodiment, step S130 includes, but is not limited to, the steps of: deleting the video segments whose video tags have play counts below a preset threshold and/or viewing durations below a preset threshold, and editing the remaining segments to obtain the video file. The invention can thus screen out the segments most users are unwilling to watch (such as openings, endings, and advertisements) and edit and integrate the rest into a video file, simplifying and optimizing it and improving the user experience.
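Both screening rules of S130 are threshold or ranking filters over per-segment playback statistics. A hypothetical sketch follows; the record fields `play_count` and `watch_seconds` are assumptions for illustration, not named in the patent:

```python
def select_popular_segments(segments, top_n=3):
    # Keep the segments whose tags are played most and watched longest
    # (highlights, core knowledge points).
    ranked = sorted(segments,
                    key=lambda s: (s["play_count"], s["watch_seconds"]),
                    reverse=True)
    return ranked[:top_n]

def drop_unpopular_segments(segments, min_plays, min_watch_seconds):
    # Delete segments below either preset threshold (intros, outros, ads)
    # before editing the rest into the video file.
    return [s for s in segments
            if s["play_count"] >= min_plays
            and s["watch_seconds"] >= min_watch_seconds]
```

The two functions correspond to the "search the most watched" and "delete the least watched" embodiments respectively; a real system would aggregate the statistics per video tag rather than per segment record.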
In a preferred embodiment, the video tag of the video segments most played by the user is obtained from the user's history database as the user tag; the video segments bearing the user tag are searched and edited to obtain the video course, which is sent to the user's mobile terminal, but the invention is not limited thereto. In this way, according to each user's viewing habits and preferences, a video file edited from the user's favorite video segments can be obtained by tag screening, improving the viewing experience in line with each user's viewing needs.
In a preferred embodiment, step S140 includes the steps of:
S141, extracting keywords of wrong questions from the wrong-question set in the user's history database.
S142, generating an editing tag according to the keywords of the wrong questions.
S143, editing the video segments whose video tags hit the editing tag to obtain a video file of the user's wrong-question set; and
S144, sending the video file to the user.
Aimed at students' learning on an education website, the invention forms, according to the wrong questions in each student's wrong-question set, video files about those wrong questions tailored to each student's learning situation, so that every student sees in the video file provided to them the explanation videos for their own historical wrong questions, greatly improving student experience and satisfaction.
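Steps S141-S144 amount to matching editing tags derived from wrong-question keywords against the segments' video tags. A minimal sketch, with data shapes assumed for illustration (each wrong question is assumed to carry a precomputed keyword list):

```python
def wrong_question_video(segments, wrong_question_set):
    # S141: extract keywords from the user's wrong questions.
    keywords = {kw for question in wrong_question_set
                for kw in question["keywords"]}
    # S142: the keywords serve directly as editing tags.
    edit_tags = keywords
    # S143: collect the segments whose video tags hit any editing tag;
    # the caller then edits these into the wrong-question video file (S144).
    return [s for s in segments if edit_tags & set(s["tags"])]
```

In practice S141 would itself involve keyword extraction from the question text; here that step is assumed to be done upstream.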
In a preferred embodiment, the recorded videos are all recorded teaching videos of instructors and students stored on the online education server, and each teaching video is divided into a plurality of video segments according to the preset duration, but the invention is not limited thereto. The invention can thus intelligently process the massive existing teaching videos stored on an online education server, dividing each into video segments and tagging them separately.
In a preferred embodiment, the recorded video is recorded in real time while an instructor and a student conduct one-to-one online education, and each time the preset duration elapses, the video recorded within that duration is generated as a video segment, but the invention is not limited thereto. Adding a video tag to each segment of an educational video recorded in real time improves the timeliness of tagging and helps popularize the whole scheme.
In a preferred embodiment, the preset duration is selected from the range of 10 seconds to 10 minutes; for example, the preset duration sets the length of each video segment to 30 seconds or 1 minute, but the invention is not limited thereto.
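Dividing a recording by the preset duration is simply windowing the timeline. A sketch, assuming the duration is given in seconds within the stated 10-second to 10-minute range:

```python
def segment_boundaries(total_seconds, preset_seconds):
    # Split a recording into (start, end) windows of the preset duration;
    # the final segment may be shorter than the preset duration.
    if not 10 <= preset_seconds <= 600:
        raise ValueError("preset duration must be between 10 s and 10 min")
    return [(start, min(start + preset_seconds, total_seconds))
            for start in range(0, total_seconds, preset_seconds)]
```

For a 150-second recording with a 60-second preset duration this yields three segments: (0, 60), (60, 120), (120, 150).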
Preferably, in step S120, at least one video tag is added to the video segment according to a trigger event in the content information of the video segment, where the trigger event includes at least one of the following:
an instructor or student connects to a video classroom of the online education;
an instructor or student opens a dialog box in the video classroom to enter text;
an instructor or student turns a page of the teaching material in the video classroom;
an instructor or student uses the whiteboard in the video classroom;
an instructor or student plays an audio or video file in the video classroom;
an instructor or student uses an animated special effect in the video classroom;
an instructor or student awards points in the video classroom; and
an instructor or student leaves the video classroom.
In a preferred embodiment of the present invention, an online course video playback mechanism is created that relies on a self-defined text event format to store the audio/video information, whiteboard actions, and classroom operation records of a class into a text file and audio/video files according to that format. When the recording is watched in the same online course software, the video files and the text file supply the data needed to implement various functions (for example, playing back the whole course with time differences removed by an algorithm). Each entry in the event text file can be edited through the back end, down to details such as the characters written on the whiteboard and the lines drawn. This playback method makes it possible, in a simple way, to modify the recording and each of its event contents and audio/video playing passages merely by editing the course's wb text file, giving high maintenance flexibility.
The invention thus creates an online course video recording and playback mechanism that defines a set of text event formats. Events in this format are stored alongside the video stream files and also saved as an editable UTF-8 file. The various events (handwriting events, page-turning events, and so on) allow a person to dynamically modify the contents of the recording and to determine the order in which its events are triggered (for example, which video file is brought up first, the instructor's or a student's), so the recording does not need to be assembled through a scripting language and can be played directly through the course program interface after modification. This lowers the technical threshold and operating cost of editing a playable online classroom recording.
The present invention creates a text event format for recording the whiteboard actions and audio/video actions triggered during an online course, and stores, by existing techniques, a text file (hereinafter the wb file) and audio/video stream files (hereinafter the instructor video file). After the course ends, all behaviors and videos in the course can be fully played back from the wb file and the instructor video file, and the recording can be modified without recompiling the program.
The text event recording scheme of the invention is as follows:
a. When any user of the online course (instructor, student, or IT staff) enters the classroom, the video streaming event host starts the classroom entity, creates the wb file, and writes a line with the event 'init4', representing the beginning of the online course (the event format is 'current time-event name-event parameters', with the fields separated by a delimiter character and each event terminated by the two bytes CRLF);
b. Every minute, the classroom entity writes a line with the event 't' into the wb file and the instructor video file (if it exists); the event parameter is the current Unix time (in milliseconds, consisting of 13 digits);
c. When an instructor or student logs into the course, connects to the classroom entity, and grants access to the local microphone and camera, the classroom entity writes a line with the event 'r' into the wb file and the instructor video file (if it exists), representing that recording of that user's video stream begins (event parameters: user name, user number, user identity, classroom category, video format).
d. When an instructor or student leaves the course and disconnects from the classroom entity, the classroom entity writes a line with the event 'o' into the wb file and the instructor video file (if it exists), representing that recording of that user's video stream stops (event parameters: user name, user number, user identity).
e. When any user (instructor, student, or IT staff) types in the dialog box during the course, the classroom entity writes a line with the event 'c' into the wb file and the instructor video file (if it exists), representing that user's speech content in the classroom (event parameters: custom user header (including user name and conversation-partner name), user name, user number, user identity, speech content).
f. When any user with teaching-material operation authority (instructor, authorized student, or IT staff) turns or jumps to a page of the teaching material during the course, the classroom entity writes a line with the event 'p' into the wb file and the instructor video file (if it exists), representing which page of the teaching material is currently shown in the classroom (event parameter: page index value, an integer starting from 0).
g. When any user with whiteboard operation authority (instructor, authorized student, or IT staff) performs a whiteboard operation (add/modify/delete) during the course, the classroom entity writes a line with the event 's' into the wb file and the instructor video file (if it exists), representing the current operation content on the whiteboard (such as adding a new hand drawing or uploading a picture) (event parameters: drawing operation, drawing type, parameters required by the drawing type).
h. When the instructor uses a special effect and awards points to praise a student during the course, the classroom entity writes a line with the event 'rw' into the wb file and the instructor video file (if it exists), representing the instructor's timely praise and point award to the student (event parameters: praise type, student name and number, student's total points, points currently accumulated by the instructor in this classroom).
j. When the instructor plays an audio file of the teaching material during the course, the classroom entity writes a line with the event 'w' into the wb file and the instructor video file (if it exists) every second during playback, representing the current time point and playing behavior of the audio file played by the instructor (event parameters: file name including the audio file path, current playing state, total length of the audio file in seconds, current playing position of the audio file in seconds).
k. When the instructor switches a user (such as a student) to be the main speaker during the course, the classroom entity writes a line with the event 'sw' into the wb file and the instructor video file (if it exists), representing that the main speaker is switched (the user's video is switched to the position originally occupied by the instructor's video) (event parameters: user name and number, user category).
Every second after it is created, the classroom entity writes the accumulated number of seconds as a time code into the video files of all online users; when playing, these time codes are used to keep all audio tracks of the video files synchronized.
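The wb file described in items a-k is a line-oriented log: one event per line with a timestamp, an event name, and parameters, each line terminated by CRLF. The patent text does not spell out the actual field delimiter character, so the sketch below assumes '|' purely for illustration:

```python
import time

DELIM = "|"      # assumed delimiter; the actual separator is not specified in the text
CRLF = "\r\n"    # each event ends with the two bytes CRLF

def make_event(name, *params, now_ms=None):
    # Format one wb-file line: current time, event name, event parameters.
    if now_ms is None:
        now_ms = int(time.time() * 1000)   # 13-digit Unix time in milliseconds
    return DELIM.join([str(now_ms), name, *map(str, params)]) + CRLF

def parse_event(line):
    # Inverse of make_event: split one line back into (timestamp, name, params).
    timestamp, name, *params = line.rstrip(CRLF).split(DELIM)
    return int(timestamp), name, params
```

Because the log is plain UTF-8 text, an edit to any event (for example changing the page index of a 'p' event) is just a one-line text change, which is what gives the mechanism its claimed maintenance flexibility.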
the video playback mechanism of the present invention is as follows:
the playing program obtains the course number and the verification code from the link parameter, and calls the API related to the video file to obtain the data required by the video file (such as the position of the host for storing the video audio/video file, the position of the folder for the teaching material picture file, the position of the wb file)
And the playing program end reads the wb file of the course. And storing the page event and the whiteboard event into a correlation hash and array structure, and acquiring the audio and video file name information of the instructor from the recording event of the instructor.
Confirm that the playback program does not carry the 'playback ═ 1' parameter. The time stamp of the first recorded event 'r' of the student is taken as the starting position of the playing (five minutes after the instructor enters the classroom, 300 seconds of time code), the total time of the whole classroom is calculated from the wb file, the total time zone in which the instructor logs out and logs in after the first 'r' event is cut off, and finally the calculation result is taken as the total time of the time axis.
The special events 'q' in the wb file, reserved for manual modification, are confirmed; all events in the wb file are used (filtering out the events that are recorded in the instructor video file and triggered during playback by the existing CuePoint technique), and a timer is established in the program to check, every second, the current second and whether any of the above events should be triggered.
The instructor video file is preloaded from the instructor's recording event 'r', and the callback message is monitored; if the callback succeeds, step h is executed.
The data structures for playing each user's stream are established from the other users' recording events 'r'.
The classroom category is taken from the instructor's first recording event 'r' to decide which playback interface framework to use.
After the instructor's audio/video stream is jumped to the instructor's first login time point (5 minutes) and paused, that position is taken as the initial position of the time axis, and a query interface pops up letting the user choose whether to start playing the recording; if 'yes' is selected, playback starts from the starting point.
Every fifteen seconds, the most recently triggered time code in the instructor video file (e.g. 315 seconds) is read first and compared with the most recently triggered time code in the student video file (e.g. 314 seconds). Because the gap here is only 1 second (315 - 314), the streams are treated as synchronized; a larger gap would require re-synchronizing the lagging stream.
When playback reaches the 50th second (time code 350) and triggers the page-turning event 'p' with parameter 1, the canvas is cleared and the teaching-material picture shown in the program is updated to the second page of the teaching material.
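The once-per-second playback timer described above can be modeled as an index of parsed events keyed by time code, polled each second. A minimal sketch, with data shapes assumed for illustration:

```python
def build_timeline(events):
    # Index parsed (timecode_seconds, name, params) events by second so a
    # once-per-second timer can look up what to trigger (page turns,
    # whiteboard operations, audio events) at the current playback position.
    timeline = {}
    for second, name, params in events:
        timeline.setdefault(second, []).append((name, params))
    return timeline

def tick(timeline, second, trigger):
    # Called by the playback timer every second; fires all events due now.
    for name, params in timeline.get(second, []):
        trigger(name, params)
```

At time code 350 this would fire the 'p' event with page index 1, prompting the player to clear the canvas and display the second page of the teaching material.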
Fig. 2 to 12 are schematic views of embodiments of the video generation method based on online education of the present invention. Fig. 2 is a schematic diagram of a segmented recorded video in an embodiment of the method. As shown in fig. 2, in another preferred embodiment of the present invention, a recorded video is divided according to a preset duration (1 minute) into a plurality of video segments of 1 minute each: A1, A2, A3, A4, A5, A6, A7, A8, A9 ...
Fig. 3 is a frame of picture in the video segment A1 in fig. 2. As shown in fig. 3, 1 frame of picture is extracted from video segment A1 (in which instructor 10 is mainly introducing the use of prepositions) for image-text recognition, and the first text about the blackboard-writing content is obtained:
Preposition
Like: because 'like' as a verb and as a preposition can both be placed before a noun, a distinction must be made.
What is she like? (preposition)
As:
He was interested in playing chess for a short time.
By (with / by means of / by riding):
The bridge was built by robots.
Voice recognition is performed on video segment A1 to obtain the second text; in this embodiment, the two words "preposition" and "like", which appear in both the first text and the second text and occur most frequently, are used as the video tags of video segment A1.
Fig. 4 is a frame of a picture in video passage a2 in fig. 2.As shown in fig. 4, 1 frame of picture is extracted from video paragraph a2 (instructor 10 mainly introduces usage of preposition as) for text recognition, and a first text (the same as the first text in a1, and will not be described here again) about the content of the blackboard-writing is obtained. The second text is obtained by performing voice detection on the video passage a2, and in this embodiment, the two words "preposition" and "as" that appear in the first text and the second text most frequently are used as the video tags of the video passage a 2.
In video paragraph A3 (instructor 10 is still in the usage of introduction preposition as), 1 frame of picture is extracted for text recognition, and a first text (the same as the first text in a1, and will not be described here again) about the content of the blackboard writing is obtained. The second text is obtained by performing voice detection on the video passage A3, and in this embodiment, the two words "preposition" and "as" that appear in the first text and the second text most frequently are used as the video tags of the video passage A3.
Fig. 5 is a frame of a picture in video paragraph a4 in fig. 2. As shown in fig. 5, one frame is extracted from video paragraph a4 (in which instructor 10 mainly introduces the usage of the preposition by) for image-text recognition, and a first text about the blackboard-writing content (the same as the first text in a1, not repeated here) is obtained. The second text is obtained by performing voice recognition on video paragraph a4. In this embodiment, the two words "preposition" and "by", which appear in both the first text and the second text and occur most frequently, are used as the video tags of video paragraph a4.
Fig. 6 is a frame of a picture in video paragraph a5 in fig. 2. As shown in fig. 6, one frame is extracted from video paragraph a5 (in which instructor 10 is mainly introducing the usage of the preposition in) for image-text recognition, and the first text, covering the blackboard-writing content, is obtained:
Prepositions
In (in [a language]):
What's this in Chinese? (How do you say this in Chinese?)
On (on [foot]; over [the radio/TV]):
Do you go there on foot? (Do you walk there?)
Over (over [the radio]; across):
They keep in touch over the radio while working.
The second text is obtained by performing voice recognition on video paragraph a5. In this embodiment, the two words "preposition" and "in", which appear in both the first text and the second text and occur most frequently, are used as the video tags of video paragraph a5.
In video paragraph a6 (in which instructor 10 is still introducing the usage of the preposition in), one frame is extracted for image-text recognition, and a first text about the blackboard-writing content (the same as the first text in a5, not repeated here) is obtained. The second text is obtained by performing voice recognition on video paragraph a6. In this embodiment, the two words "preposition" and "in", which appear in both the first text and the second text and occur most frequently, are used as the video tags of video paragraph a6.
Fig. 7 is a frame of a picture in video paragraph a7 in fig. 2. As shown in fig. 7, one frame is extracted from video paragraph a7 (in which instructor 10 mainly introduces the usage of the preposition on) for image-text recognition, and a first text about the blackboard-writing content (the same as the first text in a5, not repeated here) is obtained. The second text is obtained by performing voice recognition on video paragraph a7. In this embodiment, the two words "preposition" and "on", which appear in both the first text and the second text and occur most frequently, are used as the video tags of video paragraph a7.
In video paragraph a8 (in which instructor 10 is still introducing the usage of the preposition on), one frame is extracted for image-text recognition, and a first text about the blackboard-writing content (the same as the first text in a5, not repeated here) is obtained. The second text is obtained by performing voice recognition on video paragraph a8. In this embodiment, the two words "preposition" and "on", which appear in both the first text and the second text and occur most frequently, are used as the video tags of video paragraph a8.
Fig. 8 is a frame of a picture in video paragraph a9 in fig. 2. As shown in fig. 8, one frame is extracted from video paragraph a9 (in which instructor 10 mainly introduces the usage of the preposition over) for image-text recognition, and a first text about the blackboard-writing content (the same as the first text in a5, not repeated here) is obtained. The second text is obtained by performing voice recognition on video paragraph a9. In this embodiment, the two words "preposition" and "over", which appear in both the first text and the second text and occur most frequently, are used as the video tags of video paragraph a9.
Fig. 9 is a schematic diagram of establishing a user tag based on personal information of a user and sending a video lesson to the user. Fig. 10 is a schematic diagram of a video file obtained by screening and editing video paragraphs. Fig. 11 is a schematic diagram of all the video paragraphs contained in a video file. As shown in fig. 9, the keywords of the wrong questions of trainee 11 are extracted from the wrong-question set in the history database, and edit tags are generated from those keywords; in this embodiment, the edit tags generated from the wrong-question keywords of trainee 11 are "preposition", "as", "in" and "on". The video paragraphs a1, a2, a3, a4, a5, a6, a7, a8 and a9 are screened for those whose video tags hit the edit tags "preposition", "as", "in" and "on", yielding three video files B1, B2 and B3: video file B1 hits the edit tags "preposition" and "as"; video file B2 hits the edit tags "preposition" and "in"; video file B3 hits the edit tags "preposition" and "on". The video files B1, B2 and B3 are edited into one video file C (see fig. 10), and video file C is sent to the mobile terminal 11A of trainee 11. When trainee 11 plays video file C on mobile terminal 11A, the video paragraphs a2, a3, a5, a6, a7 and a8 are in effect played in sequence, so the knowledge points related to the questions trainee 11 most often gets wrong are reviewed in a highly targeted manner, greatly improving learning efficiency and the humanized experience.
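In this embodiment a paragraph survives the screening only when all of its video tags are covered by the trainee's edit tags, which is why a1 ("like"), a4 ("by") and a9 ("over") are dropped even though they also carry the tag "preposition". That behaviour can be sketched as follows; the function name and the data shapes are assumptions, not the patent's API.

```python
def screen_paragraphs(paragraphs, edit_tags):
    """Keep the video paragraphs ALL of whose video tags hit the trainee's
    edit tags (derived, e.g., from wrong-question keywords), preserving the
    recording order so that the edited file plays the hits in sequence.

    paragraphs: list of (paragraph_name, [video_tag, ...]) in order.
    """
    edit_tags = set(edit_tags)
    return [name for name, tags in paragraphs if set(tags) <= edit_tags]
```

Feeding in the nine tagged paragraphs of this embodiment with the edit tags "preposition", "as", "in" and "on" returns exactly a2, a3, a5, a6, a7 and a8, matching the playback order described above.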
Fig. 12 is a schematic diagram of establishing user tags based on the personal information of a plurality of users and sending the obtained video courses to the users respectively. Referring to figs. 9, 10, 11 and 12, the present invention can divide the recorded teaching videos of all instructors and trainees stored in the online education server into a plurality of video paragraphs, form different video files (C1, C2, C3) according to the different requirements (e.g., wrong-question sets, viewing preferences) of each trainee (11, 12, 13), and send the video files to the mobile terminals (11A, 12B, 13C) of the respective trainees. Even for a video lesson of the same online class, after editing according to each trainee's requirements, every trainee obtains a video file customized or selected according to his or her personal situation. This improves teaching efficiency, better matches trainees with different learning schedules and knowledge backgrounds, makes maximum use of the existing recorded video resources of online education, and achieves a better information-matching effect.
Fig. 13 is a schematic diagram of an embodiment of a video generation system based on online education of the present invention. As shown in fig. 13, an embodiment of the present invention further provides a video generation system based on online education for implementing the above video generation method based on online education, where the video generation system 5 based on online education includes:
the recorded video segmentation module 51 is used for segmenting at least one segment of recorded video of online education into a plurality of video segments according to preset time length;
the video tag adding module 52 is used for adding at least one video tag to the video section according to the content information of the video section;
the video file editing module 53 is used for screening and editing the video paragraphs according to the video tags to obtain at least one video file containing a plurality of video paragraphs;
the user video customizing module 54 is used for establishing at least one user tag according to the personal information of the user, screening and editing the video files that match the user tag to obtain at least one video course, and sending the video course to the user.
In a preferred embodiment, the video tag adding module performs image-text recognition on at least one frame of picture in the video paragraph to obtain a first text of the video paragraph; performing voice recognition on the video paragraph to obtain a second text of the video paragraph; and obtaining at least one word with the highest occurrence frequency according to the first text and the second text corresponding to the video paragraph as a video label of the video paragraph.
In a preferred embodiment, the recorded videos are the teaching videos of all instructors and trainees stored in the online education server, and each teaching video is divided into a plurality of video paragraphs according to the preset time length; or
the recorded video is recorded in real time while an instructor and a trainee conduct one-to-one online education, and each time the preset time length elapses, the video recorded within that time length is generated into a video paragraph.
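Either variant reduces to cutting the recording into consecutive fixed-length windows, roughly as sketched below; the 300-second default and the function name are illustrative assumptions, since the patent leaves the preset time length open.

```python
def split_into_paragraphs(total_duration_s, paragraph_len_s=300):
    """Cut a recorded video of the given duration (in seconds) into
    consecutive (start, end) windows of the preset paragraph length; the
    final window may be shorter than the preset length."""
    windows = []
    start = 0
    while start < total_duration_s:
        end = min(start + paragraph_len_s, total_duration_s)
        windows.append((start, end))
        start = end
    return windows
```

In the real-time one-to-one case the same logic applies incrementally: whenever the elapsed recording time crosses the next window boundary, the just-completed window is emitted as a new video paragraph.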
In a preferred embodiment, the video tag adding module adds at least one video tag to the video paragraph according to a trigger event in the content information of the video paragraph, where the trigger event includes at least one of the following events:
an instructor or a trainee connects to a video classroom of online education;
an instructor or a trainee opens a dialog box in the video classroom to input text;
an instructor or a trainee turns a page of the teaching material in the video classroom;
an instructor or a trainee uses the whiteboard in the video classroom;
an instructor or a trainee plays an audio or video file in the video classroom;
an instructor or a trainee uses an animated special effect in the video classroom;
an instructor or a trainee sends points in the video classroom; and
an instructor or a trainee leaves the video classroom.
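A minimal sketch of turning the trigger events listed above into video tags is shown below; every event identifier and tag string here is a hypothetical placeholder, since the patent does not fix concrete names for the events or tags.

```python
# Hypothetical mapping from the trigger events listed above to video tags;
# the event identifiers and tag strings are illustrative assumptions.
TRIGGER_EVENT_TAGS = {
    "enter_classroom": "session-start",
    "open_dialog_box": "text-chat",
    "turn_textbook_page": "new-material",
    "use_whiteboard": "board-work",
    "play_av_file": "media-playback",
    "use_animation_effect": "animation",
    "send_points": "reward",
    "leave_classroom": "session-end",
}

def tags_for_events(events):
    """Return the video tags triggered by the events observed in a video
    paragraph, skipping any event with no configured tag."""
    return [TRIGGER_EVENT_TAGS[e] for e in events if e in TRIGGER_EVENT_TAGS]
```

Such event-derived tags could supplement the word-frequency tags, for example to mark where board work or media playback begins within a recording.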
According to the video generation system based on online education, the whole video is divided into video paragraphs and tags are added to the paragraphs, which facilitates selecting, optimizing and editing the whole video, increases the viewing value of video playback, and improves the humanized experience.
The embodiment of the invention further provides a video generation device based on online education, comprising a processor and a memory in which executable instructions of the processor are stored, wherein the processor is configured to perform the steps of the video generation method based on online education by executing the executable instructions.
As shown above, this embodiment divides the whole video into video paragraphs and adds tags to them, which facilitates selecting and editing the whole video, increases the viewing value of video playback, and improves the humanized experience.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may take the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "platform."
Fig. 14 is a schematic structural diagram of a video generating apparatus based on online education according to the present invention. An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 14. The electronic device 600 shown in fig. 14 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 14, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
The storage unit stores program code executable by the processing unit 610, so that the processing unit 610 performs the steps according to various exemplary embodiments of the present invention described in the video generation method section of this specification. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory (RAM) unit 6201 and/or a cache memory unit 6202, and may further include a read-only memory (ROM) unit 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
Embodiments of the present invention also provide a computer-readable storage medium for storing a program which, when executed, implements the steps of the video generation method based on online education. In some possible embodiments, the aspects of the present invention may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to perform the steps according to various exemplary embodiments of the present invention described in the video generation method section of this specification.
As shown above, this embodiment divides the whole video into video paragraphs and adds tags to them, which facilitates selecting and editing the whole video, increases the viewing value of video playback, and improves the humanized experience.
Fig. 15 is a schematic structural diagram of a computer-readable storage medium of the present invention. Referring to fig. 15, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic or optical signals, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
In summary, the video generation method, system, device and storage medium based on online education of the present invention divide the whole video into video paragraphs and add tags to them, which facilitates selecting and editing the whole video, increases the viewing value of video playback, and improves the user experience.
The foregoing is a further detailed description of the invention in connection with specific preferred embodiments, and the specific implementation of the invention is not to be considered limited to these descriptions. For those of ordinary skill in the art to which the invention pertains, several simple deductions or substitutions may be made without departing from the concept of the invention, and all such variants shall be considered to fall within the protection scope of the invention.