CN114466150A - Automatic video recording method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114466150A
Authority
CN
China
Prior art keywords
video; user; recording; exercise information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210370724.XA
Other languages
Chinese (zh)
Inventor
唐学武
郑伊萌
包鹏飞
王奇奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hex Technology Co ltd
Original Assignee
Beijing Hex Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hex Technology Co ltd filed Critical Beijing Hex Technology Co ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Abstract

The application relates to the field of computer technology, and in particular to an automatic video recording method and device, an electronic device, and a storage medium. The method comprises: when an operation instruction triggered by a first user is detected, determining the operation corresponding to the operation instruction, where the operation includes starting recording, pausing recording, and stopping recording; acquiring a video and the exercise information corresponding to the video based on the operation corresponding to the operation instruction; acquiring a first answering result of a second user based on the exercise information corresponding to the video, and determining second user name list information based on the exercise information corresponding to the video and the first answering result of the second user; and, based on the second user name list information, sending the video and the exercise information corresponding to the video to the second users on that list. The method and device make it convenient to record a video and to send it in a targeted manner.

Description

Automatic video recording method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for automatically recording a video, an electronic device, and a storage medium.
Background
With the rapid development of the economy, people pay increasing attention to education. In schools with rich network resources, Internet technology has begun to be applied to teaching. To make it easier for students to review what was taught in class, videos of teachers explaining exercises are sent to the students.
In the related art, the teacher is recorded throughout the teaching process, and after class the teacher clips the video and sends it to all students via a WeChat group. However, video editing places demands on the teacher's editing skills, the editing process consumes a great deal of the teacher's time and energy, and the video that is sent is not targeted, so students tend to ignore the video resources and teaching resources are wasted.
Disclosure of Invention
In order to record a video conveniently and send it in a targeted manner, the present application provides an automatic video recording method and device, an electronic device, and a storage medium.
In a first aspect, the present application provides an automatic video recording method, which adopts the following technical scheme:
an automatic video recording method comprises the following steps:
when an operation instruction triggered by a first user is detected, determining operation corresponding to the operation instruction, wherein the operation comprises starting recording, suspending recording and stopping recording;
acquiring a video and exercise information corresponding to the video based on the operation corresponding to the operation instruction;
acquiring a first answering result of a second user based on the exercise information, and determining second user name list information based on the exercise information corresponding to the video and the first answering result of the second user;
and sending the video and exercise information corresponding to the video to a second user corresponding to the second user name list information based on the second user name list information.
By adopting the above technical scheme, the operation instruction of the first user can be detected and analyzed to obtain the corresponding operation; the corresponding video is acquired based on that operation; the first answering result of the second user on the exercise is acquired based on the exercise information; the first answering result is analyzed, and the list of users who answered incorrectly is determined as the second user name list information; and the video and the exercise information are sent to the second users on that list. The explanation video is thus recorded automatically, sparing the first user the step of editing the video after class, and is sent in a targeted manner, reducing the possibility of teaching resources being wasted.
In another possible implementation manner, when an operation instruction triggered by a first user is detected, determining an operation corresponding to the operation instruction includes:
when an operation instruction triggered by a first user is detected, responding to a click operation triggered by the first user, and acquiring a buried point data packet;
and determining a buried point event and an operation corresponding to the buried point event based on the buried point data packet.
By adopting this technical scheme, when the operation instruction triggered by the first user is detected, the buried point data packet is acquired and analyzed; buried point ("event tracking") data analysis can accurately determine the operation of the first user.
In another possible implementation manner, the obtaining a video and exercise information corresponding to the video based on an operation corresponding to the operation instruction includes:
when detecting that the operation corresponding to the operation instruction comprises a trigger command, recording a time point;
and acquiring the video and the exercise information corresponding to the video based on the time point.
By adopting this technical scheme, the operation of the operation instruction can be analyzed; when the operation includes a trigger command, the time point is recorded, so that the entry time point and the exit time point are determined. The corresponding video is then obtained from the entry and exit time points, so that videos of the same exercise are merged and videos of different exercises are separated, which makes it convenient for the second user to study.
In another possible implementation manner, the obtaining a video and exercise information corresponding to the video based on the time point further includes:
performing semantic recognition on the video to obtain a semantic recognition result;
judging whether the video corresponds to the exercise information or not based on the semantic recognition result;
and if the video does not correspond to the exercise information corresponding to the video, preprocessing the video.
By adopting this technical scheme, semantic recognition can be performed on the video and the recognition result analyzed to judge whether the video corresponds to the exercise information, reducing the resource waste caused by sending a second user a video that does not match its exercise information. When the video and the exercise information do not correspond, the video is preprocessed, i.e. merged and trimmed, so that the video and the exercise information sent to the second user correspond to each other.
In another possible implementation manner, the obtaining a video and exercise information corresponding to the video based on an operation corresponding to the operation instruction further includes:
performing target recognition on the video, wherein a target of the target recognition is a first user;
tracking the object identified by the target to obtain a tracking result;
and determining a key image based on the video and the tracking result.
By adopting this technical scheme, the first user can be tracked, and the video and the tracking result analyzed, so as to determine a key image from the video. The first user may block the view at times during the explanation, so a clear image that is not occluded by the first user is determined as the key image.
In another possible implementation manner, the determining a key image based on the video and the tracking result further includes:
determining related knowledge points based on the exercise information;
and note marking is carried out on the key images based on the related knowledge points.
By adopting this technical scheme, the knowledge points related to the exercise are determined based on the exercise information and marked on the key image, so that the second user can see the knowledge points in the explanation video more clearly.
In another possible implementation manner, the sending, based on the second user name list information, the video and the exercise information corresponding to the video to a second user corresponding to the second user name list information further includes:
when a video watching ending instruction triggered by a second user is detected, acquiring a relevant exercise of exercise information corresponding to the video;
and sending the related exercises to a second user, and acquiring a second answering result of the second user, wherein the second answering result is an answering result of the second user based on the related exercises.
By adopting this technical scheme, after the second user finishes watching the explanation video, related exercises are provided according to the exercise information so that the second user can consolidate the knowledge points; the second answering result of the second user is acquired and analyzed, which makes it convenient to check the second user's learning outcome.
In a second aspect, the present application provides an apparatus for automatically recording a video, which adopts the following technical solutions:
an automatic video recording apparatus comprising:
the first determining module is used for determining operation corresponding to an operation instruction when the operation instruction triggered by a first user is detected, wherein the operation comprises starting recording, pausing recording and stopping recording;
the first acquisition module is used for acquiring a video and exercise information corresponding to the video based on the operation corresponding to the operation instruction;
the second acquisition module is used for acquiring a first answering result of a second user based on the exercise information corresponding to the video;
the second determining module is used for determining second user name list information based on the exercise information corresponding to the video and the first answering result of the second user;
and the sending module is used for sending the video and the exercise information corresponding to the video to a second user corresponding to the second user name list information based on the second user name list information.
By adopting this technical scheme, the first determining module can detect and analyze the operation instruction of the first user to determine the corresponding operation; the first acquisition module acquires the corresponding video based on that operation and the exercise information; the second acquisition module acquires the first answering result of the second user based on the exercise information and analyzes it; the second determining module determines the list of users who answered incorrectly as the second user name list information; and the sending module sends the video and the exercise information to the second users on that list. The explanation video is thus recorded automatically, sparing the first user the step of editing the video after class, and is sent in a targeted manner, reducing the possibility of teaching resources being wasted.
In another possible implementation manner, when detecting an operation instruction triggered by a first user and determining an operation corresponding to the operation instruction, the first determining module is specifically configured to:
when an operation instruction triggered by a first user is detected, responding to a click operation triggered by the first user, and acquiring a buried point data packet;
and determining a buried point event and an operation corresponding to the buried point event based on the buried point data packet.
In another possible implementation manner, when the first obtaining module obtains the video and the exercise information corresponding to the video based on the operation corresponding to the operation instruction, the first obtaining module is specifically configured to:
when detecting that the operation corresponding to the operation instruction comprises a trigger command, recording a time point;
and acquiring a video and exercise information corresponding to the video based on the time point.
In another possible implementation manner, when the first obtaining module preprocesses the video based on the video and the exercise information, the first obtaining module is specifically configured to:
performing semantic recognition on the video to obtain a semantic recognition result;
judging whether the video corresponds to the exercise information corresponding to the video or not based on the semantic recognition result;
and if the video does not correspond to the exercise information corresponding to the video, preprocessing the video.
In another possible implementation manner, the apparatus further includes: an object recognition module, an obtaining module, and a third determining module, wherein,
the target identification module is used for carrying out target identification on the video, and an object of the target identification is a first user;
the obtaining module is used for tracking the object identified by the target to obtain a tracking result;
and the third determining module is used for determining a key image based on the video and the tracking result.
In another possible implementation manner, the apparatus further includes: a fourth determination module and a marking module, wherein,
the fourth determining module is used for determining related knowledge points based on the exercise information;
and the marking module is used for note marking on the key images based on the related knowledge points.
In another possible implementation manner, the apparatus further includes: a third obtaining module and a fourth obtaining module, wherein,
the third acquisition module is used for acquiring the relevant exercises of the exercise information when a video watching ending instruction triggered by a second user is detected;
and the fourth obtaining module is used for sending the related exercises to the second user and obtaining a second answering result of the second user, wherein the second answering result is an answering result of the second user based on the related exercises.
In a third aspect, the present application provides an electronic device, which adopts the following technical solutions:
an electronic device, comprising:
at least one processor;
a memory;
at least one application, wherein the at least one application is stored in the memory and configured to be executed by the at least one processor, the at least one application being configured to perform the automatic video recording method according to any one of the possible implementations of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer-readable storage medium, comprising: there is stored a computer program that can be loaded by a processor and that implements a method of automatic video recording as shown in any one of the possible implementations of the first aspect.
In summary, the present application includes at least one of the following beneficial technical effects:
1. the operation instruction of the first user can be detected and analyzed to obtain the corresponding operation; the corresponding video is acquired based on that operation; the first answering result of the second user on the exercise is acquired based on the exercise information and analyzed; the list of users who answered incorrectly is determined as the second user name list information; and the video and the exercise information are sent to the second users on that list, so that the explanation video is recorded automatically, the first user is spared the step of editing the video after class, and the explanation video is sent in a targeted manner, reducing the possibility of teaching resources being wasted;
2. the operation of the operation instruction can be analyzed; when the operation includes a trigger command, the time point is recorded, from which the entry time point and the exit time point are determined; the corresponding video is obtained from the entry and exit time points, videos of the same exercise are merged and videos of different exercises are separated, which makes it convenient for the second user to study.
Drawings
Fig. 1 is a schematic flowchart of an automatic video recording method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an automatic video recording apparatus according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described in further detail below with reference to Figs. 1 to 3.
This embodiment is merely illustrative of the present application and does not limit it. After reading this specification, those skilled in the art may modify the embodiment as needed without making an inventive contribution, and all such modifications are protected by patent law within the scope of the claims of the present application.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship, unless otherwise specified.
At present, in order to promote the balanced distribution of high-quality educational resources, many schools have begun using information technology to expand the coverage of high-quality educational resources and to normalize the "three classrooms": the dedicated-delivery classroom, the famous-teacher classroom, and the famous-school online classroom. The dedicated-delivery classroom addresses the relative shortage of rural teachers by using synchronous lessons over the Internet to balance high-quality resources; the famous-teacher classroom addresses weak teaching ability by letting famous teachers set an example and sharing famous-teacher resources widely through online teaching and research activities; and the famous-school online classroom addresses the urban-rural imbalance in education by opening courses to promote the sharing of high-quality resources.
In the existing related techniques for the three classrooms, the in-class video of a teacher can only be recorded in full, and the teacher must perform secondary editing on the full recording after class, which demands strong video-editing skills and consumes a great deal of the teacher's time and energy. Alternatively, the teacher clicks a start or end button to record the video on demand, but this disrupts the teacher's performance in class, and the teacher often forgets to record, causing resources to be lost. After class, the teacher sends the video to the students through a WeChat group; because the explanation video that is sent is not targeted, students often ignore it, wasting resources.
The embodiment of the application provides an automatic video recording method, which can automatically record videos of exercises being explained in a classroom according to the teacher's trigger operations, determine a student name list according to the students' answer results, and send the video to those students in a targeted and accurate manner.
In the embodiment of the present application, a scene in which an exercise explanation video is automatically recorded in a classroom is mainly used as an example for description, but the present application is not limited thereto.
In order to better implement the automatic video recording method, the following description is made by using specific embodiments and with reference to the accompanying drawings.
The embodiment of the present application provides an automatic video recording method, which is executed by an electronic device. The electronic device may be a server, and the server may be an independent physical server, or a server cluster or distributed system formed by multiple physical servers, but is not limited thereto.
Further, an embodiment of the present application provides an automatic video recording method, and as shown in fig. 1, an example is given to implement an automatic video recording method, which is specifically as follows:
step S101, when an operation instruction triggered by a first user is detected, determining an operation corresponding to the operation instruction.
Wherein the operation comprises starting recording, pausing recording, and stopping recording.
For this embodiment of the application, a smart screen is installed in the classroom in advance. The smart screen is a touch screen carrying a plurality of function buttons, each of which corresponds to an operation. Typically, the first user is the teacher of a class. The operation instructions triggered by the first user include "start class", "begin explanation", "finish explanation", and "finish class". The "start class" instruction corresponds to the "prepare recording" operation, i.e. entering a standby state; the "begin explanation" instruction corresponds to the "start recording" operation, i.e. recording the explanation video of the exercise; the "finish explanation" instruction corresponds to the "end recording" operation, i.e. finishing the recording of the explanation video of the exercise; and the "finish class" instruction corresponds to the "stop recording" operation, i.e. entering a shutdown state.
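The instruction-to-operation correspondence described above can be sketched as a simple lookup. This is a hypothetical illustration, not the patent's implementation; all names are assumptions.

```python
# Hypothetical sketch: mapping the smart-screen instructions described above
# to recorder operations. Names are illustrative assumptions.
INSTRUCTION_TO_OPERATION = {
    "start class": "prepare recording",      # enter standby state
    "begin explanation": "start recording",  # record the explanation video
    "finish explanation": "end recording",   # finish the current recording
    "finish class": "stop recording",        # enter shutdown state
}

def operation_for(instruction: str) -> str:
    """Return the recorder operation for a first-user instruction."""
    try:
        return INSTRUCTION_TO_OPERATION[instruction]
    except KeyError:
        raise ValueError(f"unknown instruction: {instruction!r}")
```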
And step S102, acquiring the video and exercise information corresponding to the video based on the operation corresponding to the operation instruction.
For this embodiment of the application, the electronic device may be connected to the camera through wireless communication protocols such as Wi-Fi, Bluetooth, or NFC (Near Field Communication), so that the electronic device can acquire the video recorded by the camera. As for acquiring the exercise information corresponding to the video: the smart screen carries a plurality of exercise-number buttons, and the electronic device can acquire the exercise number clicked, or entered, by the first user and determine the corresponding exercise content in the exercise library.
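Resolving the selected exercise number to its content, as described above, amounts to a lookup in the exercise library. The library structure and contents below are assumptions for illustration only.

```python
# Hypothetical sketch: look up the exercise content for the number the first
# user clicked or typed on the smart screen. The library is an assumption.
from typing import Optional

EXERCISE_LIBRARY = {
    "12": "Solve for x: 3x + 5 = 20",
    "13": "Factor the polynomial x^2 - 9",
}

def exercise_info(exercise_number: str) -> Optional[str]:
    """Return the exercise content, or None if the number is not in the library."""
    return EXERCISE_LIBRARY.get(exercise_number)
```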
Step S103, acquiring a first answering result of the second user based on the exercise information corresponding to the video, and determining second user name list information based on the exercise information corresponding to the video and the first answering result of the second user.
For the embodiments of the present application, the second user is typically a student. The electronic device may obtain in advance the second users' answer results on homework, test papers, and similar materials, and analyze those answer results. Based on the current exercise information, the electronic device determines each second user's answer result for the current exercise from those answer results, and determines the information of the second users with a history of answering the exercise incorrectly as the second user name list information, thereby determining the wrong-answer name list for the exercise.
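The wrong-answer name list of step S103 can be sketched as a filter over the recorded answer results. The data shapes here are assumptions, not specified by the patent.

```python
# Hypothetical sketch of step S103: keep only the second users whose recorded
# answer to the current exercise was incorrect. Data shapes are assumptions.
def wrong_answer_roster(answer_results, exercise_id):
    """Return the sorted name list of second users who got `exercise_id` wrong.

    `answer_results` maps user name -> {exercise_id: answered_correctly(bool)}.
    Users with no recorded answer for the exercise are not listed.
    """
    return sorted(
        user for user, results in answer_results.items()
        if results.get(exercise_id) is False
    )
```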
And step S104, sending the video and exercise information corresponding to the video to a second user corresponding to the second user name list information based on the second user name list information.
For the embodiment of the application, the electronic device may obtain the account information of all students in advance and, based on the second user name list information, send the video and its exercise information in turn to the accounts of the second users on that list, so that the video and the exercise information corresponding to the video are sent in a targeted manner to the second users who often answer the exercise incorrectly.
Specifically, in the embodiment of the present application, when the operation instruction triggered by the first user is detected in step S101, determining the operation corresponding to the operation instruction may specifically include step S1011 (not shown in the figure) and step S1012 (not shown in the figure), wherein,
in step S1011, when the operation instruction triggered by the first user is detected, the click operation triggered by the first user is responded, and the buried point data packet is obtained.
For the embodiment of the application, when the electronic device detects the first user's click operation on the smart screen, it acquires the buried point data packet of the button currently clicked. For example, when the first user clicks "begin explanation" on the smart screen, the electronic device detects the click triggered by the first user and obtains the corresponding buried point data packet.
Furthermore, the buried point data may be stored temporarily in a buried point queue; the data stored in the queue form the queue of packets to be sent. The electronic device can then fetch the buried point data packets from the queue, which eases its load when processing them.
Step S1012, based on the buried point data packet, determines a buried point event and an operation corresponding to the buried point event.
For the embodiment of the application, the electronic device can perform buried point analysis on the data packet and determine the buried point event, where buried point events include starting recording, pausing recording, stopping recording, and the like, so as to determine the operation corresponding to the event. Data burying ("event tracking") is a commonly used data-collection method: several code segments are implanted in advance, and the corresponding function is realized when the electronic device runs each segment.
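Steps S1011 and S1012, together with the buried point queue mentioned above, can be sketched as buffering packets and then resolving each to an event and its operation. Packet fields and event names are illustrative assumptions.

```python
# Hypothetical sketch of steps S1011-S1012: buried point packets are buffered
# in a queue, then each is resolved to a (event, operation) pair. The packet
# format and the event names are assumptions for illustration.
from collections import deque

EVENT_TO_OPERATION = {
    "begin_explanation": "start recording",
    "pause_explanation": "pause recording",
    "finish_explanation": "stop recording",
}

buried_point_queue = deque()

def enqueue_packet(packet: dict) -> None:
    """Temporarily store a buried point packet in the queue to be processed."""
    buried_point_queue.append(packet)

def drain_and_resolve():
    """Resolve every queued packet to its buried point event and operation."""
    resolved = []
    while buried_point_queue:
        packet = buried_point_queue.popleft()
        event = packet["event"]
        resolved.append((event, EVENT_TO_OPERATION.get(event)))
    return resolved
```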
Specifically, in the embodiment of the present application, the acquiring of the video and the problem information corresponding to the video in step S102 based on the operation corresponding to the operation instruction may specifically include step S1021 (not shown in the figure) and step S1022 (not shown in the figure), wherein,
step S1021, when it is detected that the operation corresponding to the operation instruction includes a trigger command, recording a time point.
For the embodiment of the present application, an operation instruction triggered by the first user may correspond to a trigger command such as "start recording", or to a non-trigger command such as "next page". If the electronic device detects that the operation instruction corresponds to a trigger command, it records the current time point. For example, when the instruction triggered by the first user is the non-trigger command "next page", i.e. the first user advances to the next slide, no time point needs to be recorded.
In step S1022, based on the time point, the video and the exercise information corresponding to the video are obtained.
For the embodiment of the application, the electronic device can record a plurality of time points of the first user at the teaching end, determine the time point corresponding to the operation of "start recording" as the entry time point, determine the time point corresponding to the operation of "end recording" as the exit time point, and determine the video between the entry time point and the exit time point as the explanation video of the problem.
For example, the entry time point is 14:23:54 and the exit time point is 14:27:35; the video segment between 14:23:54 and 14:27:35 is determined as the explanation video of the problem.
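The entry/exit arithmetic in this example can be sketched as follows; the wall-clock string format and the helper name are illustrative:

```python
from datetime import datetime

def clip_duration(entry: str, exit_: str) -> float:
    """Seconds between the entry time point ("start recording") and the
    exit time point ("end recording"), both given as HH:MM:SS strings."""
    fmt = "%H:%M:%S"
    return (datetime.strptime(exit_, fmt) - datetime.strptime(entry, fmt)).total_seconds()

# The explanation video covers this span of the full recording.
seconds = clip_duration("14:23:54", "14:27:35")  # 221 seconds
```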
Further, in the embodiment of the present application, step S1022 may be followed by: step S10231 (not shown), step S10232 (not shown), and step S10233 (not shown), wherein,
and step S10231, performing semantic recognition on the video to obtain a semantic recognition result.
For the embodiment of the application, semantic recognition is performed on the video as follows: the electronic device extracts the audio from the video, converts the audio into text, and then performs semantic recognition on the text. Semantic recognition of the text consists of preprocessing, feature extraction, and model input. Preprocessing the text: word segmentation is performed, a sentence is split into several parts, and the parts are matched against a dictionary. Feature extraction: features can be extracted based on a TF-IDF (Term Frequency-Inverse Document Frequency) model. Model input: the text can be fed into a pre-trained FastText model to extract the key information in the text.
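As a rough, self-contained sketch of the TF-IDF feature-extraction step only (a production pipeline would use a real word segmenter and the FastText model mentioned above; all names here are illustrative):

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for pre-segmented documents (lists of tokens):
    term frequency scaled by inverse document frequency."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency of each token
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return weights

docs = [["right", "triangle", "hypotenuse"], ["right", "angle", "sum"]]
w = tf_idf(docs)  # "right" occurs in every document, so its weight is 0
```

Tokens appearing in every transcript carry no discriminative weight, which is what makes the measure useful for matching a video to a specific problem.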
Step S10232, judging whether the video corresponds to the exercise information corresponding to the video based on the semantic recognition result.
For the embodiment of the application, based on the semantic recognition result, the electronic device determines whether the current video corresponds to the problem, that is, whether the video is an explanation of the problem. If the video corresponds to the problem, it is further judged whether the video is a complete explanation video of the problem.
For example, question 21 has sub-questions 21.(1), 21.(2) and 21.(3); based on the semantic recognition result, the electronic device compares the recognition result with the question content to determine whether the video explains all the content of question 21.
And step S10233, if the video does not correspond to the exercise information corresponding to the video, preprocessing the video.
For the embodiment of the application, preprocessing the video includes clipping and merging the video. A problem may include multiple sub-questions: the first user may first explain the first sub-question, then another problem, and then the second sub-question of the original problem, or a second user may ask a question in between, so the recorded videos may need to be clipped or merged.
If the video contains questions 21, 22 and 23, the video is clipped and segmented, that is, the explanation videos of questions 21, 22 and 23 are clipped out according to the semantic recognition result; if one video includes the explanation of question 21.(1) and the next video includes the explanations of questions 21.(2) and 21.(3), the three sub-question explanation videos of question 21 are merged.
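The clipping-and-merging rule above can be sketched as grouping recognized segments by their major question number; the segment representation is an assumption:

```python
from collections import defaultdict

def group_segments(segments):
    """Group (question_id, start_s, end_s) segments by major question,
    so 21.(1) and 21.(2) merge under question 21 while 22 stays apart."""
    merged = defaultdict(list)
    for qid, start, end in segments:
        major = qid.split(".")[0]  # "21.(1)" -> "21"
        merged[major].append((start, end))
    return dict(merged)

segs = [("21.(1)", 0, 90), ("22", 90, 200), ("21.(2)", 200, 260)]
by_question = group_segments(segs)
```

Each group's intervals then drive the actual clip/merge of the underlying recording.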
Further, during the explanation by the first user, the first user may switch between the problem-explanation interface and the courseware interface. When preprocessing the video, the electronic device records a time point whenever it detects a "start explanation" instruction, a switch-to-courseware instruction, or a return-to-explanation instruction triggered by the first user, and clips the video based on the recorded time points, that is, deletes the portions in which the first user switched to the courseware, to obtain the problem explanation video.
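Deleting the courseware-switch portions amounts to interval subtraction on the recording timeline; a hedged sketch, with the interval representation assumed:

```python
def remove_intervals(clip, cuts):
    """Subtract courseware-switch intervals from an explanation clip.

    `clip` is a (start, end) pair in seconds; `cuts` are the intervals
    between a switch-to-courseware time point and the matching
    return-to-explanation time point. Returns the kept sub-intervals."""
    pieces = [clip]
    for c0, c1 in sorted(cuts):
        kept = []
        for p0, p1 in pieces:
            if c1 <= p0 or c0 >= p1:   # cut does not touch this piece
                kept.append((p0, p1))
                continue
            if p0 < c0:
                kept.append((p0, c0))  # keep the part before the cut
            if c1 < p1:
                kept.append((c1, p1))  # keep the part after the cut
        pieces = kept
    return pieces

kept = remove_intervals((0, 300), [(60, 120)])  # -> [(0, 60), (120, 300)]
```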
In a possible implementation manner of the embodiment of the present application, the method may further include: step S102a (not shown), step S102b (not shown), and step S102c (not shown), wherein step S102a (not shown), step S102b (not shown), and step S102c (not shown) can be performed after step S102 (not shown), wherein,
Step S102a, performing target recognition on the video.
The object of the target recognition is the first user.
For the embodiment of the application, target recognition is performed on the video as follows: frames are extracted from the video, and target recognition is performed on the frame images. To perform target recognition on a frame image, the frame image can be input into a pre-trained neural network model; photos of the first user are input in advance to train the neural network model so that it can recognize the target object in the video.
Target recognition on the frame images can also classify and recognize targets through a BoW (Bag of Words) model. In this target recognition method, the image is treated as a document and the features in the image as the words of the document, and target recognition is performed on the image accordingly.
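The image-as-document analogy can be sketched with a toy Bag-of-Words histogram; real systems quantize SIFT/ORB descriptors against a k-means codebook, so the 1-D descriptors here are a deliberate simplification:

```python
from collections import Counter

def bow_histogram(descriptors, codebook):
    """Assign each local feature to its nearest codebook "word" and
    count occurrences, producing the image's word histogram."""
    def nearest(d):
        return min(range(len(codebook)), key=lambda i: abs(codebook[i] - d))
    return Counter(nearest(d) for d in descriptors)

hist = bow_histogram([0.1, 0.9, 0.95], codebook=[0.0, 1.0])
```

The resulting histogram is the fixed-length feature vector a classifier consumes, exactly as word counts represent a text document.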
Step S102b, tracking the recognized target object to obtain a tracking result.
For the embodiment of the application, the recognized target object is tracked as follows: the electronic device may perform target tracking through TLD (Tracking-Learning-Detection), which establishes an initial detection model at the position where the target first appears and updates the model in subsequent frames to adapt to changes of the target, thereby performing target tracking.
In step S102c, a key image is determined based on the video and the tracking result.
For the embodiment of the application, the electronic device determines an image that displays the question completely and clearly as the key image. During teaching, the body of the first user may block the picture of the test question while explaining it; by tracking the recognized target object, a frame image in which the first user does not block the test question is determined from the multiple frame images as the key image. After the key image is determined, it can be used as the cover image of the video, and it can also be sent to the second user together with the video.
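Selecting the least-occluded frame can be sketched as minimizing the overlap between the tracked user box and the question region; the box format and function names are assumptions:

```python
def overlap_area(a, b):
    """Intersection area of two (x1, y1, x2, y2) boxes; 0 if disjoint."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def pick_key_frame(user_boxes, question_box):
    """Pick the frame whose tracked user box (e.g. from the TLD result)
    occludes the question region least."""
    return min(user_boxes, key=lambda i: overlap_area(user_boxes[i], question_box))

boxes = {0: (0, 0, 50, 100), 1: (60, 0, 110, 100)}
key_frame = pick_key_frame(boxes, question_box=(0, 0, 40, 100))  # frame 1
```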
In a possible implementation manner of the embodiment of the present application, the method may further include: step S102c1 (not shown) and step S102c2 (not shown), wherein step S102c1 (not shown) and step S102c2 (not shown) can be performed after step S102c (not shown), wherein,
step S102c1, determining relevant knowledge points based on the problem information corresponding to the video.
For the embodiment of the application, the electronic equipment analyzes the exercise information based on the exercise information corresponding to the video, can extract keywords from the exercise information, and matches the extracted keywords with the knowledge point library to obtain the related knowledge points of the exercise information.
For example, the problem concerns the Pythagorean theorem; the electronic device analyzes the problem information to obtain its keywords, such as right triangle, legs, and hypotenuse length, and matches the keywords with the knowledge point library to obtain the knowledge points related to the problem, such as the Pythagorean theorem.
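The keyword-to-knowledge-point matching can be sketched as a lookup against trigger words; the library contents and all names are illustrative assumptions:

```python
def match_knowledge_points(keywords, knowledge_base):
    """Return the knowledge points whose trigger words intersect the
    keywords extracted from the problem information."""
    return [point for point, triggers in knowledge_base.items()
            if any(k in triggers for k in keywords)]

kb = {
    "Pythagorean theorem": {"right triangle", "leg", "hypotenuse length"},
    "quadratic formula": {"discriminant", "roots"},
}
points = match_knowledge_points(["right triangle", "hypotenuse length"], kb)
```

A production system would match fuzzily (synonyms, embeddings) rather than by exact string membership.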
Step S102c2, note-marking the key images based on the relevant knowledge points.
In the embodiment of the application, in the process of the first user explaining the problem, the related knowledge points are generally explained and the parameters of the problem are substituted into the formula to obtain the result; the electronic device marks the images containing the knowledge points related to the problem.
For example, the key images may be marked by underlining, highlighting in different colors, or bolding; the electronic device may underline the knowledge points in the key images so that the second user can quickly learn the knowledge points associated with the problem.
In a possible implementation manner of the embodiment of the present application, the method may further include: step S105 (not shown), step S106 (not shown), and step S107 (not shown), wherein step S105 (not shown), step S106 (not shown), and step S107 (not shown) may be performed after step S104 (not shown), wherein,
step S105, when the instruction of finishing watching the video triggered by the second user is detected, acquiring the relevant exercises of the exercise information corresponding to the video.
For the embodiment of the application, after the electronic device sends the video and the exercise information to the second user, it prompts the second user to watch the video repeatedly; when the electronic device detects the finish-watching instruction triggered by the second user, it obtains problems with, for example, the same knowledge points, the same difficulty, the same region, or the same year, as the related problems.
And step S106, sending the relevant exercises to the second user, and acquiring a second answering result of the second user.
And the second answering result is the answering result of the second user based on the relevant exercises.
For the embodiment of the application, the related problems are sent to the second user to check whether the second user has truly mastered the question type and the knowledge points corresponding to the problem. A preset condition may be set in advance; for example, when the correct-answer rate of the second user is greater than or equal to 90%, the second user is considered to have mastered the question type and the knowledge points related to the problem, and no longer needs to be prompted to watch the explanation video.
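The preset 90% condition is a simple threshold check; a sketch in which the function name is an assumption:

```python
def has_mastered(correct: int, total: int, threshold: float = 0.9) -> bool:
    """Preset condition from the text: a correct-answer rate of at
    least 90% on the related problems means the second user has
    mastered the question type and knowledge points."""
    return total > 0 and correct / total >= threshold

# 9 of 10 related problems correct: no further viewing prompt needed.
ok = has_mastered(9, 10)
```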
Further, the foregoing embodiments have introduced the automatic video recording method through a flow of steps; the following embodiment introduces an automatic video recording apparatus from the perspective of the apparatus structure, as described in detail below:
an embodiment of the present application provides an automatic video recording device, as shown in fig. 2, the automatic video recording device 20 may specifically include:
the first determining module 201 is configured to determine, when an operation instruction triggered by a first user is detected, an operation corresponding to the operation instruction, where the operation includes starting recording, pausing recording, and stopping recording;
the first obtaining module 202 is configured to obtain a video and exercise information corresponding to the video based on an operation corresponding to the operation instruction;
the second obtaining module 203 is configured to obtain a response result of the second user based on the exercise information corresponding to the video;
a second determining module 204, configured to determine second user name list information based on the exercise information corresponding to the video and the answer result of the second user;
a sending module 205, configured to send a video and exercise information corresponding to the video to a second user corresponding to the second user name list information based on the second user name list information;
in another possible implementation manner of the embodiment of the application, when the first determining module 201 determines, when detecting an operation instruction triggered by a first user, an operation corresponding to the operation instruction, the first determining module is specifically configured to:
responding to a click operation triggered by a first user, and acquiring a buried point data packet;
and determining a buried point event and an operation corresponding to the buried point event based on the buried point data packet.
In another possible implementation manner of the embodiment of the application, when the first obtaining module 202 obtains the video and the exercise information corresponding to the video based on the operation corresponding to the operation instruction, it is specifically configured to:
when detecting that the operation corresponding to the operation instruction comprises a trigger command, recording a time point;
acquiring video and exercise information based on the time points;
based on the video and the problem information, the video is preprocessed.
In another possible implementation manner of the embodiment of the present application, when the first obtaining module 202 preprocesses a video based on the video and the problem information, it is specifically configured to:
performing semantic recognition on the video to obtain a semantic recognition result;
judging whether the video corresponds to the exercise information or not based on the semantic recognition result;
and if the video does not correspond to the exercise information, preprocessing the video.
In another possible implementation manner of the embodiment of the present application, the apparatus 20 further includes: an object recognition module, an obtaining module, and a third determining module, wherein,
the target identification module is used for carrying out target identification on the video, and an object of the target identification is a first user;
the obtaining module is used for tracking the object identified by the target to obtain a tracking result;
and the third determining module is used for determining the key image based on the video and the tracking result.
In another possible implementation manner of the embodiment of the present application, the apparatus 20 further includes: a fourth determination module and a marking module, wherein,
the fourth determining module is used for determining related knowledge points based on the exercise information;
and the marking module is used for note marking on the key images based on the relevant knowledge points.
In another possible implementation manner of the embodiment of the present application, the apparatus 20 further includes: a third obtaining module and a fourth obtaining module, wherein,
the third acquisition module is used for acquiring the relevant exercises of the exercise information when detecting a video watching ending instruction triggered by the second user;
and the fourth obtaining module is used for sending the related exercises to the second user and obtaining a second answering result of the second user, wherein the second answering result is an answering result of the second user based on the related exercises.
By adopting the technical scheme, the first determining module can detect the operation instruction of the first user and analyze it, thereby determining the operation corresponding to the operation instruction. The first acquisition module acquires the corresponding video and exercise information based on the operation. The second acquisition module acquires the first answering result of the second user based on the exercise information and analyzes it, and the second determining module determines the list of users who answered incorrectly as the second user name list information. The sending module then sends the video and exercise information to the second users in that list. Automatic recording of explanation videos is thereby realized, the step of the first user editing videos after class is omitted, and the explanation videos are sent in a targeted manner, reducing the possibility of wasting teaching resources.
Further, it should be noted that: the first determining module 201, the second determining module 204, the third determining module, and the fourth determining module may be the same determining module, may also be different determining modules, may also be partially the same determining module, and are not limited in this embodiment of the present application. The first obtaining module 202, the second obtaining module 203, the third obtaining module, and the fourth obtaining module may be the same obtaining module, may also be different obtaining modules, and may also be partially the same obtaining module, which is not limited in this embodiment of the application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
In an embodiment of the present application, there is also provided an electronic device, as shown in fig. 3, where the electronic device 30 shown in fig. 3 includes: a processor 301 and a memory 303. Wherein processor 301 is coupled to memory 303, such as via bus 302. Optionally, the electronic device 30 may also include a transceiver 304. It should be noted that the transceiver 304 is not limited to one in practical applications, and the structure of the electronic device 30 is not limited to the embodiment of the present application.
The Processor 301 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 301 may also be a combination of computing functions, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
Bus 302 may include a path that transfers information between the above components. The bus 302 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 302 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
The Memory 303 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), a magnetic Disc storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The memory 303 is used for storing application program codes for executing the scheme of the application, and the processor 301 controls the execution. The processor 301 is configured to execute application program code stored in the memory 303 to implement the aspects illustrated in the foregoing method embodiments.
Among them, electronic devices include but are not limited to: a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a car terminal (e.g., car navigation terminal), etc., and a fixed terminal such as a digital TV, a desktop computer, etc., may also be a server, etc. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
The present application provides a computer-readable storage medium on which a computer program is stored; when the program runs on a computer, the computer can execute the corresponding content in the foregoing method embodiments. Compared with the prior art, the embodiments of the present application can detect the operation instruction of the first user and analyze it to determine the corresponding operation, acquire the corresponding video based on the operation, acquire and analyze the first answering result of the second user based on the exercise information, determine the list of users who answered incorrectly as the second user name list information, and send the video and exercise information to the second users in that list. Automatic recording of explanation videos is thereby realized, the step of the first user editing videos after class is omitted, and the explanation videos are sent in a targeted manner, reducing the possibility of wasting teaching resources.
It should be understood that although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not performed in a strictly limited order and may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The foregoing is only a few embodiments of the present application and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present application, and that these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. An automatic video recording method, comprising:
when an operation instruction triggered by a first user is detected, determining operation corresponding to the operation instruction, wherein the operation comprises starting recording, suspending recording and stopping recording;
acquiring a video and exercise information corresponding to the video based on the operation corresponding to the operation instruction;
acquiring a first answering result of a second user based on the exercise information corresponding to the video, and determining second user name list information based on the exercise information corresponding to the video and the first answering result of the second user;
and sending the video and exercise information corresponding to the video to a second user corresponding to the second user name list information based on the second user name list information.
2. The method for automatically recording the video according to claim 1, wherein when the operation instruction triggered by the first user is detected, determining the operation corresponding to the operation instruction comprises:
when an operation instruction triggered by a first user is detected, responding to a click operation triggered by the first user, and acquiring a buried point data packet;
and determining a buried point event and an operation corresponding to the buried point event based on the buried point data packet.
3. The method for automatically recording the video according to claim 1, wherein the obtaining the video and the exercise information corresponding to the video based on the operation corresponding to the operation instruction comprises:
when detecting that the operation corresponding to the operation instruction comprises a trigger command, recording a time point;
and acquiring a video and exercise information corresponding to the video based on the time point.
4. The method for automatically recording a video according to claim 3, wherein the method for acquiring a video and exercise information corresponding to the video based on the time point further comprises:
performing semantic recognition on the video to obtain a semantic recognition result;
judging whether the video corresponds to the exercise information corresponding to the video or not based on the semantic recognition result;
and if the video does not correspond to the exercise information corresponding to the video, preprocessing the video.
5. The method for automatically recording the video according to claim 1, wherein the method for obtaining the video and the exercise information corresponding to the video based on the operation corresponding to the operation instruction further comprises:
performing target recognition on the video, wherein a target of the target recognition is a first user;
tracking the object identified by the target to obtain a tracking result;
and determining a key image based on the video and the tracking result.
6. The method for automatically recording video according to claim 5, wherein the determining key images based on the video and the tracking result further comprises:
determining related knowledge points based on the exercise information corresponding to the video;
and note marking is carried out on the key images based on the related knowledge points.
7. The method for automatically recording a video according to claim 1, wherein the sending the video and the exercise information corresponding to the video to a second user corresponding to the second user's name list information based on the second user's name list information further comprises:
when a video watching ending instruction triggered by a second user is detected, acquiring a relevant exercise of exercise information corresponding to the video;
and sending the related exercises to a second user, and acquiring a second answering result of the second user, wherein the second answering result is an answering result of the second user based on the related exercises.
8. An automatic video recording apparatus, comprising:
the first determining module is used for determining operation corresponding to an operation instruction when the operation instruction triggered by a first user is detected, wherein the operation comprises starting recording, pausing recording and stopping recording;
the first acquisition module is used for acquiring a video and exercise information corresponding to the video based on the operation corresponding to the operation instruction;
the second acquisition module is used for acquiring a response result of a second user based on the exercise information corresponding to the video;
the second determining module is used for determining second user name list information based on the exercise information corresponding to the video and the first answering result of the second user;
and the sending module is used for sending the video and the exercise information corresponding to the video to a second user corresponding to the second user name list information based on the second user name list information.
9. An electronic device, comprising:
at least one processor;
a memory;
at least one application, wherein the at least one application is stored in the memory and configured to be executed by the at least one processor, the at least one application configured to: performing a method of automatic video recording according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements an automatic video recording method according to any one of claims 1 to 7.
CN202210370724.XA 2022-04-11 2022-04-11 Automatic video recording method and device, electronic equipment and storage medium Pending CN114466150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210370724.XA CN114466150A (en) 2022-04-11 2022-04-11 Automatic video recording method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210370724.XA CN114466150A (en) 2022-04-11 2022-04-11 Automatic video recording method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114466150A true CN114466150A (en) 2022-05-10

Family

ID=81418179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210370724.XA Pending CN114466150A (en) 2022-04-11 2022-04-11 Automatic video recording method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114466150A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140272887A1 (en) * 2013-03-12 2014-09-18 2U, Inc. Interactive asynchronous education
CN111405381A (en) * 2020-04-17 2020-07-10 深圳市即构科技有限公司 Online video playing method, electronic device and computer readable storage medium
CN111522992A (en) * 2020-04-16 2020-08-11 广东小天才科技有限公司 Method, device and equipment for putting questions into storage and storage medium
CN113301371A (en) * 2021-05-20 2021-08-24 读书郎教育科技有限公司 System and method for associating video clips of live course exercises with knowledge points


Similar Documents

Publication Publication Date Title
CN109766412B (en) Learning content acquisition method based on image recognition and electronic equipment
CN107291343B (en) Note recording method, device and computer readable storage medium
CN108875785B (en) Attention degree detection method and device based on behavior feature comparison
CN109446315B (en) Question solving auxiliary method and question solving auxiliary client
CN109656465B (en) Content acquisition method applied to family education equipment and family education equipment
CN111027537B (en) Question searching method and electronic equipment
CN108877334B (en) Voice question searching method and electronic equipment
CN110347866B (en) Information processing method, information processing device, storage medium and electronic equipment
CN111522970A (en) Exercise recommendation method, exercise recommendation device, exercise recommendation equipment and storage medium
US10089898B2 (en) Information processing device, control method therefor, and computer program
Monteiro et al. Detecting and identifying sign languages through visual features
CN111192170B (en) Question pushing method, device, equipment and computer readable storage medium
CN108733718A (en) Display methods, device and the display device for search result of search result
CN105930487A (en) Topic search method and apparatus applied to mobile terminal
CN111079489B (en) Content identification method and electronic equipment
CN114466150A (en) Automatic video recording method and device, electronic equipment and storage medium
CN111078746B (en) Dictation content determining method and electronic equipment
CN111523343B (en) Reading interaction method, device, equipment, server and storage medium
CN109783679B (en) Learning auxiliary method and learning equipment
CN115687630A (en) Method and device for generating course learning report
CN110895924B (en) Method and device for reading document content aloud, electronic equipment and readable storage medium
CN112116505A (en) Anti-cheating online competition system and method
CN111091035A (en) Subject identification method and electronic equipment
CN111091821A (en) Control method based on voice recognition and terminal equipment
CN111090791A (en) Content query method based on double screens and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220510