CN111429768B - Knowledge point splitting and integrating method and system based on teaching recording and broadcasting - Google Patents


Info

Publication number
CN111429768B
CN111429768B (application CN202010187807.6A)
Authority
CN
China
Prior art keywords
video
teaching
splitting
knowledge
recording
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010187807.6A
Other languages
Chinese (zh)
Other versions
CN111429768A (en)
Inventor
石雷
洪张明
金颖
张晓娴
董满生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui I Xue Tang Education Technology Co ltd
Original Assignee
Anhui I Xue Tang Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui I Xue Tang Education Technology Co ltd filed Critical Anhui I Xue Tang Education Technology Co ltd
Priority to CN202010187807.6A priority Critical patent/CN111429768B/en
Publication of CN111429768A publication Critical patent/CN111429768A/en
Application granted granted Critical
Publication of CN111429768B publication Critical patent/CN111429768B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a knowledge point splitting and integrating method and system based on teaching recording and broadcasting. The method comprises the following steps: acquiring identity information and confirming it; starting a recording and broadcasting system, recording the teaching process, and generating a recorded video; and splitting and/or merging the recorded video according to knowledge points to generate knowledge point teaching videos. The invention can automatically record a teacher's lesson on video and split and integrate the recorded video by knowledge point, so that students can quickly find, among massive videos, the content they want to learn by knowledge point, improving their learning efficiency.

Description

Knowledge point splitting and integrating method and system based on teaching recording and broadcasting
Technical Field
The invention belongs to the field of intelligent video processing, and particularly relates to a knowledge point splitting and integrating method and system based on teaching recording and broadcasting.
Background
At present, teachers generate a large number of teaching videos in the course of teaching. The same course taught by different teachers may yield multiple videos of the same lesson, and each video contains several knowledge points. However, these videos cover entire class sessions. When a student wants to learn a particular knowledge point from a course video, a student unfamiliar with the course structure has to search through all the videos; even a student who knows which session contains the knowledge point still has to drag through that session's video to find it, which hurts learning efficiency. Moreover, a knowledge point can appear in many courses, that is, many courses contain the same knowledge points; if students are unaware of those other courses, they cannot find the corresponding videos when learning.
Disclosure of Invention
Aiming at the problems, the invention provides a knowledge point splitting and integrating method based on teaching recording and broadcasting, which comprises the following steps:
acquiring identity information and confirming the identity information;
starting a recording and broadcasting system, recording a teaching process and generating a recorded video;
and splitting and/or merging the recorded video according to the knowledge points to generate a knowledge point teaching video.
Further, according to the knowledge points, splitting and/or merging the recorded video to generate a knowledge point teaching video, comprising:
pre-splitting a recorded video, extracting key words in the video, and generating a split video group;
extracting semantic information in a teaching calendar;
finding out a corresponding split video group according to the semantic information; and determining knowledge points of the video according to the semantic information and the split video group to generate a knowledge point teaching video.
Further, extracting the keywords in the video comprises:
converting the voice signals in the video into text, extracting keywords from the text using a text processing technique, and merging the videos according to the keywords.
Further, extracting keywords from the text with the text processing technique comprises:
performing word segmentation on the text and removing stop words.
Further, the method further comprises:
storing the videos centrally according to a knowledge graph and/or the knowledge points.
The invention also provides a knowledge point splitting and integrating system based on teaching recording and broadcasting, which comprises:
the identification module is used for acquiring the identity information and confirming the identity information;
the recording module is used for starting the recording and broadcasting system, recording the teacher's teaching process, and generating a recorded video;
and the intelligent module is used for splitting and/or combining the recorded video according to the knowledge points to generate a knowledge point teaching video.
Further, the intelligent module comprises:
the splitting component is used for pre-splitting the recorded video;
the extraction component is used for extracting key words in the video and generating a split video group;
the semantic component is used for extracting semantic information in the teaching calendar;
the determining component is used for finding the corresponding split video group according to the semantic information; and determining knowledge points of the video according to the semantic information and the split video group to generate a knowledge point teaching video.
Further, the extraction component comprises:
a conversion unit for converting a voice signal in a video into a text;
an extraction unit configured to extract keywords in the text using a text processing technique;
and the merging unit is used for merging the videos according to the keywords.
Further, extracting keywords from the text with the text processing technique comprises:
performing word segmentation on the text and removing stop words.
Further, the system further comprises:
a storage module used for storing the videos centrally according to a knowledge graph and/or the knowledge points.
The invention can automatically record a teacher's lesson on video and split and integrate the recorded video by knowledge point, so that students can quickly find, among massive videos, the content they want to learn by knowledge point, improving their learning efficiency.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of a knowledge point splitting and integrating method based on teaching recording and broadcasting according to an embodiment of the invention;
fig. 2 shows a structure diagram of a knowledge point splitting and integrating system based on teaching recording and broadcasting according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a knowledge point splitting and integrating method based on teaching recording and broadcasting, which can adopt, but is not limited to, the process shown in fig. 1.
The process comprises the following steps:
acquiring identity information and confirming the identity information;
starting a recording and broadcasting system, recording a teaching process and generating a recorded video;
splitting and/or merging the recorded video according to the knowledge points to generate a knowledge point teaching video;
and storing the videos centrally according to a knowledge graph and/or the knowledge points.
Specifically, the identity information is acquired and confirmed.
Before the teaching process is recorded, the teacher's identity needs to be acquired and confirmed, which can be done using, but not limited to, the following methods:
Method 1.1: obtaining the identity from the teaching calendar.
Specifically, the teaching calendar is the concrete schedule by which a teacher organizes course teaching, and is an important basis for conducting teaching and monitoring its quality. The teaching calendar contains the teacher's teaching information. Illustratively: course A is taught by teacher Zhang San, 20 class hours in total, in periods 1-2 on Tuesday mornings, in classroom 101 during weeks 1-10; the first lesson covers chapter one, the second lesson covers chapter two, and so on. Course B is taught jointly by teachers Li Si and Wang Wu, 8 class hours in total, one session per week in periods 5-6 on Friday afternoons; in week 7 it is taught in machine room 1, and in week 8 in laboratory 2; the first lesson, covering chapter one, is taught by Li Si, and the second lesson, covering chapter two, by Wang Wu; and so on. All of this is reflected in the teaching calendar. The content of teaching calendars differs between schools: some schools' calendars also include the specific teaching arrangement of every lesson, and even more teaching information such as related homework content and grading standards; other schools' calendars only record which teacher teaches which course on which day, with the specific teaching content of each lesson recorded in teaching plans, teaching implementation plans, syllabuses, and the like. In this invention, any document containing course teaching information belongs to the teaching calendar.
Illustratively, the teaching calendar contains teaching plans and teaching implementation plans. A teaching plan is a practical teaching document in which, taking a class hour or a topic as the unit, the teacher concretely designs and arranges the teaching content, teaching steps, teaching methods, and so on, according to the course standard, the syllabus, the requirements of the textbook, and the actual situation of the students, so that teaching activities can be carried out smoothly and effectively. A teaching plan includes analysis of the teaching material, analysis of the students, teaching objectives, difficult points, knowledge points, teaching preparation, the teaching process, exercise design, and so on. A teaching implementation plan refers to the guiding document formulated by education administration departments at all levels, schools, and teachers for the teaching they implement, in order to reach an expected teaching goal; it is a written description of how teaching is implemented on the basis of a teaching design scheme. Any document serving the function of a teaching plan or teaching implementation plan, regardless of its name, falls within the scope of the teaching plans and teaching implementation plans described in this disclosure.
Further, the teaching implementation plan specifies when, in which classroom, to which students, and which course a teacher teaches; the teaching plan specifies the specific content the teacher teaches in a given class.
When a course is taught by several teachers, the teaching calendar also includes the course's specific teaching time arrangement, i.e., which lessons of the course are taught by which teacher. The teaching information in the teaching calendar is provided by the teachers before the term starts and compiled by the teaching affairs department. Teachers must teach according to the teaching calendar, that is, give the specified class in the specified classroom at the specified time. An exemplary teaching calendar entry for Zhang San is: week 1, periods 1-2 on Monday morning, classroom 3, teaching mathematics to class 3; the specific content taught is this lesson's teaching plan. This means that in week 1, on Monday morning, in periods 1-2, Zhang San teaches mathematics to class 3 in classroom 3, and the teaching content is the content of the teaching plan.
Specifically, the teacher's identity can be obtained by looking up the teaching calendar by teaching time and teaching place. Illustratively, Zhang San and Li Si both teach mathematics to class 3, at different times. Zhang San chose periods 1-2 on Monday morning in classroom 3; Li Si chose periods 5-6 on Tuesday afternoon in classroom 4. The teaching calendar records both teachers' schedules. If a teacher gives a class in classroom 4 during periods 5-6 on Tuesday afternoon, the system can automatically determine and confirm, from the teaching calendar, that this teacher is Li Si and that he is teaching mathematics to class 3.
Method 1.2: acquisition modes such as face recognition, password recognition, and ID card recognition can be adopted.
Specifically, the teacher enters the classroom before class and logs into the system automatically or manually. Automatic login includes, but is not limited to, face recognition and voice recognition; manual login includes, but is not limited to, password recognition, fingerprint recognition, ID card recognition, and similar technologies. The teacher's identity is acquired and confirmed from the login information.
Further, to prevent unauthorized lessons and class substitution, methods 1.1 and 1.2 can be combined to acquire and confirm the teacher's identity: method 1.1 determines which teacher should appear in the classroom, and method 1.2 verifies the teacher's identity. In the exemplary teaching calendar, Li Si teaches in classroom 4 during periods 5-6 on Tuesday afternoon. Using method 1.1, if a teacher gives a class in classroom 4 during periods 5-6 on Tuesday afternoon, the system concludes that the teacher should be Li Si. If method 1.2 then identifies the teacher as Zhang San, the system determines that the teacher giving the lesson in that time slot is wrong; if it identifies the teacher as Li Si, the system determines that the teacher is correct.
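The combined check of methods 1.1 and 1.2 can be sketched as follows. The `schedule` mapping and the function names are illustrative assumptions for this sketch, not part of the invention as claimed:

```python
# Hypothetical sketch of the combined identity check (methods 1.1 + 1.2).
# `schedule` stands in for the teaching calendar; the recognized name
# would come from face/voice/password/ID recognition (method 1.2).

def expected_teacher(schedule, weekday, period, classroom):
    """Method 1.1: look up who should be teaching from the teaching calendar."""
    return schedule.get((weekday, period, classroom))

def verify_identity(schedule, weekday, period, classroom, recognized_name):
    """Cross-check the method 1.2 recognition result against the calendar."""
    expected = expected_teacher(schedule, weekday, period, classroom)
    return expected is not None and expected == recognized_name

# Example calendar entry: Li Si teaches in classroom 4, periods 5-6, Tuesday.
schedule = {("Tuesday", "5-6", "classroom 4"): "Li Si"}

print(verify_identity(schedule, "Tuesday", "5-6", "classroom 4", "Li Si"))      # True
print(verify_identity(schedule, "Tuesday", "5-6", "classroom 4", "Zhang San"))  # False
```

In a real deployment the schedule lookup would query the teaching affairs database rather than an in-memory dictionary.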
Specifically, the recording and broadcasting system is started, the teaching process is recorded, and a recorded video is generated.
After the identity information is confirmed and the system judges that the teacher for that time slot is correct, it starts the recording and broadcasting system. The start may be immediate, timed, or action-triggered. Immediate start means the system starts as soon as the teacher's identity is confirmed. Timed start means each lesson has a specific start time in the teaching calendar, and the recording and broadcasting system starts when that time is reached. Action-triggered start means the teacher performs some action, including but not limited to making a specific gesture, making a specific sound, or pressing a button, to start the recording and broadcasting system.
The teacher begins the lesson and the system records the teaching process: the teacher's actions in class, the teacher's speech, and the use of the blackboard. The blackboard may be an ordinary blackboard or whiteboard, a projection screen or display, or an intelligent blackboard such as a nano blackboard, which can be used for writing, projection, or presentation. The teacher's actions, speech, and blackboard use may be recorded together or recorded separately.
Recording stops after teaching ends. The end can be determined using, but not limited to, the following methods:
and 2.1, finishing the timing.
One lesson, there is a specific ending time. When the end time of the lesson is reached, the system can automatically judge that the lesson is ended and stop recording.
Method 2.2: ending on the teacher's instruction.
Recording of a lesson can be ended on the teacher's instruction, which may take, but is not limited to, the following forms: issuing an operating instruction, speaking a command, or making a specific action. After the teacher issues the instruction, the system judges that teaching has ended and stops recording.
Method 2.3: the system automatically judges the end.
Sometimes a teacher runs over time and does not finish the lesson on schedule; sometimes the teacher has students take tests or do exercises in the classroom and is not lecturing. In these cases the system can automatically judge that teaching has ended and stop recording.
A stop-action time threshold is defined. While recording, the system analyzes the teacher's teaching actions, teaching speech, and use of the nano blackboard. If, over a period longer than the stop-action time threshold, the teacher's movements stay below a certain amplitude, the system judges that the teacher has stopped teaching actions. If, over a period longer than the threshold, no teaching speech from the teacher occurs, the system judges that the teacher has stopped speaking. If, over a period longer than the threshold, the content on the blackboard does not change, the system judges that the teacher has stopped using the blackboard. An end-judgment condition is defined as reaching one or more of the following: the teacher has stopped teaching actions, the teacher has stopped speaking, and the teacher has stopped using the nano blackboard. When the condition is met, the system automatically judges that teaching has ended and stops recording. Further, the system records the time at which recording stops; if that time is earlier than the start of the next class, the system enters a dormant state, from which the teacher can restart recording by an action trigger.
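The end-of-lesson judgment can be sketched as follows. The per-second boolean activity samples, the 300-second threshold, and the default that two of the three channels must be idle are illustrative assumptions; the patent leaves the exact condition ("one or more") to the implementer:

```python
# Sketch of the automatic end-of-lesson judgment. Each channel (motion,
# speech, board changes) is modeled as a list of per-second booleans:
# True = activity detected in that second, False = no activity.

STOP_ACTION_THRESHOLD = 300  # seconds of inactivity; the value is an assumption

def idle_seconds(samples):
    """Length of the trailing run of inactive (False) samples."""
    n = 0
    for active in reversed(samples):
        if active:
            break
        n += 1
    return n

def lesson_ended(motion, speech, board, required=2):
    """Judge the lesson ended when at least `required` of the three
    activity channels have been idle longer than the threshold."""
    stopped = sum(
        idle_seconds(channel) > STOP_ACTION_THRESHOLD
        for channel in (motion, speech, board)
    )
    return stopped >= required

# 400 s with no motion, no speech, and no board changes: recording stops.
quiet = [False] * 400
print(lesson_ended(quiet, quiet, quiet))  # True
```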
The recording and broadcasting system records the teaching process and generates a recorded video.
Specifically, according to the knowledge points, splitting and/or merging processing is carried out on the recorded video, and a knowledge point teaching video is generated.
In schools, a large number of teachers give lessons and generate a large number of recorded videos.
Specifically, the teaching content of each class differs, and so do the knowledge points it contains. For the same course taught by different teachers, the knowledge points should be approximately the same; different courses may also share some knowledge points. For example, when two teachers teach the same course, their lessons should cover approximately the same knowledge points, including many identical ones such as network reference models; different courses, such as Computer Networks and Network Devices and Network Technologies, contain some of the same knowledge points, such as the OSI network reference model.
Splitting means dividing a video into different segments according to some criterion, so as to separate meaningful entities from the video sequence; in digital video these entities are called video objects. Cutting a video apart by some means or method, as required, and taking the needed parts is called video splitting. Merging means combining several videos into one video according to some rule.
Specifically, after recording finishes, the different recorded videos are split and/or merged according to knowledge points.
The following method splits and/or merges the recorded video of one lesson. The same method can be applied to all recorded videos, completing the splitting and/or merging of all of them and forming the different teaching videos.
A recorded video may be split and/or merged using, but not limited to, the following methods to generate a knowledge point teaching video:
pre-splitting the recorded video by using a voice recognition technology; extracting key words in the video; extracting semantic information in the teaching calendar by using a semantic recognition technology; finding out a corresponding split video group according to the semantic information; and determining knowledge points of the video according to the semantic information and the split video group to generate a knowledge point teaching video.
Specifically, the recorded video is pre-split using a voice recognition technique.
While lecturing, a teacher does not speak continuously: after a sentence or a passage there is a pause of a certain duration, the pause time. A time threshold is set; when a pause exceeds the threshold, the teaching content is considered splittable at that point.
Speech recognition is used to identify the pause times in the recorded video and pre-split it. Pre-splitting a recorded video generates several small videos.
Illustratively, each recorded video has a corresponding video time and time period: the video time is the video's duration, and the time period is the interval during which the video was recorded, with a start time and an end time. Suppose the recorded video of a class has a video time of 45 minutes and a time period of 14:00-14:45. Splitting the recorded video generates several small videos in chronological order: the first small video takes the first time and occupies the first time period; the second small video takes the second time and occupies the second time period; and so on up to the nth small video. The sum of the first through nth times equals 45 minutes; the start time of the first time period is 14:00 and its end time is the start time of the second time period; the end time of the second time period is the start time of the third; and the end time of the nth time period is 14:45.
And defining a video group, wherein the video group is a collection of a plurality of small videos generated by splitting a recorded video.
Illustratively, the video group after the video splitting includes: the first small video, the second small video, and the third small video.
Furthermore, the sentences in the recorded video can be recognized with speech recognition, and the recorded video can be split at every sentence. After splitting, the video can be further preprocessed, for example by clipping out segments irrelevant to the lesson.
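The pause-based pre-splitting can be sketched as follows. The `(start, end, text)` utterance tuples and the 3-second threshold are illustrative assumptions; a real system would obtain the timestamps from the speech recognizer:

```python
# Minimal sketch of pause-based pre-splitting. The recognizer is assumed
# to yield utterances as (start_s, end_s, text) tuples; the recorded
# video is cut wherever the pause between utterances exceeds a threshold.

PAUSE_THRESHOLD = 3.0  # seconds; the value is an illustrative assumption

def pre_split(utterances, threshold=PAUSE_THRESHOLD):
    """Group consecutive utterances into 'small videos', cutting wherever
    the gap between one utterance's end and the next's start exceeds
    the threshold."""
    groups = []
    current = []
    for utt in utterances:
        if current and utt[0] - current[-1][1] > threshold:
            groups.append(current)  # pause too long: close this small video
            current = []
        current.append(utt)
    if current:
        groups.append(current)
    return groups

utterances = [
    (0.0, 40.0, "review of last lesson"),
    (41.0, 300.0, "the TCP connection"),       # 1 s pause: same segment
    (306.0, 600.0, "TCP socket programming"),  # 6 s pause: new segment
]
video_group = pre_split(utterances)
print(len(video_group))  # 2 small videos
```

The list of groups returned here corresponds to the "video group" defined above: each group's first start time and last end time give the small video's time period.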
Specifically, key words in the video are extracted.
The method for extracting keywords from the video is as follows: convert the voice signals in the video into text using speech recognition, extract keywords from the text using text processing techniques, and merge videos according to the keywords.
Speech recognition is the process by which a machine converts speech signals into corresponding text or commands through recognition and understanding. The speech content of every small video in the video group is converted into corresponding text by speech recognition, so that each small video corresponds to a text; the collection of these texts is called a text group.
By way of example, but not limitation, speech recognition may be performed using iFLYTEK speech recognition, Baidu speech recognition, Tencent Cloud ASR, or Huawei Cloud speech recognition.
Specifically, a text processing technique is a technique by which a computer performs a series of processing steps on text, including word segmentation and stop-word removal.
Extracting keywords from text with text processing techniques means performing word segmentation and stop-word removal on each text in the text group, extracting the keywords, and counting the number of occurrences of each keyword in the text.
Word segmentation means cutting a sequence of Chinese characters into individual words. It is the foundation of text mining; once the input Chinese is successfully segmented, a computer can automatically recognize the meaning of a sentence. One family of approaches, also called the mechanical word segmentation method, matches the character string to be analyzed against the entries of a sufficiently large machine dictionary according to some strategy; if a string is found in the dictionary, the match succeeds (a word is recognized).
By way of example, the following methods may be used for word segmentation, but are not limited to:
a word segmentation method based on character string matching, a word segmentation method based on understanding, and a word segmentation method based on statistics.
By way of example, the following tools may be used for word segmentation, but are not limited to:
SCWS, ICTCLAS, HTTPCWS, CC-CEDICT.
Specifically, the text generated by recognizing the video is segmented into words, and duplicate words are removed.
Illustratively, consider the text: "Now class begins; what this lesson teaches is the TCP connection, which includes ...".
Since "TCP", "connection", "socket", and "message" are independent words, ordinary word segmentation methods can recognize them in text. However, some courses involve "TCP socket, TCP connection, TCP message" and "UDP socket, UDP connection, UDP message". If "TCP", "connection", "socket", and "message" are always segmented separately, segmentation cannot yield accurate information for these courses. Therefore "TCP socket, TCP connection, TCP message, UDP socket, UDP connection, UDP message" are added to the dictionary. Likewise, for every course, corresponding special terms are added to the dictionary according to the course content; these special terms are the keywords corresponding to the course content and can be obtained by analyzing the course's knowledge points or from the teaching calendar. Illustratively, suppose a knowledge point is "TCP socket programming". It must then include some or all of the following keywords: "TCP socket programming", "TCP socket", "socket programming", "TCP", "socket", "programming". By comparing this knowledge point with other knowledge points, its keywords can be judged to be "TCP socket" and "programming"; since "programming" is already in the dictionary, only "TCP socket" needs to be added. The dictionary now contains both "TCP socket" and "TCP". When a text containing "TCP socket" is segmented, the resulting words include "TCP socket" rather than "TCP" and "socket"; when a text contains both "TCP socket" and "TCP data", segmentation yields "TCP socket", "TCP", and "data".
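The dictionary-based (mechanical) matching behavior described above can be sketched with a greedy longest-match segmenter. English tokens stand in for the Chinese character sequence, and the dictionary contents are illustrative assumptions:

```python
# Sketch of the string-matching ("mechanical") word segmentation method
# with domain terms added to the dictionary, so that "TCP socket" is
# kept whole instead of being split into "TCP" and "socket".

DICTIONARY = {"TCP socket", "TCP", "socket", "programming", "data", "connection"}

def forward_max_match(tokens, dictionary=DICTIONARY, max_len=3):
    """Greedy longest-match segmentation over a token sequence: at each
    position, try the longest candidate first and fall back to shorter
    ones; a single unknown token passes through unchanged."""
    words, i = [], 0
    while i < len(tokens):
        for size in range(min(max_len, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + size])
            if candidate in dictionary or size == 1:
                words.append(candidate)
                i += size
                break
    return words

print(forward_max_match("TCP socket programming".split()))
# ['TCP socket', 'programming']
print(forward_max_match("TCP data".split()))
# ['TCP', 'data']
```

This reproduces the behavior in the text: with "TCP socket" in the dictionary it is matched whole, while "TCP data" falls back to "TCP" plus "data".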
The result of word segmentation is: "begin | lecturing | now | what | this class | teaches | is | the | TCP connection | which | includes".
Stop words are then removed from the segmented text.
Specifically, stop words are words or phrases that, in information retrieval, are automatically filtered out before or after processing natural language data (or text) in order to save storage space and improve search efficiency; such words are called stop words (Stop Words). Stop words are entered manually rather than generated automatically, and together they form a stop word list.
By way of example, existing stop word lists, such as the HIT stop word list or the Baidu stop word list, may be used but are not required. A new stop word list may also be generated from an existing one.
Illustratively, stop word removal may be performed with, but is not limited to, a stop word filtering tool.
Illustratively, function words such as "the" or "of" have no definite meaning by themselves and become meaningful only within a complete sentence. A word such as "is" appears in almost every text, so searching for it cannot return truly relevant results, does little to narrow the search scope, and lowers search efficiency. Such words are stop words. In classroom teaching there are also words such as "now", "start", "teach", "this lesson", and "includes" that are used frequently in class but cannot be used to identify knowledge points. Such words are found in advance by analysis and added to the stop word list as well.
Removing stop words from the segmentation result "begin lecturing now; what this class teaches is the TCP connection, which includes" yields "TCP connection". A keyword, "TCP connection", has thus been found.
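The stop-word step can be sketched as follows. This is an illustrative Python sketch: the token list is a loose English rendering of the example sentence, and the stop-word list is hypothetical and hand-maintained, as the description requires.

```python
# Hypothetical hand-maintained stop-word list, including classroom filler
# words such as "now" and "teaches" that cannot identify knowledge points.
STOP_WORDS = {"what", "this", "class", "now", "teaches", "is", "that",
              "the", "includes", "start", "lesson"}

def remove_stop_words(tokens, stop_words=STOP_WORDS):
    """Drop every token that appears in the stop-word list."""
    return [t for t in tokens if t.lower() not in stop_words]

tokens = ["what", "this", "class", "now", "teaches", "is", "that",
          "the", "TCP connection", "includes"]
print(remove_stop_words(tokens))  # → ['TCP connection']
```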
All small videos in the video group are converted into text by speech recognition, and the keywords in the text are extracted by text processing. The following data are finally formed: the teaching video of a certain classroom at a certain time is split into several small videos, where the first small video spans the first time period and contains the first keyword (a occurrences), the second keyword (b occurrences), the third keyword (c occurrences), ...; the second small video spans the second time period and contains the second keyword (d occurrences), the fourth keyword (e occurrences), the fifth keyword (f occurrences), ...; and so on.
Specifically, videos are merged according to keywords.
Key keywords are defined as follows. A text contains several keywords, each appearing a different number of times. The keywords are sorted by occurrence count from most to fewest; ties are broken by the keywords' encoding order (illustratively, the encoding order of Chinese characters). A key keyword threshold, a natural number, is preset. The key keywords are the keywords ranked before the threshold position; that is, if the key keyword threshold is N, the key keywords are the top N keywords. Illustratively, with a preset threshold of 3 and the keywords of a text sorted from most to fewest as a, b, c, d, e, f, the key keywords of that text are a, b, and c.
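The key-keyword selection above can be sketched in a few lines. This is an illustrative Python sketch; the counts are hypothetical, and lexicographic order stands in for the character-encoding tie-break.

```python
# Sort keywords by occurrence count (descending), break ties by encoding
# order (lexicographic here), and keep the top N.
def key_keywords(counts, threshold):
    """counts: keyword -> number of occurrences; threshold: the preset N."""
    ordered = sorted(counts, key=lambda k: (-counts[k], k))
    return ordered[:threshold]

counts = {"a": 9, "b": 7, "c": 5, "d": 5, "e": 2, "f": 1}
print(key_keywords(counts, 3))  # → ['a', 'b', 'c']
```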
Videos are merged as follows. After some of the small videos are merged, a new video group is formed; merging all small videos in a video group would reproduce the original recorded video.
A knowledge point time threshold is defined. When a teacher lectures, teaching one knowledge point takes a certain amount of time. When the total duration of several adjacent small videos is less than or equal to the knowledge point time threshold, the keywords and key keywords of those small videos need to be compared. For example, suppose the total duration of the first, second, and third small videos is less than or equal to the knowledge point time threshold; the total duration of the first through fourth small videos exceeds the threshold; and the total duration of the second through fifth small videos is less than or equal to the threshold. Then the first, second, and third small videos need to be compared together; the first through fourth small videos do not; and the second through fifth small videos may be compared together.
A small video combination is defined as the largest set of consecutive small videos that can be compared together. For example, suppose the second through fifth small videos form the largest set that can be compared together; this set is a small video combination. The second through fourth small videos alone are not the largest such set and are therefore not a small video combination, whereas the first through third small videos do form a largest set and are a small video combination. A small video combination contains several small videos arranged in time order. A middle small video is also defined: between any two non-adjacent small videos lie several small videos, which are called the middle small videos.
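Forming small video combinations can be sketched as finding every maximal run of consecutive small videos whose total duration stays within the time threshold. This is an illustrative Python sketch; the durations (in minutes) and the 10-minute threshold are hypothetical.

```python
def combinations(durations, threshold):
    """Return every maximal run of consecutive small videos whose total
    duration is <= the knowledge point time threshold."""
    runs = []
    for i in range(len(durations)):
        total, j = 0, i
        while j < len(durations) and total + durations[j] <= threshold:
            total += durations[j]
            j += 1
        if j - i >= 2:              # a run of at least two small videos
            runs.append((i, j))     # half-open index range [i, j)
    # a combination is a *maximal* run: drop any range contained in another
    return [r for r in runs
            if not any(o != r and o[0] <= r[0] and r[1] <= o[1] for o in runs)]

# With these durations, videos 1-3, 2-5, and 4-6 form three combinations.
print(combinations([4, 2, 3, 4, 1, 5], 10))  # → [(0, 3), (1, 5), (3, 6)]
```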
A keyword count threshold and a key keyword count threshold are defined. For any two small videos in a small video combination, their keywords and key keywords are compared. If the two small videos have the same keywords, the two small videos and the middle small videos between them are merged; likewise if they have the same key keywords. If the number of keywords they share is greater than or equal to the keyword count threshold, the two small videos and the middle small videos are merged; likewise if the number of key keywords they share is greater than or equal to the key keyword count threshold. Whenever a merge occurs, the merged small video replaces the originals, the small video combinations are regenerated, and processing continues.
For example, suppose a recorded video is split into a video group containing the first through sixth small videos. The first, second, and third small videos form the first small video combination; the second through fifth small videos form the second combination; and the fourth through sixth small videos form the third combination. The keywords and key keywords of the small videos in the first combination are compared; since no merging condition is met, nothing is merged. The second combination is compared next. Suppose the third and fifth small videos meet the merging condition; then the third small video, the fifth small video, and the middle small video (the fourth) are merged into a new small video, the fifth+ small video. A new video group is generated, comprising: the first small video, the second small video, the fifth+ small video, and the sixth small video. The small videos in the new group are analyzed to re-form small video combinations, whose keywords and key keywords are compared again; if any are merged, an updated video group is formed. This repeats until none of the small videos in the generated video group can be merged.
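One pass of the merging rule can be sketched as follows. This is an illustrative Python sketch, not the claimed method: only the shared-count conditions are shown, and the keyword sets and thresholds are hypothetical.

```python
def merge_pass(videos, kw_threshold, key_kw_threshold):
    """One pass over a small video combination: merge the first pair of small
    videos that shares at least kw_threshold keywords or key_kw_threshold key
    keywords, together with the middle small videos between them."""
    for i in range(len(videos)):
        for j in range(i + 1, len(videos)):
            a, b = videos[i], videos[j]
            if (len(a["keywords"] & b["keywords"]) >= kw_threshold or
                    len(a["key_keywords"] & b["key_keywords"]) >= key_kw_threshold):
                merged = {
                    "keywords": set().union(*(v["keywords"] for v in videos[i:j + 1])),
                    "key_keywords": set().union(*(v["key_keywords"] for v in videos[i:j + 1])),
                }
                # the merged video replaces the originals; the caller then
                # regenerates the combinations and repeats until nothing merges
                return videos[:i] + [merged] + videos[j + 1:]
    return videos

third = {"keywords": {"TCP", "socket"}, "key_keywords": {"TCP"}}
fourth = {"keywords": {"UDP"}, "key_keywords": {"UDP"}}
fifth = {"keywords": {"TCP", "message"}, "key_keywords": {"TCP"}}
# The third and fifth videos share "TCP", so they merge together with the
# middle (fourth) video into a single "fifth+" video.
print(merge_pass([third, fourth, fifth], kw_threshold=1, key_kw_threshold=1))
```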
Thus, for one video group, the keywords of its videos have been extracted.
All recorded videos are pre-split in this way, and the keywords in their videos are extracted.
A split video group is defined: when a recorded video has been pre-split and its keywords extracted, several split small videos are finally formed; together, these are called the split video group.
Illustratively, pre-splitting a recorded video and extracting its keywords generates a split video group comprising: the first split small video and its corresponding keywords; the second split small video and its corresponding keywords; ... The split video group also carries information about the recorded video, including: recording time, recording location, and teacher identity.
Specifically, semantic information in the teaching calendar is extracted by using a semantic recognition technology.
The teaching calendar contains the teaching content of every course taught by every teacher; the teaching content includes knowledge points and the keywords corresponding to each knowledge point. Knowledge points can be obtained by analyzing the teaching content. Semantic recognition is used to extract the teaching content of all courses in the teaching calendar.
Specifically, semantic recognition means using natural language processing to give a computer the ability to read text and automatically recognize the meaning of its words. Images, tables, and the like may first be processed with image recognition, table recognition, and similar techniques before semantic recognition is applied.
By way of example, semantic recognition may be performed with, but is not limited to: Tencent Wenzhi, the niuparser Chinese syntactic and semantic analysis system, and Baidu public opinion data.
Specifically, semantic recognition extracts the teaching content in the teaching calendar; the extracted information is called semantic information. Illustratively, the semantic information of a class includes: the lecture time, lecture location, teacher identity, and the knowledge points taught; each knowledge point has several corresponding keywords.
Specifically, the corresponding split video group is found according to the semantic information, and the knowledge points of the videos are determined from the semantic information and the split video group, generating knowledge point teaching videos.
From the semantic information of a class, the lecture time, lecture location, and teacher identity are obtained; with these, the corresponding recorded video and split video group can be found.
The keywords of each knowledge point are compared with the keywords corresponding to each split small video in the split video group.
A comparison threshold is set. The keywords corresponding to a course were already included when the dictionary was built. If the teacher lectures according to the teaching calendar, the recorded video necessarily contains the keywords of the course; that is, some split small video in the split video group necessarily contains them. If the number of keywords shared by a knowledge point and a split small video exceeds the comparison threshold, the knowledge point is considered successfully matched with that split small video, and the knowledge point corresponds to that video. If a course contains N knowledge points, the split video group recorded for that course contains N split small videos. After automatic matching, any knowledge point or split small video that remains unmatched is handled manually.
If a course contains N knowledge points while the corresponding split video group contains M split small videos, where N is not equal to M, then unmatched knowledge points or split small videos inevitably remain after automatic matching, and these too are handled manually.
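The matching step can be sketched as follows. This is an illustrative Python sketch; the video names and keyword sets are hypothetical.

```python
def match_knowledge_points(knowledge_points, split_videos, compare_threshold):
    """knowledge_points / split_videos: name -> set of keywords.
    A knowledge point matches the split small video with which it shares
    more than compare_threshold keywords; everything left unmatched goes
    to manual processing, as the description requires."""
    matches, manual = {}, []
    for kp, kp_kw in knowledge_points.items():
        best, best_overlap = None, compare_threshold
        for vid, vid_kw in split_videos.items():
            overlap = len(kp_kw & vid_kw)
            if overlap > best_overlap:      # strictly greater than threshold
                best, best_overlap = vid, overlap
        if best is None:
            manual.append(kp)
        else:
            matches[kp] = best
    return matches, manual

videos = {"clip_1": {"TCP socket", "programming"}, "clip_2": {"UDP"}}
points = {"TCP socket programming": {"TCP socket", "programming", "TCP"}}
print(match_knowledge_points(points, videos, compare_threshold=1))
# → ({'TCP socket programming': 'clip_1'}, [])
```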
In this way, one recorded video is split and/or merged to generate knowledge point teaching videos.
The same is done for all lecture videos. One course may be taught by different teachers, and different teachers teaching the same course generate videos of the same knowledge points. The same knowledge point may also appear in different courses, again generating multiple videos of the same knowledge point. Splitting therefore produces many videos covering different knowledge points: each split video corresponds to exactly one knowledge point, but each knowledge point may correspond to multiple videos.
Specifically, the videos are stored in a centralized manner according to the knowledge graph and/or the knowledge points.
Specifically, videos are stored centrally according to the knowledge graph, as follows:
a course knowledge graph is obtained, the knowledge points under it are obtained, and all videos corresponding to each knowledge point are stored according to those knowledge points.
A knowledge graph (Knowledge Graph) is, in library and information science, a series of diagrams showing the development process and structural relationships of knowledge; it uses visualization techniques to describe knowledge resources and their carriers, and to mine, analyze, construct, draw, and display knowledge and the interrelations among knowledge resources and knowledge carriers.
The knowledge graph combines the theories and methods of disciplines such as mathematics, graphics, information visualization, and information science with methods such as citation analysis and co-occurrence analysis, using visual diagrams to vividly display a discipline's core structure, development history, frontier fields, and overall knowledge architecture, thereby achieving multidisciplinary fusion.
Each course has its own knowledge graph, which contains all the knowledge points under that course.
All videos corresponding to each knowledge point are stored according to that knowledge point.
Illustratively, when a student wants to learn a course, the corresponding knowledge points can be found from the knowledge graph, and from each knowledge point all videos under it can be found. Students can therefore choose how to learn: for one course, the video of the first knowledge point may be by teacher A, the second by teacher B, and the third by teacher C, even though teacher C does not teach this course.
Specifically, the video is centrally stored according to the knowledge points.
All course knowledge points are acquired, and all corresponding videos are stored according to those knowledge points, forming a video collection organized by knowledge point.
Further, several videos of the same knowledge point may be merged into one video, convenient for students to retrieve when learning.
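The centralized storage by knowledge point amounts to an index from each knowledge point to every video that teaches it. This is an illustrative Python sketch; the video identifiers and knowledge point names are hypothetical.

```python
from collections import defaultdict

def build_index(videos):
    """videos: iterable of (video_id, knowledge_point) pairs.
    Returns knowledge_point -> list of all video ids that teach it."""
    index = defaultdict(list)
    for video_id, kp in videos:
        index[kp].append(video_id)
    return dict(index)

index = build_index([("teacher_a_clip3", "TCP connection"),
                     ("teacher_b_clip1", "TCP connection"),
                     ("teacher_a_clip4", "TCP socket programming")])
# A student studying "TCP connection" can choose between both teachers:
print(index["TCP connection"])  # → ['teacher_a_clip3', 'teacher_b_clip1']
```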
Specifically, the invention further provides a knowledge point splitting and integrating system based on teaching recording and broadcasting, exemplarily shown in fig. 2, which comprises:
the identification module is used for acquiring the identity information and confirming the identity information;
the recording module starts a recording and broadcasting system, records the teaching process of a teacher and generates a recorded video;
and the intelligent module is used for splitting and/or combining the recorded video according to the knowledge points to generate a knowledge point teaching video.
And the storage module is used for storing the videos in a centralized manner according to the knowledge map and/or the knowledge points.
Specifically, the identification module is used for acquiring identity information, confirming the identity information and generating teacher identity information;
the recording module is used for receiving the teacher identity information transmitted by the identification module, starting the recording and broadcasting system after the identity information is confirmed, recording the teaching process of the teacher and generating a recorded video;
and the intelligent module is used for receiving the recorded video transmitted by the recording module and splitting and/or combining the recorded video according to the knowledge points to generate a knowledge point teaching video.
The intelligent module includes:
the splitting component receives the recorded video transmitted by the recording module, and is used for pre-splitting the recorded video to generate a video group;
the extracting component is used for receiving the video group transmitted by the splitting component, extracting key words in the video and generating a split video group;
the extraction assembly comprises:
the conversion unit is used for receiving the video group transmitted by the splitting component and converting the voice signals in the video into texts;
the extraction unit is used for receiving the text transmitted by the conversion unit and extracting the keywords in the text by using a text processing technology; the text processing technology for extracting the keywords in the text comprises the following steps: and performing word segmentation and removal of stop words on the text.
And the merging unit is used for receiving the keywords transmitted by the extraction unit and merging the videos according to the keywords to generate a split video group.
The semantic component is used for extracting semantic information in the teaching calendar and generating semantic information;
the determining component is used for receiving the split video group transmitted by the extracting component and the semantic information transmitted by the semantic component; the video processing device is used for finding out the corresponding split video group according to the semantic information; and determining knowledge points of the video according to the semantic information and the split video group to generate a knowledge point teaching video.
And the storage module is used for receiving the video with the determined knowledge points transmitted by the determination component and intensively storing the video according to the knowledge map and/or the knowledge points.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A knowledge point splitting and integrating method based on teaching recording and broadcasting is characterized by comprising the following steps:
acquiring identity information and confirming the identity information;
starting a recording and broadcasting system, recording a teaching process and generating a recorded video;
according to the knowledge points, splitting and/or merging the recorded video to generate a knowledge point teaching video, comprising the following steps: pre-splitting a recorded video, extracting key words in the video, and generating a split video group; extracting semantic information in a teaching calendar; finding out a corresponding split video group according to the semantic information; determining knowledge points of the video according to the semantic information and the split video group, and generating a knowledge point teaching video;
the pre-splitting of the recorded video comprises: recognizing the pause time in the recorded video by using a voice recognition technology, setting a time threshold, considering that the teaching content can be split from the pause time when the pause time exceeds the time threshold, pre-splitting the recorded video, and generating a plurality of small videos after the pre-splitting of the recorded video; and recognizing the sentences in the recorded video by using a voice recognition technology, splitting the recorded video according to each sentence, further preprocessing the video after splitting, and cutting video segments which are not related to the classroom and are spoken by a teacher in the classroom.
2. The teaching recording-based knowledge point splitting and integrating method as claimed in claim 1, wherein the extracting key words in the video comprises:
and converting the voice signals in the video into texts, extracting key words in the texts by using a text processing technology, and merging the videos according to the key words.
3. The teaching recording-based knowledge point splitting and integrating method as claimed in claim 2, wherein the text processing technology extracting keywords from the text comprises:
and performing word segmentation and removal of stop words on the text.
4. The teaching recording-based knowledge point splitting and integrating method of claim 1, further comprising:
and storing the videos in a centralized manner according to the knowledge map and/or the knowledge points.
5. A knowledge point splitting and integrating system based on teaching recording and broadcasting is characterized by comprising:
the identification module is used for acquiring the identity information and confirming the identity information;
the recording module starts a recording and broadcasting system, records the teaching process of a teacher and generates a recorded video;
the intelligent module is used for splitting and/or combining the recorded video according to the knowledge points to generate a knowledge point teaching video, and comprises: the splitting component is used for pre-splitting the recorded video; the extraction component is used for extracting key words in the video and generating a split video group; the semantic component is used for extracting semantic information in the teaching calendar; the determining component is used for finding the corresponding split video group according to the semantic information; determining knowledge points of the video according to the semantic information and the split video group, and generating a knowledge point teaching video;
the pre-splitting of the recorded video comprises: recognizing the pause time in the recorded video by using a voice recognition technology, setting a time threshold, considering that the teaching content can be split from the pause time when the pause time exceeds the time threshold, pre-splitting the recorded video, and generating a plurality of small videos after the pre-splitting of the recorded video; and recognizing the sentences in the recorded video by using a voice recognition technology, splitting the recorded video according to each sentence, further preprocessing the video after splitting, and cutting video segments which are not related to the classroom and are spoken by a teacher in the classroom.
6. The teaching recording-based knowledge point splitting integration system of claim 5, wherein the extraction component comprises:
a conversion unit for converting a voice signal in a video into a text;
an extraction unit configured to extract keywords in the text using a text processing technique;
and the merging unit is used for merging the videos according to the keywords.
7. The teaching recording-based knowledge point splitting integration system of claim 6, wherein the text processing technology extracting keywords in the text comprises:
and performing word segmentation and removal of stop words on the text.
8. The teaching recording-based knowledge point splitting and integrating system of claim 7, further comprising:
and the storage module is used for storing the videos in a centralized manner according to the knowledge map and/or the knowledge points.
CN202010187807.6A 2020-03-17 2020-03-17 Knowledge point splitting and integrating method and system based on teaching recording and broadcasting Active CN111429768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010187807.6A CN111429768B (en) 2020-03-17 2020-03-17 Knowledge point splitting and integrating method and system based on teaching recording and broadcasting


Publications (2)

Publication Number Publication Date
CN111429768A CN111429768A (en) 2020-07-17
CN111429768B true CN111429768B (en) 2022-04-05

Family

ID=71548216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010187807.6A Active CN111429768B (en) 2020-03-17 2020-03-17 Knowledge point splitting and integrating method and system based on teaching recording and broadcasting

Country Status (1)

Country Link
CN (1) CN111429768B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914760B (en) * 2020-08-04 2021-03-30 华中师范大学 Online course video resource composition analysis method and system
CN111866608B (en) * 2020-08-05 2022-08-16 北京华盛互联科技有限公司 Video playing method, device and system for teaching
CN112529748A (en) * 2020-12-10 2021-03-19 成都农业科技职业学院 Intelligent education platform based on time node mark feedback learning state
CN112560663B (en) * 2020-12-11 2024-08-23 南京谦萃智能科技服务有限公司 Teaching video dotting method, related equipment and readable storage medium
CN112866606B (en) * 2021-01-20 2023-04-18 宁波阶梯教育科技有限公司 Recording and broadcasting method, recording and broadcasting equipment and computer readable storage medium
CN113259763B (en) * 2021-04-30 2023-04-07 作业帮教育科技(北京)有限公司 Teaching video processing method and device and electronic equipment
CN113076074B (en) * 2021-06-07 2021-10-01 广州朗国电子科技股份有限公司 Electronic blackboard writing reproduction method and system, electronic blackboard and readable medium
CN115086750B (en) * 2022-06-20 2024-08-13 深圳市圣泽软件有限公司 Automatic screen recording method and device, teaching equipment and storage medium
CN116437139B (en) * 2023-03-06 2024-04-12 广州开得联软件技术有限公司 Classroom video recording method, device, storage medium and equipment
CN117033665B (en) * 2023-10-07 2024-01-09 成都华栖云科技有限公司 Method and device for aligning map knowledge points with video

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102663907A (en) * 2012-05-10 2012-09-12 北京中熙正保远程教育技术有限公司 Video teaching system and video teaching method
CN104408983A (en) * 2014-12-15 2015-03-11 广州市奥威亚电子科技有限公司 Recording and broadcasting equipment-based intelligent teaching information processing device and method
CN104978421A (en) * 2015-06-30 2015-10-14 北京竞业达数码科技有限公司 Knowledge point based video teaching resource editing method and apparatus
CN109801194A (en) * 2017-11-17 2019-05-24 深圳市鹰硕技术有限公司 It is a kind of to follow teaching method with remote evaluation function
CN109817040A (en) * 2019-01-07 2019-05-28 北京汉博信息技术有限公司 A kind of processing system for teaching data
CN110035330A (en) * 2019-04-16 2019-07-19 威比网络科技(上海)有限公司 Video generation method, system, equipment and storage medium based on online education

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20080318200A1 (en) * 2005-10-13 2008-12-25 Kit King Kitty Hau Computer-Aided Method and System for Guided Teaching and Learning


Also Published As

Publication number Publication date
CN111429768A (en) 2020-07-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant