CN118210946A - Automatic generation method, medium and system for digital teaching video courseware - Google Patents


Info

Publication number
CN118210946A
CN118210946A
Authority
CN
China
Prior art keywords
video
text
knowledge point
teaching
knowledge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410309201.3A
Other languages
Chinese (zh)
Inventor
高阳 (Gao Yang)
何淑媛 (He Shuyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Jinke Education Technology Co ltd
Original Assignee
Yunnan Jinke Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Jinke Education Technology Co ltd filed Critical Yunnan Jinke Education Technology Co ltd
Priority to CN202410309201.3A priority Critical patent/CN118210946A/en
Publication of CN118210946A publication Critical patent/CN118210946A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an automatic generation method, medium and system for digital teaching video courseware, belonging to the technical field of teaching video courseware. The automatic generation method comprises the following steps: first, a teaching video is preprocessed to obtain a video text, which is matched against a knowledge point database to obtain a first knowledge point set. Next, related texts of the first knowledge point set are acquired from the electronic teaching material corresponding to the teaching video, and a knowledge point tree is established. A second knowledge point set is then generated according to how the video text matches the first text set, and a plurality of positions in the electronic teaching material corresponding to the video text are determined from the positions of the second knowledge point set in the knowledge point tree. Finally, the corresponding texts and pictures are extracted from those positions and organized to generate the courseware content of the corresponding video clips. The invention realizes the automatic generation of digital teaching courseware and greatly improves the efficiency of digital courseware production.

Description

Automatic generation method, medium and system for digital teaching video courseware
Technical Field
The invention belongs to the technical field of teaching video courseware, and particularly relates to an automatic generation method, medium and system of digital teaching video courseware.
Background
With the development of information technology, digital teaching videos are increasingly widely applied in teaching. Compared with traditional teaching videos, digital teaching video courseware can deeply fuse video content with electronic courseware and enable interactive teaching. Existing digital teaching video courseware is mainly produced in the following two ways:
In the first, the teacher records the teaching video manually and, at the same time, manually produces digital courseware corresponding to the video content, achieving a close fit between video and courseware. However, this manual approach is time-consuming and laborious, requires the teacher to be proficient in both video production and digital courseware production, and cannot support dynamic updates of the content.
In the second, the teacher only records the teaching video, and a technician later adds interactive elements to form the digital video courseware. This division of labor can improve video production efficiency, but the later stage lacks a deep understanding of the video content, making it difficult to establish an accurate correspondence between video and courseware, so a close content fit cannot be achieved.
Therefore, the existing production modes for digital teaching video courseware are inefficient and cannot organically combine content and form. The specific problems include: the production process is complex and output efficiency is low; it is difficult to establish an accurate correspondence between video content and digital courseware; and courseware content cannot be updated dynamically, so maintenance costs are high.
Disclosure of Invention
In view of the above, the invention provides an automatic generation method, medium and system for digital teaching video courseware, which can realize the automatic generation of digital teaching courseware and greatly improve the efficiency of digital courseware production.
The invention is realized in the following way:
The first aspect of the invention provides an automatic generation method of digital teaching video courseware, which comprises the following steps:
S10, acquiring a teaching video, and performing video preprocessing operations including video segmentation, text recognition and text preprocessing to obtain a video text;
S20, matching the video text against a preset knowledge point database to obtain a plurality of knowledge points, which are recorded as a first knowledge point set;
S30, acquiring a plurality of related texts of the first knowledge point set from the electronic teaching material corresponding to the teaching video, which are recorded as a first text set;
S40, establishing a knowledge point tree corresponding to the first text set according to the order in which each related text of the first text set appears in the electronic teaching material, wherein each node of the knowledge point tree represents a knowledge point and each edge represents the relationship between the knowledge points it connects;
S50, determining the knowledge points in the electronic teaching material corresponding to the video text according to how the video text matches the first text set, and generating a second knowledge point set;
S60, determining a plurality of positions in the electronic teaching material corresponding to the video text according to the positions of the second knowledge point set in the knowledge point tree;
S70, extracting the texts and pictures corresponding to the second knowledge point set from the electronic teaching material according to those positions, and organizing them to generate the courseware content of the corresponding video clips.
On the basis of the technical scheme, the automatic generation method of the digital teaching video courseware can be improved as follows:
Wherein the specific embodiment of S10 includes:
collecting teaching video data to obtain the teaching video to be processed;
converting the acquired teaching video into a processable format;
dividing the video with a segmentation algorithm to obtain a plurality of video clips;
performing text recognition on the teaching video frame by frame with a text detection and recognition algorithm to obtain the extracted text;
and performing code-conversion and useless-text-filtering preprocessing on the extracted text to obtain structured text data.
The beneficial effect of adopting the above improvement is: the method makes full use of computer vision and deep learning algorithms, can effectively and automatically extract the text information in the video, and provides data support for automatically generating digital teaching courseware.
Further, the specific embodiment of S20 includes:
constructing a video discipline knowledge point relation database;
Parsing the video text using natural language processing techniques such as semantic analysis;
extracting text keyword composition feature vectors;
Calculating the similarity between the feature vector and a knowledge point vector prestored in a knowledge point database according to a matching algorithm;
and classifying and filtering the knowledge points according to the similarity to obtain an accurate matching result.
The beneficial effect of adopting the above improvement is: the video text is accurately mapped to knowledge points, providing a basis for the subsequent intelligent selection of related teaching material content.
Further, the specific embodiment of S30 includes:
constructing an electronic teaching material knowledge point base;
inquiring corresponding teaching material text in a knowledge base based on knowledge points of the video;
Sequentially sequencing and filtering the query text;
And updating and perfecting the knowledge base content by using the newly added knowledge points.
The beneficial effect of adopting the above improvement is: related texts are acquired by querying the teaching material knowledge base, links are established between the video knowledge points and the corresponding teaching material knowledge, and material sources are provided for the subsequent generation of digital courseware content.
Further, the specific embodiment of S40 includes:
generating a knowledge point node for each related text;
Analyzing the text knowledge point relation to determine the edges between the nodes;
Forming a knowledge point tree structure by adding node edges;
checking the logic of the adjustment tree structure;
the hierarchical structure of the tree is optimized using a tree construction algorithm.
The beneficial effect of adopting the above improvement is: the knowledge point tree represents the knowledge points of the related texts in a tree structure, which intuitively reflects the logical relationships among them and provides a basis for determining the knowledge points related to the video.
Further, the specific embodiment of S50 includes:
Extracting lexical syntax semantic features of the video text;
calculating the similarity between text feature vectors;
judging matched related texts according to the similarity threshold value;
directly acquiring knowledge points corresponding to the matching text;
and checking the matching with low partial similarity.
The beneficial effect of adopting the above improvement is: through text feature matching, the knowledge points taught in the video text can be determined, providing a basis for identifying the teaching material knowledge content related to the video.
Further, the specific embodiment of S60 includes:
positioning the position of a video knowledge point node in a knowledge point tree;
backtracking the father node by taking the knowledge point as the end point to form a path;
Mapping the appearance sequence of the text in the teaching material according to the path;
The order of the plurality of knowledge point text intervals is adjusted.
The beneficial effect of adopting the above improvement is: the corresponding position of each video knowledge point in the teaching material is located through the tree structure, establishing an alignment between the video content and the teaching material content.
Further, the courseware for the corresponding video clip is generated using an AI-based PPT generation tool.
A second aspect of the present invention provides a computer readable storage medium, where the computer readable storage medium stores program instructions, where the program instructions are configured to execute the method for automatically generating a digital teaching video courseware described above when the program instructions are executed.
A third aspect of the present invention provides an automatic generation system for digital teaching video courseware, which includes the computer readable storage medium described above.
Compared with the prior art, the automatic generation method of digital teaching video courseware provided by the invention has the following beneficial effects:
(1) Full-automatic generation of digital video courseware is realized: the whole generation flow requires no manual participation; the video content is automatically analyzed, the corresponding teaching material resources are extracted, and the interactive digital courseware is organized and generated;
(2) The video content and the digital courseware realize accurate correspondence: knowledge points are extracted through video text analysis and positioned to the teaching material content, so that the problem that the correspondence between the knowledge points and the teaching material content is inaccurate in the prior art is effectively solved;
(3) Supporting dynamic update of courseware content: when the teaching materials are updated, the automatic updating of the digital courseware content can be realized by re-running the generating flow, repeated production is not needed, and the maintenance cost is greatly reduced;
(4) The generation efficiency is high: by utilizing the technologies of computer vision, natural language processing and the like, the high-efficiency automatic generation which cannot be achieved by manpower is realized. The whole process can be executed in parallel, so that the output time is further shortened;
(5) The professional requirements are reduced: by adopting an automatic generation mode, a common teacher can produce high-quality digital courseware without mastering professional skills of video production and courseware production;
(6) The adaptability is strong: through parameter adjustment, digital teaching video courseware in a corresponding form can be automatically generated aiming at different subjects and course contents.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for automatically generating digital teaching video courseware.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
As shown in fig. 1, a flowchart of a method for automatically generating a digital teaching video courseware according to a first aspect of the present invention includes the following steps:
S10, acquiring teaching videos, and performing video preprocessing operations including video segmentation, text recognition and text preprocessing to obtain video texts;
S20, matching is carried out according to the video text by adopting a preset knowledge point database, so as to obtain a plurality of knowledge points, and the knowledge points are recorded as a first knowledge point set;
S30, acquiring a plurality of related texts of the first knowledge point set from the electronic teaching material corresponding to the teaching video, which are recorded as a first text set;
S40, establishing a knowledge point tree corresponding to the first text set according to the order in which each related text of the first text set appears in the electronic teaching material, wherein each node of the knowledge point tree represents a knowledge point and each edge represents the relationship between the knowledge points it connects;
S50, determining the knowledge points in the electronic teaching material corresponding to the video text according to how the video text matches the first text set, and generating a second knowledge point set;
S60, determining a plurality of positions in the electronic teaching material corresponding to the video text according to the positions of the second knowledge point set in the knowledge point tree;
S70, extracting the texts and pictures corresponding to the second knowledge point set from the electronic teaching material according to those positions, and organizing them to generate the courseware content of the corresponding video clips.
For step S10, the following sub-steps may be adopted in the specific embodiment:
(1) Collecting video data: the teaching video to be processed is obtained. Related teaching videos can be downloaded from public teaching video websites, or recorded with recording equipment. The video quality must be good enough, with high resolution and a clear picture;
(2) Video format conversion: the collected teaching video is converted into a format which can be processed by an algorithm, typically mpg, mp4, avi and the like. The purpose of the conversion is to correctly read and analyze video data when the video is processed by a subsequent algorithm;
(3) Video segmentation: and dividing the teaching video according to a certain strategy to obtain a plurality of video clips. The segmentation strategy can be equal time segmentation, and a video clip is intercepted at regular intervals; the method can also carry out intelligent segmentation according to video content, and detect the video transition position for segmentation. The purpose of segmentation is to change the subsequent processing object into a shorter video segment, so that accurate positioning processing is facilitated;
(4) Text recognition: text recognition is performed on the video clips to extract the text information in the video frames. Deep-learning-based text detection and recognition algorithms can be used to locate text regions in each video frame image and perform character recognition to convert them into text information, identifying key-frame characters, subtitles, annotations and other text in the video;
(5) Text preprocessing: the extracted text information is preprocessed, including conversion coding, useless text filtering, word segmentation and the like, the text is converted into structured data, and a processing basis is provided for subsequent matching knowledge points.
The method fully utilizes computer vision and a deep learning algorithm, can effectively and automatically extract text information in the video, and provides data support for automatically generating digital teaching courseware.
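The text-preprocessing sub-step (5) above may be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the function and parameter names are our own, and a real pipeline would feed it per-frame OCR output from a text detection and recognition model.

```python
import re

def preprocess_ocr_lines(lines, min_length=2):
    """Clean raw per-frame OCR output into structured text (illustrative sketch).

    - normalizes whitespace
    - drops lines shorter than min_length (likely OCR noise)
    - collapses consecutive duplicates (static captions recognized
      repeatedly across adjacent frames)
    """
    cleaned = []
    prev = None
    for line in lines:
        text = re.sub(r"\s+", " ", line).strip()
        if len(text) < min_length:
            continue  # filter useless fragments
        if text == prev:
            continue  # same caption seen in the previous frame
        cleaned.append(text)
        prev = text
    return cleaned
```

A real system would also apply encoding conversion and word segmentation, as the description notes, before handing the structured text to the knowledge-point matching step.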
For step S20, the following sub-steps may be adopted in the specific embodiment:
(1) Constructing a knowledge point database: and constructing a corresponding relation knowledge point database according to the disciplines, chapters and knowledge points of the teaching video. The database stores knowledge points of each chapter of the subject related to the video, and organizes the upper and lower relationships between the knowledge points in a tree structure. Database construction is the basis of matching knowledge points;
(2) Text processing: and processing the video text by using a natural language processing technology, including word segmentation, part-of-speech tagging, syntactic analysis and the like, and acquiring semantic information of the text. The processing purpose is to extract the semantic attribute of the text, so that the matching of the subsequent knowledge points is facilitated;
(3) Feature extraction: and extracting the characteristics of keywords, proper nouns and the like of the text according to the text processing result to form a text characteristic vector. The feature extraction can refine the core key content of the text information;
(4) Feature matching: and comparing the similarity between the text feature vector and the knowledge point feature vector pre-stored in the knowledge point database, and giving TopN knowledge points with highest similarity as matching results according to a matching algorithm. Common matching algorithms include semantic matching, edit distance algorithm, etc. The matching result is the knowledge point related to the video text, N can be determined empirically, and 10 is generally selected;
(5) Classifying knowledge points: and further processing the matching result, deleting the wrongly matched knowledge points to obtain a knowledge point set for accurately describing the video content, and completing the matching process with a knowledge point database.
By the method, accurate correspondence of the video text and the knowledge points is realized, and a basis is provided for subsequent intelligent selection of related teaching material contents.
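The feature-matching sub-steps (3) and (4) may be sketched as follows, assuming the knowledge point database is represented as keyword lists per knowledge point; the term-frequency vectors, cosine similarity, and TopN rule follow the description above, but all names are illustrative.

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_knowledge_points(video_text_terms, kp_database, top_n=10):
    """Return the TopN knowledge points most similar to the video text.

    kp_database: dict mapping knowledge-point name -> list of keywords,
    a stand-in for the pre-stored knowledge-point feature vectors.
    """
    query = Counter(video_text_terms)
    scored = [
        (name, cosine_sim(query, Counter(terms)))
        for name, terms in kp_database.items()
    ]
    scored.sort(key=lambda x: x[1], reverse=True)
    # classification/filtering step: drop zero-similarity matches
    return [name for name, score in scored[:top_n] if score > 0]
```

The description's default of N = 10 is kept as the `top_n` default; sub-step (5), deleting wrongly matched points, is approximated here by dropping zero-score candidates.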
For step S30, the following sub-steps may be adopted in the specific embodiment:
(1) Constructing an electronic teaching material knowledge base: and acquiring an electronic teaching material corresponding to the teaching video, extracting text content in the teaching material, and identifying knowledge points in the text to form an electronic teaching material knowledge base. The knowledge base stores structured knowledge point data of teaching materials;
(2) Extracting relevant text: and according to the knowledge points obtained by video matching, inquiring related knowledge point texts in the electronic teaching material knowledge base, and organizing the inquiry results into related text sets. The related text reflects the corresponding knowledge of the video knowledge points in the teaching materials;
(3) Sequencing and filtering: and sequencing the related texts in sequence to ensure that the sequence of the texts is consistent with the appearance sequence in the teaching materials. While filtering duplicate and too short nonsensical text. Sequencing the filtered text set to obtain a refined knowledge point corresponding text;
(4) Updating the knowledge base: and using the video to match the newly acquired knowledge points to update the electronic teaching material knowledge base, and continuously enriching the integrity of the content of the knowledge base. Periodic updates evolve the knowledge base to better serve knowledge point matching.
And acquiring related texts by querying a teaching material knowledge base, establishing links between video knowledge points and corresponding teaching material knowledge, and providing material sources for subsequent generation of digital courseware content.
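Sub-steps (2) and (3) of S30, querying and then ordering and filtering the related texts, may be sketched as follows; the tuple representation of the teaching material knowledge base is an assumption for illustration only.

```python
def query_related_texts(knowledge_points, textbook_kb, min_length=10):
    """Look up teaching-material passages for the matched knowledge points.

    textbook_kb: list of (position, knowledge_point, passage) tuples,
    a stand-in for the structured electronic teaching material knowledge base.
    Returns passages in teaching-material order, deduplicated, with
    overly short (likely meaningless) passages filtered out.
    """
    wanted = set(knowledge_points)
    seen = set()
    related = []
    for position, kp, passage in sorted(textbook_kb):  # textbook order
        if kp in wanted and passage not in seen and len(passage) >= min_length:
            related.append(passage)
            seen.add(passage)
    return related
```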
For step S40, the following sub-steps may be adopted in the specific embodiment:
(1) Constructing nodes: a node is generated for each text in the related text set, containing the knowledge point information of that text. The set of all nodes forms the basis of the knowledge point tree;
(2) Determining the relation: analyzing the relation of the upper level, the lower level, the adjacent relation, the predecessor relation and the like between knowledge points in two texts, connecting two nodes to form a directed edge, and representing the relation between the knowledge points;
(3) Forming a tree structure: forming a tree network of knowledge points by continuously adding the relationship between nodes and edges, wherein the root node is a knowledge subject, the child nodes are detailed knowledge points, and the edges represent the logic relationship of the knowledge points;
(4) Correction tree structure: checking whether the tree structure accords with the knowledge logic, and modifying the node edge relation with errors to ensure that the knowledge point tree correctly reflects the knowledge structure;
(5) Optimizing the tree structure: and a certain tree construction algorithm is adopted to prune and adjust the tree, and the hierarchical structure of the knowledge point tree is optimized to be more balanced.
Specifically, the goal of this step is to build a knowledge point tree that reflects the logical relationships between knowledge points.
The related text set is defined as TE = {te_1, te_2, ..., te_i, ..., te_n}, where te_i represents the i-th related text.
The process of constructing the knowledge point tree is as follows:
For each text te_i, the knowledge point set contained in the text is extracted and defined as P_i = {p_i1, p_i2, ..., p_im}.
For each knowledge point p_ij, a tree node ne_ij is created. The set of all nodes is defined as NE = {ne_11, ne_12, ..., ne_ij, ..., ne_nm}.
The relationship between knowledge points p_ij and p_kl is analyzed and, where a relationship exists, an edge e_(ij,kl) connecting the nodes ne_ij and ne_kl is created. The set of all edges is defined as E.
An initial tree T = (NE, E) is constructed from all nodes and edges.
The structure of the tree T is checked and erroneous node relations are corrected, ensuring that the order of knowledge points is consistent with their logical relationships, yielding the tree T'.
The tree T' is optimized using a tree construction algorithm (e.g., the Prim algorithm) to obtain the final knowledge point tree T*.
The relation judgment between the knowledge points utilizes a knowledge graph technology, and the logic relation is determined by analyzing the semantic relevance of the two knowledge points.
The goal of optimizing the tree structure is to obtain a well-defined, minimally deep knowledge point tree, which can use a minimum spanning tree algorithm.
The purpose of constructing the knowledge point tree is to represent the knowledge points of the related text by a tree structure, and the tree structure can intuitively reflect the logic relationship among the knowledge and provides a basis for determining the video related knowledge points.
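The node and edge construction described above may be sketched as follows. This is a simplified illustration: the (parent, child) relation pairs stand in for the edges inferred by the semantic/knowledge-graph analysis, and the correction and optimization passes are omitted.

```python
class KPNode:
    """A knowledge point tree node holding its parent and children."""
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

def build_knowledge_tree(root_name, relations):
    """Build a knowledge point tree from (parent, child) relation pairs.

    root_name: the knowledge subject at the root of the tree.
    relations: list of (parent_name, child_name) pairs, standing in for
    the edges determined by analyzing relations between knowledge points.
    Returns a dict mapping each knowledge point name to its node.
    """
    nodes = {root_name: KPNode(root_name)}
    for parent, child in relations:
        for name in (parent, child):
            if name not in nodes:
                nodes[name] = KPNode(name)
        nodes[child].parent = nodes[parent]
        nodes[parent].children.append(nodes[child])
    return nodes
```

A production version would add the logic check (detecting cycles or wrong parentage) and a minimum-spanning-tree-style optimization pass, as the description suggests.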
For step S50, the following sub-steps may be adopted in the specific embodiment:
(1) Feature extraction: analyzing the video text, extracting lexical, syntactic and semantic features of the text by using a natural language processing technology, and forming text feature vectors;
(2) Similarity calculation: comparing the similarity between the video text feature vector and all text feature vectors in the related text set, and calculating cosine similarity between texts;
(3) Matching text: and selecting the related text which is most similar to the video text as the matching text according to the similarity. The text with the similarity exceeding the threshold value is determined to be the text successfully matched;
(4) Determining knowledge points: corresponding knowledge points can be directly obtained from the matched related texts, and the knowledge points corresponding to all the matched texts are collected to form a knowledge point set of the video text;
(5) Removing the false matches: and (3) checking the matching with lower partial similarity, removing the error knowledge points which are not high in similarity but matched, and further optimizing the knowledge point matching result.
Specifically, the objective of this step is to determine the knowledge points corresponding to the video text.
Define the video text as d and the related text set as TE = {te_1, te_2, ..., te_n}.
The process of determining knowledge points is as follows:
For the text d and each te_i ∈ TE, feature vectors F_d and F_i are extracted respectively; feature extraction uses word-vector, word-frequency and similar techniques.
The similarity of the text d and te_i is calculated, e.g., as the cosine similarity sim(d, te_i) = (F_d · F_i) / (||F_d|| ||F_i||).
If sim(d, te_i) > σ (a threshold), the text te_i is considered to match d.
The knowledge point set P_i of each matched te_i is obtained; the knowledge points of the video text are the union of the knowledge points corresponding to all matching texts.
And judging the matched text through similarity calculation, and acquiring knowledge points of the matched text to complete knowledge point determination of the video text.
Through text feature matching, knowledge points taught by the video text can be judged, and a basis is provided for determining teaching material knowledge content related to the video.
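The threshold-and-union rule of S50 may be sketched as follows; the similarity values are assumed to come from the cosine computation of sub-step (2), and the threshold value used here is illustrative, not specified by the patent.

```python
def determine_video_knowledge_points(video_text_sims, text_to_kps, threshold=0.6):
    """Apply the rule sim(d, te_i) > sigma and union the knowledge points.

    video_text_sims: dict mapping related-text id -> similarity with the
    video text d (precomputed, e.g. cosine similarity).
    text_to_kps: dict mapping related-text id -> set of its knowledge points.
    Returns the second knowledge point set for the video text.
    """
    matched = {t for t, s in video_text_sims.items() if s > threshold}
    kps = set()
    for t in matched:
        kps |= text_to_kps.get(t, set())  # union over all matching texts
    return kps
```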
For step S60, the following sub-steps may be adopted in the specific embodiment:
(1) Positioning nodes: the method comprises the steps of finding the position of each knowledge point node in a video knowledge point set in a knowledge point tree, and obtaining detailed information of the nodes in the tree structure;
(2) Confirmation path: backtracking the father node of the node by taking the knowledge point node as an end point until the tree root, forming a path of the knowledge point in the tree structure, and representing a logic relationship chain among the knowledge points;
(3) Mapping text: positioning the appearance sequence of the related texts in the teaching material content according to the knowledge point path to obtain a teaching material text interval corresponding to the knowledge point;
(4) Adjusting the sequence: sequentially adjusting the text intervals of the teaching materials of the knowledge points to ensure that the interval sequence is consistent with the sequence of the video text;
(5) Outputting a result: and finally, determining the position information of the teaching material knowledge content corresponding to the video text, and providing a positioning basis for the teaching material content for the digital courseware.
Specifically, the objective of this step is to determine the teaching material positions corresponding to the video text.
Define the knowledge point tree as T_2 = (N_2, E_2) with node set N_2, and the set of video knowledge points as P_3.
The teaching material positions are determined as follows:
For each knowledge point p_3 ∈ P_3, the corresponding node n_p is found in the tree T_2.
The parent nodes are traced back from n_p up to the root, yielding the tree path Path(p_3) of the knowledge point p_3.
The text position interval Loc(p_3) is located in the teaching material document according to Path(p_3).
Loc(p_3) is determined for all knowledge points p_3 ∈ P_3.
The position intervals Loc(p_3) are reordered so that their sequence matches that of the video text.
The teaching material position information corresponding to the video text is thus finally determined.
By backtracking the node path on the knowledge point tree, the corresponding text position of the teaching material can be positioned and corresponds to the video text sequence, and position information is provided for subsequent content extraction.
And positioning the corresponding position of the video knowledge point in the teaching material through the tree structure, and establishing an alignment relation between the video content and the teaching material content.
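The path backtracking and position lookup of S60 may be sketched as follows. The parent map stands in for the knowledge point tree T_2, and the mapping from a knowledge point to a teaching-material section index is an illustrative assumption.

```python
def backtrack_path(parent, kp):
    """Trace a knowledge point back to the tree root: Path(p3) in S60.

    parent: dict mapping each knowledge point to its parent (None at the
    root), a stand-in for the knowledge point tree T_2.
    Returns the path in root-to-leaf order.
    """
    path = [kp]
    while parent.get(kp) is not None:
        kp = parent[kp]
        path.append(kp)
    return list(reversed(path))

def locate_in_textbook(paths, section_positions):
    """Map each path to a teaching-material position Loc(p3).

    paths are given in video-text order; section_positions maps the last
    path element (the knowledge point itself) to a section index.
    Returns (position, knowledge_point) pairs in that same order.
    """
    return [(section_positions[p[-1]], p[-1]) for p in paths]
```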
For step S70, the following sub-steps may be adopted in a specific embodiment:
(1) Extracting text: according to the matched positions of the teaching material knowledge content, extract the text information at the corresponding positions from the electronic teaching material;
(2) Extracting pictures: according to the matched positions, extract the corresponding picture resources in the teaching material, including graphic content such as illustrations, tables, and formulas;
(3) Organizing content: organize and integrate the extracted texts and pictures in order, forming the digital teaching courseware content corresponding to the video clip;
(4) Adding interaction: add necessary interactive elements, such as video seek points, hyperlinks, and bullet comments, to the text and picture content to form interactive digital courseware;
(5) Outputting courseware: combine the generated digital courseware with the corresponding video clip to form complete digital teaching video courseware, thereby realizing automatic generation.
Additionally, common AI presentation-generation tools, such as Canva AI, Slides AI, and Copilot, may be used directly to generate the courseware.
By accurately extracting the teaching material content and adding interactive elements, the automatic generation of digital teaching video courseware is finally realized, with an accurate correspondence between the video and the courseware content.
The method makes full use of natural language processing, information retrieval, and data mining techniques to map from video to text, to knowledge points, and on to the corresponding teaching material content; it completes the automatic generation of digital courseware content through a series of algorithms and can greatly reduce the manual workload of courseware production.
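A minimal sketch of sub-steps (3) and (4) — organizing extracted material into ordered, interactive slides. The slide structure, field names, and sample data are illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Slide:
    """One slide of the generated courseware (illustrative structure)."""
    title: str
    text: str
    images: list = field(default_factory=list)
    video_anchor: float = 0.0  # interactive seek point into the video clip, in seconds

def organize_courseware(extracted):
    """Order extracted (title, text, images, timestamp) items by their appearance
    in the video and wrap them as slides -- sub-steps (3) and (4) in miniature."""
    return [
        Slide(title=t, text=x, images=list(imgs), video_anchor=ts)
        for t, x, imgs, ts in sorted(extracted, key=lambda item: item[3])
    ]

slides = organize_courseware([
    ("Definition", "A right triangle has one 90-degree angle.", ["fig1.png"], 12.0),
    ("Theorem", "a^2 + b^2 = c^2", [], 3.5),
])
```

Each slide's `video_anchor` would back the video seek points mentioned in sub-step (4).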
A second aspect of the present invention provides a computer-readable storage medium storing program instructions which, when executed, perform the above automatic generation method of digital teaching video courseware.
A third aspect of the present invention provides an automatic generation system for digital teaching video courseware, which comprises the computer-readable storage medium.
The core innovation of this automatic generation method of digital teaching video courseware is the establishment of an accurate correspondence between the video content and the knowledge points of the corresponding teaching materials. Through this correspondence, interactive digital courseware is generated by automatically extracting teaching material content, without manual video content analysis or knowledge point positioning. The technical principles are as follows:
(1) Video preprocessing: video content analysis is realized using computer vision and related technologies, including text recognition, to obtain the video text information. This provides a data basis for subsequent knowledge point extraction;
(2) Knowledge point extraction: the knowledge points related to the video are automatically located by matching the video text against the knowledge point library. This is key to understanding the video content;
(3) Constructing a knowledge network: a network of relationships between knowledge points is built on the teaching material's knowledge system, forming a logical structure among the knowledge points;
(4) Accurate positioning: according to the positions of the knowledge points in the knowledge network, the corresponding knowledge content in the teaching material can be accurately traced and located;
(5) Automatically organizing and generating digital courseware: the located teaching material content, including texts and pictures, is acquired and organized to generate interactive digital courseware.
Through the coordination of these technical points, automatic analysis of video content is realized and the corresponding teaching material resources can be accurately located, effectively solving the technical problem that video content and digital courseware are difficult to align in digital teaching video courseware production.
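Technical point (2) — matching video text against a knowledge point library via keyword feature vectors and similarity — can be sketched as below. The bag-of-keywords features, cosine similarity, threshold value, and sample database are simplified assumptions standing in for the matching algorithm the patent leaves unspecified:

```python
import math
from collections import Counter

# Hypothetical knowledge point database: name -> descriptive keyword text
knowledge_db = {
    "pythagorean_theorem": "right triangle hypotenuse square sum",
    "quadratic_formula": "quadratic equation roots discriminant",
}
vocab = sorted({w for text in knowledge_db.values() for w in text.split()})

def feature_vector(text):
    """Bag-of-keywords feature vector over the fixed vocabulary
    (simplified stand-in for the patent's keyword extraction)."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_knowledge_points(video_text, threshold=0.3):
    """Return knowledge points whose similarity to the video text exceeds the threshold."""
    v = feature_vector(video_text)
    return [name for name, text in knowledge_db.items()
            if cosine_similarity(v, feature_vector(text)) >= threshold]
```

In practice, TF-IDF or semantic embeddings would replace the raw keyword counts, but the classify-and-filter-by-similarity flow is the same.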

Claims (10)

1. The automatic generation method of the digital teaching video courseware is characterized by comprising the following steps of:
S10, acquiring teaching videos, and performing video preprocessing operations including video segmentation, text recognition and text preprocessing to obtain video texts;
S20, matching is carried out according to the video text by adopting a preset knowledge point database, so as to obtain a plurality of knowledge points, and the knowledge points are recorded as a first knowledge point set;
S30, acquiring a plurality of related texts of the first knowledge point set according to the electronic teaching materials corresponding to the teaching videos, and recording the texts as a first text set;
S40, establishing a knowledge point tree corresponding to the first text set according to the appearance sequence of each related text in the first text set in the electronic teaching material, wherein each node of the knowledge point tree represents a knowledge point, and the edges of the knowledge point tree represent the relationship of the knowledge points connected with each other;
S50, determining knowledge points in the electronic teaching materials corresponding to the video text according to the matching condition of the video text and the first text set, and generating a second knowledge point set;
S60, determining a plurality of positions of the electronic teaching materials corresponding to the video text according to the positions of the second knowledge point set in the knowledge point tree;
S70, extracting texts and pictures corresponding to the second knowledge point set from the electronic teaching material according to the positions of the electronic teaching material corresponding to the video texts, and organizing to generate courseware content of the corresponding video clips.
2. The automatic generation method of digital teaching video courseware according to claim 1, wherein the specific implementation mode of S10 includes:
collecting teaching video data, and obtaining teaching video to be processed;
converting the acquired teaching video into a processable format;
Dividing the video by using a dividing algorithm to obtain a plurality of video clips;
Performing text recognition on the teaching video frame by using a text detection recognition algorithm to obtain extracted text;
and performing code conversion and useless text filtering pretreatment on the extracted text to obtain the structured text data.
3. The automatic generation method of digital teaching video courseware according to claim 2, wherein the specific implementation manner of S20 includes:
constructing a video discipline knowledge point relation database;
Parsing the video text using natural language processing techniques such as semantic analysis;
extracting text keyword composition feature vectors;
Calculating the similarity between the feature vector and a knowledge point vector prestored in a knowledge point database according to a matching algorithm;
and classifying and filtering the knowledge points according to the similarity to obtain an accurate matching result.
4. The automatic generation method of digital teaching video courseware according to claim 3, wherein the specific implementation manner of S30 includes:
constructing an electronic teaching material knowledge point base;
inquiring corresponding teaching material text in a knowledge base based on knowledge points of the video;
sorting and filtering the queried texts in sequence;
And updating and perfecting the knowledge base content by using the newly added knowledge points.
5. The automatic generation method of digital teaching video courseware according to claim 4, wherein the specific implementation manner of S40 includes:
generating a knowledge point node for each related text;
Analyzing the text knowledge point relation to determine the edges between the nodes;
Forming a knowledge point tree structure by adding node edges;
checking the logic of the adjustment tree structure;
the hierarchical structure of the tree is optimized using a tree construction algorithm.
6. The automatic generation method of digital teaching video courseware according to claim 5, wherein the S50 embodiment includes:
Extracting lexical syntax semantic features of the video text;
calculating the similarity between text feature vectors;
judging matched related texts according to the similarity threshold value;
directly acquiring knowledge points corresponding to the matching text;
and checking the matches with low similarity.
7. The automatic generation method of digital teaching video courseware according to claim 6, wherein the specific implementation mode of S60 includes:
positioning the position of a video knowledge point node in a knowledge point tree;
backtracking the father node by taking the knowledge point as the end point to form a path;
Mapping the appearance sequence of the text in the teaching material according to the path;
The order of the plurality of knowledge point text intervals is adjusted.
8. The automatic generation method of digital teaching video courseware according to claim 7, wherein the courseware for generating the corresponding video clip adopts an AI generation tool of PPT.
9. A computer readable storage medium, wherein program instructions are stored in the computer readable storage medium, and when the program instructions are executed, the program instructions are used to perform a method for automatically generating a digital teaching video courseware according to any one of claims 1-8.
10. An automatic generation system for digital teaching video courseware, comprising the computer readable storage medium of claim 9.
CN202410309201.3A 2024-03-19 2024-03-19 Automatic generation method, medium and system for digital teaching video courseware Pending CN118210946A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410309201.3A CN118210946A (en) 2024-03-19 2024-03-19 Automatic generation method, medium and system for digital teaching video courseware


Publications (1)

Publication Number Publication Date
CN118210946A true CN118210946A (en) 2024-06-18

Family

ID=91453776




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination