WO2021000909A1 - Course optimization method, device and system - Google Patents

Course optimization method, device and system Download PDF

Info

Publication number
WO2021000909A1
WO2021000909A1 (application PCT/CN2020/099892, CN2020099892W)
Authority
WO
WIPO (PCT)
Prior art keywords
knowledge point
video
information
teaching
knowledge
Prior art date
Application number
PCT/CN2020/099892
Other languages
English (en)
French (fr)
Inventor
刘慧军
姚浩
孙桂勇
Original Assignee
北京易真学思教育科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京易真学思教育科技有限公司 filed Critical 北京易真学思教育科技有限公司
Priority to EP20835234.4A priority Critical patent/EP3996070A4/en
Priority to US17/624,330 priority patent/US11450221B2/en
Publication of WO2021000909A1 publication Critical patent/WO2021000909A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/738Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/49Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/57Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals

Definitions

  • the present invention relates to the field of computers and mobile Internet, in particular to a method, device and system for optimizing courses.
  • the teaching system can be, for example, the future blackboard system developed by Good Future.
  • In an actual teaching scenario, the main lecturer needs to prepare lessons before class, and planning needs to be done in the process of preparing the lesson.
  • For example, how many pages of courseware are needed to explain a certain knowledge point clearly, or how many rounds of interactive communication with the children in the classroom need to be initiated during class.
  • For the time consumed by each segment (knowledge points, interactive communication and so on), the main lecturer has a rough estimate when preparing the lesson. However, this estimate is often inaccurate, and during teaching the main lecturer has no accurate reference standard for pacing the lecture, such as what should be covered by a given time. In other words, for a certain knowledge point, there is no uniform standard for how many pages of courseware and how long the main lecturer should use to explain it.
  • The main lecturer teaches according to pre-made courseware during the live broadcast.
  • The courseware is pre-configured with multiple knowledge points and multiple interactive activities, together with the teaching duration and number of courseware pages each of them requires.
  • The main lecturer teaches according to the knowledge points, interactive activities and their configured durations and page counts. However, whether the knowledge points, interactive activities and their pre-configured durations and page counts are appropriate is hard to judge: for a certain knowledge point, is the scheduled explanation time sufficient for students to understand it, or is it overly long?
  • The main lecturer/tutor may have a subjective feeling, for example that the teaching time of a certain knowledge point was too long, but cannot quantify it, i.e., cannot directly state how much longer or shorter the scheduled duration should be, and thus cannot directly determine the optimal teaching duration and number of courseware pages for that knowledge point.
  • the present invention provides a course optimization method and system.
  • The lecture videos live-broadcast by multiple teachers are segmented according to knowledge points, and the multiple segmented videos are classified according to knowledge points.
  • User feedback is introduced through user ratings, click-count ranking and similar means to obtain the optimal video for each knowledge point; the data of each knowledge point's optimal video is integrated to obtain the configuration information of that optimal video, this configuration information is taken as the optimal configuration information, and teaching is configured according to it, so that a better teaching effect can be obtained.
  • the present invention proposes a course optimization method, including the following steps:
  • collecting lecture information, where the lecture information includes lecture videos;
  • Performing knowledge point recognition on the lecture video includes: using at least one recognition method for knowledge point recognition; when two or more recognition methods are used, each recognition method has a different weight.
  • performing knowledge point recognition on a lecture video includes: using at least one of OCR text recognition, video scene recognition, and voice recognition to perform knowledge point recognition.
  • the knowledge point information includes: the knowledge point, the video start time of the knowledge point, the video end time of the knowledge point, and the confidence level.
  • the teaching information also includes real-time usage information.
  • Before knowledge point recognition is performed on the teaching video, frames of the teaching video are sampled or a video speech sequence is extracted; the timing of this sampling or extraction is determined according to the real-time usage information.
  • segmenting the teaching video according to the knowledge point information includes:
  • the teaching video is segmented according to the segmentation information of knowledge points.
  • Making a segmentation decision on the knowledge point information to obtain knowledge point segmentation information includes:
  • At least two sets of knowledge point information corresponding to the teaching video, one obtained through each recognition method, are clustered according to knowledge points, so that each knowledge point corresponds to two or more pieces of knowledge point information;
  • a credibility score is calculated for each of the two or more pieces of knowledge point information of each knowledge point;
  • the two or more credibility scores calculated for each knowledge point are sorted, and the knowledge point information with the highest score is the segmentation information of that knowledge point; and
  • the segmentation information of each knowledge point forms a group of knowledge point segmentation information corresponding to the teaching video.
  • the method further includes: performing a time-axis check on the group of knowledge point segmentation information.
  • Making a segmentation decision on the knowledge point information to obtain knowledge point segmentation information includes: when only one recognition method is used for knowledge point recognition, there is no need to make a segmentation decision, and the knowledge point information itself is the knowledge point segmentation information used to segment the lecture video.
  • the structured information includes: video ID, knowledge points, video duration, number of courseware pages, and credible scores.
  • evaluating the segmented videos to obtain the optimal video for each knowledge point includes: providing a video clip access platform for users to access; the users accessing a video search and recommendation system to search for relevant knowledge points; the users rating the videos; and integrating the users' ratings to obtain the optimal video for each knowledge point.
  • the present invention provides a course optimization device, including:
  • the teaching information collection unit is used to collect the teaching information, where the teaching information includes the teaching video;
  • the knowledge point recognition unit is used to identify the knowledge points of the lecture video and obtain knowledge point information
  • the video segmentation unit is used to segment the teaching video according to the knowledge point information
  • the evaluation unit is used to evaluate the segmented video and obtain the best video for each knowledge point;
  • the optimization unit is used to make courseware based on the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
  • the present invention provides a course optimization system, including a memory and a processor, and the memory stores instructions; the processor is used to execute the following steps according to the instructions stored in the memory:
  • collecting lecture information, where the lecture information includes lecture videos;
  • the present invention provides a method and system for optimizing courses.
  • The teaching videos live-broadcast by multiple teachers are segmented according to knowledge points, the multiple segmented videos are classified according to knowledge points, and user feedback is introduced through user ratings, click-count ranking and similar means.
  • The optimal video for each knowledge point is then obtained, the data of each knowledge point's optimal video is integrated to obtain the configuration information of that optimal video, this configuration information is taken as the optimal configuration information, and teaching is configured according to it, so that a better teaching effect can be obtained.
  • Figure 1 is a flowchart of the course optimization method of the present invention
  • Figure 2 shows the data information collected by the teaching information collection unit
  • Figure 3 is a specific way of recognizing text content in a video through OCR text recognition
  • Figure 4 shows a specific way of recognizing lecture videos through video scene recognition
  • Figure 5 shows a specific way of recognizing lecture videos through voice recognition
  • Fig. 6 shows a flow chart of combining the three recognition methods to realize the segmentation of knowledge points in lecture videos
  • Figure 7 is a process of determining the optimal video for each knowledge point for multiple video clips of the same knowledge point
  • FIG. 8 shows a flowchart of the processing of step S300 in FIG. 1;
  • FIG. 9 shows a flowchart of the processing of step S310 in FIG. 8;
  • Fig. 10 shows a flowchart of the processing of step S400 in Fig. 1.
  • the system architecture includes a server and multiple clients.
  • the multiple clients communicate with the server.
  • the multiple clients may be any terminal devices. It may include any terminal device such as a mobile phone, a tablet computer, a notebook computer, a PC, a PDA (Personal Digital Assistant, that is, a personal digital assistant), a car computer, etc., which is not specifically limited here.
  • the operating system of the terminal device may be a Windows series operating system, a Unix-type operating system, a Linux-type operating system, a Mac operating system, an ANDROID-type operating system, etc., and no specific limitation is made here.
  • the main teacher teaches in a remote way through live broadcast.
  • the tutor cooperates with the main teacher's live teaching to realize the teaching and guidance of the students in the classroom.
  • the main lecturer teaches through the teaching system, and the tutors also conduct teaching guidance through the teaching system.
  • the teaching system can be, for example, the future blackboard system developed by Good Future.
  • the main lecturer teaches according to the courseware during the live broadcast.
  • the courseware is pre-made according to the configuration information.
  • the configuration information is configured by the main lecturer according to the class time, the number of teaching knowledge points, and the difficulty of each knowledge point.
  • the configuration information includes the number of courseware pages and the estimated teaching duration of each knowledge point, the interactive activity duration, the explanation duration, the rest duration, etc. As to whether the configuration information of the courseware is appropriate, for example whether the number of courseware pages and the estimated teaching time configured for a certain knowledge point are sufficient to explain that knowledge point clearly, or whether the configured page count is excessive or the estimated duration overly long, the main lecturer/tutor may have only a subjective feeling.
  • the teaching time allocated for a certain knowledge point is not enough and the students cannot fully understand it.
  • the main lecturer/tutor cannot quantify the difference between the configuration information and the actual teaching needs; for example, it is impossible to state directly how much longer or shorter the estimated teaching time configured for a certain knowledge point is than the teaching time actually needed, i.e., the main lecturer/tutor cannot directly determine the optimal configuration information of the courseware.
  • the present invention provides a course optimization method and system.
  • the videos of teachers' lectures are segmented according to knowledge points, the multiple segmented videos are classified according to knowledge points, and user evaluation is introduced through user ratings, click weighting and similar means; the optimal video of each class of knowledge point is then obtained, the data of each knowledge point's optimal video is integrated to obtain the optimal configuration information of each class of knowledge point, and courseware is made according to this optimal configuration information, so that a better teaching effect can be obtained.
  • Fig. 1 is a course optimization method provided by an embodiment of the present invention, which can be executed by a processor. The method includes the following steps:
  • S100 Collect lecture information, where the lecture information includes lecture videos;
  • the courseware is made according to the structured information corresponding to the optimal video of each knowledge point to obtain the optimized courseware.
  • steps S100-S500 of the course optimization method will be described in detail below.
  • S100 Collect teaching information to obtain courseware configuration information, real-time usage information, and teaching video information.
  • the invention collects the teaching information through the teaching information collection unit.
  • the teaching information includes the configuration information used by the teacher when making the courseware before the lecture, the real-time use information of the lecturer/tutor during the lecture during the lecture, and the video information of the live broadcast of the course.
  • Figure 2 shows the data information collected by the teaching information collection unit.
  • Before teaching, the main lecturer makes the courseware in advance according to the configuration information.
  • the courseware is pre-made according to the configuration information, which is configured by the lecturer according to the class time, the number of knowledge points in the lecture, and the difficulty of each knowledge point.
  • the configuration information includes the number of pages of the courseware for each knowledge point and the expected teaching time, interactive activities and expected time, explanation activities and expected time, break time between classes, and get out of class time.
  • the teaching information collection unit collects the configuration information and stores it in the corresponding database for subsequent use.
  • the main lecturer/tutor will turn the page of the courseware or initiate an interactive operation as the lecture progresses.
  • the event-tracking data of these page-turning or interactive operations reflects the real-time usage information of the current lecture. Therefore, by collecting the main lecturer's/tutor's event-tracking data during the lecture, the real-time usage information of the lecture can be collected.
  • the real-time usage information can be recorded as: the second page of the first knowledge point, 13 minutes and 50 seconds; or the break between classes, 20 minutes and 15 seconds; or the second page of multiple choice questions, 13 minutes and 10 seconds.
  • The times above are given as elapsed time from the start of class.
  • Beijing time (or the time of another time zone) may of course also be used; in that case an event record can be, for example: second page of the first knowledge point, 16:13:50.
  • the whole process of the lecture by the lecturer is collected through the camera in the live room.
  • the server collects the real-time data of the teacher's lecture and pushes it to the tutoring classrooms across the country, and saves the full amount of video and the collected video on the server.
  • the information is used for subsequent video segmentation and classification.
  • the teaching information collection unit also collects live video of the whole process of explanation, and can be stored in, for example, a large object file storage system.
  • S200 Perform knowledge point recognition on the lecture video to obtain knowledge point information.
  • the teaching video is segmented according to knowledge points, that is, each teaching video is divided into several video clips according to the knowledge points.
  • the recognition can, for example, recognize text information in the video through OCR text recognition, and/or detect different scenes in the video through video scene recognition, and/or recognize the voice content of the video through voice recognition .
  • the knowledge point information in the teaching video can be obtained, so that the teaching video can be divided into video segments with knowledge points as the unit according to the knowledge point information.
  • The timing of the sampling is preferably based on the real-time usage information gathered during teaching. That is, frames of the teaching video are sampled near the time points at which the teacher's courseware was turned to a new page, since a page turn at such a time point may indicate that the knowledge point changed or that the lesson switched between a knowledge point and another activity.
  • OCR text recognition of knowledge points mainly handles three cases: (1) the same courseware page of the same knowledge point: the text recognition results are classified as the same knowledge point; (2) different courseware pages of the same knowledge point: after text correlation analysis they are classified as the same knowledge point; (3) different knowledge points, or a knowledge point versus an interaction or the end of class: these are classified as different knowledge points. (A minimal code sketch of this frame-sampling and text-correlation step follows the OCR notes below.)
  • the knowledge point information includes the knowledge point, the video start time of the knowledge point, and the video end time of the knowledge point.
  • each knowledge point information may also include a confidence level.
  • the OCR character recognition may also adopt other feasible methods.
  • the OCR character recognition technology belongs to the prior art in the field and is not specifically limited here.
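A minimal sketch of the frame-sampling and text-correlation idea described above, assuming page-turn timestamps are available from the real-time usage information. The `ocr_text` helper is a placeholder for whatever OCR engine is actually used; frame grabbing uses OpenCV, and `SequenceMatcher` stands in for the patent's unspecified text correlation analysis.

```python
from difflib import SequenceMatcher

import cv2  # OpenCV, used only to grab frames at given timestamps


def ocr_text(frame) -> str:
    """Placeholder for a real OCR engine; returns the text visible in the frame."""
    raise NotImplementedError


def sample_frames(video_path, page_turn_secs):
    """Yield (timestamp, frame) pairs near each reported page-turn time."""
    cap = cv2.VideoCapture(video_path)
    for t in page_turn_secs:
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000)
        ok, frame = cap.read()
        if ok:
            yield t, frame
    cap.release()


def segment_by_ocr(video_path, page_turn_secs, min_similarity=0.6):
    """Return (start, end, text) intervals; adjacent frames whose OCR text is
    highly correlated are treated as the same knowledge point."""
    intervals = []
    for t, frame in sample_frames(video_path, page_turn_secs):
        text = ocr_text(frame)
        if intervals and SequenceMatcher(None, intervals[-1][2], text).ratio() >= min_similarity:
            start, _, kept_text = intervals[-1]
            intervals[-1] = (start, t, kept_text)      # same knowledge point: extend it
        else:
            intervals.append((t, t, text))             # a new knowledge point (or activity)
    return intervals
```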
  • The timing of the sampling is preferably based on the real-time usage information gathered during teaching.
  • That is, frames of the teaching video are sampled near the time points at which the teacher turned a courseware page; a page turn at such a time point may indicate that the knowledge point changed or that the lesson switched between a knowledge point and another activity.
  • the scene information includes the scene, the video start time of the scene, and the video end time of the scene.
  • each scene information may also include a confidence level.
  • the identified different scenes actually correspond to knowledge points or interactive activities, so knowledge point recognition can also be realized through video scene recognition.
  • the video scene recognition is performed in a CV manner.
  • other feasible methods may also be used.
  • the video scene recognition technology belongs to the existing technology in the field and is not specifically limited here.
  • The timing of extracting the video speech sequence is preferably based on the real-time usage information gathered during teaching; that is, the speech sequence is extracted near the time points at which the teacher turned a courseware page.
  • A page turn at such a time point may indicate that the knowledge point changed or that the lesson switched between a knowledge point and another activity.
  • Speech recognition can therefore be used to determine the audio clip corresponding to each knowledge point.
  • Repeating the correlation analysis on the recognized text of all extracted video speech sequences identifies all the knowledge point information in the teaching video; sorting all the recognized knowledge points by lecture time then determines the video time interval corresponding to each knowledge point.
  • the knowledge point information includes the knowledge point, the video start time of the knowledge point, and the video end time of the knowledge point.
  • each knowledge point information may also include a confidence level.
  • the speech recognition can be performed by LSTM technology, of course, other feasible methods can also be used.
  • the speech recognition technology belongs to the existing technology in the field and is not specifically limited here.
  • OCR text recognition, video scene recognition or voice recognition can recognize the knowledge point information in the teaching video.
  • recognition methods for the knowledge points in the teaching videos are not limited to the above three. There are other recognition methods in the field, as long as the knowledge points in the teaching videos can be recognized.
  • each recognition method has its own advantages and disadvantages.
  • a combination of two or more recognition methods can also be selected for recognition.
  • a combination of two or three of the above three identification methods is used, and the combination of this embodiment does not constitute a limitation to the application.
  • This application uses the combination of OCR text recognition, video scene recognition and voice recognition as an example to elaborate.
  • the three recognition methods have their own advantages and disadvantages.
  • the appropriate recognition method should be selected according to the advantages and disadvantages of the various recognition methods. For example, different weights can be set for each recognition method.
  • OCR text recognition and speech recognition can both explicitly obtain the knowledge points being taught, whereas video scene detection is not sensitive to switches between knowledge points and cannot give the corresponding knowledge point; when the methods are combined, the weight of video scene recognition is therefore set lowest.
  • The lecturer's courseware itself condenses the essence of the corresponding knowledge points; key titles or content fields are themselves refinements of the knowledge points and summarize them with the highest accuracy, so when the methods are combined the weight of OCR text recognition is set highest.
  • The speech sequence contains a large amount of speech information, but it is relatively fragmentary, does not distill a central idea and cannot easily summarize the corresponding knowledge point; it does, however, reflect the actual pace of the teacher's explanation, so its weight is set slightly lower than that of OCR text recognition.
  • In one embodiment, the weights of the three recognition methods can be set, for example, as OCR text recognition (60% weight) > speech recognition (30%) > scene detection (10%).
  • S300 Segment the lecture video according to the knowledge point information.
  • each knowledge point segmentation information in the group of knowledge point segmentation information may be obtained by different recognition methods.
  • the first knowledge point segmentation information is obtained by OCR text recognition
  • the second knowledge point segmentation information is obtained by voice recognition.
  • FIG. 8 shows a flowchart of the processing of step S300 in FIG. 1. As shown in Fig. 8, step S300 may include steps S310 and S320.
  • S310 Perform segmentation decision on knowledge point information to obtain knowledge point segmentation information.
  • the video segmentation unit generates three sets of knowledge point information/scene information through three methods of OCR text recognition, voice recognition and video scene recognition.
  • each group of knowledge point information/scenario information represents all the knowledge points in the teaching video, and may include one or more knowledge point information/scenario information.
  • Each knowledge point information/scenario information corresponds to a knowledge point.
  • the knowledge point information/scene information includes the knowledge point, the video start time of the knowledge point, the video end time of the knowledge point, the confidence level, etc.
  • the knowledge point information generated by OCR text recognition and speech recognition includes the knowledge point, the video start time of the knowledge point, the video end time of the knowledge point, and the confidence level;
  • the scene information generated by video scene recognition includes the scene, the video start time of the scene, the video end time of the scene and the corresponding confidence level.
  • Detection method: information obtained
  • OCR text recognition: knowledge point, video start time, video end time, confidence
  • Video scene recognition: scene, video start time, video end time, confidence
  • Speech recognition: knowledge point, video start time, video end time, confidence
  • The knowledge point segmentation information of all knowledge points constitutes a group of knowledge point segmentation information, which is used to segment the teaching video by knowledge point; the resulting video clips and the group of knowledge point segmentation information are sent to the database for storage.
  • FIG. 9 shows a flowchart of the processing of step S310 in FIG. 8. As shown in FIG. 9, step S310 may include steps S3110 to S3140.
  • S3110 Cluster multiple sets of knowledge point information/scene information according to knowledge points.
  • the three sets of knowledge point information/scene information are clustered according to knowledge points, i.e., the three pieces of knowledge point information/scene information generated by the three recognition methods for a given knowledge point are matched up, so that each knowledge point has three pieces of knowledge point information/scene information obtained through different recognition methods.
  • In practice, the clustering can use the video start and end times contained in the knowledge point information/scene information, grouping items whose video start time and video end time are close together into one class.
  • S3120 Calculate credibility scores for multiple knowledge point information/scene information of each knowledge point.
  • the credibility score of each knowledge point information/scenario can be calculated by the confidence level in each knowledge point information/scenario information and the weight of the corresponding recognition method.
  • the weights of the three recognition methods are set as: OCR text recognition (60% weight)>voice recognition (30%)>scene detection (10%).
  • the specific calculation method is:
  • credibility score = recognition method weight * confidence of the knowledge point information/scene information.
  • S3130 Sort the multiple credible scores calculated for each knowledge point, and the knowledge point information/scenario information with the highest score is the segmentation information of the knowledge point.
  • the calculated credibility scores of the three knowledge point information/scene information corresponding to each knowledge point are sorted, and the highest score is the segmentation information of the knowledge point.
  • S3140 Form a group of knowledge point segmentation information according to the segmentation information of each knowledge point.
  • By calculating and ranking credibility scores for the three pieces of knowledge point information/scene information corresponding to every knowledge point in the teaching video, the knowledge point segmentation information of each knowledge point is obtained.
  • Together, these constitute a group of knowledge point segmentation information, which is used to segment the teaching video by knowledge point (see the sketch below).
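A minimal sketch, not the patent's code, of the segmentation decision of steps S3110-S3130: candidates from the three recognizers are clustered by overlapping time ranges, scored as recognition-method weight times confidence, and the highest-scoring candidate per cluster is kept. The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

# Example weights taken from the text: OCR 60% > speech 30% > scene 10%
WEIGHTS = {"ocr": 0.6, "asr": 0.3, "scene": 0.1}


@dataclass
class Candidate:
    label: str         # knowledge point or scene label
    start: float       # video start time, seconds
    end: float         # video end time, seconds
    confidence: float  # confidence reported by the recognizer
    method: str        # "ocr", "asr" or "scene"


def overlaps(a, b):
    return a.start < b.end and b.start < a.end


def cluster_by_time(candidates):
    """S3110: group candidates whose time ranges overlap."""
    clusters = []
    for c in sorted(candidates, key=lambda c: c.start):
        if clusters and any(overlaps(c, other) for other in clusters[-1]):
            clusters[-1].append(c)
        else:
            clusters.append([c])
    return clusters


def credibility(c):
    """S3120: credibility score = recognition-method weight * confidence."""
    return WEIGHTS[c.method] * c.confidence


def decide_segmentation(candidates):
    """S3130/S3140: keep the highest-scoring candidate of each cluster."""
    return [max(cluster, key=credibility) for cluster in cluster_by_time(candidates)]
```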
  • the method may further include: performing time axis verification on the group of knowledge point segmentation information.
  • each piece of knowledge point information in the group of knowledge point segmentation information may be obtained by a different recognition method; for example, the first may come from OCR text recognition and the second from voice recognition. The video timeline of the group of segmentation information therefore needs to be checked to avoid overlapping video segments after segmentation.
  • When video clips are detected to overlap in time, they can be corrected according to the calculation result of step S3120: the credibility scores of the two overlapping knowledge point clips are compared.
  • The knowledge point clip with the higher score is retained, and the overlapping part is deleted from the clip with the lower score, as in the sketch below.
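A short sketch of the time-axis check just described, reusing the `Candidate`/`credibility` helpers from the previous sketch (illustrative names): whenever two chosen segments overlap, the lower-scoring one is trimmed.

```python
def fix_overlaps(segments):
    """Trim the lower-scoring of any two overlapping segments (time-axis check)."""
    segments = sorted(segments, key=lambda s: s.start)
    for prev, cur in zip(segments, segments[1:]):
        if cur.start < prev.end:                      # the two clips share video time
            if credibility(prev) >= credibility(cur):
                cur.start = prev.end                  # cut the overlap from the weaker clip
            else:
                prev.end = cur.start
    return segments
```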
  • S320 Segment the teaching video according to the segmentation information of the knowledge points.
  • a group of knowledge point segmentation information used to segment the teaching video is obtained.
  • the teaching video is divided into multiple video segments by knowledge point according to that group of knowledge point segmentation information, and each video segment is assigned a unique video ID.
  • the video segmentation method is a well-known technology in the art, and is not specifically limited here.
  • each piece of structured information includes: video ID, knowledge point, video duration, number of courseware pages, and credibility score, as shown in the following table:
  • Knowledge point | Clip video ID | Video duration | Courseware pages | Result score
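One possible, purely illustrative shape for the structured record stored with each segmented clip; the field names mirror the list above but are not prescribed by the patent.

```python
from dataclasses import dataclass, asdict


@dataclass
class SegmentRecord:
    video_id: str          # unique ID assigned to the segmented clip
    knowledge_point: str   # knowledge point the clip covers
    duration_sec: float    # clip length in seconds
    courseware_pages: int  # number of courseware pages shown in the clip
    credibility: float     # credibility score from the segmentation decision


record = SegmentRecord("seg-0001", "example knowledge point", 412.0, 3, 0.54)
row = asdict(record)  # plain dict, ready to be written to whatever database is used
```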
  • the segmented video fragments and corresponding structured information are stored in the database for subsequent use.
  • S400 Evaluate the segmented video, and obtain an optimal video for each knowledge point.
  • the teaching video can be divided into multiple video clips according to the knowledge points.
  • By performing the above steps on the teaching videos of multiple teachers of the same subject, the same grade and the same course, multiple video clips can be obtained for each knowledge point.
  • the optimal video for each knowledge point is determined mainly according to methods such as video annotation, viewer scoring, and click weighting.
  • Fig. 10 shows a flowchart of the processing of step S400 in Fig. 1.
  • step S400 may include steps S410 to S440.
  • the optimal video for each knowledge point can be determined through steps S410 to S440 shown in FIG.
  • S410 Provide a video clip access platform for users to access.
  • This application provides a unified video access platform externally by building a video clip search recommendation system for dual teachers or teachers from other business departments to search and access, and integrate the segmented video clips of similar knowledge points for push.
  • S420 The user accesses the video search recommendation system to search for relevant knowledge points.
  • Users can search according to the content they are interested in, and return the ranking results of the user's search related information.
  • the initial state can be recalled and sorted according to the result score obtained in the video detection system, and then the corresponding video can be click-weighted by the user's click behavior.
  • For cold data (data queried for the first time or queried infrequently), recall ranking relies mainly on the credibility score in the structured knowledge point data. After the user clicks the corresponding ranked recall link, the video slice file system is accessed and the clip to be watched is returned.
  • For hot data (data viewed frequently), recall ranking during a query is based on two components: (1) the credibility score in the structured knowledge point data; and (2) the corresponding weight score obtained through periodic integration and recommendation. After the user clicks the corresponding ranked recall link, the video slice file storage system is accessed and the clip to be watched is returned. A sketch of this two-regime ranking follows.
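A hedged sketch of the cold/hot recall ranking described above. The 50/50 blend between the credibility score and the periodically integrated engagement weight is an assumption for illustration, not a value from the patent.

```python
def recall_score(credibility_score, engagement_weight=None):
    """Cold data: rank by credibility alone. Hot data: blend in the integrated weight."""
    if engagement_weight is None:
        return credibility_score
    return 0.5 * credibility_score + 0.5 * engagement_weight


def rank_clips(clips):
    """clips: iterable of (clip_id, credibility_score, engagement_weight_or_None)."""
    return sorted(clips, key=lambda c: recall_score(c[1], c[2]), reverse=True)
```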
  • S430 The user scores the video.
  • After the user has browsed the video of the corresponding knowledge point, the user is offered video rating, tag-based evaluation, and click-count-based weighting of the video segment, so as to increase the result weight of the corresponding segment.
  • the score can range, for example, from 1 star to 5 stars, and corresponding tag options can be provided, such as: the knowledge point was explained very well, the teacher was excellent, or the teacher's explanation was mediocre; the user can click the corresponding tag to give an evaluation.
  • the ranking weight and score of frequently clicked video clips are raised accordingly. The clip's score can also be improved by aggregating and weighting user ratings and by aggregating and weighting the tags users ticked; finally, the weighted score information of the corresponding knowledge point videos is integrated for use by the search and recommendation system.
  • The periodically integrated data, including which videos of each knowledge point are the optimal videos, is then stored in structured form in the corresponding database to complete persistence. A sketch of this periodic aggregation follows.
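An illustrative sketch of the periodic aggregation: star ratings, ticked tags and clicks for each clip are folded into one weighted score, and the top clip per knowledge point is kept as the optimal video. The event field names and the 1.0/0.1/0.01 weights are assumptions.

```python
from collections import defaultdict


def aggregate(events):
    """events: iterable of dicts such as
    {"knowledge_point": ..., "clip_id": ..., "stars": 4, "positive_tags": 2, "clicks": 37}."""
    scores = defaultdict(float)
    for e in events:
        key = (e["knowledge_point"], e["clip_id"])
        scores[key] += e["stars"] / 5.0 + 0.1 * e["positive_tags"] + 0.01 * e["clicks"]
    best = {}
    for (kp, clip_id), score in scores.items():
        if kp not in best or score > best[kp][1]:
            best[kp] = (clip_id, score)
    return best  # knowledge point -> (optimal clip id, weighted score)
```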
  • S500 Make courseware according to the configuration information of the optimal video for each knowledge point to obtain optimized courseware.
  • Through step S400 the optimal video for each knowledge point is obtained, and the structured information corresponding to that optimal video can be queried from the database. The knowledge point, video duration, number of courseware pages and so on of the optimal video can therefore be obtained, i.e., the optimal configuration information corresponding to the knowledge point is obtained, which can guide the teacher, during lesson preparation, to make the courseware for that knowledge point with reference to the optimal configuration information. Optimization of the course can thus be achieved.
  • In addition, one or more of the courseware configuration information, the real-time usage information and the optimal configuration information can be displayed side by side, which is convenient for teachers when optimizing courseware according to the optimal configuration information, thereby achieving the course optimization effect.
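A small sketch of the side-by-side comparison mentioned above, printing the configured values next to the optimal values recovered from the best clip; field names and the example numbers are illustrative.

```python
def compare(knowledge_point, configured, optimal):
    """configured / optimal: dicts with 'duration_sec' and 'pages'."""
    print(f"{knowledge_point}:")
    print(f"  duration: configured {configured['duration_sec']}s, optimal {optimal['duration_sec']}s")
    print(f"  pages:    configured {configured['pages']}, optimal {optimal['pages']}")


compare("example knowledge point",
        {"duration_sec": 600, "pages": 5},
        {"duration_sec": 412, "pages": 3})
```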
  • the lecture videos broadcast by multiple teachers are segmented according to knowledge points, and multiple segmented videos are classified according to knowledge points, and user feedback is introduced through user ratings, clicks sorting, etc. Then obtain the optimal video of each knowledge point, and perform data integration on the optimal video of each knowledge point, obtain the configuration information of the optimal video of each knowledge point, use the configuration information as the optimal configuration information, and follow the The optimal configuration information is used for teaching configuration, so that a better teaching effect can be obtained.
  • the second embodiment of the present invention also provides a course optimization device, including:
  • the teaching information collection unit is used to collect the teaching information, where the teaching information includes the teaching video;
  • the knowledge point recognition unit is used to identify the knowledge points of the lecture video and obtain knowledge point information
  • the video segmentation unit is used to segment the teaching video according to the knowledge point information
  • the evaluation unit is used to evaluate the segmented video and obtain the best video for each knowledge point;
  • the optimization unit is used to make courseware based on the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
  • the third embodiment of the present invention also provides a course optimization system, including a memory and a processor, and the memory stores instructions; the processor is configured to execute the following steps shown in FIG. 1 according to the instructions stored in the memory:
  • S100 Collect lecture information, where the lecture information includes lecture videos;
  • the courseware is made according to the structured information corresponding to the optimal video of each knowledge point to obtain the optimized courseware.
  • The steps of the present invention described above can be implemented by a general-purpose computing device. They can be concentrated on a single computing device or distributed over a network composed of multiple computing devices. In particular, they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A course optimization method, device and system, wherein the method comprises at least the following steps: collecting lecture information, where the lecture information includes a lecture video; performing knowledge point recognition on the lecture video to obtain knowledge point information; segmenting the lecture video according to the knowledge point information; evaluating the segmented videos to obtain an optimal video for each knowledge point; and producing courseware according to the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware. Lecture videos live-broadcast by multiple teachers are segmented by knowledge point, the optimal video for each knowledge point is obtained by introducing user ratings, and teaching is configured according to the configuration information of each knowledge point's optimal video, so that a better teaching effect can be obtained.

Description

Course optimization method, device and system
This application claims priority to Chinese patent application No. 201910594625.8, entitled "Course optimization method, device and system" and filed with the Chinese Patent Office on July 3, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the fields of computing and the mobile Internet, and in particular to a method, device and system for optimizing courses.
Background
In the dual-teacher classroom scenario, a main lecturer teaches remotely via live broadcast, while a tutor in the physical classroom works with the main lecturer to teach and guide the students in the room. The main lecturer teaches through a teaching system and the tutor likewise provides guidance through the teaching system; the teaching system can be, for example, the Future Blackboard system developed by Good Future.
In an actual teaching scenario, the main lecturer needs to prepare lessons before class and must plan during preparation, for example how many courseware pages are needed to explain a given knowledge point clearly, or how many interactive exchanges with the children in the classroom should be initiated during the lesson. For the time consumed by each segment (knowledge points, interactive exchanges and so on), the main lecturer makes a rough estimate while preparing. However, this estimate is often inaccurate, and during teaching the main lecturer has no accurate reference for pacing, such as what should be covered by a given time. In other words, there is no uniform standard for how many courseware pages and how much time should be used to explain a certain knowledge point.
During the live broadcast, the main lecturer teaches according to pre-made courseware in which multiple knowledge points and interactive activities, together with their required teaching durations and page counts, have been configured in advance, and teaches according to those configured durations and page counts. However, whether these knowledge points, interactive activities and their pre-configured durations and page counts are appropriate, for example whether the scheduled duration of a knowledge point is enough for students to understand it or is instead overly long, the main lecturer or tutor may sense only subjectively; for instance, a knowledge point may feel too long, but the teacher cannot quantify this, cannot state how much longer or shorter the scheduled duration should be, and thus cannot directly determine the optimal teaching duration and page count for that knowledge point.
Summary of the Invention
To solve this problem, the present invention provides a course optimization method and system. Lecture videos live-broadcast by multiple teachers are segmented by knowledge point, the resulting segments are classified by knowledge point, and user feedback is introduced through user ratings, click-count ranking and similar signals to obtain the optimal video for each knowledge point. The data of each knowledge point's optimal video is integrated to obtain its configuration information, which is taken as the optimal configuration information; teaching is then configured accordingly, so that a better teaching effect can be achieved.
To solve the above technical problem, the present invention proposes a course optimization method, including the following steps:
collecting lecture information, where the lecture information includes a lecture video;
performing knowledge point recognition on the lecture video to obtain knowledge point information;
segmenting the lecture video according to the knowledge point information;
evaluating the segmented videos to obtain an optimal video for each knowledge point; and
producing courseware according to the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
In one embodiment, performing knowledge point recognition on the lecture video includes: using at least one recognition method; when two or more recognition methods are used, each recognition method is given a different weight.
In one embodiment, performing knowledge point recognition on the lecture video includes: using at least one of OCR text recognition, video scene recognition and speech recognition.
In one embodiment, the knowledge point information includes: the knowledge point, the video start time of the knowledge point, the video end time of the knowledge point, and a confidence level.
In one embodiment, the lecture information further includes real-time usage information; before knowledge point recognition is performed on the lecture video, frames of the lecture video are sampled or a video speech sequence is extracted, and the timing of this sampling or extraction is determined according to the real-time usage information.
In one embodiment, segmenting the lecture video according to the knowledge point information includes:
making a segmentation decision on the knowledge point information to obtain knowledge point segmentation information; and
segmenting the lecture video according to the knowledge point segmentation information.
In one embodiment, when two or more recognition methods are used for knowledge point recognition, making a segmentation decision on the knowledge point information to obtain knowledge point segmentation information includes:
clustering, by knowledge point, at least two sets of knowledge point information corresponding to the lecture video, one set obtained through each recognition method, so that each knowledge point corresponds to two or more pieces of knowledge point information;
calculating a credibility score for each of the two or more pieces of knowledge point information of each knowledge point;
sorting the two or more credibility scores calculated for each knowledge point, the knowledge point information with the highest score being the segmentation information of that knowledge point; and
forming, from the segmentation information of each knowledge point, a group of knowledge point segmentation information corresponding to the lecture video.
In one embodiment, after the group of knowledge point segmentation information corresponding to the lecture video is formed from the segmentation information of each knowledge point, the method further includes: performing a time-axis check on the group of knowledge point segmentation information.
In one embodiment, the credibility score is calculated as: credibility score = recognition method weight * confidence of the knowledge point information.
In one embodiment, making a segmentation decision on the knowledge point information to obtain knowledge point segmentation information includes: when only one recognition method is used, no segmentation decision is needed, and the knowledge point information itself is the knowledge point segmentation information used to segment the lecture video.
In one embodiment, the structured information includes: video ID, knowledge point, video duration, number of courseware pages, and credibility score.
In one embodiment, evaluating the segmented videos to obtain an optimal video for each knowledge point includes:
providing a video clip access platform for users to access;
users accessing a video search and recommendation system to search for relevant knowledge points;
users rating the videos; and
integrating the users' ratings to obtain the optimal video for each knowledge point.
In another aspect, the present invention provides a course optimization device, including:
a lecture information collection unit, configured to collect lecture information, where the lecture information includes a lecture video;
a knowledge point recognition unit, configured to perform knowledge point recognition on the lecture video to obtain knowledge point information;
a video segmentation unit, configured to segment the lecture video according to the knowledge point information;
an evaluation unit, configured to evaluate the segmented videos to obtain an optimal video for each knowledge point; and
an optimization unit, configured to produce courseware according to the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
In another aspect, the present invention provides a course optimization system, including a memory and a processor, the memory storing instructions, and the processor being configured to execute the following steps according to the instructions stored in the memory:
collecting lecture information, where the lecture information includes a lecture video;
performing knowledge point recognition on the lecture video to obtain knowledge point information;
segmenting the lecture video according to the knowledge point information;
evaluating the segmented videos to obtain an optimal video for each knowledge point; and
producing courseware according to the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
The one or more technical solutions provided in the embodiments of this application have at least the following technical effects or advantages:
The present invention provides a course optimization method and system. Lecture videos live-broadcast by multiple teachers are segmented by knowledge point, the resulting segments are classified by knowledge point, and user feedback is introduced through user ratings, click-count ranking and similar signals to obtain the optimal video for each knowledge point. The data of each knowledge point's optimal video is integrated to obtain its configuration information, which is taken as the optimal configuration information; teaching is then configured accordingly, so that a better teaching effect can be achieved.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a flowchart of the course optimization method of the present invention;
Figure 2 shows the data collected by the lecture information collection unit;
Figure 3 shows a specific way of recognizing the text content in a video through OCR text recognition;
Figure 4 shows a specific way of recognizing a lecture video through video scene recognition;
Figure 5 shows a specific way of recognizing a lecture video through speech recognition;
Figure 6 is a flowchart of combining the three recognition methods to segment a lecture video by knowledge point;
Figure 7 shows the process of determining the optimal video for each knowledge point from multiple video clips of the same knowledge point;
Figure 8 is a flowchart of the processing of step S300 in Figure 1;
Figure 9 is a flowchart of the processing of step S310 in Figure 8;
Figure 10 is a flowchart of the processing of step S400 in Figure 1.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of these embodiments without creative effort fall within the protection scope of the present invention.
Before the embodiments are introduced, the system architecture involved is described. It includes a server and multiple clients that communicate with the server. The clients may be any terminal devices, including mobile phones, tablet computers, notebook computers, PCs, PDAs (Personal Digital Assistants), in-vehicle computers and the like, which are not specifically limited here. The operating system of the terminal device may be a Windows series operating system, a Unix-type operating system, a Linux-type operating system, a Mac operating system, an Android-type operating system and the like, which is likewise not specifically limited here.
In the actual dual-teacher classroom scenario, the main lecturer teaches remotely via live broadcast, and a tutor in the physical classroom cooperates with the main lecturer's live teaching to teach and guide the students in the room. The main lecturer teaches through a teaching system, and the tutor likewise provides guidance through the teaching system; the teaching system can be, for example, the Future Blackboard system developed by Good Future.
During the live broadcast, the main lecturer teaches according to courseware that is made in advance according to configuration information. The configuration information is set by the main lecturer according to the class time, the number of knowledge points to be taught, the difficulty of each knowledge point and similar information, and includes the number of courseware pages and the estimated teaching duration of each knowledge point, the interactive activity duration, the explanation duration, the rest duration, and so on. As to whether this configuration is appropriate, for example whether the configured page count and estimated duration are enough to explain a knowledge point clearly, or whether the page count is excessive or the estimated duration overly long, the main lecturer or tutor may sense only subjectively, for instance that the time allotted to a knowledge point is insufficient and the students cannot fully understand it. However, the main lecturer or tutor cannot quantify the gap between the configuration and the actual teaching need, for example cannot state how much longer or shorter the configured duration is than the duration actually required, and thus cannot directly determine the optimal configuration information of the courseware.
To solve this problem, the present invention provides a course optimization method and system. The videos of teachers' lectures are segmented by knowledge point, the resulting segments are classified by knowledge point, and user evaluation is introduced through user ratings, click weighting and similar signals; the optimal video of each class of knowledge point is then obtained, the data of each knowledge point's optimal video is integrated to obtain the optimal configuration information of each class of knowledge point, and courseware is produced according to this optimal configuration information, so that a better teaching effect can be obtained.
Figure 1 shows a course optimization method provided by an embodiment of the present invention, which can be executed by a processor and includes the following steps:
S100: collect lecture information, where the lecture information includes a lecture video;
S200: perform knowledge point recognition on the lecture video to obtain knowledge point information;
S300: segment the lecture video according to the knowledge point information;
S400: evaluate the segmented videos to obtain an optimal video for each knowledge point;
S500: produce courseware according to the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
The implementation of steps S100-S500 of this course optimization method is described in detail below.
S100: collect lecture information to obtain courseware configuration information, real-time usage information and lecture video information.
Before a course can be optimized, the existing lecture information must be obtained. The present invention collects the lecture information through a lecture information collection unit. The lecture information includes the configuration information used by the teacher when making the courseware before class, the real-time usage information generated by the main lecturer/tutor while teaching, and the video of the live course. Figure 2 shows the data collected by the lecture information collection unit.
Before class, the main lecturer makes the courseware in advance according to configuration information, which is set according to the class time, the number of knowledge points to be taught, the difficulty of each knowledge point and similar factors. As shown in Figure 2, the configuration information includes the number of courseware pages and the estimated teaching duration of each knowledge point, the interactive activities and their estimated durations, the explanation activities and their estimated durations, the break duration, the class dismissal time, and so on. The lecture information collection unit collects this configuration information and stores it in the corresponding database for subsequent use.
During actual teaching, the main lecturer/tutor turns the courseware pages or initiates interactions as the lecture progresses, and the event-tracking data of these page-turning and interaction operations reflects the real-time usage information of the current lecture. Therefore, by collecting the main lecturer's/tutor's event-tracking data during the lecture, the real-time usage information can be collected.
The real-time usage information during class can be collected by reporting data from pre-set reporting points configured in the client, as shown in Figure 2 and sketched in the code below. For example, when the main lecturer turns a page, the corresponding operation and its time are reported, so that each page has a corresponding time; when the main lecturer or the tutor initiates an interaction, the action and its time are likewise reported. The collected data is stored in a database for subsequent use.
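An illustrative sketch, under assumed field names and endpoint, of the client-side reporting just described: each page turn or interaction is reported with its context, the elapsed time since the start of class and the wall-clock time, so the server can later align page turns with the video timeline.

```python
import json
import time
import urllib.request


def report_event(event_type, knowledge_point, page, class_start_ts,
                 endpoint="https://example.invalid/report"):    # assumed endpoint
    payload = {
        "event": event_type,                                    # "page_turn" or "interaction"
        "knowledge_point": knowledge_point,
        "page": page,
        "elapsed_sec": round(time.time() - class_start_ts, 1),  # time since class start
        "wall_clock": time.strftime("%H:%M:%S"),                # or Beijing time, as noted below
    }
    req = urllib.request.Request(endpoint,
                                 data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # fire-and-forget report to the collection unit
```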
The real-time usage information can be recorded, for example, as: second page of the first knowledge point, 13 min 50 s; or class break, 20 min 15 s; or second page of multiple-choice questions, 13 min 10 s. The times here are given as elapsed time from the start of class; Beijing time (or another time zone) may of course also be used, in which case an event record could be: second page of the first knowledge point, 16:13:50.
The above is only an example of how the real-time usage information may be obtained and does not limit this application; other existing techniques may be used as long as the function can be achieved.
For the video of the live course, the whole lecture by the main lecturer is captured by the camera in the live-broadcast room. The server collects the real-time data of the teacher's lecture, pushes it to tutoring classrooms across the country, and saves the full video on the server side; the collected video is used for subsequent segmentation and classification. As shown in Figure 2, the lecture information collection unit also collects the video of the entire live explanation, which can be stored, for example, in a large-object file storage system.
S200: perform knowledge point recognition on the lecture video to obtain knowledge point information.
After the lecture information has been collected, the lecture video is to be segmented by knowledge point, that is, each lecture video is to be divided into several video clips according to the knowledge points.
To segment the lecture video by knowledge point, each knowledge point must first be recognized in the video. This recognition may, for example, identify the text in the video through OCR text recognition, and/or detect different scenes in the video through video scene recognition, and/or recognize the speech content of the video through speech recognition. After knowledge point recognition, the knowledge point information in the lecture video is obtained, so that the video can be divided into clips with the knowledge point as the unit.
Before the lecture video is recognized, its frames must be sampled. The timing of the sampling is preferably based on the real-time usage information gathered during teaching: frames are sampled near the time points at which the teacher turned a courseware page, since a page turn at such a time point may indicate that the knowledge point changed or that the lesson switched between a knowledge point and another activity.
A specific embodiment of recognizing the text content in the video through OCR text recognition is described first. As shown in Figure 3, frames of the lecture video are sampled and OCR text recognition is performed on the sampled frames; correlation analysis is then performed on the text recognized from two adjacent frames, and the result determines whether the knowledge points in the two adjacent frames belong to the same knowledge point. OCR text recognition of knowledge points mainly handles three cases: (1) the same courseware page of the same knowledge point: the text recognition results are classified as the same knowledge point; (2) different courseware pages of the same knowledge point: after text correlation analysis they are classified as the same knowledge point; (3) different knowledge points, or a knowledge point versus an interaction or the end of class: these are classified as different knowledge points.
By repeating the correlation analysis on the text of adjacent frames among all sampled frames, all the knowledge point information in the lecture video can be recognized; sorting the recognized knowledge point information by lecture time then determines the video time interval corresponding to each knowledge point. The knowledge point information includes the knowledge point, its video start time and its video end time; optionally, each piece of knowledge point information may also include a confidence level.
Optionally, the OCR text recognition may also be carried out in other feasible ways; OCR technology is prior art in this field and is not specifically limited here.
When the lecture video is recognized through video scene recognition, as shown in Figure 4, frames of the lecture video are first sampled and scene recognition is performed on adjacent frames. The timing of the sampling is again preferably based on the real-time usage information: frames are sampled near page-turn time points, which may mark a change of knowledge point or a switch between a knowledge point and another activity.
From the scene recognition results, it can be determined whether two adjacent courseware pages are similar pages of the same type or pages of different types. Scene recognition of knowledge points mainly handles three cases: (1) the same courseware page obviously belongs to the same scene; (2) different courseware pages of the same knowledge point are recognized as the same scene because the teaching situation is similar; (3) different knowledge points on different courseware pages are classified as different scenes when a scene switch is recognized; for example, when the teacher opens the screen curtain or during a class break, the scene clearly differs from that of teaching a knowledge point, so it is recognized as a different scene.
By repeating scene recognition on adjacent frames among all sampled frames, all the scene information in the lecture video can be recognized, yielding the boundaries between different scenes; mapped onto the video timeline, this gives a temporal division of the lecture video by scene. The scene information includes the scene, its video start time and its video end time; optionally, each piece of scene information may also include a confidence level.
Obviously, the recognized scenes actually correspond to knowledge points or interactive activities, so knowledge point recognition can also be achieved through video scene recognition.
Optionally, the video scene recognition is performed using computer-vision (CV) techniques; other feasible methods may of course also be used. Video scene recognition is prior art in this field and is not specifically limited here.
When the lecture video is recognized through speech recognition, as shown in Figure 5, a video speech sequence is first extracted from the lecture video, the speech sequence is then recognized to output the corresponding text, and correlation analysis is performed on that text.
The timing of extracting the video speech sequence is preferably based on the real-time usage information: the speech sequence is sampled near page-turn time points, which may mark a change of knowledge point or a switch between a knowledge point and another activity.
Because the speech of the same knowledge point yields similar recognized text, correlation analysis of the text context can determine that a given period of time covers the same knowledge point; speech recognition can therefore determine the audio clip corresponding to each knowledge point. Repeating the correlation analysis on the recognized text of all extracted speech sequences identifies all the knowledge point information in the lecture video, and sorting the recognized knowledge point information by lecture time determines the video time interval corresponding to each knowledge point. The knowledge point information includes the knowledge point, its video start time and its video end time; optionally, each piece may also include a confidence level.
In addition, when the teacher switches knowledge points or the class takes a break, there is usually a long pause in speech; by detecting long pauses in the audio during speech recognition, knowledge point switches or interactive activities can also be identified. Different knowledge points can thus also be distinguished in this way (see the sketch below).
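A hedged sketch of using long speech pauses as an additional switching cue: short-term RMS energy is computed over mono PCM audio and stretches that stay below a threshold for several seconds are reported. The thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np


def long_pauses(samples, sr=16000, frame_sec=0.05, energy_thresh=0.01, min_pause_sec=3.0):
    """Return (start_sec, end_sec) spans where short-term RMS energy stays low."""
    frame = int(sr * frame_sec)
    n = len(samples) // frame
    rms = np.sqrt(np.mean(samples[: n * frame].reshape(n, frame) ** 2, axis=1))
    silent = rms < energy_thresh
    pauses, start = [], None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i
        elif not is_silent and start is not None:
            if (i - start) * frame_sec >= min_pause_sec:
                pauses.append((start * frame_sec, i * frame_sec))
            start = None
    if start is not None and (n - start) * frame_sec >= min_pause_sec:
        pauses.append((start * frame_sec, n * frame_sec))
    return pauses
```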
Optionally, the speech recognition can be performed with LSTM-based techniques; other feasible methods may of course also be used. Speech recognition is prior art in this field and is not specifically limited here.
It can be seen that OCR text recognition, video scene recognition and speech recognition can each recognize the knowledge point information in a lecture video. Of course, those skilled in the art will understand that the ways of recognizing knowledge points in lecture videos are not limited to these three; other recognition methods exist in the field, as long as they can recognize the knowledge points in the videos.
In actual use, each recognition method has its own advantages and disadvantages. To recognize and classify knowledge points more accurately, two or more recognition methods can also be combined, for example any pair or all three of the above methods; the combinations in this embodiment do not limit this application.
This application elaborates on the combination of OCR text recognition, video scene recognition and speech recognition as an example.
In this embodiment, to combine the three recognition methods, their advantages and disadvantages are first compared, as shown in Table 1.
(Table 1, which compares the advantages and disadvantages of the three recognition methods, is provided as an image in the original publication.)
Table 1: Comparison of the advantages and disadvantages of the three recognition methods
It can be seen that each of the three recognition methods has its own strengths and weaknesses, so a suitable method must be chosen according to where those strengths lie; for example, a different weight can be set for each recognition method. OCR text recognition and speech recognition can both explicitly obtain the knowledge points the teacher is explaining, whereas video scene detection is not sensitive to switches between knowledge points and cannot give the corresponding knowledge point, so when the methods are combined the weight of video scene recognition is set lowest. The main lecturer's courseware itself condenses the essence of the corresponding knowledge points, and key titles or content fields are themselves refinements of the knowledge points, summarizing them with the highest accuracy, so the weight of OCR text recognition is set highest. The speech sequence contains a large amount of speech information, but it is relatively fragmentary, does not distill a central idea and cannot easily summarize the corresponding knowledge point, although it does reflect the actual pace of the teacher's explanation, so the weight of speech recognition is set slightly lower than that of OCR text recognition. In one embodiment, the weights of the three recognition methods can be set, for example, as: OCR text recognition (60% weight) > speech recognition (30%) > scene detection (10%).
S300: segment the lecture video according to the knowledge point information.
Each recognition method applied to the lecture video produces a group of knowledge point information corresponding to that video. When two or more recognition methods are combined, the two or more resulting groups must be used to determine the final knowledge point information for segmenting the video, i.e., a segmentation decision is made over the groups to determine one group of knowledge point segmentation information for segmenting the lecture video. Naturally, the individual items in this group may come from different recognition methods; for example, the first item may come from OCR text recognition and the second from speech recognition.
In addition, those skilled in the art will understand that when only one recognition method is used, no segmentation decision is needed on the single group of knowledge point information it produces; that group is itself the group of knowledge point segmentation information used to segment the lecture video.
Figure 8 shows a flowchart of the processing of step S300 in Figure 1. As shown in Figure 8, step S300 may include steps S310 and S320.
S310: make a segmentation decision on the knowledge point information to obtain knowledge point segmentation information.
The following focuses on the segmentation decision when two or more recognition methods are combined, again taking the combination of OCR text recognition, speech recognition and video scene recognition as an example. Figure 6 shows a flowchart of combining the three methods to segment a lecture video by knowledge point. The video segmentation unit runs the lecture video through OCR text recognition, speech recognition and video scene recognition to generate three groups of knowledge point information/scene information. Each group represents all the knowledge points in the lecture video and may include one or more items; each item corresponds to one knowledge point and includes the knowledge point, its video start time, its video end time, a confidence level, and so on.
Before the segmentation decision, the composition of the group of knowledge point information/scene information produced by each recognition method is introduced. As shown in Table 2, the knowledge point information produced by OCR text recognition and speech recognition includes the knowledge point, its video start time, its video end time and a confidence level; the scene information produced by video scene recognition includes the scene, its video start time, its video end time and the corresponding confidence level.
Detection method: information obtained
OCR text recognition: knowledge point, video start time, video end time, confidence
Video scene recognition: scene, video start time, video end time, confidence
Speech recognition: knowledge point, video start time, video end time, confidence
Table 2: Information produced by the three recognition methods
After the three groups of knowledge point information/scene information have been generated by the three recognition methods, each knowledge point has three items obtained through different recognition methods. By combining the confidence level in each item with the weight of the corresponding recognition method introduced above, a credibility score is obtained for each item; the three items corresponding to each knowledge point are sorted by credibility score, and the highest-scoring one is that knowledge point's segmentation information. Computing and sorting credibility scores for the three items of every knowledge point in the lecture video yields the segmentation information of every knowledge point. Together, these items constitute a group of knowledge point segmentation information, which is used to segment the lecture video by knowledge point; the resulting video clips and the group of segmentation information are then sent to the database for storage.
Figure 9 shows a flowchart of the processing of step S310 in Figure 8. As shown in Figure 9, step S310 may include steps S3110 to S3140.
S3110: cluster the groups of knowledge point information/scene information by knowledge point.
In this embodiment, the three groups of knowledge point information/scene information are clustered by knowledge point, i.e., the three items produced by the three recognition methods for a given knowledge point are matched up, so that each knowledge point has three items obtained through different recognition methods. In practice, the clustering can use the video start and end times contained in each item, grouping items with similar start and end times into one class.
S3120: calculate a credibility score for each of the items of each knowledge point.
In this embodiment, the credibility score of each item can be calculated from the confidence level in the item and the weight of the corresponding recognition method. As described above, in one embodiment the weights of the three methods are set as: OCR text recognition (60% weight) > speech recognition (30%) > scene detection (10%). The specific calculation is:
credibility score = recognition method weight * confidence of the knowledge point information/scene information.
S3130: sort the credibility scores calculated for each knowledge point; the item with the highest score is that knowledge point's segmentation information.
In this embodiment, the credibility scores of the three items corresponding to each knowledge point are sorted, and the highest-scoring item is the segmentation information of that knowledge point.
S3140: form a group of knowledge point segmentation information from the segmentation information of each knowledge point.
In this embodiment, by repeating steps S3120 and S3130 to compute and sort credibility scores for the three items of every knowledge point in the lecture video, the segmentation information of every knowledge point is obtained. Together, these constitute a group of knowledge point segmentation information used to segment the lecture video by knowledge point.
It can be seen that executing steps S3110-S3140 completes the segmentation decision over two or more groups of knowledge point information and determines the group of knowledge point segmentation information used to segment the lecture video; this group is used in the following step to segment the video by knowledge point.
Optionally, after step S3140, the method may further include: performing a time-axis check on the group of knowledge point segmentation information.
Because the individual items in the group may come from different recognition methods (for example, the first from OCR text recognition and the second from speech recognition), the video timeline within the group must be checked to avoid overlapping clips after segmentation. When overlapping clips are detected, they can be corrected using the results of step S3120: the credibility scores of the two overlapping knowledge point clips are compared, the higher-scoring clip is kept, and the overlapping portion is removed from the lower-scoring clip. Alternatively, the correction can use the real-time usage information from teaching: since it records the page-turn time points, which may be knowledge point switch times, the overlapping clips can be corrected according to that data.
S320: segment the lecture video according to the knowledge point segmentation information.
Referring to Figure 6, the segmentation decision of step S310 shown in Figure 8 yields a group of knowledge point segmentation information for segmenting the lecture video. In this step, the lecture video is divided by knowledge point into multiple video clips according to that group, and each clip can be assigned a unique video ID. Video segmentation itself is a well-known technique in the art and is not specifically limited here.
Combining the configuration information used when making the courseware and the real-time usage information from teaching, both obtained in step S100 as described above, the knowledge point and the number of courseware pages corresponding to each clip can be obtained. Therefore, a piece of structured information is generated for every segmented clip, and each piece of structured information includes: video ID, knowledge point, video duration, number of courseware pages, and credibility score, as shown in the table below.
Knowledge point | Clip video ID | Video duration | Courseware pages | Result score
Table 3: Composition of the structured information
After the lecture video has been segmented by knowledge point, the segmented clips and the corresponding structured information are stored in the database for subsequent use.
S400: evaluate the segmented videos to obtain an optimal video for each knowledge point.
Through steps S100-S300 above, a lecture video can be divided into multiple clips by knowledge point; by applying these steps to the lecture videos of several teachers of the same subject, grade and course, multiple clips can be obtained for each knowledge point.
We need to know whether the structured information presented in each knowledge point clip is reasonable, so the clips of each knowledge point must be evaluated to determine which clip's structured information is more reasonable for teachers or students, i.e., to determine the optimal clip for each knowledge point. This application introduces a user evaluation mechanism to evaluate the multiple clips of the same knowledge point.
In the embodiments of this application, the optimal video for each knowledge point is determined mainly through video tagging, viewer ratings, click weighting and similar means.
Figure 10 shows a flowchart of the processing of step S400 in Figure 1. As shown in Figure 10, step S400 may include steps S410 to S440.
As shown in Figure 7, for the multiple clips of the same knowledge point, the optimal video for each knowledge point can be determined through steps S410 to S440 shown in Figure 10.
S410: provide a video clip access platform for users to access.
This application builds a video clip search and recommendation system that provides a unified external video access platform for teachers in the dual-teacher classrooms or in other business departments to search and access, and integrates the segmented clips of the same knowledge point for recommendation.
S420: users access the video search and recommendation system to search for relevant knowledge points.
Users can search for content they are interested in, and a ranked list of results related to the search is returned. In the initial state, recall and ranking can be based on the result scores obtained in the video detection system; subsequently, the corresponding videos are click-weighted according to users' click behaviour.
For cold data (data queried for the first time or queried infrequently), recall ranking relies mainly on the credibility score in the structured knowledge point data; after the user clicks the corresponding ranked recall link, the video slice file system is accessed and the clip to be watched is returned.
For hot data (data viewed frequently), recall ranking during a query is based on two components: (1) the credibility score in the structured knowledge point data; and (2) the corresponding weight score obtained through periodic integration and recommendation. After the user clicks the corresponding ranked recall link, the video slice file storage system is accessed and the clip to be watched is returned.
S430: users rate the videos.
After a user has browsed the video of the corresponding knowledge point, the user is offered video rating, tag-based evaluation, and click-count-based weighting of the clip, so as to increase the result weight of the corresponding clip.
After watching a video, the user can rate it, for example from 1 to 5 stars, and corresponding tag options can be provided, such as "the knowledge point was explained very well", "the teacher was excellent", or "the teacher's explanation was mediocre"; the user can click the corresponding tag to give an evaluation.
S440: integrate the users' ratings to obtain the optimal video for each knowledge point.
By periodically integrating which clips of each knowledge point are clicked frequently and applying the corresponding click weighting, the ranking weight and score of those clips are raised. The clip's score can also be improved by aggregating and weighting user ratings, and by aggregating and weighting the tags users ticked; finally, the weighted score information of the corresponding knowledge point videos is integrated for use by the search and recommendation system.
The periodically integrated data, including which videos of each knowledge point are the optimal videos, is then stored in structured form in the corresponding database to complete persistence.
S500: produce courseware according to the configuration information of the optimal video of each knowledge point to obtain optimized courseware.
Through step S400 the optimal video for each knowledge point is obtained, and the structured information corresponding to that video can be queried from the database. The knowledge point, video duration, number of courseware pages and so on of the optimal video can therefore be obtained, i.e., the optimal configuration information for that knowledge point is obtained, which can guide the teacher, during lesson preparation, to make the courseware for that knowledge point with reference to the optimal configuration information. Optimization of the course can thus be achieved.
In addition, in this application one or more of the courseware configuration information, the real-time usage information and the optimal configuration information can be displayed side by side, which is convenient for the teacher when optimizing the courseware according to the optimal configuration information, thereby achieving the course optimization effect.
Based on the course optimization method provided by this application, the lecture videos live-broadcast by multiple teachers are segmented by knowledge point, the resulting segments are classified by knowledge point, and user feedback is introduced through user ratings, click-count ranking and similar signals to obtain the optimal video for each knowledge point; the data of each knowledge point's optimal video is integrated to obtain its configuration information, which is taken as the optimal configuration information, and teaching is configured accordingly, so that a better teaching effect can be obtained.
A second embodiment of the present invention further provides a course optimization device, including:
a lecture information collection unit, configured to collect lecture information, where the lecture information includes a lecture video;
a knowledge point recognition unit, configured to perform knowledge point recognition on the lecture video to obtain knowledge point information;
a video segmentation unit, configured to segment the lecture video according to the knowledge point information;
an evaluation unit, configured to evaluate the segmented videos to obtain an optimal video for each knowledge point; and
an optimization unit, configured to produce courseware according to the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
A third embodiment of the present invention further provides a course optimization system, including a memory and a processor, the memory storing instructions, and the processor being configured to execute, according to the instructions stored in the memory, the following steps shown in Figure 1:
S100: collect lecture information, where the lecture information includes a lecture video;
S200: perform knowledge point recognition on the lecture video to obtain knowledge point information;
S300: segment the lecture video according to the knowledge point information;
S400: evaluate the segmented videos to obtain an optimal video for each knowledge point; and
S500: produce courseware according to the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
Obviously, those skilled in the art should understand that the above steps of the present invention can be implemented by a general-purpose computing device. They can be concentrated on a single computing device or distributed over a network of multiple computing devices; optionally, they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. The present invention is thus not limited to any specific combination of hardware and software.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make further changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (14)

  1. A course optimization method, comprising at least the following steps:
    collecting teaching information, wherein the teaching information includes a teaching video;
    performing knowledge point recognition on the teaching video to obtain knowledge point information;
    segmenting the teaching video according to the knowledge point information;
    evaluating the segmented videos to obtain an optimal video for each knowledge point; and
    producing courseware according to structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
  2. The method according to claim 1, wherein performing knowledge point recognition on the teaching video comprises:
    performing knowledge point recognition using at least one recognition method, wherein when two or more recognition methods are used, each recognition method has a different weight.
  3. The method according to claim 1 or 2, wherein performing knowledge point recognition on the teaching video comprises:
    performing knowledge point recognition using at least one of OCR text recognition, video scene recognition and speech recognition.
  4. The method according to any one of claims 1-3, wherein the knowledge point information includes: a knowledge point, a video start time of the knowledge point, a video end time of the knowledge point, and a confidence.
  5. The method according to any one of claims 1-4, wherein the teaching information further includes real-time usage information, and before performing knowledge point recognition on the teaching video, frames of the teaching video are sampled or a video speech sequence is extracted, the timing of the sampling or of the extraction of the video speech sequence being determined according to the real-time usage information.
  6. The method according to any one of claims 1-5, wherein segmenting the teaching video according to the knowledge point information comprises:
    making a segmentation decision on the knowledge point information to obtain knowledge point segmentation information; and
    segmenting the teaching video according to the knowledge point segmentation information.
  7. The method according to claim 6, wherein, when two or more recognition methods are used for knowledge point recognition, making a segmentation decision on the knowledge point information to obtain knowledge point segmentation information comprises:
    clustering, by knowledge point, the at least two sets of knowledge point information corresponding to the teaching video obtained by the respective recognition methods, each knowledge point corresponding to two or more pieces of knowledge point information;
    calculating a confidence score for each of the two or more pieces of knowledge point information of each knowledge point;
    sorting the two or more confidence scores calculated for each knowledge point, the knowledge point information with the highest score being the segmentation information of that knowledge point; and
    forming, from the segmentation information of each knowledge point, a set of knowledge point segmentation information corresponding to the teaching video.
  8. The method according to claim 7, wherein, after forming, from the segmentation information of each knowledge point, the set of knowledge point segmentation information corresponding to the teaching video, the method further comprises:
    performing a timeline check on the set of knowledge point segmentation information.
  9. The method according to claim 7 or 8, wherein calculating the confidence score is specifically: confidence score = weight of the recognition method * confidence of the knowledge point information.
  10. The method according to claim 6, wherein making a segmentation decision on the knowledge point information to obtain knowledge point segmentation information comprises:
    when only one recognition method is used for knowledge point recognition, making no segmentation decision, the knowledge point information being the knowledge point segmentation information used to segment the teaching video.
  11. The method according to any one of claims 1-10, wherein the structured information includes: a video ID, a knowledge point, a video duration, a courseware page number, and a confidence score.
  12. The method according to any one of claims 1-11, wherein evaluating the segmented videos to obtain the optimal video for each knowledge point comprises:
    providing a video clip access platform for a user to access;
    the user accessing a video search and recommendation system and searching for a relevant knowledge point;
    the user rating the videos; and
    aggregating the user's ratings to obtain the optimal video for each knowledge point.
  13. A course optimization apparatus, comprising:
    a teaching information collection unit configured to collect teaching information, wherein the teaching information includes a teaching video;
    a knowledge point recognition unit configured to perform knowledge point recognition on the teaching video to obtain knowledge point information;
    a video segmentation unit configured to segment the teaching video according to the knowledge point information;
    an evaluation unit configured to evaluate the segmented videos to obtain an optimal video for each knowledge point; and
    an optimization unit configured to produce courseware according to the structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
  14. A course optimization system comprising a memory and a processor, the memory storing instructions, and the processor being configured to perform the following steps according to the instructions stored in the memory:
    collecting teaching information, wherein the teaching information includes a teaching video;
    performing knowledge point recognition on the teaching video to obtain knowledge point information;
    segmenting the teaching video according to the knowledge point information;
    evaluating the segmented videos to obtain an optimal video for each knowledge point; and
    producing courseware according to structured information corresponding to the optimal video of each knowledge point to obtain optimized courseware.
PCT/CN2020/099892 2019-07-03 2020-07-02 一种课程优化方法、装置和系统 WO2021000909A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20835234.4A EP3996070A4 (en) 2019-07-03 2020-07-02 METHOD, APPARATUS AND SYSTEM FOR TEACHING PROGRAM OPTIMIZATION
US17/624,330 US11450221B2 (en) 2019-07-03 2020-07-02 Curriculum optimisation method, apparatus, and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910594625.8 2019-07-03
CN201910594625.8A CN110322738B (zh) 2019-07-03 2019-07-03 一种课程优化方法、装置和系统

Publications (1)

Publication Number Publication Date
WO2021000909A1 true WO2021000909A1 (zh) 2021-01-07

Family

ID=68122457

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099892 WO2021000909A1 (zh) 2019-07-03 2020-07-02 一种课程优化方法、装置和系统

Country Status (4)

Country Link
US (1) US11450221B2 (zh)
EP (1) EP3996070A4 (zh)
CN (1) CN110322738B (zh)
WO (1) WO2021000909A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322738B (zh) * 2019-07-03 2021-06-11 北京易真学思教育科技有限公司 一种课程优化方法、装置和系统
CN111107442B (zh) * 2019-11-25 2022-07-12 北京大米科技有限公司 音视频文件的获取方法、装置、服务器及存储介质
TWI741550B (zh) * 2020-03-31 2021-10-01 國立雲林科技大學 書籤影格的生成方法、自動生成書籤的影音播放裝置及其使用者介面
CN111711834B (zh) * 2020-05-15 2022-08-12 北京大米未来科技有限公司 录播互动课的生成方法、装置、存储介质以及终端
CN111866608B (zh) * 2020-08-05 2022-08-16 北京华盛互联科技有限公司 一种用于教学的视频播放方法、装置和系统
CN112560663B (zh) * 2020-12-11 2024-08-23 南京谦萃智能科技服务有限公司 教学视频打点方法、相关设备及可读存储介质
CN112286943B (zh) * 2020-12-26 2021-04-20 东华理工大学南昌校区 基于时事案例优化的思想政治教案展示方法和系统
CN112735195A (zh) * 2020-12-31 2021-04-30 彭山峻 一种录播类高效学练的自学系统
CN112911326B (zh) * 2021-01-29 2023-04-11 平安科技(深圳)有限公司 弹幕信息处理方法、装置、电子设备和存储介质
CN113704550B (zh) * 2021-07-15 2024-08-13 北京墨闻教育科技有限公司 教学短片生成方法及系统
CN113891026B (zh) * 2021-11-04 2024-01-26 Oook(北京)教育科技有限责任公司 一种录播视频的标记方法、装置、介质和电子设备
CN114979787A (zh) * 2022-05-17 2022-08-30 北京量子之歌科技有限公司 一种直播回放管理方法、装置、设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140377732A1 (en) * 2013-06-21 2014-12-25 Gordon L. Freedman Method and system for providing video pathways within an online course
US9892194B2 (en) * 2014-04-04 2018-02-13 Fujitsu Limited Topic identification in lecture videos
CN104484420A (zh) * 2014-12-17 2015-04-01 天脉聚源(北京)教育科技有限公司 一种用于制作智慧教学系统课件的方法及装置
US20160364115A1 (en) * 2015-06-12 2016-12-15 Scapeflow, Inc. Method, system, and media for collaborative learning
CN105117467A (zh) * 2015-08-28 2015-12-02 上海第九城市教育科技股份有限公司 一种多媒体课件的内容管理方法及系统
US20180293912A1 (en) * 2017-04-11 2018-10-11 Zhi Ni Vocabulary Learning Central English Educational System Delivered In A Looping Process
CN107968959B (zh) * 2017-11-15 2021-02-19 广东广凌信息科技股份有限公司 一种教学视频的知识点分割方法

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170011642A1 (en) * 2015-07-10 2017-01-12 Fujitsu Limited Extraction of knowledge points and relations from learning materials
CN105810024A (zh) * 2016-05-05 2016-07-27 北京爱学习博乐教育科技有限公司 一种自适应教学课件的方法
CN206400819U (zh) * 2016-10-12 2017-08-11 北京新晨阳光科技有限公司 课程录制设备及系统
CN106878632A (zh) * 2017-02-28 2017-06-20 北京知慧教育科技有限公司 一种视频数据的处理方法和装置
CN109389870A (zh) * 2017-08-10 2019-02-26 亿度慧达教育科技(北京)有限公司 一种应用于电子教学中的数据自适应调整方法及其装置
CN108280153A (zh) * 2018-01-08 2018-07-13 天津科技大学 一种碎片化知识智能化聚合方法
CN109274913A (zh) * 2018-10-17 2019-01-25 北京竞业达数码科技股份有限公司 一种视频智能切片剪辑方法及系统
CN109460488A (zh) * 2018-11-16 2019-03-12 广东小天才科技有限公司 一种辅助教学方法及系统
CN110322738A (zh) * 2019-07-03 2019-10-11 北京易真学思教育科技有限公司 一种课程优化方法、装置和系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3996070A4

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420132A (zh) * 2021-06-15 2021-09-21 读书郎教育科技有限公司 一种大班直播课讨论区提问快速响应的方法
CN113643580A (zh) * 2021-08-13 2021-11-12 四川红色旗子教育科技有限公司 一种教育共同体系统
CN113963306A (zh) * 2021-12-23 2022-01-21 北京大学 基于人工智能的课件片头制作方法和装置
CN113963306B (zh) * 2021-12-23 2022-07-19 北京大学 基于人工智能的课件片头制作方法和装置
CN114036445A (zh) * 2022-01-10 2022-02-11 广东省出版集团数字出版有限公司 一种用于教师备课的数字教材提供平台
CN114036445B (zh) * 2022-01-10 2022-03-29 广东省出版集团数字出版有限公司 一种用于教师备课的数字教材提供平台
CN118711589A (zh) * 2024-08-27 2024-09-27 江苏盛美塾信息科技有限公司 一种基于人工智能的多功能一体化交互控制系统及方法

Also Published As

Publication number Publication date
US20220208014A1 (en) 2022-06-30
CN110322738A (zh) 2019-10-11
CN110322738B (zh) 2021-06-11
EP3996070A4 (en) 2024-01-24
EP3996070A1 (en) 2022-05-11
US11450221B2 (en) 2022-09-20

Similar Documents

Publication Publication Date Title
WO2021000909A1 (zh) 一种课程优化方法、装置和系统
WO2019091131A1 (zh) 在网络教学系统中推荐教师的方法
US11151892B2 (en) Internet teaching platform-based following teaching system
WO2019095446A1 (zh) 一种具有语音评价功能的跟随教学系统
WO2021062990A1 (zh) 视频分割方法、装置、设备及介质
CN106126524B (zh) 信息推送方法和装置
CN105117996A (zh) 智能校园课程信息推荐及共享系统
WO2021218028A1 (zh) 基于人工智能的面试内容精炼方法、装置、设备及介质
CN103544663A (zh) 网络公开课的推荐方法、系统和移动终端
CN111930925B (zh) 一种基于在线教学平台的试题推荐方法及系统
CN111522970A (zh) 习题推荐方法、装置、设备及存储介质
CN114095749B (zh) 推荐及直播界面展示方法、计算机存储介质、程序产品
CN109933650B (zh) 一种作业中图片题目的理解方法及系统
CN113590956A (zh) 知识点推荐方法、装置、终端及计算机可读存储介质
CN113254708A (zh) 一种视频搜索方法、装置、计算机设备及存储介质
CN111192170B (zh) 题目推送方法、装置、设备和计算机可读存储介质
CN118193701A (zh) 基于知识追踪和知识图谱的个性化智能答疑方法及装置
TW202040446A (zh) 線上教學系統的教師調派方法及其伺服端
CN112560663B (zh) 教学视频打点方法、相关设备及可读存储介质
KR20170048862A (ko) 양방향 학습 서비스 제공 방법
CN111523028A (zh) 数据推荐方法、装置、设备及存储介质
KR102599370B1 (ko) 운영변환 기반 실시간 협업 편집 서비스 제공이 가능한 맞춤형 콘텐츠 제공 시스템 및 그 방법
CN111078746B (zh) 一种听写内容确定方法及电子设备
CN111813919A (zh) 一种基于句法分析与关键词检测的mooc课程评价方法
CN113822589A (zh) 智能面试方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20835234

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020835234

Country of ref document: EP

Effective date: 20220203