CN111081095A - Video and audio teaching platform, analysis subsystem and method, recommendation subsystem and method - Google Patents


Info

Publication number
CN111081095A
Authority
CN
China
Prior art keywords
learning
audio
experiment
user
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910881880.0A
Other languages
Chinese (zh)
Inventor
叶丙成
郑曜忻
郭家良
周靖昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pan Xueyou Co ltd
Original Assignee
Pan Xueyou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from TW107136683A external-priority patent/TWI722327B/en
Priority claimed from TW107136684A external-priority patent/TW202016869A/en
Application filed by Pan Xueyou Co ltd filed Critical Pan Xueyou Co ltd
Publication of CN111081095A publication Critical patent/CN111081095A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G09B5/08 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an audio-visual teaching platform, an analysis subsystem and method, and a recommendation subsystem and method. The audio-visual teaching platform comprises an analysis subsystem, a recommendation subsystem and a training subsystem. The analysis subsystem provides learning behavior data, from which the recommendation subsystem generates at least one inferred learning mode and, in turn, a recommended learning combination. When the training subsystem receives the recommended learning combination, it sets interaction time points in each item of audio-visual knowledge content accordingly and transmits the content to a user for viewing and learning. The invention recommends audio-visual knowledge content to users sharing the same or a similar inferred learning mode and analyzes the interactions between users and the content, so that users genuinely watch the audio-visual knowledge content and their interest in learning is strengthened.

Description

Video and audio teaching platform, analysis subsystem and method, recommendation subsystem and method
Technical Field
The invention relates to the technical field of audio-visual teaching, in particular to an audio-visual teaching platform, an analysis subsystem and method, and a recommendation subsystem and method.
Background
With the spread of networks and personal electronic devices, multimedia teaching has gradually displaced older teaching modes such as cram schools and private tutoring. Whether for personal improvement or corporate training, online multimedia teaching is not bound to a fixed time or place and can be paused, replayed, or fast-forwarded at any moment.
In existing multimedia teaching, however, learning information flows one way, from the teaching system to the user. The user's real participation and learning effect cannot be measured or fed back, so the learning content cannot be adjusted to the user's actual level.
Moreover, when a company lets its internal staff upload audio-visual knowledge content independently and then selects suitable pieces to assemble training courses, an excess of fragmentary content results. Considerable manpower is required to organize it, linking it with the content of formal, systematic training is harder still, and the company's learners are left at a loss.
Although learning effect can be gauged by play counts, viewing hours, and the like, such measures still yield no feedback from the user by which to adjust the learning content. In corporate training, for example, a user may leave learning material playing on an idle computer to inflate viewing hours without genuinely responding to the learning activity.
In addition, the same user interaction carries different meanings across fields, content types, and even different instructors, so measuring learning effect calls for a weighted calculation.
Therefore, how to use a user's learning history, collect and analyze feedback during learning, integrate audio-visual knowledge content from multiple sources (regular training content, knowledge content from external digital platforms, and content uploaded by internal users), plan an optimal course architecture, and use that architecture for recommendation and adaptive evaluation of learning information is a problem all parties urgently wish to solve.
Disclosure of Invention
In view of the problems of the prior art, the invention provides an audio-visual teaching platform, an analysis subsystem and method, and a recommendation subsystem and method, which use the learning history of a user and collect and analyze feedback during the learning process to integrate audio-visual knowledge content from multiple sources, plan an optimal course architecture, and use that architecture to recommend learning information and evaluate it adaptively.
One objective of the present invention is to provide an audio-visual teaching platform, which includes:
an analysis subsystem, which uses user grouping data, an image object sequence, an audio sequence, a message sequence, and a time sequence interaction sequence to generate learning behavior data relating each interaction time point on the time axis of the audio-visual knowledge content to the interaction factor the user applied there;
a recommendation subsystem, connected to the analysis subsystem, which generates at least one inferred learning mode from the learning behavior data of at least one user viewing audio-visual knowledge content from multiple sources, generates a successive learning experiment combination from the inferred learning mode, verifies the combination against any one of the inferred learning modes, and, when the combination meets the verification requirement, adopts it as a recommended learning combination; and
a training subsystem, connected to the recommendation subsystem, which sets each interaction time point in each item of audio-visual knowledge content according to the recommended learning combination, transmits the interaction time points to a database, and delivers at least one item of configured audio-visual knowledge content to a user for viewing and learning.
Optionally, the analysis subsystem further comprises:
an integrated multi-sequence analysis module, which uses the user grouping data, the image object sequence, the audio sequence, the message sequence, and the time sequence interaction sequence to generate the learning behavior data relating each interaction time point on the time axis of the audio-visual knowledge content to the interaction factor the user applied there;
a user-defined interaction component, arranged on the audio-visual knowledge content interface, which comprises a plurality of interaction factors, each accepting input interaction data;
a user analysis module, which performs distance calculation and clustering on the historical learning behavior data of all users in a database, classifies the current user into the corresponding user group, and generates the user grouping data reflecting the clustering result;
an audio-visual content analysis module, which labels objects in the image data of the audio-visual knowledge content to generate the image object sequence for each time point in the content, and performs audio analysis on the audio data of the content, calculating the pitch of each frame to generate the audio sequence for each time point in the content; and
a message analysis module, which classifies the purpose of the interaction data using text extracted from the database, generating the message sequence associated with that purpose.
Optionally, the recommendation subsystem includes:
an exploration module, which generates at least one inferred learning mode from the learning behavior data of at least one user viewing the audio-visual knowledge content from multiple sources;
an experiment module, connected to the exploration module, which receives the inferred learning mode and generates a successive learning experiment combination from it; and
a verification module, connected to the experiment module and the exploration module, which receives the inferred learning mode and the successive learning experiment combination, verifies the combination against any one of the inferred learning modes, and adopts it as the recommended learning combination when it meets the verification requirement.
Optionally, the training subsystem generates a knowledge map according to each recommended learning combination, and transmits the knowledge map to the database.
The analysis subsystem provides learning behavior data from which the recommendation subsystem generates at least one inferred learning mode and, in turn, a recommended learning combination.
In this way, related audio-visual knowledge content is recommended to users sharing the same or a similar inferred learning mode, the interactions between users and the content are analyzed, and the time points at which users' concentration drops during learning are identified and improved, so that users genuinely watch the audio-visual knowledge content and their interest in learning grows.
In view of the above problems, another object of the present invention is to use the learning history of the user and to collect and analyze feedback during the learning process so as to evaluate learning information adaptively.
Another objective of the present invention is to provide an analysis subsystem of an audio-visual teaching platform, which mainly comprises a user-defined interaction component, a user analysis module, an audio-visual content analysis module, a message analysis module, a time sequence interaction module, and an integrated multi-sequence analysis module.
The user-defined interaction component is arranged on the audio-visual knowledge content interface and comprises a plurality of interaction factors, such as asking questions, answering questions, taking notes, marking key points, annotating images, and fast-forwarding or rewinding; each interaction factor accepts input interaction data.
The user analysis module performs distance calculation and clustering on the historical learning behavior data of all users in a database, classifies the current user into the corresponding user group, and generates the user grouping data reflecting the clustering result.
The audio-visual content analysis module labels objects in the image data of the audio-visual knowledge content to generate an image object sequence for each time point in the content, and performs audio analysis on the audio data, calculating the pitch of each frame to generate an audio sequence for each time point in the content.
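The patent names per-frame pitch calculation but not a specific algorithm. As a rough, non-authoritative sketch (all function names hypothetical), a minimal autocorrelation estimator can turn raw audio samples into the (time point, pitch) audio sequence described above:

```python
import math

def frame_pitch(samples, sample_rate, fmin=80.0, fmax=500.0):
    """Estimate the fundamental frequency of one frame by autocorrelation."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    lag_min = int(sample_rate / fmax)
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, lag_max + 1):
        corr = sum(x[i] * x[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

def audio_sequence(signal, sample_rate, frame_size):
    """Split audio into frames and pair each frame's start time with its pitch."""
    seq = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        t = start / sample_rate
        pitch = frame_pitch(signal[start:start + frame_size], sample_rate)
        seq.append((round(t, 3), round(pitch, 1)))
    return seq

# A synthetic 220 Hz tone: every frame should report a pitch near 220 Hz.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * i / sr) for i in range(sr // 2)]
seq = audio_sequence(tone, sr, frame_size=800)
```

A production system would use a more robust estimator, but the output shape, one (time point, pitch) entry per frame, is what the audio sequence above requires.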
the message analysis module uses the extracted text in the database to classify the purpose of the interactive data and generate the message sequence related to the purpose of the interactive data;
the time sequence interaction module correlates the interaction data input time with the corresponding time points in the audio-visual knowledge content time axis and generates interaction time points, and then combines each interaction time point in the audio-visual knowledge content time axis to generate a time sequence interaction sequence;
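As an illustrative sketch only (the event format below is assumed, not taken from the patent), combining interaction events into a time sequence interaction sequence ordered along the content's time axis might look like:

```python
def build_interaction_sequence(events, video_length):
    """Merge interaction events into one sequence ordered along the video
    timeline; each entry keeps the time point and the factor that was used."""
    return sorted(
        (e["video_time"], e["factor"]) for e in events
        if 0 <= e["video_time"] <= video_length
    )

events = [
    {"factor": "note", "video_time": 95.0},
    {"factor": "question", "video_time": 42.5},
    {"factor": "highlight", "video_time": 42.5},
    {"factor": "replay", "video_time": 130.0},  # beyond the 120 s video: dropped
]
seq = build_interaction_sequence(events, video_length=120.0)
```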
The integrated multi-sequence analysis module uses the user grouping data, the image object sequence, the audio sequence, the message sequence, and the time sequence interaction sequence to generate learning behavior data relating each interaction time point on the time axis of the audio-visual knowledge content to the interaction factor the user applied there.
The user-defined interaction component further sets a weight for each interaction factor, and the integrated multi-sequence analysis module associates these weights with the user grouping data to generate learning behavior data for each user group and interaction factor.
Another objective of the present invention is to provide an analysis method for an audio-visual teaching platform, comprising the steps of:
inputting interaction data through an interaction factor of the user-defined interaction component arranged on the audio-visual knowledge content interface;
performing, by the user analysis module, distance calculation and clustering on the historical learning behavior data of all users in the database, classifying the current user into the corresponding user group, and generating the user grouping data;
labeling, by the audio-visual content analysis module, objects in the image data of the audio-visual knowledge content to generate an image object sequence for each time point in the content;
performing, by the audio-visual content analysis module, pitch analysis on each frame of the audio data of the audio-visual knowledge content to generate an audio sequence for each time point in the content;
extracting text from the database and classifying, by the message analysis module, the purpose of the interaction data to generate a message sequence associated with that purpose;
generating, by the time sequence interaction module, interaction time points from the input time of each interaction datum and the corresponding time point in the audio-visual knowledge content, and combining the interaction time points along the content's time axis into a time sequence interaction sequence; and
associating, by the integrated multi-sequence analysis module, the user grouping data, the image object sequence, the audio sequence, the message sequence, and the time sequence interaction sequence with the interaction data to generate learning behavior data for any time point in the audio-visual knowledge content.
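The distance calculation and clustering step above is not specified further in the patent. A minimal nearest-centroid illustration, under assumed behavior features (plays, notes, questions per user; names invented), shows how a current user could be assigned to a historical user group:

```python
import math

def centroid(vectors):
    """Component-wise mean of a group's behavior vectors."""
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]

def distance(a, b):
    """Euclidean distance between two behavior vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_user(user_vec, groups):
    """Assign the current user to the group with the nearest centroid."""
    return min(groups, key=lambda name: distance(user_vec, centroid(groups[name])))

# Toy historical groups: (plays per week, notes per video, questions per video)
groups = {
    "active":  [[9, 5, 3], [8, 6, 2], [10, 4, 4]],
    "passive": [[2, 0, 0], [1, 1, 0], [3, 0, 1]],
}
label = classify_user([7, 4, 2], groups)
```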
In the analysis method, the time sequence interaction module collects each interaction time point on the audio-visual knowledge content's time axis, together with its corresponding learning behavior data, to build a long short-term memory (LSTM) model.
The current user is compared against the LSTM model and, according to the learning behavior data, is classified into the user group in the model with the most similar learning type.
The LSTM model determines the user group to which the current user belongs, and a concentration index is evaluated from the distribution of that group's interaction time points on the content's time axis and the learning behavior data corresponding to each interaction time point.
The LSTM model determines the user group to which the current user belongs, and during the time intervals on the content's time axis where that group's concentration index is lower, an interaction factor on the user-defined interaction component is activated to invite input of interaction data.
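The patent gives no formula for the concentration index. One hedged sketch, assuming the index is simply a weighted count of interactions per timeline interval and that low-scoring intervals are where an interaction factor would be activated:

```python
def concentration_index(interactions, video_length, bin_size, weights):
    """Score each timeline interval by the weighted interactions inside it."""
    n_bins = int(video_length // bin_size) + (1 if video_length % bin_size else 0)
    scores = [0.0] * n_bins
    for t, factor in interactions:
        scores[min(int(t // bin_size), n_bins - 1)] += weights.get(factor, 1.0)
    return scores

def low_concentration_intervals(scores, bin_size, threshold):
    """Intervals below the threshold: candidates for enabling an
    interaction factor to re-engage the viewer."""
    return [(i * bin_size, (i + 1) * bin_size)
            for i, s in enumerate(scores) if s < threshold]

# Assumed per-factor weights and a 90-second video split into 30 s bins.
weights = {"note": 2.0, "question": 3.0, "highlight": 1.5}
interactions = [(10, "note"), (15, "question"), (70, "highlight"), (75, "note")]
scores = concentration_index(interactions, video_length=90, bin_size=30, weights=weights)
gaps = low_concentration_intervals(scores, bin_size=30, threshold=1.0)
```

Here the middle interval (30 s to 60 s) has no interactions, so it would be flagged as the place to activate an interaction factor.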
In view of the problems of the prior art, a further object of the present invention is to use at least one user's past behavior of viewing audio-visual knowledge content from multiple sources and to find, through experiment and verification procedures, the audio-visual knowledge content related to that viewing history, so that learning data can be recommended and applied adaptively without heavy manual effort to organize the content.
Another objective of the present invention is to provide a recommendation subsystem of an audio-visual teaching platform, which includes an exploration module, an experiment module, and a verification module. The exploration module generates at least one inferred learning mode from the learning behavior data of at least one user viewing audio-visual knowledge content from multiple sources; the experiment module generates a successive learning experiment combination from the inferred learning mode; the verification module checks whether the combination meets the verification requirement according to one of the learning modes and, if so, adopts it as the recommended learning combination.
In generating at least one inferred learning mode from the learning behavior data of at least one user viewing audio-visual knowledge content from multiple sources, the exploration module proceeds as follows: it classifies the audio-visual knowledge content into at least one knowledge cluster by source and learning subject; takes at least one learning behavior of the user toward the content as a key index; captures the time-series data of each user's learning behaviors within the knowledge cluster, each record holding the user's access at the current time point and the interaction record with the knowledge content; separates the users' learning behaviors within the knowledge cluster with a decision tree classification model, producing a plurality of learning modes; and finally calculates the key index value of each learning mode and outputs at least one inferred learning mode.
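The patent names a decision tree classification model without further detail. As a stand-in sketch (a single-split "decision stump", far simpler than a full decision tree; all field names assumed), separating users into learning modes by one behavior feature and then computing each mode's key index value might look like:

```python
from statistics import mean, pvariance

def best_split(records, feature):
    """Find the threshold on one behavior feature that most cleanly splits
    users into two candidate learning modes (lower summed variance of the
    key index on each side means a cleaner split)."""
    values = sorted({r[feature] for r in records})
    best = None
    for lo, hi in zip(values, values[1:]):
        thr = (lo + hi) / 2
        left = [r for r in records if r[feature] <= thr]
        right = [r for r in records if r[feature] > thr]
        spread = (pvariance([r["key_index"] for r in left])
                  + pvariance([r["key_index"] for r in right]))
        if best is None or spread < best[0]:
            best = (spread, thr, left, right)
    _, thr, left, right = best
    return thr, left, right

# Invented toy records: replay count as the behavior, a normalized key index.
records = [
    {"replays": 1, "key_index": 0.30},
    {"replays": 2, "key_index": 0.35},
    {"replays": 8, "key_index": 0.80},
    {"replays": 9, "key_index": 0.85},
]
thr, mode_a, mode_b = best_split(records, "replays")
# Key index value of each resulting learning mode, as in the text.
modes = {"low_replay": mean(r["key_index"] for r in mode_a),
         "high_replay": mean(r["key_index"] for r in mode_b)}
```

A real decision tree would apply such splits recursively over many features; the stump shows only the core split-selection idea.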
The learning behavior refers to the access behavior at each time point and the interaction data produced as each user watches each item of audio-visual knowledge content.
The experiment module selects one of the learning modes and one of the knowledge clusters to generate the successive learning experiment combination, which comprises at least one item of audio-visual knowledge content, the learning behavior, and an experiment key index. The audio-visual knowledge content in the combination may be content the user has or has not watched, the learning behavior may be the same as or different from that of the selected learning mode, and the experiment key index may be the same as or different from the key index of the selected learning mode.
The experiment module allows an external test group to call the successive learning experiment combination through a connection interface and run the experiment.
The experiment module judges whether other users browse the audio-visual knowledge content of the successive learning experiment combination and practice its learning behaviors, and declares the experiment successful when the requirement of the experiment key index is met.
When the experiment module judges that other users do not browse the audio-visual knowledge content of the successive learning experiment combination, do not practice its learning behaviors, or fail to meet the requirement of the experiment key index, it declares the experiment failed.
When the verification module verifies, according to at least one learning mode, that the successive learning experiment combination meets a verification requirement, the combination is adopted as the recommended learning combination.
After the experiment module completes the experiment, a random sample is drawn from the posterior distribution data and the distribution distance between that sample and the time-series data of the learning behaviors practiced in the successive learning experiment combination is calculated; a verification threshold defines whether the two are sufficiently similar, and if the threshold is met, the successive learning experiment combination is adopted as the recommended learning combination.
If the verification module judges that the verification threshold is not met, the experiment is still deemed failed; another successive learning experiment combination is generated, and the experiment and verification procedures are run again.
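The distribution distance and verification threshold are not specified in the patent. One plausible, hedged sketch uses the total variation distance between two empirical behavior distributions (the sample data below is invented for illustration):

```python
from collections import Counter

def total_variation(sample_a, sample_b):
    """Total variation distance between two empirical distributions."""
    pa, pb = Counter(sample_a), Counter(sample_b)
    na, nb = len(sample_a), len(sample_b)
    support = set(pa) | set(pb)
    return 0.5 * sum(abs(pa[k] / na - pb[k] / nb) for k in support)

def verify(posterior_sample, experiment_sample, threshold):
    """Accept the experiment combination when the two distributions are
    close enough (distance at or below the verification threshold)."""
    return total_variation(posterior_sample, experiment_sample) <= threshold

posterior = ["watch", "watch", "note", "question", "watch", "note"]
experiment = ["watch", "note", "watch", "question", "note", "watch"]
ok = verify(posterior, experiment, threshold=0.1)
```

Any other distribution distance (KL divergence, Wasserstein) could serve the same role; total variation is used here only because it is simple and bounded in [0, 1].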
Wherein, the recommendation subsystem further comprises:
an optimal practice module converts the time sequence data of the learning behavior of any user and the information of the video and audio knowledge content into a series of variables, takes each user as a node, calculates the clustering distance between the series of variables of each user, and takes the clustering distance as a clustering reference, or further respectively searches the user with the minimum clustering distance in each clustering group as a group opinion leader.
The best practice module is used for mining and calculating the users which are not grouped and most accord with the learning mode recommended by browsing under each knowledge cluster by using a frequent mode, and actively enabling the users which are not grouped to approach the grouped groups by one of the recommendation, matching and competition mechanisms.
Still another object of the present invention is to provide a recommendation method for an audio-visual teaching platform, comprising the following steps:
An exploration module generates at least one inferred learning mode from the learning behavior data of at least one user watching audio-visual knowledge content from multiple sources; an experiment module generates a successive learning experiment combination from the inferred learning mode; a verification module checks whether the successive learning experiment combination meets a verification requirement according to one of the learning modes. When the verification module judges that the requirement is met, it adopts the combination as a recommended learning combination and recommends it to other users with the same inferred learning mode, so that audio-visual knowledge content from multiple sources can be optimally linked for learning.
When the verification module produces the recommended learning combination, a best practice module recommends it to other users who need the inferred learning mode.
Optionally, the process of generating the inferred learning mode by the exploration module comprises the following steps:
defining a knowledge cluster: the exploration module classifies the audio-visual knowledge content into at least one knowledge cluster by source and by the same or similar learning subject;
defining key indexes: the exploration module takes at least one learning behavior toward one item of audio-visual knowledge content, a user-defined expression, or a combination of the two as a key index; and
generating at least one inferred learning mode: the exploration module captures the time-series data of each user's learning behaviors within the knowledge cluster, separates those behaviors with a decision tree classification model according to each key index to produce a plurality of learning modes, evaluates each learning mode's key index against an optimal or preset key threshold, and outputs at least one inferred learning mode.
Optionally, the step in which the experiment module generates the successive learning experiment combination from the inferred learning mode further includes:
the experiment module selecting one of the learning modes as the experiment learning mode, together with one of the knowledge clusters, to generate the successive learning experiment combination;
the experiment module judging whether other users browse the audio-visual knowledge content of the successive learning experiment combination and practice its learning behaviors;
when other users do browse the content and practice the learning behaviors, judging whether the requirement of the experiment key index is reached;
when the requirement of the experiment key index is met, declaring the experiment successful; and
when other users do not browse the content, do not practice the learning behaviors, or the requirement of the experiment key index is not met, regenerating another inferred learning mode.
Optionally, the verification module verifies the successive learning experiment combination through the following steps:
defining at least one accepted learning mode as prior distribution data, and calculating posterior distribution data from the prior distribution data;
after the experiment module completes the experiment, calculating the distribution distance between a random sample of the posterior distribution data and the time-series data of the learning behaviors practiced in the successive learning experiment combination;
defining whether the similarity of the two meets a verification threshold;
when the similarity meets the verification threshold, adopting the successive learning experiment combination as the recommended learning combination; and
when the similarity does not meet the verification threshold, deeming the experiment failed, regenerating a new successive learning experiment combination, and performing the experiment and verification again.
The system further comprises a best practice module, which converts the time-series data of each user's learning behaviors and the user's identity information into a series of variables. The identity information may include the user's name, sex, age, or affiliated unit, and the time-series data of the learning behaviors may include the user's message times, message contents, marked objects, and so on. Treating each user as a node, the best practice module uses the variable series to calculate the clustering distance between users and takes it as the clustering reference, classifying users who meet different clustering reference thresholds into at least one clustering group; optionally, it further searches each clustering group for the user with the smallest average clustering distance as that group's opinion leader.
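Finding the group opinion leader described above (the user with the smallest average clustering distance to the rest of the group) can be sketched minimally; the 2-D behavior variables and names below are invented for illustration:

```python
import math
from statistics import mean

def pairwise_distance(a, b):
    """Euclidean distance between two users' variable series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def opinion_leader(group):
    """Pick the user whose average clustering distance to every other user
    in the group is smallest: the text's 'group opinion leader'."""
    def avg_dist(name):
        return mean(pairwise_distance(group[name], group[other])
                    for other in group if other != name)
    return min(group, key=avg_dist)

# Assumed 2-D behavior variables for three users in one clustering group.
group = {
    "amy": [1.0, 1.0],
    "ben": [1.2, 0.9],
    "cho": [5.0, 5.0],
}
leader = opinion_leader(group)
```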
The best practice module uses frequent pattern mining to find, under each knowledge cluster, the ungrouped users who best match the recommended browsing and learning mode, and actively draws users who belong to no group toward one of the groups through mechanisms such as recommendation, matching, and competition.
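Frequent pattern mining is named here without an algorithm. A minimal co-occurrence counting sketch (Apriori-style, item pairs only; the session data is invented) illustrates the idea of discovering which contents users tend to browse together:

```python
from itertools import combinations
from collections import Counter

def frequent_patterns(sessions, min_support):
    """Count content pairs that co-occur across browsing sessions and keep
    those meeting the minimum support threshold."""
    counts = Counter()
    for session in sessions:
        for pair in combinations(sorted(set(session)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

sessions = [
    ["intro", "sql_basics", "sql_joins"],
    ["intro", "sql_basics"],
    ["sql_basics", "sql_joins"],
    ["intro", "python_basics"],
]
patterns = frequent_patterns(sessions, min_support=2)
```

Full FP-growth or Apriori would extend this to itemsets of any size; the pair-counting core is the same support-threshold idea.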
Optionally, the best practice module is performed according to the following steps:
converting the time-series data of the user's learning behaviors and the user's identity information into a series of variables;
treating each user as a node, calculating the clustering distance between users from the variable series, and using it as the clustering reference so as to classify users who meet different clustering reference thresholds into at least one clustering group, optionally further searching each clustering group for the user with the smallest average clustering distance as the group opinion leader; and
under each knowledge cluster, using frequent pattern mining to draw the ungrouped users who best match the recommended browsing and learning mode toward their matching group through recommendation, matching, or competition mechanisms, or a combination of any two or more of them.
The embodiment of the invention has the beneficial effects that:
in the above scheme, the analysis subsystem provides learning behavior data, so that the recommendation subsystem can generate at least one inferred learning mode from the learning behavior data and further generate a recommended learning combination. Related audio-visual knowledge content is thereby recommended to users with the same or similar inferred learning mode, the interaction between users and the audio-visual knowledge content is analyzed, and the time points at which a user's concentration decreases during learning are identified and improved, so that users actually watch the audio-visual knowledge content and their interest in learning is raised.
Drawings
FIG. 1 is a system architecture diagram of a video and audio teaching platform according to the present invention;
FIG. 2 is a system architecture diagram of an analysis subsystem of the audio/video teaching platform of the present invention;
FIG. 3 is a schematic diagram of a user-defined interactive component of the present invention;
FIG. 4 is a flow chart of an analysis method of the audio-visual teaching platform of the present invention;
FIG. 5 is a diagram of a long term and short term memory model according to the present invention;
FIG. 6 is a schematic diagram of the concentration prediction and excitation mechanism of the present invention;
FIG. 7 is a system architecture diagram of a recommendation subsystem of the audio visual teaching platform of the present invention;
FIG. 8 is a schematic diagram of a recommended learning combination according to the present invention;
FIG. 9 is a schematic illustration of the recommendation of FIG. 8;
fig. 10 is an operation flow chart of the recommendation method of the audio-visual teaching platform of the present invention.
Description of reference numerals:
100. an analysis subsystem;
101. learning behavior data;
110. customizing the interactive component by a user;
111. an interaction factor;
120. a user analysis module;
130. a message analysis module;
140. the video and audio content analysis module;
150. a time sequence interaction module;
160. integrating a multi-sequence analysis module;
200. a recommendation subsystem;
201. recommending a learning combination;
210. a prospecting module;
220. an experiment module;
230. a verification module;
300. a training subsystem;
301. the content of the video and audio knowledge;
400. a user;
500. a database.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Please refer to fig. 1, which is a system architecture diagram of the audio-visual teaching platform according to the present invention. As shown in FIG. 1, the audio-visual teaching platform of the present invention is applied to an electronic device connected to a network (e.g. a local area network or the Internet), for example: a web server or a personal computer.
The audio-visual teaching platform is mainly composed of an analysis subsystem 100, a recommendation subsystem 200 and a training subsystem 300. The analysis subsystem 100 analyzes data such as user clustering data, image object sequences, audio sequences, message sequences and time-sequence interaction sequences, so as to generate learning behavior data 101 related to each interaction time point and interaction factor on the time axis of the audio-visual knowledge content 301.
The recommendation subsystem 200 is connected to the analysis subsystem 100 and uses exploration, experiment and verification procedures to generate at least one inferred learning mode from the learning behavior data 101 of the audio-visual knowledge content 301 from multiple sources watched by at least one user 400. After the experiment procedure generates a subsequent learning experiment combination from the inferred learning mode, the verification procedure verifies whether the subsequent learning experiment combination meets the verification requirement; if so, the subsequent learning experiment combination is used as the recommended learning combination 201.
The training subsystem 300 is connected to the recommendation subsystem 200 and the analysis subsystem 100. On receiving the recommended learning combination 201, it sets each interaction time point in each piece of audio-visual knowledge content 301 according to the recommended learning combination 201, generates a knowledge map from each recommended learning combination 201, and transmits each interaction time point and the knowledge map of each piece of audio-visual knowledge content 301 to the database 500. The training subsystem 300 then provides at least one piece of the set audio-visual knowledge content 301 to a user 400 for watching and learning.
Thus, the user 400 can watch, learn and interact through the audio-visual knowledge content 301 provided by the training subsystem 300, and while the user 400 watches the audio-visual knowledge content 301, the analysis subsystem 100 and the recommendation subsystem 200 perform their subsequent actions, thereby learning and optimizing the analysis and recommendation functions.
Please refer to fig. 2 and fig. 3, which are schematic diagrams of an analysis subsystem of the audio-visual teaching platform and a user-defined interactive component according to the present invention. As shown in the figures, the user-defined interactive component 110 is disposed in the software interface or the audio-visual knowledge content interface. The user-defined interactive component 110 has a plurality of interaction factors 111 for receiving input interaction data, wherein the interaction factors 111 include asking questions, answering questions, taking notes, marking key points, explaining images, fast-forwarding, rewinding and the like. When a user opens an interaction factor 111, the user can type text content, enter symbols, click the mouse, etc. in the opened dialog box or field as the input interaction data. A system administrator can adjust the weight of each interaction factor 111 and the meaning of the interaction (e.g. 30% collaboration, 40% communication, 30% conflict, as shown in fig. 3).
The analysis subsystem of the av teaching platform includes a user analysis module 120, a message analysis module 130, an av content analysis module 140, a time sequence interaction module 150, and an integrated multi-sequence analysis module 160, and these modules can be edited by using a programming language and stored in an electronic device, or a storage medium device (e.g., a network file server, a disk drive, or a portable disk) connected to the electronic device.
The user analysis module 120 calculates distances over the historical learning behavior data of all users in the database and groups them, for example using data clustering techniques such as K-means and hierarchical clustering, classifies the current user into the corresponding user group according to the historical learning behavior data, and takes the current grouping result as the user clustering data. The user clustering data comprises statistical information items related to the user's past learning history or platform usage, for example: question answer rate, average watching duration, number of friends and the like.
In the K-means algorithm, assume the user's question answer rate and average watching duration are the two grouping variables, so the relationship can be expressed in a two-dimensional coordinate system. First, k centers are selected at random, and the distance (using the Euclidean distance) from each data point x_j to each center μ_i is calculated; each point is assigned to the cluster of its nearest center. Then, according to the clustering result, a new center is recalculated for each cluster S_i, and the process is iterated until the following objective reaches its minimum value J:
J = Σ_{i=1}^{k} Σ_{x_j ∈ S_i} ‖x_j − μ_i‖²
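The assignment/update iteration just described can be sketched as follows over the two grouping variables named in the text (question answer rate, average watching duration); the data points and initial centers are illustrative, and a production system would more likely use a library implementation such as scikit-learn's KMeans:

```python
# Minimal K-means sketch: assign each point to its nearest center
# (Euclidean distance), then recompute each center as the cluster mean,
# iterating toward the minimum objective J.
import math

def kmeans(points, centers, iters=20):
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda i: math.dist(p, centers[i]))
            clusters[i].append(p)
        # update step: recompute each center as the mean of its cluster
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else ctr
                   for cl, ctr in zip(clusters, centers)]
    return centers, clusters

points = [(0.9, 40), (0.8, 35), (0.2, 5), (0.1, 8)]   # (answer rate, minutes)
centers, clusters = kmeans(points, centers=[(0.9, 40), (0.1, 8)])
```

With these values the iteration converges to two stable user groups, one of active answerers with long watch times and one of inactive users.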
the message analysis module 130 extracts the corresponding texts from the database and classifies the current interaction data input by the user by purpose, for example using an intent classification method, to generate a message sequence. Intent classification determines the purpose of a text message input by the user. For example, a plurality of categories is defined in the database in advance, such as the transportation categories airplane, train and express; when a long short-term memory (LSTM) model is used, the transportation category the user most likely needs is determined and output according to the content of the input text message. For example, if the user enters "I want to go Japan from Taiwan", the LSTM outputs "plane". The intent analysis thereby determines the intended content of the message sequence, such as whether the user is asking a question, complaining or chatting.
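The module above uses an LSTM; as a framework-free stand-in with the same input/output contract (text message in, most likely category out), the following sketch scores a message against predefined category keyword sets. The categories and keyword lists are illustrative assumptions, not the patented model:

```python
# Sketch of intent classification: score the input message against each
# predefined category's keyword set and return the best-matching category.
CATEGORIES = {
    "plane": {"fly", "flight", "airport", "abroad", "japan", "taiwan"},
    "train": {"rail", "station", "platform", "commute"},
    "express": {"parcel", "courier", "delivery"},
}

def classify_intent(message):
    words = set(message.lower().split())
    scores = {cat: len(words & kw) for cat, kw in CATEGORIES.items()}
    return max(scores, key=scores.get)

intent = classify_intent("I want to go Japan from Taiwan")
```

As in the text's example, this input is classified as "plane"; a trained LSTM replaces the keyword scoring in the real system.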
The audio-visual content analysis module 140 marks objects in the image data belonging to the input audio-visual knowledge content, for example by identifying the objects in the image data, and generates the image object sequence related to each time point in the audio-visual knowledge content. It also performs audio analysis on the audio data belonging to the input audio-visual knowledge content: the whole audio data is first cut into a plurality of frames; then a pitch detection algorithm such as the Harmonic Product Spectrum is used, which down-samples the original audio spectrum several times and combines the compressed copies with the original so as to highlight the peak at the fundamental frequency; the pitch of each frame is calculated, unstable pitches are eliminated and the result is smoothed, generating the audio sequence related to each time point in the audio-visual knowledge content, such as the pitch and frequency at each time point.
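The Harmonic Product Spectrum step can be sketched as follows on one synthetic frame; the frame length, sample rate, and harmonic count are illustrative, and a real implementation would use an FFT library frame-by-frame rather than a naive DFT:

```python
# Sketch of HPS pitch detection: compute a magnitude spectrum, multiply it
# by downsampled copies of itself so the fundamental-frequency peak is
# reinforced, and convert the winning bin back to Hz.
import math

def magnitude_spectrum(frame):
    n = len(frame)
    return [abs(sum(frame[t] * complex(math.cos(2 * math.pi * k * t / n),
                                       -math.sin(2 * math.pi * k * t / n))
                    for t in range(n)))
            for k in range(n // 2)]

def hps_pitch(frame, sample_rate, harmonics=3):
    spec = magnitude_spectrum(frame)
    prod = spec[:]
    for h in range(2, harmonics + 1):
        for k in range(len(spec) // h):
            prod[k] *= spec[k * h]          # downsampled copy reinforces f0
    k0 = max(range(1, len(spec) // harmonics), key=lambda k: prod[k])
    return k0 * sample_rate / len(frame)

sr = 800   # Hz, illustrative
# 100 Hz tone with two weaker harmonics, 200-sample frame
frame = [math.sin(2 * math.pi * 100 * t / sr)
         + 0.5 * math.sin(2 * math.pi * 200 * t / sr)
         + 0.25 * math.sin(2 * math.pi * 300 * t / sr) for t in range(200)]
pitch = hps_pitch(frame, sr)
```

The product spectrum peaks at the 100 Hz fundamental even though the harmonics also carry energy, which is exactly what the downsample-and-combine step is for.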
The time sequence interactive module 150 takes the time point of the input interactive data and the time point corresponding to the time axis of the audio-visual knowledge content of the input interactive data as an interactive time point, and generates a time sequence interactive sequence including a plurality of interactive time points.
The integrated multi-sequence analysis module 160 combines the user group data, the image object sequence, the audio sequence, the message sequence and the time sequence interaction sequence to generate a learning behavior data, so that the learning behavior data can obtain the interaction result between the current user and the user in the user group and the image or sound in the audio-visual knowledge content at any interaction time point in the time axis of the audio-visual knowledge content. A long-short term memory model can also be established by the distribution of the interaction time points in the video-audio knowledge content time axis.
Please refer to fig. 4, which is a flowchart of an analysis method of an audio/video teaching platform according to the present invention, comprising the following steps:
s101: the user plays the video and audio knowledge content, wherein a user-defined interaction component is arranged on the video and audio knowledge content interface;
s102: inputting interactive data by a user in an interactive factor in a user-defined interactive component;
s103: the user analysis module calculates distances over the historical learning behavior data of all users in the database and groups them, classifies the current user into the corresponding user group, and takes the grouping result as the user clustering data;
s104: marking the image data belonging to the video knowledge content by a video content analysis module to generate an image object sequence related to each time point in the video knowledge content;
s105: the audio content analysis module performs audio analysis on the audio data belonging to the audio-visual knowledge content by calculating the pitch of each frame, to generate an audio sequence related to each time point in the audio-visual knowledge content;
s106: extracting texts in the database, and performing purpose classification on the input interaction data by a message analysis module to generate a message sequence related to the purpose of the interaction data;
s107: taking the interactive data input time point and the time point corresponding to the video and audio knowledge content time axis as interactive time points, collecting a plurality of interactive time points on the time axis, and generating a time sequence interactive sequence; and
s108: the integrated multi-sequence analysis module combines the user grouping data, the image object sequence, the audio sequence, the message sequence and the time sequence interaction sequence to generate corresponding learning behavior data at the interaction time point in the video knowledge content time axis.
And when the user turns on the interaction factor in the user-defined interaction component again, repeating the steps from S102 to S108 to generate corresponding learning behavior data at the interaction time point in the audio-visual knowledge content time axis.
The steps of S102 to S107 are executed after receiving the current interactive data input by the user in the interactive factor, and the execution sequence is not limited thereto.
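The combination performed in step S108 can be sketched minimally as follows; the field names and sample values are illustrative assumptions about how the five data sources line up on the content time axis:

```python
# Sketch of step S108: align user clustering data and the image-object,
# audio, message, and time-sequence interaction data on the content time
# axis, emitting one learning-behavior record per interaction time point.
def integrate(user_group, image_seq, audio_seq, message_seq, interaction_times):
    records = []
    for t in interaction_times:
        records.append({
            "time": t,                               # interaction time point (s)
            "user_group": user_group,                # from the user analysis module
            "image_objects": image_seq.get(t, []),   # from content analysis (video)
            "pitch": audio_seq.get(t),               # from content analysis (audio)
            "message_intent": message_seq.get(t),    # from the message analysis module
        })
    return records

behavior = integrate(
    user_group="cluster-2",
    image_seq={65: ["whiteboard", "instructor"]},
    audio_seq={65: 220.0, 132: 180.0},
    message_seq={132: "question"},
    interaction_times=[65, 132],
)
```

Each record ties together, for one interaction time point, what was on screen, what was heard, and what the user did, which is the learning behavior data the later modules consume.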
Please refer to fig. 5 and 6, which are schematic diagrams of a long-term and short-term memory model and a concentration prediction and excitation mechanism according to the present invention. As shown in the figure, the long-short term memory model is established and characterized as follows:
Establishment of the long-short term memory model:
the time sequence interaction module collects the interaction time points and the corresponding learning behavior data on the audio-visual knowledge content time axis to establish a long-short term memory model; in other words, the user clustering data, image object sequence, audio sequence, message sequence and time-sequence interaction sequence are used as input parameters to establish a long-short term memory model capable of self-learning, concentration evaluation and adaptation.
Self-learning of long-short term memory model:
the long-short term memory model compares the learning behavior data inputted by the current user with the long-short term memory model, and classifies the current user into a user group with similar learning type in the long-short term memory model according to the learning behavior data to learn by itself.
The long-term and short-term memory model can be learned along with the increase of the learning behavior data volume of the users in the database, and the current users are classified into the corresponding user groups in real time to carry out subsequent various predictions.
Concentration assessment of long-short term memory model:
the user group to which the current user belongs is judged through the long-term and short-term memory model, and the concentration index is evaluated according to the distribution of each interaction time point on the time axis of the user group in the video knowledge content and the learning behavior data of each interaction time point.
The concentration index and its change along the time axis of the audio-visual knowledge content, for the current user or the user group, provide a basis for further predicting the interaction behaviors that may occur, or for the system administrator to adjust the audio-visual knowledge content.
Concentration prediction and excitation mechanism of long-short term memory model:
the user group to which the current user belongs is determined through the long-short term memory model, and the interaction factor on the user-defined interactive component is enabled in the time intervals on the audio-visual knowledge content time axis in which that user group's concentration index is lower, prompting the input of interaction data; by increasing the weight of the interaction factor, the user's motivation to input interaction data is raised and the learning behavior data is enriched.
Through the concentration assessment of the long-short term memory model, which replaces mere watching time as the basis for evaluating learning effect, the time points at which the current user's concentration index is likely to decrease in the audio-visual knowledge content can be predicted. Enabling an interaction factor at such a time point to offer incentives such as rewards and points obtains interactive feedback from the current user and increases concentration, effectively mitigating the problem of learning material being played on an idle computer, so that the current user actually watches the audio-visual knowledge content and the user's interest is raised.
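The excitation mechanism can be sketched as follows: find the intervals where the group's concentration index dips below a threshold and schedule a weighted interaction factor there. The index values, threshold, chosen factor, and bonus weight are all illustrative assumptions, not the model's real output:

```python
# Sketch of the concentration-prediction excitation mechanism: locate
# low-concentration spans on the content time axis and enable a weighted
# interaction factor inside them to prompt user input.
def low_concentration_intervals(index_by_second, threshold):
    """Return (start, end) second spans where the index dips below threshold."""
    spans, start = [], None
    for t in sorted(index_by_second):
        if index_by_second[t] < threshold and start is None:
            start = t
        elif index_by_second[t] >= threshold and start is not None:
            spans.append((start, t))
            start = None
    if start is not None:
        spans.append((start, max(index_by_second) + 1))
    return spans

def schedule_incentives(spans, factor="question", bonus_weight=1.5):
    return [{"start": s, "end": e, "factor": factor, "weight": bonus_weight}
            for s, e in spans]

# predicted concentration index sampled along the time axis (illustrative)
index = {0: 0.9, 30: 0.8, 60: 0.4, 90: 0.35, 120: 0.7, 150: 0.3}
spans = low_concentration_intervals(index, threshold=0.5)
incentives = schedule_incentives(spans)
```

Here two low-concentration spans are found and a question factor with an increased weight is scheduled in each, mirroring the reward/points incentive the text describes.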
Please refer to fig. 7, which is a system architecture diagram of a recommendation subsystem of the audio/video teaching platform according to the present invention. As shown in fig. 7, the recommendation subsystem of the video and audio teaching platform of the present invention includes a exploration module 210, an experiment module 220, a verification module 230, and a best practice module, and these modules can be edited by a programming language and stored in an electronic device, or a storage medium device (e.g., a network file server, a disk drive, or a drive …) connected to the electronic device.
The exploration module 210 receives learning behavior data generated when the same or different users view audio-visual knowledge content from different sources, and generates at least one inferred learning mode according to at least one piece of the learning behavior data. The user may use another electronic device connected to a network (e.g. a local area network or the Internet), such as a personal computer, notebook computer, tablet computer or smart phone.
The exploration module 210 classifies each piece of audio-visual knowledge content into at least one knowledge cluster according to its source and learning topic. Specifically, the exploration module 210 may use existing semantic-analysis techniques to analyze the text or voice content in each piece of audio-visual knowledge content and automatically classify it into different knowledge clusters, or may receive classification information from the system administrator or a user and classify each piece of audio-visual knowledge content into at least one knowledge cluster according to its source and learning topic.
For example: students in China and primary schools have knowledge clusters of different subjects such as Chinese, English and mathematics, and enterprises have knowledge clusters of different education such as new person training and enterprise culture. For example: the name of the knowledge cluster is "" insurance "", and may include video and audio knowledge contents such as a long shot insurance film uploaded by a user, an accident insurance film loaded in an enterprise course library by a system manager, a fire affair film and a car accident film uploaded by the user from a third-party video and audio website, and the like. Alternatively, the content of the audio-visual knowledge from different sources, such as the balance of sexuality, the interest of working, and the provision of holiday, can be categorized into a knowledge cluster named "labor interest".
Alternatively, the content of the audio-visual knowledge such as english words, english grammars, english sentence patterns, japanese words, japanese grammars, japanese sentence patterns, etc. may be classified as a knowledge cluster named "foreign language learning", or the content of the audio-visual knowledge such as english words, english grammars, english sentence patterns, etc. may be classified as a knowledge cluster named "english learning", or the content of the audio-visual knowledge such as japanese words, japanese grammars, japanese sentence patterns, etc. may be classified as a knowledge cluster named "japanese learning".
As can be seen from the above, the sources of the video knowledge content at least include the teaching films self-made by the user, the teaching films uploaded by the system administrator, or the films downloaded or linked from the video website. All knowledge clusters are stored in the database.
The exploration module 210 uses at least one learning behavior of the user with respect to each piece of audiovisual knowledge content or a custom expression as a key index, where the learning behavior may be a test result after learning, a number of interactions between the user and the instructor, a number of questions asked …, and the custom expression may be a combination of any two or more of the learning behaviors, or a combination of any two or more of the learning behaviors, and the key index is, for example, more than 90 points of the learned test result. Or the number of the test results after learning is more than 80, and the number of the interaction between the user and the instructor is more than 3. Or the test score after learning accounts for more than 80 percent and accounts for 30 percent of the weight, the number of times of interaction between the user and the instructor accounts for more than 3 times and accounts for 30 percent of the weight, and the number of times of questioning the question accounts for more than 4 times and accounts for 40 percent of the weight.
Furthermore, the exploration module 210 captures the time series data of each learning behavior of each user in the knowledge clusters, and uses a Decision Tree classification model to separate the learning behaviors of the users in each knowledge cluster according to each key index, generating a plurality of learning modes; finally it calculates, for each learning mode, the optimal key threshold, or a set key threshold, among the key indexes, and outputs the result as at least one inferred learning mode.
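A one-level decision stump illustrates the threshold-finding step just described; the feature, labels, and data are illustrative assumptions, and a full system would grow a multi-level decision tree over many behaviors:

```python
# Sketch of finding an optimal key threshold: try candidate split points on
# one behavior feature and keep the split that best separates users who met
# the key index from those who did not.
def best_threshold(samples):
    """samples: list of (feature_value, met_key_index). Try midpoints between
    sorted values and keep the split with the fewest misclassifications."""
    values = sorted(v for v, _ in samples)
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]
    def errors(th):
        return sum((v >= th) != met for v, met in samples)
    return min(candidates, key=errors)

# feature: average watching minutes; label: whether the key index was met
samples = [(5, False), (12, False), (30, True), (42, True), (55, True)]
threshold = best_threshold(samples)
```

With these samples the stump outputs 21.0 minutes as the split that perfectly separates the two learning modes.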
Furthermore, the learning behavior refers to each user's access behavior at each time point when viewing audio-visual knowledge content from each source, together with the interaction data of each piece of audio-visual knowledge content, wherein the access behaviors are: normal play, reverse play, fast-forward play at various speeds, pause …, etc., and the interactive behaviors include leaving comments, asking questions, answering questions, taking notes and marking key points …, etc.
The experiment module 220 selects one of the learning modes as the experimental learning mode and one of the knowledge clusters, and generates a subsequent learning experiment combination, wherein the subsequent learning experiment combination comprises at least one piece of audio-visual knowledge content, a learning behavior and an experiment key index. The audio-visual knowledge content of the subsequent learning experiment combination may be content the users have or have not watched, the learning behavior may be the same as or different from that of the selected learning mode, and the experiment key index may be the same as or different from the key index of the selected learning mode.
Furthermore, the experiment module 220 may open a connection interface so that an external test group can call the subsequent learning experiment combination to run an experiment; the external test group may, for example, be users of a certain community website who enter the experiment module 220 for learning. During the experiment, the experiment module 220 determines whether the users of the test group browse the audio-visual knowledge content of the subsequent learning experiment combination, practice its learning behavior, and reach the requirement of the experiment key index. If they do, the experiment succeeds; otherwise, the experiment module 220 selects another learning mode of the selected knowledge cluster and repeats the experiment according to the above process until it succeeds.
The verification module 230 selects at least one learning mode to verify whether the subsequent learning experiment combination meets the verification requirement. When the verification module 230 verifies that the subsequent learning experiment combination meets the requirement, the subsequent learning experiment combination is used as the recommended learning combination. The verification module 230 defines the at least one learning mode it accepts as prior distribution data, and the learning mode selected by the verification module 230 must be different from the learning mode selected by the experiment module 220, so as to calculate posterior distribution data. After the experiment module 220 completes the experiment, the randomly sampled posterior distribution data and the time series data of the learning behavior of the subsequent learning experiment combination are used to calculate the distribution distance between the two.
The foregoing distribution distance algorithm differs from the node distance described above: the distribution distance compares the actual learning-behavior time series data of the subsequent learning experiment combination against the learning-behavior time series data of the posterior distribution data; when the two are similar, the distribution distance is small, otherwise it is large. The most commonly used measure is the KL divergence (Kullback–Leibler divergence), with the distribution distance defined as:
D_KL(P‖Q) = Σ_x p(x) log( p(x) / q(x) )
in the above equation, p(x) and q(x) are the probabilities of the two distributions. When the difference between the two distributions is large, p(x)/q(x) is correspondingly large, and when the difference is small, p(x)/q(x) is small. Whether the similarity between the two distributions meets a verification threshold is determined accordingly; if it does, the subsequent learning experiment combination is used as the recommended learning combination.
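The verification check can be sketched as follows; the two distributions and the verification threshold are illustrative values, not data from the system:

```python
# Sketch of the distribution-distance verification: KL divergence between
# the posterior-distribution sample p(x) and the observed behavior
# distribution q(x) of the subsequent learning experiment combination.
import math

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

posterior = [0.5, 0.3, 0.2]     # p(x): accepted learning mode (illustrative)
experiment = [0.45, 0.35, 0.2]  # q(x): observed experiment behavior
distance = kl_divergence(posterior, experiment)
passes_verification = distance < 0.05   # illustrative verification threshold
```

Similar distributions give a divergence near zero, so this experiment combination would pass verification and become the recommended learning combination; a large divergence would trigger the regeneration path described next.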
If the verification threshold is not met, the experiment is still determined to be failed, a continuous learning experiment combination is regenerated, and the experiment and the verification are carried out again. Regardless of whether the verification is successful or not, the present invention can re-enter the exploration module 210 and the experiment module 220 at any time, re-calculate the combination of the learning mode, the estimation learning mode and the continuous learning experiment, and perform the experiment and the verification.
In the invention, the best practice module converts the time series data of the users' learning behaviors and the identity information of the users into a series of variables, takes each user as a node, calculates the clustering distance between the users' series variables, and takes the clustering distance as the clustering reference; it classifies the users meeting different clustering reference thresholds into at least one clustering group, or further searches each clustering group for the user with the minimum average clustering distance as the group opinion leader of that clustering group.
The identity information of the user includes the name, sex, age, or affiliated unit name … of the user, and the time series data of the learning behavior may be the user's message time, message content, tagged objects …, etc.; the learning behavior of the user may also be an operation on the movie playback, for example: pause, rewind, leave a message, with each user operating differently on different types of movies. To illustrate how the best practice module converts the time series data of the learning behavior of the user and the identity information of the user into a series of variables, the following example uses an employee watching a movie classified under the "new employee training" knowledge cluster:
the employee presses pause at a playing time of 1:05 and leaves no message, and the employee belongs to the management department; the series variables can be defined as follows:
x1 = 65, x2 = pause, x3 = empty, x4 = new employee training, x5 = management department;
wherein x1 = 65 means a learning behavior occurred when the movie had played for 1 minute and 5 seconds, x2 = pause means the learning behavior is pausing playback, x3 = empty means there is no message content at this time (if there were, the message content would be filled in), x4 = new employee training means the employee is watching a movie of the knowledge cluster classified as new employee training, and x5 = management department is the name of the employee's affiliated unit in the enterprise;
if the employee leaves a message in the movie at a playing time of 2 minutes and 12 seconds, the series variables may be defined as follows:
x1 = 132, x2 = message, x3 = message content, x4 = new employee training, x5 = management department;
wherein x1 = 132 means a learning behavior occurred when the movie had played for 2 minutes and 12 seconds, x2 = message means the learning behavior is leaving a message, x3 = message content holds the text of the message left at this time, x4 = new employee training means the employee is watching a movie of the knowledge cluster classified as new employee training, and x5 = management department is the name of the employee's affiliated unit;
the above is merely an example, and is not limited thereto. For example, x4 is a training of a new person, and besides the knowledge cluster representing the movie, a series of variables of chapters and types of the movie and possibly a series of variables of scores after being watched by a student can be added to help the best practice module establish the accuracy of the clustering.
Furthermore, the best practice module uses Frequent Pattern Mining to calculate, under each knowledge cluster, the ungrouped users whose behavior best matches the browsing-and-recommendation learning pattern, and actively makes the ungrouped users approach one of the clustering groups through one, or a combination of two or more, of mechanisms such as recommendation, matching and competition.
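The frequent-pattern step can be sketched with simple support counting over browsing sessions; the session data, minimum support, and two-item cap are illustrative assumptions standing in for a full Apriori/FP-growth miner:

```python
# Sketch of frequent pattern mining: count the support of item sets across
# users' browsing sessions and keep those above a minimum support, which can
# then be matched against a clustering group's recommended learning pattern.
from itertools import combinations

def frequent_patterns(sessions, min_support, max_size=2):
    counts = {}
    for session in sessions:
        items = sorted(set(session))
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] = counts.get(combo, 0) + 1
    return {combo: n for combo, n in counts.items() if n >= min_support}

sessions = [                      # per-user browsing sequences (illustrative)
    ["course-A", "course-B", "quiz-1"],
    ["course-A", "course-B"],
    ["course-A", "quiz-1"],
]
patterns = frequent_patterns(sessions, min_support=2)
```

Patterns like (course-A, course-B) that clear the support bar characterize the browsing-and-recommendation learning pattern an ungrouped user can be steered toward.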
In summary, please refer to fig. 8 and 9, which are schematic diagrams of a recommended learning combination and its recommendation according to the present invention. As shown in the figures, user A has previously completed in-enterprise training course A-1, external course A-1 and external course A-2 (i.e. one of the inferred learning modes). After the exploration, experiment and verification performed by the exploration module, experiment module and verification module, the recommendation subsystem of the invention makes its recommendation; the generated recommended learning mode may be learner-uploaded course A-1, in-enterprise training course A-2 and external course A-1, with the recommended learning behaviors of each course contained in the recommended learning mode.
In addition, after the recommended learning mode under each knowledge cluster is selected, if it is to be recommended to a certain group, it is preferentially recommended to that group's opinion leader, who then pushes it to the other users belonging to the group, so that users in the same group have an opportunity to achieve a learning effect similar to that of the recommended learning mode. Alternatively, it may be recommended directly to ungrouped users.
Referring to fig. 10, a flow chart of the recommendation method for the audio-visual teaching platform according to the present invention comprises the following steps:
S21, the exploration module generates at least one presumptive learning mode according to the learning behavior data of at least one user watching the audio-visual knowledge content from multiple sources. This step further comprises:
S211, defining knowledge clusters: the exploration module classifies all the audio-visual knowledge content into at least one knowledge cluster according to its source and the same or similar learning subjects. The exploration module may analyze the text or voice content in the audio-visual knowledge content using existing techniques such as semantic or speech analysis and automatically classify each piece of audio-visual knowledge content into different knowledge clusters, or it may receive classification information from a system administrator or user and classify each piece of audio-visual knowledge content into at least one knowledge cluster according to its source and learning subject.
S212, defining key indicators: the exploration module takes at least one learning behavior, or a self-defined expression, of each piece of audio-visual knowledge content as a key indicator. The learning behaviors may be post-learning test results, the number of interactions between the user and the instructor, the number of questions asked, and so on; the self-defined expression may be a combination of any two learning behaviors, or a combination of any two learning behaviors with different weights.
S213, the exploration module generates at least one presumptive learning mode: the exploration module captures time-series data of each learning behavior of each user in the knowledge clusters, uses a Decision Tree classification model to separate the learning behaviors of the users in each knowledge cluster according to each key indicator so as to generate a plurality of learning modes, and finally calculates, for the key indicator of each learning mode, the best threshold or a set key threshold, outputting the result as at least one presumptive learning mode.
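The separation by key indicator in step S213 can be illustrated with a single-indicator threshold search. A real implementation would use a full decision-tree classifier over many indicators; this is only a sketch under that simplifying assumption, with hypothetical data.

```python
# Minimal sketch of step S213, assuming a single key indicator (e.g. a
# post-learning test score). A decision tree generalizes this to many
# indicators; here one threshold split illustrates how learning behaviors
# are separated into candidate learning modes.

def best_threshold(records):
    """Find the indicator threshold that best separates effective learners.

    records: list of (key_indicator_value, achieved_goal: bool).
    Returns the threshold whose split misclassifies the fewest records.
    """
    candidates = sorted({value for value, _ in records})
    best = None
    for t in candidates:
        # classify: indicator >= t -> predicted to achieve the goal
        errors = sum((value >= t) != goal for value, goal in records)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

# Hypothetical users: (test score, whether the learning goal was reached).
records = [(40, False), (55, False), (70, True), (85, True), (90, True)]
print(best_threshold(records))  # → 70
```

The chosen threshold (here 70) plays the role of the "best or set key threshold" that the exploration module outputs with each presumptive learning mode.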
S22, the experiment module generates a continuous learning experiment combination according to the presumptive learning mode. This step further comprises:
S221, the experiment module selects one of the learning modes as an experimental learning mode and one of the knowledge clusters to generate the continuous learning experiment combination. The continuous learning experiment combination comprises at least one piece of audio-visual knowledge content, a learning behavior, and an experiment key indicator; the audio-visual knowledge content of the combination may be content the user has or has not watched, the learning behavior may be the same as or different from that of the selected learning mode, and the experiment key indicator may be the same as or different from the key indicator of the selected learning mode.
S222, the experiment module judges whether other users have browsed the audio-visual knowledge content of the continuous learning experiment combination and practiced its learning behaviors; if so, proceed to S223, otherwise return to S221. The other users may be an external test group: the experiment module can open a connection interface to allow the external test group to invoke the continuous learning experiment combination for the experiment.
S223, the experiment module judges whether the continuous learning experiment combination meets the requirement of the experiment key indicator; if so, the experiment is successful, otherwise return to step S213.
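The success test of steps S222–S223 might be sketched as follows. The record fields (`content_ids`, `browsed`, `practiced_behavior`, `key_indicator`) are hypothetical names chosen for illustration.

```python
# Hedged sketch of steps S222-S223: the experiment succeeds only if some other
# user browsed all of the combination's audio-visual content, practiced the
# learning behavior, and met the experiment key indicator's requirement.

def experiment_succeeded(combo, observations, indicator_threshold):
    """combo: dict with 'content_ids'; observations: one dict per test user."""
    for obs in observations:
        browsed_all = set(combo["content_ids"]) <= set(obs["browsed"])
        practiced = obs["practiced_behavior"]
        met_indicator = obs["key_indicator"] >= indicator_threshold
        if browsed_all and practiced and met_indicator:
            return True  # at least one user completed the experiment and met the requirement
    return False

combo = {"content_ids": ["course A-1", "course A-2"]}
observations = [
    {"browsed": ["course A-1"], "practiced_behavior": True, "key_indicator": 92},
    {"browsed": ["course A-1", "course A-2"], "practiced_behavior": True, "key_indicator": 81},
]
print(experiment_succeeded(combo, observations, indicator_threshold=75))  # → True
```

If no observation passes all three checks, the method would fall back to step S213 and generate a new presumptive learning mode, as the text describes.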
Because it cannot be concluded with certainty that a continuous learning experiment combination successfully tested by the experiment module will meet the learning objective of each knowledge cluster, the following steps are performed after step S22 in order to bring the continuous learning experiment combination closer to the learning objective of each knowledge cluster:
S23, the verification module verifies whether the continuous learning experiment combination meets the verification requirement according to one of the presumptive learning modes; the learning mode selected by the verification module must differ from the one selected by the experiment module. When the verification module judges that the continuous learning experiment combination meets the requirement, it takes the combination as the recommended learning combination; otherwise the experiment is determined to have failed and the method returns to step S213. The verification of the continuous learning experiment combination by the verification module comprises the following steps:
S231, defining at least one learning mode accepted by the verification module as Prior Distribution data, and calculating Posterior Distribution data from the prior distribution data;
S232, after the experiment module finishes the experiment, calculating the distribution distance between the randomly sampled posterior distribution data and the actual learning-behavior time-series data of the continuous learning experiment combination;
S233, determining whether the degree of similarity between the two meets a verification threshold; if so, proceed to step S234, otherwise proceed to step S235;
S234, taking the continuous learning experiment combination as the recommended learning combination.
In S235, the experiment is determined to have failed; the method returns to step S213 to generate a new continuous learning experiment combination and performs the experiment and verification again.
In addition, regardless of whether the verification succeeds, the method may at any time re-enter the exploration module and the experiment module to recalculate the learning modes, the presumptive learning mode, and the continuous learning experiment combination, and to perform the experiment and verification again.
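The verification flow of steps S231–S233 can be sketched under three simplifying assumptions: the learning behavior is binary (goal reached or not), the prior distribution is a conjugate Beta distribution, and the "distribution distance" is the absolute difference of means. None of these choices is specified by the invention; they merely make the prior-to-posterior logic concrete.

```python
# Sketch of steps S231-S233 under simplifying assumptions (Beta prior on a
# binary goal-reached behavior; distance = absolute difference of means).

def posterior_mean(prior_alpha, prior_beta, successes, failures):
    # Beta(alpha, beta) prior + binomial observations -> Beta posterior mean.
    a = prior_alpha + successes
    b = prior_beta + failures
    return a / (a + b)

def verify(prior_alpha, prior_beta, mode_successes, mode_failures,
           experiment_rate, threshold):
    """Pass when the posterior under the accepted learning mode is close to
    the goal-reached rate actually observed in the experiment combination."""
    post = posterior_mean(prior_alpha, prior_beta, mode_successes, mode_failures)
    distance = abs(post - experiment_rate)
    return distance <= threshold

# Accepted learning mode: 18 of 20 learners reached the goal, weak Beta(1,1) prior.
# The experiment combination observed an 85% goal-reached rate.
print(verify(1, 1, 18, 2, experiment_rate=0.85, threshold=0.05))  # → True
```

When the distance exceeds the threshold the experiment is deemed failed, matching step S235's return to S213.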
In order to recommend the recommended learning combination to other suitable users, in the present invention the following steps are performed after step S23:
S24, the best practice module recommends the recommended learning combination to other users who need the same or a similar presumptive learning mode, so that the audio-visual knowledge content from multiple sources optimizes joint learning. The recommendation by the best practice module comprises the following steps:
S241, the best practice module converts the time-series data of the users' learning behaviors and the knowledge-content information into a series of variables;
S242, taking each user as a node, calculating the clustering distance between the series variables of the users and using it as a clustering reference, classifying the users who meet different clustering-reference thresholds into at least one clustered group, and optionally further searching each clustered group for the user with the smallest average clustering distance to serve as the group opinion leader;
S243, under each knowledge cluster, the best practice module uses Frequent Pattern Mining to calculate which ungrouped users best match the recommended learning mode being browsed, and actively draws users who have not entered any group toward one of the groups through one, or a combination of two or more, of mechanisms such as recommendation, matching, and competition.
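Steps S241–S242 (clustering users by the distance between their variable series and picking a group opinion leader) might be sketched as below. The greedy threshold grouping and the Euclidean distance are illustrative assumptions; any clustering algorithm and distance could fill these roles.

```python
# Illustrative sketch of steps S241-S242: users are nodes, the distance
# between their variable series is the clustering reference, users within a
# threshold of a group's seed join that group, and the member with the
# smallest average distance to the rest is the group opinion leader.
import math

def dist(u, v):
    return math.dist(u, v)  # Euclidean distance between two variable series

def group_users(vectors, threshold):
    """Greedy grouping: a user joins the first group whose seed is within threshold."""
    groups = []  # each group is a list of user indices
    for i, vec in enumerate(vectors):
        for g in groups:
            if dist(vectors[g[0]], vec) <= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def opinion_leader(group, vectors):
    """Member with the smallest average distance to the other members."""
    return min(group, key=lambda i: sum(dist(vectors[i], vectors[j])
                                        for j in group if j != i) / max(len(group) - 1, 1))

# Hypothetical 2-dimensional variable series for four users.
vectors = [(0, 0), (1, 0), (0, 1), (10, 10)]
groups = group_users(vectors, threshold=2)
print(groups)                              # → [[0, 1, 2], [3]]
print(opinion_leader(groups[0], vectors))  # → 0
```

User 3 remains ungrouped; step S243's frequent-pattern mining and recommendation/matching/competition mechanisms are what would then draw such a user toward an existing group.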
With the above method, audio-visual knowledge content from multiple sources can be screened without a large amount of manpower, and optimized linked-learning modes for various learning subjects can be generated. The recommended learning combination can be recommended to other users with the same or similar presumptive learning modes, so that the audio-visual knowledge content from multiple sources optimizes joint learning. The present invention thus solves the problems that traditional audio-visual knowledge content requires a great amount of manpower to collate and is difficult to link with the audio-visual knowledge content of regular, systematic educational training.

Claims (33)

1. An audio-visual teaching platform, comprising:
an analysis subsystem, which generates a piece of learning behavior data related to an interaction factor at any interaction time point of a user on the time axis of a piece of audio-visual knowledge content, through a piece of user grouping data, an image object sequence, an audio sequence, a message sequence and a time-sequence interaction sequence;
a recommendation subsystem, connected to the analysis subsystem, for generating at least one inferred learning mode according to the learning behavior data of the video and audio knowledge content from multiple sources watched by at least one user, and generating a successive learning experiment combination according to the inferred learning mode, so as to verify the successive learning experiment combination according to any learning mode in the inferred learning mode, and when the successive learning experiment combination meets the verification requirement, using the successive learning experiment combination as a recommended learning combination; and
the training subsystem is connected with the recommending subsystem, sets each interaction time point in each audio-visual knowledge content according to the recommending and learning combination, transmits each interaction time point in each audio-visual knowledge content to a database, and provides at least one set audio-visual knowledge content for a user so as to provide the audio-visual knowledge content for the user to watch and learn.
2. The video and audio teaching platform of claim 1, wherein the analysis subsystem further comprises:
an integrated multi-sequence analysis module, which generates the learning behavior data related to the interaction factor at any one of the interaction time points of the user on the time axis of the video knowledge content according to the user clustering data, the image object sequence, the audio sequence, the message sequence and the time sequence interaction sequence;
the user-defined interaction component is arranged on the video and audio knowledge content interface and comprises a plurality of interaction factors, and each interaction factor provides input interaction data;
the user analysis module calculates distances and groups the historical learning behavior data of all users in a database according to the historical learning behavior data, classifies the current user into a corresponding user group, and generates the user group data related to the group classification result;
the audio-visual content analysis module marks objects in the image data belonging to the audio-visual knowledge content to generate an image object sequence related to each time point in the audio-visual knowledge content, and performs audio analysis on the audio data belonging to the audio-visual knowledge content, calculates the pitch of each sound frame and generates an audio sequence related to each time point in the audio-visual knowledge content;
and the message analysis module is used for classifying the purpose of the interactive data by using the text extracted from the database to generate the message sequence related to the purpose of the interactive data.
3. The video and audio teaching platform of claim 1, wherein the recommendation subsystem comprises:
an exploration module for generating at least one of said inferred learning patterns based on said learning behavior data of at least one user viewing said audiovisual knowledge content from multiple sources;
an experiment module connected with the exploration module to receive the estimated learning mode and generate a subsequent learning experiment combination according to the estimated learning mode;
and the verification module is connected with the experiment module and the exploration module to receive the estimated learning mode and the continuous learning experiment combination, verifies the continuous learning experiment combination according to any learning mode in the estimated learning mode, and takes the continuous learning experiment combination as the recommended learning combination when the continuous learning experiment combination meets the verification requirement.
4. The video and audio teaching platform of claim 1 wherein the training subsystem generates a knowledge map according to each of the recommended learning combinations and transmits the knowledge map to the database.
5. An analysis subsystem of a video and audio teaching platform, comprising:
an integrated multi-sequence analysis module, which generates a piece of learning behavior data related to an interaction factor at any interaction time point of a user on the time axis of a piece of audio-visual knowledge content, through a piece of user grouping data, an image object sequence, an audio sequence, a message sequence and a time-sequence interaction sequence;
the user-defined interaction component is arranged on an audio-visual knowledge content interface and comprises a plurality of interaction factors, and each interaction factor provides input interaction data;
a user analysis module, which calculates the distance and groups the historical learning behavior data of all users in a database according to each historical learning behavior data, and classifies the current user into the corresponding user group to generate the user group data related to the group result;
the audio-visual content analysis module marks objects in the image data belonging to the audio-visual knowledge content to generate an image object sequence related to each time point in the audio-visual knowledge content, and performs audio analysis on the audio data belonging to the audio-visual knowledge content, calculates the pitch of each sound frame and generates an audio sequence related to each time point in the audio-visual knowledge content;
a message analysis module, which uses the extracted text in the database to classify the purpose of the interactive data, and generates the message sequence related to the purpose of the interactive data;
and the time sequence interaction module is used for correlating the input time of the interaction data with the corresponding time points in the time axis of the video knowledge content to generate interaction time points, and then combining each interaction time point in the time axis of the video knowledge content to generate the time sequence interaction sequence.
6. The analysis subsystem of the video and audio teaching platform of claim 5, wherein the interaction factors include question asking, question answering, note taking, key-point marking, emoticons, fast-forwarding, and rewinding.
7. The audio-visual platform analysis subsystem of claim 5, wherein the user-defined interaction component further sets the weights of the interaction factors, and generates the learning behavior data corresponding to each user group and the interaction factors by associating the integrated multi-sequence analysis module with the user group data.
8. An analysis method for a video and audio teaching platform is characterized by comprising the following steps:
inputting an interaction data in an interaction factor on a user-defined interaction component arranged on a video and audio knowledge content interface;
performing distance calculation and clustering on the historical learning behavior data of all users in a database through a user analysis module according to each piece of historical learning behavior data, classifying the current user into a corresponding user group, and generating user grouping data;
the method comprises the following steps that image data belonging to video knowledge content is marked through a video content analysis module, and an image object sequence related to each time point in the video knowledge content is generated;
the audio data in the audio-visual knowledge content is subjected to audio analysis by calculating the pitch of each sound frame of the audio data through the audio-visual content analysis module to generate an audio sequence related to each time point in the audio-visual knowledge content;
extracting texts in the database, and performing purpose classification on the interactive data through a message analysis module to generate a message sequence related to the purpose of the interactive data;
generating an interaction time point by the interaction data input time and each corresponding time point in the video knowledge content through a time sequence interaction module, and then combining each interaction time point in the time axis of the video knowledge content to generate a time sequence interaction sequence; and
and associating the user grouping data, the image object sequence, the audio sequence, the message sequence and the time sequence interaction sequence with the interaction data through an integrated multi-sequence analysis module, and generating learning behavior data at any time point in the video knowledge content.
9. The method of claim 8, wherein the interaction factors include question asking, question answering, note taking, key-point marking, emoticons, fast-forwarding, and rewinding.
10. The method for analyzing video and audio teaching platform according to claim 8, further comprising:
setting the weight of the interaction factor, and generating the learning behavior data corresponding to each user group and the interaction factor through the association of the integrated multi-sequence analysis module and the user group data.
11. The method of claim 8, wherein the time sequence interaction module collects the interaction time points and the corresponding learning behavior data on the time axis of the video knowledge content to generate a long-term and short-term memory model.
12. The method of claim 11, wherein the current user is compared with the long-term and short-term memory model, and classified into a group of users with similar learning behaviors in the long-term and short-term memory model according to the learning behavior data.
13. The method of claim 12, wherein the group of users belonging to the current user is determined through the long-term and short-term memory model, and an attention index is evaluated according to the distribution of the interaction time points of the group of users on the time axis of the audio-visual knowledge content and the learning behavior data corresponding to the interaction time points.
14. The method of claim 13, wherein the long-term and short-term memory model is used to determine the user groups to which the current user belongs, and the interaction factors on the user-defined interaction components are activated to provide the input interaction data according to the time interval of the user groups with lower attention index on the time axis of the audio-visual knowledge content.
15. A recommendation subsystem of an audio-visual teaching platform, comprising:
the exploration module is used for generating at least one presumptive learning mode according to learning behavior data of at least one user watching the video and audio knowledge contents from multiple sources;
an experiment module connected with the exploration module to receive the estimated learning mode and generate a subsequent learning experiment combination according to the estimated learning mode;
and the verification module is connected with the experiment module and the exploration module to receive the estimated learning mode and the continuous learning experiment combination, verifies the continuous learning experiment combination according to any learning mode in the estimated learning mode, and takes the continuous learning experiment combination as a recommended learning combination when the continuous learning experiment combination meets the verification requirement.
16. The recommendation subsystem of claim 15, wherein the exploration module classifies each of the audiovisual knowledge content into at least one knowledge cluster according to its source and an associated learning topic, and uses at least one learning behavior, a custom expression, or a combination thereof of a user for one of the audiovisual knowledge content as a key indicator.
17. The recommendation subsystem of video/audio teaching platform of claim 16, wherein the exploration module captures a time series data of each learning behavior of each user in the knowledge clusters, and the exploration module uses a decision tree classification model to separate the learning behaviors of each user in each knowledge cluster according to each key pointer to generate a plurality of learning patterns, and calculates a key threshold value best or set in the key pointer of each learning pattern to output as at least one of the inferred learning patterns.
18. The recommendation subsystem of claim 17, wherein the learning behavior is access behavior at each time point and interaction data for each of the audiovisual knowledge content when each user views each of the audiovisual knowledge content.
19. The recommendation subsystem of claim 18, wherein the experiment module selects one of the learning modes and one of the knowledge clusters to generate the subsequent learning experiment set, the subsequent learning experiment set includes at least one of audiovisual knowledge content, the learning behavior and an experiment key indicator, and the audiovisual knowledge content of the subsequent learning experiment set may be viewed or not viewed by the user, and the learning behavior is the same as or different from the selected learning mode, and the experiment key indicator may be the same as or different from the key indicator of the selected learning mode.
20. The recommendation subsystem of claim 19, wherein the experiment module enables an external testing group to invoke the subsequent learning experiment combination for performing an experiment via a connection interface.
21. The recommendation subsystem of video and audio teaching platform of claim 20, wherein said experiment module determines whether other users have browsed each of said video and audio knowledge contents of said sequential learning experiment combination and practiced said learning behavior, and when the requirements of each of said experiment key indicators are met, the experiment is deemed to be successful.
22. The recommendation subsystem of the video and audio teaching platform of claim 21, wherein the experiment module determines that the experiment fails when other users have not browsed each piece of video and audio knowledge content of the continued learning experiment combination, have not practiced the learning behavior, or have not met the requirement of each experiment key indicator.
23. The recommendation subsystem of video and audio teaching platform of claim 17, wherein the verification module verifies that the continued learning experiment combination meets a verification requirement according to at least one of the learning modes, and then uses the continued learning experiment combination as the recommended learning combination.
24. The recommendation subsystem of video and audio teaching platform of claim 23, wherein at least one of the learning patterns accepted by the verification module is used as a priori distribution data to calculate posterior distribution data, after the experiment module completes the experiment, the posterior distribution data of one of the randomly sampled learning behaviors and the time series data of the subsequent learning experiment combination are used to calculate a distribution distance therebetween, which is defined as whether a similarity between the two matches a verification threshold, and if the similarity is determined to match the verification threshold, the subsequent learning experiment combination is used as the recommended learning combination.
25. The recommendation subsystem of claim 24, wherein if the verification module determines that the verification threshold is not met, the verification module determines that the experiment fails, and regenerates another subsequent learning experiment combination for further experiment and verification procedure.
26. The recommendation subsystem of audio-visual teaching platform of claim 17, further comprising:
an optimal practice module, which converts the time-series data of the learning behavior of any user and the information of the audio-visual knowledge content into a series of variables, takes each user as a node, calculates the clustering distance between the series variables of the users and uses it as a clustering reference, and optionally further searches each clustered group for the user with the smallest clustering distance to serve as a group opinion leader.
27. The recommendation subsystem of claim 26, wherein the best practice module uses frequent pattern mining to calculate, under each knowledge cluster, the ungrouped users that best match the recommended learning mode being browsed, and actively draws those ungrouped users toward one of the clustered groups through recommendation, matching and competition mechanisms.
28. A recommendation method for an audio-visual teaching platform is applied to an electronic device, and the following steps are executed on the electronic device:
generating at least one presumptive learning mode according to learning behavior data of at least one user watching each video and audio knowledge content from multiple sources by using a exploration module;
generating a continuous learning experiment combination by an experiment module according to the presumption learning mode, and determining that the experiment is successful when the experiment module judges that the continuous learning experiment combination meets the requirement of an experiment key pointer;
verifying whether the continuous learning experiment combination meets a verification requirement or not by a verification module according to any learning mode; and
and when the verification module judges that the continuous learning experiment combination meets the requirements, the verification module takes the continuous learning experiment combination as a recommended learning combination.
29. The method of claim 28, wherein when the verification module generates the recommended learning combination, a best practices module recommends the recommended learning combination to other users in need of the inferred learning mode.
30. The method for recommending a video and audio teaching platform according to claim 28, wherein said process of generating said inferred learning mode by said exploring module comprises the steps of:
defining a knowledge cluster: the exploration module classifies the video knowledge contents into at least one knowledge cluster according to the sources and the same or similar learning subjects;
defining key indexes: the exploration module takes at least one learning behavior, a self-defined expression or a combination of the learning behavior and the self-defined expression of one of the video and audio knowledge contents as a key pointer; and
the exploration module generates at least one of the putative learning modes: the exploration module is used for acquiring time series data of each learning behavior of each user in the knowledge cluster, and using a decision tree classification model to distinguish the learning behavior of each user in the knowledge cluster according to each key pointer to generate a plurality of learning modes, so as to calculate a key threshold value which is optimal or set in the key pointer of each learning mode, and output the key threshold value as at least one estimated learning mode.
31. The method of claim 30, wherein the step of generating the subsequent learning experiment combination by the experiment module according to the inferred learning mode further comprises:
the experiment module selects one of the learning modes as an experiment learning mode and one of the knowledge clusters to generate the subsequent learning experiment combination;
the experiment module judges whether other users browse the audio-visual knowledge contents of the continuous learning experiment combination and practice the learning behaviors;
when the experiment module judges that other users browse the audio-visual knowledge contents of the continuous learning experiment combination and practice the learning behaviors, judging whether the requirements of reaching the experiment key indexes exist;
when the experiment module judges that the requirement of the experiment key index is met, the experiment is successful;
when the experiment module judges that other users have not browsed the audio-visual knowledge content of the continuous learning experiment combination, have not practiced one of the learning behaviors, or have not met the requirement of the experiment key indicator, another presumptive learning mode is regenerated.
32. The recommendation method for video and audio teaching platform according to claim 31, wherein the verification module verifies the continuing learning experiment combination in a way that includes the following steps:
defining at least one learning mode accepted by the verification module as a piece of pre-distribution data, and calculating post-distribution data according to the pre-distribution data;
after the experiment module finishes the experiment, calculating the distribution distance between the post distribution data after random sampling and the time sequence data of the practical learning behavior of the continuous learning experiment combination;
defining whether the similarity degree of the two accords with a verification threshold value;
when the similarity degree accords with the verification threshold, taking the continuous learning experiment combination as the recommended learning combination;
and when the similarity does not meet the verification threshold, determining that the experiment fails, regenerating a new continuous learning experiment combination, and performing the experiment and verification again.
33. The method for recommending a video and audio teaching platform according to claim 29, wherein said best practice module is performed according to the following steps:
converting the time sequence data of the learning behavior of the user and the identity information of the user into a series of variables;
using each user as a node, calculating the clustering distance between the series variables of the users and using it as a clustering reference, classifying the users who meet different clustering-reference thresholds into at least one clustered group, or further searching each clustered group for the user with the smallest average clustering distance to serve as a group opinion leader;
and under each knowledge cluster, using frequent pattern mining to calculate the ungrouped users that best match the recommended learning mode being browsed, and drawing them toward one of the clustered groups through recommendation, matching or competition mechanisms, or a combination of any two or more thereof.
CN201910881880.0A 2018-10-18 2019-09-18 Video and audio teaching platform, analysis subsystem and method, recommendation subsystem and method Pending CN111081095A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
TW107136683A TWI722327B (en) 2018-10-18 2018-10-18 Audio-visual content and user interaction sequence analysis system and method
TW107136684A TW202016869A (en) 2018-10-18 2018-10-18 Recommended method and system for learning video/audio knowledge content including an exploration module, an experiment module and a verification module
TW107136683 2018-10-18
TW107136684 2018-10-18

Publications (1)

Publication Number Publication Date
CN111081095A true CN111081095A (en) 2020-04-28

Family

ID=70310194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910881880.0A Pending CN111081095A (en) 2018-10-18 2019-09-18 Video and audio teaching platform, analysis subsystem and method, recommendation subsystem and method

Country Status (1)

Country Link
CN (1) CN111081095A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230089790A1 (en) * 2021-09-20 2023-03-23 International Business Machines Corporation Constraint-based multi-party image modification

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101106694A (en) * 2006-07-12 2008-01-16 鸿友科技股份有限公司 Interactive integration and playing system for multimedia video/audio archive
CN102136023A (en) * 2010-01-21 2011-07-27 中华电信股份有限公司 Video-audio interaction system and method
CN103034500A (en) * 2011-12-15 2013-04-10 微软公司 Intelligent mode recommendation based on user input
US20130262136A1 (en) * 2012-03-28 2013-10-03 Diagnosisone, Inc. Method And System For Improving Quality Of Care And Safety And Continuous Physician And Patient Learning
CN103778580A (en) * 2014-01-21 2014-05-07 孙景琪 Wireless type classroom teaching interacting method based on mobile phones
CN105095516A (en) * 2015-09-16 2015-11-25 中国传媒大学 Broadcast television subscriber grouping system and method based on spectral clustering integration
CN105959372A (en) * 2016-05-06 2016-09-21 华南理工大学 Internet user data analysis method based on mobile application
CN106131042A (en) * 2016-07-29 2016-11-16 广东小天才科技有限公司 A kind of method and system of online interaction study
CN106408475A (en) * 2016-09-30 2017-02-15 中国地质大学(北京) Online course applicability evaluation method

Similar Documents

Publication Publication Date Title
US20200051450A1 (en) Audio-visual teaching platform and recommendation subsystem, analysis subsystem, recommendation method, analysis method thereof
CN107230174B (en) Online interactive learning system and method based on network
CN110232109A (en) A kind of Internet public opinion analysis method and system
Geng et al. Understanding the focal points and sentiment of learners in MOOC reviews: A machine learning and SC‐LIWC‐based approach
CN111833853B (en) Voice processing method and device, electronic equipment and computer readable storage medium
KR20180105693A (en) Digital media content extraction and natural language processing system
CN111723784B (en) Risk video identification method and device and electronic equipment
CN114254208A (en) Identification method of weak knowledge points and planning method and device of learning path
CN111931073B (en) Content pushing method and device, electronic equipment and computer readable medium
CN111524578A (en) Psychological assessment device, method and system based on electronic psychological sand table
Chu et al. Click-based student performance prediction: A clustering guided meta-learning approach
CN113705191A (en) Method, device and equipment for generating sample statement and storage medium
CN114357204B (en) Media information processing method and related equipment
CN116049557A (en) Educational resource recommendation method based on multi-mode pre-training model
Meddeb et al. Personalized smart learning recommendation system for arabic users in smart campus
Shan et al. [Retracted] Research on Classroom Online Teaching Model of “Learning” Wisdom Music on Wireless Network under the Background of Artificial Intelligence
CN115310520A (en) Multi-feature-fused depth knowledge tracking method and exercise recommendation method
CN111081095A (en) Video and audio teaching platform, analysis subsystem and method, recommendation subsystem and method
CN116226410A (en) Teaching evaluation and feedback method and system for knowledge element connection learner state
CN113590772A (en) Abnormal score detection method, device, equipment and computer readable storage medium
TW202016869A (en) Recommended method and system for learning video/audio knowledge content including an exploration module, an experiment module and a verification module
Pradeep et al. Web app for quick evaluation of subjective answers using natural language processing
CN117635381B (en) Method and system for evaluating computing thinking quality based on man-machine conversation
Vaidhehi et al. Design of a context-aware recommender systems for undergraduate program recommendations
KR20190052320A (en) Apparatus for providing personalized contents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200428