CN114444982A - Teaching quality monitoring system based on internet education - Google Patents

Teaching quality monitoring system based on internet education

Info

Publication number
CN114444982A
CN114444982A (application CN202210361099.2A)
Authority
CN
China
Prior art keywords
information
headset
video
teaching
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210361099.2A
Other languages
Chinese (zh)
Other versions
CN114444982B (en)
Inventor
李全
杨金雄
许长城
张金合
Current Assignee
Oxbridge Education & Technology Shenzhen Co ltd
Original Assignee
Oxbridge Education & Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Oxbridge Education & Technology Shenzhen Co ltd
Priority to CN202210361099.2A
Publication of CN114444982A
Application granted
Publication of CN114444982B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 - Operations research, analysis or management
    • G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06395 - Quality analysis or management
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/20 - Education
    • G06Q 50/205 - Education administration or guidance
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 - Electrically-operated educational appliances
    • G09B 5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G09B 5/08 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations


Abstract

The invention provides a teaching quality monitoring system based on internet education, relating to the technical field of internet education. At least one detection instruction is generated while a teaching video is playing; after a detection instruction is generated, the audio information of the teaching video within the n seconds before the playing progress is acquired. The audio information is then mapped into standard light-variation information, the blinking of a headset status light is controlled based on that information while an image to be identified containing the headset status light is captured, and the light-variation information to be matched of the headset status light is then extracted. The light-variation information to be matched is matched against the standard light-variation information; if, by the end of the video, all matching results for a student terminal are successful, a teaching quality evaluation invitation is sent to that student terminal. Every student terminal that receives the invitation has therefore watched the complete teaching video, preventing inattentive students from distorting the accuracy of internet-education quality evaluation.

Description

Teaching quality monitoring system based on internet education
Technical Field
The invention relates to the technical field of internet education, in particular to a teaching quality monitoring system based on internet education.
Background
"Internet + education" is a new form of education that combines internet technology with the field of education. At present, the most common form of internet education is students learning by watching teaching videos online. Monitoring the teaching quality of internet education is an important means of improving it, and teaching quality is usually assessed by testing or scoring after learning. Because of the particularities of internet education, however, the students' learning process cannot be accurately controlled: students may not be paying attention while the teaching video plays, which distorts the test or scoring results and thus the accuracy of teaching-quality evaluation.
To address this problem, internet education currently relies on face recognition technology to ensure that a student is attentively in front of the screen while watching a teaching video.
However, existing face recognition technology has the following drawbacks:
first, it puts personal privacy at risk: face recognition requires collecting students' facial information, and once a third-party education institution holds that information there is a risk of leakage;
second, most devices lack a depth-of-field camera, and an ordinary camera cannot reliably recognize faces at night or in other low-light scenes, which limits the usable scenarios;
finally, 2D-based face recognition is easy to cheat and its detection is easily bypassed, for example by playing a pre-recorded video, making the recognition result inaccurate.
Disclosure of Invention
Technical problem to be solved
In view of the shortcomings of the prior art, the invention provides a teaching quality monitoring system based on internet education, which solves the problems of low security, accuracy and applicability that existing face recognition technology has in teaching-quality monitoring for internet education.
(II) technical scheme
To achieve this purpose, the invention is realized by the following technical solution:
a teaching quality monitoring system based on internet education, comprising: a server side and a student side;
the server side includes:
the detection instruction generation module is used for generating at least one detection instruction in the process of playing the teaching video;
the playing progress acquiring module, used for acquiring the playing progress of the teaching video at the student terminal after a detection instruction is generated;
the audio acquisition module, used for acquiring the audio information of the teaching video within the n seconds before the playing progress;
the light-variation information mapping module, used for mapping the audio information into standard light-variation information and sending it to the corresponding student terminal;
the teaching quality evaluation invitation module, used for sending a teaching quality evaluation invitation to the student terminal if, by the end of the video, all matching results for that student terminal are successful;
the student terminal includes:
the teaching video playing module, used for playing the teaching video and outputting its audio through the headset;
the headset status light control module, used for controlling the blinking of the headset status light based on the standard light-variation information;
the image acquisition module, used for acquiring the image to be identified containing the headset status light after the standard light-variation information is received;
the light-variation information identification module, used for acquiring the light-variation information to be matched of the headset status light from the video to be identified, matching it against the standard light-variation information, and sending the matching result to the server side.
Further, the student terminal further comprises a headset-wearing detection module, used for determining whether the user has taken off the headset;
after the student terminal detects that the headset has been taken off, the detection instruction generation module obtains the playing state of the teaching video: if playback is paused, no detection instruction is generated; if the video is playing, a detection instruction is generated.
Further, the light-variation information mapping module comprises:
the audio-to-text conversion unit, used for converting the audio information into text information;
the text analysis unit, used for extracting the noun sequence from the text information, the noun sequence being the sequence of all extracted nouns arranged in their original order in the text information;
the mapping unit, used for converting the noun sequence into binarized standard light-variation information according to a preset rule, the preset rule comprising converting the character count of each noun into the corresponding number of 0s and placing a 1 between every two adjacent nouns;
the standard light-variation information transmission unit, used for transmitting the standard light-variation information to the corresponding student terminal.
Further, extracting the noun sequence from the text information comprises:
segmenting the text information into words, obtaining the part of speech of each segmentation result, and keeping the segments whose part of speech is noun, in their original order in the text, to obtain the noun sequence.
Further, controlling the blinking of the headset status light based on the standard light-variation information comprises:
in the binarized light-variation information, 0 means the headset status light is off and 1 means it is on; the off duration is t = α × mi and the on duration is t = α, where mi is the character count of the i-th noun in the noun sequence and α is a preset unit duration.
Further, the light-variation information identification module comprises:
the headset status light identification unit, used for locating the headset status light in the video to be identified based on a target recognition algorithm;
the coding unit, used for converting the blinking of the headset status light into binarized light-variation information to be matched; when no light-variation information has been received, the headset status light remains continuously on.
Further, acquiring the audio information of the teaching video within the n seconds before the playing progress comprises: acquiring the audio information of the teaching video within the 30 seconds before the playing progress.
Further, the headset status light is the working-status indicator light of the microphone.
(III) advantageous effects
1. At least one detection instruction is generated while the teaching video is playing, and after a detection instruction is generated the playing progress of the teaching video at the student terminal is acquired; the audio information of the teaching video within the n seconds before the playing progress is then acquired and mapped into standard light-variation information, which is sent to the corresponding student terminal. The blinking of the headset status light is controlled based on the standard light-variation information while the image to be identified containing the headset status light is captured; the light-variation information to be matched of the headset status light is then extracted from the video to be identified, matched against the standard light-variation information, and the matching result is sent to the server side. If, by the end of the video, all matching results for a student terminal are successful, the student has watched the teaching video in full, and a teaching quality evaluation invitation is finally sent to that student terminal. Every student terminal that receives the invitation has therefore watched the complete teaching video, so the data are highly reliable, and the accuracy of internet-education quality evaluation is not distorted by students who do not pay attention in class.
2. Whether the student has watched the teaching video in full is identified solely from the on/off sequence of the headset status light, without collecting the student's facial information, so personal privacy is protected; and because identification needs only light-variation information, the system adapts to a wide range of usage environments.
3. In the invention, the standard light-variation information depends on the generation time of the detection instruction, on the noun sequence of the teaching video's audio information, and other factors, so it cannot be known in advance; cheating by pre-recording is therefore impossible, and the security of the system is high.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a system block diagram of an embodiment of the present invention;
FIG. 2 is a flow chart of an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
An embodiment of the present invention provides a teaching quality monitoring system based on internet education which, as shown in FIG. 1, comprises:
a server side and a student side;
wherein, the server end includes:
the detection instruction generation module is used for generating at least one detection instruction in the process of playing the teaching video;
the playing progress acquiring module, used for acquiring the playing progress of the teaching video at the student terminal after a detection instruction is generated;
the audio acquisition module, used for acquiring the audio information of the teaching video within the n seconds before the playing progress;
the light-variation information mapping module, used for mapping the audio information into standard light-variation information and sending it to the corresponding student terminal;
the teaching quality evaluation invitation module, used for sending a teaching quality evaluation invitation to the student terminal if, by the end of the video, all matching results for that student terminal are successful;
and the student side includes:
the teaching video playing module, used for playing the teaching video and outputting its audio through the headset;
the headset status light control module, used for controlling the blinking of the headset status light based on the standard light-variation information;
the image acquisition module, used for acquiring the image to be identified containing the headset status light after the standard light-variation information is received;
the light-variation information identification module, used for acquiring the light-variation information to be matched of the headset status light from the video to be identified, matching it against the standard light-variation information, and sending the matching result to the server side.
Further, the student terminal further comprises a headset-wearing detection module, used for determining whether the user has taken off the headset;
after the student terminal detects that the headset has been taken off, the detection instruction generation module obtains the playing state of the teaching video: if playback is paused, no detection instruction is generated; if the video is playing, a detection instruction is generated.
Further, the light-variation information mapping module comprises:
the audio-to-text conversion unit, used for converting the audio information into text information;
the text analysis unit, used for extracting the noun sequence from the text information, the noun sequence being the sequence of all extracted nouns arranged in their original order in the text information;
the mapping unit, used for converting the noun sequence into binarized standard light-variation information according to a preset rule, the preset rule comprising converting the character count of each noun into the corresponding number of 0s and placing a 1 between every two adjacent nouns;
the standard light-variation information transmission unit, used for transmitting the standard light-variation information to the corresponding student terminal.
Further, extracting the noun sequence from the text information comprises:
segmenting the text information into words, obtaining the part of speech of each segmentation result, and keeping the segments whose part of speech is noun, in their original order in the text, to obtain the noun sequence.
Further, controlling the blinking of the headset status light based on the standard light-variation information comprises:
in the binarized light-variation information, 0 means the headset status light is off and 1 means it is on; the off duration is t = α × mi and the on duration is t = α, where mi is the character count of the i-th noun in the noun sequence and α is a preset unit duration.
Further, the light-variation information identification module comprises:
the headset status light identification unit, used for locating the headset status light in the video to be identified based on a target recognition algorithm;
the coding unit, used for converting the blinking of the headset status light into binarized light-variation information to be matched; when no light-variation information has been received, the headset status light remains continuously on.
Further, acquiring the audio information of the teaching video within the n seconds before the playing progress comprises: acquiring the audio information of the teaching video within the 30 seconds before the playing progress.
Further, the headset status light is the working-status indicator light of the microphone.
The following describes the implementation process of the present embodiment in detail:
hardware information:
In this application, the student terminal may be a desktop computer, laptop, smart tablet or similar device equipped with the corresponding software; each has a display, a headset, a network camera and other components so that the full internet-education functionality can be realized.
A headset, unlike an ordinary pair of earphones, integrates earphones with a microphone. When worn, the microphone sits in front of the user's mouth. To make the microphone's working state easy to see, a status indicator light is usually mounted on it, with two modes: off (not working) and on (working); because internet education requires interaction, the default state is the working state.
The ear cup of the headset may additionally be fitted with a light sensor or similar component to form the headset-wearing detection module, enabling simple recognition of whether the headset is being worn.
Based on the above environment, the specific implementation steps of this embodiment are as follows:
S1, after the student logs into his or her account, the student selects the teaching video to be learned on the server side, and the teaching video playing module of the student terminal then plays the teaching video.
Referring to fig. 2:
S2, during teaching video playback, the detection instruction generation module generates at least one detection instruction.
In a specific implementation, when the headset is taken off the user may have left the seat while the video continues playing, so a detection is needed; limited by current technology, however, headset-wearing detection can be falsely triggered. Therefore, for headsets with a wearing-detection module, the following trigger mechanism for generating detection instructions may be adopted:
after the student terminal detects that the headset has been taken off, the playing state of the teaching video is obtained; if playback is paused, no detection instruction is generated, and if the video is playing, a detection instruction is generated.
For devices without a headset-wearing detection module, detection instructions may be generated periodically, for example once every ten minutes.
To ensure reliable detection, the two triggering mechanisms may also be combined.
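The combined triggering logic above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the function name, arguments, and the ten-minute constant are illustrative, and a real student terminal would feed in live headset and playback state.

```python
PERIOD = 600  # periodic fallback interval: ten minutes, as in the example above

def should_generate_instruction(headset_taken_off, video_playing,
                                now, last_periodic):
    """Headset trigger: fires only when the headset comes off while the
    video is playing (a pause generates no instruction).
    Periodic trigger: fires every PERIOD seconds while the video plays."""
    if headset_taken_off and video_playing:
        return True
    if video_playing and now - last_periodic >= PERIOD:
        return True
    return False

print(should_generate_instruction(True, False, 0, 0))    # paused -> False
print(should_generate_instruction(True, True, 0, 0))     # headset off while playing -> True
print(should_generate_instruction(False, True, 650, 0))  # periodic fallback -> True
```

Gating both triggers on `video_playing` reflects the rule in S2 that a paused video never generates a detection instruction.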
S3, after a detection instruction is generated, the playing progress acquiring module on the server side acquires the playing progress of the teaching video at the student terminal.
S4, the audio acquisition module acquires the audio information of the teaching video within the n seconds before the playing progress.
In implementation, to ensure that a noun sequence containing enough nouns can be generated later, n is set to no less than 30 s.
S5, the light-variation information mapping module maps the audio information into standard light-variation information and sends it to the corresponding student terminal.
In specific implementation, the method comprises the following steps:
and S5.1, converting the audio information into text information by using a voice recognition algorithm.
And S5.2, extracting noun sequences in the text information. The method specifically comprises the following steps:
s5.2.1, segmenting the text information,
s5.2.2, obtaining the word type of each word segmentation result,
s5.2.3, keeping the word segmentation result of the word type as noun and the corresponding sequence in the text information, and obtaining the noun sequence.
The noun sequence is a sequence in which all extracted nouns in the text information are arranged in sequence.
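The filtering step S5.2.3 can be sketched as follows. This is an assumption-laden illustration: a real system would use a Chinese word segmenter and part-of-speech tagger (segmentation and tagging are stubbed out here as pre-tagged (word, pos) pairs, with "n" standing for noun), and the example words are taken from the refraction passage below.

```python
def extract_noun_sequence(tagged_words):
    """S5.2.3: keep only the noun segments, preserving their original
    order in the text, to obtain the noun sequence."""
    return [word for word, pos in tagged_words if pos == "n"]

# Hypothetical tagger output for a fragment of the refraction example;
# the part-of-speech flags here are illustrative.
tagged = [("光", "n"), ("的", "u"), ("折射", "n"), ("是", "v"),
          ("指", "v"), ("方向", "n")]
print(extract_noun_sequence(tagged))  # ['光', '折射', '方向']
```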
As an example, suppose the extracted audio information is: "The refraction of light means that when light passes obliquely from one medium into another medium, its propagation direction changes, so that the light ray is deflected at the interface between the different media. This belongs to the refraction phenomenon of light."
After word segmentation, the following segments are obtained: "light", "refraction", "means", "light", "from", "one", "medium", "obliquely", "enters", "another", "medium", "when", "propagation", "direction", "occurs", "change", "thereby", "causing", "ray", "at", "different", "media", "of", "interface", "occurs", "deflection", "phenomenon".
According to the part-of-speech analysis, the nouns are:
"light", "medium", "direction", "ray", "medium", "interface", "phenomenon".
Thus the resulting noun sequence is denoted: {light, medium, direction, ray, medium, interface, phenomenon}.
S5.3, the noun sequence is converted into binarized standard light-variation information according to the preset rule: the character count of each noun becomes the corresponding number of 0s, and a 1 is placed between every two adjacent nouns.
As an example, for the noun sequence {light, medium, direction, ray, medium, interface, phenomenon}: in the original Chinese, the character count of "light" is 1, of "medium" is 2, and of "interface" is 3, so the converted binarized standard light-variation information can be recorded (with dashes as visual separators only) as: 0-1-00-1-00-1-00-1-00-1-000-1-00.
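The preset rule of S5.3 reduces to a one-line encoding once the character counts of the nouns are known. A minimal sketch (the function name is illustrative; the dashes in the text above are only visual separators and do not appear in the encoded string):

```python
def encode_noun_sequence(char_counts):
    """Preset rule of S5.3: each noun contributes as many 0s as it has
    characters, and a single 1 separates every two adjacent nouns."""
    return "1".join("0" * m for m in char_counts)

# Character counts of {light, medium, direction, ray, medium, interface, phenomenon}
# in the original Chinese: 1, 2, 2, 2, 2, 3, 2.
print(encode_noun_sequence([1, 2, 2, 2, 2, 3, 2]))  # -> 01001001001001000100
```

Because the counts come from the audio near an unpredictable detection time, the resulting bit string differs from check to check, which is what defeats pre-recorded cheating.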
And S5.4, transmitting the standard light variation information to a corresponding student terminal.
S6, after the standard light-variation information is received, the headset status light control module at the student terminal controls the blinking of the headset status light based on it, while the image acquisition module captures the image to be identified containing the headset status light.
In a specific implementation, if no standard light-variation information has been received, the headset status light stays in the working state (continuously on). Controlling the blinking of the headset status light with the standard light-variation information comprises the following: in the binarized light-variation information, 0 means the status light is off and 1 means it is on; the off duration is t = α × mi and the on duration is t = α, where mi is the character count of the i-th noun in the noun sequence and α is a preset unit duration. Assuming α = 1 second, the standard light-variation information above is converted into the following blinking of the headset status light:
off 1 s (corresponding to 0) - on 1 s (1) - off 2 s (00) - on 1 s (1) - off 2 s (00) - on 1 s (1) - off 2 s (00) - on 1 s (1) - off 2 s (00) - on 1 s (1) - off 3 s (000) - on 1 s (1) - off 2 s (00).
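The conversion from a binarized string to timed on/off steps can be sketched as follows, assuming the α timing rule of S6 (t = α × mi for an off run, t = α per 1). The function name is illustrative:

```python
from itertools import groupby

def blink_schedule(bits, alpha=1.0):
    """Turn binarized light-variation information into (state, seconds) steps:
    a run of k zeros (one noun of k characters) -> light off for k * alpha;
    each 1 (a separator between adjacent nouns) -> light on for alpha."""
    schedule = []
    for bit, run in groupby(bits):
        k = len(list(run))
        if bit == "0":
            schedule.append(("off", k * alpha))
        else:
            schedule.extend([("on", alpha)] * k)
    return schedule

print(blink_schedule("0100100", alpha=1.0))
# -> [('off', 1.0), ('on', 1.0), ('off', 2.0), ('on', 1.0), ('off', 2.0)]
```

Well-formed standard light-variation information never contains two adjacent 1s, so each separator yields exactly one on interval of length α.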
The duration of the image sequence to be identified should be at least twice the value of n.
S7, the light-variation information identification module acquires the light-variation information to be matched of the headset status light from the video to be identified, matches it against the standard light-variation information, and sends the matching result to the server side.
In a specific implementation, the following steps may be adopted:
S7.1, the headset status light is located in the video to be identified based on a target recognition algorithm;
S7.2, the on and off periods of the headset status light are timed, so that its blinking can be converted into binarized light-variation information to be matched;
S7.3, the light-variation information to be matched is compared with the standard light-variation information. A successful match indicates that the student was watching the teaching video and had not left the seat; otherwise the student had left the seat while the teaching video was playing, i.e. the student did not watch the entire teaching video.
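Steps S7.2 and S7.3 amount to inverting the timing rule of S6 and comparing bit strings. A hedged sketch, assuming the observed on/off durations have already been measured from the video (the quantization by rounding to the nearest multiple of α is an illustrative choice, not stated in the patent):

```python
def decode_schedule(schedule, alpha=1.0):
    """S7.2: rebuild binarized light-variation information from timed
    ("on"/"off", seconds) observations of the status light, quantizing
    each duration to the nearest multiple of alpha."""
    bits = []
    for state, seconds in schedule:
        k = max(1, round(seconds / alpha))
        bits.append(("1" if state == "on" else "0") * k)
    return "".join(bits)

# S7.3: noisy measured durations still decode to the standard pattern.
observed = [("off", 1.05), ("on", 0.98), ("off", 2.1), ("on", 1.0), ("off", 1.9)]
standard = "0100100"
print(decode_schedule(observed) == standard)  # True -> matching succeeds
```

A tolerance-based comparison (rather than exact rounding) might be needed in practice to absorb camera frame-rate jitter.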
S8, if all matching results for the student terminal are successful by the time the video ends, the student has watched the teaching video in full and the evaluation data are reliable, so the teaching quality evaluation invitation module sends a teaching quality evaluation invitation to that student terminal.
In summary, compared with the prior art, the invention has the following beneficial effects:
1. At least one detection instruction is generated while the teaching video plays, and after a detection instruction is generated the playing progress of the teaching video at the student terminal is acquired. Audio information of the teaching video within n seconds before that playing progress is then acquired, mapped into standard light variation information, and sent to the corresponding student terminal. The student terminal controls the blinking of the headset status light according to the standard light variation information while capturing a video to be identified containing the headset status light, extracts the light variation information to be matched from that video, matches it against the standard light variation information, and sends the matching result to the server side. If every matching result for a student terminal is successful by the time the video ends, the student has watched the teaching video in full, and a teaching quality evaluation invitation is finally sent to that student terminal. Every student terminal that receives the invitation has therefore watched the complete teaching video, so the collected data are highly reliable, and the accuracy of internet education quality evaluation is not distorted by students who do not actually attend the class.
2. Whether a student has watched the teaching video in full is identified solely through the on/off sequence of the headset status light, without collecting the student's facial information, so personal privacy is protected; and because recognition relies only on light variation information, the system adapts to a wide range of usage environments.
3. In the invention, the standard light variation information depends on the generation time of the detection instruction, the noun sequence of the teaching video's audio, and other factors, so it cannot be known in advance; cheating by prerecording is therefore impossible, and the security of the system is high.
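The encoding and matching summarized in effect 1 can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the function names and the exact-equality match policy are assumptions based on the description above (each noun of m characters contributes m zeros, a single 1 separates adjacent nouns; 0 = status light off, 1 = status light on).

```python
def encode_nouns(noun_lengths):
    """Map noun character counts to the binary pattern: m zeros per noun,
    with a single 1 between adjacent nouns (0 = light off, 1 = light on)."""
    return "1".join("0" * m for m in noun_lengths)

def matches(standard, observed):
    """The match succeeds only when the captured pattern equals the standard."""
    return standard == observed

standard = encode_nouns([2, 3, 2])     # three hypothetical nouns of 2, 3, 2 characters
print(standard)                        # 001000100
print(matches(standard, "001000100"))  # True  -> student is watching
print(matches(standard, "000000000"))  # False -> seat likely vacated
```

Because the noun lengths derive from audio near an unpredictable playing progress, the expected pattern differs per detection instruction, which is what defeats prerecorded replies.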
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A teaching quality monitoring system based on internet education, characterized by comprising: a server side and a student terminal;
the server side includes:
the detection instruction generation module is used for generating at least one detection instruction while the teaching video is playing;
the playing progress acquisition module is used for acquiring the playing progress of the teaching video at the student terminal after the detection instruction is generated;
the audio acquisition module is used for acquiring audio information of the teaching video within n seconds before the playing progress;
the light variation information mapping module is used for mapping the audio information into standard light variation information and sending the standard light variation information to the corresponding student terminal;
and the teaching quality evaluation invitation module is used for sending a teaching quality evaluation invitation to the student terminal if every matching result for the student terminal is successful by the time the video ends;
the student terminal includes:
the teaching video playing module is used for playing the teaching video and outputting its audio through the headset;
the headset status light control module is used for controlling the blinking of the headset status light based on the standard light variation information;
the image acquisition module is used for capturing, after the standard light variation information is received, a video to be identified containing the headset status light;
and the light variation information identification module is used for acquiring the light variation information to be matched of the headset status light from the video to be identified, matching the light variation information to be matched against the standard light variation information, and sending the matching result to the server side.
2. The internet education-based teaching quality monitoring system of claim 1, wherein the student terminal further includes: a headset wearing detection module, used for detecting whether the user has taken off the headset;
wherein, after the student terminal detects that the headset has been taken off, the detection instruction generation module acquires the playing state of the teaching video; if the teaching video is paused, no detection instruction is generated, and if the teaching video is playing, a detection instruction is generated.
3. The internet education-based teaching quality monitoring system of claim 1, wherein the light variation information mapping module includes:
the audio-to-text conversion unit, used for converting the audio information into text information;
the text analysis unit, used for extracting a noun sequence from the text information, the noun sequence being all nouns extracted from the text information arranged in their original order;
the mapping unit, used for converting the noun sequence into binary standard light variation information according to a preset rule, wherein the preset rule comprises converting the character count of each noun into a corresponding number of 0s and setting a single 1 between two adjacent nouns;
and the standard light variation information transmission unit, used for transmitting the standard light variation information to the corresponding student terminal.
4. The internet education-based teaching quality monitoring system of claim 3, wherein extracting the noun sequence from the text information includes:
segmenting the text information, acquiring the part of speech of each segmented word, and retaining the segmented words whose part of speech is a noun, together with their order in the text information, to obtain the noun sequence.
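A minimal sketch of the extraction in claim 4. A real system would run a Chinese word segmenter with part-of-speech tagging (for example `jieba.posseg`); here hypothetical pre-tagged segmenter output stands in for the segmenter, and the tag name `"n"` for nouns is an assumption.

```python
def extract_noun_sequence(tagged_tokens):
    """Keep only the tokens tagged as nouns, preserving their text order."""
    return [word for word, pos in tagged_tokens if pos == "n"]

# Hypothetical (word, part-of-speech) pairs as a segmenter might return them;
# "n" marks a noun, other tags (verb, adverb, particle) are discarded.
tagged = [("老师", "n"), ("正在", "d"), ("讲解", "v"),
          ("函数", "n"), ("的", "u"), ("定义", "n")]
print(extract_noun_sequence(tagged))  # ['老师', '函数', '定义']
```

The character counts of the retained nouns (here 2, 2, 2) are what the mapping unit of claim 3 converts into runs of 0s.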
5. The internet education-based teaching quality monitoring system of claim 3, wherein controlling the blinking of the headset status light based on the standard light variation information comprises:
in the binarized light variation information, 0 indicates that the headset status light is off and 1 indicates that the headset status light is on; the off duration is calculated as t = α × m_i and the on duration as t = α, where m_i represents the character count of the i-th noun in the noun sequence and α represents a preset unit duration.
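The timing rule of claim 5 can be sketched as a schedule generator. The value of α and the (state, duration) pair representation are assumptions for illustration only.

```python
ALPHA = 0.5  # hypothetical preset unit duration alpha, in seconds

def blink_schedule(noun_lengths, alpha=ALPHA):
    """Return (state, duration) pairs: the light stays off for alpha * m_i
    for the i-th noun, and on for alpha between adjacent nouns."""
    schedule = []
    for i, m in enumerate(noun_lengths):
        schedule.append(("off", alpha * m))
        if i < len(noun_lengths) - 1:  # a single separating "1" between nouns
            schedule.append(("on", alpha))
    return schedule

print(blink_schedule([2, 3]))  # [('off', 1.0), ('on', 0.5), ('off', 1.5)]
```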
6. The internet education-based teaching quality monitoring system of claim 5, wherein the light variation information identification module comprises:
the headset status light identification unit, used for locating the headset status light in the video to be identified based on a target recognition algorithm;
and the coding unit, used for converting the blinking of the headset status light into the binary light variation information to be matched, the headset status light remaining constantly on when no light variation information has been received.
7. The internet education-based teaching quality monitoring system of claim 1, wherein acquiring audio information of the teaching video within n seconds before the playing progress includes: acquiring audio information of the teaching video within 30 seconds before the playing progress.
8. The internet education-based teaching quality monitoring system of claim 1, wherein the headset status light is a microphone operating status indicator light.
CN202210361099.2A 2022-04-07 2022-04-07 Teaching quality monitoring system based on internet education Active CN114444982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210361099.2A CN114444982B (en) 2022-04-07 2022-04-07 Teaching quality monitoring system based on internet education


Publications (2)

Publication Number Publication Date
CN114444982A true CN114444982A (en) 2022-05-06
CN114444982B CN114444982B (en) 2022-07-01

Family

ID=81360332


Country Status (1)

Country Link
CN (1) CN114444982B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001100625A (en) * 1999-09-29 2001-04-13 Yokogawa Electric Corp System and method for schooling
WO2013078144A2 (en) * 2011-11-21 2013-05-30 Age Of Learning, Inc. Computer-based language immersion teaching for young learners
CN108648520A (en) * 2018-03-27 2018-10-12 小叶子(北京)科技有限公司 A kind of piano performance learning method and device
CN110796005A (en) * 2019-09-27 2020-02-14 北京大米科技有限公司 Method, device, electronic equipment and medium for online teaching monitoring
CN111833672A (en) * 2020-08-05 2020-10-27 北京育宝科技有限公司 Teaching video display method, device and system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant