CN111782029A - Electronic course learning supervising and urging method and system - Google Patents


Info

Publication number
CN111782029A
CN111782029A (application CN202010658846.XA)
Authority
CN
China
Prior art keywords
learner
face
detection
electronic
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010658846.XA
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fu Jianling
Original Assignee
Fu Jianling
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fu Jianling filed Critical Fu Jianling
Priority to CN202010658846.XA priority Critical patent/CN111782029A/en
Publication of CN111782029A publication Critical patent/CN111782029A/en
Pending legal-status Critical Current

Classifications

    • G06F1/3287 — Power saving by switching off individual functional units in the computer system
    • G06F21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F9/451 — Execution arrangements for user interfaces
    • G06Q50/205 — Education administration or guidance
    • G06V40/161 — Human faces: detection; localisation; normalisation
    • G06V40/172 — Human faces: classification, e.g. identification
    • G06V40/174 — Facial expression recognition
    • G06V40/45 — Spoof detection: detection of the body part being alive

Abstract

The invention discloses an electronic course learning supervising and urging method comprising the following steps: registering a frontal face image of the learner when the learner first logs in to the system with an account; while the learner plays an electronic course, detecting whether a face appears in front of the screen; detecting whether the face appearing in front of the screen matches the registered face; detecting whether the electronic course window is occluded by other software; detecting whether the audio output by the playback device matches the electronic course; detecting whether the learner's face is oriented toward the screen; detecting whether the learner is gazing at the electronic course playback window; detecting whether the learner is fatigued; and detecting whether a real person is in front of the screen. All detection results are combined to judge the learner's final concentration state. If the learner is inattentive, the system reminds the learner to improve concentration and records the moment of inattention, the image captured by the camera device, the audio output by the device, and the individual detection results. The system further detects whether the learner's expression is a confused expression, recording that expression and the moment it was detected. When playback of the electronic course ends, the recorded data are statistically analyzed to generate statistical reports and charts, suggestions for improving the learning concentration state, and learning suggestions for the electronic course content that caused confusion.

Description

Electronic course learning supervising and urging method and system
Technical Field
The invention relates to the technical field of computers, in particular to an electronic course learning supervising and urging method and system.
Background
With the continuous development of computer technology and the arrival of the knowledge economy, the public's way of learning has changed profoundly: offline and online electronic books, electronic PPT courses, and electronic video courses place new demands on how people study. Learning from electronic materials is usually self-directed, with no teacher supervising the learner's concentration; if the learner is not attentive enough, learning efficiency is low and the material may not be absorbed at all. A method and system are therefore needed to supervise the learner's concentration state and to record and analyze concentration data, prompting the learner to stay focused and providing a guidance basis for improving learning efficiency.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a method and system for supervising a learner's concentration state, and for recording and analyzing the learner's concentration data, so as to provide a guidance basis for improving the learner's concentration and learning efficiency.
In order to solve the problems of the prior art, the invention adopts the following technical scheme:
an electronic course learning supervising and urging method comprises the specific implementation steps of:
The learner starts and logs in to the electronic course learning supervising and urging system with an account, and the system completes initialization. The system then detects whether this account is logging in for the first time; if so, a face registration process is executed: the system pops up a prompt window asking the learner to face and look at the camera device, captures a frontal image of the learner's face with the camera device, and associates that image with the account. The system then starts the electronic course playback detection function, which monitors the playback state of the electronic course in real time; when playback is detected, the system starts its other functions. When the end of the electronic course is detected, the system automatically closes all functions except the playback detection function in order to save energy; when playback of an electronic course is detected again, the closed functions are restarted.
S1, after the learner starts playing the electronic course, the system begins detecting whether the learner's face is in front of the screen. The camera device captures the space in front of the screen displaying the electronic course, and a face detection algorithm is applied to the captured image to determine whether a face is present, i.e. whether the learner's face is in front of the screen. The result is denoted face: if a face appears in the captured image, face = 1, otherwise face = 0. At the same time, the face region is extracted from the image and passed on to step S2.
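As an illustrative sketch (not the patent's own implementation), the face-presence check of S1 can be written as a small function around any face detector. The `detector` callable and the stub below are hypothetical stand-ins for a real detector such as an OpenCV Haar cascade:

```python
from typing import Callable, List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) of a detected face

def detect_face_flag(frame: object,
                     detector: Callable[[object], List[Box]]) -> Tuple[int, Optional[Box]]:
    """Return (face, box): face = 1 if a face is found in the frame, else 0.

    `detector` is any callable returning face bounding boxes for a frame,
    e.g. a wrapper around OpenCV's CascadeClassifier.detectMultiScale.
    """
    boxes = detector(frame)
    if boxes:
        # Pass the largest detected face on to the identity check (S2).
        largest = max(boxes, key=lambda b: b[2] * b[3])
        return 1, largest
    return 0, None

# Stub detector standing in for a real face detector:
fake_detector = lambda frame: [(10, 10, 80, 80), (100, 20, 40, 40)]
print(detect_face_flag("frame", fake_detector))   # (1, (10, 10, 80, 80))
print(detect_face_flag("frame", lambda f: []))    # (0, None)
```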
S2, further, the face image obtained in S1 is compared, using a face identification algorithm, with the learner's face image registered at first login; if they match, identify = 1, otherwise identify = 0.
S3, further, the content displayed on the screen is inspected to determine whether the electronic course window is occluded by the windows of other software. The result is denoted visible: if the course window is not occluded, visible = 1, otherwise visible = 0.
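The occlusion test of S3 reduces to rectangle-overlap geometry between the course window and any higher z-order windows. A minimal sketch, assuming the window rectangles are obtained from the operating system's window manager (the function names are illustrative):

```python
from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height) in screen coordinates

def overlaps(a: Rect, b: Rect) -> bool:
    """True if rectangles a and b intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def visible_flag(course_win: Rect, windows_above: List[Rect]) -> int:
    """visible = 1 if no higher z-order window overlaps the course window."""
    return 0 if any(overlaps(course_win, w) for w in windows_above) else 1

course = (100, 100, 800, 600)
print(visible_flag(course, [(1000, 50, 200, 200)]))  # 1: the other window is off to the side
print(visible_flag(course, [(400, 300, 300, 300)]))  # 0: occluded
```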
S4, the audio output by the device playing the electronic course is checked to determine whether it is the audio of the electronic course, and hence whether the learner is listening to the course. The result is denoted listen: if the learner is listening to the course audio, listen = 1, otherwise listen = 0.
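One simple way to compare the device's audio output with the course's reference audio — shown here purely as an illustration, not as the patent's method — is a normalized correlation over matched sample windows; a production system would more likely use acoustic fingerprinting:

```python
import math
from typing import Sequence

def audio_matches(output: Sequence[float], reference: Sequence[float],
                  threshold: float = 0.8) -> int:
    """listen = 1 if the device's audio output correlates with the course audio.

    Toy normalized correlation over equal-length sample windows; the 0.8
    threshold is an assumed tuning parameter.
    """
    n = min(len(output), len(reference))
    a, b = output[:n], reference[:n]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0 or nb == 0:
        return 0  # silence on either side cannot match
    return 1 if dot / (na * nb) >= threshold else 0

ref = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
print(audio_matches(ref, ref))        # 1: identical signals
print(audio_matches([0.0] * 8, ref))  # 0: silence
```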
S5, further, even when the learner's face appears in front of the screen displaying the electronic course, the learner may be looking at other objects rather than the screen, so it is necessary to determine whether the learner is viewing the electronic course displayed on the screen. A detection algorithm is applied to the face image obtained in S1 to estimate the face orientation and judge whether the face is oriented toward the screen. The result is denoted towards: if the face is oriented toward the screen displaying the electronic course, towards = 1, otherwise towards = 0.
S6, further, even when the learner faces the screen displaying the electronic course, the learner may be gazing at a part of the screen other than the electronic course window. A detection algorithm is therefore applied to the captured face image to estimate where on the screen the eyes are actually gazing; combined with the position, width, and height of the electronic course window, this determines whether the learner's eyes are fixed on the course window. The result is denoted see: if the learner's gaze falls on the electronic course window, see = 1, otherwise see = 0.
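Once a gaze point on the screen has been estimated, the check of S6 (the see flag) is a point-in-rectangle test against the course window's position and size. A sketch with hypothetical coordinates:

```python
from typing import Tuple

def gaze_in_window(gaze: Tuple[int, int], win_x: int, win_y: int,
                   win_w: int, win_h: int) -> int:
    """see = 1 if the estimated gaze point falls inside the course window."""
    gx, gy = gaze
    inside = win_x <= gx < win_x + win_w and win_y <= gy < win_y + win_h
    return 1 if inside else 0

# Course window at (200, 150), 1024x768 on the screen:
print(gaze_in_window((500, 400), 200, 150, 1024, 768))  # 1: inside the window
print(gaze_in_window((50, 60), 200, 150, 1024, 768))    # 0: outside
```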
S7, further, a detection algorithm performs fatigue detection on the face image. The result is denoted tired: if the learner is in a fatigued state, tired = 1, otherwise tired = 0.
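The fatigue-detection algorithm is not specified; one common approach, shown here purely as an illustration, is the eye aspect ratio (EAR) over six eye landmarks combined with a PERCLOS-style closed-eye fraction over recent samples (the thresholds are assumed values):

```python
import math
from typing import Sequence, Tuple

Point = Tuple[float, float]

def eye_aspect_ratio(p: Sequence[Point]) -> float:
    """EAR over six eye landmarks p1..p6; small values mean a closed eye."""
    d = lambda a, b: math.dist(a, b)
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2.0 * d(p[0], p[3]))

def tired_flag(ear_samples: Sequence[float], closed_thresh: float = 0.2,
               perclos_thresh: float = 0.5) -> int:
    """tired = 1 if the eyes were closed in more than perclos_thresh of samples."""
    closed = sum(1 for e in ear_samples if e < closed_thresh)
    return 1 if closed / len(ear_samples) > perclos_thresh else 0

open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]  # toy landmarks
print(round(eye_aspect_ratio(open_eye), 2))
print(tired_flag([0.3, 0.3, 0.1, 0.1, 0.1]))  # 1: eyes closed in 3/5 samples
```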
S8, further, while the electronic course is playing, the learner might use a photograph or a pre-recorded video of his or her face to deceive the face detection, face orientation detection, gaze point detection, or fatigue detection. A real-person (liveness) check must therefore be performed on the learner's face as captured by the camera. Two classes of method may be used: human body motion detection and human voice detection. Motion-based liveness detection asks the learner to perform a designated operation on the software's human-computer interface, or a designated action that only a live person can complete; voice-based liveness detection asks the learner to speak a specified word or sentence. These detection methods are exemplified below:
a) A special-operation prompt window pops up on the screen, asking the learner to complete a specific operation on the screen content by touch or mouse, such as clicking designated characters, sliding them to a designated position in the screen, or other operations; an algorithm then checks the touch or mouse input to verify that the designated operation was completed. Examples include sliding an unlock control as on a mobile phone lock screen, drawing a specified line or figure on the screen, or clicking specified characters in order.
b) A prompt window pops up asking the learner to blink or perform a specific body motion such as a designated gesture; the camera then captures the space in front of the screen, and an algorithm performs motion detection on the captured pictures or video to verify that the specified motion was completed.
c) A prompt window pops up asking the learner to speak a specified word or sentence; the microphone collects the learner's speech, and an algorithm checks whether the specified word or sentence was spoken.
In a), b), and c), the designated operations and words are randomized, and the number and timing of the challenges are also random, which prevents spoofing with pre-recorded video or audio. One or more of these three liveness detection methods are combined to judge whether the learner completed the designated operation. If completion is detected, the face captured by the camera device is judged to be a live face, denoted motion = 1; otherwise motion = 0.
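The randomization of challenge type, content, and timing described above can be sketched as follows; the challenge texts and the 30–300 second delay range are hypothetical placeholders:

```python
import random
from typing import Tuple

# The three challenge families described in a), b), and c) above:
CHALLENGES = {
    "touch":   ["slide the slider to the right", "tap the highlighted character"],
    "gesture": ["blink twice", "raise your right hand"],
    "speech":  ["say the word shown on screen", "read the sentence aloud"],
}

def next_challenge(rng: random.Random) -> Tuple[str, str, float]:
    """Pick a random challenge kind, prompt, and delay before issuing it.

    Randomizing both content and timing makes pre-recorded video or
    audio useless for spoofing.
    """
    kind = rng.choice(list(CHALLENGES))
    prompt = rng.choice(CHALLENGES[kind])
    delay_s = rng.uniform(30.0, 300.0)  # issue at an unpredictable moment
    return kind, prompt, delay_s

rng = random.Random()
kind, prompt, delay = next_challenge(rng)
print(kind, "->", prompt, f"(in {delay:.0f}s)")
```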
Further, the detection results of S1-S8 are combined with a logical AND operation: if any detection indicates inattention, the learner's concentration state is judged to be not concentrating. The concentration state is denoted concentration, where concentration = 1 means the learner is concentrating and concentration = 0 means the learner is not. The detection results of S1-S8 are combined as in formula (1):
concentration = face & identify & visible & listen & towards & see & (NOT tired) & motion (1);
(tired is negated because, per S7, tired = 1 denotes a fatigued learner.)
the above-mentioned S2-S8 detection items are repeated at a constant cycle.
Further, when the learner is detected to be inattentive during learning, i.e. concentration = 0, the system prompts the learner to raise his or her concentration by pausing the electronic course, vibrating the course window, or superimposing an alert sound on the audio output by the device.
Further, when the learner is detected to be inattentive, i.e. concentration = 0, the system records the current time together with: the image captured by the camera device; whether the learner's face is in front of the screen; whether the face matches the face registered at first login; whether the face is oriented toward the screen; whether the electronic course window is occluded; whether the audio being heard is the course audio; whether the learner is gazing at the course window; whether the learner is fatigued; whether the person captured is a real person; the picture displayed on the screen; and the audio currently output by the device.
Furthermore, when playback of the electronic course ends, the state statistical analysis module performs multi-dimensional statistics on all recorded information and generates statistical reports, statistical charts, and suggestions for improving the learning state, providing a guidance basis for the learner to adjust the learning state, raise concentration, and improve learning efficiency.
Furthermore, a detection algorithm performs expression detection on the face image. The result is denoted expression: if the detected expression is a confused expression, expression = 1, otherwise expression = 0. When a confused expression is detected, the state recording module records the moment and the detection result. After playback of the electronic course ends, the state statistical analysis module includes the confused-expression moments in its statistical analysis, generates a statistical chart containing those moments, and, combining them with the course content being played at those moments, generates suggestions for improving the learning state.
Further, when detecting the learner's face, a three-dimensional sensor may be used to acquire three-dimensional detection data of the space in front of the screen, with detection performed by corresponding face detection, identity detection, face orientation, gaze point detection, and fatigue detection algorithms.
The electronic course learning supervising and urging system comprises the following specific components:
1) the initialization module completes system initialization of various parameters, including: configuring items needing to be detected in the formula (1), verifying a login account, popping up a prompt window when a learner logs in for the first time, recording a front image of the face of the learner, and associating the front image of the face of the learner with the login account;
2) the electronic course playing module is responsible for playing the electronic course, displaying the picture of the electronic course on a screen through a window, and playing the audio frequency of the electronic course through a loudspeaker built in hardware or outputting the audio frequency to an external loudspeaker through a hardware audio output port;
3) the electronic course playing monitoring module is used for monitoring whether the electronic course is played or not, and if the electronic course is played, the rest parts of the system are started;
4) the detection module is used for capturing two-dimensional or three-dimensional image data of the space in front of the screen displaying the electronic course;
5) the state detection module is used for receiving the image data of the detection module and analyzing and detecting the image data by using an algorithm, and comprises the following sub-modules:
a) the face detection submodule is responsible for detecting whether a face appears in front of the screen;
b) the identity detection submodule is responsible for detecting whether the face of the learner is matched with the recorded face associated with the learning account;
c) the electronic course window shielding detection submodule is responsible for detecting whether the electronic course is shielded by other software interfaces;
d) the audio detection submodule is used for detecting whether the audio output by the electronic course playing equipment is the audio matched with the electronic course;
e) the face orientation submodule is responsible for detecting whether the face is oriented to a screen for playing the electronic course;
f) the gazing point detection submodule is responsible for detecting whether the gazing point of the learner is on the electronic course interface or not;
g) the fatigue detection submodule is responsible for detecting whether the learner in front of the screen is fatigue;
h) the expression detection submodule is responsible for detecting whether the expression of the learner is a confused expression;
i) the real person detection sub-module is responsible for detecting whether the face in front of the screen is a disguise using a face photo or face video;
j) and the comprehensive result output sub-module is responsible for synthesizing all detection results to obtain a final concentration state result of the learner and outputting the final concentration state result to the state reminding module and the state recording module.
6) And the state reminding module is used for pausing the electronic course playing or vibrating the electronic course playing window or superposing an alarm sound in the audio output by the electronic course playing equipment when the learner is in the non-attentive state, so as to remind the learner to improve the attentive degree.
7) And the state recording module is used for recording the moment when the learner is in the inattentive state, the image shot by the camera equipment, the audio played by the electronic course playing equipment and the detection result of each sub-module in the state detection module.
8) And the state statistical analysis module is used for statistically analyzing various data recorded by the state recording module and generating a statistical report and a statistical chart as well as suggestions for improving the learning state.
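An illustrative sketch of the state statistical analysis module's core aggregation — counting, per check, how often it failed across the recorded inattention events; the event-log shape shown is an assumption, not the patent's specified format:

```python
from collections import Counter
from typing import Dict, List

def summarize(events: List[Dict]) -> Dict[str, int]:
    """Count, per flag, how often each check failed (value 0) across the
    recorded inattention events, for the end-of-course statistical report."""
    fails = Counter()
    for e in events:
        for flag, value in e["flags"].items():
            if value == 0:
                fails[flag] += 1
    return dict(fails)

# Hypothetical log entries written by the state recording module:
log = [
    {"time": "10:01:05", "flags": {"face": 1, "see": 0, "listen": 1}},
    {"time": "10:07:42", "flags": {"face": 0, "see": 0, "listen": 1}},
]
print(summarize(log))  # {'see': 2, 'face': 1}
```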
Drawings
FIG. 1 is a flow chart schematically illustrating steps of an electronic lesson learning supervision method according to the present invention.
FIG. 2 is a block diagram schematically showing an electronic course learning supervision system according to the present invention.
Fig. 3 is a schematic diagram illustrating an application scenario of a mobile device according to embodiment 1 of the present invention.
Fig. 4 is a diagram schematically illustrating a desktop application scenario according to embodiment 2 of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it; the claimed scope of the present invention is not limited to the scope of the embodiments. Note that processes or parameters not specified in detail below are within the reach of a person skilled in the art and can be realized by reference to the prior art.
Embodiment 1: as shown in Fig. 3, when a learner studies an electronic course on a mobile phone, tablet computer, or notebook computer, the learner starts and logs in to the electronic course learning supervising and urging system with an account, and the system completes initialization. The system then detects whether the account is logging in for the first time; if so, a face registration process is performed: the system pops up a prompt window asking the learner to face screen 109, 107, or 105 and look at the built-in front camera 108, 106, or 104 of the mobile phone, tablet computer, or notebook computer. The detection module then captures an image of the spatial region 103, 102, or 101 in front of the screen to obtain a frontal image of the learner's face and associates it with the account. The electronic course playback monitoring module is then started; when the learner begins playing an electronic course, the module detects the playback and starts all other functional modules of the system.
The system uses the built-in front camera 108, 106, or 104 of the mobile phone, tablet computer, or notebook computer as the detection module to capture images of the spatial region 103, 102, or 101 in front of the screen. The state detection module analyzes the captured images and the audio played by the device: the face detection, identity detection, face orientation, gaze point detection, electronic course window occlusion detection, audio detection, fatigue detection, and real person detection sub-modules each complete their corresponding detection and report their results to the comprehensive result output sub-module, which synthesizes them into the learner's final concentration state and outputs it to the state reminding module and the state recording module; meanwhile, the expression detection sub-module performs expression detection on the captured images. The state reminding module receives the concentration state from the state detection module; if the learner is inattentive, it pauses the electronic course, vibrates the course playback window, or superimposes an alert sound on the audio output by the device to remind the learner to raise his or her concentration. At the same time, the state recording module receives the concentration state; if the learner is inattentive, it records the moment of inattention, the image captured by the camera device, the audio played by the playback device, and the detection result of each sub-module of the state detection module, as well as the moment and image whenever the learner's expression is detected as confused. Each time playback of the electronic course ends, the state statistical analysis module performs statistical analysis on the information recorded by the state recording module and automatically generates reports, statistical charts, suggestions for improving the learning state, and learning suggestions for the course content that caused confusion, for the learner to review.
Embodiment 2: as shown in Fig. 4, when a learner studies an electronic course on a desktop computer (203 is an external audio device), the learner starts and logs in to the electronic course learning supervising and urging system with an account, and the system completes initialization. The system then detects whether the account is logging in for the first time; if so, a face registration process is performed: the system pops up a prompt window asking the learner to face the wall-mounted screen 204 and look at the external stereo camera 202. The external stereo camera 202, serving as the detection module, then captures an image of the spatial region 201 in front of the screen to obtain a frontal image of the learner's face and associates it with the account. The electronic course playback monitoring module is then started; when the learner begins playing an electronic course, the module detects the playback and starts all other functional modules of the system.
The electronic course learning supervision system uses the detection module to capture images of the spatial area 201 in front of the screen, and uses the state detection module to analyze the captured images and the audio played by the mobile phone, tablet computer or notebook computer. Detection is carried out by the face detection sub-module, identity detection sub-module, face orientation sub-module, gaze point detection sub-module, electronic course window occlusion detection sub-module, audio detection sub-module, fatigue detection sub-module and real-person detection sub-module of the state detection module; the results are gathered by the comprehensive result output sub-module, which combines the detection results of all sub-modules into the final learner concentration state and outputs it to the state reminding module and the state recording module. Meanwhile, the expression detection sub-module performs expression detection on the images captured by the detection module. The state reminding module receives the learner concentration state output by the state detection module; if the learner is in the inattentive state, the state reminding module pauses the electronic course, vibrates the electronic course playing window, or superimposes an alarm sound on the audio output by the mobile phone, tablet computer or notebook computer, so as to remind the learner to improve concentration. Likewise, the state recording module receives the learner concentration state output by the state detection module; if the learner is in the inattentive state, it records the moment of inattention, the image captured by the camera device, the audio played by the electronic course playing device, and the detection result of each sub-module of the
state detection module, and it also records the moment and image whenever the learner's expression is detected as a confused expression. Each time playback of the electronic course ends, the state statistical analysis module performs statistical analysis and calculation on the information recorded by the state recording module, and automatically generates reports, statistical charts, suggestions for improving the learning state, and suggestions for re-studying the course content at which the learner appeared confused, for the learner to review.
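As an illustrative sketch only (the field names below are assumptions, not terms from the patent), the snapshot taken by the state recording module at each moment of inattention could be represented as a simple record:

```python
import time

def make_distraction_record(flags, camera_image_path, course_audio_path):
    """Snapshot recorded by the state recording module whenever the
    learner is judged inattentive. All field names are illustrative."""
    return {
        "timestamp": time.time(),            # moment of inattention
        "camera_image": camera_image_path,   # image captured by the camera device
        "course_audio": course_audio_path,   # audio played by the course device
        "submodule_flags": dict(flags),      # per-sub-module detection results
    }
```

The state statistical analysis module would then aggregate a list of such records after playback ends.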

Claims (8)

1. An electronic course learning supervising and urging method is characterized by comprising the following steps:
a learner starts the electronic course learning supervision system and logs in with an account, and the system initializes; the system then detects whether this account is logging in for the first time, and if so it executes a face registration procedure: the system pops up a prompt window requiring the learner to face and look at the camera device, captures a frontal image of the learner's face with the camera device, and associates it with the account; the system then starts the electronic course playback detection function and monitors the playback state of the electronic course in real time; when playback of an electronic course is detected, the system starts its remaining functions, and when the end of playback is detected, the system automatically shuts down all functions except the playback detection function, so as to save energy;
S1, after the learner starts playing the electronic course and begins learning, the system detects whether the learner's face is in front of the screen: the camera device captures the area in front of the screen displaying the electronic course, and a face detection algorithm is applied to the captured image to detect whether a face is present, the result being expressed by face; the learner's face appearing in the captured image is expressed as 1 (face = 1), otherwise 0 (face = 0); meanwhile, the face region is extracted from the image and passed to step S2 and the subsequent steps;
S2, further, the face image obtained in step S1 is compared, using a face recognition algorithm, with the learner's face data registered at first login; if the two face images match, the result is expressed as identity = 1, otherwise identity = 0;
S3, further, the picture displayed on the screen is examined to determine whether the electronic course window is occluded by the window of other software, expressed by visible; the electronic course window not being occluded is expressed as 1 (visible = 1), otherwise 0 (visible = 0);
S4, it is detected whether the audio output by the device playing the electronic course is the audio of the electronic course, thereby judging whether the learner is listening to the course audio; the result is expressed as listen, the learner listening to the course audio being expressed as 1 (listen = 1), otherwise 0 (listen = 0);
S5, further, even when the learner's face appears in front of the screen displaying the electronic course, the learner may be looking at other objects rather than the screen, so it is necessary to determine whether the learner is viewing the electronic course displayed on the screen; first, a detection algorithm is applied to the face image obtained in step S1 to detect the face orientation and judge whether the face is oriented toward the screen displaying the electronic course, expressed as towards; the face being oriented toward the screen is expressed as 1 (towards = 1), otherwise 0 (towards = 0);
S6, further, even when the learner's face is oriented toward the screen displaying the electronic course, the gaze may fall on a part of the screen other than the electronic course window; therefore a detection algorithm is also applied to the captured face image to estimate the position in the screen at which the eyes are actually gazing, and this position is combined with the position, width and height of the electronic course window to determine whether the learner's eyes are gazing at the electronic course window, expressed as see; the eyes actually gazing at the electronic course window is expressed as 1 (see = 1), otherwise 0 (see = 0);
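The window-gaze test of S6 reduces to a point-in-rectangle check once a gaze point has been estimated. A minimal sketch (coordinates in screen pixels; the gaze point itself would come from an eye-tracking algorithm, which is not shown here):

```python
def gaze_in_window(gaze_x, gaze_y, win_x, win_y, win_w, win_h):
    """Return 1 (see = 1) if the estimated gaze point falls inside the
    electronic course window rectangle, else 0 (see = 0)."""
    inside = (win_x <= gaze_x < win_x + win_w and
              win_y <= gaze_y < win_y + win_h)
    return 1 if inside else 0
```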
S7, further, a detection algorithm performs fatigue detection on the face image, expressed by tired; the learner being in a fatigued state is expressed as 1 (tired = 1), otherwise 0 (tired = 0);
S8, further, during playback of the electronic course, in order to prevent the learner from using a photograph or a pre-recorded video of his face to deceive the face detection, face orientation detection, gaze point detection or fatigue detection, real-person detection must be performed on the image of the learner's face captured by the camera; the detection may adopt two classes of methods, human-body-motion detection and human-voice detection: real-person detection based on body motion requires a specified operation on the software's human-computer interface, or a specified action that only a live human body can complete, while real-person detection based on human voice requires the learner to speak a specified word or sentence; these detection methods are exemplified below:
a) a special operation request window pops up on the screen, requiring the learner to complete a special operation on the screen content via touch or mouse, such as clicking specified characters on the screen or sliding them to a specified position; the touch or mouse operations are then detected by an algorithm to check whether the learner has completed the specified operation, for example sliding a control as on a phone unlock interface, drawing a specified line or figure on the screen, or clicking specified characters in sequence;
b) a special operation request window pops up on the screen, requiring the learner to blink or perform a body motion such as a specific gesture; the area in front of the screen is then captured by the camera, and motion detection is performed on the captured pictures or video by an algorithm to check whether the learner has completed the specified body motion;
c) a special operation request window pops up on the screen, requiring the learner to speak a specified word or sentence; the learner's speech is then collected through a microphone and detected by an algorithm to judge whether the learner has spoken the specified word or sentence;
in a), b) and c), the specified operations and words vary randomly, and the number of times and the moments at which they are required are also random, so as to prevent disguise with pre-recorded video or voice;
one or a combination of these three real-person detection methods is used to judge whether the learner completes the special operation; if the learner completes it, the face captured by the camera device is judged to be a live face, expressed as motion = 1, otherwise motion = 0;
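The random-challenge idea in a), b) and c) can be sketched as follows; the challenge texts, type names, and the `issue_random_challenge` helper are illustrative assumptions, not part of the claimed method:

```python
import random
import time

# Three challenge classes corresponding to a), b) and c) above.
CHALLENGES = [
    ("screen_op", "Slide the control to the marked position"),
    ("body_motion", "Blink twice, then make the shown gesture"),
    ("speech", "Read the displayed sentence aloud"),
]

def issue_random_challenge(rng=random):
    """Pick one challenge type at a random moment so that a pre-recorded
    video or audio clip cannot anticipate it; motion = 1 is set only if
    the learner completes the issued challenge."""
    kind, prompt = rng.choice(CHALLENGES)
    return {"kind": kind, "prompt": prompt, "issued_at": time.time()}
```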
the detection results of S1 to S8 are summarized and combined by a logical AND operation, i.e. if any detection result in S1 to S8 indicates inattention, the learner's concentration state is judged to be inattentive; the concentration state of the learner is expressed by concentration, with 1 representing attentive (concentration = 1) and 0 representing inattentive (concentration = 0); the detection results of S1 to S8 are summarized as in formula (1):
concentration = face & identity & visible & listen & towards & see & (¬tired) & motion (1),
the detection of S2 to S8 is repeated at a constant cycle.
2. The method of claim 1, further comprising:
when the learner is detected to be in the inattentive state during learning, i.e. concentration = 0, the system reminds the learner to improve concentration by pausing the electronic course, vibrating the electronic course window, or superimposing an alarm sound on the audio output by the device.
3. The method of claim 1, further comprising:
when the learner is detected to be in the inattentive state during learning, i.e. concentration = 0, the system records the current time, the image captured by the camera device, whether the learner's face is in front of the screen, whether the learner's face matches the face registered at first login, whether the learner's face is oriented toward the screen, whether the electronic course window is occluded, whether the audio the learner hears is the course audio, whether the learner is gazing at the electronic course window in the screen, the fatigue detection result, whether the learner is a real person, the picture displayed on the screen, and the audio currently output by the device.
4. The method of claim 1, further comprising:
when playback of the electronic course ends, the state statistical analysis module performs multi-dimensional statistics on all recorded information and generates statistical reports, statistical charts, and suggestions for improving the learning state, providing a basis for the learner to adjust the learning state, improve concentration, and increase learning efficiency.
5. The method of claim 1, further comprising:
expression detection is performed on the face image using a detection algorithm, expressed by expression; a detected confused expression is expressed as 1 (expression = 1), otherwise 0 (expression = 0); when a confused expression is detected, the state recording module records the moment and the expression detection result; after playback of the electronic course ends, the state statistical analysis module includes the confused-expression moments in its statistical analysis and calculation, generates a statistical chart containing these moments, and, combining them with the course content played at those moments, generates suggestions for improving the learning state.
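One way the state statistical analysis module could bucket the recorded confused-expression moments into course segments, so the most confusing segments can be listed for review (the segment length and return format are illustrative assumptions):

```python
from collections import Counter

def confusion_report(confused_times, segment_len=60):
    """Bucket confused-expression timestamps (seconds into the lesson)
    into fixed-length segments, sorted from most to least confusing."""
    counts = Counter(int(t // segment_len) for t in confused_times)
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```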
6. The method of claim 1, further comprising:
when detecting the learner's face, a three-dimensional sensor may be used to acquire three-dimensional detection data of the space in front of the screen, on which the corresponding face detection, identity detection, face orientation, gaze point detection and fatigue detection algorithms then operate.
7. An electronic course learning supervising and urging system, characterized by comprising the following components:
1) the initialization module completes system initialization of various parameters, including: configuring items needing to be detected in the formula (1), verifying a login account, popping up a prompt window when a learner logs in for the first time, recording a front image of the face of the learner, and associating the front image of the face of the learner with the login account;
2) the electronic course playing module, responsible for playing the electronic course, displaying the course picture on the screen through a window, and playing the course audio through a loudspeaker built into the hardware or outputting it to an external loudspeaker through a hardware audio output port;
3) the electronic course playing monitoring module is used for monitoring whether the electronic course is played or not, and if the electronic course is played, the rest parts of the system are started;
4) the detection module, used for acquiring two-dimensional or three-dimensional image data of the space in front of the screen displaying the electronic course;
5) the state detection module is used for receiving the image data of the detection module and analyzing and detecting the image data by using an algorithm, and comprises the following sub-modules:
a) the face detection submodule is responsible for detecting whether a face appears in front of the screen;
b) the identity detection submodule is responsible for detecting whether the face of the learner is matched with the recorded face associated with the learning account;
c) the electronic course window shielding detection submodule is responsible for detecting whether the electronic course is shielded by other software interfaces;
d) the audio detection submodule is used for detecting whether the audio output by the electronic course playing equipment is the audio matched with the electronic course;
e) the face orientation submodule is responsible for detecting whether the face is oriented to a screen for playing the electronic course;
f) the gazing point detection submodule is responsible for detecting whether the gazing point of the learner is on the electronic course interface or not;
g) the fatigue detection submodule is responsible for detecting whether the learner in front of the screen is fatigue;
h) the expression detection submodule is responsible for detecting whether the expression of the learner is a confused expression;
i) the real person detection sub-module is responsible for detecting that the face in front of the screen is not disguised by a face photo or a face video;
j) the comprehensive result output sub-module is responsible for synthesizing all detection results to obtain a final concentration state result of the learner and outputting the final concentration state result to the state reminding module and the state recording module;
6) the state reminding module is used for pausing the electronic course playing or vibrating the electronic course playing window or superposing an alarm sound in audio output by the electronic course playing equipment when the learner is in the non-attentive state, so as to remind the learner to improve the attentive degree;
7) the state recording module is used for recording the moment when the learner is in the inattentive state, the image shot by the camera equipment, the audio played by the electronic course playing equipment and the detection result of each submodule in the state detection module;
8) and the state statistical analysis module is used for statistically analyzing various data recorded by the state recording module and generating a statistical report and a statistical chart as well as suggestions for improving the learning state.
8. The system of claim 7, further comprising:
when the expression detection sub-module detects that the learner shows a confused expression, the state statistical analysis module maps the confused-expression moments to the course content played at those moments, thereby providing a statistical report and statistical chart of the course content that confused the learner, together with learning suggestions targeted at that content.
CN202010658846.XA 2020-07-09 2020-07-09 Electronic course learning supervising and urging method and system Pending CN111782029A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010658846.XA CN111782029A (en) 2020-07-09 2020-07-09 Electronic course learning supervising and urging method and system


Publications (1)

Publication Number Publication Date
CN111782029A true CN111782029A (en) 2020-10-16

Family

ID=72759372




Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006023506A (en) * 2004-07-07 2006-01-26 Tokai Univ Electronic teaching material learning support device, electronic teaching material learning support system, electronic teaching material learning support method, and electronic learning support program
CN105516280A (en) * 2015-11-30 2016-04-20 华中科技大学 Multi-mode learning process state information compression recording method
CN109086693A (en) * 2018-07-16 2018-12-25 安徽国通亿创科技股份有限公司 A kind of detection technique of online teaching study attention
CN109754661A (en) * 2019-03-18 2019-05-14 北京一维大成科技有限公司 A kind of on-line study method, apparatus, equipment and medium
CN111008914A (en) * 2018-10-08 2020-04-14 上海风创信息咨询有限公司 Object concentration analysis method and device, electronic terminal and storage medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112820419A (en) * 2021-01-27 2021-05-18 泰安市康福宝医疗科技有限公司 Hospital preoperative propaganda and education system
CN112820419B (en) * 2021-01-27 2023-10-31 泰安市康福宝医疗科技有限公司 Preoperative ventilating and teaching system for hospitals
CN113643580A (en) * 2021-08-13 2021-11-12 四川红色旗子教育科技有限公司 Education community system
CN113643580B (en) * 2021-08-13 2022-12-09 四川红色旗子教育科技有限公司 Education community system
CN114442900A (en) * 2022-01-28 2022-05-06 上海橙掌信息科技有限公司 Display device and learning effect acquisition method
CN114998975A (en) * 2022-07-15 2022-09-02 电子科技大学成都学院 Foreign language teaching method and device based on big data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201016