CN111242049A - Student online class learning state evaluation method and system based on facial recognition - Google Patents


Info

Publication number
CN111242049A
CN111242049A
Authority
CN
China
Prior art keywords
student
face
length
students
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010043578.0A
Other languages
Chinese (zh)
Other versions
CN111242049B (en
Inventor
徐麟
周传辉
李冠男
赵小维
吴棒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN202010043578.0A priority Critical patent/CN111242049B/en
Publication of CN111242049A publication Critical patent/CN111242049A/en
Application granted granted Critical
Publication of CN111242049B publication Critical patent/CN111242049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical



Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/174: Facial expression recognition
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00: Pattern recognition
                    • G06F 18/20: Analysing
                        • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                        • G06F 18/24: Classification techniques
                            • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00: Computing arrangements based on biological models
                    • G06N 3/02: Neural networks
                        • G06N 3/04: Architecture, e.g. interconnection topology
                            • G06N 3/045: Combinations of networks
                        • G06N 3/08: Learning methods
            • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
                • G06Q 10/00: Administration; Management
                    • G06Q 10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
                        • G06Q 10/063: Operations research, analysis or management
                            • G06Q 10/0639: Performance analysis of employees; Performance analysis of enterprise or organisation operations
                • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
                    • G06Q 50/10: Services
                        • G06Q 50/20: Education
                            • G06Q 50/205: Education administration or guidance
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
        • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
                • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Software Systems (AREA)
  • General Business, Economics & Management (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Development Economics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Marketing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Educational Technology (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Operations Research (AREA)
  • Human Computer Interaction (AREA)
  • Game Theory and Decision Science (AREA)
  • Multimedia (AREA)

Abstract

The invention discloses a facial-recognition-based method for evaluating students' online-class learning state, comprising the following steps: acquiring students' facial images, the students' answers to questions, and student information; obtaining each student's answer results by comparing the student's answers with the reference answers; standardizing the collected facial images into uniform picture data and inputting it into a trained micro-expression recognition convolutional neural network model to obtain the student's in-class comprehension state; obtaining the student's concentration by comparing the student's face length-to-width ratio with that of a preset standard face and the student's eye-opening height with that of the preset standard face; and taking the answer results, in-class comprehension state, and concentration together as the evaluation of the student's learning state. The method of the invention improves both recognition efficiency and evaluation quality.

Description

Student online class learning state evaluation method and system based on facial recognition
Technical Field
The invention relates to the technical field of computers, in particular to a student online class learning state evaluation method and system based on facial recognition.
Background
Colleges and universities are gradually introducing online teaching to reduce teaching expenses, save manpower and material resources, and enrich teaching content. Students can learn knowledge and skills by watching video and audio online through electronic devices and the internet.
In implementing the present invention, the inventors found that prior-art methods have at least the following technical problems:
Online classes are convenient for teachers and students, but students often go unsupervised due to practical constraints. Because students only watch videos, without the teacher-student communication and interaction of a physical classroom, some students do not truly pay attention during online lessons; unlike in traditional classroom teaching, their attentiveness and listening quality cannot be fed back to teachers in time, which greatly reduces the quality of online learning. Although prior-art methods can monitor the learning state through devices such as cameras, judging whether students are listening attentively still requires manual review, which is time-consuming, labor-intensive, and inefficient.
Therefore, the method in the prior art has the technical problem of low efficiency.
Disclosure of Invention
In view of the above, the present invention provides a method and a system for evaluating a learning status of a student in a web lesson based on facial recognition, so as to solve or at least partially solve the technical problem of low efficiency of the prior art.
To solve the above technical problem, a first aspect of the present invention provides a facial-recognition-based method for evaluating students' online-class learning state, comprising:
S1: acquiring students' facial images, the students' answers to questions, and student information;
S2: obtaining each student's answer results by comparing the student's answers with the reference answers;
S3: standardizing the collected facial images into uniform picture data and inputting it into a trained micro-expression recognition convolutional neural network model to obtain the student's in-class comprehension state;
S4: performing face recognition on the collected facial images, extracting the face picture, and extracting facial features to obtain the student's face size (face length and face width) and eye-opening height; obtaining the student's concentration by comparing the student's face length-to-width ratio with that of a preset standard face and the student's eye-opening height with that of the preset standard face;
S5: taking the student's answer results, in-class comprehension state, and concentration together as the evaluation of the student's learning state.
In one embodiment, the trained micro-expression recognition convolutional neural network model of S3 is constructed as follows:
searching a micro-expression database for facial micro-expression pictures matching the characteristics of the joyful, comprehending, and confused states, and, after compression, stretching, sharpening, and similar processing, converting the pictures for each comprehension state into picture data of uniform size and format to serve as training data. The students' online-class comprehension states are divided into three levels: joyful, comprehending, and confused. The facial features corresponding to "joyful" include open eyes, the face directly facing the screen, and raised mouth corners; those corresponding to "comprehending" include the face directly facing the screen and relaxed eyebrows; those corresponding to "confused" include tightly knitted eyebrows, slightly narrowed eyes, and downturned mouth corners;
determining the structure of the micro-expression recognition convolutional neural network model, which comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a feature layer, a fully connected layer, a classification layer, and an output layer;
training the micro-expression recognition convolutional neural network model on the training data with preset model parameters to obtain the trained model.
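As an illustration of the layer stack just described, the following sketch traces tensor side lengths through the input, two convolution/pooling pairs, the feature (flatten) layer, and the three-way classification output; the 48×48 grayscale input, 5×5 kernels, 2×2 pooling, and 16 output channels are assumed values for illustration, not parameters given in the patent.

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    # Output side length of a square convolution or pooling layer.
    return (size + 2 * padding - kernel) // stride + 1

def trace_micro_expression_cnn(input_size=48):
    """Trace tensor side lengths through the described stack:
    input -> conv1 -> pool1 -> conv2 -> pool2 -> flatten -> fc -> softmax(3)."""
    s = conv2d_out(input_size, kernel=5)   # first convolutional layer
    s = conv2d_out(s, kernel=2, stride=2)  # first pooling layer
    s = conv2d_out(s, kernel=5)            # second convolutional layer
    s = conv2d_out(s, kernel=2, stride=2)  # second pooling layer
    channels = 16                          # assumed channel count after conv2
    flat = s * s * channels                # feature layer: flatten to 1-D vector
    return {"final_side": s, "flattened": flat, "classes": 3}
```

With the assumed 48×48 input this yields a 9×9 final feature map and a 1296-dimensional vector feeding the fully connected layer.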
In one embodiment, S3 specifically includes:
S3.1: inputting the picture data of the collected facial image through the input layer into the first convolutional layer, which performs feature extraction;
S3.2: applying dimensionality-reduction compression to the feature maps obtained in S3.1 through the first pooling layer;
S3.3: extracting features from the compressed feature maps through the second convolutional layer, then applying dimensionality-reduction compression through the second pooling layer;
S3.4: flattening the feature maps obtained in S3.3 into a one-dimensional vector through the feature layer and passing it to the fully connected layer;
S3.5: passing the data to the classification layer through the fully connected layer, which is formed by forward-connected neurons;
S3.6: matching the output of the fully connected layer with the corresponding comprehension state through the classification layer to obtain the picture's comprehension state;
S3.7: outputting the picture's comprehension state through the output layer.
In one embodiment, after S3.7, the method further comprises: assigning different scores to different comprehension states.
In one embodiment, the comprehension state output by the output layer is the student's comprehension state at one moment, and the method further includes:
obtaining the corresponding class-state score $u_i$ from the assigned scores;
obtaining, from the class-state scores $u_i$, the student's comprehension score $U_k$ for each stage of the online lesson:

$$U_k = \frac{1}{N}\sum_{i=1}^{N} u_i$$

where $N$ is the number of sampled moments in the stage and $k$ denotes the stage.
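The stage score above is a plain average of the per-moment scores $u_i$; a sketch assuming one possible numeric assignment per state (the patent only says different states receive different scores):

```python
# Assumed score assignment per comprehension state (illustrative values).
SCORE = {"joyful": 100, "comprehending": 80, "confused": 60}

def stage_comprehension_score(states):
    """Compute U_k, the average class-state score u_i over the N
    comprehension states recognised during stage k."""
    scores = [SCORE[s] for s in states]
    return sum(scores) / len(scores)
```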
In one embodiment, obtaining the student's concentration in S4 from the comparison of the student's face length-to-width ratio with that of the preset standard face and of the student's eye-opening height with that of the preset standard face includes:
s4.1: judging whether the face of the student is facing the screen at the moment i according to the comparison condition of the ratio of the length to the width of the face of the student and the length-width ratio of a preset standard face, wherein if the face of the student is not facing the screen, the face of the student is judged not to be focused, if the face of the student is facing the screen, the face of the student is judged to be focused, if the face of the student is facing the screen:
Figure BDA0002368595400000032
wherein L isiAnd WiLength and width of the student's face at time i, LsAnd WsLength and width of the face that is standard for students;
s4.2: according to the comparison condition of the opening height of the eyes of the student and the opening height of the preset standard face eyes, the eye opening degree of the student is judged, the concentration degree of the student at the moment i is obtained, and the judgment formula is as follows:
Figure BDA0002368595400000033
wherein HiFor the eye opening height of the student at time i, HsEye opening height, L, standard for studentsiLength of the student's face at time i, LsIf the length of the face standard for the student is larger than the length of the face standard for the student, the student is attentive at the moment i, and if the length of the face standard for the student is smaller than the length of the face standard for the student, the student is attentive at the moment i;
based on the per-moment concentration results, the student's state is continuously monitored; if the student remains inattentive for a preset duration, the student's state is judged to be inattentive.
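The two checks of S4.1 and S4.2 can be sketched as follows; the absolute-tolerance aspect-ratio comparison and the `tolerance` and `eye_threshold` values are illustrative assumptions, since the patent's exact formulas are given only as images:

```python
def is_facing_screen(L_i, W_i, L_s, W_s, tolerance=0.2):
    """S4.1: the face is taken as facing the screen when the observed
    length/width ratio stays close to the standard face's ratio
    (turning the head sideways shrinks the apparent width).
    `tolerance` is an assumed parameter, not a value from the patent."""
    return abs(L_i / W_i - L_s / W_s) <= tolerance * (L_s / W_s)

def is_attentive(L_i, W_i, H_i, L_s, W_s, H_s, eye_threshold=0.5):
    """S4.2: attentive at moment i if the face is on-screen and the eye
    opening, normalised by face length, is at least `eye_threshold`
    of the standard face's normalised eye opening."""
    if not is_facing_screen(L_i, W_i, L_s, W_s):
        return False
    return (H_i / L_i) >= eye_threshold * (H_s / L_s)
```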
In one embodiment, the method further comprises dividing each lesson into different stages according to the times at which students are asked to answer questions.
In one embodiment, after S5, the method further comprises:
uploading the learning-state evaluation results to a server, feeding each result back to the corresponding student terminal according to the student information, and summarizing all students' learning states for feedback to the corresponding teacher terminal.
Based on the same inventive concept, a second aspect of the present invention provides a facial-recognition-based system for evaluating students' online-class learning state, comprising:
an information acquisition module for acquiring students' facial images, the students' answers to questions, and student information;
a student-answer evaluation module for obtaining each student's answer results by comparing the student's answers with the reference answers;
a comprehension recognition module for standardizing the collected facial images into uniform picture data and inputting it into the trained micro-expression recognition convolutional neural network model to obtain the student's in-class comprehension state;
a concentration recognition module for performing face recognition on the collected facial images, extracting the face picture, and extracting facial features to obtain the student's face size (face length and face width) and eye-opening height, and for obtaining the student's concentration by comparing the student's face length-to-width ratio with that of the preset standard face and the student's eye-opening height with that of the preset standard face;
an evaluation result module for taking the student's answer results, in-class comprehension state, and concentration together as the evaluation of the student's learning state.
Based on the same inventive concept, a third aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed, performs the method of the first aspect.
The technical solutions in the embodiments of the present application have at least the following technical effects:
the invention provides a student online class learning state evaluation method based on facial recognition, which is characterized in that after facial images of students, student answer question conditions and student information are obtained, the student answer question results are obtained according to the comparison condition of the student answer question conditions and reference answers; standardizing the collected facial images to obtain consistent picture information, and inputting the consistent picture information into a trained microexpression recognition convolutional neural network model to obtain the class-attending comprehension degree state of the students; carrying out face recognition on the collected face image, extracting a face image, carrying out face feature extraction, and obtaining the concentration degree of the student according to the comparison condition of the ratio of the length to the width of the face of the student and the length-width ratio of a preset standard face and the comparison condition of the eye opening height of the student and the eye opening height of the preset standard face; and taking the answer question result, the class-attending comprehension degree state and the concentration degree of the student as the evaluation result of the class-attending learning state of the student.
Compared with the manual judgment of the prior art, the invention recognizes students' in-class comprehension state by constructing a micro-expression recognition convolutional neural network model; by recognizing micro-expressions, it captures subtle changes in students' expressions and facial features and matches them to comprehension states, yielding each student's real-time comprehension during online learning. The student's concentration is obtained by comparing the face length-to-width ratio with that of the preset standard face and the eye-opening height with that of the preset standard face: the aspect ratio indicates whether the face is directly facing the screen, and the eye-opening height indicates whether the eyes are open beyond a threshold. On the one hand, this improves both recognition efficiency and accuracy; on the other hand, the invention evaluates the learning state along three different dimensions (answers to questions, in-class comprehension state, and concentration), which improves the comprehensiveness of the evaluation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a student online course learning state evaluation method based on facial recognition according to the present invention;
FIG. 2 is a schematic view illustrating how to recognize the class attendance understanding status of a student in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a micro-expression recognition model based on a convolutional neural network according to the present invention;
FIG. 4 is a flowchart illustrating the focus evaluation of students on class according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of determining that a student is inattentive during the time period t1–t2 in an embodiment of the present invention;
fig. 6 is a block diagram illustrating a structure of a system for evaluating learning status of a student in a web lesson based on facial recognition according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating an implementation of a student web-lesson learning state evaluation system based on facial recognition according to an embodiment of the present invention;
fig. 8 is a block diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The invention aims to provide a student web course learning state evaluation method and system based on facial recognition, which are used for solving or at least partially solving the technical problem of low efficiency of the method in the prior art.
In order to achieve the above object, the main concept of the present invention is as follows:
First, acquire students' facial images, the students' answers to questions, and student information; then obtain each student's answer results by comparing the answers with the reference answers; next, standardize the collected facial images into uniform picture data and input it into a trained micro-expression recognition convolutional neural network model to obtain the student's in-class comprehension state; then perform face recognition on the collected images, extract the face picture and its facial features to obtain the student's face size and eye-opening height, and obtain the student's concentration by comparing the face length-to-width ratio with that of a preset standard face and the eye-opening height with that of the preset standard face; finally, take the answer results, in-class comprehension state, and concentration together as the evaluation of the student's learning state.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
The embodiment provides a method for evaluating the learning state of a student in a web course based on facial recognition, please refer to fig. 1, and the method comprises the following steps:
s1: acquiring facial images of students, conditions of answering questions of the students and student information;
specifically, in the process of starting the online class learning of the student, the student starts a camera of the computer to acquire facial information of the student. The collected face image, the student answering question condition and the student information are used as input and uploaded to a server, and relevant modules acquire the input information.
The video stream is used to monitor the student's listening state. In a specific implementation, each online lesson can be divided into different stages according to the times at which students are asked to answer questions (for example, four stages), and the video stream, answer results, and student information for each stage are uploaded to the server. Since a student's learning state is unlikely to change much over a short time, the student's video can be sampled at low frequency (1 Hz), i.e., one frame per second, to evaluate the student's class state at that moment.
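The 1 Hz down-sampling described above amounts to keeping one frame per second of a higher-frame-rate stream; a sketch of the frame-index selection (the frame rates in the usage note are assumptions):

```python
def sample_frame_indices(fps, duration_s, sample_hz=1):
    """Indices of the frames to keep when down-sampling a video of
    `fps` frames per second to `sample_hz` captures per second."""
    step = int(fps / sample_hz)
    total = int(fps * duration_s)
    return list(range(0, total, step))
```

For example, a 30 fps stream sampled at 1 Hz keeps frames 0, 30, 60, and so on.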
S2: obtaining each student's answer results by comparing the student's answers with the reference answers.
Specifically, after the answers uploaded by each student are compared with the reference answers, they can be scored according to the comparison, yielding the student's percent score $Q_k$ for answering questions in each stage, where $k$ denotes the $k$-th stage.
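A minimal sketch of the per-stage answer scoring; comparing answers item by item against the reference answers is an assumption, since the patent does not specify the comparison granularity:

```python
def stage_answer_score(answers, reference):
    """Percent score Q_k for stage k: the fraction of the student's
    answers that match the reference answers, scaled to 100."""
    correct = sum(a == r for a, r in zip(answers, reference))
    return 100.0 * correct / len(reference)
```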
S3: standardizing the collected facial images into uniform picture data and inputting it into the trained micro-expression recognition convolutional neural network model to obtain the student's in-class comprehension state.
Specifically, face recognition is a biometric technology that identifies people from facial feature information; it integrates professional techniques such as artificial intelligence, machine recognition, machine learning, model theory, expert systems, and video image processing. A face recognition system mainly comprises four parts: image acquisition and detection, image preprocessing, image feature extraction, and matching and recognition.
Micro-expression recognition has gained wide attention in recent years as an extension of face recognition technology. Facial expressions are an intuitive reflection of human emotions and psychology. Unlike conventional facial expressions, micro-expression is a special facial micro-motion that can be used as an important basis for judging subjective emotion of a person. With the development of machine identification and deep learning technology, the feasibility and reliability of micro-expression identification are greatly improved.
Through extensive research and practice, the applicant found that students' emotions generally do not fluctuate much during online-class learning, so recognizing conventional emotions such as happiness or sadness cannot reflect the learning state. The invention therefore provides a micro-expression recognition module for students, which captures subtle expression changes and facial features, matches them to comprehension states, and obtains each student's real-time comprehension state during online learning.
The convolutional neural network is a deep learning method widely applied in computer vision and image processing. Compared with other machine learning methods, it can effectively process large-scale data, which suits the large amount of information an online-course learning platform must handle. Trained on given inputs and their expected outputs, the convolutional neural network takes raw images as input, trains automatically, and extracts features autonomously, yielding the recognition model, i.e., the micro-expression recognition convolutional neural network model. The process by which this model recognizes comprehension states is shown in fig. 2.
S3 further reduces the time required for manual preprocessing and is suitable for large-scale picture training, thereby improving recognition efficiency.
S4: carrying out face recognition on the collected face image, extracting a face image and carrying out face feature extraction to obtain the face size and the eye opening height of the student, wherein the face size of the student comprises the face length and the face width; according to the comparison condition of the ratio of the length to the width of the face of the student and the length-width ratio of the preset standard face and the comparison condition of the eye opening height of the student and the eye opening height of the preset standard face, the concentration degree of the student is obtained;
specifically, S4 detects whether the student is attentive during online lesson learning and evaluates the student's concentration. The length-width ratio of the standard face and the eye opening height of the preset standard face are obtained in advance; whether the face is facing the screen is preliminarily determined by comparing face length-width ratios, and the eye openness is then further determined by comparing eye opening heights, so that the student's concentration can be judged.
S5: and taking the answer question result, the class-attending comprehension degree state and the concentration degree of the student as the evaluation result of the class-attending learning state of the student.
Specifically, in this step, the answer question result, the comprehension state and the concentration are used together as the final evaluation result, so that the learning state of the student is evaluated from different aspects or dimensions, improving the objectivity and accuracy of the evaluation.
In one embodiment, the method for constructing the microexpression recognition convolutional neural network model trained in S3 includes:
searching a micro-expression database for face micro-expression pictures respectively conforming to the pleasure, understanding and confusion state characteristics, and, after compression, stretching, sharpening and similar processing, converting the pictures corresponding to each comprehension state into picture information of uniform size and format as training data, wherein the online class comprehension state of the student is divided into three levels: pleasure, understanding and confusion; the facial features corresponding to pleasure include eyes open, face facing the screen and mouth corners raised; the facial features corresponding to understanding include face facing the screen and eyebrows relaxed; and the facial features corresponding to confusion include eyebrows tightly knitted, eyes slightly squinted and mouth corners turned down;
determining the structure of the micro-expression recognition convolutional neural network model, wherein the structure comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a feature layer, a fully connected layer, a classification layer and an output layer;
and training the micro expression recognition convolutional neural network model by adopting training data according to preset model parameters to obtain the trained micro expression recognition convolutional neural network model.
Specifically, the class state of the student, i.e., the degree of understanding in class, can be classified into three levels: pleasure, understanding and confusion. When a student learns in an online lesson, the facial features corresponding to pleasure are: eyes open, face facing the screen, mouth corners raised, and the like. The facial features corresponding to understanding are: face facing the screen, eyebrows relaxed, and the like. The facial features corresponding to confusion are: eyebrows tightly knitted, eyes slightly squinted, mouth corners turned down, and the like.
A micro-expression recognition convolutional neural network model is constructed using a convolutional neural network, and its structure is shown in fig. 3. The model mainly comprises the following parts: an input layer, convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, a feature layer, a fully connected layer, a classification layer and an output layer. The interaction among these layers enables the model to extract features from a face picture and match them to the student's in-class comprehension state, so that the comprehension state can be predicted from the student's face picture during class.
Face micro-expression pictures respectively conforming to the pleasure, understanding and confusion state characteristics are retrieved from the micro-expression database. After compression, stretching and sharpening, the pictures are processed into picture information of uniform size and format. Once the picture information is input to convolutional layer 1, that layer performs feature extraction on the picture; the result is then input to pooling layer 1 for dimension-reduction compression, and then passed through convolutional layer 2 and pooling layer 2, repeating the operation. The feature layer compresses the picture into a one-dimensional vector and outputs it to the fully connected layer, a classical neural network structure formed by the forward connection of a plurality of neurons, whose output is passed to the classifier to be matched with the corresponding comprehension state. In this way the convolutional neural network micro-expression recognition model is trained, so that it automatically learns and stores the intrinsic association between picture features and the corresponding comprehension states.
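The layer-by-layer flow just described can be illustrated with a minimal, self-contained sketch. This is not the patented model (whose filter sizes and trained weights are not given here); the 28×28 input, the 3×3 kernels and the random weights are assumptions for illustration only:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (single channel, single filter) + ReLU."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return np.maximum(out, 0.0)

def maxpool(x, s=2):
    """2x2 max pooling (dimension-reduction compression)."""
    h, w = x.shape
    x = x[:h - h % s, :w - w % s]
    return x.reshape(x.shape[0] // s, s, x.shape[1] // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.random((28, 28))            # stand-in for a normalized face picture
k1, k2 = rng.random((3, 3)), rng.random((3, 3))

x = maxpool(conv2d(img, k1))          # convolutional layer 1 + pooling layer 1
x = maxpool(conv2d(x, k2))            # convolutional layer 2 + pooling layer 2
feat = x.ravel()                      # feature layer: compress to a 1-D vector
W = rng.random((3, feat.size))        # fully connected layer, 3 output classes
probs = softmax(W @ feat)             # classification layer -> comprehension state
state = ("pleasure", "understanding", "confusion")[int(probs.argmax())]
```

With training (backpropagation on labeled micro-expression pictures, omitted here), the weights `k1`, `k2` and `W` would be learned rather than random.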
After the convolutional neural network model is trained, the micro-expression recognition model is established. Video pictures of the students are then standardized into consistent picture information and input to the trained micro-expression recognition convolutional neural network model, which outputs the comprehension state corresponding to each picture.
In one embodiment, S3 specifically includes:
s3.1: inputting picture information corresponding to the collected facial image into a first convolution layer through an input layer, and performing feature extraction through the first convolution layer;
s3.2: performing dimensionality-reduction compression on the image obtained in S3.1 through the first pooling layer;
s3.3: performing feature extraction on the image subjected to the dimensionality reduction compression processing through a second convolution layer, and performing dimensionality reduction compression through a second pooling layer;
s3.4: compressing the image obtained in S3.3 into a one-dimensional vector through the feature layer and outputting the one-dimensional vector to the fully connected layer;
s3.5: outputting the data to the classification layer through the fully connected layer, which is formed by the forward connection of a plurality of neurons;
s3.6: matching the result output by the fully connected layer with the corresponding comprehension state through the classification layer to obtain the comprehension state corresponding to the picture;
s3.7: and outputting the corresponding comprehension degree state of the picture through the output layer.
Specifically, S3.1–S3.7 describe the processing flow of the micro-expression recognition convolutional neural network model, which finally yields the comprehension state.
In one embodiment, after S3.7, the method further comprises: different comprehension degree states are given different scores.
Specifically, the class state of the student, i.e., the degree of understanding in class, is classified into three levels: pleasure, understanding and confusion; for example, the comprehension scores corresponding to the three levels are 100, 80 and 40 points respectively.
In one embodiment, the comprehension state corresponding to the picture output by the output layer is the comprehension state of the student at one moment, and the method further includes:

obtaining a corresponding class state score $u_i$ according to the assigned scores;

obtaining the comprehension score $U_k$ of the student in each stage of the online course learning from the class state scores $u_i$:

$$U_k = \frac{1}{N}\sum_{i=1}^{N} u_i$$

where $N$ represents the number of sampled moments and $k$ represents the stage.
Specifically, the comprehension at each moment is obtained by the above method, and the average value is then taken, whereby the comprehension state for that stage is obtained.
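As a sketch, the stage comprehension score can be computed as below; the 100/80/40 mapping follows the example embodiment above, and the list-of-state-labels input format is an assumption:

```python
# Example scores per comprehension level, per the embodiment above.
LEVEL_SCORE = {"pleasure": 100, "understanding": 80, "confusion": 40}

def stage_comprehension_score(states):
    """U_k = (1/N) * sum(u_i): average the per-moment class state
    scores u_i over the N sampled moments of stage k."""
    scores = [LEVEL_SCORE[s] for s in states]
    return sum(scores) / len(scores)
```

For instance, a stage sampled as two pleasure moments and two confusion moments averages to (100 + 100 + 40 + 40) / 4 = 70 points.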
In one embodiment, the step of obtaining the concentration of the student according to the comparison between the ratio of the length to the width of the face of the student and the length-width ratio of the preset standard face and the comparison between the eye opening height of the student and the eye opening height of the preset standard face in the step S4 includes:
s4.1: judging whether the face of the student is facing the screen at time i according to the comparison of the ratio of the length to the width of the student's face with the length-width ratio of the preset standard face, wherein if the face is not facing the screen, the student is judged not attentive, and if the face is facing the screen, the next judgment is performed, the face being judged to face the screen when:

$$0.9 < \frac{L_i / W_i}{L_s / W_s} < 1.1 \tag{2}$$

where $L_i$ and $W_i$ are the length and width of the student's face at time i, and $L_s$ and $W_s$ are the length and width of the student's standard face;

s4.2: judging the eye openness of the student according to the comparison of the student's eye opening height with the preset standard face eye opening height, and obtaining the concentration of the student at time i, with the judgment formula:

$$\frac{H_i \cdot L_s / L_i}{H_s} > 50\% \tag{3}$$

where $H_i$ is the eye opening height of the student at time i, $H_s$ is the standard eye opening height, $L_i$ is the length of the student's face at time i, and $L_s$ is the standard face length; if the ratio is greater than 50%, the student is judged attentive at time i, and if it is less, the student is judged not attentive at time i;

and continuously monitoring, based on the concentration of the student at each time i, whether the student remains inattentive for a preset duration; if so, the student's state is judged to be inattentive.
Specifically, an implementation process of detecting whether a student is attentive to learning in a course of online lesson learning and evaluating the attentiveness of the student is shown in fig. 4.
When studying an online course, the student needs to look at the computer screen. In view of the particularity of online class learning, the invention's criterion for whether a student is attentive during online class learning is: whether the face is facing the screen and whether the eye openness is greater than a threshold, e.g., 50%.
After logging in, the student is required to collect a standard facial picture, i.e., facing the computer screen with eyes open, and the collected standard facial picture is uploaded to the server for storage.
Face recognition is performed on the student's standard facial picture, the face picture is extracted and facial feature extraction is performed, obtaining the standard face size (length $L_s$ × width $W_s$) and the standard eye opening height $H_s$. The student's real-time facial pictures during online class learning are then monitored; face recognition and facial feature extraction are performed to obtain the student's face size at time i (length $L_i$ × width $W_i$) and eye opening height $H_i$.

The collected face size at time i (length $L_i$ × width $W_i$) and eye opening height $H_i$, together with the standard face size (length $L_s$ × width $W_s$) and standard eye opening height $H_s$, are input to the concentration recognition model to judge the concentration state of the student at that moment.
First, whether the student's face is facing the screen at time i is judged; if not, the student is judged not attentive, and if so, the next judgment is performed. The facing-screen judgment formula is:

$$0.9 < \frac{L_i / W_i}{L_s / W_s} < 1.1 \tag{2}$$

In formula (2): $L_i$ and $W_i$ are the length and width of the student's face at time i, and $L_s$ and $W_s$ are the length and width of the student's standard face.
When a student turns or lowers his head, the length and width of the face captured on video change. However, considering that students may also move back and forth during class, which likewise changes the captured face length and width, the length-width ratio of the face is used as the reference: when the face moves back and forth while facing the screen, the captured face scales up or down proportionally and its length-width ratio is unchanged. Therefore, when the length-width ratio of the student's face at time i differs greatly from that in the standard state (considering that the computer screen has a certain width and the student's face turns from time to time, the ratio interval is set to (0.9, 1.1)), the student is judged not to be facing the screen and not attentive at that moment.
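The facing-screen test of formula (2) reduces to a single ratio comparison; a minimal sketch follows, where the (0.9, 1.1) interval comes from the embodiment and the sample pixel measurements are assumptions:

```python
def facing_screen(L_i, W_i, L_s, W_s, lo=0.9, hi=1.1):
    """Formula (2): the face is judged to be facing the screen when the
    length-width ratio at time i stays within (lo, hi) of the standard
    ratio, which is invariant to moving toward or away from the camera."""
    return lo < (L_i / W_i) / (L_s / W_s) < hi
```

A face seen at double size (moved closer, both length and width scaled) still passes, while a turned face whose captured width shrinks fails.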
Even when facing the screen, a student may be dozing off or staring blankly, in which case the student is not attentive in class. Therefore, after the student's face is judged to be facing the screen, the student's eye openness needs to be further judged, as shown in formula (3):
$$\frac{H_i \cdot L_s / L_i}{H_s} > 50\% \tag{3}$$

In the formula: $H_i$ is the eye opening height of the student at time i, $H_s$ is the student's standard eye opening height, $L_i$ is the length of the student's face at time i, and $L_s$ is the student's standard face length.

Because the distance between the student and the computer screen at time i may differ from that in the standard state, the face sizes may differ. When the student's face is facing the screen, the face size at time i is proportional to the standard face size, and the scaling factor is $L_s / L_i$. The eye opening height $H_i$ at time i is therefore multiplied by the scaling factor $L_s / L_i$ and compared with the standard eye opening height $H_s$ to obtain the eye openness of the student at time i; if the openness is greater than 50%, the student is judged attentive at time i, otherwise not attentive.
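The rescaling and comparison of formula (3) can be sketched as follows; the sample pixel values are assumptions:

```python
def eyes_open_enough(H_i, L_i, H_s, L_s, threshold=0.5):
    """Formula (3): rescale the measured eye opening height H_i to the
    standard face size via the factor L_s / L_i, then compare the
    resulting openness (H_i * L_s / L_i) / H_s with a 50% threshold."""
    openness = (H_i * L_s / L_i) / H_s
    return openness > threshold
```

For example, with a standard face length of 180 and standard eye opening of 40 (pixels, assumed), a face measured at length 200 with eye opening 30 rescales to 27, i.e., 67.5% openness, so the student is judged attentive at that moment.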
Whether the student is attentive at each moment is judged by the above method. Considering that students may blink or briefly lower their heads even while concentrating, attentiveness should be treated as a continuous process rather than judged second by second. When the student's concentration state is monitored as inattentive for 10 s continuously (the first inattentive moment within those 10 s is recorded as the time $t_1$ of entering the inattentive state), the student is considered inattentive, until the concentration state is monitored as attentive for 10 s continuously (the first attentive moment within those 10 s is recorded as the time $t_2$ of leaving the inattentive state). The period from $t_1$ to $t_2$ is counted as inattentive, and the remaining time is regarded as attentive, as shown in fig. 5.
The student's inattentive periods are obtained by the above method. Denoting the duration of each inattentive period as $T_i$, the total duration of inattention is:

$$T = \sum_{i=1}^{m} T_i$$

where $T$ is the total duration of inattention and $m$ is the number of inattentive periods.
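The 10 s hysteresis rule above can be sketched as a small state machine over one attentiveness flag per second; the window length default and the boolean-list input format are assumptions for illustration:

```python
def inattentive_periods(attentive, window=10):
    """Scan per-second attentiveness flags. Enter the inattentive state
    after `window` consecutive inattentive seconds (t1 = first second of
    that run); leave it after `window` consecutive attentive seconds
    (t2 = first second of that run). Returns a list of (t1, t2) periods."""
    periods, run, t1 = [], 0, None
    inattentive = False
    for t, ok in enumerate(attentive):
        if not inattentive:
            run = run + 1 if not ok else 0
            if run == window:                  # 10 s of inattention observed
                t1, inattentive, run = t - window + 1, True, 0
        else:
            run = run + 1 if ok else 0
            if run == window:                  # 10 s of attention observed
                periods.append((t1, t - window + 1))
                inattentive, run = False, 0
    if inattentive:                            # session ended while inattentive
        periods.append((t1, len(attentive)))
    return periods

def total_inattentive_time(periods):
    """T = sum of the durations T_i of the m inattentive periods."""
    return sum(t2 - t1 for t1, t2 in periods)
```

Note that short lapses (blinks, brief head lowering) shorter than the window never open a period, matching the continuity requirement above.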
In one embodiment, the method further comprises dividing each lesson into different stages according to the times at which students are asked to answer questions.
In one embodiment, after S5, the method further comprises:
and uploading the learning state evaluation result to a server, feeding back the obtained learning state evaluation result to the corresponding student terminal according to the student information, and summarizing and feeding back the learning states of all the students to the corresponding teaching teacher terminal.
Specifically, after the answer score, the learning concentration score and the total inattentive duration of each stage are obtained for each student, they are uploaded to the educational administration server for storage, where they serve as a record of the student's online lesson learning state and as a basis for the final evaluation of the student's online lesson grade. According to the student information labels, the learning results of each stage are fed back to the corresponding student. After each online course session ends, the online learning states of all students are summarized and fed back to the teaching teacher, serving as a reference for judging online teaching quality and improving teaching.
Example two
Based on the same inventive concept, the present embodiment provides a student online lesson learning state evaluation system based on facial recognition, please refer to fig. 6, the system includes:
the information acquisition module 201 is used for acquiring facial images of students, answer questions of the students and student information;
the student answer question evaluation module 202 is used for obtaining the answer question result of the student according to the comparison condition of the student answer question condition and the reference answer;
the understanding degree recognition module 203 is used for standardizing the acquired facial images into consistent picture information and inputting the consistent picture information into the trained microexpression recognition convolutional neural network model to obtain the lesson-attending understanding degree state of the students;
the concentration degree identification module 204 is used for performing face identification on the acquired facial image, extracting a face picture and performing facial feature extraction to obtain the face size and the eye opening height of the student, wherein the face size of the student comprises the face length and the face width; according to the comparison condition of the ratio of the length to the width of the face of the student and the length-width ratio of the preset standard face and the comparison condition of the eye opening height of the student and the eye opening height of the preset standard face, the concentration degree of the student is obtained;
and the evaluation result module 205 is used for taking the answer question result, the class attending comprehension degree state and the concentration degree of the student as the evaluation result of the class attending learning state of the student.
The overall implementation flow of the system provided by this embodiment is shown in fig. 7.
The invention has the advantages that:
1. a set of micro-expression-based methods and modules for recognizing students' learning comprehension, built on a convolutional neural network, is provided, improving both the efficiency and the accuracy of comprehension recognition.
2. a set of methods and modules for recognizing each individual student's real-time concentration based on facial features is provided, improving both the efficiency and the accuracy of concentration recognition.
3. a student online class learning state evaluation and feedback system is constructed, improving the comprehensiveness of the evaluation.
Since the system described in the second embodiment of the present invention is a system used for implementing the method for evaluating the learning state of the student web lesson based on the facial recognition in the first embodiment of the present invention, a person skilled in the art can understand the specific structure and the deformation of the system based on the method described in the first embodiment of the present invention, and thus the details are not described herein again. All systems adopted by the method of the first embodiment of the present invention are within the intended protection scope of the present invention.
EXAMPLE III
Based on the same inventive concept, the present embodiment provides a computer-readable storage medium on which a computer program is stored, which when executed implements the method described in the first embodiment.
Since the computer-readable storage medium introduced in the third embodiment of the present invention is a computer-readable storage medium used for implementing the student web lesson learning state evaluation method based on facial recognition in the first embodiment of the present invention, based on the method introduced in the first embodiment of the present invention, persons skilled in the art can understand the specific structure and deformation of the computer-readable storage medium, and therefore, no further description is given here. Any computer readable storage medium used in the method of the first embodiment of the present invention is within the scope of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (10)

1. A student online class learning state evaluation method based on facial recognition is characterized by comprising the following steps:
s1: acquiring facial images of students, conditions of answering questions of the students and student information;
s2: obtaining the answer question result of the student according to the comparison condition of the answer question condition of the student and the reference answer;
s3: standardizing the collected facial images to obtain consistent picture information, and inputting the consistent picture information into a trained microexpression recognition convolutional neural network model to obtain the class-attending comprehension degree state of the students;
s4: carrying out face recognition on the collected face image, extracting a face image and carrying out face feature extraction to obtain the face size and the eye opening height of the student, wherein the face size of the student comprises the face length and the face width; according to the comparison condition of the ratio of the length to the width of the face of the student and the length-width ratio of the preset standard face and the comparison condition of the eye opening height of the student and the eye opening height of the preset standard face, the concentration degree of the student is obtained;
s5: and taking the answer question result, the class-attending comprehension degree state and the concentration degree of the student as the evaluation result of the class-attending learning state of the student.
2. The method of claim 1, wherein the constructing method of the microexpression recognition convolutional neural network model trained in S3 comprises:
searching a micro-expression database for face micro-expression pictures respectively conforming to the pleasure, understanding and confusion state characteristics, and, after compression, stretching, sharpening and similar processing, converting the pictures corresponding to each comprehension state into picture information of uniform size and format as training data, wherein the online class comprehension state of the student is divided into three levels: pleasure, understanding and confusion; the facial features corresponding to pleasure include eyes open, face facing the screen and mouth corners raised; the facial features corresponding to understanding include face facing the screen and eyebrows relaxed; and the facial features corresponding to confusion include eyebrows tightly knitted, eyes slightly squinted and mouth corners turned down;
determining the structure of the micro-expression recognition convolutional neural network model, wherein the structure comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a feature layer, a fully connected layer, a classification layer and an output layer;
and training the micro expression recognition convolutional neural network model by adopting training data according to preset model parameters to obtain the trained micro expression recognition convolutional neural network model.
3. The method of claim 2, wherein S3 specifically comprises:
s3.1: inputting picture information corresponding to the collected facial image into a first convolution layer through an input layer, and performing feature extraction through the first convolution layer;
s3.2: performing dimensionality-reduction compression on the image obtained in S3.1 through the first pooling layer;
s3.3: performing feature extraction on the image subjected to the dimensionality reduction compression processing through a second convolution layer, and performing dimensionality reduction compression through a second pooling layer;
s3.4: compressing the image obtained in S3.3 into a one-dimensional vector through the feature layer and outputting the one-dimensional vector to the fully connected layer;
s3.5: outputting the data to the classification layer through the fully connected layer, which is formed by the forward connection of a plurality of neurons;
s3.6: matching the result output by the fully connected layer with the corresponding comprehension state through the classification layer to obtain the comprehension state corresponding to the picture;
s3.7: and outputting the corresponding comprehension degree state of the picture through the output layer.
4. The method of claim 3, wherein after S3.7, the method further comprises: different comprehension degree states are given different scores.
5. The method of claim 4, wherein the comprehension state corresponding to the picture output by the output layer is the comprehension state of the student at one moment, and the method further comprises:

obtaining a corresponding class state score $u_i$ according to the assigned scores;

obtaining the comprehension score $U_k$ of the student in each stage of the online course learning from the class state scores $u_i$:

$$U_k = \frac{1}{N}\sum_{i=1}^{N} u_i$$

where $N$ represents the number of sampled moments and $k$ represents the stage.
6. The method as claimed in claim 3, wherein the step of obtaining the concentration of the student based on the comparison of the length-width ratio of the student' S face with the length-width ratio of the preset standard face and the comparison of the eye opening height of the student with the eye opening height of the preset standard face in the step S4 comprises:
s4.1: judging whether the face of the student is facing the screen at the moment i according to the comparison condition of the ratio of the length to the width of the face of the student and the length-width ratio of a preset standard face, wherein if the face of the student is not facing the screen, the face of the student is judged not to be focused, if the face of the student is facing the screen, the face of the student is judged to be focused, if the face of the student is facing the screen:
Figure FDA0002368595390000022
wherein L isiAnd WiLength and width of the student's face at time i, LsAnd WsLength and width of the face that is standard for students;
s4.2: according to the comparison condition of the opening height of the eyes of the student and the opening height of the preset standard face eyes, the eye opening degree of the student is judged, the concentration degree of the student at the moment i is obtained, and the judgment formula is as follows:
Figure FDA0002368595390000023
wherein HiFor the eyes of students at moment iEye opening height, HsEye opening height, L, standard for studentsiLength of the student's face at time i, LsIf the length of the face standard for the student is larger than the length of the face standard for the student, the student is attentive at the moment i, and if the length of the face standard for the student is smaller than the length of the face standard for the student, the student is attentive at the moment i;
according to the concentration of the student at each time i, whether the state of the student remains unconcentrated for a preset duration is continuously monitored, and if so, the state of the student is judged to be unconcentrated.
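The two-part judgment of S4.2, eye opening normalized by face length followed by a sustained-inattention window, might be sketched as follows (the normalization and the window handling are assumptions; all names are hypothetical):

```python
def attentive(H_i, L_i, H_s, L_s):
    """S4.2 sketch: normalize the eye-opening height by face length so the
    test is scale-invariant, then compare with the standard ratio."""
    return (H_i / L_i) >= (H_s / L_s)

def sustained_inattention(samples, window):
    """Return True if the student has been inattentive for `window`
    consecutive samples (the preset duration of the claim)."""
    run = 0
    for is_attentive in samples:
        run = 0 if is_attentive else run + 1
        if run >= window:
            return True
    return False
```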
7. The method of claim 1, further comprising dividing each lesson into different stages according to the times at which questions are posed for the students to answer.
8. The method of claim 1, wherein after S5, the method further comprises:
and uploading the learning state evaluation result to a server, feeding back the obtained learning state evaluation result to the corresponding student terminal according to the student information, and summarizing the learning states of all students and feeding them back to the corresponding teacher terminal.
9. A student web course learning state evaluation system based on facial recognition is characterized by comprising:
the information acquisition module is used for acquiring facial images of students, answer question conditions of the students and student information;
the student answer evaluation module is used for obtaining the answer result of the student according to the comparison of the student's answers with the reference answers;
the comprehension degree recognition module is used for standardizing the collected facial images into consistent picture information and inputting the consistent picture information into the trained microexpression recognition convolutional neural network model to obtain the class attending comprehension degree state of the students;
the concentration recognition module is used for performing face recognition on the collected facial image, extracting a face picture and performing facial feature extraction to obtain the face size and the eye opening height of the student, wherein the face size of the student comprises the face length and the face width, and for obtaining the concentration of the student according to the comparison of the ratio of the length to the width of the face of the student with the length-width ratio of the preset standard face and the comparison of the eye opening height of the student with the eye opening height of the preset standard face;
and the evaluation result module is used for taking the answer question result, the class-attending comprehension degree state and the concentration degree of the student as the evaluation result of the class-attending learning state of the student.
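The five modules of claim 9 compose into a simple evaluation pipeline; a structural sketch of how the evaluation-result module might bundle the three signals (all names hypothetical, module internals stubbed):

```python
from dataclasses import dataclass

@dataclass
class LearningStateResult:
    """The three components that the evaluation-result module of claim 9
    combines into one student's learning state evaluation."""
    answer_result: bool    # from the answer-evaluation module
    comprehension: float   # from the comprehension-recognition module
    concentration: bool    # from the concentration-recognition module

def evaluate(answer_correct, comprehension_score, concentrated):
    # The evaluation-result module simply bundles the three signals
    # produced by the upstream modules.
    return LearningStateResult(answer_correct, comprehension_score, concentrated)
```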
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed, implements the method of any one of claims 1 to 8.
CN202010043578.0A 2020-01-15 2020-01-15 Face recognition-based student online class learning state evaluation method and system Active CN111242049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010043578.0A CN111242049B (en) 2020-01-15 2020-01-15 Face recognition-based student online class learning state evaluation method and system

Publications (2)

Publication Number Publication Date
CN111242049A true CN111242049A (en) 2020-06-05
CN111242049B CN111242049B (en) 2023-08-04

Family

ID=70865670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010043578.0A Active CN111242049B (en) 2020-01-15 2020-01-15 Face recognition-based student online class learning state evaluation method and system

Country Status (1)

Country Link
CN (1) CN111242049B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878677A (en) * 2017-01-23 2017-06-20 西安电子科技大学 Student classroom Grasping level assessment system and method based on multisensor
CN107292271A (en) * 2017-06-23 2017-10-24 北京易真学思教育科技有限公司 Learning-memory behavior method, device and electronic equipment
CN108021893A (en) * 2017-12-07 2018-05-11 浙江工商大学 It is a kind of to be used to judging that student to attend class the algorithm of focus
CN108710829A (en) * 2018-04-19 2018-10-26 北京红云智胜科技有限公司 A method of the expression classification based on deep learning and the detection of micro- expression
CN108875606A (en) * 2018-06-01 2018-11-23 重庆大学 A kind of classroom teaching appraisal method and system based on Expression Recognition
KR101960815B1 (en) * 2017-11-28 2019-03-21 유엔젤주식회사 Learning Support System And Method Using Augmented Reality And Virtual reality
CN109657529A (en) * 2018-07-26 2019-04-19 台州学院 Classroom teaching effect evaluation system based on human facial expression recognition
KR20190043513A (en) * 2019-04-18 2019-04-26 주식회사 아이티스테이션 System For Estimating Lecture Attention Level, Checking Course Attendance, Lecture Evaluation And Lecture Feedback
CN109815795A (en) * 2018-12-14 2019-05-28 深圳壹账通智能科技有限公司 Classroom student's state analysis method and device based on face monitoring
CN110334600A (en) * 2019-06-03 2019-10-15 武汉工程大学 A kind of multiple features fusion driver exception expression recognition method
CN110334626A (en) * 2019-06-26 2019-10-15 北京科技大学 A kind of on-line study system based on affective state
US20190362138A1 (en) * 2018-05-24 2019-11-28 Gary Shkedy System for Adaptive Teaching Using Biometrics
CN110674701A (en) * 2019-09-02 2020-01-10 东南大学 Driver fatigue state rapid detection method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WANG XIAOHUA ET AL.: "Two-level attention with two-stage multi-task learning for facial emotion recognition", Elsevier, pages 217 - 225 *
XU ZHENGUO: "Emotion recognition of learning images in a smart learning environment and its application", China Doctoral Dissertations Full-text Database, Social Sciences II, pages 127 - 21 *
MA TIANYI; CHENG BO: "Driver fatigue state recognition based on facial expression features", Journal of Automotive Safety and Energy, no. 03, pages 38 - 42 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797324A (en) * 2020-08-07 2020-10-20 广州驰兴通用技术研究有限公司 Distance education method and system for intelligent education
CN116018789A (en) * 2020-09-14 2023-04-25 华为技术有限公司 Method, system and medium for context-based assessment of student attention in online learning
WO2022052084A1 (en) * 2020-09-14 2022-03-17 Huawei Technologies Co., Ltd. Methods, systems, and media for context-aware estimation of student attention in online learning
CN112215973A (en) * 2020-09-21 2021-01-12 彭程 Data display method, multimedia platform and electronic equipment
CN112735213A (en) * 2020-12-31 2021-04-30 奇点六艺教育科技股份有限公司 Intelligent teaching method, system, terminal and storage medium
CN112818754A (en) * 2021-01-11 2021-05-18 广州番禺职业技术学院 Learning concentration degree judgment method and device based on micro-expressions
CN112907408A (en) * 2021-03-01 2021-06-04 北京安博创赢教育科技有限责任公司 Method, device, medium and electronic equipment for evaluating learning effect of students
CN113239841A (en) * 2021-05-24 2021-08-10 桂林理工大学博文管理学院 Classroom concentration state detection method based on face recognition and related instrument
CN113239841B (en) * 2021-05-24 2023-03-24 桂林理工大学博文管理学院 Classroom concentration state detection method based on face recognition and related instrument
CN113657146A (en) * 2021-06-30 2021-11-16 北京惠朗时代科技有限公司 Low-consumption identification method and device for non-concentration learning of students based on single image
CN113657146B (en) * 2021-06-30 2024-02-06 北京惠朗时代科技有限公司 Student non-concentration learning low-consumption recognition method and device based on single image
CN114493952A (en) * 2022-04-18 2022-05-13 北京梦蓝杉科技有限公司 Education software data processing system and method based on big data
CN115631074A (en) * 2022-12-06 2023-01-20 南京熊大巨幕智能科技有限公司 Network science and education method, system and equipment based on informatization
CN116996722A (en) * 2023-06-29 2023-11-03 广州慧思软件科技有限公司 Virtual synchronous classroom teaching system in 5G network environment and working method thereof
CN116996722B (en) * 2023-06-29 2024-06-04 广州慧思软件科技有限公司 Virtual synchronous classroom teaching system in 5G network environment and working method thereof
CN117909587A (en) * 2024-01-19 2024-04-19 广州铭德教育投资有限公司 Method and system for individually recommending post-class exercises of students based on AI

Also Published As

Publication number Publication date
CN111242049B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN111242049B (en) Face recognition-based student online class learning state evaluation method and system
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
CN108399376B (en) Intelligent analysis method and system for classroom learning interest of students
CN110334626B (en) Online learning system based on emotional state
CN110991381B (en) Real-time classroom student status analysis and indication reminding system and method based on behavior and voice intelligent recognition
CN110889672B (en) Student card punching and class taking state detection system based on deep learning
Littlewort et al. Automated measurement of children's facial expressions during problem solving tasks
CN112183238B (en) Remote education attention detection method and system
CN111046823A (en) Student classroom participation degree analysis system based on classroom video
CN113657168B (en) Student learning emotion recognition method based on convolutional neural network
CN112883867A (en) Student online learning evaluation method and system based on image emotion analysis
CN109754653B (en) Method and system for personalized teaching
Butko et al. Automated facial affect analysis for one-on-one tutoring applications
CN111178263B (en) Real-time expression analysis method and device
CN116050892A (en) Intelligent education evaluation supervision method based on artificial intelligence
Ray et al. Design and implementation of technology enabled affective learning using fusion of bio-physical and facial expression
CN111523445A (en) Examination behavior detection method based on improved Openpos model and facial micro-expression
CN114187640A (en) Learning situation observation method, system, equipment and medium based on online classroom
Jain et al. Student’s Feedback by emotion and speech recognition through Deep Learning
Saurav et al. AI Based Proctoring
Sarrafzadeh et al. See me, teach me: Facial expression and gesture recognition for intelligent tutoring systems
CN114638988A (en) Teaching video automatic classification method and system based on different presentation modes
Ning et al. Application of psychological analysis of micro-expression recognition in teaching evaluation
CN111950472A (en) Teacher grinding evaluation method and system
Gupta et al. An adaptive system for predicting student attentiveness in online classrooms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant