CN111242049B - Face recognition-based student online class learning state evaluation method and system - Google Patents


Info

Publication number
CN111242049B
CN111242049B (application CN202010043578.0A)
Authority
CN
China
Prior art keywords
student
face
students
length
eye opening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010043578.0A
Other languages
Chinese (zh)
Other versions
CN111242049A (en)
Inventor
徐麟
周传辉
李冠男
赵小维
吴棒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN202010043578.0A priority Critical patent/CN111242049B/en
Publication of CN111242049A publication Critical patent/CN111242049A/en
Application granted granted Critical
Publication of CN111242049B publication Critical patent/CN111242049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Software Systems (AREA)
  • General Business, Economics & Management (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Development Economics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Marketing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Educational Technology (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Operations Research (AREA)
  • Human Computer Interaction (AREA)
  • Game Theory and Decision Science (AREA)
  • Multimedia (AREA)

Abstract

The invention discloses a face recognition-based method for evaluating students' online class learning state, which comprises the following steps: acquiring students' face images, their question-answering records, and student information; obtaining each student's answer result by comparing the student's answers with the reference answers; standardizing the collected face images into uniform picture information and inputting it into a trained micro-expression recognition convolutional neural network model to obtain the student's online-class understanding-degree state; obtaining the student's concentration by comparing the ratio of the student's face length to face width with the aspect ratio of a preset standard face, and by comparing the student's eye opening height with the eye opening height of the preset standard face; and taking the answer result, the online-class understanding-degree state, and the concentration as the evaluation result of the student's online class learning state. The method can improve recognition efficiency and evaluation effect.

Description

Face recognition-based student online class learning state evaluation method and system
Technical Field
The invention relates to the technical field of computers, in particular to a student online class learning state evaluation method and system based on face recognition.
Background
At present, universities and colleges, in order to save teaching expenditure, save manpower and material resources, and enrich teaching content, are gradually introducing online teaching as a learning mode. Students can learn knowledge and skills by watching video and audio online through electronic devices and the Internet.
In the process of implementing the present invention, the present inventors have found that the method of the prior art has at least the following technical problems:
While online lessons are convenient for teachers and students, they also create a supervision problem: owing to practical constraints, students taking online lessons cannot be supervised. Students only watch videos, without the teacher-student interaction of a physical classroom, so some students do not truly "attend class" during online learning; and unlike traditional classroom teaching, students' attention and listening effectiveness cannot be fed back to the teacher in time, which greatly reduces the quality of online class attendance. Although the learning state can be monitored through equipment such as cameras, prior-art methods rely on manual inspection to judge whether students are attending class seriously, which is time-consuming, labor-intensive, and inefficient.
The prior-art methods therefore suffer from the technical problem of low efficiency.
Disclosure of Invention
In view of the above, the invention provides a student online class learning state evaluation method and system based on face recognition, which are used for solving or at least partially solving the technical problem of low efficiency existing in the method in the prior art.
In order to solve the technical problem, a first aspect of the present invention provides a face recognition-based method for evaluating a student's online class learning state, including:
S1: acquiring students' face images, question-answering records, and student information;
S2: obtaining the student's answer result by comparing the student's answers with the reference answers;
S3: standardizing the collected face images into uniform picture information and inputting it into a trained micro-expression recognition convolutional neural network model to obtain the student's online-class understanding-degree state;
S4: performing face recognition on the collected images, extracting the face, and extracting facial features to obtain the student's face size and eye opening height, where the face size comprises the face length and face width; obtaining the student's concentration by comparing the ratio of the student's face length to face width with the aspect ratio of a preset standard face, and by comparing the student's eye opening height with the eye opening height of the preset standard face;
S5: taking the answer result, the online-class understanding-degree state, and the concentration as the evaluation result of the student's online class learning state.
In one embodiment, the method for constructing the trained micro-expression recognition convolutional neural network model in S3 includes:
searching a micro-expression database for face micro-expression pictures that respectively match the characteristics of the pleasure, understanding, and confusion states, and processing the pictures corresponding to each understanding-degree state, after compression, stretching, sharpening, and similar operations, into picture information of uniform size and format to serve as training data, wherein the students' online-class understanding-degree state is divided into three levels: pleasure, understanding, and confusion; the facial features corresponding to pleasure include open eyes, the face facing the screen, and raised mouth corners; the features corresponding to understanding include the face facing the screen and relaxed eyebrows; the features corresponding to confusion include knitted eyebrows, slightly squinted eyes, and downturned mouth corners;
determining a structure of the micro-expression recognition convolutional neural network model, wherein the structure comprises an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a feature layer, a fully connected layer, a classification layer, and an output layer;
and training the micro-expression recognition convolutional neural network model on the training data with preset model parameters to obtain the trained micro-expression recognition convolutional neural network model.
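The layer sequence described above can be sketched as a small PyTorch model. The sequence of layers follows the patent; the channel counts, kernel sizes, 48x48 grayscale input, and ReLU activations are illustrative assumptions, since the patent does not fix these hyperparameters in the text:

```python
import torch
import torch.nn as nn

class MicroExpressionCNN(nn.Module):
    """Sketch of the patent's layer sequence: input -> conv1 -> pool1 ->
    conv2 -> pool2 -> feature layer (flatten to 1-D vector) ->
    fully connected layer -> classification layer."""
    def __init__(self, num_states: int = 3):  # pleasure / understanding / confusion
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),   # first convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # first pooling layer
            nn.Conv2d(16, 32, kernel_size=5, padding=2),  # second convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # second pooling layer
        )
        self.flatten = nn.Flatten()                       # feature layer: 1-D vector
        self.classifier = nn.Sequential(
            nn.Linear(32 * 12 * 12, 128),                 # fully connected layer
            nn.ReLU(),
            nn.Linear(128, num_states),                   # classification layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.flatten(self.features(x)))

model = MicroExpressionCNN()
logits = model(torch.randn(4, 1, 48, 48))  # batch of 4 standardized face images
print(logits.shape)  # torch.Size([4, 3])
```

Training against the labeled micro-expression pictures would then proceed with a standard cross-entropy loss over the three understanding-degree classes.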
In one embodiment, S3 specifically includes:
s3.1: the collected face image enters as picture information through the input layer and is passed to the first convolution layer, which performs feature extraction;
s3.2: performing dimension-reduction compression on the image obtained in step S3.1 through the first pooling layer;
s3.3: extracting features of the dimension-reduced image through the second convolution layer, and performing dimension-reduction compression through the second pooling layer;
s3.4: compressing the image obtained in step S3.3 into a one-dimensional vector through the feature layer and outputting it to the fully connected layer;
s3.5: outputting the data to the classification layer through a fully connected layer formed by forward-connected neurons;
s3.6: matching the output of the fully connected layer with the corresponding understanding-degree state through the classification layer to obtain the understanding-degree state corresponding to the picture;
s3.7: outputting the understanding-degree state corresponding to the picture through the output layer.
In one embodiment, the method further comprises, after S3.7: different scores are assigned to different understanding states.
In an embodiment, the understanding-degree state output by the output layer corresponds to the student's state at one moment, and the method further includes:
obtaining the corresponding class-state score u_i according to the assigned scores;
obtaining the understanding-degree score U_K of the student's online-class learning at each stage from the class-state scores u_i;
where N represents the number of moments and K represents the stage.
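The per-stage scoring can be sketched as follows: a score u_i is assigned to the understanding-degree state at each sampled moment i, and the u_i of the N moments in stage K are aggregated into U_K. The concrete score values and the use of a plain average are assumptions here, since the patent's scoring formula is not reproduced in the text:

```python
# Hypothetical score mapping for the three understanding-degree states;
# the patent assigns different scores but does not fix their values here.
STATE_SCORES = {"pleasure": 90, "understand": 80, "confuse": 60}

def stage_understanding_score(states):
    """U_K for one stage: aggregate (here, average) of the per-moment
    class-state scores u_i over the N sampled moments of the stage."""
    scores = [STATE_SCORES[s] for s in states]  # u_i for each moment i
    return sum(scores) / len(scores)            # N = len(scores)

print(stage_understanding_score(["understand", "pleasure", "confuse", "understand"]))  # 77.5
```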
In one embodiment, in S4, the step of obtaining the concentration of the student according to the comparison of the ratio of the length to the width of the face of the student and the aspect ratio of the preset standard face and the comparison of the eye opening height of the student and the eye opening height of the preset standard face includes:
s4.1: judging whether the student's face is directly facing the screen at moment i according to the comparison of the student's face length-to-width ratio with the aspect ratio of the preset standard face; if the student is not facing the screen, the student is judged inattentive; if the student is facing the screen, the next judgment is performed, with the facing-the-screen judging formula as follows:
where L_i and W_i are the length and width of the student's face at moment i, and L_s and W_s are the student's standard face length and width;
s4.2: judging the student's degree of eye opening according to the comparison of the student's eye opening height with the eye opening height of the preset standard face, obtaining the student's concentration at moment i, with the judging formula as follows:
where H_i is the student's eye opening height at moment i, H_s is the student's standard eye opening height, L_i is the length of the student's face at moment i, and L_s is the student's standard face length; if the value is larger than the standard, the student is focused at moment i, and if smaller, the student is not focused at moment i;
and continuously monitoring, over a preset time period, whether the student's state is inattentive according to the concentration at each moment i; if the state remains inattentive, the student is judged inattentive.
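The two-step judgment of S4.1 and S4.2 can be sketched as follows. Since the judging formulas are given as images in the original and not reproduced in the text, the relative-tolerance form of both inequalities is an assumption:

```python
def is_attentive(L_i, W_i, H_i, L_s, W_s, H_s, tol=0.1):
    """Two-step concentration check (S4.1 then S4.2).
    L/W: face length/width, H: eye opening height; subscript i is the
    current moment, subscript s the student's standard (calibration) face.
    The tolerance form of both comparisons is an assumption."""
    # S4.1: face judged to be facing the screen if its aspect ratio is
    # close enough to the standard aspect ratio
    facing = abs(L_i / W_i - L_s / W_s) <= tol * (L_s / W_s)
    if not facing:
        return False  # not facing the screen: inattentive
    # S4.2: eyes judged sufficiently open if the length-normalized eye
    # opening is at least the (tolerance-scaled) standard opening
    return (H_i / L_i) >= (H_s / L_s) * (1 - tol)

print(is_attentive(L_i=20, W_i=14, H_i=1.1, L_s=20, W_s=14, H_s=1.0))  # True
print(is_attentive(L_i=26, W_i=14, H_i=1.1, L_s=20, W_s=14, H_s=1.0))  # False (face turned)
```

Sustained inattention over a preset time period (as in the final monitoring step) would then be a simple check that `is_attentive` stays false across consecutive sampled moments.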
In one embodiment, the method further comprises dividing each online lesson into different stages according to the times at which students answer questions.
In one embodiment, after S5, the method further comprises:
and uploading the learning state evaluation result to a server, feeding back the obtained evaluation result to the corresponding student terminal according to the student information, and collecting the online-class learning states of all students and feeding them back to the corresponding teacher terminal.
Based on the same inventive concept, a second aspect of the present invention provides a student online class learning state evaluation system based on face recognition, comprising:
the information acquisition module is used for acquiring face images of students, question answering conditions of the students and student information;
the student answer evaluation module is used for obtaining the student's answer result by comparing the student's answers with the reference answers;
the understanding-degree recognition module is used for standardizing the collected face images into uniform picture information and inputting it into the trained micro-expression recognition convolutional neural network model to obtain the student's online-class understanding-degree state;
the concentration recognition module is used for performing face recognition on the collected images, extracting the face, and extracting facial features to obtain the student's face size and eye opening height, where the face size comprises the face length and width; and for obtaining the student's concentration by comparing the ratio of the student's face length to face width with the aspect ratio of a preset standard face, and by comparing the student's eye opening height with the eye opening height of the preset standard face;
and the evaluation result module is used for taking the answer question result, the online class understanding degree state and the concentration degree of the students as the evaluation result of the online class learning state of the students.
Based on the same inventive concept, a third aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed, implements the method of the first aspect.
The above-mentioned one or more technical solutions in the embodiments of the present application at least have one or more of the following technical effects:
According to the face recognition-based student online class learning state evaluation method, after the students' face images, question-answering records, and student information are acquired, the student's answer result is obtained by comparison with the reference answers; the collected face images are standardized into uniform picture information and input into a trained micro-expression recognition convolutional neural network model to obtain the student's online-class understanding-degree state; face recognition and facial feature extraction are performed on the collected images, and the student's concentration is obtained by comparing the ratio of the face length to the face width with the aspect ratio of a preset standard face and by comparing the eye opening height with that of the preset standard face; the answer result, the understanding-degree state, and the concentration are then taken together as the evaluation result of the student's online class learning state.
Compared with the manual judgment of the prior art, the invention recognizes the students' online-class understanding-degree state by constructing a micro-expression recognition convolutional neural network model: performing micro-expression recognition on students captures subtle changes in their expressions and facial features and matches them to the understanding-degree state, yielding the students' real-time state during online learning. The student's concentration is obtained by comparing the ratio of the face length to the face width with the aspect ratio of a preset standard face and by comparing the eye opening height with that of the preset standard face; whether the face is directly facing the screen and whether the eye opening height exceeds a threshold can thus be judged from the face aspect ratio and the eye opening height, which improves both recognition efficiency and recognition accuracy. In addition, the invention evaluates the learning state along three different dimensions, namely the students' answer results, their understanding-degree state, and their concentration, which improves the comprehensiveness of the evaluation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a student online class learning state evaluation method based on face recognition;
fig. 2 is a diagram illustrating recognition of the status of the understanding degree of the students in the online class in the embodiment of the invention;
FIG. 3 is a schematic diagram of a micro-expression recognition model based on a convolutional neural network in the present invention;
FIG. 4 is a flowchart of the evaluation of the concentration of students in class in an embodiment of the invention;
FIG. 5 is a schematic diagram of determining that a student is inattentive during the time period t1 to t2 in an embodiment of the present invention;
fig. 6 is a block diagram of a student online class learning status evaluation system based on face recognition according to an embodiment of the present invention;
FIG. 7 is a flowchart of an implementation of a student online class learning state evaluation system based on face recognition in an embodiment of the invention;
Fig. 8 is a block diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The invention aims to provide a student online class learning state evaluation method and system based on face recognition, which are used for solving or at least partially solving the technical problem of low efficiency in the method in the prior art.
In order to achieve the above object, the present invention is mainly conceived as follows:
First, students' face images, question-answering records, and student information are acquired; the student's answer result is then obtained by comparison with the reference answers; the collected face images are standardized into uniform picture information and input into a trained micro-expression recognition convolutional neural network model to obtain the student's online-class understanding-degree state; face recognition, face extraction, and facial feature extraction are then performed on the collected images to obtain the student's face size and eye opening height, and the student's concentration is obtained by comparing the ratio of the face length to the face width with the aspect ratio of a preset standard face and by comparing the eye opening height with that of the preset standard face; finally, the answer result, the understanding-degree state, and the concentration are taken as the evaluation result of the student's online class learning state.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
The embodiment provides a student online class learning state evaluation method based on face recognition, referring to fig. 1, the method includes:
s1: acquiring face images of students, question answering conditions of the students and student information;
Specifically, when a student starts an online lesson, the student turns on the computer's camera so that the student's face information can be acquired. The collected face images, the student's question-answering records, and the student information are uploaded to a server as inputs, from which the relevant modules obtain their input.
The video stream is used to monitor the students' class attendance. In implementation, each online lesson can be divided into stages, for example four stages, according to the times at which students answer questions, and the video stream, question-answering records, and student information of each stage are uploaded to the server. Considering that a student's learning state does not change greatly within a short time, the student's video can be sampled at a low frequency (1 Hz), i.e. one frame is collected per second, and each sampled frame is used to evaluate the student's state at that moment.
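The once-per-second sampling can be sketched as plain index arithmetic over the video's frames; reading the frames themselves (e.g. with OpenCV) is omitted here:

```python
def sample_frame_indices(total_frames: int, fps: float, sample_hz: float = 1.0):
    """Indices of frames kept when down-sampling a class video stream to
    sample_hz (the patent suggests 1 Hz, i.e. one face image per second)."""
    step = fps / sample_hz        # frames between two kept samples
    n = int(total_frames / step)  # number of samples that fit in the clip
    return [round(k * step) for k in range(n)]

# a 5-second clip at 25 fps yields 5 sampled frames
print(sample_frame_indices(total_frames=125, fps=25.0))  # [0, 25, 50, 75, 100]
```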
S2: obtaining the student's answer result by comparing the student's answers with the reference answers.
Specifically, after the answers uploaded by the student are compared with the reference answers, a score can be given according to the comparison, yielding the student's answer percentage score Q_K for each stage, where K denotes the K-th stage.
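A minimal sketch of the stage answer score Q_K, assuming each question is weighted equally and scored by exact match with the reference answer (the patent only states that the comparison is scored):

```python
def answer_score(student_answers, reference_answers):
    """Q_K: percentage score for one stage, taken as the share of the
    student's answers that match the reference answers."""
    correct = sum(a == r for a, r in zip(student_answers, reference_answers))
    return 100.0 * correct / len(reference_answers)

print(answer_score(["B", "C", "A", "D"], ["B", "C", "B", "D"]))  # 75.0
```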
S3: standardizing the collected face images into uniform picture information and inputting it into the trained micro-expression recognition convolutional neural network model to obtain the student's online-class understanding-degree state.
Specifically, face recognition is a biometric technology that identifies a person based on facial feature information; it integrates technologies such as artificial intelligence, machine recognition, machine learning, model theory, expert systems, and video image processing. A face recognition system mainly comprises four parts: image acquisition and detection, image preprocessing, image feature extraction, and matching and recognition.
Micro-expression recognition has gained widespread attention in recent years as an extension of face recognition technology. Facial expressions are visual reflections of human emotion and mind. Unlike conventional facial expressions, micro-expressions are special, subtle facial movements and can serve as an important basis for judging a person's subjective emotion. With the development of machine recognition and deep learning technology, the feasibility and reliability of micro-expression recognition have greatly improved.
Through extensive research and practice, the applicant found that students' emotions do not fluctuate much during online-class learning, so conventional emotion recognition of features such as happiness or sadness cannot reflect the learning state. Instead, a micro-expression recognition module performs micro-expression recognition on the students, capturing subtle changes in their expressions and facial features and matching them to the understanding-degree state, thereby obtaining the student's real-time understanding-degree state during online-class learning.
Convolutional neural networks are a deep learning method widely applied in computer vision and image processing. Compared with other machine learning methods, convolutional neural networks can effectively process large-scale data, which suits the large volume of information an online learning platform must handle. Through a training mode of given inputs and corresponding expected outputs, a convolutional neural network takes raw images as input and performs automatic training and autonomous feature extraction, yielding the recognition model, namely the micro-expression recognition convolutional neural network model. The recognition of the understanding-degree state by this model is shown in fig. 2.
Step S3 further reduces manual preprocessing time and is suitable for large-scale picture training, so the recognition efficiency can be improved.
S4: face recognition is carried out on the collected face images, face images are extracted, facial feature extraction is carried out, and the face size and the eye opening height of the student are obtained, wherein the face size of the student comprises face length and face width; obtaining concentration of the student according to the comparison condition of the ratio of the length to the width of the face of the student and the aspect ratio of the preset standard face and the comparison condition of the eye opening height of the student and the eye opening height of the preset standard face;
Specifically, S4 detects whether a student is concentrating during online class learning and evaluates the student's concentration. The aspect ratio of the standard face and the eye opening height of the preset standard face are obtained in advance; whether the face directly faces the screen is first judged by comparing face aspect ratios, and the eye opening degree is then further judged by comparing eye opening heights, so that the student's concentration can be evaluated.
S5: and taking the answer question result, the online class understanding degree state and the concentration degree of the students as the evaluation result of the online class learning state of the students.
Specifically, the step takes the answer question result, the understanding degree state and the concentration degree as the final evaluation result, and the learning state of the student can be evaluated from different aspects or dimensions, so that the objectivity and the accuracy of the evaluation can be improved.
In one embodiment, the method for constructing the trained micro-expression recognition convolutional neural network model in S3 includes:
searching the micro-expression database for face micro-expression pictures that respectively conform to the pleasure, understand, and confuse state characteristics, and, after compression, stretching, sharpening, and similar processing, converting the pictures corresponding to each understanding degree state into picture information of uniform size and format as training data, wherein the online class understanding degree state of students is divided into three levels: pleasure, understand, confuse; the facial features corresponding to pleasure include eyes open, face directly facing the screen, and mouth corners up; the facial features corresponding to understand include face directly facing the screen and eyebrows stretched; the facial features corresponding to confuse include eyebrows tightly locked, eyes slightly squinted, and mouth corners down;
determining a structure of a micro-expression recognition convolutional neural network model, wherein the structure of the model comprises an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a characteristic layer, a full connection layer, a classification layer and an output layer;
And training the micro-expression recognition convolutional neural network model by adopting training data according to preset model parameters to obtain a trained micro-expression recognition convolutional neural network model.
Specifically, the listening status of the students, i.e., their degree of understanding in class, can be classified into three levels: pleasure, understand, confuse. When students learn in an online class, the facial features corresponding to pleasure are: eyes open, face directly facing the screen, mouth corners up, and the like. The facial features corresponding to understand are: face directly facing the screen, eyebrows stretched, and the like. The facial features corresponding to confuse are: eyebrows tightly locked, eyes slightly squinted, mouth corners down, and the like.
The structure of the micro expression recognition convolutional neural network model is shown in figure 3 by adopting the convolutional neural network construction. The micro-expression recognition convolutional neural network model mainly comprises several parts: input layer, convolution layer 1, pooling layer 1, convolution layer 2, pooling layer 2, feature layer, full connection layer, classification layer, and output layer. The interaction among the layers enables the model to extract the characteristics of the face picture and match with the understanding degree state of the student when the student is in class, so that the understanding degree state of the student when the student is in class at the moment is predicted according to the face picture of the student when the student is in class.
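As a non-authoritative illustration, the layer sequence above can be sketched with Keras; the 48×48 grayscale input size, filter counts, and dense-layer width are assumptions for illustration, since the text does not fix concrete hyperparameters:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_micro_expression_model(input_shape=(48, 48, 1), num_classes=3):
    """Input -> conv1 -> pool1 -> conv2 -> pool2 -> feature -> dense -> softmax."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),                   # input layer
        layers.Conv2D(32, 3, activation="relu"),          # first convolution layer
        layers.MaxPooling2D(2),                           # first pooling layer (dimension reduction)
        layers.Conv2D(64, 3, activation="relu"),          # second convolution layer
        layers.MaxPooling2D(2),                           # second pooling layer
        layers.Flatten(),                                 # feature layer: one-dimensional vector
        layers.Dense(128, activation="relu"),             # fully connected layer
        layers.Dense(num_classes, activation="softmax"),  # classification/output: pleasure, understand, confuse
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training with labeled micro-expression pictures then follows the usual given-input, expected-output scheme via `model.fit`.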
Face micro-expression pictures conforming respectively to the pleasure, understand, and confuse state characteristics are searched in the micro-expression database. After compression, stretching, sharpening, and similar processing, the pictures are converted into picture information of uniform size and format. After the picture information is input into convolution layer 1, convolution layer 1 performs feature extraction on the picture; the result is then input into pooling layer 1 for dimension-reduction compression, and then into convolution layer 2 and pooling layer 2, where the operation is repeated. The feature layer compresses the picture into a one-dimensional vector and outputs it to the fully connected layer, which is a classical neural network structure formed by forward-connected neurons. The result is output to the classifier and matched with the corresponding understanding degree state. In this way the micro-expression recognition convolutional neural network model is trained, so that the model automatically learns and stores the inherent relationship between picture features and the corresponding understanding degree states.
After the convolutional neural network model is trained, the micro-expression recognition model is obtained. The students' video frames are then standardized into consistent picture information and input into the trained micro-expression recognition convolutional neural network model, which outputs the understanding degree corresponding to each picture.
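A minimal sketch of this inference step, assuming the trained model exposes a Keras-style `predict` and the face crop has already been standardized to the training size (48×48 grayscale assumed); the class order in `UNDERSTANDING_STATES` is an assumption and must match the training labels:

```python
import numpy as np

# Assumed class order; must match the labels used during training.
UNDERSTANDING_STATES = ["pleasure", "understand", "confuse"]

def predict_understanding(model, face_img):
    """face_img: a 2-D grayscale array already standardized to the
    training size. Returns the matched understanding degree state."""
    x = face_img.astype("float32") / 255.0   # normalize pixel values to [0, 1]
    x = x.reshape(1, *face_img.shape, 1)     # add batch and channel axes
    probs = model.predict(x, verbose=0)[0]   # softmax scores over the three states
    return UNDERSTANDING_STATES[int(np.argmax(probs))]
```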
In one embodiment, S3 specifically includes:
s3.1: the collected facial images enter picture information through an input layer and are input into a first convolution layer, and feature extraction is carried out through the first convolution layer;
s3.2: performing dimension-reduction compression on the image obtained in step S3.1 through the first pooling layer;
s3.3: extracting features of the image subjected to the dimension reduction compression treatment through a second convolution layer, and performing dimension reduction compression through a second pooling layer;
s3.4: compressing the image obtained in step S3.3 into a one-dimensional vector through the feature layer and outputting it to the fully connected layer;
s3.5: outputting the data to a classification layer through a full-connection layer formed by forward connection of a plurality of neurons;
s3.6: matching the result output by the full-connection layer with the corresponding understanding degree state through the classification layer to obtain the understanding degree state corresponding to the picture;
s3.7: and outputting the corresponding understanding degree state of the picture through the output layer.
Specifically, S3.1-3.7 describes the processing procedure of the micro-expression recognition convolutional neural network model, and finally the understanding degree state can be obtained.
In one embodiment, the method further comprises, after S3.7: different scores are assigned to different understanding states.
Specifically, the listening status of students, i.e., the degree of understanding in class, is classified into three levels: pleasure, understand, confuse; for example, the understanding degree scores corresponding to the levels are 100, 80, and 40 points respectively.
In an embodiment, the understanding degree state corresponding to the output picture of the output layer is an understanding degree state of the student at a moment, and the method further includes:
obtaining a corresponding class-state score u_i for each moment according to the assigned scores;

according to the class-state scores u_i, obtaining the understanding degree score U_k of the student's online class learning at each stage:

U_k = (1/N) Σ_{i=1}^{N} u_i

where N represents the number of moments and k represents the stage.
Specifically, the understanding degree at each moment is obtained by the above method, and the average value is then taken, thereby obtaining the understanding degree state of this stage.
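Under the example score mapping above (100/80/40 points), the stage score U_k is a plain average of the per-moment scores u_i; a minimal sketch, with names chosen for illustration:

```python
# Illustrative score mapping from the example above (100 / 80 / 40 points).
STATE_SCORES = {"pleasure": 100, "understand": 80, "confuse": 40}

def stage_understanding_score(states):
    """U_k = (1/N) * sum of u_i over the N recognized moments of stage k."""
    scores = [STATE_SCORES[s] for s in states]
    return sum(scores) / len(scores)

# A stage with four recognized moments:
print(stage_understanding_score(["pleasure", "understand", "understand", "confuse"]))  # 75.0
```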
In one embodiment, in S4, the step of obtaining the concentration of the student according to the comparison of the ratio of the length to the width of the face of the student and the aspect ratio of the preset standard face and the comparison of the eye opening height of the student and the eye opening height of the preset standard face includes:
s4.1: judging whether the face of the student directly faces the screen at moment i according to the comparison of the ratio of the length to the width of the student's face with the aspect ratio of the preset standard face; if the student does not face the screen, the student is judged inattentive; if the student faces the screen, the next judgment is made. The judgment formula for facing the screen is:

0.9 < (L_i / W_i) / (L_s / W_s) < 1.1

wherein L_i and W_i are the length and width of the student's face at moment i, and L_s and W_s are the standard length and width of the student's face;
s4.2: judging the eye opening degree of the student according to the comparison of the student's eye opening height with the eye opening height of the preset standard face, to obtain the concentration of the student at moment i. The judgment formula is:

(H_i × L_s / L_i) / H_s > 50%

wherein H_i is the eye opening height of the student at moment i, H_s is the student's standard eye opening height, L_i is the length of the student's face at moment i, and L_s is the standard length of the student's face; if the ratio is greater than 50%, the student is concentrating at moment i, and if it is smaller, the student is not concentrating at moment i;
and continuously monitoring, according to the concentration of the student at moment i, whether the student's state is inattentive within a preset time period; if the state remains inattentive throughout the period, the student is judged inattentive.
Specifically, the implementation process of detecting whether the students concentrate on learning in the course of the net lesson learning and evaluating the concentration of the students is shown in fig. 4.
When students learn in an online class, they need to watch the computer screen. In view of the specificity of online class learning, the evaluation criteria of the invention for student concentration during online class learning are: whether the face directly faces the screen, and whether the eye opening degree is greater than a threshold, such as 50%.
After logging in, the student's standard face picture must be collected, i.e. a picture taken while the student directly faces the computer screen with eyes open, and the collected standard face picture is uploaded to the server for storage.
Face recognition is performed on the student's standard face picture, the face image is extracted, and facial features are extracted to obtain the standard face size (length L_s × width W_s) and the eye opening height H_s of the student. The student's real-time face images during online class learning are then monitored: face recognition is performed, the face image is extracted, and facial features are extracted to obtain the face size at moment i (length L_i × width W_i) and the eye opening height H_i of the student at moment i.

The face size at moment i (length L_i × width W_i) and the eye opening height H_i, together with the standard face size (length L_s × width W_s) and the standard eye opening height H_s, are input into the concentration recognition model to judge the concentration state of the student at that moment.
Firstly, whether the face of the student directly faces the screen at moment i is judged; if not, the student is judged inattentive; if so, further judgment is made. The judgment formula for facing the screen is as follows:

0.9 < (L_i / W_i) / (L_s / W_s) < 1.1    (2)

In formula (2): L_i and W_i are the length and width of the student's face at moment i, and L_s and W_s are the standard length and width of the student's face.
When a student turns the head or lies down, the length and width of the captured face change. However, the student may also move back and forth during the online class, which likewise changes the captured length and width of the face, so the aspect ratio of the face is used as the reference basis: when a face directly facing the screen moves back and forth, the captured face scales up or down in equal proportion and its aspect ratio is unchanged. Therefore, when the aspect ratio of the student's face at moment i differs greatly from the standard state (the ratio interval is set to (0.9, 1.1), allowing for the width of the computer screen, occasional head rotation, and similar factors), it is judged that the student is not facing the screen and is inattentive at that moment.
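This facing-screen test, formula (2) with the (0.9, 1.1) interval, can be sketched as follows (function and parameter names are illustrative):

```python
def is_facing_screen(l_i, w_i, l_s, w_s, lo=0.9, hi=1.1):
    """Judge formula (2): the face is taken to directly face the screen when
    the moment-i aspect ratio stays within (0.9, 1.1) of the standard ratio."""
    ratio = (l_i / w_i) / (l_s / w_s)
    return lo < ratio < hi
```

Moving closer to or farther from the screen scales length and width equally, leaving the ratio near 1, while turning the head shrinks the apparent width and pushes the ratio out of the interval.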
Even when facing the screen, a student may be sleeping or dazing off, in which case the student is not focused on the class. Therefore, after the student's face is judged to directly face the screen, the eye opening degree of the student must be further judged, as shown in formula (3):
wherein: h i For the eye opening height of the student at the moment i, H s Eye opening height for students, L i For the length of the face of the student at time i, L s The length of the face is the standard for students.
Since the distance between the student and the computer screen at the moment i may be inconsistent with the standard time, the size of the face may be inconsistent. When the student's face is facing the screen, the size of the face at time i is in equal proportional relationship to the standard face size. From the trigonometric function, a scaling factor is derivedAnd then the eye opening height H of the student at the moment i i Multiplying by the scaling +.>Posterior and student standard eye opening height H s Comparing eye tension of student from time i and judging whether it isGreater than 50%, if greater than, the student is determined to be attentive at time i, and if less than, the student is determined to be attentive at time i.
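Formula (3) with the L_s/L_i scaling factor can be sketched as follows (names are illustrative):

```python
def is_concentrating(h_i, h_s, l_i, l_s, threshold=0.5):
    """Judge formula (3): scale the moment-i eye opening by the factor L_s/L_i
    (valid because a face directly facing the screen scales in equal proportion)
    and require the ratio to the standard eye opening H_s to exceed 50%."""
    return (h_i * l_s / l_i) / h_s > threshold
```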
By the above method, whether the student is attentive at each moment is judged. Students may also make small movements during class, such as blinking or briefly lowering the head, so a single second of inattention should not count against a student's concentration; a continuous process should be considered instead. When the concentration state is monitored as inattentive for 10 consecutive seconds, the student is judged to enter the inattentive state, and the first inattentive moment within those 10 s is recorded as time t_1. This lasts until the concentration state is monitored as attentive for 10 consecutive seconds, and the first attentive moment within those 10 s is recorded as the time t_2 of leaving the inattentive state. The inattentive period of the student is then from t_1 to t_2, and the remaining time is regarded as the student concentrating in class, as shown in fig. 5.

The inattentive periods of the student are obtained as above; denoting the duration of each period as T_i, the total inattention time of the student is:

T = Σ_{i=1}^{m} T_i

where T is the total inattention time and m is the number of inattentive periods.
In one embodiment, the method further comprises dividing each net lesson into different stages according to the time for the student to ask questions.
In one embodiment, after S5, the method further comprises:
and uploading the learning state evaluation result to a server, feeding back the obtained learning state evaluation result to a corresponding student terminal according to student information, and collecting and feeding back the net class learning states of all students to a corresponding teaching teacher terminal.
Specifically, after the answer score, the learning concentration score, and the total inattention time of each stage are obtained for each student, they are uploaded to the educational administration server for storage, where they can serve as a basis for reflecting the students' online class learning state and for evaluating final online class results. According to the student information labels, the per-stage scoring results of each student's learning condition are sent to the corresponding student for feedback. After each online class ends, the online class learning states of all students are summarized and fed back to the teaching teacher, serving as a basis for judging online class teaching quality and a reference for teaching improvement.
Example two
Based on the same inventive concept, the present embodiment provides a student online class learning state evaluation system based on face recognition, please refer to fig. 6, which includes:
an information acquisition module 201 for acquiring face images of students, conditions of answering questions by students, and student information;
the student answer question evaluation module 202 is configured to obtain a student answer question result according to a comparison of a student answer question condition and a reference answer condition;
the understanding degree recognition module 203 is configured to perform standardization processing on the collected facial images to obtain consistent picture information, and input the picture information into a trained micro-expression recognition convolutional neural network model to obtain a net class listening understanding degree state of the student;
the concentration recognition module 204 is configured to perform face recognition on the collected face image, extract a face image, and perform facial feature extraction to obtain a face size and an eye opening height of the student, where the face size of the student includes a face length and a face width; obtaining concentration of the student according to the comparison condition of the ratio of the length to the width of the face of the student and the aspect ratio of the preset standard face and the comparison condition of the eye opening height of the student and the eye opening height of the preset standard face;
And the evaluation result module 205 is used for taking the answer question result, the online class understanding degree state and the concentration degree of the student as the evaluation result of the online class learning state of the student.
The overall implementation flow of the system provided in this embodiment is shown in fig. 7.
The invention has the following advantages:
1. A method and module for recognizing students' learning understanding degree from micro-expressions based on a convolutional neural network are provided, which can improve both understanding degree recognition efficiency and recognition accuracy.
2. A method and module for recognizing each individual student's concentration in real time based on facial features are provided, which can likewise improve both recognition efficiency and recognition accuracy.
3. A student online class learning state evaluation and feedback system is constructed, which can improve the comprehensive evaluation effect.
Because the system described in the second embodiment of the present invention is a system for implementing the method for evaluating learning status of students based on facial recognition in the first embodiment of the present invention, the specific structure and the modification of the system can be known to those skilled in the art based on the method described in the first embodiment of the present invention, and thus will not be described herein. All systems used in the method according to the first embodiment of the present invention are within the scope of the present invention.
Example III
Based on the same inventive concept, the present embodiment provides a computer-readable storage medium having stored thereon a computer program which when executed implements the method described in embodiment one.
Since the computer readable storage medium introduced in the third embodiment of the present invention is a computer readable storage medium used for implementing the method for evaluating learning status of a student based on face recognition in the first embodiment of the present invention, based on the method introduced in the first embodiment of the present invention, a person skilled in the art can understand the specific structure and modification of the computer readable storage medium, and therefore, the detailed description thereof is omitted herein. All computer readable storage media used in the method of the first embodiment of the present invention are within the scope of the present invention.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims and the equivalents thereof, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A student online class learning state evaluation method based on face recognition is characterized by comprising the following steps:
s1: acquiring face images of students, question answering conditions of the students and student information;
s2: obtaining the answer question result of the student according to the comparison condition of the answer question condition of the student and the reference answer;
s3: the collected facial images are standardized to be consistent picture information and then are input into a trained micro-expression recognition convolutional neural network model, and the state of the degree of understanding of the students in class of net class is obtained;
s4: face recognition is carried out on the collected face images, face images are extracted, facial feature extraction is carried out, and the face size and the eye opening height of the student are obtained, wherein the face size of the student comprises face length and face width; obtaining concentration of the student according to the comparison condition of the ratio of the length to the width of the face of the student and the aspect ratio of the preset standard face and the comparison condition of the eye opening height of the student and the eye opening height of the preset standard face;
s5: taking the answer question result, the online class understanding degree state and the concentration degree of the students as the evaluation result of the online class learning state of the students;
the construction method of the trained micro-expression recognition convolutional neural network model in the S3 comprises the following steps:
searching a micro-expression database for face micro-expression pictures respectively conforming to the pleasure, understand, and confuse state characteristics, and processing the pictures corresponding to each understanding degree state, after compression, stretching, and sharpening, into picture information of uniform size and format as training data, wherein the online class understanding degree state of students is divided into three levels: pleasure, understand, confuse; the facial features corresponding to pleasure include eyes open, face directly facing the screen, and mouth corners up; the facial features corresponding to understand include face directly facing the screen and eyebrows stretched; the facial features corresponding to confuse include eyebrows tightly locked, eyes slightly squinted, and mouth corners down;
determining a structure of a micro-expression recognition convolutional neural network model, wherein the structure of the model comprises an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a characteristic layer, a full connection layer, a classification layer and an output layer;
training the micro-expression recognition convolutional neural network model by adopting training data according to preset model parameters to obtain a trained micro-expression recognition convolutional neural network model;
s4, obtaining concentration of the student according to the comparison condition of the ratio of the length of the face of the student to the width of the face and the aspect ratio of the preset standard face and the comparison condition of the eye opening height of the student and the eye opening height of the preset standard face, wherein the method comprises the following steps:
S4.1: judging whether the face of the student directly faces the screen at moment i according to the comparison of the ratio of the length to the width of the student's face with the aspect ratio of the preset standard face; if the student does not face the screen, the student is judged inattentive; if the student faces the screen, the next judgment is made, wherein the judgment formula for facing the screen is:

0.9 < (L_i / W_i) / (L_s / W_s) < 1.1

wherein L_i and W_i are the length and width of the student's face at moment i, and L_s and W_s are the standard length and width of the student's face;
s4.2: judging the eye opening degree of the student according to the comparison of the student's eye opening height with the eye opening height of the preset standard face, to obtain the concentration of the student at moment i, wherein the judgment formula is:

(H_i × L_s / L_i) / H_s > 50%

wherein H_i is the eye opening height of the student at moment i, H_s is the student's standard eye opening height, L_i is the length of the student's face at moment i, and L_s is the standard length of the student's face; if the ratio is greater than 50%, the student is concentrating at moment i, and if it is smaller, the student is not concentrating at moment i;
and continuously monitoring whether the student status is inattentive or not in a preset time period according to the concentration degree of the student at the moment i, and judging that the student status is inattentive if the student status is inattentive.
2. The method of claim 1, wherein S3 specifically comprises:
s3.1: the collected facial images enter picture information through an input layer and are input into a first convolution layer, and feature extraction is carried out through the first convolution layer;
s3.2: performing dimension-reduction compression on the image obtained in step S3.1 through the first pooling layer;
s3.3: extracting features of the image subjected to the dimension reduction compression treatment through a second convolution layer, and performing dimension reduction compression through a second pooling layer;
s3.4: compressing the image obtained in step S3.3 into a one-dimensional vector through the feature layer and outputting it to the fully connected layer;
s3.5: outputting the data to a classification layer through a full-connection layer formed by forward connection of a plurality of neurons;
s3.6: matching the result output by the full-connection layer with the corresponding understanding degree state through the classification layer to obtain the understanding degree state corresponding to the picture;
s3.7: and outputting the corresponding understanding degree state of the picture through the output layer.
3. The method of claim 2, wherein after S3.7 the method further comprises: different scores are assigned to different understanding states.
4. The method of claim 3, wherein the understanding degree state corresponding to the output picture of the output layer is an understanding degree state of the student at a time, and the method further comprises:
obtaining a corresponding class-state score u_i according to the assigned score;

according to the class-state scores u_i, obtaining the understanding degree score U_k of the student's online class learning at each stage:

U_k = (1/N) Σ_{i=1}^{N} u_i

wherein N represents the number of moments and k represents the stage.
5. The method of claim 1, further comprising dividing each net lesson into different phases according to the time of student questions.
6. The method of claim 1, wherein after S5, the method further comprises:
and uploading the learning state evaluation result to a server, feeding back the obtained learning state evaluation result to a corresponding student terminal according to student information, and collecting and feeding back the net class learning states of all students to a corresponding teaching teacher terminal.
7. A student online class learning state evaluation system based on face recognition, comprising:
the information acquisition module is used for acquiring face images of students, question answering conditions of the students and student information;
the student answer evaluation module is used for obtaining the student's answer result by comparing the student's answers with the reference answers;
the understanding-degree recognition module is used for standardizing the collected face images into consistent picture information, and inputting that picture information into the trained micro-expression recognition convolutional neural network model to obtain the student's online-class understanding-degree state;
the concentration recognition module is used for performing face recognition on the acquired face image, extracting the face region and its facial features to obtain the student's face size and eye-opening height, the face size comprising the length and width of the face; and for obtaining the student's concentration by comparing the ratio of the face length to the face width against the aspect ratio of a preset standard face, and by comparing the student's eye-opening height against the eye-opening height of the preset standard face;
the evaluation result module is used for taking the answer question result, the online class understanding degree state and the concentration degree of the students as the evaluation result of the online class learning state of the students;
the method for constructing the trained micro-expression recognition convolutional neural network model in the understanding-degree recognition module comprises the following steps:
searching a micro-expression database for face micro-expression pictures matching the characteristics of the pleased, understanding and confused states respectively, and, after compression, stretching and sharpening, processing the pictures corresponding to each understanding-degree state into picture information of uniform size and format as training data, wherein the students' online-class understanding-degree states are divided into three levels: pleased, understanding and confused; the facial features corresponding to pleased include open eyes, a face directly facing the screen and raised mouth corners; the facial features corresponding to understanding include a face directly facing the screen and stretched eyebrows; and the facial features corresponding to confused include tightly knit eyebrows, slightly squinted eyes and downturned mouth corners;
determining the structure of the micro-expression recognition convolutional neural network model, the structure comprising an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a feature layer, a fully connected layer, a classification layer and an output layer;
training the micro-expression recognition convolutional neural network model by adopting training data according to preset model parameters to obtain a trained micro-expression recognition convolutional neural network model;
obtaining the student's concentration according to the comparison between the ratio of the student's face length to face width and the aspect ratio of the preset standard face, and the comparison between the student's eye-opening height and the eye-opening height of the preset standard face, comprises:
judging whether the student's face directly faces the screen at moment i by comparing the ratio of the face length to the face width with the aspect ratio of the preset standard face; if the student does not face the screen, the student is judged inattentive; if the student faces the screen, the next judgment is made, the screen-facing judgment formula being as follows:
wherein L_i and W_i are the length and width of the student's face at moment i, and L_s and W_s are the length and width of the student's standard face;
judging the student's degree of eye opening by comparing the student's eye-opening height with the eye-opening height of the preset standard face, to obtain the student's concentration at moment i, the judgment formula being as follows:
wherein H_i is the student's eye-opening height at moment i, H_s is the student's standard eye-opening height, L_i is the length of the student's face at moment i, and L_s is the student's standard face length; if the value exceeds the standard, the student is attentive at moment i, and if it falls below the standard, the student is inattentive at moment i;
and continuously monitoring the student's state over a preset time period according to the concentration at each moment i; if the state remains inattentive throughout the period, the student is judged to be inattentive.
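The concentration logic of claim 7 can be sketched as below. Since the screen-facing and eye-opening judgment formulas themselves are not reproduced in the text, the tolerance `TOL` and eye-opening factor `ALPHA` are illustrative assumptions:

```python
# TOL and ALPHA are assumed thresholds; the patent's actual formulas are not
# reproduced in the text.
TOL, ALPHA = 0.15, 0.8

def facing_screen(L_i, W_i, L_s, W_s):
    """Take the face to be directly facing the screen when the observed aspect
    ratio L_i/W_i stays within a tolerance of the standard ratio L_s/W_s."""
    return abs(L_i / W_i - L_s / W_s) <= TOL * (L_s / W_s)

def attentive(L_i, W_i, H_i, L_s, W_s, H_s):
    """Not facing the screen -> inattentive; otherwise compare the eye-opening
    height, scaled by the apparent face size L_i/L_s, against the standard."""
    if not facing_screen(L_i, W_i, L_s, W_s):
        return False
    return H_i >= ALPHA * H_s * (L_i / L_s)

def sustained_inattention(samples, window):
    """Judge the student inattentive only if every per-moment sample in the
    preset window is inattentive (False)."""
    return len(samples) >= window and not any(samples[-window:])
```

For example, a face whose aspect ratio matches the standard and whose eyes are fully open is judged attentive, while a face turned far enough to distort the aspect ratio is judged inattentive regardless of eye opening.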
8. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when executed, implements the method according to any one of claims 1 to 6.
CN202010043578.0A 2020-01-15 2020-01-15 Face recognition-based student online class learning state evaluation method and system Active CN111242049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010043578.0A CN111242049B (en) 2020-01-15 2020-01-15 Face recognition-based student online class learning state evaluation method and system

Publications (2)

Publication Number Publication Date
CN111242049A CN111242049A (en) 2020-06-05
CN111242049B true CN111242049B (en) 2023-08-04

Family

ID=70865670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010043578.0A Active CN111242049B (en) 2020-01-15 2020-01-15 Face recognition-based student online class learning state evaluation method and system

Country Status (1)

Country Link
CN (1) CN111242049B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797324A (en) * 2020-08-07 2020-10-20 广州驰兴通用技术研究有限公司 Distance education method and system for intelligent education
WO2022052084A1 (en) * 2020-09-14 2022-03-17 Huawei Technologies Co., Ltd. Methods, systems, and media for context-aware estimation of student attention in online learning
CN112215973A (en) * 2020-09-21 2021-01-12 彭程 Data display method, multimedia platform and electronic equipment
CN112735213A (en) * 2020-12-31 2021-04-30 奇点六艺教育科技股份有限公司 Intelligent teaching method, system, terminal and storage medium
CN112818754A (en) * 2021-01-11 2021-05-18 广州番禺职业技术学院 Learning concentration degree judgment method and device based on micro-expressions
CN112907408A (en) * 2021-03-01 2021-06-04 北京安博创赢教育科技有限责任公司 Method, device, medium and electronic equipment for evaluating learning effect of students
CN113239841B (en) * 2021-05-24 2023-03-24 桂林理工大学博文管理学院 Classroom concentration state detection method based on face recognition and related instrument
CN113657146B (en) * 2021-06-30 2024-02-06 北京惠朗时代科技有限公司 Student non-concentration learning low-consumption recognition method and device based on single image
CN114493952A (en) * 2022-04-18 2022-05-13 北京梦蓝杉科技有限公司 Education software data processing system and method based on big data
CN115631074B (en) * 2022-12-06 2023-06-09 南京熊大巨幕智能科技有限公司 Informationized network science and education method, system and equipment
CN116996722B (en) * 2023-06-29 2024-06-04 广州慧思软件科技有限公司 Virtual synchronous classroom teaching system in 5G network environment and working method thereof
CN117909587A (en) * 2024-01-19 2024-04-19 广州铭德教育投资有限公司 Method and system for individually recommending post-class exercises of students based on AI

Citations (6)

Publication number Priority date Publication date Assignee Title
CN107292271A (en) * 2017-06-23 2017-10-24 北京易真学思教育科技有限公司 Learning-memory behavior method, device and electronic equipment
KR101960815B1 (en) * 2017-11-28 2019-03-21 유엔젤주식회사 Learning Support System And Method Using Augmented Reality And Virtual reality
KR20190043513A (en) * 2019-04-18 2019-04-26 주식회사 아이티스테이션 System For Estimating Lecture Attention Level, Checking Course Attendance, Lecture Evaluation And Lecture Feedback
CN109815795A (en) * 2018-12-14 2019-05-28 深圳壹账通智能科技有限公司 Classroom student's state analysis method and device based on face monitoring
CN110334600A (en) * 2019-06-03 2019-10-15 武汉工程大学 A kind of multiple features fusion driver exception expression recognition method
CN110674701A (en) * 2019-09-02 2020-01-10 东南大学 Driver fatigue state rapid detection method based on deep learning

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN106878677B (en) * 2017-01-23 2020-01-07 西安电子科技大学 Student classroom mastery degree evaluation system and method based on multiple sensors
CN108021893A (en) * 2017-12-07 2018-05-11 浙江工商大学 It is a kind of to be used to judging that student to attend class the algorithm of focus
CN108710829A (en) * 2018-04-19 2018-10-26 北京红云智胜科技有限公司 A method of the expression classification based on deep learning and the detection of micro- expression
US20190362138A1 (en) * 2018-05-24 2019-11-28 Gary Shkedy System for Adaptive Teaching Using Biometrics
CN108875606A (en) * 2018-06-01 2018-11-23 重庆大学 A kind of classroom teaching appraisal method and system based on Expression Recognition
CN109657529A (en) * 2018-07-26 2019-04-19 台州学院 Classroom teaching effect evaluation system based on human facial expression recognition
CN110334626B (en) * 2019-06-26 2022-03-04 北京科技大学 Online learning system based on emotional state

Non-Patent Citations (3)

Title
Two-level attention with two-stage multi-task learning for facial emotion recognition; Wang Xiaohua et al.; Elsevier; 217-225 *
Driver fatigue state recognition based on facial expression features; Ma Tianyi, Cheng Bo; Journal of Automotive Safety and Energy (03); 38-42 *
Emotion recognition of learning images in a smart learning environment and its application; Xu Zhenguo; China Doctoral Dissertations Full-text Database, Social Sciences II; H127-21 *

Similar Documents

Publication Publication Date Title
CN111242049B (en) Face recognition-based student online class learning state evaluation method and system
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
CN110334626B (en) Online learning system based on emotional state
CN110889672B (en) Student card punching and class taking state detection system based on deep learning
Littlewort et al. Automated measurement of children's facial expressions during problem solving tasks
CN112183238B (en) Remote education attention detection method and system
CN113657168B (en) Student learning emotion recognition method based on convolutional neural network
CN111275345B (en) Classroom informatization evaluation and management system and method based on deep learning
CN112883867A (en) Student online learning evaluation method and system based on image emotion analysis
CN111523445B (en) Examination behavior detection method based on improved Openpost model and facial micro-expression
Butko et al. Automated facial affect analysis for one-on-one tutoring applications
CN116050892A (en) Intelligent education evaluation supervision method based on artificial intelligence
CN113076885B (en) Concentration degree grading method and system based on human eye action characteristics
CN110728604B (en) Analysis method and device
CN113989608A (en) Student experiment classroom behavior identification method based on top vision
CN111199378B (en) Student management method, device, electronic equipment and storage medium
CN116825288A (en) Autism rehabilitation course recording method and device, electronic equipment and storage medium
Sarrafzadeh et al. See me, teach me: Facial expression and gesture recognition for intelligent tutoring systems
Saurav et al. AI Based Proctoring
CN114638988A (en) Teaching video automatic classification method and system based on different presentation modes
CN114463810A (en) Training method and device for face recognition model
CN111950472A (en) Teacher grinding evaluation method and system
Takahashi et al. Improvement of detection for warning students in e-learning using web cameras
CN117496580B (en) Facial expression intelligent recognition robot terminal based on multi-person synchronous interaction
CN116227968A (en) Network education effect inspection system based on real-time monitoring information feedback analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant