CN110879966A - Student class attendance comprehension degree evaluation method based on face recognition and image processing - Google Patents


Info

Publication number
CN110879966A
Authority
CN
China
Prior art keywords
face
target person
students
emotion
characteristic
Prior art date
Legal status
Pending
Application number
CN201910978381.3A
Other languages
Chinese (zh)
Inventor
姜周曙
董勇
葛照楠
侯飞虎
李鑫
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN201910978381.3A
Publication of CN110879966A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393 - Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education

Abstract

The invention discloses a student class attendance comprehension degree evaluation method based on face recognition and image processing. The method comprises six main steps: face acquisition and detection; face feature extraction and matching; face tracking; SVM training and classification; detection of changes in the students' facial expressions during teaching; and output of the students' class attendance comprehension degree. Using video-stream data acquired by a classroom camera, the method performs face recognition and feature extraction on the students and obtains the changes in their facial expressions during teaching through image processing. Emotional changes are then inferred from the mapping between psychology and physical expression, the students' class attendance comprehension degree is evaluated from these psychological and emotional changes, and a corresponding evaluation accuracy chart is generated. The method offers good generalization, good robustness, and good real-time classification performance.

Description

Student class attendance comprehension degree evaluation method based on face recognition and image processing
Technical Field
The invention belongs to the field of online evaluation of modern teaching quality, relates in particular to face recognition technology, image processing technology and emotional psychology, and discloses a student class attendance comprehension degree evaluation method based on face recognition and image processing technology.
Background
Face recognition technology extracts and compares features of input face images or video streams on the basis of human facial features, so as to identify each face. Face recognition based on static images is, however, severely limited and can hardly meet the recognition requirements of changing facial expressions.
Image processing technology uses a computer to analyze an image in order to achieve a desired result. It typically comprises three parts: image compression; enhancement and restoration; and matching, description and recognition. The technology is by now mature, but shortcomings such as poor robustness and poor generalization remain.
At present, traditional teaching quality assessment relies mainly on questionnaires and subject tests, which suffer from subjectivity, offline operation and other drawbacks.
Disclosure of Invention
To address these defects, the invention provides a student class attendance comprehension degree evaluation method based on face recognition and image processing.
The method comprises the following specific steps:
Step (1), mark a plurality of feature positions on the eyes, nose and mouth of the expressionless faces and of the faces under the various emotions in a face database, and obtain the Euclidean distance set of all target persons according to formula (1): M = {M_1, M_2, …, M_A, …}, where M_A = {ρ_{A,1}, ρ_{A,2}, …, ρ_{A,B}, …, ρ_{A,l}} and l denotes the number of basic emotions, namely fear, sadness, disgust, surprise, anger and joy (l = 6).
The Euclidean distance ρ_{A,B} of target person A is obtained from the feature positions of target person A's expressionless face and of a certain emotion B:
ρ_{A,B} = √((X_{A,B} − X_{A,1})² + (Y_{A,B} − Y_{A,1})²)    (1)
where ρ_{A,B} denotes the Euclidean distance between target person A's expressionless face and emotion B, (X_{A,1}, Y_{A,1}) denotes a feature position on target person A's expressionless face, and (X_{A,B}, Y_{A,B}) denotes the corresponding feature position under emotion B.
Step (2), construct a support vector machine classifier.
The discriminant function of the SVM classifier is f(x), and the training sample set is S = {S_1, S_2, …, S_A, …};
S_A = ((x_1, y_1), …, (x_l, y_l)), y_i ∈ {−1, 1}    (2)
where x_i = ρ_{A,i}, i = 1, …, l, and y_i is the category label of target person A.
f(x) = sgn(∑_{i=1}^{l} a_i y_i K(x_i, x) + b)    (3)
where a_i is a Lagrange multiplier coefficient, x is the input data, b is the offset value, K(x_i, x) is the kernel function, and x_i belongs to the sample set S;
K(x_i, x) = (φ(x_i) · φ(x))    (4)
where φ(x_i) and φ(x) are kernel vectors.
Step (3), capture face video streams of the students in the classroom through camera equipment installed in the classroom, and acquire a key frame image every n frames.
Step (4), extract features from the key frame image:
4.1, obtain the target face picture using the prior art;
4.2, mark the feature positions of the eyes, nose and mouth of the target face according to the feature position marking points of step (1).
Step (5), with reference to formula (1), obtain the Euclidean distances between the feature positions obtained in step (4.2) and the corresponding feature positions of the same target person's expressionless face in the face database, i.e. the feature displacement vector.
Step (6), feed the feature displacement vector obtained in step (5) into the support vector machine classifier trained in step (2); the output is the expression category, i.e. the judged emotion.
Step (7), combine the emotion obtained in step (6) with existing emotion-mapping psychology to obtain the psychological state information of the target person, and thereby judge the target person's class attendance comprehension degree.
The beneficial effects of the invention are:
1. Good generalization: even when new samples are added, expression classification still performs well.
2. Good robustness: a good facial expression classification effect is achieved even at low frame rates or under poor illumination.
3. Good real-time classification: the support vector machine only needs to determine on which side of the decision hyperplane an unknown data point lies.
Drawings
FIG. 1 is a block diagram of the method of the invention;
FIG. 2 is a schematic diagram of automatic facial expression recognition based on the support vector machine algorithm;
FIG. 3 is a schematic diagram of the positioning and tracking of facial feature points in a video stream;
FIG. 4 is a diagram of the relationship between the number of support vectors of the decision surface and the training set;
FIG. 5 is a schematic diagram of the movement displacements of the 22 facial expression feature points for each basic emotion, where (1) is anger, (2) disgust, (3) fear, (4) joy, (5) sadness and (6) surprise.
Detailed Description
The invention is further described below with reference to specific embodiments.
As shown in FIG. 1, the method of the invention comprises the following steps:
Step (1), mark a plurality of feature positions on the eyes, nose and mouth of the expressionless faces and of the faces under the various emotions in a face database (see FIG. 3), and obtain the Euclidean distance set of all target persons according to formula (1): M = {M_1, M_2, …, M_A, …}, where M_A = {ρ_{A,1}, ρ_{A,2}, …, ρ_{A,B}, …, ρ_{A,l}} and l denotes the number of basic emotions, namely fear, sadness, disgust, surprise, anger and joy (l = 6).
The Euclidean distance ρ_{A,B} of target person A is obtained from the feature positions of target person A's expressionless face and of a certain emotion B:
ρ_{A,B} = √((X_{A,B} − X_{A,1})² + (Y_{A,B} − Y_{A,1})²)    (1)
where ρ_{A,B} denotes the Euclidean distance between target person A's expressionless face and emotion B, (X_{A,1}, Y_{A,1}) denotes a feature position on target person A's expressionless face, and (X_{A,B}, Y_{A,B}) denotes the corresponding feature position under emotion B.
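By way of illustration only, the displacement features of step (1) can be computed directly from the marked coordinates. The following Python sketch is a minimal, hypothetical rendering of formula (1); the function name, the landmark count and the coordinate values are our assumptions, not part of the patent.

```python
import numpy as np

def displacement_features(neutral_pts, emotion_pts):
    """Per-landmark Euclidean distances (formula (1)) between the
    expressionless face and an emotional face of the same person.

    Both arguments are (k, 2) arrays of (X, Y) feature positions for
    the k marked eye/nose/mouth points.
    """
    neutral = np.asarray(neutral_pts, dtype=float)
    emotion = np.asarray(emotion_pts, dtype=float)
    # One rho value per landmark; together these form the feature
    # displacement vector used in step (5).
    return np.linalg.norm(emotion - neutral, axis=1)

# Toy example with 3 landmarks; FIG. 5 suggests 22 points in practice.
neutral = [(100, 120), (140, 120), (120, 170)]
joyful = [(101, 118), (139, 117), (120, 176)]
print(displacement_features(neutral, joyful))
```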
Step (2), construct a support vector machine classifier, as shown in FIG. 2.
The discriminant function of the SVM classifier is f(x), and the training sample set is S = {S_1, S_2, …, S_A, …};
S_A = ((x_1, y_1), …, (x_l, y_l)), y_i ∈ {−1, 1}    (2)
where x_i = ρ_{A,i}, i = 1, …, l, and y_i is the category label of target person A.
f(x) = sgn(∑_{i=1}^{l} a_i y_i K(x_i, x) + b)    (3)
where a_i is a Lagrange multiplier coefficient, x is the input data, b is the offset value, K(x_i, x) is the kernel function, and x_i belongs to the sample set S;
K(x_i, x) = (φ(x_i) · φ(x))    (4)
where φ(x_i) and φ(x) are kernel vectors.
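As an illustrative sketch of step (2), scikit-learn's SVC (a libsvm wrapper that evaluates the discriminant of formula (3) and handles the six emotion classes with cascaded one-vs-one binary classifiers) could be trained on the displacement vectors. The synthetic data below is a placeholder for the real vectors of step (1).

```python
import numpy as np
from sklearn import svm

# Placeholder training set: 10 displacement vectors (22 landmarks each)
# for each of the 6 basic emotions. Real data comes from step (1).
rng = np.random.default_rng(0)
X = rng.random((60, 22))
y = np.repeat(np.arange(6), 10)  # 0..5 = anger..surprise

# SVC evaluates f(x) = sgn(sum_i a_i y_i K(x_i, x) + b) per class pair.
clf = svm.SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)
print(clf.predict(X[:3]))
```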
Step (3), capture face video streams of the students in the classroom through camera equipment installed in the classroom, and acquire a key frame image every n frames.
Step (4), extract features from the key frame image:
4.1, obtain the target face picture using the prior art;
4.2, mark the feature positions of the eyes, nose and mouth of the target face according to the feature position marking points of step (1).
Step (5), with reference to formula (1), obtain the Euclidean distances between the feature positions obtained in step (4.2) and the corresponding feature positions of the same target person's expressionless face in the face database, i.e. the feature displacement vector.
Step (6), feed the feature displacement vector obtained in step (5) into the support vector machine classifier trained in step (2); the output is the expression category, i.e. the judged emotion.
Step (7), combine the emotion obtained in step (6) with existing emotion-mapping psychology to obtain the psychological state information of the target person, and thereby judge the target person's class attendance comprehension degree. A simplified end-to-end sketch of steps (3) to (7) is given below.
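This sketch strings steps (3) to (7) together under stated assumptions: detect_landmarks (returning one (person, points) pair per tracked face) and the neutral-face database neutral_db are assumed to be supplied by the prior-art face detection of step (4.1), and the final score simply counts the proportion of joyful observations rather than applying the AHP weighting described later.

```python
import cv2
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def evaluate_comprehension(video_path, detect_landmarks, neutral_db, clf, n=30):
    """Sample a key frame every n frames (step (3)), mark landmarks
    (step (4)), form displacement vectors against the stored neutral
    face (step (5)), classify the emotion (step (6)) and aggregate a
    simplified comprehension score (step (7))."""
    cap = cv2.VideoCapture(video_path)
    joyful, total, idx = 0, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % n == 0:
            for person, pts in detect_landmarks(frame):
                disp = np.linalg.norm(pts - neutral_db[person], axis=1)
                emotion = EMOTIONS[int(clf.predict(disp.reshape(1, -1))[0])]
                joyful += (emotion == "joy")
                total += 1
        idx += 1
    cap.release()
    return joyful / total if total else None
```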
In order to establish a high-precision feature displacement method for recognizing student facial expressions, static images from a database of student expressions during teaching were evaluated: the features of each image were marked manually, and displacements were then extracted from each group of images, each group consisting of a natural-expression frame and a frame expressing an emotion. Each basic emotion was trained with 10 examples, and each emotion was classified with 15 unknown samples. The invention uses a standard support vector machine classification algorithm with a linear kernel. Table 1 gives the percentage of correctly classified examples and the overall recognition accuracy for each basic emotion, and FIG. 5 shows the facial expression feature point displacements of each basic emotion extracted from the training data. These results show that each emotion, expressed as characteristic movement, varies greatly between subjects.
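The 10-training / 15-test protocol could be reproduced along the following lines. The displacement vectors here are synthetic (the patent's expression database is not published), so the printed per-emotion accuracies will not match Table 1.

```python
import numpy as np
from sklearn import svm

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
rng = np.random.default_rng(1)
centers = rng.normal(size=(6, 22))  # one synthetic cluster per emotion

def sample(k):
    """k displacement vectors per emotion, with class labels."""
    X = np.vstack([c + 0.3 * rng.normal(size=(k, 22)) for c in centers])
    return X, np.repeat(np.arange(6), k)

X_tr, y_tr = sample(10)   # 10 training examples per basic emotion
X_te, y_te = sample(15)   # 15 unknown samples per basic emotion

clf = svm.SVC(kernel="linear").fit(X_tr, y_tr)
pred = clf.predict(X_te)
for k, name in enumerate(EMOTIONS):
    mask = y_te == k
    print(f"{name:8s} {(pred[mask] == k).mean():.1%}")
print(f"{'average':8s} {(pred == y_te).mean():.1%}")
```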
The choice of kernel function is a key factor in classification accuracy. Experiments with a series of polynomial, Gaussian radial basis function and sigmoid kernels showed that the recognition accuracy of the Gaussian radial basis function is clearly superior to that of the other kernels, which greatly improves the overall recognition accuracy on static images.
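Such a kernel comparison can be run as a cross-validation sweep; this is a sketch on synthetic stand-in data, not the experiment itself.

```python
import numpy as np
from sklearn import svm
from sklearn.model_selection import cross_val_score

# Synthetic displacement vectors, as in the previous sketch.
rng = np.random.default_rng(1)
centers = rng.normal(size=(6, 22))
X = np.vstack([c + 0.3 * rng.normal(size=(10, 22)) for c in centers])
y = np.repeat(np.arange(6), 10)

for kernel in ("poly", "rbf", "sigmoid"):
    score = cross_val_score(svm.SVC(kernel=kernel, gamma="scale"),
                            X, y, cv=5).mean()
    print(f"{kernel:8s} mean CV accuracy: {score:.1%}")
```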
The support vector machine approach is highly modular, allowing specific kernel functions to be plugged in. Unlike earlier "black box" learning methods, it also permits a degree of intuitive interpretation. Classification is realized by cascaded binary classifiers; classification accuracy is high on small training sets, and generalization is good on hard-to-separate data with many variables.
The invention uses libsvm as the underlying support vector machine classifier. Its stateless functionality is encapsulated in an object-oriented manner so that it can operate in an incrementally trained, interactive environment: a complete set of training examples need not be supplied before classification can begin, the user may augment the training data dynamically, and the entire state of a trained classifier can be exported for later use. A sketch of such an encapsulation follows.
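One way to realize the encapsulation described above is to accumulate examples and refit the libsvm-backed classifier whenever data is added. This refit-on-append strategy is our assumption (libsvm has no true online mode), as is the pickle-based state export.

```python
import pickle
import numpy as np
from sklearn import svm  # sklearn.svm.SVC wraps libsvm

class IncrementalSVM:
    """Accumulates training examples and refits on each addition, so
    classification can start before the full training set exists."""

    def __init__(self, **svc_kwargs):
        self._X, self._y = [], []
        self._kwargs = svc_kwargs
        self._clf = None

    def add_examples(self, X, y):
        self._X.extend(np.asarray(X, dtype=float))
        self._y.extend(y)
        self._clf = svm.SVC(**self._kwargs).fit(np.asarray(self._X), self._y)

    def predict(self, X):
        if self._clf is None:
            raise RuntimeError("no training data supplied yet")
        return self._clf.predict(np.asarray(X, dtype=float))

    def save_state(self, path):
        # Export the accumulated data and fitted model for later use.
        with open(path, "wb") as f:
            pickle.dump((self._X, self._y, self._clf), f)
```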
The invention defines the basic emotions of students during teaching as anger, disgust, fear, joy, sadness and surprise. Facial expressions of fear, sadness, disgust, surprise or anger are taken to indicate poor teaching quality; a joyful facial expression indicates good teaching quality. To ensure classification accuracy, the invention weights the six basic emotions using the analytic hierarchy process, taking one 45-minute lesson as the statistical unit. One teacher and 40 students were then invited to express emotions naturally under unconstrained conditions (lighting, distance from the camera, posture and so on). Two subjects were asked to provide a training example for each basic emotion and then some unknown examples for classification. The classification results were compared against the student score database of the educational administration system; the accuracy is shown in Table 2.
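A sketch of the analytic-hierarchy-process weighting of the six basic emotions. The pairwise comparison matrix below is an invented illustration (the patent does not publish its judgement matrix); AHP weights are the normalised principal eigenvector of this matrix.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

# Hypothetical reciprocal judgement matrix: A[i, j] states how much more
# emotion i matters than emotion j when judging teaching quality.
A = np.array([
    [1,   2,   2,   1/3, 1,   3  ],
    [1/2, 1,   1,   1/4, 1/2, 2  ],
    [1/2, 1,   1,   1/4, 1/2, 2  ],
    [3,   4,   4,   1,   3,   5  ],
    [1,   2,   2,   1/3, 1,   3  ],
    [1/3, 1/2, 1/2, 1/5, 1/3, 1  ],
])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w /= w.sum()  # normalised principal eigenvector = AHP weights
for name, weight in zip(EMOTIONS, w):
    print(f"{name:8s} {weight:.3f}")
```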
TABLE 1 Accuracy of support vector machine classification of static image displacements

Emotion     Accuracy
Anger       82.2%
Disgust     84.6%
Fear        71.7%
Joy         93.0%
Sadness     85.4%
Surprise    99.3%
Average     86.0%
TABLE 2 Accuracy of student class attendance comprehension evaluation based on student emotional changes during teaching

Method                                   Accuracy
Face recognition and image processing    91.8%
The foregoing is a further description of the invention with reference to specific embodiments; the implementation of the invention is not to be considered limited to these descriptions. Those skilled in the art will readily appreciate that a number of modifications and substitutions can be made without departing from the spirit and scope of the invention.

Claims (1)

1. A student class attendance comprehension degree evaluation method based on face recognition and image processing, comprising the following steps:
step (1), marking a plurality of feature positions on the eyes, nose and mouth of the expressionless faces and of the faces under the various emotions in a face database, and obtaining the Euclidean distance set of all target persons according to formula (1): M = {M_1, M_2, …, M_A, …}, where M_A = {ρ_{A,1}, ρ_{A,2}, …, ρ_{A,B}, …, ρ_{A,l}} and l denotes the number of basic emotions, the basic emotions comprising fear, sadness, disgust, surprise, anger and joy;
the Euclidean distance ρ_{A,B} of target person A being obtained from the feature positions of target person A's expressionless face and of a certain emotion B:
ρ_{A,B} = √((X_{A,B} − X_{A,1})² + (Y_{A,B} − Y_{A,1})²)    (1)
where ρ_{A,B} denotes the Euclidean distance between target person A's expressionless face and emotion B, (X_{A,1}, Y_{A,1}) denotes a feature position on target person A's expressionless face, and (X_{A,B}, Y_{A,B}) denotes the corresponding feature position under emotion B;
step (2), constructing a support vector machine classifier, the discriminant function of the SVM classifier being f(x) and the training sample set being S = {S_1, S_2, …, S_A, …};
S_A = ((x_1, y_1), …, (x_l, y_l)), y_i ∈ {−1, 1}    (2)
where x_i = ρ_{A,i}, i = 1, …, l, and y_i is the category label of target person A;
f(x) = sgn(∑_{i=1}^{l} a_i y_i K(x_i, x) + b)    (3)
where a_i is a Lagrange multiplier coefficient, x is the input data, b is the offset value, K(x_i, x) is the kernel function, and x_i belongs to the sample set S;
K(x_i, x) = (φ(x_i) · φ(x))    (4)
where φ(x_i) and φ(x) are kernel vectors;
step (3), capturing face video streams of the students in a classroom through camera equipment installed in the classroom, and acquiring a key frame image every n frames;
step (4), extracting features from the key frame image:
4.1, obtaining a target face picture using the prior art;
4.2, marking the feature positions of the eyes, nose and mouth of the target face according to the feature position marking points of step (1);
step (5), with reference to formula (1), obtaining the Euclidean distances between the feature positions obtained in step (4.2) and the corresponding feature positions of the same target person's expressionless face in the face database, i.e. the feature displacement vector;
step (6), feeding the feature displacement vector obtained in step (5) into the support vector machine classifier trained in step (2), the output being the expression category, i.e. the judged emotion;
and step (7), combining the emotion obtained in step (6) with existing emotion-mapping psychology to obtain the psychological state information of the target person, and thereby judging the target person's class attendance comprehension degree.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910978381.3A CN110879966A (en) 2019-10-15 2019-10-15 Student class attendance comprehension degree evaluation method based on face recognition and image processing


Publications (1)

Publication Number Publication Date
CN110879966A (en) 2020-03-13

Family

ID=69728457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910978381.3A Pending CN110879966A (en) 2019-10-15 2019-10-15 Student class attendance comprehension degree evaluation method based on face recognition and image processing

Country Status (1)

Country Link
CN (1) CN110879966A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008156184A1 (en) * 2007-06-18 2008-12-24 Canon Kabushiki Kaisha Facial expression recognition apparatus and method, and image capturing apparatus
CN104036255A (en) * 2014-06-21 2014-09-10 电子科技大学 Facial expression recognition method
CN106127139A (en) * 2016-06-21 2016-11-16 东北大学 A kind of dynamic identifying method of MOOC course middle school student's facial expression
CN107292289A (en) * 2017-07-17 2017-10-24 东北大学 Facial expression recognizing method based on video time sequence
CN110135380A (en) * 2019-05-22 2019-08-16 东北大学 A kind of classroom focus knowledge method for distinguishing based on Face datection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张磊: "Research on Expression Recognition Based on Support Vector Machines", Journal of Liaoning Institute of Science and Technology *
李燕 et al.: "Facial Expression Analysis Based on SVM", Microprocessors *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897647A (en) * 2022-04-27 2022-08-12 合创智能家具(广东)有限公司 Teaching auxiliary system

Similar Documents

Publication Publication Date Title
Eidinger et al. Age and gender estimation of unfiltered faces
Lucey et al. Automatically detecting pain in video through facial action units
Bishay et al. Schinet: Automatic estimation of symptoms of schizophrenia from facial behaviour analysis
CN110889672B (en) Student card punching and class taking state detection system based on deep learning
Murtaza et al. Analysis of face recognition under varying facial expression: a survey.
Ouyang et al. Accurate and robust facial expressions recognition by fusing multiple sparse representation based classifiers
CN112287891B (en) Method for evaluating learning concentration through video based on expression behavior feature extraction
KR20100032699A (en) The system controled a action of the display device, based a gesture information recognition of the user
Khatri et al. Facial expression recognition: A survey
Khan et al. Facial expression recognition on real world face images using intelligent techniques: A survey
CN107578015B (en) First impression recognition and feedback system and method based on deep learning
Zhang et al. Evaluation of texture and geometry for dimensional facial expression recognition
Bekhouche Facial soft biometrics: extracting demographic traits
Lek et al. Academic Emotion Classification Using FER: A Systematic Review
CN110879966A (en) Student class attendance comprehension degree evaluation method based on face recognition and image processing
Acevedo et al. Facial expression recognition based on static and dynamic approaches
Verma et al. A comprehensive survey on human facial expression detection
Islam et al. Facial region segmentation based emotion recognition using extreme learning machine
Campomanes-Álvarez et al. Automatic facial expression recognition for the interaction of individuals with multiple disabilities
Yang Facial expression recognition and expression intensity estimation
Singh et al. Facial Expression Recognition
Göngör et al. An emotion analysis algorithm and implementation to NAO humanoid robot
Hisham et al. ESMAANI: A Static and Dynamic Arabic Sign Language Recognition System Based on Machine and Deep Learning Models
Prasad et al. Fuzzy classifier for continuous sign language recognition from tracking and shape features
Kamalakumari et al. Image Sequences Based Facial Expression Recognition Using Support Vector Machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200313