CN115966003A - System for evaluating online learning efficiency of learner based on emotion recognition

System for evaluating online learning efficiency of learner based on emotion recognition

Info

Publication number
CN115966003A
CN115966003A (Application CN202211456127.5A)
Authority
CN
China
Prior art keywords
module
real
image
learner
time
Prior art date
Legal status
Pending
Application number
CN202211456127.5A
Other languages
Chinese (zh)
Inventor
刘慧
李创奇
时清玮
王宇航
李东辉
孙淑静
Current Assignee
Henan Normal University
Original Assignee
Henan Normal University
Priority date
Filing date
Publication date
Application filed by Henan Normal University filed Critical Henan Normal University
Priority to CN202211456127.5A priority Critical patent/CN115966003A/en
Publication of CN115966003A publication Critical patent/CN115966003A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an emotion-recognition-based system for evaluating learners' online learning efficiency, comprising a real-time camera monitoring module, an image collection module, an image processing module, and a statistical analysis module. The real-time camera monitoring module acquires the key data; the image processing module analyzes and processes the data; and the statistical analysis module performs statistical analysis on the constructed model data. Compared with the prior art, the invention has the following advantage: during online teaching, the learner's own high-definition camera captures facial expression features, head posture, and hand-skeleton behavior, and these three cues are combined through a scientifically weighted calculation to yield the learner's overall learning efficiency, which effectively reduces the error rate of emotion recognition from non-physiological signals alone and improves the fault tolerance.

Description

System for evaluating online learning efficiency of learner based on emotion recognition
Technical Field
The invention relates to the field of education, and in particular to a system for evaluating learners' online learning efficiency based on emotion recognition.
Background
Emotion recognition originally referred to an individual's recognition of other people's emotions; today it mostly means AI automatically distinguishing an individual's emotional state from acquired physiological or non-physiological signals, and it is an important component of affective computing. Emotion is a state that integrates human feelings, thoughts, and behaviors, and it plays an important role in interpersonal communication. Common emotion recognition methods fall into two main categories: recognition based on non-physiological signals and recognition based on physiological signals. Applications of emotion recognition are also increasingly widespread. In the medical field, emotion recognition can provide a basis for the diagnosis and treatment of mental illness; in user-experience work, capturing the user's emotional changes during product research helps uncover problems with the product; and in the transportation field, timely detection of the emotional state of workers whose tasks demand close attention is an effective means of avoiding accidents.
At present, emotion recognition techniques are also gradually being applied to education. Monitoring technology, the basis of non-physiological-signal recognition, is used to detect examination cheating and correct improper classroom behavior, and it has been combined at deeper levels; for example, the cooperation between Meezao (蜜枣网) and Microsoft Research Asia enables long-term, comprehensive analysis of learners and evaluation of teaching efficiency. However, this approach has many shortcomings: emotion recognition based on non-physiological signals has low reliability, and in the cooperation just mentioned the learner's learning efficiency is determined solely by judging facial expressions, so reliability is poor and accuracy is low. This scheme mainly proposes a non-physiological emotion recognition method: an evaluation system that judges learners' online learning efficiency through head posture detection, emotion recognition, and skeleton detection, greatly improving detection accuracy.
Disclosure of Invention
The invention aims to provide a system for evaluating learners' online learning efficiency based on emotion recognition.
To solve these technical problems, the technical scheme provided by the invention is as follows: a learner online learning efficiency evaluation system based on emotion recognition comprises, within its system architecture, a real-time camera monitoring module, an image collection module, an image processing module, and a statistical analysis module, wherein the real-time camera monitoring module acquires the key data; the image processing module analyzes and processes the data; and the statistical analysis module performs statistical analysis on the constructed model data;
The working steps of the learner online learning efficiency evaluation system based on emotion recognition are as follows:
1) Face detection based on video images. Given a video image, the person's head posture, face position, neck, arms, and flexion-extension postures must be located and detected, comprising three parts: image segmentation, detection-object locking, and feature extraction.
2) Analysis and judgment of facial expressions. Feature extraction is performed on the dimension-reduced image with a histogram of oriented gradients (HOG) over the texture features, improving the efficiency of model feature extraction and boosting performance at the input level; a weighting operation combined with linear interpolation fully expresses image details and overall features; and a CBAM module is added to the network (a sketch of the HOG step follows this list).
3) Feature definition. The learner's emotional state is divided into three categories, 'happy', 'neutral', and 'bored', according to the expression features, and these three states are judged by combining facial expression with the torsion angle and the head posture.
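The HOG step in item 2) can be sketched as follows. This is a minimal illustration assuming the scikit-image hog implementation and OpenCV preprocessing; the crop size and HOG parameters are conventional defaults, not values from the patent.

```python
# Hedged sketch of HOG feature extraction over a cropped face image.
# Parameter values are conventional defaults, not taken from the patent.
import cv2
from skimage.feature import hog

def extract_hog_features(face_bgr):
    """Return a 1-D HOG descriptor for one cropped face image."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64))       # dimensionality reduction
    return hog(
        gray,
        orientations=9,                     # gradient direction bins
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
        block_norm="L2-Hys",
    )
```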
Compared with the prior art, the invention has the advantages that: according to the scheme, the head characteristics of the learner are read in real time through the high-definition camera carried by the learner in the online teaching process, the facial expression characteristics, the head posture and the hand skeleton behaviors are organically combined, the learning efficiency of the learner is comprehensively obtained through scientific weight calculation by integrating three points, so that the error rate of emotion recognition only from non-physiological signals is effectively reduced, and the fault tolerance rate is improved.
Furthermore, the real-time camera monitoring module monitors the learning state of the student in front of the screen in real time by controlling the notebook or desktop computer's camera during the online course, and transmits the acquired video signal to the computer's master control and operation center in real time.
Furthermore, the real-time camera monitoring module comprises a face detection function, an expression recognition function, a head posture analysis function and a body behavior judgment function.
Furthermore, the image collection module automatically collects the required audio and video as data material by adjusting the device's reading frequency and the number of digitization passes within a given period, providing a data basis for the operation of subsequent modules.
Further, the image processing module performs face detection on the image target, extracts the facial feature region and divides it into 468 key points, tracks the learner's body behaviors such as the arms and five fingers in real time according to the model, and provides reference data for the next module.
Furthermore, the statistical analysis module, supported by the data provided by the previous module, feeds the inputs into the model for comparison to obtain the student's attention to and participation in the course content, and records the time and issues a real-time reminder when participation at a given moment is low.
Drawings
FIG. 1 is a schematic diagram of the architecture of the present invention.
Fig. 2 is a block diagram of face feature point extraction according to the present invention.
Fig. 3 (a) is a schematic close-up image of an individual student according to the present invention.
Fig. 3 (b) is a schematic diagram of facial feature point detection results on a partial side-view image according to the present invention.
Fig. 3 (c) is a schematic diagram of the detection of partial limb features according to the present invention.
FIG. 4 is a preliminary decision framework diagram based on expression according to the present invention.
Fig. 5 is a schematic diagram of the analysis of psychological and facial features commonly seen among students in the online teaching of the present invention.
Fig. 6 is a data diagram of expression feature recognition in the embodiment.
FIG. 7 (a) is a schematic diagram illustrating the determination of facial expressions according to an embodiment.
FIG. 7 (b) is a schematic diagram showing the determination of another facial expression in the embodiment.
FIG. 8 is a diagram illustrating real-time tracking of the expression features in the embodiment.
Fig. 9 (a) is a schematic diagram of the expression features the system observes on a frontal face under the head-pose rotation angle α in the embodiment.
Fig. 9 (b) is a schematic diagram of the expression features the system observes on a side face under the head-pose rotation angle α in the embodiment.
Fig. 9 (c) is a schematic diagram of the expression features the system observes on a frontal face under the head-pose elevation angle β in the embodiment.
Fig. 9 (d) is a schematic diagram of the expression features the system observes on an upward-looking face under the head-pose elevation angle β in the embodiment.
Fig. 10 is a functional schematic of the system of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
In a specific implementation, as in the embodiment shown in fig. 1, the present invention provides a student classroom participation assessment system based on expression recognition and posture behavior. Its internal blocks mainly include a video recording block, a data acquisition block, a head posture / expression / body behavior recognition block, and an experimental analysis and evaluation block. Specifically: 1) Camera monitoring block. Real-time monitoring of the student's learning state in front of the screen during the course is ensured by controlling the notebook or desktop camera throughout the lesson. The acquired video signal is transmitted to the computer's master control and operation center in real time, and a multidirectional posture processing method is initially adopted to obtain facial expression features, body posture, and five-finger features. During intelligent analysis and judgment, however, recognition is not carried out through facial expression features alone; it is combined with body posture (a combined sketch of blocks 1 and 3 follows this list).
2) Image collection block. The required audio and video are automatically collected as data material by adjusting the device's reading frequency and the number of digitization passes within a given period, providing a data basis for the operation of subsequent modules.
3) Image processing block. Face detection on the image target is realized: the facial feature region is extracted and divided into 468 key points, and the learner's body behaviors, such as the arms and five fingers, are tracked in real time according to the model, providing reference data for the next module (see the sketch after this list).
4) Statistical analysis block. With the data support provided by the previous module, the inputs are compared against the model to obtain the student's attention to and participation in the course content; when participation at a given moment is low, the time is recorded and a real-time reminder is issued.
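A minimal end-to-end sketch of blocks 1) and 3) follows. The use of OpenCV for capture and MediaPipe for landmarks is an assumption: the patent names neither library, although MediaPipe Face Mesh does produce exactly 468 facial landmarks, matching the figure above; the device index and sampling interval are likewise illustrative.

```python
# Hedged sketch of the camera monitoring block (capture) feeding the
# image processing block (468 face landmarks plus hand landmarks).
import cv2
import mediapipe as mp

# Persistent trackers: video mode reuses state between frames.
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                            max_num_faces=1)
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

def process_frame(frame_bgr):
    """Return (face_landmarks, hand_landmarks) for one BGR frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    face = face_mesh.process(rgb).multi_face_landmarks   # 468 points per face
    hand = hands.process(rgb).multi_hand_landmarks       # 21 points per hand
    return face, hand

def monitor(device_index=0, sample_every_n=5):
    """Capture webcam frames and analyze every n-th one."""
    cap = cv2.VideoCapture(device_index)
    count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        count += 1
        if count % sample_every_n == 0:
            face, hand = process_frame(frame)
            # ...hand the landmarks to the statistical analysis module...
    cap.release()
```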
In one embodiment of the present invention, as shown in fig. 2, fig. 3 (a), fig. 3 (b), fig. 3 (c), fig. 4 and fig. 5, the specific working steps of the system are as follows:
(I) Face detection method based on video images
In the current online teaching process, learners present different postures, expressions, and behaviors, and because each learner's teaching environment differs, the scenes in the monitored video images are complex and changeable. Precisely because teaching is online, however, the distance between the learner and the screen does not exceed a certain range, and the learner's own camera can shoot the target clearly and continuously, so the learner's facial information can be captured effectively.
Given a video image, the person's head posture, face position, neck, arms, and flexion-extension postures must be located and detected, comprising three parts: image segmentation, detection-object locking, and feature extraction. The variability of faces and the complexity of the environment make locating and tracking facial and body feature points very challenging. Model-based positioning is currently the mainstream method. XGBoost (eXtreme Gradient Boosting) is an algorithm and engineering implementation based on GBDT that, owing to its efficiency, flexibility, and portability, is widely applied in fields such as recommendation systems and data mining. On the basis of the XGBoost model, face feature extraction is realized in combination with a ResNet mechanism: building on the existing XGBoost method and combining existing models and data, multi-pose facial features are extracted during an online course. A hedged sketch of the XGBoost stage appears below.
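As a minimal sketch, assume the classifier consumes pre-extracted face descriptors (for example, pooled ResNet or HOG features); the feature dimensions, class labels, and hyperparameters here are illustrative, not specified by the patent.

```python
# Hedged sketch of the XGBoost stage: classifying pose categories from
# pre-extracted face descriptors. Labels and sizes are illustrative.
import numpy as np
from xgboost import XGBClassifier

def train_pose_classifier(features, labels):
    """features: (n_samples, n_dims) descriptors; labels: integer classes."""
    model = XGBClassifier(n_estimators=200,   # illustrative hyperparameters
                          max_depth=6,
                          learning_rate=0.1)
    model.fit(features, labels)
    return model

# Usage with dummy data standing in for real descriptors:
X = np.random.rand(200, 128)                  # e.g., pooled ResNet features
y = np.random.randint(0, 3, size=200)         # e.g., frontal/turned/occluded
clf = train_pose_classifier(X, y)
predictions = clf.predict(X[:5])
```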
As shown in fig. 3 (a), for a close-up image of an individual student, the method can effectively identify the head rotation angle and the facial feature region; occlusion by glasses, hands, and even masks is handled by an effective noise-reduction algorithm, reflecting the method's stability and robustness. Fig. 3 (b) shows the facial feature point detection results for a partial side-view image; feature points cannot be accurately identified only when the face is deflected too far or the occlusion is too large. Fig. 3 (c) shows the detection of partial limb features, with hand features locked and their detailed behavior observed.
(II) Analysis and judgment method for facial expressions
Expression is the external embodiment and expression of emotion and an objective index for studying human beings; the six emotions of pleasure, surprise, disgust, anger, fear, and sadness can be measured with the facial emotion recording technique of the famous psychologist Paul Ekman (Showa, 1987). Admittedly, these six expressions are not directly suited to education, but their features are extracted here to adapt them better to the educational field. In an ordinary class, students' expressions convey a great deal of emotional information to the teacher, reflecting their degree of understanding of the lesson and expressing emotional characteristics such as comprehension and confusion. A student who understands shows a relaxed and pleasant facial expression, gazes at the teacher, and follows the teacher's rhythm, indicating interest in the current learning content and high participation. When students find the content boring or even resist it, the behavioral characteristics are long periods of lowered head, glancing left and right, a fixed upward stare, or even lying down to sleep; most such behavior indicates low participation or that the learner cannot understand what the teacher is saying.
Expression is a natural, physiologically based display; it arises from human instinct rather than conscious control by the brain, so it cannot be disguised. Expression has applications in many areas: in criminal investigation in the judicial field, to judge whether a suspect is lying; in the treatment and prevention of mental illness, to help treat patients; and in education, to help teachers better understand learners' emotional conditions so that emotional compensation can be provided, students learn better, and learning efficiency improves.
Feature extraction is performed on the dimension-reduced image with a histogram of oriented gradients (HOG) over the texture features, improving the efficiency of model feature extraction and boosting performance at the input level, and a weighting operation combined with linear interpolation fully expresses image details and overall features. A CBAM attention module is added to the network to improve accuracy. The Convolutional Block Attention Module (CBAM), proposed by Sanghyun Woo et al. in 2018, is a simple and effective attention module for feedforward convolutional neural networks. Given an intermediate feature map, CBAM sequentially computes attention maps along two different dimensions, channel and spatial, and multiplies each attention map by the input feature map, thereby performing adaptive feature refinement. For expressions with different characteristics, different attention-learning algorithm mechanisms are innovatively provided, and spatial-transformer-network focusing is adopted so that the network can concentrate on the important parts of the face, reducing the parameter count, lessening the impact of exploding gradients, and improving learning efficiency and accuracy for better application to intelligent education. A hedged sketch of CBAM follows.
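This is a minimal CBAM sketch in PyTorch following the structure described above (channel attention, then spatial attention); the reduction ratio and kernel size are the paper's conventional defaults, not values from the patent.

```python
# Hedged sketch of CBAM (Woo et al., 2018): channel attention followed
# by spatial attention, each multiplied into the feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))        # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))         # global max pooling
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)         # channel-wise average
        mx, _ = x.max(dim=1, keepdim=True)        # channel-wise max
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Adaptively refine an intermediate feature map, as described above."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)                        # channel attention first
        return x * self.sa(x)                     # then spatial attention
```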
Given a face image or video, not all parts of the face are important for detecting a particular emotion; in many cases only a specific region needs attention to perceive the underlying emotion. Based on this observation, and inspired by the idea of 'teaching according to aptitude', an attention mechanism is independently provided and added: different attention-learning algorithms are adopted for different expressions, and a spatial transformer network transforms the frame so that the important facial regions receive attention.
In the online teaching environment, gesture recognition and facial expression recognition technologies are introduced and head and limb posture detection is added. The objective state of the expression and the learner's psychological state under subjective behavior are analyzed, and the degree of participation is obtained through intelligent computer analysis; an alarm can be raised when participation remains low for a long time, and the teacher can conveniently grasp the teaching situation.
(III) Feature definition
The learners' emotional states are mainly divided into three categories, 'happy', 'neutral', and 'bored', according to the expression features. Following Izard's facial coding system, these three states are determined by combining facial expressions with torsion angles and head postures, as shown in fig. 5.
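A minimal sketch of this mapping is given below. The decision rule is an illustrative assumption; the angle thresholds reuse the 30° and 15° limits given later in the head-posture section.

```python
# Hedged sketch of the feature-definition step: combine the recognized
# expression with head-pose angles to pick one of the three states.
def learner_state(expression, alpha_deg, beta_deg):
    """expression: 'happy' | 'neutral' | 'bored'; angles in degrees."""
    attentive = abs(alpha_deg) <= 30 and abs(beta_deg) <= 15
    if not attentive:
        return "bored"       # looking away overrides the expression cue
    return expression        # otherwise keep the expression classifier's label
```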
Furthermore, in an embodiment of the invention, the learner's head posture and facial expression behavior are combined, and online education is effectively followed and safeguarded through the computer's high-performance computation, improving online learning efficiency and letting teachers understand students' learning conditions so as to supervise weak links and compensate for them. Analyzing students' facial expressions and head-posture angles and recording their states helps students adjust their learning mode and methods in time, allows targeted plans to be formulated, and supports the continuous development of students' abilities and individuality. Meanwhile, if students' classroom participation can be accurately grasped in teaching and communication, management efficiency can be effectively improved: students' real psychological conditions can be understood, it can be distinguished whether apparently normal performance is feigned, management thinking and working methods can be adjusted in time, mutual trust can be established, and sufficient preparation can be made for possible emergencies.
The operation flow of the system specifically comprises:
(I) Expression feature recognition
The deeper the network structure, the stronger its fitting ability should be; in practice, however, once the network reaches a certain depth, performance no longer increases. In a conventional neural network, convolutional layers, fully connected layers, and other structures are simply connected in sequence, and each layer receives information only from the previous layer, processes it, and passes it to the next. When the network deepens, this single connection mode causes the neural network's performance to degrade. Residual learning ameliorates this deficiency by adding 'shortcut connections' on top of the single connection mode described above. The core of ResNet is the residual learning unit; a residual network adds skip connections to the network.
Meanwhile, because all weights are updated simultaneously in every backward pass through the network, the layers 'lack tacit coordination', and the more layers there are, the harder inter-layer matching becomes, so the distribution of each layer's input must be controlled. We apply Batch Normalization, which also has some regularization effect; with this method, dropout is no longer needed, and overfitting is likewise reduced.
Judging facial expressions, as the basic classification step of the intelligent analysis, is particularly important. Comparing experiments across several models, including ResNet, GoogLeNet, AlexNet, and VGG, the best-performing ResNet model is selected, raising experimental accuracy to its highest: up to 97.59% on the training set, as shown in fig. 6. Let the output function be H(x), F(x) be the residual function, and x be the input:
H(x)=F(x)+x
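A minimal sketch of one such residual unit in PyTorch, with the Batch Normalization mentioned above; the layer sizes are illustrative, not taken from the patent.

```python
# Hedged sketch of a basic residual unit implementing H(x) = F(x) + x.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(            # F(x): the residual branch
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)    # H(x) = F(x) + x via the skip
```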
By training the model with deep learning, a judgment of the facial expression can be obtained, as shown in fig. 7 (a) and fig. 7 (b), and it can be presented in real time as time passes; the real-time tracking of expression features is shown in fig. 8.
(II) Determination of head posture
Included angles α and β in two directions are introduced to track the state of the head posture. When the student is distracted, the participation coefficients are initialized to α(t) = -1 and β(t) = -1, where t is the real-time-varying time. When the change |α(t)| of the included angle α over time does not exceed 30 degrees and the change |β(t)| of the included angle β over time does not exceed 15 degrees, the attention coefficients are α(t) = 1 and β(t) = 1; otherwise the attention coefficient is 0. The judgment of online class participation is given by formula (1):
a(t) = 1 if |α(t)| ≤ 30° and |β(t)| ≤ 15°, otherwise a(t) = 0    (1)
When the student is in a negative state, or face detection fails, or the angular deflection is greater than thirty degrees, the student can essentially be judged as not participating in classroom behavior, and a reminder can be issued when the accumulated time exceeds the standard. The classroom participation time T is given by formula (2):
T = (n / N) · T_o    (2)
where N is the total number of times the learner is detected in class, n is the number of detections reporting active participation, and T_o is the total class duration. Based on this formula, the participation time in class can be obtained, which indirectly expresses the student's degree of interest and learning efficiency in class and helps the teacher make overall coordination plans. A worked sketch of formulas (1) and (2) follows.
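This is a minimal sketch of formulas (1) and (2); the sample data and the minutes unit are illustrative.

```python
# Hedged sketch of formulas (1) and (2).
def attention_coefficient(alpha_deg, beta_deg):
    """Formula (1): 1 if the head stays within 30/15 degrees, else 0."""
    return 1 if abs(alpha_deg) <= 30 and abs(beta_deg) <= 15 else 0

def participation_time(samples, total_class_minutes):
    """Formula (2): T = (n / N) * T_o, with samples a list of (alpha, beta)."""
    N = len(samples)                                    # total detections
    n = sum(attention_coefficient(a, b) for a, b in samples)
    return (n / N) * total_class_minutes if N else 0.0

# Example: 3 of 4 detections fall inside the thresholds, so a 45-minute
# class yields (3 / 4) * 45 = 33.75 minutes of estimated participation.
T = participation_time([(5, 3), (40, 0), (10, -8), (-12, 14)], 45)
```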
To observe the facial expression features effectively, for the facial feature points, the included angle between the frontal face and its twist is the head-pose rotation angle α, as shown in fig. 9 (a) and fig. 9 (b); the included angle between the frontal face and the upward view is the head-pose elevation angle β, as shown in fig. 9 (c) and fig. 9 (d). Parameters such as the rotation angle α and the elevation angle β are analyzed and determined in detail.
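One common way to estimate α and β from detected facial feature points is perspective-n-point pose estimation; the sketch below assumes OpenCV's solvePnP with a generic 3D face model and rough camera intrinsics, none of which are specified by the patent.

```python
# Hedged sketch of head-pose angle estimation with OpenCV's solvePnP.
import cv2
import numpy as np

# Generic 3D face model points: nose tip, chin, left/right eye outer
# corners, left/right mouth corners (illustrative values).
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)])

def head_pose_angles(image_points, frame_w, frame_h):
    """image_points: 6x2 float array of pixel coords in MODEL_POINTS order.
    Returns (alpha, beta): yaw and pitch in degrees, or None on failure."""
    cam = np.array([[frame_w, 0, frame_w / 2],
                    [0, frame_w, frame_h / 2],
                    [0, 0, 1]], dtype=np.float64)   # rough intrinsics
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, cam,
                                  np.zeros(4))      # assume no lens distortion
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)                    # rotation vector -> matrix
    euler = cv2.decomposeProjectionMatrix(np.hstack([rot, tvec]))[6]
    beta, alpha = euler[0, 0], euler[1, 0]          # pitch, yaw in degrees
    return alpha, beta
```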
While the fundamental principles, principal features, and advantages of the invention have been shown and described, it will be understood by those skilled in the art that the invention is not limited by the embodiments described above; the embodiments merely illustrate the principles of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. A learner online learning efficiency evaluation system based on emotion recognition, comprising, within its system architecture, a real-time camera monitoring module, an image collection module, an image processing module, and a statistical analysis module, characterized in that: the real-time camera monitoring module is used for acquiring the key data; the image processing module is used for analyzing and processing the data; and the statistical analysis module is used for performing statistical analysis on the constructed model data;
the working steps of the learner online learning efficiency evaluation system based on emotion recognition are as follows:
1) Face detection based on video images: given a video image, the person's head posture, face position, neck, arms, and flexion-extension postures must be located and detected, comprising three parts: image segmentation, detection-object locking, and feature extraction;
2) Analysis and judgment of facial expressions: feature extraction is performed on the dimension-reduced image with a histogram of oriented gradients (HOG) over the texture features, improving the efficiency of model feature extraction and boosting performance at the input level; a weighting operation combined with linear interpolation fully expresses image details and overall features; and a CBAM module is added to the network;
3) Feature definition: the learner's emotional state is divided into three categories, 'happy', 'neutral', and 'bored', according to the expression features, and these three states are judged by combining facial expression with the torsion angle and the head posture.
2. The system of claim 1, wherein: the real-time camera monitoring module monitors in real time the learning state of the student in front of the screen by controlling the notebook or desktop computer's camera during the online course, and transmits the acquired video signal to the computer's master control and operation center in real time.
3. The system of claim 2, wherein: the real-time camera monitoring module comprises a face detection function, an expression recognition function, a head posture analysis function, and a body behavior judgment function.
4. The system of claim 1, wherein: the image collection module automatically collects the required audio and video as data material by adjusting the device's reading frequency and the number of digitization passes within a given period, providing a data basis for the operation of subsequent modules.
5. The system of claim 1, wherein: the image processing module performs face detection on the image target, extracts the facial feature region and divides it into 468 key points, tracks the learner's body behaviors such as the arms and five fingers in real time according to the model, and provides reference data for the next module.
6. The system of claim 1, wherein: the statistical analysis module, supported by the data provided by the previous module, feeds the inputs into the model for comparison to obtain the student's attention to and participation in the course content, and records the time and issues a real-time reminder when participation at a given moment is low.
CN202211456127.5A 2022-11-21 2022-11-21 System for evaluating online learning efficiency of learner based on emotion recognition Pending CN115966003A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211456127.5A CN115966003A (en) 2022-11-21 2022-11-21 System for evaluating online learning efficiency of learner based on emotion recognition


Publications (1)

Publication Number Publication Date
CN115966003A true CN115966003A (en) 2023-04-14

Family

ID=87356941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211456127.5A Pending CN115966003A (en) 2022-11-21 2022-11-21 System for evaluating online learning efficiency of learner based on emotion recognition

Country Status (1)

Country Link
CN (1) CN115966003A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036877A (en) * 2023-07-18 2023-11-10 六合熙诚(北京)信息科技有限公司 Emotion recognition method and system for facial expression and gesture fusion
CN117036877B (en) * 2023-07-18 2024-08-23 六合熙诚(北京)信息科技有限公司 Emotion recognition method and system for facial expression and gesture fusion


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination