CN115471894A - Multi-mode learning state identification method based on camera shooting

Multi-mode learning state identification method based on camera shooting

Info

Publication number
CN115471894A
Authority
CN
China
Prior art keywords
score
learning state
calculating
camera
head
Prior art date
Legal status
Pending
Application number
CN202211158142.1A
Other languages
Chinese (zh)
Inventor
徐慧
赵旭
金怀杰
Current Assignee
Nantong University
Original Assignee
Nantong University
Priority date: 2022-09-22
Filing date: 2022-09-22
Publication date: 2022-12-13
Application filed by Nantong University
Priority to CN202211158142.1A
Publication of CN115471894A
Legal status: Pending

Classifications

    • G06V 40/161 Human faces — detection; localisation; normalisation
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V 10/764 Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 40/168 Human faces — feature extraction; face representation
    • G06V 40/174 Facial expression recognition


Abstract

The invention relates to the technical field of student learning state analysis, and in particular to a multi-modal learning state identification method based on camera shooting, comprising the following steps: S1: acquiring face video of a learner through a camera; S2: extracting a face image from the video, detecting the facial expression, and calculating an expression score; S3: extracting a head posture image from the video, detecting the head posture, and calculating a head posture score; S4: extracting a human eye image from the video, counting eye closures, and calculating the fatigue degree; S5: performing comprehensive scoring based on the scores of S2, S3 and S4 and quantifying the learning state result. By fusing multi-modal information from emotion, head posture and fatigue recognition on face video, the method is convenient to implement, has high credibility and real-time performance, can automatically inform teachers and students of the students' states, helps teachers adjust teaching strategies so as to teach effectively, and can also remind tired students to re-engage with their learning.

Description

Multi-mode learning state identification method based on camera shooting
Technical Field
The invention relates to the technical field of student learning state analysis, and in particular to a multi-modal learning state identification method based on camera shooting.
Background
The learning state refers to the strength, stability and durability of a student's physical and mental activity while engaged in learning. For learners, whether the learning state is good is one of the important factors influencing learning efficiency and learning results. In student-centered teaching, the learner's state deserves particular attention. To ensure the learning effect, teachers need to intervene in a passive learning state in time and adjust their teaching strategies. It is therefore very important to know the learner's state promptly during teaching, so as to judge whether the teaching is effective. In online teaching, however, the spatial separation between teachers and students makes it difficult for a teacher to know the learning state of the students at the other end of the network.
Research on the learning state has two main directions. (1) Cognitive performance during learning: for example, establishing the correlation between learning behavior and learning evaluation by monitoring the learning process and recording learning data. Online learning platforms represented by MOOCs widely use participation in courseware, homework, examinations, questions, discussion, peer review and other activities as the evaluation basis. (2) Physiological and psychological responses during learning: for example, identifying and judging the learner's state from facial expressions, or observing changes in physiological parameters through wearable devices, such as predicting student attention from heart rate. With the continuing maturity of the Internet of Things, big data, artificial intelligence and related technologies, it has become possible to build automatic or semi-automatic systems for tracking and analyzing learning behavior, and tracking and evaluating online learning behavior and emotional states has become a topic of learning analytics and other educational research.
During teaching, and especially when teachers and students are separated online, knowing the students' learning states in time is essential to guarantee the teaching effect. Existing research, and in particular the technology that can be put into practical use, mostly performs after-the-fact evaluation and has poor real-time performance.
Disclosure of Invention
The invention aims to remedy the defects of the prior art by providing a multi-modal learning state identification method based on camera shooting, which fuses emotion, head posture and fatigue recognition from face video. The method is convenient to implement, has high credibility and real-time performance, can automatically inform teachers and students of the students' states, helps teachers adjust teaching strategies so as to teach effectively, and can remind tired students to re-engage with their learning.
In order to achieve the purpose, the invention adopts the following technical scheme:
a multi-mode learning state recognition method based on camera shooting comprises the following specific steps:
S1: acquiring face video of a learner through a camera;
S2: extracting a face image from the video, detecting the facial expression, and calculating an expression score;
S3: extracting a head posture image from the video, detecting the head posture, and calculating a head posture score;
S4: extracting a human eye image from the video, counting eye closures, and calculating the fatigue degree;
S5: performing comprehensive scoring based on the scores of S2, S3 and S4, and quantifying the learning state result.
Preferably, in step S1, the specific steps are as follows:
S101: the shooting is performed periodically, with one frame taken every 0.5 to 1 second.
Preferably, in step S2, the specific steps are as follows:
S201: expression recognition: recognizing the facial expression by using a pre-trained facial expression model in OpenVINO;
S202: counting the expression recognition results of each video frame within the period for each emotion class, giving C_i;
S203: determining the weight of each emotion with the yaahp analytic hierarchy process (AHP) software, with the following steps:
Step 1: constructing a hierarchical model: the target layer is the facial expression score; the criterion layer comprises negative, positive and neutral expressions; the sub-criterion layer comprises k emotions, of which k1 are negative, k2 are positive, and 1 is neutral.
Step 2: constructing a consistency judgment matrix, with the following operations:
2.1 open the 'judgment matrix' page;
2.2 on the 'judgment matrix' page, compare the relative importance of each pair of factors with respect to a factor on the layer above, entering the comparison value manually or by clicking one of the on-screen options;
2.3 after all values are set, if an inconsistency prompt appears, modify the data until all data meet the consistency requirement; a consistency ratio below 0.1 meets the requirement;
Step 3: open the calculation result page to obtain the weight value of each emotion;
S204: calculating the emotion composite score G_1 as follows:

G_1 = (1/M) Σ_{i=1}^{k} w_i · C_i

wherein G_1 is the composite weighted emotion score, M is the number of video frames acquired in the period, C_i is the number of frames labelled with emotion i, and w_i is the corresponding weight.
Preferably, in step S3, the specific steps are as follows:
S301: calculating the average values of the Euler angles (pitch and yaw) over the period as follows:
Step 1: calculating the average pitch angle of the head:

ᾱ = (1/M) Σ_{i=1}^{M} α_i

wherein M is the number of video frames acquired in the period and α_i is the pitch angle of the face in the i-th video frame;
Step 2: calculating the average yaw angle of the head:

β̄ = (1/M) Σ_{i=1}^{M} β_i

wherein M is the number of video frames acquired in the period and β_i is the yaw angle of the face in the i-th video frame;
S302: calculating the head posture score as follows:
Step 1: calculating the pitch angle score from the average pitch angle (the formula appears only as an image in the published text);
Step 2: calculating the yaw angle score from the average yaw angle (the formula appears only as an image in the published text);
Step 3: calculating the head composite score from the pitch and yaw angle scores (the formula appears only as an image in the published text),
wherein G_2 is the head posture score.
Preferably, in step S4, the specific steps are as follows:
S401: eye-closure detection adopts the P80 criterion of the PERCLOS fatigue detection algorithm; the student's fatigue degree is characterized by the proportion of closed-eye frames within the period, calculated as:

P = n / M

wherein n is the number of closed-eye frames in the period and M is the number of video frames acquired in the period;
S402: calculating the fatigue score G_3 = 1 − P,
wherein G_3 is the fatigue score.
Preferably, in step S5, the specific steps are as follows:
S501: using the yaahp analytic hierarchy process software to calculate the weights of expression, head posture and fatigue degree in the learning state;
Step 1: the hierarchical model has 3 layers: the target layer is the learning state; the criterion layer is the 3 feature scores: facial expression score, head posture score and fatigue score; the alternative layer is: good, general, poor;
Step 2: constructing a consistency judgment matrix to obtain the weights v_1, v_2 and v_3 corresponding to the 3 features;
S502: the learning state comprehensive score is

G = v_1·G_1 + v_2·G_2 + v_3·G_3
Compared with the prior art, the invention has the following beneficial effects:
1. The invention derives emotion, head posture and fatigue-degree scores from face video and fuses these three kinds of information to judge the learner's state; because emotion, head posture and fatigue reflect the learner's psychological and physiological performance, the result is more credible.
2. The invention only needs the camera already built into a notebook computer, tablet or smartphone, so no additional hardware is required; the video is processed in 5-minute periods, with 120 frames taken for processing each time, and the original images need not be stored after processing; the invention therefore works in real time and has low memory requirements.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the emotion score hierarchy in an embodiment of the invention;
FIG. 3 is a diagram of the expression recognition judgment matrices in an embodiment of the present invention;
FIG. 4 is a diagram illustrating the combination weights in an embodiment of the present invention;
FIG. 5 is a diagram of the hierarchy used to synthesize the learning state from the three features in an embodiment of the present invention;
FIG. 6 is a diagram of the learning state comprehensive evaluation weights in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings, so that those skilled in the art can better understand the advantages and features of the present invention, and thus the scope of the present invention is more clearly defined. The embodiments described herein are only a few embodiments of the present invention, rather than all embodiments, and all other embodiments that can be derived by one of ordinary skill in the art without inventive faculty based on the embodiments described herein are intended to fall within the scope of the present invention.
Referring to FIGS. 1-6, a multi-modal learning state recognition method based on camera shooting comprises the following steps:
S1: acquiring face video of a learner through a camera;
S2: extracting a face image from the video, detecting the facial expression, and calculating an expression score;
S3: extracting a head posture image from the video, detecting the head posture, and calculating a head posture score;
S4: extracting a human eye image from the video, counting eye closures, and calculating the fatigue degree;
S5: performing comprehensive scoring based on the scores of S2, S3 and S4, and quantifying the learning state result.
Specifically, in step S1, a camera is used to capture face video of the learner as follows:
images are captured periodically over a period of 3 to 5 minutes, with one frame taken every 1 second, yielding 100 to 120 frames.
Specifically, in step S2, a face image is extracted from the video, facial expression detection is performed, and an expression score is obtained, as follows:
S201: recognize the facial expression with a pre-trained facial expression model in OpenVINO;
S202: count the per-frame expression recognition results within the period for each emotion class, giving C_i;
S203: use the yaahp analytic hierarchy process software to calculate the weight of each emotion, as follows:
Step 1: construct a hierarchical model: the target layer is the facial expression score; the criterion layer comprises negative, positive and neutral expressions; the sub-criterion layer comprises k emotions, of which k1 are negative, k2 are positive, and 1 is neutral.
Negative expressions are set as 4 types: fear, sadness, disgust and anger; positive expressions include surprise and happiness; neutral expressions are 1 type: neutral. The hierarchical model is shown in FIG. 2.
Step 2: construct a consistency judgment matrix. The specific operation is as follows:
2.1 open the 'judgment matrix' page;
2.2 on the 'judgment matrix' page, compare the relative importance of each pair of factors with respect to a factor (criterion or target) on the layer above, entering the comparison value manually or by clicking one of the on-screen options.
The consistency judgment matrix for the model of FIG. 2 is shown in FIG. 3; a consistency ratio below 0.1 meets the requirement.
Step 3: open the calculation result page to obtain the weight value of each emotion.
Using the hierarchical structure of FIG. 2, three consistency judgment matrices are obtained, for the facial expression, the negative expressions and the positive expressions respectively, as shown in FIG. 3; a sketch of the underlying AHP computation follows.
S204: calculate the emotion composite score G_1 as follows:

G_1 = (1/M) Σ_{i=1}^{k} w_i · C_i

wherein, within one period, G_1 is the composite weighted emotion score, M is the number of video frames captured, C_i is the number of frames labelled with emotion i, and w_i is the corresponding weight.
For the judgment matrices of FIG. 3, the weights of the 7 emotions are obtained as shown in FIG. 4.
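A minimal sketch of the expression-score computation under the reading G_1 = (1/M) Σ w_i·C_i given above. The emotion labels and weight values are illustrative assumptions, not the weights of FIG. 4.

```python
# Illustrative sketch: weighted expression score over one sampling period.
from collections import Counter

def expression_score(frame_labels, emotion_weights):
    """frame_labels: one predicted emotion label per sampled frame."""
    m = len(frame_labels)                        # M: frames in the period
    counts = Counter(frame_labels)               # C_i: frames per emotion label
    return sum(emotion_weights.get(label, 0.0) * c for label, c in counts.items()) / m

# Example with assumed labels and weights (not the values of FIG. 4):
g1 = expression_score(
    ["happy", "neutral", "neutral", "sad"],
    {"happy": 0.20, "neutral": 0.15, "sad": 0.05, "anger": 0.03},
)
```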
Specifically, in step S3, a head posture image is extracted from the captured image, head posture detection is performed, and a head posture score is obtained, and the calculation method is as follows:
s301: calculating the average value of the pitch angle and the yaw angle in the period, wherein the method comprises the following steps:
Step 1: calculate the average pitch angle of the head:

ᾱ = (1/M) Σ_{i=1}^{M} α_i

wherein M is the number of video frames acquired in the period and α_i is the pitch angle of the face in the i-th video frame.
Step 2: calculate the average yaw angle of the head:

β̄ = (1/M) Σ_{i=1}^{M} β_i

wherein M is the number of video frames acquired in the period and β_i is the yaw angle of the face in the i-th video frame.
S302: calculating the head pose score by the following method:
step1: calculating a pitch angle score
Figure BDA0003858287250000065
Step2: calculating a yaw score
Figure BDA0003858287250000066
Step3: the head-part comprehensive score is calculated,
Figure BDA0003858287250000067
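A sketch of the head-posture step is given below. The averaging follows S301; because the pitch-score, yaw-score and composite-score formulas appear only as images in the published text, the linear mapping (score 1 at 0 degrees, score 0 at max_angle) and the simple average used here are assumed placeholders rather than the patent's formulas.

```python
# Illustrative sketch: average head pose over a period and an assumed scoring rule.
import numpy as np

def head_pose_score(pitch_deg, yaw_deg, max_angle=45.0):
    mean_pitch = float(np.mean(pitch_deg))                       # average pitch over the period
    mean_yaw = float(np.mean(yaw_deg))                           # average yaw over the period
    pitch_score = max(0.0, 1.0 - abs(mean_pitch) / max_angle)    # assumed linear mapping
    yaw_score = max(0.0, 1.0 - abs(mean_yaw) / max_angle)        # assumed linear mapping
    return (pitch_score + yaw_score) / 2.0                       # assumed combination into G_2

g2 = head_pose_score(pitch_deg=[2.0, -5.0, 10.0], yaw_deg=[1.0, 0.0, -3.0])
```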
specifically, step S4 is to extract a human eye image from the captured image, perform eye closure frequency statistics, and calculate the fatigue degree:
S401: eye-closure detection adopts the P80 criterion of the PERCLOS fatigue detection algorithm; the student's fatigue degree is characterized by the proportion of closed-eye frames within the period, calculated as:

P = n / M

wherein n is the number of closed-eye frames in the period and M is the number of video frames in the period.
S402: calculate the fatigue score G_3 = 1 − P.
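A minimal sketch of the fatigue step, computing P = n/M and G_3 = 1 − P. How each frame is judged closed-eye is left to the eye detector; the eye-aspect-ratio threshold below is a common heuristic assumed only for illustration and is not a value taken from the patent.

```python
# Illustrative sketch: PERCLOS-style fatigue score over one sampling period.
def fatigue_score(eye_aspect_ratios, closed_threshold=0.2):
    m = len(eye_aspect_ratios)                                           # M: frames in the period
    n = sum(1 for ear in eye_aspect_ratios if ear < closed_threshold)    # closed-eye frames
    p = n / m                                                            # PERCLOS value P
    return 1.0 - p                                                       # fatigue score G_3

g3 = fatigue_score([0.31, 0.28, 0.12, 0.30, 0.27])
```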
Specifically, in step S5, comprehensive scoring is carried out based on the scores of S2, S3 and S4, and the learning state result is quantified:
S501: use the yaahp analytic hierarchy process software to determine the weights of expression, head posture and fatigue degree in the learning state.
Step 1: the hierarchical model has 3 layers: the target layer is the learning state; the criterion layer is the 3 feature scores: facial expression score, head posture score and fatigue score; the alternative layer is: good, general, poor.
The hierarchical model is shown in FIG. 5.
Step 2: construct a consistency judgment matrix to obtain the weights v_1, v_2 and v_3 corresponding to the 3 features.
S502: the learning state comprehensive score is

G = v_1·G_1 + v_2·G_2 + v_3·G_3

wherein G_1, G_2 and G_3 are the composite weighted emotion score, the head posture score and the fatigue score, respectively.
For the model of fig. 5, the calculation results of the weights are shown in fig. 6.
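A minimal sketch of the final synthesis, assuming the usual AHP weighted sum G = v_1·G_1 + v_2·G_2 + v_3·G_3; the weight values below are illustrative and are not the values shown in FIG. 6.

```python
# Illustrative sketch: combine the three feature scores with assumed AHP weights.
def learning_state_score(g1, g2, g3, v=(0.4, 0.3, 0.3)):
    return v[0] * g1 + v[1] * g2 + v[2] * g3

overall = learning_state_score(g1=0.72, g2=0.85, g3=0.90)
```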
In conclusion, the multi-modal learning state recognition method, which fuses emotion, head posture and fatigue recognition from face video, is convenient to implement, highly credible and real-time; it can automatically inform teachers and students of the students' states, help teachers adjust teaching strategies so as to teach effectively, and remind tired students to re-engage with their learning.
From the description and practice disclosed herein, those skilled in the art may make modifications and improvements without departing from the principles of the disclosure. Such modifications and improvements, made without departing from the spirit of the invention, are also considered to fall within its scope.

Claims (6)

1. A multi-modal learning state recognition method based on camera shooting, characterized by comprising the following steps:
S1: acquiring face video of a learner through a camera;
S2: extracting a face image from the video, detecting the facial expression, and calculating an expression score;
S3: extracting a head posture image from the video, detecting the head posture, and calculating a head posture score;
S4: extracting a human eye image from the video, counting eye closures, and calculating the fatigue degree;
S5: performing comprehensive scoring based on the scores of S2, S3 and S4, and quantifying the learning state result.
2. The imaging-based multi-modal learning state recognition method according to claim 1, wherein in step S1, the specific steps are as follows:
S101: the shooting is performed periodically, with one frame taken every 0.5 to 1 second.
3. The imaging-based multi-modal learning state recognition method according to claim 1, wherein in step S2, the specific steps are as follows:
S201: expression recognition: recognizing the facial expression by using a pre-trained facial expression model in OpenVINO;
S202: counting the expression recognition results of each video frame within the period for each emotion class, giving C_i;
S203: determining the weight of each emotion with the yaahp analytic hierarchy process software, with the following steps:
Step 1: constructing a hierarchical model: the target layer is the facial expression score; the criterion layer comprises negative, positive and neutral expressions; the sub-criterion layer comprises k emotions, of which k1 are negative, k2 are positive, and 1 is neutral;
Step 2: constructing a consistency judgment matrix, with the following operations:
2.1 open the 'judgment matrix' page;
2.2 on the 'judgment matrix' page, compare the relative importance of each pair of factors with respect to a factor on the layer above, entering the comparison value manually or by clicking one of the on-screen options;
2.3 after all values are set, if an inconsistency prompt appears, modify the data until all data meet the consistency requirement; a consistency ratio below 0.1 meets the requirement;
Step 3: open the calculation result page to obtain the weight value of each emotion;
S204: calculating the emotion composite score G_1 by the following formula:

G_1 = (1/M) Σ_{i=1}^{k} w_i · C_i

wherein G_1 is the composite weighted emotion score, M is the number of video frames acquired in the period, C_i is the number of frames labelled with emotion i, and w_i is the corresponding weight.
4. The method for recognizing a multi-modal learning state based on imaging according to claim 1, wherein in step S3, the specific steps are as follows:
S301: calculating the average values of the Euler angles (pitch and yaw) over the period as follows:
Step 1: calculating the average pitch angle of the head:

ᾱ = (1/M) Σ_{i=1}^{M} α_i

wherein M is the number of video frames acquired in the period and α_i is the pitch angle of the face in the i-th video frame;
Step 2: calculating the average yaw angle of the head:

β̄ = (1/M) Σ_{i=1}^{M} β_i

wherein M is the number of video frames acquired in the period and β_i is the yaw angle of the face in the i-th video frame;
S302: calculating the head posture score as follows:
Step 1: calculating the pitch angle score from the average pitch angle (the formula appears only as an image in the published text);
Step 2: calculating the yaw angle score from the average yaw angle (the formula appears only as an image in the published text);
Step 3: calculating the head composite score from the pitch and yaw angle scores (the formula appears only as an image in the published text),
wherein G_2 is the head posture score.
5. The imaging-based multi-modal learning state recognition method according to claim 1, wherein in step S4, the specific steps are as follows:
S401: eye-closure detection adopts the P80 criterion of the PERCLOS fatigue detection algorithm; the student's fatigue degree is characterized by the proportion of closed-eye frames within the period, calculated as:

P = n / M

wherein n is the number of closed-eye frames in the period and M is the number of video frames acquired in the period;
S402: calculating the fatigue score G_3 = 1 − P,
wherein G_3 is the fatigue score.
6. The imaging-based multi-modal learning state recognition method according to claim 1, wherein in step S5, the specific steps are as follows:
S501: using the yaahp analytic hierarchy process software to calculate the weights of expression, head posture and fatigue degree in the learning state;
Step 1: the hierarchical model has 3 layers: the target layer is the learning state; the criterion layer is the 3 feature scores: facial expression score, head posture score and fatigue score; the alternative layer is: good, general, poor;
Step 2: constructing a consistency judgment matrix to obtain the weights v_1, v_2 and v_3 corresponding to the 3 features;
S502: the learning state comprehensive score is

G = v_1·G_1 + v_2·G_2 + v_3·G_3.

Priority Applications (1)

CN202211158142.1A (published as CN115471894A) — priority date 2022-09-22, filing date 2022-09-22 — Multi-mode learning state identification method based on camera shooting

Publications (1)

CN115471894A — published 2022-12-13

Patent Citations (3)

* Cited by examiner, † Cited by third party

KR101563977B1 * — priority 2014-07-09, published 2015-10-28 — 공주대학교 산학협력단 — Emotion inference system based on fuzzy integral in accordance with personalized emotional information
CN107392159A * — priority 2017-07-27, published 2017-11-24 — 竹间智能科技(上海)有限公司 — Facial focus detection system and method
CN112613579A * — priority 2020-12-31, published 2021-04-06 — 南京视察者智能科技有限公司 — Model training method and evaluation method for human face or human head image quality and selection method for high-quality images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

马璐 et al., 物流决策与优化 (Logistics Decision-Making and Optimization), Huazhong University of Science and Technology Press, pp. 203-206 *


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination