CN111507241A - Lightweight network classroom expression monitoring method - Google Patents

Lightweight network classroom expression monitoring method

Info

Publication number
CN111507241A
Authority
CN
China
Prior art keywords
correspond
eyebrow
action
expressions
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010288809.4A
Other languages
Chinese (zh)
Inventor
阳天瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Juyang Technology Group Co ltd
Original Assignee
Sichuan Juyang Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Juyang Technology Group Co ltd
Priority to CN202010288809.4A
Publication of CN111507241A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G06V40/176 - Dynamic expression
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The invention provides a lightweight online classroom expression monitoring method, which comprises: collecting facial pictures of students and combining different actions of the eyes, mouth and eyebrows into corresponding expressions to obtain an image training set; training a classroom expression monitoring model, which has a width learning network structure, with the image training set; shooting the facial expressions of students in class to obtain still images of their facial features; and inputting the still images into the classroom expression monitoring model to obtain output values and determine the expression mode to which each student's expression belongs. The method yields more effective expression recognition results, recognizes expression modes more accurately, and has low operating cost and technical difficulty.

Description

Lightweight network classroom expression monitoring method
Technical Field
The invention relates to an expression monitoring method, and belongs to the field of image processing.
Background
Online classes are a course format that has developed rapidly in recent years; they have received great attention from governments, universities and enterprises worldwide and have become an important force driving change in higher education. By exploiting the speed and convenience of video transmission, the online classroom delivers the teaching process at scale, and interactive exercises have been introduced to address the lack of teaching feedback caused by one-way video transmission. The teaching feedback provided by interactive exercises is, however, still insufficient compared with the traditional offline lecture: in an offline course the lecturer can obtain feedback from the students' facial expressions and by asking them questions, and so make timely teaching adjustments, which an online classroom cannot do.
Neural networks based on deep learning are a feasible direction for classroom expression monitoring. However, as online classes become widespread, each class may contain a large number of students, and a system with a high recognition rate then needs a deeper neural network, which brings excessive computation, overly long training time and excessive memory consumption. Moreover, a deeper network involves a large number of weights and parameters, which must be adjusted continuously to reach the best training result.
Using width learning (broad learning) for expression recognition, by contrast, allows real-time online learning and fast training. Whereas a deep learning system may require tens of hours or several days of training on a high-performance GPU server, a width learning system can be built within tens of seconds or a few minutes, even on an ordinary computer, roughly 1,000 to 2,000 times faster than a deep learning system. During recognition, high-accuracy results can be achieved without high-performance computing equipment, and the network weights and parameters do not need to be adjusted repeatedly. Compared with a deep learning system, a width learning system therefore greatly reduces cost and technical difficulty, requiring neither an expensive GPU server nor repeated parameter tuning by technicians.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a lightweight expression monitoring method for the online classroom; the expression recognition results are faster and more effective, and the operating cost and technical difficulty are low.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
s1, collecting face pictures of students, labeling the positions of the eyes, mouth and eyebrows in each face picture, labeling the action mode of each part, and combining the different actions of the eyes, mouth and eyebrows to correspond to different expressions to obtain an image training set;
s2, training a classroom expression monitoring model by using the image training set obtained in the step S1; the classroom expression monitoring model is of a width learning network structure and is a two-layer network comprising an input layer and an output layer;
s3, shooting the facial expressions of students in a classroom to obtain real-time videos of the facial expressions of the students;
s4, performing framing processing on the real-time video, and converting the real-time video into a static image with facial features of the student;
and S5, inputting the static image obtained in the step S4 into a classroom expression monitoring model, obtaining an output value, and determining an expression mode to which the expression of the student belongs in the image.
In step S1, when the mouth is still, still eyes with still eyebrows correspond to a normal expression, still eyes with frowning eyebrows correspond to a thinking expression, closed eyes with still eyebrows correspond to a bored expression, closed eyes with frowning eyebrows correspond to a thinking expression, glaring eyes with still eyebrows correspond to an angry expression, glaring eyes with frowning eyebrows correspond to an angry expression, and glaring eyes with raised eyebrows correspond to a surprised expression; when the mouth is pursed in a way biased toward thinking or sadness, still eyes with still eyebrows correspond to a thinking expression, still eyes with frowning eyebrows correspond to a thinking expression, closed eyes with still eyebrows correspond to a sad expression, closed eyes with frowning eyebrows correspond to a sad expression, glaring eyes with frowning eyebrows correspond to an angry expression, and glaring eyes with raised eyebrows correspond to an angry expression; when the mouth is pursed in a way biased toward anger, all actions of the eyes and eyebrows correspond to an angry expression; when the mouth grins, still eyes with still eyebrows correspond to a sad expression, still eyes with frowning or raised eyebrows correspond to a happy expression, closed eyes with still eyebrows correspond to a sad expression, closed eyes with frowning or raised eyebrows correspond to a happy expression, glaring eyes with still eyebrows correspond to a bored expression, and glaring eyes with raised eyebrows correspond to a surprised expression.
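For illustration only, the mapping described above can be written as a lookup table. The sketch below (Python) is a minimal illustrative encoding; the state names and the fallback to "normal" for unlisted combinations are assumptions of this sketch, not part of the claimed method:

# Hypothetical encoding of the action-to-expression mapping described above.
# Mouth actions: "still", "pursed_thinking", "pursed_angry", "grin"
# Eye actions:   "still", "closed", "glaring"
# Brow actions:  "still", "frown", "raised"
EXPRESSION_MAP = {
    ("still", "still", "still"): "normal",
    ("still", "still", "frown"): "thinking",
    ("still", "closed", "still"): "bored",
    ("still", "closed", "frown"): "thinking",
    ("still", "glaring", "still"): "angry",
    ("still", "glaring", "frown"): "angry",
    ("still", "glaring", "raised"): "surprised",
    ("pursed_thinking", "still", "still"): "thinking",
    ("pursed_thinking", "still", "frown"): "thinking",
    ("pursed_thinking", "closed", "still"): "sad",
    ("pursed_thinking", "closed", "frown"): "sad",
    ("pursed_thinking", "glaring", "frown"): "angry",
    ("pursed_thinking", "glaring", "raised"): "angry",
    ("grin", "still", "still"): "sad",
    ("grin", "still", "frown"): "happy",
    ("grin", "still", "raised"): "happy",
    ("grin", "closed", "still"): "sad",
    ("grin", "closed", "frown"): "happy",
    ("grin", "closed", "raised"): "happy",
    ("grin", "glaring", "still"): "bored",
    ("grin", "glaring", "raised"): "surprised",
}

def classify_expression(mouth, eye, brow):
    """Return the expression label for one combination of action modes."""
    if mouth == "pursed_angry":
        return "angry"  # all eye and eyebrow actions map to angry for this mouth action
    # Falling back to "normal" for combinations not listed above is an assumption of this sketch.
    return EXPRESSION_MAP.get((mouth, eye, brow), "normal")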
In step S2, the image training set is input, and the width learning system extracts features from the images in the training set to generate feature nodes and, from these, enhancement nodes, which together serve as the input layer of the classroom expression monitoring model.
The feature nodes are obtained by mapping the image data X of the image training set. If n groups of feature nodes are generated, the i-th group is Zi = φ(X·Wei + βei), i = 1, ..., n, where φ is the feature mapping function, and Wei and βei are randomly generated weight coefficients and bias terms respectively; the notation Z^n ≡ [Z1, ..., Zn] denotes the feature nodes mapped from the image data of the whole training set.
The enhancement nodes are obtained through the function Hj = ξ(Z^n·Whj + βhj), denoted Hj, and the first j groups of enhancement nodes are written H^j ≡ [H1, ..., Hj], where ξ is the activation function, and Whj and βhj are randomly generated weight coefficients and bias terms respectively. The m-th group of enhancement nodes is Hm = ξ(Z^n·Whm + βhm).
The output value of the classroom expression monitoring model is then Y = [Z^n | H^m]·W^m. The weight parameter W^m of the whole classroom expression monitoring model is obtained through the pseudo-inverse: W^m = (V3^T·V3 + c·I)^(-1)·V3^T·Y, where c is a regularization parameter, I is the identity matrix, and V3 = [Z^n | H^m] is the column-wise concatenation of the feature nodes and the enhancement nodes.
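As an illustration of the weight solution just described, the following minimal NumPy sketch computes W^m by the ridge-regularized pseudo-inverse; the variable names V3, Y and c follow the description above, while the function name and the default value of c are assumptions of this sketch:

import numpy as np

def solve_output_weights(V3, Y, c=2 ** -30):
    """Solve Wm = (V3^T V3 + c I)^-1 V3^T Y (ridge-regularized pseudo-inverse).

    V3 : (p, k) input layer (feature nodes and enhancement nodes, column-concatenated)
    Y  : (p, n_classes) training targets, one row per sample
    c  : regularization parameter (the default here is an arbitrary placeholder)
    """
    k = V3.shape[1]
    A = V3.T @ V3 + c * np.eye(k)      # regularized Gram matrix
    return np.linalg.solve(A, V3.T @ Y)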
In step S2, the specific steps of extracting features from the images of the training set to generate the feature nodes and the enhancement nodes of the feature nodes, which together form the input layer of the facial expression monitoring model, are as follows:
Let T be the p×q training data matrix of the image training set, where each element is a pixel value, p is the number of samples and q is the total number of pixels of a sample image; T is Z-score standardized. T is then augmented by appending one column, so that the bias term enters directly through the matrix product, and becomes T1 of size p×(q+1).
A random weight matrix We of size (q+1)×N1 is generated, where N1 is the number of feature nodes per window and the values of We are uniformly distributed in (0, 1); the feature nodes H1 = T1 × We are obtained and then normalized.
A sparse representation of H1 is computed by finding a sparse matrix Wβ such that T1 × Wβ = H1; the feature nodes of the current window are then V1 = normal(T1 × Wβ), where normal denotes normalization.
The above feature-node generation step is iterated N2 times, and the resulting feature node matrix y is a p × (N2 × N1) matrix.
A bias term is added to the feature node matrix y, which is then standardized to obtain H2.
Let N3 be the number of enhancement nodes; the coefficient matrix Wh of the enhancement nodes is a random matrix of size (N1 × N2 + 1) × N3 that has been orthonormalized.
The enhancement nodes are activated as V2 = tansig(H2 × Wh × s), where s is the scaling scale of the enhancement nodes and tansig is an activation function commonly used in BP neural networks.
The input layer V3 = [y V2] is obtained; the feature dimension of each sample is N1 × N2 + N3.
The invention has the beneficial effects that:
1) The six basic expression modes are happiness, sadness, fear, anger, surprise and disgust, but fear and disgust rarely appear in a classroom, and even when they do appear they are mostly unrelated to the course content. Fear and disgust are therefore removed from the expression modes, while thinking, bored and normal, which occur frequently, are added, making the expression recognition results more effective.
2) The invention is a network classroom facial expression monitoring method based on a width learning system. The width learning architecture is shallow, needs neither a GPU server nor a large amount of training time, and training can be completed within tens of minutes on an ordinary computer; moreover, the weights and parameters do not need to be adjusted continuously, which keeps the method simple and convenient. The invention therefore has low operating cost and technical difficulty and can be deployed in ordinary schools.
3) The expression mode is obtained from a combination of action modes, so the expression mode is recognized more accurately.
For these reasons, the invention can be widely applied in the field of monitoring student behavior in class.
Drawings
FIG. 1 is a block diagram of a width learning system of the present invention.
Fig. 2 is a process schematic of an embodiment of the invention.
Detailed Description
The present invention is further described below with reference to the drawings and embodiments; the invention includes, but is not limited to, the following embodiments.
The invention provides a lightweight network classroom expression monitoring method based on a width learning neural network, which comprises the following steps:
s1, creating a width learning image training set;
acquiring facial pictures of students through shooting equipment;
Labels are made for the facial expression pictures: the positions of the eyes, mouth and eyebrows in each face picture are annotated manually with the VGG Image Annotator software. Each part is first selected with a manually drawn box, and a label is added for the action of that part, the added label being its action mode. Judging the action mode depends on the annotator: the action modes of the eyes and eyebrows can be judged visually, while for the action mode of the mouth, in particular a pursed mouth biased toward thinking versus a pursed mouth biased toward anger, the annotator must judge the overall expression in the picture and then reflect that judgment in the annotation. The overall facial expression mode is labeled according to Table 1;
TABLE 1 mapping relationship table of action mode and expression mode
(Table 1, not reproduced here, tabulates the expression mode corresponding to each combination of mouth, eye and eyebrow action modes; the mapping is the one described in step S1 above.)
S2, training a facial expression monitoring model based on width learning;
The classroom expression monitoring model is trained with the image training set obtained in step S1, that is, all the annotated pictures. The classroom expression monitoring model has a width learning network structure and is a two-layer network comprising an input layer and an output layer.
The image training set is input, and the width learning system extracts features from the images in the training set to generate feature nodes and the enhancement nodes of the feature nodes, which together serve as the input layer of the facial expression monitoring model.
Each group of feature nodes is generated by a mapping function applied to the image data X of the input image training set; the i-th group of feature nodes is obtained as
Zi = φ(X·Wei + βei), i = 1, ..., n,
where X is the image data of the input image training set, φ is the feature mapping function, Wei is a weight coefficient, βei is a bias term, and both Wei and βei are generated randomly. If n groups of feature nodes are generated, the notation Z^n ≡ [Z1, ..., Zn] denotes the feature nodes mapped from the image data of the whole input training set.
The enhancement nodes are obtained through the function
Hj = ξ(Z^n·Whj + βhj),
denoted Hj, and the first j groups of enhancement nodes are written H^j ≡ [H1, ..., Hj], where ξ is the activation function, Whj is a weight coefficient, βhj is a bias term, and both are generated randomly. The m-th group of enhancement nodes is expressed as:
Hm = ξ(Z^n·Whm + βhm).
The image recognition model is represented by the following formula:
Y = [Z^n | H^m]·W^m.
The weight parameter W^m of the whole facial expression monitoring model is obtained through the pseudo-inverse. Let Y be the output value of the facial expression monitoring model, that is:
Y = V3 × W^m;
then, by the pseudo-inverse:
W^m = (V3^T·V3 + c·I)^(-1)·V3^T·Y,
where c is a regularization parameter, I is the identity matrix, and V3 is the column-wise concatenation of the feature nodes and the enhancement nodes, used together as the input layer:
V3 = [Z^n | H^m].
During the training of the facial expression monitoring model, the value of Y is the given output value of the training set.
Solving for W^m completes the training of the facial expression monitoring model. (Y is given by the manual annotation; width learning has few parameters that need tuning, they need to be set only once, the training time is short, and subsequent online learning is incremental, so the training does not conflict with online learning.)
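To make the given training output Y concrete, the following small sketch shows one way to build one-hot targets for the seven expression modes of Table 1; the label list, function name and one-hot encoding are assumptions of this sketch rather than a requirement of the method:

import numpy as np

# Hypothetical label set following Table 1.
LABELS = ["normal", "thinking", "bored", "sad", "angry", "happy", "surprised"]

def one_hot_targets(label_names):
    """Turn a list of per-image expression labels into a (p, 7) target matrix Y."""
    index = {name: k for k, name in enumerate(LABELS)}
    Y = np.zeros((len(label_names), len(LABELS)))
    for row, name in enumerate(label_names):
        Y[row, index[name]] = 1.0
    return Y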
s3, shooting the facial expressions of the students in the classroom through shooting equipment to obtain real-time videos of the facial expressions of the students;
s4, framing the real-time video: one image is taken every five seconds, converting the video into t still images containing the students' facial features;
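A minimal OpenCV sketch of this framing step is shown below; the five-second sampling interval follows the step above, while the function name and the file-based video source are assumptions made for illustration:

import cv2

def frames_every_five_seconds(video_path):
    """Yield one still frame from the video every five seconds."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back to 25 fps if metadata is missing
    step = int(round(fps * 5))                # number of frames per five-second interval
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield frame
        index += 1
    cap.release()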
s5 facial expression monitoring mechanism based on width learning system
The still images obtained in step S4 are all input into the facial expression monitoring model, and output values are obtained to determine the expression patterns to which the expressions of the students in the images belong.
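For illustration, a minimal inference sketch is given below. It reuses the hypothetical helpers sketched earlier (the LABELS list and the width-learning input-layer construction) and assumes that the per-window sparse mapping matrices Wβ, the enhancement coefficient matrix Wh, the scaling s and the output weights Wm were saved at training time; it also assumes each still image has been cropped to the face, flattened to q pixels and standardized like the training data. It is a sketch under those assumptions, not the patented implementation:

import numpy as np

def predict_expression(image_vec, Wb_list, Wh, Wm, s=0.8):
    """Classify one flattened, standardized face image with the trained width-learning model."""
    x1 = np.append(image_vec, 1.0)[None, :]    # append the bias column
    y = np.hstack([x1 @ Wb for Wb in Wb_list])  # feature nodes, one block per window
    # (per-window normalization statistics from training are omitted here for brevity)
    h2 = np.hstack([y, np.ones((1, 1))])        # bias column before the enhancement nodes
    v2 = np.tanh(s * (h2 @ Wh))                 # enhancement nodes (tanh stands in for tansig)
    v3 = np.hstack([y, v2])                     # input layer for this single image
    scores = v3 @ Wm                            # model output values
    return LABELS[int(np.argmax(scores))]       # expression mode with the highest score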
In step S2, the specific steps of extracting features from the images of the training set to generate the feature nodes and the enhancement nodes of the feature nodes, which together form the input layer of the facial expression monitoring model, are as follows:
S21, establishing the feature-node mapping of the input data:
Let T be the p×q training data matrix of the image training set, where p is the number of samples, q is the total number of pixels of a sample image, and each element is a pixel value; T is Z-score standardized. In order to add the bias term directly through a matrix operation when generating the feature nodes, T is augmented: one column is appended so that the bias enters through the matrix product, and T becomes T1 of size p×(q+1).
S22, generating the feature nodes of each window:
A random weight matrix We of size (q+1)×N1 is generated, where N1 is the number of feature nodes per window and the values of We are uniformly distributed in (0, 1); the feature nodes H1 = T1 × We are obtained and then normalized.
A sparse representation of H1 is computed: the lasso method is used to find a sparse matrix Wβ such that T1 × Wβ = H1, and the feature nodes of the current window are V1 = normal(T1 × Wβ), where normal denotes normalization.
Let N2 be the number of iterations; the above feature-node generation step is iterated N2 times, and the resulting feature node matrix y is a p × (N2 × N1) matrix.
S23, generating the enhancement nodes:
A bias term is added to the feature node matrix y, which is then standardized to obtain H2.
Let N3 be the number of enhancement nodes; the coefficient matrix Wh of the enhancement nodes is a random matrix of size (N1 × N2 + 1) × N3 that has been orthonormalized.
The enhancement nodes are activated as:
V2 = tansig(H2 × Wh × s),
where s is the scaling scale of the enhancement nodes and tansig is an activation function commonly used in BP neural networks, which activates the features expressed by the enhancement nodes to the greatest extent; the enhancement nodes are not sparsely represented and are not iterated over windows.
S24, the input layer V3 = [y V2] is obtained; the feature dimension of each sample is N1 × N2 + N3.
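Tying steps S21-S24 to the weight solution of step S2, the short driver below is a purely illustrative sketch that reuses the hypothetical helpers from the earlier sketches (build_input_layer, one_hot_targets and solve_output_weights); it is written under those assumptions and is not the claimed implementation:

import numpy as np

def train_expression_monitor(images, label_names, c=2 ** -30):
    """Train the width-learning classroom expression monitor on flattened face images.

    images      : (p, q) array, one flattened face picture per row
    label_names : list of p expression labels following Table 1
    """
    # Z-score standardization of the training data (step S21).
    T = (images - images.mean(0)) / (images.std(0) + 1e-8)
    V3 = build_input_layer(T)               # steps S21-S24: feature and enhancement nodes
    Y = one_hot_targets(label_names)        # the given training outputs
    Wm = solve_output_weights(V3, Y, c=c)   # pseudo-inverse weight solution of step S2
    return Wm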

Claims (4)

1. A lightweight network classroom expression monitoring method, characterized by comprising the following steps:
s1, collecting face pictures of students, labeling the positions of the eyes, mouth and eyebrows in each face picture, labeling the action mode of each part, and combining the different actions of the eyes, mouth and eyebrows to correspond to different expressions to obtain an image training set;
s2, training a classroom expression monitoring model by using the image training set obtained in the step S1; the classroom expression monitoring model is of a width learning network structure and is a two-layer network comprising an input layer and an output layer;
s3, shooting the facial expressions of students in a classroom to obtain real-time videos of the facial expressions of the students;
s4, performing framing processing on the real-time video, and converting the real-time video into a static image with facial features of the student;
and S5, inputting the static image obtained in the step S4 into a classroom expression monitoring model, obtaining an output value, and determining an expression mode to which the expression of the student belongs in the image.
2. The lightweight network classroom expression monitoring method of claim 1, wherein: in step S1, when the mouth is still, still eyes with still eyebrows correspond to a normal expression, still eyes with frowning eyebrows correspond to a thinking expression, closed eyes with still eyebrows correspond to a bored expression, closed eyes with frowning eyebrows correspond to a thinking expression, glaring eyes with still eyebrows correspond to an angry expression, glaring eyes with frowning eyebrows correspond to an angry expression, and glaring eyes with raised eyebrows correspond to a surprised expression; when the mouth is pursed in a way biased toward thinking or sadness, still eyes with still eyebrows correspond to a thinking expression, still eyes with frowning eyebrows correspond to a thinking expression, closed eyes with still eyebrows correspond to a sad expression, closed eyes with frowning eyebrows correspond to a sad expression, glaring eyes with frowning eyebrows correspond to an angry expression, and glaring eyes with raised eyebrows correspond to an angry expression; when the mouth is pursed in a way biased toward anger, all actions of the eyes and eyebrows correspond to an angry expression; when the mouth grins, still eyes with still eyebrows correspond to a sad expression, still eyes with frowning or raised eyebrows correspond to a happy expression, closed eyes with still eyebrows correspond to a sad expression, closed eyes with frowning or raised eyebrows correspond to a happy expression, glaring eyes with still eyebrows correspond to a bored expression, and glaring eyes with raised eyebrows correspond to a surprised expression.
3. The lightweight network classroom expression monitoring method of claim 1, wherein: in step S2, the image training set is input, and the width learning system extracts features from the images in the training set to generate feature nodes and the enhancement nodes of the feature nodes, which together serve as the input layer of the facial expression monitoring model;
the feature nodes are obtained by mapping the image data X of the image training set; if n groups of feature nodes are generated, the i-th group is Zi = φ(X·Wei + βei), i = 1, ..., n, where φ is the feature mapping function, and Wei and βei are randomly generated weight coefficients and bias terms respectively; the notation Z^n ≡ [Z1, ..., Zn] denotes the feature nodes mapped from the image data of the whole training set;
the enhancement nodes are obtained through the function Hj = ξ(Z^n·Whj + βhj), denoted Hj, and the first j groups of enhancement nodes are written H^j ≡ [H1, ..., Hj], where ξ is the activation function, and Whj and βhj are randomly generated weight coefficients and bias terms respectively; the m-th group of enhancement nodes is Hm = ξ(Z^n·Whm + βhm);
the output value of the classroom expression monitoring model is then Y = [Z^n | H^m]·W^m, and the weight parameter W^m of the whole classroom expression monitoring model is obtained through the pseudo-inverse: W^m = (V3^T·V3 + c·I)^(-1)·V3^T·Y, where c is a regularization parameter, I is the identity matrix, and V3 = [Z^n | H^m] is the column-wise concatenation of the feature nodes and the enhancement nodes.
4. The lightweight network classroom expression monitoring method of claim 3, wherein in step S2 the specific steps of extracting features from the images of the training set to generate the feature nodes and the enhancement nodes of the feature nodes, which together form the input layer of the facial expression monitoring model, are as follows:
let T be the p×q training data matrix of the image training set, where each element is a pixel value, p is the number of samples and q is the total number of pixels of a sample image; T is Z-score standardized, and is then augmented by appending one column, so that the bias term enters directly through the matrix product, becoming T1 of size p×(q+1);
a random weight matrix We of size (q+1)×N1 is generated, where N1 is the number of feature nodes per window and the values of We are uniformly distributed in (0, 1); the feature nodes H1 = T1 × We are obtained and then normalized;
a sparse representation of H1 is computed by finding a sparse matrix Wβ such that T1 × Wβ = H1, and the feature nodes of the current window are V1 = normal(T1 × Wβ), where normal denotes normalization;
the above feature-node generation step is iterated N2 times, and the resulting feature node matrix y is a p × (N2 × N1) matrix;
a bias term is added to the feature node matrix y, which is then standardized to obtain H2;
letting N3 be the number of enhancement nodes, the coefficient matrix Wh of the enhancement nodes is a random matrix of size (N1 × N2 + 1) × N3 that has been orthonormalized;
the enhancement nodes are activated as V2 = tansig(H2 × Wh × s), where s is the scaling scale of the enhancement nodes and tansig is an activation function commonly used in BP neural networks;
the input layer V3 = [y V2] is obtained, and the feature dimension of each sample is N1 × N2 + N3.
CN202010288809.4A 2020-04-14 2020-04-14 Lightweight network classroom expression monitoring method Pending CN111507241A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010288809.4A CN111507241A (en) 2020-04-14 2020-04-14 Lightweight network classroom expression monitoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010288809.4A CN111507241A (en) 2020-04-14 2020-04-14 Lightweight network classroom expression monitoring method

Publications (1)

Publication Number Publication Date
CN111507241A true CN111507241A (en) 2020-08-07

Family

ID=71874244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010288809.4A Pending CN111507241A (en) 2020-04-14 2020-04-14 Lightweight network classroom expression monitoring method

Country Status (1)

Country Link
CN (1) CN111507241A (en)


Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127139A (en) * 2016-06-21 2016-11-16 东北大学 A kind of dynamic identifying method of MOOC course middle school student's facial expression
CN107292271A (en) * 2017-06-23 2017-10-24 北京易真学思教育科技有限公司 Learning-memory behavior method, device and electronic equipment
CN108615010A (en) * 2018-04-24 2018-10-02 重庆邮电大学 Facial expression recognizing method based on the fusion of parallel convolutional neural networks characteristic pattern
CN110688874A (en) * 2018-07-04 2020-01-14 杭州海康威视数字技术股份有限公司 Facial expression recognition method and device, readable storage medium and electronic equipment
CN109117731A (en) * 2018-07-13 2019-01-01 华中师范大学 A kind of classroom instruction cognitive load measuring system
CN108921877A (en) * 2018-07-25 2018-11-30 大连海事大学 A kind of long term object track algorithm based on width study
CN109344682A (en) * 2018-08-02 2019-02-15 平安科技(深圳)有限公司 Classroom monitoring method, device, computer equipment and storage medium
CN110163054A (en) * 2018-08-03 2019-08-23 腾讯科技(深圳)有限公司 A kind of face three-dimensional image generating method and device
CN109522838A (en) * 2018-11-09 2019-03-26 大连海事大学 A kind of safety cap image recognition algorithm based on width study
CN109919434A (en) * 2019-01-28 2019-06-21 华中科技大学 A kind of classroom performance intelligent Evaluation method based on deep learning
CN110119702A (en) * 2019-04-30 2019-08-13 西安理工大学 Facial expression recognizing method based on deep learning priori
CN110175596A (en) * 2019-06-04 2019-08-27 重庆邮电大学 The micro- Expression Recognition of collaborative virtual learning environment and exchange method based on double-current convolutional neural networks
CN110363124A (en) * 2019-07-03 2019-10-22 广州多益网络股份有限公司 Rapid expression recognition and application method based on face key points and geometric deformation
CN110472512A (en) * 2019-07-19 2019-11-19 河海大学 A kind of face state identification method and its device based on deep learning
CN110728193A (en) * 2019-09-16 2020-01-24 连尚(新昌)网络科技有限公司 Method and device for detecting richness characteristics of face image
CN110705430A (en) * 2019-09-26 2020-01-17 江苏科技大学 Multi-person facial expression recognition method and system based on deep learning
CN110889672A (en) * 2019-11-19 2020-03-17 哈尔滨理工大学 Student card punching and class taking state detection system based on deep learning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117290747A (en) * 2023-11-24 2023-12-26 中国民用航空飞行学院 Eye movement data-based flight state monitoring method, storage medium and electronic equipment
CN117290747B (en) * 2023-11-24 2024-03-12 中国民用航空飞行学院 Eye movement data-based flight state monitoring method, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN110377710B (en) Visual question-answer fusion enhancement method based on multi-mode fusion
CN107766447B (en) Method for solving video question-answer by using multilayer attention network mechanism
CN107818306B (en) Video question-answering method based on attention model
CN111275713B (en) Cross-domain semantic segmentation method based on countermeasure self-integration network
CN107239801A (en) Video attribute represents that learning method and video text describe automatic generation method
Wang Online Learning Behavior Analysis Based on Image Emotion Recognition.
CN111125640B (en) Knowledge point learning path recommendation method and device
CN110889672A (en) Student card punching and class taking state detection system based on deep learning
US20220207649A1 (en) Unsupervised image-to-image translation method based on style-content separation
CN107766320A (en) A kind of Chinese pronoun resolution method for establishing model and device
Shen et al. The influence of artificial intelligence on art design in the digital age
CN109948473A (en) A kind of method neural network based promoting student's applied problem solution topic ability
CN111507241A (en) Lightweight network classroom expression monitoring method
Li et al. The application of artificial intelligence technology in art teaching taking architectural painting as an example
Bogucka et al. Projecting emotions from artworks to maps using neural style transfer
CN113989608A (en) Student experiment classroom behavior identification method based on top vision
Ma et al. A deep learning approach for online learning emotion recognition
CN112132075B (en) Method and medium for processing image-text content
CN116564144A (en) Digital virtual person online teaching system and method based on meta universe
CN113792626A (en) Teaching process evaluation method based on teacher non-verbal behaviors
Nithiyasree Facial emotion recognition of students using deep convolutional neural network
Zhu et al. Emotion Recognition in Learning Scenes Supported by Smart Classroom and Its Application.
CN113704610B (en) Learning style portrait generation method and system based on learning growth data
CN114580415B (en) Cross-domain graph matching entity identification method for educational examination
CN116781836B (en) Holographic remote teaching method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200807