CN111126297B - Experience analysis method based on learner expression - Google Patents

Experience analysis method based on learner expression Download PDF

Info

Publication number
CN111126297B
CN111126297B CN201911360147.0A CN201911360147A CN111126297B CN 111126297 B CN111126297 B CN 111126297B CN 201911360147 A CN201911360147 A CN 201911360147A CN 111126297 B CN111126297 B CN 111126297B
Authority
CN
China
Prior art keywords
matrix
sample
learner
expression
experience
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911360147.0A
Other languages
Chinese (zh)
Other versions
CN111126297A (en
Inventor
王刚
谭嵩
孙方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Beike Haiteng Technology Co.,Ltd.
Original Assignee
Huainan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huainan Normal University filed Critical Huainan Normal University
Priority to CN201911360147.0A priority Critical patent/CN111126297B/en
Publication of CN111126297A publication Critical patent/CN111126297A/en
Application granted granted Critical
Publication of CN111126297B publication Critical patent/CN111126297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to an experience analysis method based on learner expression, which comprises the steps of data acquisition and initialization, randomly generating input weight vectors and input biases of a hidden layer mapping function, generating a hidden layer output matrix, initializing the output weight matrix, updating a label approximation matrix, updating the output weight matrix, training and stopping judgment, online prediction of experience scores and the like. The method has the advantages of high prediction precision, no need of a large number of learners to experience scoring, high operation speed and the like.

Description

Experience analysis method based on learner expression
Technical Field
The invention belongs to the field of data analysis, and particularly relates to an experience analysis method based on learner expression.
Background
Currently, more and more learners discard the traditional learner mode, and further select learners on the intelligent terminal. In order to actually know the experience of the learner on the current learner, a camera on the intelligent terminal can be used for capturing the facial image of the learner, so that the expression information of the learner is obtained. However, during a learner's process, the learner's expression is changeable and complex. When the learner smiles, it does not represent a good experience, and similarly, when the learner exhibits an aversive expression, it does not indicate that the experience is poor. After each learner has finished, the system may request that the learner evaluate the experience. Of course, not every learner may complete the study, nor may every learner be willing to give an evaluation. Therefore, an experience analysis method mainly based on the expression of the learner needs to be established, experience prediction is performed for each learning experience, and data support is further provided for system improvement.
Disclosure of Invention
The invention provides an experience analysis method based on learner expression, which comprises the following steps:
step 1, data acquisition and initialization:
collecting face videos of learners in each study, and dividingAnalyzing the expression of each frame, dividing the expression into 8 categories of aversion, vitality, fear, happiness, sadness, surprise, shy and expression-free to form a feature vectorx (1) ,...,x (8) The ratio of aversion, qi, fear, happiness, sadness, surprise, shy and no expression in the whole video is x (1) ,...,x (8) The sum is 1, and x is expanded by using auxiliary features according to actual conditions to obtainIs N i A sample of dimensions; let sample set +.>Tag +.A learner experience scoring after each learning as a sample>For->Marking to obtain corresponding class label->Where l is the number of labeled samples, n is the number of all samples, u=n-l is the number of unlabeled samples; />Representing real number set,/->Representing a positive real number set;
initializing: the following parameters were manually set: lambda (lambda) 1 ,λ 2 θ, σ > 0, hidden layer node number N h > 0, maximum number of iterations E, number of iterations t=0;
step 2, randomly generating a hidden layerInput weight vector of mapping functionOffset from the input; b ε R, as follows:
randomly generating N h A, obtainingRandomly generating N h B, get->
Step 3, generating a hidden layer output function:
wherein G (a, b, x) is an activation function, x represents a sample, and superscript T represents matrix transformation;
step 4, generating a hidden layer output matrix:
H=[h(x 1 ),...,h(x n )] T
step 5, initializing an output weight matrix:
wherein ,W0 An output weight matrix W, with t=0, pinv (H) represents the pseudo-inverse of H,a matrix consisting of the first l rows of H;
step 6, updating a label approximate matrix, wherein the label approximate matrix is as follows:
wherein ,Yt+1 Label approach for t+1 iterationsMatrix-like, I n For an n-dimensional unit array, j= [ I ] l ,O l×u ;O u×l ,O u×u ],I l In the form of an l-dimensional unit array,v is 1 ×v 2 Zero matrix of dimensions, v 1 ,v 2 Taking u or l->O u×1 A zero matrix in u x 1 dimensions; l is the matrix L=D-A, A is the similarity matrix, the ith row and jth column of element A ij The method comprises the following steps:
wherein ,xi And x j For samples, i, j ε {1, …, n }, σ > 0 is Gaussian kernel width, D is the degree matrix of A, D is the diagonal matrix, and the ith diagonal element D of D ii =∑ j A ij
Step 7: updating the output weight matrix as follows:
W t+1 =(H T H+θU t ) -1 H T Y t+1
wherein , wherein ,Wt+1 W, + at time t+1>Is W t+1 Line 1 to N of (2) h Row vector->Line 1 to N of W h A row vector;
step 8: the iteration number t is self-increased by 1, if t > E, then w=w is retained t+1 And jump to step 9, otherwiseJumping to step 6;
step 9: for a new sample x, its experience score is predicted using h (x) W.
Wherein, the activation function G (a, b, x) related to step 3 is:
or ,
or ,
wherein ,l>Nh
The method has the advantages of high prediction precision, stable performance, no need of experience scoring of a large number of learners, high operation speed and the like.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
Detailed Description
The invention is further described below in connection with examples, but the scope of the invention is not limited thereto.
As shown in fig. 1, the present invention is embodied as follows:
step 1, data acquisition and initialization:
collecting face videos of learners in each learning, analyzing expressions of each frame, and dividing the expressions into 8 categories of aversion, vitality, fear, happiness, sadness, surprise, shy and expression-free to form feature vectorsx (1) ,...,x (8) The proportion of aversion, vital energy, fear, happiness, sadness, surprise, shy and expression-free in the whole videoX is then (1) ,...,x (8) The sum is 1, and x is expanded by using auxiliary features according to actual conditions to obtainIs N i A sample of dimensions; let sample set +.>Tag +.A learner experience scoring after each learning as a sample>For->Marking to obtain corresponding class label->Where l is the number of labeled samples, n is the number of all samples, u=n-l is the number of unlabeled samples; />Representing real number set,/->Representing a positive real number set;
initializing: the following parameters were manually set: lambda (lambda) 1 ,λ 2 θ, σ > 0, hidden layer node number N h > 0, maximum number of iterations E, number of iterations t=0;
step 2, randomly generating an input weight vector of the hidden layer mapping functionOffset from the input; b ε R, as follows:
randomly generating N h A, obtainingRandomly generating N h B, get->
Step 3, generating a hidden layer output function:
wherein G (a, b, x) is an activation function, x represents a sample, and superscript T represents matrix transformation;
step 4, generating a hidden layer output matrix:
H=[h(x 1 ),...,h(x n )] T
step 5, initializing an output weight matrix:
wherein ,W0 An output weight matrix W, with t=0, pinv (H) represents the pseudo-inverse of H,H l a matrix consisting of the first l rows of H;
step 6, updating a label approximate matrix, wherein the label approximate matrix is as follows:
wherein ,Yt+1 Approximating the matrix for the label for t+1 iterations, I n For an n-dimensional unit array, j= [ I ] l ,O l×u ;O u×l ,O u×u ],I l In the form of an l-dimensional unit array,v is 1 ×v 2 Zero matrix of dimensions, v 1 ,v 2 Taking u or l->O u×1 A zero matrix in u x 1 dimensions; l is the matrix L=D-A, A is the similarity matrix, the ith row and jth column of element A ij The method comprises the following steps:
wherein ,xi And x j For samples, i, j e { 1., n, sigma > 0 is Gaussian kernel width, D is the degree matrix of A, D is the diagonal matrix, and the ith diagonal element D of D ii =∑ j A ij
Step 7: updating the output weight matrix as follows:
W t+1 =(H T H+θU t ) -1 H T Y t+1
wherein , wherein ,Wt+1 W, + at time t+1>Is W t+1 Line 1 to N of (2) h Row vector->Line 1 to N of W h A row vector;
step 8: the iteration number t is self-increased by 1, if t > E, then w=w is retained t+1 And jumping to the step 9, otherwise jumping to the step 6;
step 9: for a new sample x, its experience score is predicted using h (x) W.
Preferably, the activation function G (a, b, x) involved in step 3 is:
preferably, the activation function G (a, b, x) involved in step 3 is:
preferably, the activation function G (a, b, x) involved in step 3 is:
further preferably, l > N h
In step 1, when x is extended by using the auxiliary features according to actual situations, the features such as the type of the reading material, the target learner, the scenario development mode, whether the reading material is a three-dimensional image, whether the reading material has auxiliary means except vision, the main language of the text, the drawing style, the average number of words per page, and the like can be adopted.
The gaussian kernel width is generally taken to be σ=0.01, λ 1 ,λ 2 θ may be: lambda (lambda) 1 =0.3,λ 2 =0.7,θ=0.2。N h An integer between 100 and 1000 may be taken and E may be an integer between 3 and 20.
The above examples are provided for the purpose of describing the present invention only and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalents and modifications that do not depart from the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (1)

1. The experience analysis method based on the learner expression is characterized by comprising the following steps of:
step 1, data acquisition and initialization:
collecting face videos of learners in each learning, analyzing expressions of each frame, and dividing the expressions into 8 categories of aversion, vitality, fear, happiness, sadness, surprise, shy and expression-free to form feature vectorsThe ratio of aversion, vital energy, fear, happiness, sadness, surprise, shy and no expression in the whole video is the ∈>The sum is 1, and the auxiliary characteristic pair is used according to the actual situation>Expanding to obtainIs->A sample of dimensions; let sample set +.>The learner experience after each learning is scored as label of sample +.>For->Marking to obtain corresponding class label->, wherein ,/>For the number of labeled samples, +.>For all sample numbers, +.>Is the number of unlabeled exemplars; />Representing real number set,/->Representing a positive real number set;
initializing: the following parameters were manually set:hidden layer node number->Maximum iteration number E, iteration number t=0;
step 2, randomly generating an input weight vector of the hidden layer mapping functionOffset from the input; />The following are provided:
random generationPersonal->Obtain->The method comprises the steps of carrying out a first treatment on the surface of the Random generation->Personal->Obtain->
Step 3, generating a hidden layer output function:
wherein ,to activate the function +.>The sample is indicated, superscript->Representing matrix transformation;
activation functionThe method comprises the following steps:
step 4, generating a hidden layer output matrix:
step 5, initializing an output weight matrix:
wherein ,output weight matrix of t=0 +.>,/>Representation->Pseudo-inverse of>Is->Before->A matrix of rows;
step 6, updating a label approximate matrix, wherein the label approximate matrix is as follows:
wherein ,a label approximation matrix for t+1 iterations, < >>Is a unit array of n dimensions,,/>is->Dimension unit array,/->Is->Zero matrix of dimension, ">Can be taken outOr->,/>,/>Is->A zero matrix of dimensions; />Is a matrix of the graph Laplace->Is a similarity matrix, the->Line->Column element->The method comprises the following steps:
wherein ,and->For the sample->Is Gaussian kernel wide, < >>Is->Degree matrix of->Is a diagonal array ++>Is>Diagonal elements->
Step 7: updating the output weight matrix as follows:
wherein ,, wherein ,/>Is indicated at->Time->Is->Line 1 to->Row vector->Is->Line 1 to->A row vector;
step 8: the iteration number t is self-increased by 1 ifThen keep +.>And jumping to the step 9, otherwise jumping to the step 6;
step 9: for new samplesAdopts->Predicting its experience score.
CN201911360147.0A 2019-12-25 2019-12-25 Experience analysis method based on learner expression Active CN111126297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360147.0A CN111126297B (en) 2019-12-25 2019-12-25 Experience analysis method based on learner expression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911360147.0A CN111126297B (en) 2019-12-25 2019-12-25 Experience analysis method based on learner expression

Publications (2)

Publication Number Publication Date
CN111126297A CN111126297A (en) 2020-05-08
CN111126297B true CN111126297B (en) 2023-10-31

Family

ID=70502568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360147.0A Active CN111126297B (en) 2019-12-25 2019-12-25 Experience analysis method based on learner expression

Country Status (1)

Country Link
CN (1) CN111126297B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001223B (en) * 2020-07-01 2023-11-24 安徽新知数字科技有限公司 Rapid virtualization construction method for real environment map
CN115506783B (en) * 2021-06-21 2024-09-27 中国石油化工股份有限公司 Lithology recognition method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107085704A (en) * 2017-03-27 2017-08-22 杭州电子科技大学 Fast face expression recognition method based on ELM own coding algorithms
CN107392230A (en) * 2017-06-22 2017-11-24 江南大学 A kind of semi-supervision image classification method for possessing maximization knowledge utilization ability
CN109359521A (en) * 2018-09-05 2019-02-19 浙江工业大学 The two-way assessment system of Classroom instruction quality based on deep learning
CN109919102A (en) * 2019-03-11 2019-06-21 重庆科技学院 A kind of self-closing disease based on Expression Recognition embraces body and tests evaluation method and system
CN109919099A (en) * 2019-03-11 2019-06-21 重庆科技学院 A kind of user experience evaluation method and system based on Expression Recognition
CN109934156A (en) * 2019-03-11 2019-06-25 重庆科技学院 A kind of user experience evaluation method and system based on ELMAN neural network
CN110390307A (en) * 2019-07-25 2019-10-29 首都师范大学 Expression recognition method, Expression Recognition model training method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11205103B2 (en) * 2016-12-09 2021-12-21 The Research Foundation for the State University Semisupervised autoencoder for sentiment analysis

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107085704A (en) * 2017-03-27 2017-08-22 杭州电子科技大学 Fast face expression recognition method based on ELM own coding algorithms
CN107392230A (en) * 2017-06-22 2017-11-24 江南大学 A kind of semi-supervision image classification method for possessing maximization knowledge utilization ability
CN109359521A (en) * 2018-09-05 2019-02-19 浙江工业大学 The two-way assessment system of Classroom instruction quality based on deep learning
CN109919102A (en) * 2019-03-11 2019-06-21 重庆科技学院 A kind of self-closing disease based on Expression Recognition embraces body and tests evaluation method and system
CN109919099A (en) * 2019-03-11 2019-06-21 重庆科技学院 A kind of user experience evaluation method and system based on Expression Recognition
CN109934156A (en) * 2019-03-11 2019-06-25 重庆科技学院 A kind of user experience evaluation method and system based on ELMAN neural network
CN110390307A (en) * 2019-07-25 2019-10-29 首都师范大学 Expression recognition method, Expression Recognition model training method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Look-up Table Unit Activation Function for Deep Convolutional Neural Networks;Min Wang et al.;《2018 IEEE Winter Conference on Applications of Computer Vision》;第1225-1233页 *
基于联合稀疏和局部线性的极限学习机及应用;雒晓卓;《中国博士学位论文全文数据库 信息科技辑》(2017年第02期);第I140-45页 *

Also Published As

Publication number Publication date
CN111126297A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
Huang et al. Facial expression recognition with grid-wise attention and visual transformer
Huang et al. Like what you like: Knowledge distill via neuron selectivity transfer
Tang et al. Personalized age progression with bi-level aging dictionary learning
CN106250855B (en) Multi-core learning based multi-modal emotion recognition method
Ganin et al. Unsupervised domain adaptation by backpropagation
CN107808129B (en) Face multi-feature point positioning method based on single convolutional neural network
CN108537119B (en) Small sample video identification method
CN108765383B (en) Video description method based on deep migration learning
CN106056628A (en) Target tracking method and system based on deep convolution nerve network feature fusion
CN113361278B (en) Small sample named entity identification method based on data enhancement and active learning
CN110705490B (en) Visual emotion recognition method
CN111126297B (en) Experience analysis method based on learner expression
CN105205449A (en) Sign language recognition method based on deep learning
CN113392766A (en) Attention mechanism-based facial expression recognition method
CN105701225B (en) A kind of cross-media retrieval method based on unified association hypergraph specification
Zhao et al. Cbph-net: A small object detector for behavior recognition in classroom scenarios
Xu et al. Discriminative analysis for symmetric positive definite matrices on lie groups
CN111026898A (en) Weak supervision image emotion classification and positioning method based on cross space pooling strategy
Guo et al. Multi-level feature fusion pyramid network for object detection
Lian et al. Fast and accurate detection of surface defect based on improved YOLOv4
CN112329604A (en) Multi-modal emotion analysis method based on multi-dimensional low-rank decomposition
CN114625908A (en) Text expression package emotion analysis method and system based on multi-channel attention mechanism
Liang et al. RNTR-Net: A robust natural text recognition network
Tan et al. Wide Residual Network for Vision-based Static Hand Gesture Recognition.
Hu et al. Data-free dense depth distillation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240411

Address after: Building 24, 4th Floor, No. 68 Beiqing Road, Haidian District, Beijing, 100000, 0446

Patentee after: Beijing Beike Haiteng Technology Co.,Ltd.

Country or region after: China

Address before: 232001 cave West Road, Huainan, Anhui

Patentee before: HUAINAN NORMAL University

Country or region before: China

TR01 Transfer of patent right