CN111126297A - An Experience Analysis Method Based on Learner Expression - Google Patents

An Experience Analysis Method Based on Learner Expression

Info

Publication number
CN111126297A
CN111126297A (application CN201911360147.0A; granted as CN111126297B)
Authority
CN
China
Prior art keywords
matrix, learner, experience, analysis method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911360147.0A
Other languages
Chinese (zh)
Other versions
CN111126297B (en)
Inventor
王刚 (Wang Gang)
谭嵩 (Tan Song)
孙方 (Sun Fang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Beike Haiteng Technology Co., Ltd.
Original Assignee
Huainan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huainan Normal University
Priority to CN201911360147.0A
Publication of CN111126297A
Application granted
Publication of CN111126297B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • G06V40/172: Classification, e.g. identification
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to an experience analysis method based on learner expressions, comprising the steps of data collection and initialization; random generation of the input weight vectors and input biases of the hidden-layer mapping function; generation of the hidden-layer output function; generation of the hidden-layer output matrix; initialization of the output weight matrix; updating of the label approximation matrix; updating of the output weight matrix; a training-stop check; and online prediction of experience scores. The method offers high prediction accuracy, fast computation, and does not require experience scores from a large number of learners.

Figure 201911360147 (representative drawing)

Description

An Experience Analysis Method Based on Learner Expressions

Technical Field

The invention belongs to the field of data analysis, and in particular relates to an experience analysis method based on learner expressions.

Background

More and more learners are abandoning traditional modes of study in favor of learning on smart terminals. To genuinely understand a learner's experience of the current session, the camera on the smart terminal can capture images of the learner's face, from which expression information is extracted. However, a learner's expressions during a session are changeable and complex: a laughing learner is not necessarily having a good experience, and likewise an expression of disgust does not necessarily indicate a bad one. After each session the system can ask the learner to rate the experience, but not every learner completes the session, and not every learner is willing to give a rating. An experience analysis method driven primarily by learner expressions is therefore needed, one that predicts an experience score for every learning session and thereby provides data to support improvement of the system.

Summary of the Invention

The present invention proposes an experience analysis method based on learner expressions, which proceeds as follows:

Step 1. Data collection and initialization:

Capture a video of the learner's face during each learning session and classify the expression in every frame into eight categories: disgust, anger, fear, happiness, sadness, surprise, shyness, and neutral. Form the feature vector $x = [x^{(1)}, \dots, x^{(8)}]^{\mathrm{T}}$, where $x^{(1)}, \dots, x^{(8)}$ are the proportions of the whole video occupied by the eight expressions respectively, so that $\sum_{k=1}^{8} x^{(k)} = 1$. According to the actual situation, augment $x$ with auxiliary features to obtain an $N_i$-dimensional sample $x \in \mathbb{R}^{N_i}$. Let the sample set be $\{x_i\}_{i=1}^{n} \subset \mathbb{R}^{N_i}$. The experience score given by the learner after each session serves as the sample's label $y \in \mathbb{R}_{+}$; labeling the first $l$ samples yields the corresponding labels $y_1, \dots, y_l$. Here $l$ is the number of labeled samples, $n$ is the total number of samples, and $u = n - l$ is the number of unlabeled samples; $\mathbb{R}$ denotes the set of real numbers and $\mathbb{R}_{+}$ the set of positive real numbers.

Initialization: manually set the parameters $\lambda_1, \lambda_2, \theta, \sigma > 0$, the number of hidden-layer nodes $N_h > 0$, and the maximum number of iterations $E$; set the iteration counter $t = 0$.
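For concreteness, a minimal Python/NumPy sketch of Step 1 follows. The per-frame expression classifier itself is outside the scope of this method; `frame_labels` (integer class codes 0 to 7) and the helper name `build_feature_vector` are illustrative assumptions, and the parameter values echo the preferred choices given in the detailed description below.

```python
import numpy as np

# Expression classes in the order used by the method:
# 0 disgust, 1 anger, 2 fear, 3 happiness, 4 sadness, 5 surprise, 6 shyness, 7 neutral
NUM_EXPRESSIONS = 8

def build_feature_vector(frame_labels, aux_features=()):
    """Step 1: x = [x^(1), ..., x^(8)]^T, the per-class frame proportions
    (summing to 1), optionally extended with auxiliary features."""
    counts = np.bincount(np.asarray(frame_labels), minlength=NUM_EXPRESSIONS)
    proportions = counts / counts.sum()
    return np.concatenate([proportions, np.asarray(aux_features, dtype=float)])

# Manually set parameters (initialization); the numeric values follow the
# preferred choices given later in the detailed description.
PARAMS = dict(lam1=0.3, lam2=0.7, theta=0.2, sigma=0.01, N_h=200, E=10)
```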

Step 2. Randomly generate the input weight vectors $a \in \mathbb{R}^{N_i}$ and input biases $b \in \mathbb{R}$ of the hidden-layer mapping function, as follows: randomly generate $N_h$ weight vectors, giving $\{a_k\}_{k=1}^{N_h}$, and randomly generate $N_h$ biases, giving $\{b_k\}_{k=1}^{N_h}$.

Step 3. Generate the hidden-layer output function:

$h(x) = [G(a_1, b_1, x), \dots, G(a_{N_h}, b_{N_h}, x)]^{\mathrm{T}}$

where $G(a, b, x)$ is the activation function, $x$ denotes a sample, and the superscript $\mathrm{T}$ denotes matrix transpose.

Step 4. Generate the hidden-layer output matrix:

$H = [h(x_1), \dots, h(x_n)]^{\mathrm{T}}$
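Steps 2 through 4 translate directly into NumPy; the sketch below assumes a sigmoid activation purely as a stand-in, since the patented activation forms are given only as formula images:

```python
import numpy as np

def init_hidden_layer(N_i, N_h, rng=None):
    """Step 2: randomly generate N_h input weight vectors a_k in R^{N_i}
    (rows of A) and N_h biases b_k in R."""
    rng = rng or np.random.default_rng(0)
    A = rng.standard_normal((N_h, N_i))
    b = rng.standard_normal(N_h)
    return A, b

def h(x, A, b):
    """Step 3: h(x) = [G(a_1,b_1,x), ..., G(a_Nh,b_Nh,x)]^T.
    The sigmoid G used here is an assumption, not the patented form."""
    return 1.0 / (1.0 + np.exp(-(A @ x + b)))

def hidden_matrix(X, A, b):
    """Step 4: H = [h(x_1), ..., h(x_n)]^T, an (n, N_h) matrix."""
    return np.stack([h(x, A, b) for x in X])
```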

Step 5. Initialize the output weight matrix:

$W_0 = \mathrm{pinv}(H_l)\, Y_l$

where $W_0$ is the output weight matrix $W$ at $t = 0$, $\mathrm{pinv}(\cdot)$ denotes the Moore-Penrose pseudo-inverse, $H_l$ is the matrix formed by the first $l$ rows of $H$, and $Y_l = [y_1, \dots, y_l]^{\mathrm{T}}$ is the vector of labeled experience scores.
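A sketch of Step 5, under the assumption (inferred from the surrounding text, the original formula being an image) that the pseudo-inverse of $H_l$ is applied to the vector of labeled scores:

```python
import numpy as np

def init_output_weights(H, y_labeled):
    """Step 5: W_0 = pinv(H_l) @ Y_l, where H_l is the first l rows of H
    and Y_l holds the l labeled experience scores."""
    l = len(y_labeled)
    return np.linalg.pinv(H[:l]) @ np.asarray(y_labeled, dtype=float)
```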

Step 6. Update the label approximation matrix:

$Y_{t+1} = (J + \lambda_1 L + \lambda_2 I_n)^{-1}\,(J\tilde{Y} + \lambda_2 H W_t)$

where $Y_{t+1}$ is the label approximation matrix at iteration $t+1$; $I_n$ is the $n$-dimensional identity matrix; $J = [I_l, O_{l \times u}; O_{u \times l}, O_{u \times u}]$, with $I_l$ the $l$-dimensional identity matrix and $O_{v_1 \times v_2}$ the $v_1 \times v_2$ zero matrix for $v_1, v_2 \in \{l, u\}$; and $\tilde{Y} = [y_1, \dots, y_l, 0, \dots, 0]^{\mathrm{T}} \in \mathbb{R}^{n}$, i.e. the labeled scores padded with the $u \times 1$ zero vector $O_{u \times 1}$. $L$ is the graph Laplacian $L = D - A$, where $A$ is the similarity matrix whose element in row $i$, column $j$ is

$A_{ij} = \exp\!\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right)$

where $x_i$ and $x_j$ are samples, $i, j \in \{1, \dots, n\}$, $\sigma > 0$ is the Gaussian kernel width, and $D$ is the degree matrix of $A$: a diagonal matrix whose $i$-th diagonal element is $d_{ii} = \sum_j A_{ij}$.
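The graph quantities of Step 6 follow mechanically; the closed-form label update in `update_labels` implements the reconstructed equation above and should be read as an inference, not a verbatim transcription of the lost formula image:

```python
import numpy as np

def graph_laplacian(X, sigma):
    """Step 6 preliminaries: Gaussian similarity A, degree matrix D,
    and graph Laplacian L = D - A."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.exp(-sq_dists / (2.0 * sigma ** 2))
    D = np.diag(A.sum(axis=1))
    return D - A

def update_labels(H, W, y_labeled, L, lam1, lam2):
    """Step 6 update (reconstructed form): solve
    (J + lam1*L + lam2*I_n) Y_{t+1} = J @ Y_tilde + lam2 * H @ W."""
    n, l = H.shape[0], len(y_labeled)
    J = np.zeros((n, n))
    J[:l, :l] = np.eye(l)                         # selects the labeled block
    y_tilde = np.concatenate([np.asarray(y_labeled, float), np.zeros(n - l)])
    lhs = J + lam1 * L + lam2 * np.eye(n)
    return np.linalg.solve(lhs, J @ y_tilde + lam2 * (H @ W))
```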

Step 7. Update the output weight matrix:

$W_{t+1} = (H^{\mathrm{T}} H + \theta U_t)^{-1} H^{\mathrm{T}} Y_{t+1}$

where $W_{t+1}$ denotes $W$ at iteration $t+1$, $w_t^1, \dots, w_t^{N_h}$ denote the row vectors 1 through $N_h$ of $W_t$, and $U_t$ is the diagonal matrix $U_t = \mathrm{diag}\big(1/(2\|w_t^1\|), \dots, 1/(2\|w_t^{N_h}\|)\big)$.

Step 8. Increment the iteration counter $t$ by 1. If $t > E$, retain $W = W_{t+1}$ and go to Step 9; otherwise return to Step 6.

Step 9. For a new sample $x$, predict its experience score as $h(x)W$.
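Tying Steps 2 through 9 together; `fit` and `predict` are illustrative names, and the loop relies on the helper sketches above (including the `PARAMS` defaults from the Step 1 sketch):

```python
import numpy as np

def fit(X, y_labeled, p=PARAMS):
    """Steps 2-8 end to end; assumes the first l rows of X are the
    labeled samples."""
    A, b = init_hidden_layer(X.shape[1], p["N_h"])
    H = hidden_matrix(X, A, b)
    L = graph_laplacian(X, p["sigma"])
    W = init_output_weights(H, y_labeled)         # Step 5
    for _ in range(p["E"] + 1):                   # Steps 6-8
        Y = update_labels(H, W, y_labeled, L, p["lam1"], p["lam2"])
        W = update_weights(H, Y, W, p["theta"])
    return A, b, W

def predict(x, A, b, W):
    """Step 9: experience score h(x) W for a new sample x."""
    return float(h(x, A, b) @ np.ravel(W))
```

A usage example: `A, b, W = fit(X, y_labeled)` followed by `score = predict(x_new, A, b, W)`.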

The activation function $G(a, b, x)$ used in Step 3 may take any one of three alternative closed forms (each given as a formula image in the original document and not reproduced here); further, $l > N_h$.
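Since the three patented activation forms survive only as images, the following are offered as plausible stand-ins drawn from common ELM practice, not as the patent's formulas:

```python
import numpy as np

# Three activations commonly used in ELM-style networks, offered only as
# assumed stand-ins for the three patented (image-only) forms:

def G_sigmoid(a, b, x):
    return 1.0 / (1.0 + np.exp(-(a @ x + b)))     # additive node

def G_sine(a, b, x):
    return np.sin(a @ x + b)                      # Fourier-type node

def G_gaussian(a, b, x):
    return np.exp(-b * np.sum((x - a) ** 2))      # RBF node, requires b > 0
```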

The invention has the advantages of high prediction accuracy, stable performance, fast computation, and no need for experience scores from a large number of learners.

Brief Description of the Drawings

Figure 1 is the flowchart of the method of the present invention.

Detailed Description

The present invention is further described below with reference to an example, but the scope of protection of the present invention is not limited thereto.

As shown in Figure 1, the present invention is implemented as follows:

步骤1、数据采集与初始化:Step 1. Data acquisition and initialization:

采集每次学习中学习者的脸部视频,并分析每一帧的表情,将表情分为厌恶、生气、恐惧、高兴、悲伤、惊讶、害羞与无表情共8类,组成特征向量

Figure BDA0002336971930000032
x(1),...,x(8)分别为厌恶、生气、恐惧、高兴、悲伤、惊讶、害羞与无表情在整个视频中所占比例,则x(1),...,x(8)之和为1,按照实际情况使用辅助特征对x进行扩充,得到
Figure BDA0002336971930000033
为Ni维的样本;令样本集合
Figure BDA0002336971930000034
每次学习后的学习者体验打分作为样本的标签
Figure BDA0002336971930000035
Figure BDA0002336971930000036
进行标记,得到对应的类别标签
Figure BDA0002336971930000037
其中,l为有标签样本数量、n为所有样本数量,u=n-l为无标签样本数量;
Figure BDA0002336971930000038
表示实数集,
Figure BDA0002336971930000039
表示正实数集;Collect the face video of the learner in each learning, and analyze the expressions of each frame, divide the expressions into 8 categories of disgust, anger, fear, happiness, sadness, surprise, shyness and no expression, and form feature vectors.
Figure BDA0002336971930000032
x (1) ,...,x (8) are the proportions of disgust, anger, fear, happiness, sadness, surprise, shyness and expressionless in the whole video respectively, then x (1) ,...,x (8) The sum is 1, and the auxiliary feature is used to expand x according to the actual situation to obtain
Figure BDA0002336971930000033
is a sample of N i dimensions; let the sample set
Figure BDA0002336971930000034
The learner experience score after each study as a label for the sample
Figure BDA0002336971930000035
right
Figure BDA0002336971930000036
Tag, get the corresponding category label
Figure BDA0002336971930000037
Among them, l is the number of labeled samples, n is the number of all samples, and u=nl is the number of unlabeled samples;
Figure BDA0002336971930000038
represents the set of real numbers,
Figure BDA0002336971930000039
represents the set of positive real numbers;

初始化:人工设定以下参数:λ1,λ2,θ,σ>0,隐藏层节点数Nh>0,最大迭代次数E,迭代次数t=0;Initialization: manually set the following parameters: λ 1 , λ 2 , θ, σ > 0, the number of hidden layer nodes N h > 0, the maximum number of iterations E, the number of iterations t=0;

步骤2、随机生成隐藏层映射函数的输入权重向量

Figure BDA00023369719300000310
与输入偏置;b∈R,如下:Step 2. Randomly generate the input weight vector of the hidden layer mapping function
Figure BDA00023369719300000310
Biased from the input; b ∈ R, as follows:

随机生成Nh个a,得到

Figure BDA00023369719300000311
随机生成Nh个b,得到
Figure BDA00023369719300000312
Randomly generate N h a, get
Figure BDA00023369719300000311
Randomly generate N h b, get
Figure BDA00023369719300000312

步骤3、生成隐藏层输出函数:Step 3. Generate the hidden layer output function:

Figure BDA00023369719300000313
Figure BDA00023369719300000313

其中,G(a,b,x)为激活函数,x表示样本,上标T表示矩阵转制;Among them, G(a, b, x) is the activation function, x represents the sample, and the superscript T represents the matrix transformation;

步骤4、生成隐藏层输出矩阵:Step 4. Generate the hidden layer output matrix:

H=[h(x1),...,h(xn)]T H=[h(x 1 ),...,h(x n )] T

步骤5、初始化输出权重矩阵:Step 5. Initialize the output weight matrix:

Figure BDA00023369719300000314
Figure BDA00023369719300000314

其中,W0为t=0的输出权重矩阵W,pinv(H)表示H的伪逆矩阵,

Figure BDA0002336971930000041
Hl为H的前l行组成的矩阵;Among them, W 0 is the output weight matrix W of t=0, pinv(H) represents the pseudo-inverse matrix of H,
Figure BDA0002336971930000041
H l is a matrix composed of the first l rows of H;

步骤6、更新标签近似矩阵,如下:Step 6. Update the label approximation matrix as follows:

Figure BDA0002336971930000042
Figure BDA0002336971930000042

其中,Yt+1为t+1次迭代的标签近似矩阵,In为n维的单位阵,J=[Il,Ol×u;Ou×l,Ou×u],Il为l维单位阵,

Figure BDA0002336971930000043
为v1×v2维的零矩阵,v1,v2可取u或者l,
Figure BDA0002336971930000044
Ou×1为u×1维的零矩阵;L为图拉普拉斯矩阵L=D-A,A为相似性矩阵,其第i行第j列元素Aij为:Among them, Y t+1 is the label approximation matrix of t+1 iterations, I n is an n-dimensional identity matrix, J=[I l , O l×u ; O u×l , O u×u ], I l is an l-dimensional unit matrix,
Figure BDA0002336971930000043
is a v 1 ×v 2 -dimensional zero matrix, v 1 , v 2 can be u or l,
Figure BDA0002336971930000044
O u×1 is a zero matrix of u×1 dimension; L is a graph Laplacian matrix L=DA, A is a similarity matrix, and the element A ij of the i-th row and the j-th column is:

Figure BDA0002336971930000045
Figure BDA0002336971930000045

其中,xi与xj为样本,i,j∈{1,...,n},σ>0为高斯核宽,D为A的度矩阵,D为对角阵,D的第i个对角元素dii=∑jAijAmong them, x i and x j are samples, i, j∈{1,...,n}, σ>0 is the Gaussian kernel width, D is the degree matrix of A, D is the diagonal matrix, and the ith of D Diagonal element d ii =∑ j A ij ;

步骤7:更新输出权重矩阵,如下:Step 7: Update the output weight matrix as follows:

Wt+1=(HTH+θUt)-1HTYt+1 W t+1 =(H T H+θU t ) -1 H T Y t+1

其中,

Figure BDA0002336971930000046
其中,Wt+1表示在t+1时刻的W,
Figure BDA0002336971930000047
为Wt+1的第1行至第Nh行向量,
Figure BDA0002336971930000048
为W的第1行至第Nh行向量;in,
Figure BDA0002336971930000046
Among them, W t+1 represents W at time t+1,
Figure BDA0002336971930000047
is the vector from the 1st row to the Nth row of W t+1 ,
Figure BDA0002336971930000048
is the vector from the 1st row to the Nth row of W;

步骤8:迭代次数t自增1,如果t>E,则保留w=Wt+1,并跳至步骤9,否则跳至步骤6;Step 8: The number of iterations t is incremented by 1. If t>E, keep w=W t+1 and skip to step 9, otherwise skip to step 6;

步骤9:对于新的样本x,采用h(x)W预测其体验分数。Step 9: For a new sample x, use h(x)W to predict its experience score.

Preferably, the activation function $G(a, b, x)$ in Step 3 takes one of the three alternative closed forms referred to above (given as formula images in the original document). Further preferably, $l > N_h$.

In Step 1, when $x$ is augmented with auxiliary features according to the actual situation, features such as the category of the reading material, the target learner group, the manner in which the plot unfolds, whether the material is a three-dimensional image, whether aids other than visual ones are present, the main language of the text, the drawing style, and the average number of words per page may be used.
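One plausible encoding of the auxiliary features named above; the vocabularies and encoding scheme are invented for illustration, as the patent specifies only the feature types:

```python
import numpy as np

# Hypothetical vocabularies -- the patent names the feature types but not
# their encodings or value sets.
CATEGORIES = ("story", "science", "language")
LANGUAGES = ("zh", "en", "other")

def one_hot(value, vocab):
    return [1.0 if value == v else 0.0 for v in vocab]

def encode_aux(category, language, is_3d, has_nonvisual_aid, words_per_page):
    """Encode the auxiliary features so they can be concatenated onto the
    8-dimensional expression-proportion vector x of Step 1."""
    return np.array(one_hot(category, CATEGORIES)
                    + one_hot(language, LANGUAGES)
                    + [float(is_3d), float(has_nonvisual_aid), float(words_per_page)])
```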

The Gaussian kernel width may generally be taken as $\sigma = 0.01$; $\lambda_1$, $\lambda_2$, and $\theta$ may be taken as $\lambda_1 = 0.3$, $\lambda_2 = 0.7$, $\theta = 0.2$. $N_h$ may be an integer between 100 and 1000, and $E$ an integer between 3 and 20.

The above embodiment is provided only for the purpose of describing the present invention and is not intended to limit its scope. The scope of the invention is defined by the appended claims; all equivalent replacements and modifications made without departing from the spirit and principles of the present invention shall fall within that scope.

Claims (5)

1. An experience analysis method based on learner expressions, characterized by comprising the following steps:

Step 1. Data collection and initialization: capture a video of the learner's face during each learning session and classify the expression in every frame into eight categories (disgust, anger, fear, happiness, sadness, surprise, shyness, and neutral), forming the feature vector $x = [x^{(1)}, \dots, x^{(8)}]^{\mathrm{T}}$, where $x^{(1)}, \dots, x^{(8)}$ are the proportions of the whole video occupied by the eight expressions respectively, so that $\sum_{k=1}^{8} x^{(k)} = 1$; according to the actual situation, augment $x$ with auxiliary features to obtain an $N_i$-dimensional sample $x \in \mathbb{R}^{N_i}$; let the sample set be $\{x_i\}_{i=1}^{n}$; the experience score given by the learner after each session serves as the sample label $y \in \mathbb{R}_{+}$, and labeling the first $l$ samples yields the corresponding labels $y_1, \dots, y_l$, where $l$ is the number of labeled samples, $n$ the total number of samples, and $u = n - l$ the number of unlabeled samples; $\mathbb{R}$ denotes the set of real numbers and $\mathbb{R}_{+}$ the set of positive real numbers;

Initialization: manually set the parameters $\lambda_1, \lambda_2, \theta, \sigma > 0$, the number of hidden-layer nodes $N_h > 0$, and the maximum number of iterations $E$; set the iteration counter $t = 0$;

Step 2. Randomly generate the input weight vectors $a \in \mathbb{R}^{N_i}$ and input biases $b \in \mathbb{R}$ of the hidden-layer mapping function: randomly generate $N_h$ weight vectors, giving $\{a_k\}_{k=1}^{N_h}$, and $N_h$ biases, giving $\{b_k\}_{k=1}^{N_h}$;

Step 3. Generate the hidden-layer output function $h(x) = [G(a_1, b_1, x), \dots, G(a_{N_h}, b_{N_h}, x)]^{\mathrm{T}}$, where $G(a, b, x)$ is the activation function, $x$ denotes a sample, and the superscript $\mathrm{T}$ denotes matrix transpose;

Step 4. Generate the hidden-layer output matrix $H = [h(x_1), \dots, h(x_n)]^{\mathrm{T}}$;

Step 5. Initialize the output weight matrix $W_0 = \mathrm{pinv}(H_l)\,Y_l$, where $W_0$ is the output weight matrix $W$ at $t = 0$, $\mathrm{pinv}(\cdot)$ denotes the pseudo-inverse, $H_l$ is the matrix formed by the first $l$ rows of $H$, and $Y_l = [y_1, \dots, y_l]^{\mathrm{T}}$;

Step 6. Update the label approximation matrix $Y_{t+1} = (J + \lambda_1 L + \lambda_2 I_n)^{-1}(J\tilde{Y} + \lambda_2 H W_t)$, where $Y_{t+1}$ is the label approximation matrix at iteration $t+1$, $I_n$ is the $n$-dimensional identity matrix, $J = [I_l, O_{l \times u}; O_{u \times l}, O_{u \times u}]$ with $I_l$ the $l$-dimensional identity matrix and $O_{v_1 \times v_2}$ the $v_1 \times v_2$ zero matrix for $v_1, v_2 \in \{l, u\}$, $\tilde{Y} = [y_1, \dots, y_l, 0, \dots, 0]^{\mathrm{T}}$ with $O_{u \times 1}$ the $u \times 1$ zero vector, and $L$ is the graph Laplacian $L = D - A$, $A$ being the similarity matrix with elements $A_{ij} = \exp(-\|x_i - x_j\|^2 / (2\sigma^2))$, where $x_i$ and $x_j$ are samples, $i, j \in \{1, \dots, n\}$, $\sigma > 0$ is the Gaussian kernel width, and $D$ is the degree matrix of $A$, a diagonal matrix with $d_{ii} = \sum_j A_{ij}$;

Step 7. Update the output weight matrix $W_{t+1} = (H^{\mathrm{T}} H + \theta U_t)^{-1} H^{\mathrm{T}} Y_{t+1}$, where $U_t = \mathrm{diag}\big(1/(2\|w_t^1\|), \dots, 1/(2\|w_t^{N_h}\|)\big)$, $W_{t+1}$ denotes $W$ at iteration $t+1$, and $w_t^1, \dots, w_t^{N_h}$ denote the row vectors 1 through $N_h$ of $W_t$;

Step 8. Increment the iteration counter $t$ by 1; if $t > E$, retain $W = W_{t+1}$ and go to Step 9, otherwise return to Step 6;

Step 9. For a new sample $x$, predict its experience score as $h(x)W$.

2. The experience analysis method based on learner expressions of claim 1, wherein the activation function $G(a, b, x)$ in Step 3 is the first of the three alternative closed forms (formula image in the original).

3. The experience analysis method based on learner expressions of claim 1, wherein the activation function $G(a, b, x)$ in Step 3 is the second of the three alternative closed forms (formula image in the original).

4. The experience analysis method based on learner expressions of claim 1, wherein the activation function $G(a, b, x)$ in Step 3 is the third of the three alternative closed forms (formula image in the original).

5. The experience analysis method based on learner expressions of any one of claims 1 to 4, characterized in that $l > N_h$.
CN201911360147.0A (filed 2019-12-25) | An experience analysis method based on learners' expressions | Active | granted as CN111126297B

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911360147.0A (granted as CN111126297B) | 2019-12-25 | 2019-12-25 | An experience analysis method based on learners' expressions

Publications (2)

Publication Number | Publication Date
CN111126297A | 2020-05-08
CN111126297B | 2023-10-31

Family

ID=70502568

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911360147.0A (Active) | An experience analysis method based on learners' expressions | 2019-12-25 | 2019-12-25

Country Status (1)

CN | CN111126297B (granted)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180165554A1 (en) * 2016-12-09 2018-06-14 The Research Foundation For The State University Of New York Semisupervised autoencoder for sentiment analysis
CN107085704A (en) * 2017-03-27 2017-08-22 杭州电子科技大学 Fast Facial Expression Recognition Method Based on ELM Autoencoding Algorithm
CN107392230A (en) * 2017-06-22 2017-11-24 江南大学 A kind of semi-supervision image classification method for possessing maximization knowledge utilization ability
CN109359521A (en) * 2018-09-05 2019-02-19 浙江工业大学 A two-way assessment system for classroom quality based on deep learning
CN109919099A (en) * 2019-03-11 2019-06-21 重庆科技学院 A method and system for user experience evaluation based on facial expression recognition
CN109919102A (en) * 2019-03-11 2019-06-21 重庆科技学院 A method and system for evaluating the experience of hugging machine for autism based on facial expression recognition
CN109934156A (en) * 2019-03-11 2019-06-25 重庆科技学院 A user experience evaluation method and system based on ELMAN neural network
CN110390307A (en) * 2019-07-25 2019-10-29 首都师范大学 Expression recognition method, expression recognition model training method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MIN WANG ET AL.: "Look-up Table Unit Activation Function for Deep Convolutional Neural Networks", 2018 IEEE Winter Conference on Applications of Computer Vision, pages 1225-1233 *
LUO XIAOZHUO (雒晓卓): "Extreme learning machine based on joint sparsity and local linearity and its applications" (基于联合稀疏和局部线性的极限学习机及应用), China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 2017, pages 140-45 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001223A (en) * 2020-07-01 2020-11-27 安徽新知数媒信息科技有限公司 Rapid virtualization construction method of real environment map
CN112001223B (en) * 2020-07-01 2023-11-24 安徽新知数字科技有限公司 Rapid virtualization construction method for real environment map
CN115506783A (en) * 2021-06-21 2022-12-23 中国石油化工股份有限公司 Lithology identification method

Also Published As

Publication number Publication date
CN111126297B (en) 2023-10-31


Legal Events

Date | Code | Title
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| GR01 | Patent grant
2024-04-11 | TR01 | Transfer of patent right

TR01 details:

Effective date of registration: 2024-04-11

Address after: Building 24, 4th Floor, No. 68 Beiqing Road, Haidian District, Beijing, 100000, 0446

Patentee after: Beijing Beike Haiteng Technology Co., Ltd. (China)

Address before: 232001 Dongshan West Road, Huainan, Anhui

Patentee before: Huainan Normal University (China)