CN111126297A - An Experience Analysis Method Based on Learner Expression - Google Patents
Info
- Publication number
- CN111126297A (application number CN201911360147.0A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- learner
- experience
- method based
- analysis method
- Prior art date: 2019-12-25
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V40/174 — Facial expression recognition (G06V — image or video recognition or understanding; G06V40/16 — human faces, e.g. facial parts, sketches or expressions)
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (G06F18/00 — pattern recognition)
- G06V40/172 — Classification, e.g. identification (G06V40/16 — human faces)
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to an experience analysis method based on learner expressions, comprising the steps of data collection and initialization, random generation of the input weight vectors and input biases of the hidden-layer mapping function, generation of the hidden-layer output function, generation of the hidden-layer output matrix, initialization of the output weight matrix, updating of the label approximation matrix, updating of the output weight matrix, judging when to stop training, and online prediction of experience scores. The invention offers high prediction accuracy, fast computation, and no need for a large volume of learner experience scores.
Description
Technical Field

The invention belongs to the field of data analysis, and in particular relates to an experience analysis method based on learner expressions.
Background Art

At present, more and more learners are abandoning traditional ways of learning in favor of learning on smart terminals. To genuinely understand a learner's experience of the current session, the camera on the smart terminal can capture images of the learner's face and extract expression information from them. During a learning session, however, a learner's expressions are changeable and complex: a laughing learner is not necessarily having a good experience, and likewise an expression of disgust does not necessarily indicate a bad one. After each session the system can ask the learner to rate the experience, but not every learner finishes a session, and not every learner is willing to give a rating. It is therefore necessary to establish an experience analysis method driven mainly by learner expressions, one that predicts an experience score for every learning session and thereby provides data support for improving the system.
Summary of the Invention

The present invention proposes an experience analysis method based on learner expressions; its procedure is as follows:
Step 1. Data collection and initialization:

Capture a video of the learner's face during each learning session and analyze the expression in every frame, dividing expressions into 8 categories: disgust, anger, fear, happiness, sadness, surprise, shyness, and neutral. Form the feature vector x, whose components x^(1), ..., x^(8) are the proportions of disgust, anger, fear, happiness, sadness, surprise, shyness, and neutral over the whole video, so that x^(1) + ... + x^(8) = 1. According to the actual situation, x may be augmented with auxiliary features, yielding an N_i-dimensional sample. Let the sample set be {x_1, ..., x_n} ⊂ R^{N_i}; the learner experience score given after each session serves as the label of the corresponding sample, and labeling the first l samples yields the corresponding class labels {y_1, ..., y_l} ⊂ R_+. Here l is the number of labeled samples, n is the total number of samples, and u = n − l is the number of unlabeled samples; R denotes the set of real numbers and R_+ the set of positive real numbers.
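By way of illustration, the following minimal sketch shows how the expression-proportion vector of step 1 could be assembled from per-frame expression labels; the category names, the per-frame classifier output format, and all values are assumptions, not part of the patent text.

```python
import numpy as np

# Hypothetical category names, in the order used in step 1.
CATEGORIES = ["disgust", "anger", "fear", "happiness",
              "sadness", "surprise", "shyness", "neutral"]

def expression_proportions(frame_labels):
    """Turn per-frame expression labels into the proportion vector
    x(1)..x(8); the components sum to 1 as step 1 requires."""
    counts = np.array([frame_labels.count(c) for c in CATEGORIES],
                      dtype=float)
    return counts / counts.sum()

# Example: six analyzed frames from one session's face video.
x = expression_proportions(["happiness", "happiness", "neutral",
                            "surprise", "neutral", "happiness"])
print(x.sum())  # 1.0
```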
Initialization: manually set the following parameters: λ_1, λ_2, θ, σ > 0, the number of hidden-layer nodes N_h > 0, and the maximum number of iterations E; set the iteration counter t = 0.
Step 2. Randomly generate the input weight vectors a ∈ R^{N_i} and input biases b ∈ R of the hidden-layer mapping function, as follows: randomly generate N_h vectors a, obtaining {a_1, ..., a_{N_h}}; randomly generate N_h biases b, obtaining {b_1, ..., b_{N_h}}.
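A minimal sketch of step 2, assuming the weights and biases are drawn uniformly from [−1, 1]; the patent does not specify the sampling distribution, and the dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N_i, N_h = 10, 200                            # illustrative dimensions
A = rng.uniform(-1.0, 1.0, size=(N_h, N_i))   # rows are a_1 ... a_{N_h}
b = rng.uniform(-1.0, 1.0, size=N_h)          # biases b_1 ... b_{N_h}
```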
Step 3. Generate the hidden-layer output function:

h(x) = [G(a_1, b_1, x), ..., G(a_{N_h}, b_{N_h}, x)]^T

where G(a, b, x) is the activation function, x denotes a sample, and the superscript T denotes the matrix transpose.

Step 4. Generate the hidden-layer output matrix:

H = [h(x_1), ..., h(x_n)]^T
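A sketch covering steps 3 and 4 together, under the assumption that G is the common sigmoid activation (the patent's three concrete forms of G are not reproduced in this text); the sample matrix X is a random placeholder.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N_i, N_h, n = 10, 200, 500
A = rng.uniform(-1.0, 1.0, (N_h, N_i))
b = rng.uniform(-1.0, 1.0, N_h)
X = rng.random((n, N_i))        # placeholder sample matrix, one row per x_i

def hidden_output(X, A, b):
    """Row i of the result is h(x_i)^T, so the whole array is
    H = [h(x_1), ..., h(x_n)]^T with shape (n, N_h)."""
    return 1.0 / (1.0 + np.exp(-(X @ A.T + b)))   # sigmoid G assumed

H = hidden_output(X, A, b)
print(H.shape)   # (500, 200)
```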
Step 5. Initialize the output weight matrix:

W_0 = pinv(H_l) Y_l

where W_0 is the output weight matrix W at t = 0, pinv(·) denotes the pseudo-inverse, H_l is the matrix formed by the first l rows of H, and Y_l = [y_1, ..., y_l]^T is the vector of labeled experience scores.
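The initialization can be read, from the definitions of pinv(·) and H_l, as applying the pseudo-inverse of the labeled block of H to the labeled scores; the sketch below implements that reading, with all shapes and values illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n, N_h, l = 500, 200, 60
H = rng.random((n, N_h))              # stands in for the matrix of step 4
y_l = rng.uniform(1.0, 5.0, (l, 1))   # scores of the l labeled samples

H_l = H[:l]                           # first l rows of H
W0 = np.linalg.pinv(H_l) @ y_l        # assumed form W_0 = pinv(H_l) Y_l
print(W0.shape)                       # (200, 1)
```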
Step 6. Update the label approximation matrix Y_{t+1}, where Y_{t+1} is the label approximation matrix at iteration t + 1, I_n is the n-dimensional identity matrix, J = [I_l, O_{l×u}; O_{u×l}, O_{u×u}], I_l is the l-dimensional identity matrix, and O_{v_1×v_2} is the v_1 × v_2 zero matrix with v_1, v_2 ∈ {u, l}. L is the graph Laplacian L = D − A, where A is the similarity matrix whose element in row i, column j is the Gaussian similarity

A_ij = exp(−‖x_i − x_j‖^2 / (2σ^2))

where x_i and x_j are samples, i, j ∈ {1, ..., n}, and σ > 0 is the Gaussian kernel width; D is the degree matrix of A, a diagonal matrix whose i-th diagonal element is d_ii = Σ_j A_ij.
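A sketch of the graph construction in step 6, assuming the usual Gaussian-kernel normalization exp(−‖x_i − x_j‖² / (2σ²)); the label-approximation update itself is not sketched because its formula is not reproduced in this text.

```python
import numpy as np

def graph_laplacian(X, sigma=0.01):
    """L = D - A with A_ij = exp(-||x_i - x_j||^2 / (2 sigma^2));
    D is the diagonal degree matrix with d_ii = sum_j A_ij."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    A = np.exp(-sq_dists / (2.0 * sigma ** 2))
    D = np.diag(A.sum(axis=1))
    return D - A

X = np.random.default_rng(0).random((5, 3))
L = graph_laplacian(X)
print(L.sum(axis=1))   # each row of a graph Laplacian sums to 0
```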
Step 7. Update the output weight matrix as follows:

W_{t+1} = (H^T H + θ U_t)^{−1} H^T Y_{t+1}

where W_{t+1} denotes W at iteration t + 1, its rows 1 through N_h are the vectors w_1^{t+1}, ..., w_{N_h}^{t+1}, and U_t is a matrix constructed from the row vectors w_1^t, ..., w_{N_h}^t of W_t.
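A sketch of the step 7 update, using a linear solve rather than an explicit matrix inverse; because the construction of U_t from the rows of W_t is not reproduced in this text, the example passes U_t in as a given matrix (an identity placeholder).

```python
import numpy as np

def update_output_weights(H, U_t, Y_next, theta=0.2):
    """Step 7: W_{t+1} = (H^T H + theta * U_t)^{-1} H^T Y_{t+1},
    computed with np.linalg.solve for numerical stability."""
    lhs = H.T @ H + theta * U_t
    return np.linalg.solve(lhs, H.T @ Y_next)

# Toy shapes: U_t must be (N_h, N_h); an identity placeholder is used here.
rng = np.random.default_rng(0)
H = rng.random((500, 200))
Y_next = rng.random((500, 1))
W = update_output_weights(H, np.eye(200), Y_next)
print(W.shape)   # (200, 1)
```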
Step 8. Increment the iteration counter t by 1. If t > E, keep W = W_{t+1} and go to step 9; otherwise return to step 6.

Step 9. For a new sample x, predict its experience score as h(x)W.
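A sketch of the step 9 prediction, with the same assumed sigmoid activation as above; the weights here are random placeholders rather than trained values.

```python
import numpy as np

def predict_score(x_new, A, b, W):
    """Step 9: predicted experience score h(x)W for a new sample."""
    hx = 1.0 / (1.0 + np.exp(-(A @ x_new + b)))   # h(x), length N_h
    return hx @ W

rng = np.random.default_rng(0)
N_i, N_h = 10, 200
A = rng.uniform(-1.0, 1.0, (N_h, N_i))
b = rng.uniform(-1.0, 1.0, N_h)
W = rng.random((N_h, 1))                          # placeholder for trained W
print(predict_score(rng.random(N_i), A, b, W))
```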
The activation function G(a, b, x) involved in step 3 may take any one of three alternative forms, where l > N_h.
The invention has the advantages of high prediction accuracy, stable performance, fast computation, and no need for a large volume of learner experience scores.
Description of Drawings

Figure 1 is a flow chart of the method of the present invention.
Detailed Description

The present invention is further described below with reference to an example, but the protection scope of the present invention is not limited thereto.
As shown in Figure 1, the present invention is implemented by carrying out steps 1 through 9 exactly as described in the summary above; the preferred embodiments below refine individual steps.
Preferably, the activation function G(a, b, x) involved in step 3 takes one of the three alternative forms noted above; further preferably, l > N_h.
In step 1, when augmenting x with auxiliary features according to the actual situation, features such as the reading-material category, the target learner group, the way the plot unfolds, whether the material is a three-dimensional image, whether aids other than visual ones are present, the main language of the text, the drawing style, and the average number of words per page may be used.
The Gaussian kernel width may generally be taken as σ = 0.01, and λ_1, λ_2, θ as λ_1 = 0.3, λ_2 = 0.7, θ = 0.2. N_h may be an integer between 100 and 1000, and E an integer between 3 and 20.
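For convenience, the suggested defaults above could be gathered as follows; the dictionary and its key names are illustrative only, not part of the patent.

```python
# Suggested parameter defaults from the paragraph above.
SUGGESTED_PARAMS = {
    "sigma": 0.01,    # Gaussian kernel width
    "lambda1": 0.3,
    "lambda2": 0.7,
    "theta": 0.2,
    "N_h": 500,       # any integer from 100 to 1000
    "E": 10,          # any integer from 3 to 20
}
```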
The above embodiments are provided only for the purpose of describing the present invention and are not intended to limit its scope, which is defined by the appended claims. Various equivalent replacements and modifications made without departing from the spirit and principle of the present invention shall fall within its scope.
Claims (5)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201911360147.0A | 2019-12-25 | 2019-12-25 | An experience analysis method based on learners' expressions |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN111126297A | 2020-05-08 |
| CN111126297B | 2023-10-31 |
Family

ID=70502568

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN201911360147.0A (granted as CN111126297B, active) | An experience analysis method based on learners' expressions | 2019-12-25 | 2019-12-25 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN111126297B |
Patent Citations (8)

| Publication Number | Priority Date | Publication Date | Title |
| --- | --- | --- | --- |
| US2018/0165554A1 | 2016-12-09 | 2018-06-14 | Semisupervised autoencoder for sentiment analysis |
| CN107085704A | 2017-03-27 | 2017-08-22 | Fast facial expression recognition method based on the ELM autoencoding algorithm |
| CN107392230A | 2017-06-22 | 2017-11-24 | A semi-supervised image classification method with maximized knowledge utilization ability |
| CN109359521A | 2018-09-05 | 2019-02-19 | A two-way assessment system for classroom quality based on deep learning |
| CN109919099A | 2019-03-11 | 2019-06-21 | A method and system for user experience evaluation based on facial expression recognition |
| CN109919102A | 2019-03-11 | 2019-06-21 | A method and system for evaluating the hugging-machine experience for autism based on facial expression recognition |
| CN109934156A | 2019-03-11 | 2019-06-25 | A user experience evaluation method and system based on the ELMAN neural network |
| CN110390307A | 2019-07-25 | 2019-10-29 | Expression recognition method, expression recognition model training method and device |
Non-Patent Citations (2)

| Title |
| --- |
| Min Wang et al., "Look-up Table Unit Activation Function for Deep Convolutional Neural Networks", 2018 IEEE Winter Conference on Applications of Computer Vision, pp. 1225–1233 |
| Luo Xiaozhuo, "Extreme Learning Machine Based on Joint Sparsity and Local Linearity and Its Applications", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 2017, pp. 140–45 |
Cited By (3)

| Publication Number | Priority Date | Publication Date | Title |
| --- | --- | --- | --- |
| CN112001223A | 2020-07-01 | 2020-11-27 | Rapid virtualization construction method for a real-environment map |
| CN112001223B | 2020-07-01 | 2023-11-24 | Rapid virtualization construction method for a real-environment map |
| CN115506783A | 2021-06-21 | 2022-12-23 | Lithology identification method |
Also Published As

| Publication Number | Publication Date |
| --- | --- |
| CN111126297B | 2023-10-31 |
Similar Documents

| Publication | Title |
| --- | --- |
| CN110334705B | A language recognition method for scene text images combining global and local information |
| CN106980683B | Blog text abstract generation method based on deep learning |
| Đukić et al. | A low-shot object counting network with iterative prototype adaptation |
| CN111476315B | An image multi-label recognition method based on statistical correlation and graph convolution technology |
| CN112464865A | Facial expression recognition method based on pixel and geometric mixed features |
| CN106845499A | An image object detection method based on natural-language semantics |
| CN105701502A | Automatic image annotation method based on Monte Carlo data balance |
| CN112819023A | Sample set acquisition method and device, computer equipment and storage medium |
| Zhang et al. | Flexible auto-weighted local-coordinate concept factorization: a robust framework for unsupervised clustering |
| CN111291556A | Chinese entity relation extraction method based on character and word feature fusion of entity sense items |
| Bawa et al. | Emotional sentiment analysis for a group of people based on transfer learning with a multi-modal system |
| CN110516098A | An image annotation method based on a convolutional neural network and binary-coded features |
| CN116883681B | Domain-generalized object detection method based on an adversarial generative network |
| CN113408418A | Method and system for synchronous recognition of calligraphy font and character content |
| Xin et al. | Hybrid dilated multilayer faster RCNN for object detection |
| CN114693997B | Image description generation method, device, equipment and medium based on transfer learning |
| CN108470025 | Partial-topic probabilistic generative regularized autoencoding text embedding representation method |
| CN113569008A | A big data analysis method and system based on community governance data |
| CN110263808B | An image sentiment classification method based on an LSTM network and attention mechanism |
| CN114898136A | Small-sample image classification method based on feature adaptation |
| Chen et al. | STRAN: student expression recognition based on a spatio-temporal residual attention network in classroom teaching videos |
| CN114997175A | A sentiment analysis method based on domain-adversarial training |
| CN111126297A | An experience analysis method based on learner expression |
| CN114998688A | A large-field-of-view target detection method based on the improved YOLOv4 algorithm |
| CN115578755A | Lifelong-learning pedestrian re-identification method based on knowledge updating and knowledge integration |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| 2024-04-11 | TR01 | Transfer of patent right | Patentee after: Beijing Beike Haiteng Technology Co., Ltd., Building 24, 4th Floor, No. 68 Beiqing Road, Haidian District, Beijing 100000, China. Patentee before: Huainan Normal University, Dongshan West Road, Huainan, Anhui 232001, China. |