CN111126297A - Experience analysis method based on learner expression - Google Patents
Experience analysis method based on learner expression Download PDFInfo
- Publication number
- CN111126297A CN111126297A CN201911360147.0A CN201911360147A CN111126297A CN 111126297 A CN111126297 A CN 111126297A CN 201911360147 A CN201911360147 A CN 201911360147A CN 111126297 A CN111126297 A CN 111126297A
- Authority
- CN
- China
- Prior art keywords
- matrix
- learner
- expression
- sample
- experience
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000014509 gene expression Effects 0.000 title claims abstract description 20
- 238000004458 analytical method Methods 0.000 title claims abstract description 7
- 239000011159 matrix material Substances 0.000 claims abstract description 55
- 230000006870 function Effects 0.000 claims abstract description 15
- 239000013598 vector Substances 0.000 claims abstract description 13
- 238000013507 mapping Methods 0.000 claims abstract description 4
- 230000004913 activation Effects 0.000 claims description 10
- 238000000034 method Methods 0.000 claims description 9
- 206010063659 Aversion Diseases 0.000 claims description 6
- 206010034960 Photophobia Diseases 0.000 claims description 5
- 230000009191 jumping Effects 0.000 claims description 5
- 230000001815 facial effect Effects 0.000 claims description 3
- 230000008569 process Effects 0.000 claims description 3
- 238000007405 data analysis Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention relates to an experience analysis method based on learner expression, which comprises the steps of data acquisition and initialization, random generation of input weight vectors and input biases of hidden layer mapping functions, hidden layer output function generation, hidden layer output matrix generation, output weight matrix initialization, tag approximation matrix updating, output weight matrix updating, training stop judgment, online experience score prediction and the like. The invention has the advantages of high prediction precision, no need of a large number of learners to experience scoring, high operation speed and the like.
Description
Technical Field
The invention belongs to the field of data analysis, and particularly relates to an experience analysis method based on learner expressions.
Background
Currently, more and more learners abandon the traditional learner method and choose learners on intelligent terminals. In order to really know the experience of the learner on the current learner, the camera on the intelligent terminal can be used for capturing the face image of the learner so as to acquire the expression information of the learner. However, the expression of a learner is variable and complex in a learner process. When the learner is laughing, the experience is not good, and similarly, when the learner shows aversive expression, the experience is not good. After each learner is finished, the system may request that the learner evaluate the experience. Of course, not every learner can complete learning, nor is every learner willing to give an assessment. Therefore, it is necessary to establish an experience analysis method based on the expression of the learner, so as to predict the experience of each learning experience and further provide data support for the improvement of the system.
Disclosure of Invention
The invention provides an experience analysis method based on learner expression, which comprises the following steps:
step 1, data acquisition and initialization:
collecting facial videos of a learner in each learning, analyzing the expression of each frame, dividing the expression into 8 types including aversion, anger, fear, happiness, sadness, surprise, photophobia and no expression, and forming a feature vectorx(1),...,x(8)The proportion of aversion, anger, fear, happiness, sadness, surprise, photophobia and no expression in the whole video is respectively x(1),...,x(8)The sum is 1, and the auxiliary characteristics are used for expanding x according to actual conditions to obtainIs NiA sample of dimensions; order sample collectionLabel for scoring learner experience after each learning as sampleTo pairMarking to obtain corresponding category labelWherein l is the number of labeled samples, n is the number of all samples, and u-n-l is the number of unlabeled samples;a set of real numbers is represented as,representing a positive real number set;
initialization: the following parameters were manually set: lambda [ alpha ]1,λ2Theta, sigma > 0, number of hidden layer nodes NhThe maximum iteration time E is more than 0, and the iteration time t is 0;
step 2, randomly generating an input weight vector of a hidden layer mapping functionIs offset from the input; b ∈ R, as follows:
Step 3, generating a hidden layer output function:
wherein G (a, b, x) is an activation function, x represents a sample, and superscript T represents matrix transfer;
step 4, generating a hidden layer output matrix:
H=[h(x1),...,h(xn)]T
step 5, initializing an output weight matrix:
wherein ,W0An output weight matrix W with t equal to 0, pinv (H) represents a pseudo-inverse matrix of H,a matrix composed of the first l rows of H;
and 6, updating the label approximation matrix as follows:
wherein ,Yt+1Tag approximation matrix for t +1 iterations, InIs a unit matrix of n dimensions, J ═ Il,Ol×u;Ou×l,Ou×u],IlIs a unit matrix in the dimension of l,is v is1×v2Zero matrix of dimensions, v1,v2It is possible to take u or l,Ou×1a zero matrix of u x 1 dimensions; l is graph Laplace matrix L ═ D-A, A is similarity matrix, its ith row and jth column element AijComprises the following steps:
wherein ,xiAnd xjFor the sample, i, j ∈ {1, …, n }, σ > 0 is the Gaussian kernel width, D is the degree matrix of A, D is the diagonal matrix, the ith diagonal element D of Dii=∑jAij;
And 7: the output weight matrix is updated as follows:
Wt+1=(HTH+θUt)-1HTYt+1
wherein , wherein ,Wt+1Denotes W at the time t +1,is Wt+1Line 1 to line N ofhThe number of the row vectors is,is line 1 to line N of WhA row vector;
and 8: the iteration number t is increased by 1, if t is more than E, W is retainedt+1And jumping to the step 9, otherwise jumping to the step 6;
and step 9: for the new sample x, its experience score is predicted using h (x) W.
Wherein, the activation function G (a, b, x) involved in step 3 is:
or ,
or ,
wherein ,l>Nh。
The invention has the advantages of high prediction precision, stable performance, no need of a large amount of learner experience scoring, high operation speed and the like.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
Detailed Description
The invention is further described below with reference to examples, but the scope of the invention is not limited thereto.
As shown in fig. 1, the present invention is specifically implemented as follows:
step 1, data acquisition and initialization:
collecting facial videos of a learner in each learning process, analyzing the expression of each frame, and classifying the expression into aversion, anger, fear and height8 categories of Xingxing, sadness, surprise, shy and blankness constitute the feature vectorx(1),...,x(8)The proportion of aversion, anger, fear, happiness, sadness, surprise, photophobia and no expression in the whole video is respectively x(1),...,x(8)The sum is 1, and the auxiliary characteristics are used for expanding x according to actual conditions to obtainIs NiA sample of dimensions; order sample collectionLabel for scoring learner experience after each learning as sampleTo pairMarking to obtain corresponding category labelWherein l is the number of labeled samples, n is the number of all samples, and u-n-l is the number of unlabeled samples;a set of real numbers is represented as,representing a positive real number set;
initialization: the following parameters were manually set: lambda [ alpha ]1,λ2Theta, sigma > 0, number of hidden layer nodes NhThe maximum iteration time E is more than 0, and the iteration time t is 0;
step 2, randomly generating an input weight vector of a hidden layer mapping functionIs offset from the input; b ∈ R, as follows:
Step 3, generating a hidden layer output function:
wherein G (a, b, x) is an activation function, x represents a sample, and superscript T represents matrix transfer;
step 4, generating a hidden layer output matrix:
H=[h(x1),...,h(xn)]T
step 5, initializing an output weight matrix:
wherein ,W0An output weight matrix W with t equal to 0, pinv (H) represents a pseudo-inverse matrix of H,Hla matrix composed of the first l rows of H;
and 6, updating the label approximation matrix as follows:
wherein ,Yt+1Tag approximation matrix for t +1 iterations, InIs a unit matrix of n dimensions, J ═ Il,Ol×u;Ou×l,Ou×u],IlIs a unit matrix in the dimension of l,is v is1×v2Zero matrix of dimensions, v1,v2It is possible to take u or l,Ou×1a zero matrix of u x 1 dimensions; l is graph Laplace matrix L ═ D-A, A is similarity matrix, its ith row and jth column element AijComprises the following steps:
wherein ,xiAnd xjFor the sample, i, j ∈ { 1.,. n }, σ > 0 is the Gaussian kernel width, D is the degree matrix of A, D is the diagonal matrix, the ith diagonal element D of D isii=∑jAij;
And 7: the output weight matrix is updated as follows:
Wt+1=(HTH+θUt)-1HTYt+1
wherein , wherein ,Wt+1Denotes W at the time t +1,is Wt+1Line 1 to line N ofhThe number of the row vectors is,is line 1 to line N of WhA row vector;
and 8: the iteration number t is increased by 1, if t is more than E, W is retainedt+1And jumping to the step 9, otherwise jumping to the step 6;
and step 9: for the new sample x, its experience score is predicted using h (x) W.
Preferably, the activation function G (a, b, x) involved in step 3 is:
preferably, the activation function G (a, b, x) involved in step 3 is:
preferably, the activation function G (a, b, x) involved in step 3 is:
further preferably, l > Nh。
In step 1, when the auxiliary features are used to expand x according to actual conditions, the features such as reading category, target learner, plot expansion mode, whether three-dimensional image, whether auxiliary means other than vision exists, main language of text, drawing style, average word number per page, etc. can be adopted.
The Gaussian kernel width may be 0.01, lambda1,λ2And θ may be taken as: lambda [ alpha ]1=0.3,λ2=0.7,θ=0.2。NhAn integer between 100 and 1000 may be taken, and E an integer between 3 and 20 may be taken.
The above examples are provided only for the purpose of describing the present invention, and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalent substitutions and modifications can be made without departing from the spirit and principles of the invention, and are intended to be within the scope of the invention.
Claims (5)
1. An experience analysis method based on learner expression is characterized by comprising the following steps:
step 1, data acquisition and initialization:
collecting facial videos of a learner in each learning process, analyzing the expression of each frame, and dividing the expression into aversion, anger, smell and the like,8 categories of fear, happiness, sadness, surprise, photophobia and blankness are formed into a feature vectorx(1),...,x(8)The proportion of aversion, anger, fear, happiness, sadness, surprise, photophobia and no expression in the whole video is respectively x(1),...,x(8)The sum is 1, and the auxiliary characteristics are used for expanding x according to actual conditions to obtainIs NiA sample of dimensions; order sample collectionLabel for scoring learner experience after each learning as sampleTo pairMarking to obtain corresponding category labelWherein l is the number of labeled samples, n is the number of all samples, and u-n-l is the number of unlabeled samples;a set of real numbers is represented as,representing a positive real number set;
initialization: the following parameters were manually set: lambda [ alpha ]1,λ2Theta, sigma > 0, number of hidden layer nodes NhThe maximum iteration time E is more than 0, and the iteration time t is 0;
step 2, randomly generating hidingInput weight vector of layer mapping functionIs offset from the input; b ∈ R, as follows:
Step 3, generating a hidden layer output function:
wherein G (a, b, x) is an activation function, x represents a sample, and superscript T represents matrix transfer;
step 4, generating a hidden layer output matrix:
H=[h(x1),...,h(xn)]T
step 5, initializing an output weight matrix:
wherein ,W0An output weight matrix W with t equal to 0, pinv (H) represents a pseudo-inverse matrix of H,Hla matrix composed of the first l rows of H;
and 6, updating the label approximation matrix as follows:
wherein ,Yt+1For t +1 iterationsOf the label approximation matrix, InIs a unit matrix of n dimensions, J ═ Il,Ol×u;Ou×l,Ou×u],IlIs a unit matrix in the dimension of l,is v is1×v2Zero matrix of dimensions, v1,v2It is possible to take u or l,Ou×1a zero matrix of u x 1 dimensions; l is graph Laplace matrix L ═ D-A, A is similarity matrix, its ith row and jth column element AijComprises the following steps:
wherein ,xiAnd xjFor the sample, i, j ∈ { 1.,. n }, σ > 0 is the Gaussian kernel width, D is the degree matrix of A, D is the diagonal matrix, the ith diagonal element D of D isii=∑jAij;
And 7: the output weight matrix is updated as follows:
Wt+1=(HTH+θUt)-1HTYt+1
wherein , wherein ,Wt+1Denotes W at the time t +1,is Wt+1Line 1 to line N ofhThe number of the row vectors is,is line 1 to line N of WhA row vector;
and 8: the iteration number t is increased by 1, if t > E, the iteration number is keptJumping to the step 9, otherwise jumping to the step 6;
and step 9: for the new sample x, its experience score is predicted using h (x) W.
5. the method as claimed in any one of claims 1, 2, 3 and 4, wherein l > Nh。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911360147.0A CN111126297B (en) | 2019-12-25 | 2019-12-25 | Experience analysis method based on learner expression |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911360147.0A CN111126297B (en) | 2019-12-25 | 2019-12-25 | Experience analysis method based on learner expression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111126297A true CN111126297A (en) | 2020-05-08 |
CN111126297B CN111126297B (en) | 2023-10-31 |
Family
ID=70502568
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911360147.0A Active CN111126297B (en) | 2019-12-25 | 2019-12-25 | Experience analysis method based on learner expression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111126297B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112001223A (en) * | 2020-07-01 | 2020-11-27 | 安徽新知数媒信息科技有限公司 | Rapid virtualization construction method of real environment map |
CN115506783A (en) * | 2021-06-21 | 2022-12-23 | 中国石油化工股份有限公司 | Lithology identification method |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107085704A (en) * | 2017-03-27 | 2017-08-22 | 杭州电子科技大学 | Fast face expression recognition method based on ELM own coding algorithms |
CN107392230A (en) * | 2017-06-22 | 2017-11-24 | 江南大学 | A kind of semi-supervision image classification method for possessing maximization knowledge utilization ability |
US20180165554A1 (en) * | 2016-12-09 | 2018-06-14 | The Research Foundation For The State University Of New York | Semisupervised autoencoder for sentiment analysis |
CN109359521A (en) * | 2018-09-05 | 2019-02-19 | 浙江工业大学 | The two-way assessment system of Classroom instruction quality based on deep learning |
CN109919102A (en) * | 2019-03-11 | 2019-06-21 | 重庆科技学院 | A kind of self-closing disease based on Expression Recognition embraces body and tests evaluation method and system |
CN109919099A (en) * | 2019-03-11 | 2019-06-21 | 重庆科技学院 | A kind of user experience evaluation method and system based on Expression Recognition |
CN109934156A (en) * | 2019-03-11 | 2019-06-25 | 重庆科技学院 | A kind of user experience evaluation method and system based on ELMAN neural network |
CN110390307A (en) * | 2019-07-25 | 2019-10-29 | 首都师范大学 | Expression recognition method, Expression Recognition model training method and device |
-
2019
- 2019-12-25 CN CN201911360147.0A patent/CN111126297B/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180165554A1 (en) * | 2016-12-09 | 2018-06-14 | The Research Foundation For The State University Of New York | Semisupervised autoencoder for sentiment analysis |
CN107085704A (en) * | 2017-03-27 | 2017-08-22 | 杭州电子科技大学 | Fast face expression recognition method based on ELM own coding algorithms |
CN107392230A (en) * | 2017-06-22 | 2017-11-24 | 江南大学 | A kind of semi-supervision image classification method for possessing maximization knowledge utilization ability |
CN109359521A (en) * | 2018-09-05 | 2019-02-19 | 浙江工业大学 | The two-way assessment system of Classroom instruction quality based on deep learning |
CN109919102A (en) * | 2019-03-11 | 2019-06-21 | 重庆科技学院 | A kind of self-closing disease based on Expression Recognition embraces body and tests evaluation method and system |
CN109919099A (en) * | 2019-03-11 | 2019-06-21 | 重庆科技学院 | A kind of user experience evaluation method and system based on Expression Recognition |
CN109934156A (en) * | 2019-03-11 | 2019-06-25 | 重庆科技学院 | A kind of user experience evaluation method and system based on ELMAN neural network |
CN110390307A (en) * | 2019-07-25 | 2019-10-29 | 首都师范大学 | Expression recognition method, Expression Recognition model training method and device |
Non-Patent Citations (2)
Title |
---|
MIN WANG ET AL.: "Look-up Table Unit Activation Function for Deep Convolutional Neural Networks", 《2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION》, pages 1225 - 1233 * |
雒晓卓: "基于联合稀疏和局部线性的极限学习机及应用", 《中国博士学位论文全文数据库 信息科技辑》, no. 2017, pages 140 - 45 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112001223A (en) * | 2020-07-01 | 2020-11-27 | 安徽新知数媒信息科技有限公司 | Rapid virtualization construction method of real environment map |
CN112001223B (en) * | 2020-07-01 | 2023-11-24 | 安徽新知数字科技有限公司 | Rapid virtualization construction method for real environment map |
CN115506783A (en) * | 2021-06-21 | 2022-12-23 | 中国石油化工股份有限公司 | Lithology identification method |
Also Published As
Publication number | Publication date |
---|---|
CN111126297B (en) | 2023-10-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109376242B (en) | Text classification method based on cyclic neural network variant and convolutional neural network | |
CN110334705B (en) | Language identification method of scene text image combining global and local information | |
CN108647742B (en) | Rapid target detection method based on lightweight neural network | |
CN113486981B (en) | RGB image classification method based on multi-scale feature attention fusion network | |
CN108537119B (en) | Small sample video identification method | |
Taylor et al. | Learning invariance through imitation | |
CN108765383B (en) | Video description method based on deep migration learning | |
CN110717526A (en) | Unsupervised transfer learning method based on graph convolution network | |
CN106951911A (en) | A kind of quick multi-tag picture retrieval system and implementation method | |
CN110837846A (en) | Image recognition model construction method, image recognition method and device | |
CN110705490B (en) | Visual emotion recognition method | |
CN105701225B (en) | A kind of cross-media retrieval method based on unified association hypergraph specification | |
CN110175657B (en) | Image multi-label marking method, device, equipment and readable storage medium | |
CN113222011A (en) | Small sample remote sensing image classification method based on prototype correction | |
CN103020658B (en) | Recognition method for objects in two-dimensional images | |
CN117992805B (en) | Zero sample cross-modal retrieval method and system based on tensor product graph fusion diffusion | |
CN108470025A (en) | Partial-Topic probability generates regularization own coding text and is embedded in representation method | |
CN111126297A (en) | Experience analysis method based on learner expression | |
CN112364791A (en) | Pedestrian re-identification method and system based on generation of confrontation network | |
CN114898136B (en) | Small sample image classification method based on characteristic self-adaption | |
CN112784921A (en) | Task attention guided small sample image complementary learning classification algorithm | |
CN118036555B (en) | Low-sample font generation method based on skeleton transfer and structure contrast learning | |
CN114742014B (en) | Few-sample text style migration method based on associated attention | |
CN110442736B (en) | Semantic enhancer spatial cross-media retrieval method based on secondary discriminant analysis | |
CN116883746A (en) | Graph node classification method based on partition pooling hypergraph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240411 Address after: Building 24, 4th Floor, No. 68 Beiqing Road, Haidian District, Beijing, 100000, 0446 Patentee after: Beijing Beike Haiteng Technology Co.,Ltd. Country or region after: China Address before: 232001 cave West Road, Huainan, Anhui Patentee before: HUAINAN NORMAL University Country or region before: China |
|
TR01 | Transfer of patent right |