CN106971180A - A micro-expression recognition method based on voice-dictionary sparse transfer learning - Google Patents
A micro-expression recognition method based on voice-dictionary sparse transfer learning
- Publication number: CN106971180A (application number CN201710346931.0A; granted publication CN106971180B)
- Authority
- CN
- China
- Prior art keywords
- domain
- micro
- expression
- voice
- dictionary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/04—Training, enrolment or model building
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/06—Decision making techniques; Pattern matching strategies
- G10L17/14—Use of phonemic categorisation or speech recognition prior to speaker recognition or verification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
Abstract
A micro-expression recognition method based on voice-dictionary sparse transfer learning comprises a training stage and a test stage. The present invention projects voice and micro-expression data into a common space; to simplify computation and improve efficiency, the projected data are given a sparse dictionary representation. To further reduce the data gap between the two domains, the dictionaries of the two domains are mutually reconstructed, so that the dictionaries become associated and the projected sparse representation matrices exhibit greater correlation.
Description
Technical field
The present invention relates to a micro-expression recognition method based on voice-dictionary sparse transfer learning, and belongs to the technical fields of pattern recognition and machine learning.
Background technology
A micro-expression is an extremely brief, involuntary facial expression that people reveal under suppression or when attempting to hide their true feelings. In 1966, Haggard and Isaacs first discovered this subtle kind of expression. Ekman et al. carried out a series of studies on micro-expressions and identified them as a reliable cue for detecting lies. In recent years, with the introduction of the micro-expression concept and its rapid development, micro-expressions have shown promising applications in security monitoring, criminal investigation, network security, military affairs, business and even entertainment. In 2002, micro-expression research made great progress: Ekman et al. developed the Micro Expression Training Tool (METT), which effectively improves the ability to recognize micro-expressions.
With the rapid development of machine learning and expression recognition algorithms, research on automatic micro-expression recognition has made considerable progress. Zhang et al. proposed a new discriminative feature descriptor, extracting optical-flow histograms and LBP-TOP features; He et al. proposed a multi-task feature learning method that assigns different weights to features from different feature layers; Ben et al. proposed a maximal-margin projection and tensor representation based on minimizing the within-class Laplacian; Wang et al. proposed sparse tensor canonical correlation analysis, which finds a tensor subspace that maximizes the correlation between samples. These new theories, each proposed for a different problem, have all achieved clear improvements in their specific fields.
Although more and more micro-expression recognition methods have appeared, the limited number of training samples makes it difficult for these algorithms to train an effective model. Transfer learning has a prominent advantage in this respect: it uses knowledge from a source domain to solve problems in a related domain. Transfer learning can be divided into three categories: inductive transfer learning, transductive transfer learning, and unsupervised transfer learning. Chang et al. proposed using semi-supervised information to compute the correlation between different features; Yeh et al. proposed a domain-adaptation algorithm that uses CCA to project all data into a common space. Each of these methods has its own strengths and works well for its particular problem, but transfer learning from voice to micro-expressions has not yet been proposed.
Speech and expression are the two most intuitive ways in which people reveal emotion. When emotion fluctuates, a person's speech changes markedly, for example in raised pitch and faster speaking rate. Therefore, for micro-expression recognition, when the sample size is insufficient to support training a valid model, voice is an ideal aid; the present invention accordingly mines effective information from abundant voice emotion samples to help classify micro-expressions. Experimental results show that this is a scientific and effective means.
The content of the invention
To address the current blank of transfer learning from voice to micro-expressions, the present invention proposes a micro-expression recognition method based on voice-dictionary sparse transfer learning. Compared with other recognition methods, the present invention is the first to apply voice to the recognition of micro-expressions, and recognition performance is effectively improved.
The technical scheme of the present invention is as follows:
A micro-expression recognition method based on voice-dictionary sparse transfer learning comprises a training stage and a test stage;
The training stage comprises the following steps:
First, the most representative features are extracted from the voice domain and the micro-expression domain. The voice-domain features comprise short-time energy, fundamental frequency, second-order MFCC coefficients and the first to fourth formants, together with five groups of statistics of the above features: maximum, minimum, mean, median and variance. For the micro-expression domain, LBP-TOP features are extracted directly and reduced in dimension using the PCA algorithm.
Then, the extracted feature data are grouped: the feature sets of the voice domain and the micro-expression domain are each divided into a training set and a test set.
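As an illustrative aside (not part of the patent text), the five statistics over the voice-domain features and the PCA reduction of the LBP-TOP features could be computed as in the following NumPy sketch; the function names and array shapes are assumptions for illustration:

```python
import numpy as np

def speech_statistics(frames):
    """Five statistics over frame-level acoustic features.

    frames: (n_frames, n_features) array of per-frame values
    (e.g. short-time energy, F0, MFCCs, formants).
    Returns one vector: max, min, mean, median, variance per feature.
    """
    stats = [frames.max(0), frames.min(0), frames.mean(0),
             np.median(frames, 0), frames.var(0)]
    return np.concatenate(stats)

def pca_reduce(X, d):
    """Project the rows of X onto the top-d principal components."""
    Xc = X - X.mean(0)
    # right singular vectors of the centered data = PCA directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T
```

With 10 base features this yields a 50-dimensional statistical descriptor per utterance; `pca_reduce` would be applied to the stacked LBP-TOP vectors of the micro-expression samples.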
Next, because data from different fields differ considerably, directly pairing the data is not only hard to justify physically but also performs poorly in practice. The present invention therefore uses an iterative algorithm to find optimal projection matrices that project the data of the voice domain and the micro-expression domain into a common space, in which the sparse dictionary of the voice domain and the sparse dictionary of the micro-expression domain are obtained simultaneously; to raise the degree of association between the dictionaries of the two domains, the sparse dictionary of the voice domain and the sparse dictionary of the micro-expression domain are mutually reconstructed.
Afterwards, through a certain number of iterations and optimization, the following are obtained: the dictionary of the voice domain and the dictionary of the micro-expression domain, the projection matrix of the voice domain, the projection matrix of the micro-expression domain, the reconstruction matrix of the voice domain, the reconstruction matrix of the micro-expression domain, the sparse coefficient representation matrix of the voice domain, and the sparse coefficient representation matrix of the micro-expression domain.
The test phase comprises the following steps:
For given test sets of the voice domain and the micro-expression domain, the feature sets of the two domains are first projected using the projection matrices obtained through training;
Then, using the trained dictionary of the voice domain and the trained dictionary of the micro-expression domain, the features of the two domains projected into the common space are sparsely reconstructed, obtaining the respective sparse coefficient representation matrices of the two domains;
Finally, the sparse coefficient representation matrices of the two domains are classified by the classic machine-learning algorithm, the k-nearest-neighbour classifier KNN.
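This final step amounts to a Euclidean-distance k-nearest-neighbour vote over the columns of the sparse coefficient matrices. A minimal NumPy sketch (the function name and the column-per-sample layout are assumptions for illustration):

```python
import numpy as np

def knn_classify(S_train, labels, S_test, k=1):
    """Nearest-neighbour classification of sparse coefficient vectors.

    S_train: (p, n_train) sparse codes, one column per training sample
    labels:  (n_train,) class labels
    S_test:  (p, n_test) sparse codes of the test samples
    Returns the predicted label for each test column (Euclidean distance).
    """
    preds = []
    for s in S_test.T:
        d = np.linalg.norm(S_train.T - s, axis=1)      # distances to all training codes
        nearest = np.argsort(d)[:k]                    # indices of the k closest codes
        vals, counts = np.unique(labels[nearest], return_counts=True)
        preds.append(vals[np.argmax(counts)])          # majority vote
    return np.array(preds)
```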
Preferably according to the present invention, the k-nearest-neighbour classifier KNN is the nearest-neighbour classifier based on Euclidean distance, and its classification method comprises the following steps:
First, feature extraction is performed on the voice domain and the micro-expression domain, obtaining two feature sets, in which each column represents the features of one sample of the voice domain or the micro-expression domain; m_x, m_y denote the feature dimensions of the voice domain and the micro-expression domain respectively, and n_x, n_y denote the sample sizes of the voice domain and the micro-expression domain respectively;
Then, training finds a pair of projection matrices W_X, W_Y that project the features of the two domains into a common space, where they are represented sparsely over dictionaries, i.e.:
where W_X denotes the projection matrix of the voice domain and W_Y the projection matrix of the micro-expression domain; D_X denotes the dictionary of the voice domain and D_Y the dictionary of the micro-expression domain; S_X, S_Y denote the sparse coefficient representation matrices of the voice domain and the micro-expression domain respectively; d denotes the projection dimension of the common space; p_x, p_y denote the dictionary sizes of the voice domain and the micro-expression domain respectively; and I_x, I_y denote the identity matrices of the voice domain and the micro-expression domain respectively;
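The equation image this passage refers to did not survive extraction; a plausible written form of the projection-and-sparse-coding relation, consistent with the symbols just defined (this reconstruction is an assumption, not the patent's original formula):

```latex
W_X^{\top} X \approx D_X S_X, \qquad W_Y^{\top} Y \approx D_Y S_Y,
\qquad \text{s.t. } W_X^{\top} W_X = I_x,\; W_Y^{\top} W_Y = I_y .
```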
Then, so that the features the two domains project into the common subspace have similar distributions, the dictionary of each domain is linearly represented by the dictionary of the other domain, and the data difference between the domains is reduced through this reconstruction, expressed as follows:
where V_X, V_Y denote the dictionary reconstruction matrices of the voice set and the micro-expression set respectively, ||d_xi||_2 ≤ 1, ||d_yj||_2 ≤ 1, ||V_X||_1 ≤ τ, ||V_Y||_1 ≤ τ, τ = 0.001, and d_xi, d_yj denote the column vectors of D_X, D_Y respectively;
Finally, the objective function of the micro-expression recognition method based on voice-dictionary sparse transfer learning is as follows:
where ||d_xi||_2 ≤ 1, ||d_yj||_2 ≤ 1, ||V_X||_1 ≤ τ, ||V_Y||_1 ≤ τ, ||S_X||_1 ≤ σ, ||S_Y||_1 ≤ σ, τ = 0.001, σ = 0.001; solving the objective function yields the final W_X, W_Y, D_X, D_Y.
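The objective-function image is likewise missing from the extraction; one plausible form consistent with the terms and constraints listed above (the coupling weights λ₁, λ₂ and the exact arrangement of terms are assumptions):

```latex
\min_{\substack{W_X, W_Y, D_X, D_Y \\ S_X, S_Y, V_X, V_Y}}
\; \lVert W_X^{\top} X - D_X S_X \rVert_F^2
 + \lVert W_Y^{\top} Y - D_Y S_Y \rVert_F^2
 + \lambda_1 \lVert D_X - D_Y V_Y \rVert_F^2
 + \lambda_2 \lVert D_Y - D_X V_X \rVert_F^2
```

subject to the norm constraints on $d_{xi}$, $d_{yj}$, $V_X$, $V_Y$, $S_X$, $S_Y$ stated above.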
The above objective function is solved with a variable-alternating optimization strategy; repeated iteration achieves the optimal effect.
The beneficial effects of the invention are as follows:
The invention provides a micro-expression recognition method based on voice-dictionary sparse transfer learning, the first application of transfer learning from voice to micro-expressions. Considering that the dictionaries of different domains differ considerably, the present invention mutually reconstructs the dictionaries of the two domains, further strengthening the association between them. Compared with several other methods, the algorithm converges quickly, its time cost is low, and the recognition rate is significantly improved.
Brief description of the drawings
Fig. 1 shows the influence of different dictionary sizes on classification;
Fig. 2 is the flow chart of the present invention;
Fig. 3-1, Fig. 3-2 and Fig. 3-3 are speech waveforms of three different emotions from the CASIA corpus;
Fig. 4 shows example samples of different emotions from the CASME micro-expression database.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and an example, but is not limited thereto. See Figs. 1-4.
Embodiment 1,
A micro-expression recognition method based on voice-dictionary sparse transfer learning comprises a training stage and a test stage;
The training stage comprises the following steps:
First, the most representative features are extracted from the voice domain and the micro-expression domain. The voice-domain features comprise short-time energy, fundamental frequency, second-order MFCC coefficients and the first to fourth formants, together with five groups of statistics of the above features: maximum, minimum, mean, median and variance. For the micro-expression domain, LBP-TOP features are extracted directly and reduced in dimension using the PCA algorithm.
Then, the extracted feature data are grouped: the feature sets of the voice domain and the micro-expression domain are each divided into a training set and a test set.
Next, because data from different fields differ considerably, directly pairing the data is not only hard to justify physically but also performs poorly in practice. The present invention therefore uses an iterative algorithm to find optimal projection matrices that project the data of the voice domain and the micro-expression domain into a common space, in which the sparse dictionary of the voice domain and the sparse dictionary of the micro-expression domain are obtained simultaneously; to raise the degree of association between the dictionaries of the two domains, the sparse dictionary of the voice domain and the sparse dictionary of the micro-expression domain are mutually reconstructed.
Afterwards, through a certain number of iterations and optimization, the following are obtained: the dictionary of the voice domain and the dictionary of the micro-expression domain, the projection matrix of the voice domain, the projection matrix of the micro-expression domain, the reconstruction matrix of the voice domain, the reconstruction matrix of the micro-expression domain, the sparse coefficient representation matrix of the voice domain, and the sparse coefficient representation matrix of the micro-expression domain.
The test phase comprises the following steps:
For given test sets of the voice domain and the micro-expression domain, the feature sets of the two domains are first projected using the projection matrices obtained through training;
Then, using the trained dictionary of the voice domain and the trained dictionary of the micro-expression domain, the features of the two domains projected into the common space are sparsely reconstructed, obtaining the respective sparse coefficient representation matrices of the two domains;
Finally, the sparse coefficient representation matrices of the two domains are classified by the classic machine-learning algorithm, the k-nearest-neighbour classifier KNN.
Embodiment 2,
The recognition method as described in Embodiment 1, differing in that the k-nearest-neighbour classifier KNN is the nearest-neighbour classifier based on Euclidean distance, and its classification method comprises the following steps:
First, feature extraction is performed on the voice domain and the micro-expression domain, obtaining two feature sets, in which each column represents the features of one sample of the voice domain or the micro-expression domain; m_x, m_y denote the feature dimensions of the voice domain and the micro-expression domain respectively, and n_x, n_y denote the sample sizes of the voice domain and the micro-expression domain respectively;
Then, training finds a pair of projection matrices W_X, W_Y that project the features of the two domains into a common space, where they are represented sparsely over dictionaries, i.e.:
where W_X denotes the projection matrix of the voice domain and W_Y the projection matrix of the micro-expression domain; D_X denotes the dictionary of the voice domain and D_Y the dictionary of the micro-expression domain; S_X, S_Y denote the sparse coefficient representation matrices of the voice domain and the micro-expression domain respectively; d denotes the projection dimension of the common space; p_x, p_y denote the dictionary sizes of the voice domain and the micro-expression domain respectively; and I_x, I_y denote the identity matrices of the voice domain and the micro-expression domain respectively;
Then, so that the features the two domains project into the common subspace have similar distributions, the dictionary of each domain is linearly represented by the dictionary of the other domain, and the data difference between the domains is reduced through this reconstruction, expressed as follows:
where V_X, V_Y denote the dictionary reconstruction matrices of the voice set and the micro-expression set respectively, ||d_xi||_2 ≤ 1, ||d_yj||_2 ≤ 1, ||V_X||_1 ≤ τ, ||V_Y||_1 ≤ τ, τ = 0.001, and d_xi, d_yj denote the column vectors of D_X, D_Y respectively;
Finally, the objective function of the micro-expression recognition method based on voice-dictionary sparse transfer learning is as follows:
where ||d_xi||_2 ≤ 1, ||d_yj||_2 ≤ 1, ||V_X||_1 ≤ τ, ||V_Y||_1 ≤ τ, ||S_X||_1 ≤ σ, ||S_Y||_1 ≤ σ, τ = 0.001, σ = 0.001; solving the objective function yields the final W_X, W_Y, D_X, D_Y.
For brevity in writing the subsequent algorithm steps, the derivative of the objective function with respect to each parameter is first obtained here; the other derivatives are obtained similarly.
The complete procedure of the algorithm is given below:
1. Initialize parameters:
DX=rand (mx,px);DY=rand (my,py);
SX=rand (px,nx);SY=rand (py,ny);
VX=rand (nx,nx);VY=rand (ny,ny);
error = 10; iter = 1
2. while error ≥ 0.05 || iter ≤ 25
3. Fix D_X, S_X, D_Y, S_Y, V_X, V_Y and differentiate the objective function with respect to W_X and W_Y. Because of the orthogonality constraint on the projection matrices, the solution cannot be obtained simply by setting the derivative to zero; the present invention uses a generalized gradient algorithm to solve for the values of W_X, W_Y, i.e., the projection matrices W_X, W_Y;
4. Fix S_X, S_Y, W_X, W_Y, D_Y, V_X, V_Y; the objective function is convex in D_X, so differentiate with respect to D_X, set the derivative to zero, and solve. Similarly, fix S_X, S_Y, W_X, W_Y, D_X, V_X, V_Y and solve, obtaining the dictionaries D_X, D_Y;
5. Obtain the sparse coefficient representation matrices S_X, S_Y using the OMP function of the K-SVD toolbox. Weighing time cost against experimental accuracy, the number of iterations is set to 15, and S_X, S_Y are taken from the best result;
6. Obtain V_X, V_Y using the lasso algorithm;
7. iter = iter + 1;
8. end
9. The above iteration yields the final S_Y, S_X, W_X, W_Y, D_Y, D_X, V_X, V_Y. The test data are projected with the obtained projection matrices W_X, W_Y; X_Te, Y_Te denote the test data of the two domains, and Z_X, Z_Y denote the data sets projected by the two domains into the common space. The respective sparse coefficient representation matrices are then obtained according to the dictionaries D_X, D_Y.
10. The sparse coefficient representation matrices are classified using KNN.
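Steps 5 and 6 rely on standard sparse solvers: OMP for the codes S and lasso for the reconstruction matrices V. The following self-contained NumPy sketch implements both as generic routines rather than the K-SVD toolbox's own functions; the names, shapes and the ISTA solver are illustrative assumptions:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: greedy sparse code of x over dictionary D."""
    residual, support = x.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # most correlated atom
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, x, rcond=None)           # refit on the support
        residual = x - Ds @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s

def lasso_ista(D, X, lam, n_iter=200):
    """Solve min ||X - D V||_F^2 + lam * ||V||_1 column-wise by ISTA."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the gradient
    V = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = V - (D.T @ (D @ V - X)) / L          # gradient step
        V = np.sign(G) * np.maximum(np.abs(G) - lam / (2 * L), 0.0)  # soft threshold
    return V
```

In the algorithm above, `omp` would be applied column by column to the projected data to produce S_X, S_Y, and `lasso_ista` plays the role of the lasso solve for V_X, V_Y.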
Contrast experiment:
The micro-expression recognition method based on voice-dictionary sparse transfer learning of the present invention is evaluated on the CASIA corpus, recorded by the Institute of Automation of the Chinese Academy of Sciences, and on the CASME micro-expression database. This experiment selects three classes of samples, happiness, sadness and surprise, from each of the two databases, with 60 samples each.
To verify the influence of different factors on the classification effect, Fig. 1 shows the influence of different dictionary sizes on classification. As can be seen from Fig. 1, the experimental effect differs clearly for different dictionary sizes; the recognition rate of the present invention reaches 76.7%, and the average recognition rate over different dictionary sizes is 71.8%.
To verify the validity of the proposed method, Table 1 gives the comparative experimental results of the algorithm corresponding to the method of the invention and the algorithms corresponding to other methods:
Table 1
Method | LBP-TOP | DTSA | FDM | The present invention
---|---|---|---|---
Recognition rate | 46.7% | 39.7% | 42.6% | 71.4%
To make the experimental results more convincing, the present invention uses leave-one-out cross-validation, carrying out 20 experiments and taking the average.
Claims (2)
1. A micro-expression recognition method based on voice-dictionary sparse transfer learning, characterised in that the recognition method comprises a training stage and a test stage;
The training stage comprises the following steps:
First, features are extracted from the voice domain and the micro-expression domain;
Then, the extracted feature data are grouped: the feature sets of the voice domain and the micro-expression domain are divided into a training set and a test set;
Next, the data of the voice domain and the micro-expression domain are projected into a common space, in which the sparse dictionary of the voice domain and the sparse dictionary of the micro-expression domain are obtained simultaneously, and the sparse dictionary of the voice domain and the sparse dictionary of the micro-expression domain are mutually reconstructed;
Afterwards, through a certain number of iterations and optimization, the following are obtained: the dictionary of the voice domain and the dictionary of the micro-expression domain, the projection matrix of the voice domain, the projection matrix of the micro-expression domain, the reconstruction matrix of the voice domain, the reconstruction matrix of the micro-expression domain, the sparse coefficient representation matrix of the voice domain, and the sparse coefficient representation matrix of the micro-expression domain;
The test phase comprises the following steps:
For given test sets of the voice domain and the micro-expression domain, the feature sets of the two domains are first projected using the projection matrices obtained through training;
Then, using the trained dictionary of the voice domain and the trained dictionary of the micro-expression domain, the features of the two domains projected into the common space are sparsely reconstructed, obtaining the respective sparse coefficient representation matrices of the two domains;
Finally, the sparse coefficient representation matrices of the two domains are classified by the classic machine-learning algorithm, the k-nearest-neighbour classifier KNN.
2. The micro-expression recognition method based on voice-dictionary sparse transfer learning according to claim 1, characterised in that the k-nearest-neighbour classifier KNN is the nearest-neighbour classifier based on Euclidean distance, and its classification method comprises the following steps:
First, feature extraction is performed on the voice domain and the micro-expression domain, obtaining two feature sets, in which each column represents the features of one sample of the voice domain or the micro-expression domain; m_x, m_y denote the feature dimensions of the voice domain and the micro-expression domain respectively, and n_x, n_y denote the sample sizes of the voice domain and the micro-expression domain respectively;
Then, training finds a pair of projection matrices W_X, W_Y that project the features of the two domains into a common space, where they are represented sparsely over dictionaries, i.e.:
where W_X denotes the projection matrix of the voice domain and W_Y the projection matrix of the micro-expression domain; D_X denotes the dictionary of the voice domain and D_Y the dictionary of the micro-expression domain; S_X, S_Y denote the sparse coefficient representation matrices of the voice domain and the micro-expression domain respectively; d denotes the projection dimension of the common space; p_x, p_y denote the dictionary sizes of the voice domain and the micro-expression domain respectively; and I_x, I_y denote the identity matrices of the voice domain and the micro-expression domain respectively;
Then, the data difference between the domains is reduced through reconstruction, expressed as follows:
where V_X, V_Y denote the dictionary reconstruction matrices of the voice set and the micro-expression set respectively, ||d_xi||_2 ≤ 1, ||d_yj||_2 ≤ 1, ||V_X||_1 ≤ τ, ||V_Y||_1 ≤ τ, τ = 0.001, and d_xi, d_yj denote the column vectors of D_X, D_Y respectively;
Finally, the objective function of the micro-expression recognition method based on voice-dictionary sparse transfer learning is as follows:
where ||d_xi||_2 ≤ 1, ||d_yj||_2 ≤ 1, ||V_X||_1 ≤ τ, ||V_Y||_1 ≤ τ, ||S_X||_1 ≤ σ, ||S_Y||_1 ≤ σ, τ = 0.001, σ = 0.001; solving the objective function yields the final W_X, W_Y, D_X, D_Y.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710346931.0A CN106971180B (en) | 2017-05-16 | 2017-05-16 | A kind of micro- expression recognition method based on the sparse transfer learning of voice dictionary |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710346931.0A CN106971180B (en) | 2017-05-16 | 2017-05-16 | A kind of micro- expression recognition method based on the sparse transfer learning of voice dictionary |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106971180A true CN106971180A (en) | 2017-07-21 |
CN106971180B CN106971180B (en) | 2019-05-07 |
Family
ID=59326529
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710346931.0A Active CN106971180B (en) | 2017-05-16 | 2017-05-16 | A kind of micro- expression recognition method based on the sparse transfer learning of voice dictionary |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106971180B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107657964A (en) * | 2017-08-15 | 2018-02-02 | 西北大学 | Depression aided detection method and grader based on acoustic feature and sparse mathematics |
CN108647628A (en) * | 2018-05-07 | 2018-10-12 | 山东大学 | A kind of micro- expression recognition method based on the sparse transfer learning of multiple features multitask dictionary |
CN109409287A (en) * | 2018-10-25 | 2019-03-01 | 山东大学 | A kind of transfer learning method by macro sheet feelings to micro- expression |
CN110097020A (en) * | 2019-05-10 | 2019-08-06 | 山东大学 | A kind of micro- expression recognition method based on joint sparse dictionary learning |
CN111191475A (en) * | 2020-01-06 | 2020-05-22 | 天津工业大学 | Passive behavior identification method based on UHF RFID |
CN113159207A (en) * | 2021-04-28 | 2021-07-23 | 杭州电子科技大学 | Sparse representation classification method based on two-dimensional dictionary optimization |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258204A (en) * | 2012-02-21 | 2013-08-21 | 中国科学院心理研究所 | Automatic micro-expression recognition method based on Gabor features and edge orientation histogram (EOH) features |
CN103440509A (en) * | 2013-08-28 | 2013-12-11 | 山东大学 | Effective micro-expression automatic identification method |
CN104298981A (en) * | 2014-11-05 | 2015-01-21 | 河北工业大学 | Face micro-expression recognition method |
CN106446810A (en) * | 2016-09-12 | 2017-02-22 | 合肥工业大学 | Computer vision method for mental state analysis |
CN106650696A (en) * | 2016-12-30 | 2017-05-10 | 山东大学 | Handwritten electrical element identification method based on singular value decomposition |
- 2017-05-16: Application filed in China as CN201710346931.0A; granted as patent CN106971180B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258204A (en) * | 2012-02-21 | 2013-08-21 | 中国科学院心理研究所 | Automatic micro-expression recognition method based on Gabor features and edge orientation histogram (EOH) features |
CN103440509A (en) * | 2013-08-28 | 2013-12-11 | 山东大学 | Effective micro-expression automatic identification method |
CN104298981A (en) * | 2014-11-05 | 2015-01-21 | 河北工业大学 | Face micro-expression recognition method |
CN106446810A (en) * | 2016-09-12 | 2017-02-22 | 合肥工业大学 | Computer vision method for mental state analysis |
CN106650696A (en) * | 2016-12-30 | 2017-05-10 | 山东大学 | Handwritten electrical element identification method based on singular value decomposition |
Non-Patent Citations (3)
Title |
---|
DEVANGINI PATEL et al.: "Selective Deep Features for Micro-Expression Recognition", 2016 23rd International Conference on Pattern Recognition (ICPR) * |
ZHANG Xuange et al.: "Micro-Expression Recognition Based on the Combination of Optical Flow and LBP-TOP Features", Journal of Jilin University (Information Science Edition) * |
YANG Mingqiang et al.: "A Survey of Automatic Micro-Expression Recognition", Journal of Computer-Aided Design & Computer Graphics * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107657964A (en) * | 2017-08-15 | 2018-02-02 | 西北大学 | Auxiliary depression detection method and classifier based on acoustic features and sparse mathematics |
CN108647628A (en) * | 2018-05-07 | 2018-10-12 | 山东大学 | Micro-expression recognition method based on multi-feature multi-task dictionary sparse transfer learning |
CN108647628B (en) * | 2018-05-07 | 2021-10-26 | 山东大学 | Micro-expression recognition method based on multi-feature multi-task dictionary sparse transfer learning |
CN109409287A (en) * | 2018-10-25 | 2019-03-01 | 山东大学 | Transfer learning method from macro expression to micro expression |
CN109409287B (en) * | 2018-10-25 | 2021-05-14 | 山东大学 | Transfer learning method from macro expression to micro expression |
CN110097020A (en) * | 2019-05-10 | 2019-08-06 | 山东大学 | Micro-expression recognition method based on joint sparse dictionary learning |
CN110097020B (en) * | 2019-05-10 | 2023-04-07 | 山东大学 | Micro-expression recognition method based on joint sparse dictionary learning |
CN111191475A (en) * | 2020-01-06 | 2020-05-22 | 天津工业大学 | Passive behavior identification method based on UHF RFID |
CN113159207A (en) * | 2021-04-28 | 2021-07-23 | 杭州电子科技大学 | Sparse representation classification method based on two-dimensional dictionary optimization |
CN113159207B (en) * | 2021-04-28 | 2024-02-09 | 杭州电子科技大学 | Sparse representation classification method based on two-dimensional dictionary optimization |
Also Published As
Publication number | Publication date |
---|---|
CN106971180B (en) | 2019-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106971180B (en) | Micro-expression recognition method based on voice dictionary sparse transfer learning | |
Bavkar et al. | Multimodal sarcasm detection via hybrid classifier with optimistic logic | |
CN113378632B (en) | Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method | |
CN107145842B (en) | Face recognition method combining LBP feature maps and a convolutional neural network |
CN108564129B (en) | Trajectory data classification method based on generative adversarial networks |
CN110969020B (en) | CNN and attention mechanism-based Chinese named entity identification method, system and medium | |
CN112269868B (en) | Method for applying a machine reading comprehension model based on multi-task joint training |
CN109063649B (en) | Pedestrian re-identification method based on a Siamese pedestrian-alignment residual network |
Khalil-Hani et al. | A convolutional neural network approach for face verification | |
CN109871885A (en) | Plant identification method based on deep learning and plant taxonomy |
CN107908642B (en) | Industry text entity extraction method based on distributed platform | |
CN110309343A (en) | Voiceprint retrieval method based on deep hashing |
CN105930792A (en) | Human action classification method based on video local feature dictionary | |
CN113378563B (en) | Case feature extraction method and device based on genetic variation and semi-supervised learning |
CN104077598A (en) | Emotion recognition method based on speech fuzzy clustering | |
Sun et al. | Text-independent speaker identification based on deep Gaussian correlation supervector | |
Arora et al. | Palmhashnet: Palmprint hashing network for indexing large databases to boost identification | |
Parvathi et al. | Identifying relevant text from text document using deep learning | |
Xie et al. | Learning A Self-Supervised Domain-Invariant Feature Representation for Generalized Audio Deepfake Detection | |
Shen et al. | Multi-scale residual based siamese neural network for writer-independent online signature verification | |
Cheng et al. | Deep attentional fine-grained similarity network with adversarial learning for cross-modal retrieval | |
CN113628640A (en) | Cross-library speech emotion recognition method based on sample equalization and maximum mean difference | |
CN110825852B (en) | Long text-oriented semantic matching method and system | |
CN110148417B (en) | Speaker identification method based on joint optimization of the total variability space and the classifier |
Trabelsi et al. | Comparison between GMM-SVM sequence kernel and GMM: application to speech emotion recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||