CN104616005A - Domain-self-adaptive facial expression analysis method - Google Patents

Domain-self-adaptive facial expression analysis method

Info

Publication number
CN104616005A
CN104616005A
Authority
CN
China
Prior art keywords
training
domain
data
prediction
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201510103956.9A
Other languages
Chinese (zh)
Inventor
丁小羽
王桥
夏睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yi Kai Data Analysis Technique Co Ltd
Original Assignee
Nanjing Yi Kai Data Analysis Technique Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yi Kai Data Analysis Technique Co Ltd filed Critical Nanjing Yi Kai Data Analysis Technique Co Ltd
Priority to CN201510103956.9A priority Critical patent/CN104616005A/en
Publication of CN104616005A publication Critical patent/CN104616005A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a domain-self-adaptive facial expression analysis method and belongs to the field of computer vision and affective computing research. The method aims to solve the problem that domain differences between training and test data hinder prediction accuracy in automatic expression analysis, and is thus better aligned with practical needs. The invention provides a domain-adaptive expression analysis method built around subject domains. The method comprises the following steps: defining a data domain for each test subject; defining the distance between subject domains by constructing auxiliary prediction problems; selecting from the source data set a group of subjects whose data characteristics are similar to those of the test subject to form a training set; and, on the training set, using part of the test subject's data directly in model training by way of weighted co-training, thereby bringing the prediction model closer to the test subject's domain. The method has the advantages that the isolation between training and test data is removed and the prediction model adapts to the test data domain; the method is robust to domain differences and has a wide range of applications.

Description

A domain-adaptive facial expression analysis method
Technical field
The invention belongs to the field of computer vision and affective computing research, and specifically relates to an automatic facial expression analysis method.
Background art
Automatic facial expression analysis is a long-standing research problem in the field of computer vision. The goal of mainstream automatic expression analysis is to extract from images or video a series of facial action units carrying semantic information. The definitions in the FACS manual are usually adopted. FACS (Facial Action Coding System) is a fine-grained labeling system for studying facial expressions proposed by behavioral scientists. The FACS system decomposes facial behavior into a series of expression action units (Action Units, AUs), each of which is associated with one or more facial muscle movements.
Most current expression analysis research assumes that the training (source) data and the test (target) data come from the same data distribution. The following steps are usually adopted for analysis: first, a prediction model is trained on a training (source) data set collected in advance; the prediction model is then applied to the test (target) data set. Video data used for training must carry expression labels, which are usually produced manually by professionally trained annotators.
The video data currently available for training is mostly collected under controlled laboratory imaging conditions, whereas practical applications require testing on images captured in real-world environments. In real-world facial images, variations in appearance, pose, illumination, and other factors often far exceed the range covered by the training data. In this case, the data domain difference between training and test data can no longer be ignored: prediction models trained by traditional algorithms cannot achieve the same performance on test videos as on the training set.
For this reason, the present invention proposes an expression analysis method with domain-adaptation capability. The method assumes that the training data and the test data come from different data domains, and designs corresponding algorithm steps so that the prediction model adapts to the test data domain.
Summary of the invention
The object of the invention is to solve the problem that domain differences between training and test data hinder prediction accuracy in expression analysis, thereby making expression analysis systems better suited to real application environments. The present invention proposes an expression analysis method with domain-adaptation capability. With personal mobile-terminal application environments in mind, we define a data domain for each study subject: a subject's data domain consists of all video data collected from that subject. The method first defines the distance between subject domains by constructing auxiliary prediction problems; this distance definition reflects the correlation between geometric features and appearance features within a subject's domain. Based on the distances between subject domains, we select the subjects in the source data set whose data characteristics are close to those of the test subject to form the training set. On the training set, we adopt weighted co-training to use part of the test subject's video data directly in model training, bringing the prediction model still closer to the test data domain.
Compared with the prior art, the advantages of the invention are: it removes the isolation between training data and test data, so that the expression action prediction model adapts to the test data; and the proposed expression analysis algorithm is more robust to differences between the test and training domains, which widens the practical scope of expression analysis technology.
Brief description of the drawings
Fig. 1: Illustration of facial feature point detection results
Fig. 2: Reference face shape
Fig. 3: Illustration of image alignment results
Fig. 4: Overall algorithm flowchart
Detailed description of the embodiments
The present invention is an automatic facial expression analysis method with domain-adaptation capability. The invention takes the facial action units (Action Units, AUs) defined in FACS as the target of expression analysis. AUs are a set of motion units defined on facial muscle movements; for example, AU 12 denotes raised lip corners, which is semantically roughly equivalent to the action of "smiling". By fully exploiting the correlated yet complementary character of the two classes of facial image features, the proposed method can fully automatically analyze a test subject's video and output, for each frame, a label indicating whether a specific AU occurs.
Facial feature points can be detected with existing techniques. We select the SDM (Supervised Descent Method) technique to detect facial feature points in each frame of the face video. Fig. 1 illustrates the facial feature point detection results.
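For illustration only, the per-frame feature point detection step could look as follows. Since SDM has no single standard open-source implementation, this sketch substitutes dlib's 68-point shape predictor as a hypothetical stand-in; the model file name is an assumption.

```python
import cv2
import dlib
import numpy as np

# Hypothetical stand-in for SDM: dlib's 68-point shape predictor.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(frame_bgr):
    """Return a (68, 2) array of facial feature point coordinates,
    or None if no face is found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float64)
```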
Besides the expression information we care about, a face video also contains influencing factors such as head pose, focal length, shooting angle, and distance. To eliminate the impact of these nuisance factors on expression analysis, we align the facial expression video to a reference face shape. We pre-select a reference face shape of fixed size (200 × 200 pixels), as shown in Fig. 2. For each facial image, we use Procrustes Analysis to compute the optimal (in-plane) scale, rotation, and translation transformation that brings the facial image closest to the reference shape. Procrustes Analysis is a research tool from the field of shape analysis that can be used for shape alignment. Using the computed optimal transformation parameters, we warp the texture of the facial image. This process is called image alignment; it ensures that all facial images participating in training and testing are compared on a unified scale, unaffected by (in-plane) head pose rotation. The aligned facial image and feature points are shown in Fig. 3.
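A minimal sketch of the image alignment step, assuming NumPy and OpenCV, a precomputed 200 × 200 reference shape, and landmarks in the format of the detection sketch above:

```python
import cv2
import numpy as np

def procrustes_transform(src_pts, ref_pts):
    """Least-squares optimal similarity transform (scale, in-plane rotation,
    translation) mapping src_pts onto ref_pts: classical Procrustes analysis.
    Returns a 2x3 matrix usable with cv2.warpAffine."""
    src_mean, ref_mean = src_pts.mean(axis=0), ref_pts.mean(axis=0)
    src_c, ref_c = src_pts - src_mean, ref_pts - ref_mean
    U, S, Vt = np.linalg.svd(src_c.T @ ref_c)   # SVD of cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflection
    S[-1] *= d
    Vt[-1] *= d
    R = (U @ Vt).T                              # optimal rotation
    s = S.sum() / (src_c ** 2).sum()            # optimal isotropic scale
    t = ref_mean - s * R @ src_mean             # optimal translation
    return np.hstack([s * R, t[:, None]])

def align_face(image, landmarks, ref_shape, size=(200, 200)):
    """Warp the face image so its landmarks best match the fixed 200x200
    reference shape (the 'image alignment' step); also returns the
    landmarks mapped into the aligned image."""
    M = procrustes_transform(landmarks, ref_shape)
    aligned = cv2.warpAffine(image, M, size)
    aligned_pts = landmarks @ M[:, :2].T + M[:, 2]
    return aligned, aligned_pts
```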
Following the description of each AU in the FACS manual, we define two categories of features: geometric features and appearance features. Geometric features are a series of geometric measurements, such as angles and distances, computed from the facial feature points, including the opening angle of the mouth corners, the distance from the nose to the eye corners, the height of the eyes, and so on. We denote the geometric features by f_1. Appearance features, in contrast, describe information such as texture, edges, and curve directions in the facial image; we denote them by f_2. The present invention selects the SIFT (Scale Invariant Feature Transform) descriptor as the appearance feature. The extraction positions and scales of SIFT descriptors are usually produced by the SIFT detector; in the present invention we instead select a group of fixed extraction positions determined by the facial feature points, and we also fix the SIFT extraction scale. Combined with the image alignment step described above, this ensures that the SIFT descriptors extracted from all facial images in training and testing are comparable on a unified basis.
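A sketch of extracting f_2 as SIFT descriptors at fixed, landmark-determined positions with one fixed scale, using OpenCV; the particular scale value is an assumed placeholder, since the patent does not state it:

```python
import cv2
import numpy as np

SIFT_SCALE = 16.0            # fixed extraction scale (assumed value)
sift = cv2.SIFT_create()

def appearance_features(aligned_gray, aligned_landmarks):
    """Extract f_2: SIFT descriptors at a fixed set of landmark-determined
    positions on the aligned face image, all at one fixed scale, so the
    descriptors are comparable across all training and test images."""
    keypoints = [cv2.KeyPoint(float(x), float(y), SIFT_SCALE)
                 for (x, y) in aligned_landmarks]
    _, desc = sift.compute(aligned_gray, keypoints)
    return desc.reshape(-1)  # concatenated into a single f_2 vector
```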
Training subject selection based on auxiliary prediction problems:
The input data of the expression analysis method of the present invention comprise a source data set L = {X_s, Y_s} with expression labels and the test subject data U = {X_t}. The goal of the method is to predict the expression labels of the test subject. In L, X_s are facial expression videos collected in advance and Y_s are the corresponding expression labels; Y_s is usually provided manually by professionally trained annotators and records whether a specific AU occurs in each frame. In U, X_t is the face video of the test subject. Here we assume that U contains only one test subject, and we regard U as constituting that subject's data domain.
In the present invention, we propose a new strategy for choosing training subjects in the source data set L. Unlike the prior art, which uses all training data or random sampling, we choose the n subjects in L whose data characteristics are closest to U to form a set L', which is then used to train the prediction model. Through this selection, we change the training data set from L, which is unrelated to U, to L', which is better adapted to U, thereby achieving domain adaptation.
We use the association between the geometric features f_1 and the appearance features f_2 to select the subjects close to U. This strategy is designed mainly on the basis of the following two considerations.
A. The geometric features are tightly associated with the expression action units we aim to predict, and at the same time they are highly abstract and relatively insensitive to subject-specific appearance. In the proposed training subject selection strategy, the geometric features play the role of a "bridge".
B. By analyzing the association pattern between each subject's geometric features and appearance features, we can find the subjects whose characteristics are close to U, and thus learn in data domains close to U.
Specifically, we first use the geometric features f_1 to construct a group of auxiliary prediction problems. Suppose f_1 has l_1 dimensions. For each dimension f_1(i) of f_1, we construct and solve the following auxiliary problem (AP):
A. Construct auxiliary labels. For each sample, construct the auxiliary label Y_1(i) = sign(f_1(i) − mean_L(f_1(i))), where mean_L(f_1(i)) is the mean value of f_1(i) over L and sign(·) denotes the sign operation.
B. For all samples of each subject j, train a linear SVM model using f_2 to predict the auxiliary labels Y_1(i). The weight vector of the trained linear SVM model is denoted w_j(i).
For each subject j, by solving the above l_1 auxiliary problems (APs), we obtain a group of auxiliary prediction model weight vectors {w_j(i)}_i, where each w_j(i) is subject j's linear SVM weight vector in the auxiliary prediction problem constructed from f_1(i). By concatenating this group of weight vectors, we build for each subject an auxiliary prediction model long vector W_j = [w_j(1)^T, w_j(2)^T, ..., w_j(l_1)^T]^T, where l_1 is the dimension of f_1. The dimension of W_j is l_1 × l_2, where l_2 is the dimension of f_2. The auxiliary prediction model long vector W_j represents the association between appearance features and geometric features in the corresponding subject's data, and thus reflects the subject's data characteristics. Therefore, based on the auxiliary prediction model vectors, we can define the distance between any two subjects j and k as the distance between the corresponding vectors W_j and W_k. Specifically, the distance d_jk between subjects j and k is defined as
$$d_{jk} = \sum \lvert W_j - W_k \rvert - \sum \mathrm{sign}(W_j \circ W_k)$$
where Σ sums all elements of the vector and ∘ denotes the Hadamard product, i.e., element-wise multiplication. In this new distance definition, the term Σ|W_j − W_k| examines how close the values of corresponding elements are, while the term Σ sign(W_j ∘ W_k) examines whether the signs of corresponding elements agree. Considering that the physical meaning of W_j is a linear SVM weight vector, in this application of finding subjects with similar data characteristics, sign agreement is even more important than value closeness. Based on the distances between subjects, we choose the n subjects in L closest to U to form the training set L'.
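The construction of W_j and the distance d_jk could be sketched as follows, with scikit-learn's LinearSVC standing in for the linear SVM; handling of degenerate dimensions whose auxiliary labels are all one class is left out for brevity:

```python
import numpy as np
from sklearn.svm import LinearSVC

def auxiliary_model_vector(f1, f2, f1_means):
    """Build W_j for one subject: for each geometric-feature dimension,
    train a linear SVM predicting the auxiliary label sign(f1(i) - mean)
    from the appearance features f2, then concatenate the weight vectors.
    f1: (n, l1), f2: (n, l2), f1_means: per-dimension means over L."""
    weights = []
    for i in range(f1.shape[1]):
        y_aux = np.where(f1[:, i] >= f1_means[i], 1, -1)  # auxiliary labels
        svm = LinearSVC().fit(f2, y_aux)                  # predict Y_1(i) from f2
        weights.append(svm.coef_.ravel())                 # w_j(i)
    return np.concatenate(weights)                        # W_j, length l1 * l2

def subject_distance(W_j, W_k):
    """d_jk = sum|W_j - W_k| - sum sign(W_j o W_k): the first term measures
    value closeness, the second rewards sign agreement of the SVM weights."""
    return np.abs(W_j - W_k).sum() - np.sign(W_j * W_k).sum()

# Training set selection: the n subjects of L closest to the test subject U.
# nearest = sorted(range(len(W_all)),
#                  key=lambda j: subject_distance(W_all[j], W_U))[:n]
```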
Co-training based on sample weighting:
On the selected training set L' and the test subject data U, we extract the geometric and appearance features and use weighted co-training to train a geometric feature predictor and an appearance feature predictor. In the present invention, we propose a strategy for weighting the co-training samples. Through this weighting, the prediction model adapts to the test data domain U more effectively while also gaining robustness.
The predictors are trained as linear SVMs (linear Support Vector Machines). To make the outputs of different SVM models comparable, we apply the method of Platt scaling to process the SVM outputs during training. After this step, the output of each SVM model has a unified probabilistic interpretation, which we use at test time as the prediction confidence.
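For illustration, Platt scaling of a linear SVM corresponds to scikit-learn's sigmoid calibration; the cv value below is an assumed choice:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

def train_platt_svm(X, y, sample_weight=None):
    """Linear SVM whose scores are mapped to probabilities by Platt scaling
    (sigmoid calibration), so the confidences of different models are
    directly comparable."""
    model = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3)
    model.fit(X, y, sample_weight=sample_weight)
    return model  # model.predict_proba(X)[:, 1] is the calibrated confidence
```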
Specifically, we first use linear SVMs to train on L' a geometric feature predictor h_1 and an appearance feature predictor h_2. For convenience of discussion, we call the training sets of h_1 and h_2 L_1' and L_2'. On the input test subject data U, we follow the same procedure as on L': we detect facial feature points and extract the geometric features f_1 and appearance features f_2. Then we execute the following steps in a loop:
A. Apply h_1 to the test subject data U to obtain predicted labels; apply h_2 to U to obtain predicted labels.
B. In U, choose the samples that h_1 predicts with high confidence and that are not yet in L_2', and add them, with h_1's predictions as pseudo-labels, to h_2's training set L_2'. Similarly, choose h_2's high-confidence new samples and add them to L_1'.
C. For each training sample x in L_1' and L_2', assign a training weight Q_x.
D. Retrain h_1 and h_2 based on the training weights Q_x.
This loop terminates when no new high-confidence samples in U can be added to L_1' or L_2'. "High confidence" means that the confidence of h_1 or h_2 when predicting exceeds a preset threshold.
The training sample weight Q_x is calculated from the ratio of positive to negative samples among the samples drawn from U and from the difference between h_1 and h_2. Specifically:
$$Q_x = \begin{cases} 1 + a\,R_{\mathrm{skew}}(x) + b\,\lvert h_1(x) - h_2(x)\rvert & \text{if } x \in U \\ 1 & \text{if } x \in L' \end{cases}$$
where a and b are preset constants greater than zero, chosen by cross-validation on a database. R_skew(x) measures the degree of class imbalance among all samples drawn from U: if the pseudo-label of x is +1, then R_skew(x) is the ratio of the number of negative samples to the number of positive samples among the samples from U; if instead the pseudo-label is −1, then R_skew(x) is the ratio of the number of positive samples to the number of negative samples.
By weighting based on R_skew(x), we give the class with fewer samples more weight when training the prediction model; this strategy balances the influence of the positive and negative samples drawn from U. Meanwhile, by giving more weight to samples with larger |h_1(x) − h_2(x)|, we effectively amplify the complementary effect between the geometric and appearance features.
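A self-contained sketch of the weight calculation Q_x, vectorized over the current training pool; the guards against empty classes are an added assumption:

```python
import numpy as np

def sample_weights(from_U, labels, h1_conf, h2_conf, a, b):
    """Q_x for every sample in the current training pool. from_U is a
    boolean mask marking samples drawn from the test subject U; labels
    holds +1/-1 labels (pseudo-labels for the U samples); h1_conf and
    h2_conf are the two predictors' probabilistic outputs; a, b > 0 are
    chosen by cross-validation. The max(..., 1) guards are an added
    assumption to avoid division by zero."""
    n_pos = max(int((labels[from_U] > 0).sum()), 1)
    n_neg = max(int((labels[from_U] < 0).sum()), 1)
    r_skew = np.where(labels > 0, n_neg / n_pos, n_pos / n_neg)
    q = 1.0 + a * r_skew + b * np.abs(h1_conf - h2_conf)
    return np.where(from_U, q, 1.0)   # Q_x = 1 for samples from L'
```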
Model fusion based on superiority prediction:
After the weighted co-training process completes, we fuse h_1 and h_2 into the final expression analysis prediction model h. The fusion takes the form of a weighted sum. In the present invention, we propose a new weight calculation strategy: the model weights are based on the predicted advantage of the geometric features over the appearance features on U. Concretely, h = v_1 h_1 + v_2 h_2, where v_1 and v_2 are computed from the output of the superiority prediction model h_1v2 on U.
The purpose of the superiority prediction model h_1v2 is to predict the advantage of the geometric features over the appearance features, where "advantage" refers to better performance when performing the expression analysis task on the test subject data U. If h_1v2 predicts with high confidence that the geometric features will perform better on U's expression analysis task, then h_1 is given the larger weight v_1; otherwise, h_2 is given the larger weight v_2.
The superiority prediction model h_1v2 is obtained by binary classification training on L. By performing leave-one-out cross-validation within L, we obtain the respective performance of the geometric feature model and the appearance feature model on each subject j. Taking the subjects on which the geometric feature model has the performance advantage as positive samples, the subjects on which the appearance feature model has the advantage as negative samples, and {W_j}_j as training data, we train the superiority prediction model h_1v2 with a linear SVM. We apply Platt scaling during training so that the output of h_1v2 lies in [0, 1] and represents the probability that the geometric features have the performance advantage.
At test time, we apply h_1v2 to U and set the model fusion weights based on its output: v_1 = h_1v2(U) + λ, v_2 = 1 − h_1v2(U) + λ, where λ is a preset regularization term whose purpose is to avoid the influence of either feature being completely suppressed. λ is usually set to 1.
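A sketch of the fusion step; feeding the test subject's own auxiliary model vector W_U to h_1v2 is an assumption consistent with h_1v2 being trained on the {W_j}:

```python
def fusion_weights(h_1v2, W_U, lam=1.0):
    """v_1 = h_1v2(U) + lambda, v_2 = 1 - h_1v2(U) + lambda. h_1v2 is the
    Platt-scaled superiority model trained on the {W_j}; its output is the
    probability that the geometric-feature model has the advantage on U."""
    p = float(h_1v2.predict_proba(W_U.reshape(1, -1))[0, 1])
    return p + lam, 1.0 - p + lam

# Final per-frame prediction: h(x) = v_1 * h_1(f_1(x)) + v_2 * h_2(f_2(x))
```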
Finally, we apply h on U to obtain the expression analysis results. The complete algorithm is described as follows:
Data: source data set L = {X_s, Y_s} with expression labels, and test subject data U = {X_t}
Result: predicted expression labels for the test subject
1. Following the FACS descriptions of the selected facial action units, define the geometric features f_1 and the appearance features f_2
2. Choose the subjects in L that are close to the test subject U, forming the training set
2.1 For each dimension f_1(i) of f_1, construct an auxiliary prediction problem and train a linear SVM prediction model from f_2
2.2 For each subject j, build the auxiliary prediction model long vector W_j = [w_j(1)^T, w_j(2)^T, ..., w_j(l_1)^T]^T, where w_j(i) is subject j's linear SVM weight vector in the auxiliary prediction problem built from f_1(i), and l_1 is the dimension of f_1
2.3 Compute the distances between all subjects in L and U, with the distance between subjects j and k defined as the distance between W_j and W_k
2.4 Select the n subjects in L with the smallest distance to U, forming the training set L'
3. Train the prediction models h_1 and h_2 by weighted co-training
On L', train the prediction model h_1 from f_1 and the prediction model h_2 from f_2
Denote the training sets of h_1 and h_2 by L_1' and L_2' respectively
while new high-confidence samples in U can still be added to the training sets L_1' and L_2' do
{
Apply h_1 on U and add its high-confidence samples to L_2'; similarly, add h_2's high-confidence samples to L_1'
For each training sample x in L_1' and L_2', assign a weight Q_x according to the proposed weight calculation strategy
Retrain h_1 and h_2 based on the sample weights Q_x
}
4. Fuse into the final prediction model h and output the predicted expression labels
Apply the pre-trained geometric feature superiority prediction model h_1v2 on U and compute the weights v_1 and v_2
Fuse the prediction models: h = v_1 h_1 + v_2 h_2
Output the predicted expression labels
The overall flowchart is shown in Fig. 4. Because the method of the invention has domain adaptability, it no longer requires the test data and the training data to come from the same distribution; the invention therefore better matches realistic scenarios and has wide potential applications in the fields of human-computer interaction and affective computing.
The above describes a preferred embodiment of the present invention. It should be noted that, without departing from the principles of the invention, improvements and modifications made by those skilled in this research field should also be considered within the protection scope of the present invention.

Claims (6)

1. A domain-adaptive facial expression analysis method, characterized by the following steps:
(1) following the descriptions of the expression action units in the FACS manual, defining the facial geometric features f_1 and appearance features f_2;
(2) constructing auxiliary prediction problems, in which labels are computed from the geometric features f_1 and predicted from the appearance features f_2, and recording the prediction parameter vectors;
(3) defining the distance between subjects through the prediction parameter vectors, the distance definition comprising a value-closeness part and a sign-agreement part, and selecting from the training set L the training subjects closest in distance to the test subject U to form the training set L';
(4) training the prediction models h_1 and h_2 by weighted co-training, the training sample weights being calculated from two factors: the ratio of positive to negative sample counts, and the absolute difference between the scores of h_1 and h_2;
(5) using the pre-trained superiority prediction model h_1v2 to fuse h_1 and h_2 into the model h, and outputting the final prediction results.
2. The method according to claim 1, characterized in that:
the geometric features and the appearance features each have independent expression analysis capability while at the same time being complementary.
3. The method according to claim 1, characterized in that the feature definition in step (1) further comprises:
(1.1) the geometric features consist of geometric measurements such as angles and lengths, computed from the facial feature points;
(1.2) the appearance features are SIFT descriptors with fixed positions and scale.
4. The method according to claim 1, characterized in that the weighted co-training samples in step (4) further comprise: in the weight calculation, the ratio of positive to negative sample counts is computed over all samples from the test subject U and is denoted R_skew(x).
5. The method according to claim 1, characterized in that the weighted co-training samples in step (4) further comprise: the specific weight calculation formula is
$$Q_x = \begin{cases} 1 + a\,R_{\mathrm{skew}}(x) + b\,\lvert h_1(x) - h_2(x)\rvert & \text{if } x \in U \\ 1 & \text{if } x \in L' \end{cases}$$
where a and b are preset constants greater than zero, chosen by cross-validation on a database.
6. The method according to claim 1, characterized in that step (5) further comprises: using the pre-trained superiority prediction model h_1v2 to predict the probability that the geometric features f_1 have the advantage over the appearance features f_2 on the test subject U, and fusing the geometric feature model h_1 and the appearance feature model h_2 accordingly.
CN201510103956.9A 2015-03-10 2015-03-10 Domain-self-adaptive facial expression analysis method Withdrawn CN104616005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510103956.9A CN104616005A (en) 2015-03-10 2015-03-10 Domain-self-adaptive facial expression analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510103956.9A CN104616005A (en) 2015-03-10 2015-03-10 Domain-self-adaptive facial expression analysis method

Publications (1)

Publication Number Publication Date
CN104616005A true CN104616005A (en) 2015-05-13

Family

ID=53150442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510103956.9A Withdrawn CN104616005A (en) 2015-03-10 2015-03-10 Domain-self-adaptive facial expression analysis method

Country Status (1)

Country Link
CN (1) CN104616005A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299238A * 2008-07-01 2008-11-05 Shandong University Fast fingerprint image segmentation method based on co-training
US20140185925A1 * 2013-01-02 2014-07-03 International Business Machines Corporation Boosting object detection performance in videos
CN104102917A * 2014-07-03 2014-10-15 China University of Petroleum (Beijing) Construction method of domain self-adaptive classifier, construction device for domain self-adaptive classifier, data classification method and data classification device
CN104318242A * 2014-10-08 2015-01-28 Air Force Engineering University of PLA Efficient SVM active semi-supervised learning algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MINMIN CHEN et al.: "Co-Training for Domain Adaptation", NIPS '11: Proceedings of the 24th International Conference on Neural Information Processing Systems *
HU Kongbing: "Research on transductive transfer learning methods based on self-learning", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279380A * 2015-11-05 2016-01-27 Southeast University Facial expression analysis-based depression degree automatic evaluation system
CN105279380B * 2015-11-05 2018-06-19 Southeast University Automatic depression degree evaluation system based on expression analysis
CN106469560A * 2016-07-27 2017-03-01 Jiangsu University Speech emotion recognition method based on unsupervised domain adaptation
CN106469560B * 2016-07-27 2020-01-24 Jiangsu University Voice emotion recognition method based on unsupervised domain adaptation
CN107633203A * 2017-08-17 2018-01-26 Ping An Technology (Shenzhen) Co., Ltd. Facial emotion recognition method, device and storage medium
WO2019085495A1 * 2017-11-01 2019-05-09 Shenzhen Kemai Aikang Technology Co., Ltd. Micro-expression recognition method, apparatus and system, and computer-readable storage medium
CN109615674A * 2018-11-28 2019-04-12 Zhejiang University Dynamic dual-tracer PET reconstruction method based on a mixed-loss-function 3D CNN
CN109902720A * 2019-01-25 2019-06-18 Tongji University Image classification and recognition method based on subspace decomposition for depth feature estimation
CN109902720B * 2019-01-25 2020-11-27 Tongji University Image classification and identification method for depth feature estimation based on subspace decomposition
CN113505717A * 2021-07-17 2021-10-15 Guilin University of Technology Online passing system based on face and facial feature recognition technology

Similar Documents

Publication Publication Date Title
CN110276316B (en) Human body key point detection method based on deep learning
CN111553193B (en) Visual SLAM closed-loop detection method based on lightweight deep neural network
CN104616005A (en) Domain-self-adaptive facial expression analysis method
CN111126258B (en) Image recognition method and related device
EP3757905A1 (en) Deep neural network training method and apparatus
CN108765383B (en) Video description method based on deep migration learning
CN109961034A (en) Video object detection method based on convolution gating cycle neural unit
CN111709409A (en) Face living body detection method, device, equipment and medium
CN105160400A (en) L21 norm based method for improving convolutional neural network generalization capability
CN110796026A (en) Pedestrian re-identification method based on global feature stitching
CN107122375A (en) The recognition methods of image subject based on characteristics of image
CN106203483B (en) A kind of zero sample image classification method based on semantic related multi-modal mapping method
CN111325750B (en) Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
CN106023257A (en) Target tracking method based on rotor UAV platform
CN112949408B (en) Real-time identification method and system for target fish passing through fish channel
CN110738132B (en) Target detection quality blind evaluation method with discriminant perception capability
CN113111968A (en) Image recognition model training method and device, electronic equipment and readable storage medium
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN111738355A (en) Image classification method and device with attention fused with mutual information and storage medium
CN115690549A (en) Target detection method for realizing multi-dimensional feature fusion based on parallel interaction architecture model
CN111008570B (en) Video understanding method based on compression-excitation pseudo-three-dimensional network
CN111967399A (en) Improved fast RCNN behavior identification method
CN115222954A (en) Weak perception target detection method and related equipment
CN110751005B (en) Pedestrian detection method integrating depth perception features and kernel extreme learning machine
CN116958740A (en) Zero sample target detection method based on semantic perception and self-adaptive contrast learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room R539, Building 2, Zijin Yougu, No. 12 Qizhou East Road, Jiangning District, Nanjing City, Jiangsu Province, 211100

Applicant after: Nanjing Yi Kai data analysis technique company limited

Address before: Room 5310, Dongshan International Enterprise R&D Park, Block 33 A, Dongshan Street, Jiangning District, Nanjing, Jiangsu Province, 211100

Applicant before: Nanjing Yi Kai data analysis technique company limited

WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20150513