CN109558816A - A pattern recognition method based on multi-feature representation - Google Patents

A pattern recognition method based on multi-feature representation

Info

Publication number
CN109558816A
CN109558816A CN201811368221.9A
Authority
CN
China
Prior art keywords
coefficient
special
feature
dictionary
shared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811368221.9A
Other languages
Chinese (zh)
Inventor
杨猛
柯康银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
National Sun Yat Sen University
Original Assignee
National Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Sun Yat Sen University
Priority to CN201811368221.9A
Publication of CN109558816A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/28 - Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to the field of artificial intelligence and, more specifically, to a pattern recognition method based on multi-feature representation. The invention separates the representation coefficients into a shared coefficient and specific coefficients, so that the similarity and the particularity of multiple features are strictly distinguished. This provides more effective conditions for fully exploiting the similarity and particularity of multiple features and acts directly on the final classifier, improving recognition and classification performance. The invention also introduces a similarity weighting term, which makes the model robust to feature outliers.

Description

A pattern recognition method based on multi-feature representation
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a pattern recognition method based on multi-feature representation.
Background technique
In computer vision and pattern recognition, multiple features bring more valuable information to a sample, so how to combine multiple features effectively to improve recognition tasks is an important topic in both academia and industry. The similarity and the particularity of multiple features are the main issues to be considered. On the one hand, the similarity of different features is the information they share, which must be fully exploited to keep the recognition task stable. On the other hand, the particularity among multiple features brings additional valuable information, which must also be fully exploited to improve the performance of the recognition task.
Existing multi-feature joint classification methods based on sparse representation combine multiple features effectively to some extent and have achieved certain results. For example, Yuan et al. proposed multi-task joint sparse representation classification (MTJSRC) in 2010, Yang et al. proposed relaxed collaborative representation (RCR) in 2012, and Li et al. proposed joint similar and specific learning (JSSL) in 2017. However, MTJSRC assumes that all features have the same weight, ignoring the fact that different features have different discriminative power in practice. RCR introduces a weighted within-class coefficient regularization term, but this term does not act directly on the final classifier. Although JSSL achieves good results, it is unnecessary for it to keep RCR's within-class coefficient regularization while also reintroducing feature-specific coefficients, because the within-class coefficient regularization already captures the feature-specific coefficients to a certain extent. In addition, JSSL does not consider robustness to outliers, although its specific representation can tolerate noise to some degree.
Therefore, multi-feature joint classification based on sparse representation still has many shortcomings, such as how to discriminate among different features, how to fully exploit the similarity and particularity of multiple features, and the high complexity of the algorithms.
Summary of the invention
To overcome the inability of the prior art to strictly distinguish the similarity and the particularity of multiple features, the present invention provides a pattern recognition method based on multi-feature representation. The method separates the representation coefficients into a shared coefficient and specific coefficients, so that the similarity and the particularity of multiple features are strictly distinguished; this provides more effective conditions for fully exploiting the similarity and particularity of multiple features and acts directly on the final classifier, thereby improving recognition and classification performance. The invention also introduces a similarity weighting term, which makes the model robust to feature outliers.
To achieve the above object, the following technical solution is adopted:
A pattern recognition method based on multi-feature representation, comprising the following steps:
Step S1: construct the shared and specific representation model, expressed as follows:
where K denotes the number of features and τ, λ1 and λ2 are constant parameters; y^k denotes the k-th feature of the query sample, y_n^k is a scalar, i.e. one element of the vector y^k, and n denotes the dimension of this feature vector;
D^k denotes the dictionary of the k-th feature; each of its columns is a feature vector of dimension n, and m indexes the m-th training sample;
α^c is the common (shared) coefficient of the query sample with respect to every feature dictionary; α_m^c denotes the common coefficient associated with the m-th training sample and is a scalar; c marks the common coefficients;
α_s^k is the specific coefficient for the k-th feature dictionary, and ω_k is the weight of the k-th feature; α_{s,m}^k denotes the specific coefficient of the k-th feature associated with the m-th training sample; s marks the specific coefficients;
Step S2: initialize the shared coefficient α^c, the specific coefficients α_s^k and the weights ω_k by setting α^c = 0, α_s^k = 0 and ω_k = 0;
Step S3: perform alternating iterations on the model, updating the shared coefficient α^c, the specific coefficients α_s^k and the weights ω_k, until the whole model converges to a local minimum;
Step S4: on the basis of the obtained shared coefficient α^c, specific coefficients α_s^k and weights ω_k, determine the label of the test sample by the minimal reconstruction error:
where D_{k,j} is the sub-dictionary of D^k belonging to class j, α_j^c is the shared coefficient corresponding to the sub-dictionary D_{k,j}, and α_{s,j}^k is the specific coefficient for the sub-dictionary D_{k,j}.
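The formulas of steps S1 and S4 appear only as images in the published patent. Based solely on the symbol definitions above, one plausible reconstruction of the objective and of the decision rule is sketched below; the exact pairing of τ, λ1, λ2 (and of γ from step S305) with the individual terms, and the presence of a weight-sum constraint, are assumptions rather than the verbatim patent formulas.

```latex
% Hedged reconstruction of the shared-and-specific representation model:
\min_{\alpha^c,\{\alpha_s^k\},\{\omega_k\}}
  \sum_{k=1}^{K} \omega_k \bigl\| y^k - D^k(\alpha^c + \alpha_s^k) \bigr\|_2^2
  + \lambda_1 \|\alpha^c\|_1
  + \lambda_2 \sum_{k=1}^{K} \|\alpha_s^k\|_1
  + \tau \sum_{k=1}^{K} \omega_k \ln \omega_k ,
\qquad \text{s.t.}\ \sum_{k=1}^{K} \omega_k = 1 .

% Assumed form of the step-S4 minimal-reconstruction-error rule:
\operatorname{label}(y) = \arg\min_j \sum_{k=1}^{K}
  \omega_k \bigl\| y^k - D_{k,j}\bigl(\alpha^c_j + \alpha^k_{s,j}\bigr) \bigr\|_2^2 .
```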
Preferably, step S3 comprises the following specific steps:
Step S301: update the shared coefficient α^c with the specific coefficients α_s^k and the weights ω_k fixed; the model function is then expressed in the following form:
Step S302: merge the K functions;
since the first two terms are differentiable, the objective can be rewritten as:
where F(α^c) denotes the first two terms of the objective function; since F(α^c) is differentiable, α^c can be solved with the iterative projection method (IPM);
Step S303: update the specific coefficients α_s^k with the shared coefficient α^c and the weights ω_k fixed; the model function can then be expressed in the following form:
since the first term of the objective function is differentiable, α_s^k can be solved with the iterative projection method (IPM);
Step S304: update the weights ω_k with the shared coefficient α^c and the specific coefficients α_s^k fixed; under the maximum entropy principle, the model function can then be expressed in the following form:
Step S305: obtain the weights by differentiation:
where γ is a constant that constrains the maximum entropy.
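The closed-form weight update of step S305 is likewise an image in the published patent. Under a maximum-entropy regularizer with constant γ and per-feature reconstruction error e_k, the standard closed form would be the following; treating it as the patent's exact S305 formula is an assumption.

```latex
% Assumed closed form for step S305, with
% e_k = \| y^k - D^k(\alpha^c + \alpha_s^k) \|_2^2 :
\omega_k = \frac{\exp\!\left(-e_k/\gamma\right)}
                {\sum_{j=1}^{K} \exp\!\left(-e_j/\gamma\right)} .
```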
Compared with the prior art, the beneficial effects of the present invention are:
(1) In the multi-feature joint representation method, the coefficients are decomposed into shared coefficients and specific coefficients, which fully exploits the similarity and particularity of multiple features and acts directly on the final classifier, improving the classification performance.
(2) The present invention introduces a feature similarity weighting term, which adaptively learns a suitable weight for each feature and acts on the final classification, so that features with higher discriminative power are effectively exploited and the model is robust to feature outliers.
Description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 shows the face images selected for the test set.
Fig. 3 shows the partitioning of an image.
Fig. 4 compares the recognition accuracy (%).
Fig. 5 shows images from the LFW training set.
Fig. 6 shows images from the LFW test set.
Fig. 7 compares the recognition accuracy (%).
Detailed description of the embodiments
The attached figures are for illustrative purposes only and shall not be construed as limiting the patent.
The present invention is further described below with reference to the drawings and embodiments.
Embodiment 1
As shown in Fig. 1, a pattern recognition method based on multi-feature representation comprises the following steps:
Step S1: construct the shared and specific representation model, expressed as follows:
where K denotes the number of features and τ, λ1 and λ2 are constant parameters; y^k denotes the k-th feature of the query sample, y_n^k is a scalar, i.e. one element of the vector y^k, and n denotes the dimension of this feature vector;
D^k denotes the dictionary of the k-th feature; each of its columns is a feature vector of dimension n, and m indexes the m-th training sample;
α^c is the common (shared) coefficient of the query sample with respect to every feature dictionary; α_m^c denotes the common coefficient associated with the m-th training sample and is a scalar; c marks the common coefficients;
α_s^k is the specific coefficient for the k-th feature dictionary, and ω_k is the weight of the k-th feature; α_{s,m}^k denotes the specific coefficient of the k-th feature associated with the m-th training sample; s marks the specific coefficients;
Step S2: initialize the shared coefficient α^c, the specific coefficients α_s^k and the weights ω_k by setting α^c = 0, α_s^k = 0 and ω_k = 0;
Step S3: perform alternating iterations on the model, updating the shared coefficient α^c, the specific coefficients α_s^k and the weights ω_k, until the whole model converges to a local minimum;
Step S4: on the basis of the obtained shared coefficient α^c, specific coefficients α_s^k and weights ω_k, determine the label of the test sample by the minimal reconstruction error:
where D_{k,j} is the sub-dictionary of D^k belonging to class j, α_j^c is the shared coefficient corresponding to the sub-dictionary D_{k,j}, and α_{s,j}^k is the specific coefficient for the sub-dictionary D_{k,j}.
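A minimal Python sketch of the step-S4 decision rule, assuming the weighted class-wise reconstruction error reconstructed earlier; the function name, argument layout (one dictionary matrix per feature, columns labelled by class) and the use of the weights ω_k in the error are illustrative assumptions, not text from the patent.

```python
import numpy as np

def classify_min_reconstruction(y, dicts, class_ids, alpha_c, alpha_s, omega):
    """Step S4 sketch: pick the class whose sub-dictionaries give the smallest
    weighted reconstruction error.

    y         : list of K feature vectors of the test sample, y[k] of shape (n_k,)
    dicts     : list of K dictionaries, dicts[k] of shape (n_k, m)
    class_ids : array of length m with the class label of each dictionary column
    alpha_c   : shared coefficient vector, shape (m,)
    alpha_s   : list of K specific coefficient vectors, each of shape (m,)
    omega     : array of K feature weights
    """
    classes = np.unique(class_ids)
    errors = []
    for j in classes:
        cols = class_ids == j                        # columns of the class-j sub-dictionary
        err_j = 0.0
        for k in range(len(dicts)):
            coef_j = alpha_c[cols] + alpha_s[k][cols]   # shared + specific parts for class j
            residual = y[k] - dicts[k][:, cols] @ coef_j
            err_j += omega[k] * np.sum(residual ** 2)
        errors.append(err_j)
    return classes[int(np.argmin(errors))]
```

In a face-recognition setting such as the embodiments below, `class_ids` would simply label each training column with its subject identity.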
Preferably, step S3 comprises the following specific steps:
Step S301: update the shared coefficient α^c with the specific coefficients α_s^k and the weights ω_k fixed; the model function is then expressed in the following form:
Step S302: merge the K functions;
since the first two terms are differentiable, the objective can be rewritten as:
where F(α^c) denotes the first two terms of the objective function; since F(α^c) is differentiable, α^c can be solved with the iterative projection method (IPM);
Step S303: update the specific coefficients α_s^k with the shared coefficient α^c and the weights ω_k fixed; the model function can then be expressed in the following form:
since the first term of the objective function is differentiable, α_s^k can be solved with the iterative projection method (IPM);
Step S304: update the weights ω_k with the shared coefficient α^c and the specific coefficients α_s^k fixed; under the maximum entropy principle, the model function can then be expressed in the following form:
Step S305: obtain the weights by differentiation:
where γ is a constant that constrains the maximum entropy.
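The following Python sketch shows one way the alternating updates S301 to S305 could be organised. It substitutes a proximal-gradient (ISTA-style) step for the iterative projection method (IPM) named in the patent, and uses the assumed maximum-entropy weight formula; the step sizes, iteration counts, uniform weight initialisation and the roles of λ1, λ2 and γ are all assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fit_shared_specific(y, dicts, lam1=0.01, lam2=0.01, gamma=1.0,
                        n_outer=50, n_inner=20):
    """Alternating updates for steps S2/S3 (sketch).

    y     : list of K query feature vectors, y[k] of shape (n_k,)
    dicts : list of K dictionaries, dicts[k] of shape (n_k, m)
    Returns the shared coefficient, the K specific coefficients and the weights.
    """
    K = len(dicts)
    m = dicts[0].shape[1]
    alpha_c = np.zeros(m)                       # step S2: shared coefficient
    alpha_s = [np.zeros(m) for _ in range(K)]   # step S2: specific coefficients
    omega = np.full(K, 1.0 / K)                 # uniform starting weights (assumption)

    for _ in range(n_outer):
        # S301/S302: update alpha_c with alpha_s and omega fixed
        # (ISTA step on a half-scaled weighted least-squares term plus lam1 * l1)
        L = sum(omega[k] * np.linalg.norm(dicts[k], 2) ** 2 for k in range(K)) + 1e-8
        for _ in range(n_inner):
            grad = sum(omega[k] * dicts[k].T @ (dicts[k] @ (alpha_c + alpha_s[k]) - y[k])
                       for k in range(K))
            alpha_c = soft_threshold(alpha_c - grad / L, lam1 / L)

        # S303: update each alpha_s[k] with alpha_c and omega fixed
        for k in range(K):
            Lk = omega[k] * np.linalg.norm(dicts[k], 2) ** 2 + 1e-8
            for _ in range(n_inner):
                grad = omega[k] * dicts[k].T @ (dicts[k] @ (alpha_c + alpha_s[k]) - y[k])
                alpha_s[k] = soft_threshold(alpha_s[k] - grad / Lk, lam2 / Lk)

        # S304/S305: closed-form maximum-entropy weight update (assumed form)
        err = np.array([np.sum((y[k] - dicts[k] @ (alpha_c + alpha_s[k])) ** 2)
                        for k in range(K)])
        omega = np.exp(-(err - err.min()) / gamma)   # shift for numerical stability
        omega /= omega.sum()

    return alpha_c, alpha_s, omega
```

In practice the fixed iteration counts would be replaced by a convergence check on the objective, matching the "until the whole model converges to a local minimum" stopping condition of step S3.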
Embodiment 2
As shown in Fig. 1, Fig. 2, Fig. 3 and Fig. 4, this embodiment provides a specific test. The test procedure is as follows:
(1) Recognition is first carried out on the AR dataset using face images with occlusions.
(2) The training set consists of 800 face images from the AR dataset that contain only expression variations; the test set consists of 200 face images occluded by sunglasses (or 200 occluded by scarves), as shown in Fig. 2.
(3) All images are resized to 83×64 and then partitioned into 8 blocks of 20×30, as shown in Fig. 3. Each block is reshaped into a 600-dimensional vector and treated as one feature (a code sketch of this partitioning is given after this list). K equals 8, and D^k is the 600×800 matrix formed from the training set.
(4) For each test sample there are 8 feature vectors y^k of 600 dimensions. The shared coefficient α^c, the specific coefficients α_s^k and the weights ω_k are obtained with the optimization algorithm described above.
(5) The label of the test sample is finally obtained by the minimal reconstruction error.
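A small sketch of the block-based feature construction in item (3). The patent states a resize to 83×64 and eight 20×30 blocks but not the exact block grid, so a 4×2 layout over the top-left 80×60 region is assumed here; the function name is illustrative.

```python
import numpy as np

def block_features(image_83x64):
    """Cut a resized 83x64 face image into 8 non-overlapping 20x30 blocks and
    flatten each block into a 600-dimensional feature vector (one per feature)."""
    feats = []
    for r in range(4):                     # 4 rows of blocks, each 20 pixels high
        for c in range(2):                 # 2 columns of blocks, each 30 pixels wide
            block = image_83x64[20 * r:20 * (r + 1), 30 * c:30 * (c + 1)]
            feats.append(block.reshape(-1).astype(np.float64))   # 600-dim vector
    return feats                           # K = 8 feature vectors per image
```

The dictionary D^k of this embodiment would then be assembled by stacking the k-th block vector of all 800 training images as its columns.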
The comparison of the final recognition rates is shown in Fig. 4. The recognition rate of the present invention is higher than those of MTJSRC, RCR and JSSL, which shows that the invention is very robust to the occlusion outliers in the images.
Embodiment 3
As shown in Fig. 1, Fig. 5, Fig. 6 and Fig. 7, this embodiment uses a subset of LFW containing 143 subjects, each with at least 11 face images. For each person, the first 11 images are used as the training set, as shown in Fig. 5, and the remaining images are used for testing, as shown in Fig. 6. For every face image we extract gray-level map features, Fourier features, Gabor features and LBP features (a feature-extraction sketch is given below). The rest of the experimental procedure is identical to the experiment above.
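A hedged sketch of how the four feature types of this embodiment could be extracted with NumPy and scikit-image; the Gabor frequency, LBP settings, histogram size and the use of the Fourier magnitude are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

def lfw_features(gray_image):
    """Extract gray-level map, Fourier, Gabor and LBP features from a 2-D
    uint8 grayscale face image (parameters are illustrative)."""
    img = gray_image.astype(np.float64)
    gray_feat = img.reshape(-1)                                   # gray-level map
    fourier_feat = np.abs(np.fft.fft2(img)).reshape(-1)           # Fourier magnitude
    gabor_real, _ = gabor(img, frequency=0.25)                    # one Gabor filter response
    gabor_feat = gabor_real.reshape(-1)
    lbp = local_binary_pattern(gray_image, P=8, R=1, method="uniform")
    lbp_feat, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)  # LBP histogram
    return [gray_feat, fourier_feat, gabor_feat, lbp_feat]
```

Each extracted vector plays the role of one y^k, so K = 4 in this embodiment.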
The experimental results are shown in Fig. 7. The classification recognition rate obtained by the multi-feature joint representation algorithm of the present invention is higher than that of the other methods.
Obviously, the above embodiments are merely examples given to clearly illustrate the present invention and are not intended to limit its embodiments. For those of ordinary skill in the art, other variations or changes in different forms can be made on the basis of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (2)

1. A pattern recognition method based on multi-feature representation, characterized by comprising the following steps:
Step S1: constructing a shared and specific representation model, expressed as follows:
wherein K denotes the number of features and τ, λ1 and λ2 are constant parameters; y^k denotes the k-th feature of the query sample, y_n^k is a scalar, i.e. one element of the vector y^k, and n denotes the dimension of this feature vector;
D^k denotes the dictionary of the k-th feature; each of its columns is a feature vector of dimension n, and m indexes the m-th training sample;
α^c is the common (shared) coefficient of the query sample with respect to every feature dictionary; α_m^c denotes the common coefficient associated with the m-th training sample and is a scalar; c marks the common coefficients;
α_s^k is the specific coefficient for the k-th feature dictionary, and ω_k is the weight of the k-th feature; α_{s,m}^k denotes the specific coefficient of the k-th feature associated with the m-th training sample; s marks the specific coefficients;
Step S2: initializing the shared coefficient α^c, the specific coefficients α_s^k and the weights ω_k by setting α^c = 0, α_s^k = 0 and ω_k = 0;
Step S3: performing alternating iterations on the model, updating the shared coefficient α^c, the specific coefficients α_s^k and the weights ω_k, until the whole model converges to a local minimum;
Step S4: on the basis of the obtained shared coefficient α^c, specific coefficients α_s^k and weights ω_k, determining the label of the test sample by the minimal reconstruction error:
wherein D_{k,j} is the sub-dictionary of D^k belonging to class j, α_j^c is the shared coefficient corresponding to the sub-dictionary D_{k,j}, and α_{s,j}^k is the specific coefficient for the sub-dictionary D_{k,j}.
2. The pattern recognition method based on multi-feature representation according to claim 1, characterized in that step S3 comprises the following specific steps:
Step S301: updating the shared coefficient α^c with the specific coefficients α_s^k and the weights ω_k fixed; the model function is then expressed in the following form:
Step S302: merging the K functions;
since the first two terms are differentiable, the objective can be rewritten as:
wherein F(α^c) denotes the first two terms of the objective function; since F(α^c) is differentiable, α^c can be solved with the iterative projection method (IPM);
Step S303: updating the specific coefficients α_s^k with the shared coefficient α^c and the weights ω_k fixed; the model function can then be expressed in the following form:
since the first term of the objective function is differentiable, α_s^k can be solved with the iterative projection method (IPM);
Step S304: updating the weights ω_k with the shared coefficient α^c and the specific coefficients α_s^k fixed; under the maximum entropy principle, the model function can then be expressed in the following form:
wherein γ is a constant that constrains the maximum entropy;
Step S305: obtaining the weights by differentiation.
CN201811368221.9A 2018-11-16 2018-11-16 A pattern recognition method based on multi-feature representation Pending CN109558816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811368221.9A CN109558816A (en) 2018-11-16 2018-11-16 A pattern recognition method based on multi-feature representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811368221.9A CN109558816A (en) 2018-11-16 2018-11-16 A pattern recognition method based on multi-feature representation

Publications (1)

Publication Number Publication Date
CN109558816A true CN109558816A (en) 2019-04-02

Family

ID=65866569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811368221.9A Pending CN109558816A (en) A pattern recognition method based on multi-feature representation

Country Status (1)

Country Link
CN (1) CN109558816A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178196A (en) * 2019-12-19 2020-05-19 东软集团股份有限公司 Method, device and equipment for cell classification
CN111178196B (en) * 2019-12-19 2024-01-23 东软集团股份有限公司 Cell classification method, device and equipment

Similar Documents

Publication Publication Date Title
Rafique et al. Scene understanding and recognition: statistical segmented model using geometrical features and Gaussian naïve bayes
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
Wang et al. Face recognition based on deep learning
CN110443189B (en) Face attribute identification method based on multitask multi-label learning convolutional neural network
Hou et al. Improving variational autoencoder with deep feature consistent and generative adversarial training
CN111310668B (en) Gait recognition method based on skeleton information
CN108334816A (en) The Pose-varied face recognition method of network is fought based on profile symmetry constraint production
CN110348330A (en) Human face posture virtual view generation method based on VAE-ACGAN
Guo et al. Facial expression recognition influenced by human aging
Liu et al. Human motion estimation from a reduced marker set
Zheng et al. Attention-based spatial-temporal multi-scale network for face anti-spoofing
Xu et al. Combining skeletal pose with local motion for human activity recognition
CN109685724A (en) A kind of symmetrical perception facial image complementing method based on deep learning
CN109726619A (en) A kind of convolutional neural networks face identification method and system based on parameter sharing
Paul et al. Extraction of facial feature points using cumulative histogram
CN108564061A (en) A kind of image-recognizing method and system based on two-dimensional principal component analysis
Huang et al. A parallel architecture of age adversarial convolutional neural network for cross-age face recognition
CN102592150A (en) Gait identification method of bidirectional two-dimensional principal component analysis based on fuzzy decision theory
Du et al. Age factor removal network based on transfer learning and adversarial learning for cross-age face recognition
Sulong et al. HUMAN ACTIVITIES RECOGNITION VIA FEATURES EXTRACTION FROM SKELETON.
CN114782979A (en) Training method and device for pedestrian re-recognition model, storage medium and terminal
Su et al. MMP-PCA face recognition method
CN109558816A (en) A pattern recognition method based on multi-feature representation
Meng et al. Local visual primitives (LVP) for face modelling and recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190402)