CN106503648A - Face recognition method and device based on sparse projection binary coding


Info

Publication number
CN106503648A
Authority
CN
China
Prior art keywords
vector
object function
matrix
pixel
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610917123.0A
Other languages
Chinese (zh)
Inventor
明悦
范春晓
田雷
李扬
史家昆
翟正元
吴琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN201610917123.0A priority Critical patent/CN106503648A/en
Publication of CN106503648A publication Critical patent/CN106503648A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a face recognition method and device based on sparse projection binary coding. The method includes: obtaining a first pixel difference vector for each pixel of each training sample in a training set; obtaining, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies a first objective function based on a sparse projection matrix; clustering all first binary feature vectors to obtain multiple cluster-center words; obtaining a first vector for each training sample; obtaining a second vector for the image of the face to be detected; and obtaining a face recognition result from the first vectors and the second vector. The face recognition method and device of this embodiment are fast, avoid overfitting to the training samples, adapt well to the data, and improve both the accuracy and the speed of face recognition.

Description

Face recognition method and device based on sparse projection binary coding
Technical field
The present invention relates to biometric identification technology, and in particular to a face recognition method and device based on sparse projection binary coding.
Background art
From primitive society to today's information society, as ownership has shifted from public to private, the demand for protecting personal privacy has grown steadily. More and more fields require reliable biometric identification. As a biometric trait, the face is unique, relatively stable, easy to acquire, and contactless, and has therefore attracted wide attention.
Face recognition algorithms in the prior art mainly include methods that represent facial features with real values and methods that represent facial features with dense binary feature vectors. In the real-valued methods, a pre-designed objective function is used to extract real-valued facial features from the input face image, and a pattern recognition method then determines which person the input image belongs to. In the dense binary methods, the real-valued facial features are converted into features represented by dense binary feature vectors using a dense projection matrix (i.e. a matrix in which most elements are non-zero), and a pattern recognition method then determines which person the input image belongs to.
However, the real-valued methods require a large number of storage units and are slow to compute, so ordinary computers and mobile devices can hardly meet their storage and computation demands. They are also sensitive to local variations in the face image: when the variation of the same face exceeds a certain threshold, the classifier will recognize two features that actually belong to the same person as belonging to different people. In the dense binary methods, quantization error is always present in the binarization process; when the dimension of the quantized binary feature vector is lower than that of the original feature vector, a large amount of discriminative information may be lost, which weakens the ability of the binary feature vector to describe the face image. In addition, the dense binary methods suffer from overfitting to the training samples, which degrades their adaptability to data during computation and thus their performance.
Summary of the invention
The present invention provides a face recognition method and device based on sparse projection binary coding, to overcome the technical problems of slow recognition speed, overfitting to training samples, and poor adaptability to data in prior-art face recognition methods.
The present invention provides a face recognition method based on sparse projection binary coding, including:
obtaining a first pixel difference vector for each pixel of each training sample in a training set, where each training sample is a face image and the training set includes several different face images;
obtaining, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies a first objective function based on a sparse projection matrix;
clustering all of the first binary feature vectors to obtain multiple cluster-center words;
composing, for each training sample, a first matrix from its first binary feature vectors, linearly reconstructing the first matrix with the words to obtain a first linear reconstruction result, and obtaining a first vector from the first linear reconstruction result, where each training sample corresponds to one first vector;
obtaining the second binary feature vector of each pixel of the image of the face to be detected, composing a second matrix from the second binary feature vectors, linearly reconstructing the second matrix with the words to obtain a second linear reconstruction result, and obtaining a second vector from the second linear reconstruction result; and
obtaining a face recognition result from the first vectors and the second vector.
In the method above, the first objective function is shown in formula one:
min_{R,B} ||B - RX||_F^2, s.t. |R|_0 <= m ... formula one;
where R is the sparse projection matrix, |R|_0 <= m means that the number of non-zero elements in the sparse projection matrix R is at most m, B is the first binary feature vector (the binary code), the parameter m is a positive integer related to the sparsity of the sparse projection matrix R, and X is the first pixel difference vector.
In the method above, composing the first matrix from the first binary feature vectors of each training sample, linearly reconstructing the first matrix with the words to obtain the first linear reconstruction result, and obtaining the first vector from the first linear reconstruction result, where each training sample corresponds to one first vector, includes:
linearly reconstructing the first matrix with the words to obtain the first linear reconstruction result, whose expression is shown in formula two:
B_i = a_i1 S_1 + a_i2 S_2 + ... + a_ik S_k ... formula two;
where B_i is the first matrix composed of the first binary feature vectors of the i-th training sample, S_k is the k-th word, and a_ik is the weight of the k-th word when the first matrix corresponding to the i-th training sample is linearly reconstructed with the words; and
composing the first vector (a_i1, a_i2, ..., a_ik) from the weights corresponding to the words in the expression of the first linear reconstruction result.
In the method above, obtaining the second binary feature vector of each pixel of the image of the face to be detected, composing the second matrix from the second binary feature vectors, linearly reconstructing the second matrix with the words to obtain the second linear reconstruction result, and obtaining the second vector from the second linear reconstruction result, includes:
obtaining a second pixel difference vector for each pixel of the image of the face to be detected, and obtaining, from the second pixel difference vectors, the second binary feature vector of each pixel of the image of the face to be detected that satisfies the first objective function;
composing the second matrix from the second binary feature vectors, and linearly reconstructing the second matrix with the words to obtain the second linear reconstruction result, whose expression is shown in formula three:
B_Test = b_1 S_1 + b_2 S_2 + ... + b_k S_k ... formula three;
where B_Test is the second matrix composed of the second binary feature vectors, S_k is the k-th word, and b_k is the weight of the k-th word when the second matrix is linearly reconstructed with the words; and
composing the second vector (b_1, b_2, ..., b_k) from the weights corresponding to the words in the expression of the second linear reconstruction result.
In the method above, obtaining the face recognition result from the first vectors and the second vector includes:
inputting the first vectors and the second vector into a classifier;
obtaining, from the classifier, the Euclidean distance between the second vector and each first vector; and
identifying the face image corresponding to the training sample whose first vector has the shortest Euclidean distance to the second vector as the image of the face to be detected.
In the method above, obtaining the first pixel difference vector of each pixel of each training sample in the training set includes:
dividing each training sample into multiple blocks;
taking each pixel in each block as a first central pixel, obtaining the first neighborhood pixels of the first central pixel within a radius r, and subtracting the pixel value of the first central pixel from the pixel value of each first neighborhood pixel in clockwise order, to obtain a first pixel difference vector of length (2 × r + 1) × (2 × r + 1) - 1.
Obtaining the second pixel difference vector of each pixel of the image of the face to be detected includes:
dividing the image of the face to be detected into multiple blocks;
taking each pixel in each block as a second central pixel, obtaining the second neighborhood pixels of the second central pixel within a radius r, and subtracting the pixel value of the second central pixel from the pixel value of each second neighborhood pixel in clockwise order, to obtain a second pixel difference vector of length (2 × r + 1) × (2 × r + 1) - 1;
where r is a positive integer.
In the method above, obtaining, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies the first objective function based on the sparse projection matrix includes:
introducing an agent matrix S for R, and transforming the first objective function min_{R,B} ||B - RX||_F^2, s.t. |R|_0 <= m into a second objective function, the second objective function being shown in formula four:
min_{S,R,B} ||SX - B||_F^2 + α||SX - RX||_F^2, s.t. |R|_0 <= m ... formula four;
where the variable α is the penalty factor of the agent matrix S, used to balance the two terms ||SX - B||_F^2 and ||SX - RX||_F^2 in formula four; |S|_0 denotes the number of non-zero elements in the agent matrix S, and the parameter m is related to the sparsity of the sparse projection matrix R; ||SX - B||_F^2 represents the quantization error between the agent matrix S and the binary code B, and ||SX - RX||_F^2 represents the error between the agent matrix S and R; and
solving the second objective function to obtain the first binary feature vectors.
In the method above, solving the second objective function to obtain the first binary feature vectors includes:
randomly assigning initial values S_0 and B_0 to S and B in the second objective function, fixing S and B in the second objective function, and updating R in the second objective function to obtain R_1, specifically:
rewriting the second objective function as a first expression, the first expression being shown in formula five:
min_R ||C_1 - RX||_F^2, s.t. |R|_0 <= m ... formula five;
where C_1 = SX is a fixed value;
solving the first expression to obtain R_1, specifically:
obtaining R_1 by formula six or formula seven;
where thr_m means keeping the m largest elements of the resulting matrix and setting the remaining elements to 0; R_{t+1} denotes the solution obtained at the (t+1)-th iteration of formula six and R_t the solution at the previous (t-th) iteration; when the number of iterations reaches N, the solution of formula six has converged and the iteration stops, and the R_N obtained at that point is R_1, where R_1 denotes the R obtained at the first iteration of solving the second objective function;
R_1 = thr_m(S) ... formula seven;
where R_1 denotes the R obtained at the first iteration of solving the second objective function.
Assigning the initial value B_0 to B in the second objective function, assigning R in the second objective function to R_1, fixing B and R in the second objective function, and updating S in the second objective function to obtain S_1, specifically:
rewriting the second objective function as a second expression, the second expression being shown in formula eight:
min_S ||C_2 - SX||_F^2 ... formula eight;
where C_2 = (B + αRX)/(1 + α) is a fixed value;
solving the second expression to obtain S_1, specifically:
solving S_1 by formula nine:
performing singular value decomposition to obtain U and V: S_1 = VU^T ... formula nine;
where S_1 denotes the S obtained at the first iteration of solving the second objective function.
Assigning S in the second objective function to S_1, assigning R in the second objective function to R_1, fixing S and R in the second objective function, and updating B in the second objective function to obtain B_1, specifically:
rewriting the second objective function as a third expression, the third expression being shown in formula ten:
min_B ||C_3 - B||_F^2 ... formula ten;
where C_3 = SX is a fixed value;
solving the third expression to obtain B_1, specifically:
obtaining B_1 by formula eleven:
B = sign(C_3) = sign(SX) ... formula eleven;
where sign(*) is the sign function: when the argument * is greater than 0, sign(*) outputs 1, otherwise sign(*) outputs 0; B_1 denotes the B obtained at the first iteration of solving the second objective function, and the argument is an element of the matrix C_3.
Assigning S in the second objective function to S_1, assigning B in the second objective function to B_1, fixing S and B in the second objective function, and updating R in the second objective function to obtain R_2;
assigning B in the second objective function to B_1, assigning R in the second objective function to R_2, fixing B and R in the second objective function, and updating S in the second objective function to obtain S_2;
assigning S in the second objective function to S_2, assigning R in the second objective function to R_2, fixing S and R in the second objective function, and updating B in the second objective function to obtain B_2;
repeating the operations of assigning S in the second objective function to S_{m-1}, assigning B in the second objective function to B_{m-1}, fixing S and B, and updating R to obtain R_m; assigning B in the second objective function to B_{m-1}, assigning R to R_m, fixing B and R, and updating S to obtain S_m; and assigning S in the second objective function to S_m, assigning R to R_m, fixing S and R, and updating B to obtain B_m; until M iterations are completed; the B_M obtained at the M-th iteration is the first binary feature vector of each pixel that satisfies the first objective function; where m denotes the m-th iteration, and m and M are positive integers.
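Gathering the three update rules above, each outer iteration of solving the second objective function amounts to the following sub-problems (written here compactly; the constraint handled by the SVD-based update of formula nine is left implicit in the S-step):

    \begin{aligned}
    \text{R-step:}\quad & \min_{R}\ \|C_1 - RX\|_F^2 \ \ \text{s.t.}\ |R|_0 \le m, && C_1 = SX \quad(\text{formulas five to seven});\\
    \text{S-step:}\quad & \min_{S}\ \|C_2 - SX\|_F^2, && C_2 = \tfrac{B + \alpha RX}{1+\alpha} \quad(\text{formulas eight and nine});\\
    \text{B-step:}\quad & \min_{B}\ \|C_3 - B\|_F^2, && C_3 = SX \quad(\text{formulas ten and eleven}).
    \end{aligned}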
The present invention also provides a face recognition device based on sparse projection binary coding, including:
a computing unit, configured to obtain the first pixel difference vector of each pixel of each training sample in the training set, where each training sample is a face image and the training set includes several different face images;
the computing unit being further configured to obtain, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies the first objective function based on the sparse projection matrix;
a clustering unit, configured to cluster all of the first binary feature vectors to obtain multiple cluster-center words;
a first-vector acquiring unit, configured to compose, for each training sample, a first matrix from its first binary feature vectors, linearly reconstruct the first matrix with the words to obtain a first linear reconstruction result, and obtain a first vector from the first linear reconstruction result, where each training sample corresponds to one first vector;
a second-vector acquiring unit, configured to obtain the second binary feature vector of each pixel of the image of the face to be detected, compose a second matrix from the second binary feature vectors, linearly reconstruct the second matrix with the words to obtain a second linear reconstruction result, and obtain a second vector from the second linear reconstruction result; and
a recognition unit, configured to obtain a face recognition result from the first vectors and the second vector.
The present invention provides a face recognition method and device based on sparse projection binary coding. The method includes: obtaining a first pixel difference vector for each pixel of each training sample in a training set, where each training sample is a face image and the training set includes several different face images; obtaining, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies a first objective function based on a sparse projection matrix; clustering all first binary feature vectors to obtain multiple cluster-center words; composing, for each training sample, a first matrix from its first binary feature vectors, linearly reconstructing the first matrix with the words to obtain a first linear reconstruction result, and obtaining a first vector from the first linear reconstruction result, where each training sample corresponds to one first vector; obtaining the second binary feature vector of each pixel of the image of the face to be detected, composing a second matrix from the second binary feature vectors, linearly reconstructing the second matrix with the words to obtain a second linear reconstruction result, and obtaining a second vector from the second linear reconstruction result; and obtaining a face recognition result from the first vectors and the second vector. The method of this embodiment encodes the real-valued pixel difference vectors with a sparse projection matrix to obtain binary feature vectors, clusters the binary feature vectors to obtain words, and linearly reconstructs with the words the matrix composed of the binary feature vectors of each face image. It is fast, solves the problem of overfitting to the training samples, adapts well to the data, and improves both the accuracy and the speed of face recognition.
Description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of Embodiment 1 of the face recognition method based on sparse projection binary coding provided by the present invention;
Fig. 2 is a schematic diagram of obtaining the first pixel difference vector in the present invention;
Fig. 3 is a flowchart of Embodiment 2 of the face recognition method based on sparse projection binary coding provided by the present invention;
Fig. 4 is a structural diagram of the face recognition device based on sparse projection binary coding provided by the present invention.
Detailed description of embodiments
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flowchart of Embodiment 1 of the face recognition method based on sparse projection binary coding provided by the present invention, and Fig. 2 is a schematic diagram of obtaining the first pixel difference vector in the present invention.
As shown in Figs. 1 and 2, the method of this embodiment may include:
S101: obtain the first pixel difference vector of each pixel of each training sample in the training set; each training sample is a face image, and the training set includes several different face images;
S102: obtain, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies the first objective function based on the sparse projection matrix;
S103: cluster all of the first binary feature vectors to obtain multiple cluster-center words;
S104: compose, for each training sample, a first matrix from its first binary feature vectors, linearly reconstruct the first matrix with the words to obtain a first linear reconstruction result, and obtain a first vector from the first linear reconstruction result; each training sample corresponds to one first vector;
S105: obtain the second binary feature vector of each pixel of the image of the face to be detected, compose a second matrix from the second binary feature vectors, linearly reconstruct the second matrix with the words to obtain a second linear reconstruction result, and obtain a second vector from the second linear reconstruction result;
S106: obtain a face recognition result from the first vectors and the second vector; the overall flow is sketched below.
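To make the flow of steps S101 to S106 concrete, the following is a minimal sketch in Python of how the training and recognition pipeline could be organized. The function names, array shapes and the helper functions pixel_difference_vectors, kmeans_words and reconstruction_weights are illustrative assumptions (the helpers are sketched in the later steps of this description), not part of the claimed method.

    import numpy as np

    def train(training_images, R, n_words):
        """Hypothetical training pipeline following S101-S104.
        training_images: list of grayscale face images (2-D arrays).
        R: sparse projection matrix (assumed already learned); n_words: codebook size."""
        all_codes, per_image_codes = [], []
        for img in training_images:
            X = pixel_difference_vectors(img, r=3)      # S101: one column per pixel
            B = np.where(R @ X > 0, 1, 0)               # S102: binary features (1/0 sign convention)
            all_codes.append(B)
            per_image_codes.append(B)
        words = kmeans_words(np.hstack(all_codes), n_words)   # S103: cluster-center words
        first_vectors = np.array([reconstruction_weights(B, words)
                                  for B in per_image_codes])  # S104: one first vector per sample
        return words, first_vectors

    def recognize(test_image, R, words, first_vectors):
        """Hypothetical recognition pipeline following S105-S106."""
        X = pixel_difference_vectors(test_image, r=3)
        B_test = np.where(R @ X > 0, 1, 0)              # S105: second binary features
        second_vector = reconstruction_weights(B_test, words)
        dists = np.linalg.norm(first_vectors - second_vector, axis=1)  # S106: Euclidean distances
        return int(np.argmin(dists))                    # index of the best-matching training sample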
For step S101, a large number of different face images are collected in advance to form the training set; each face image in the training set is called a training sample.
After the training set is built, the first pixel difference vector of each pixel of each training sample is obtained, specifically: each training sample is divided into multiple blocks; each pixel in each block is taken as a first central pixel, the first neighborhood pixels of the first central pixel within a radius r are obtained, and the pixel value of the first central pixel is subtracted from the pixel value of each first neighborhood pixel in clockwise order, giving a first pixel difference vector of length (2 × r + 1) × (2 × r + 1) - 1; preferably, r is 3. To explain the clockwise order clearly, r = 2 is used for illustration in this embodiment. As shown in Fig. 2, the dashed arrow path in Fig. 2 is the clockwise order; each square represents a pixel and A represents the first central pixel. Along the dashed arrow direction, the pixel value of the first central pixel is subtracted from the pixel value of each first neighborhood pixel in turn, forming a 24-dimensional first pixel difference vector. The number of first neighborhood pixels is likewise (2 × r + 1) × (2 × r + 1) - 1.
In this step, those skilled in the art will understand that, once the value of r is chosen, many edge pixels in each block have fewer than (2 × r + 1) × (2 × r + 1) - 1 first neighborhood pixels because the training sample is divided into multiple blocks. To keep the dimension of the first pixel difference vectors identical, the edge pixels in each block whose number of first neighborhood pixels is less than (2 × r + 1) × (2 × r + 1) - 1 are discarded.
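As an illustration of this step, the following is a minimal sketch, assuming a single-channel image stored as a NumPy array, of how the (2 × r + 1) × (2 × r + 1) - 1 dimensional pixel difference vectors could be computed. The traversal order here is row-major rather than the exact clockwise path of Fig. 2 (any fixed order works as long as it is applied consistently), the division of the image into blocks is omitted for brevity, and edge pixels without a full neighborhood are discarded as described above.

    import numpy as np

    def pixel_difference_vectors(img, r=3):
        """Return a (d, n) array with one (2r+1)^2 - 1 dimensional difference vector
        per interior pixel (neighbor value minus center value). Edge pixels whose
        neighborhood is incomplete are discarded, as in the embodiment."""
        img = img.astype(np.float64)
        h, w = img.shape
        cols = []
        for y in range(r, h - r):
            for x in range(r, w - r):
                center = img[y, x]
                diffs = []
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        if dy == 0 and dx == 0:
                            continue                 # skip the central pixel itself
                        diffs.append(img[y + dy, x + dx] - center)
                cols.append(diffs)
        return np.asarray(cols).T                    # shape (d, n_pixels)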
For step S102, after the first pixel difference vectors are obtained, the first binary feature vector of each pixel that satisfies the first objective function based on the sparse projection matrix is obtained from the first pixel difference vectors. In this embodiment, the first objective function is shown in formula one:
min_{R,B} ||B - RX||_F^2, s.t. |R|_0 <= m ... formula one;
where R is the sparse projection matrix, |R|_0 <= m means that the number of non-zero elements in the sparse projection matrix R is at most m, B is the first binary feature vector, the parameter m is a positive integer related to the sparsity of the sparse projection matrix R, and X is the first pixel difference vector.
The value of m is typically chosen as 4% to 6% of the total number of elements in the matrix.
The first pixel difference vector is encoded with the sparse projection matrix, and the dimension of the quantized first binary feature vector is much higher than that of the first pixel difference vector. This minimizes the quantization error between the first pixel difference vector and the first binary feature vector and improves the ability of the first binary feature vector to describe the face image. Moreover, because the first pixel difference vector is encoded with a sparse projection matrix, the resulting first binary feature vector is sparse (i.e. most of its elements are 0). By adjusting the sparsity of the projection matrix, the number of parameters to be tuned during computation can be adjusted to a level commensurate with the complexity of the training data, which solves the problem of overfitting to the data in the training samples.
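A minimal sketch of the encoding in this step, assuming the sparse projection matrix R has already been learned by the optimization described later. The 1/0 sign convention follows the description of formula eleven, and the choice of 5% non-zero entries is only an example within the 4% to 6% range mentioned above; the random initialization shown here is purely illustrative.

    import numpy as np

    def random_sparse_projection(c, d, density=0.05, seed=0):
        """Illustrative c-by-d projection matrix with roughly `density` of its
        entries non-zero (e.g. 5%, within the 4%-6% rule above)."""
        rng = np.random.default_rng(seed)
        R = rng.standard_normal((c, d))
        keep = int(round(density * c * d))              # m: number of non-zero elements
        threshold = np.sort(np.abs(R), axis=None)[-keep]
        R[np.abs(R) < threshold] = 0.0                  # zero out all but the m largest entries
        return R

    def encode_binary(R, X):
        """First binary feature vectors: quantize the sparse projection of the
        pixel difference vectors X (one column per pixel)."""
        return np.where(R @ X > 0, 1, 0)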
For step S103, after the first binary feature vector of each pixel in the training set is obtained, all of the first binary feature vectors of the training set are clustered to obtain multiple cluster-center words. In this embodiment, the first binary feature vectors of all pixels of the training set can be learned with the K-Means clustering method or the SGONG clustering method to obtain the cluster-center words, but the clustering is not limited to these two methods.
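As an illustration, the cluster-center words could be obtained with an off-the-shelf K-Means implementation; scikit-learn is used here purely as an example, and the number of words is an assumed hyperparameter.

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_words(codes, n_words, seed=0):
        """Cluster all first binary feature vectors (columns of `codes`) and
        return the cluster centers as words, one word per row."""
        km = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
        km.fit(codes.T)                       # samples are columns, so transpose
        return km.cluster_centers_            # shape (n_words, code_dimension)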
For step S104, after the first binary feature vectors of all pixels of the training set have been clustered and multiple words obtained, the first vector is computed. Specifically, each training sample has multiple pixels and each pixel corresponds to one first binary feature vector; the first binary feature vectors of a training sample are arranged in order to form the first matrix; the first matrix is then linearly reconstructed with the words to obtain the first linear reconstruction result, and the first vector is obtained from the first linear reconstruction result. Each training sample corresponds to one first vector.
Composing the first matrix from the first binary feature vectors of each training sample and linearly reconstructing it with the words gives better adaptability to the data than representing the sample directly with the original first matrix.
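One way to realize the linear reconstruction of formula two is an ordinary least-squares fit of the words to the binary features of an image. The patent does not fix the fitting criterion, so the sketch below is an assumption made for illustration: the mean binary feature vector of the image is reconstructed from the words by least squares, and the resulting weight vector (a_1, ..., a_k) serves as the first (or second) vector.

    import numpy as np

    def reconstruction_weights(B, words):
        """B: (c, n) matrix of binary feature vectors of one image.
        words: (k, c) cluster-center words, one per row.
        Returns the weight vector a of length k used as the image descriptor."""
        target = B.mean(axis=1)                               # average code of the image
        a, *_ = np.linalg.lstsq(words.T, target, rcond=None)  # fit target ~ words.T @ a
        return a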
For step S105, after the first vector of each training sample in the training set is obtained, the training process is complete and the face recognition process starts. The second vector of the image of the face to be detected is obtained in the same way as the first vectors above, specifically: obtain the second binary feature vector of each pixel of the image of the face to be detected, compose the second matrix from the second binary feature vectors, linearly reconstruct the second matrix with the words to obtain the second linear reconstruction result, and obtain the second vector from the second linear reconstruction result.
In this step, the method of obtaining the second binary feature vectors is the same as the method of obtaining the first binary feature vectors.
For step S106, after the first vectors and the second vector are obtained, the face recognition result can be obtained from them, for example with a classifier: the first vectors and the second vector are input into the classifier, the classifier computes the Euclidean distance between each first vector and the second vector, the Euclidean distances returned by the classifier are obtained, and the face image corresponding to the training sample whose first vector has the shortest Euclidean distance to the second vector is identified as the image of the face to be detected.
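A minimal sketch of this classification rule in isolation; it simply returns the index of the training sample whose first vector is closest to the second vector in Euclidean distance, which is the nearest-neighbor behaviour described above (the concrete classifier object is not specified by the patent).

    import numpy as np

    def classify(first_vectors, second_vector):
        """first_vectors: (n_train, k) array of per-training-sample descriptors;
        second_vector: (k,) descriptor of the face to be detected.
        Returns the index of the nearest training sample."""
        dists = np.linalg.norm(first_vectors - second_vector, axis=1)  # Euclidean distances
        return int(np.argmin(dists))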
The face recognition method based on sparse projection binary coding of this embodiment includes: obtaining a first pixel difference vector for each pixel of each training sample in a training set, where each training sample is a face image and the training set includes several different face images; obtaining, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies the first objective function based on the sparse projection matrix; clustering all first binary feature vectors to obtain multiple cluster-center words; composing, for each training sample, a first matrix from its first binary feature vectors, linearly reconstructing the first matrix with the words to obtain a first linear reconstruction result, and obtaining a first vector from the first linear reconstruction result, where each training sample corresponds to one first vector; obtaining the second binary feature vector of each pixel of the image of the face to be detected, composing a second matrix from the second binary feature vectors, linearly reconstructing the second matrix with the words to obtain a second linear reconstruction result, and obtaining a second vector from the second linear reconstruction result; and obtaining a face recognition result from the first vectors and the second vector. The method encodes the real-valued pixel difference vectors with a sparse projection matrix to obtain binary feature vectors, clusters the binary feature vectors to obtain words, and linearly reconstructs with the words the matrix composed of the binary feature vectors of each face image. It is fast, solves the problem of overfitting to the training samples, adapts well to the data, and improves both the accuracy and the speed of face recognition.
The technical solution of the method embodiment shown in Fig. 1 is described in detail below.
First, the method of obtaining, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies the first objective function based on the sparse projection matrix is explained.
Fig. 3 is a flowchart of Embodiment 2 of the face recognition method based on sparse projection binary coding provided by an embodiment of the present invention. As shown in Fig. 3, the method of this embodiment includes:
S301: introduce an agent matrix S for the sparse projection matrix R in the first objective function, and transform the first objective function into a second objective function;
S302: solve the second objective function to obtain the first binary feature vectors.
For step S301, the agent matrix S of R is introduced; the matrix S has the same size as the matrix R. The first objective function min_{R,B} ||B - RX||_F^2, s.t. |R|_0 <= m is transformed into the second objective function, shown in formula four:
min_{S,R,B} ||SX - B||_F^2 + α||SX - RX||_F^2, s.t. |R|_0 <= m ... formula four;
where the variable α is the penalty factor of the agent matrix S, used to balance the two terms ||SX - B||_F^2 and ||SX - RX||_F^2 in formula four; |S|_0 denotes the number of non-zero elements in the agent matrix S; ||SX - B||_F^2 represents the quantization error between the agent matrix S and the binary code B, and ||SX - RX||_F^2 represents the error between the agent matrix S and R. The agent matrix S is also a sparse projection matrix.
For step S302, the second objective function is solved to obtain the first binary feature vectors, specifically:
randomly assign initial values S_0 and B_0 to S and B in the second objective function, fix S and B in the second objective function, and update R in the second objective function to obtain R_1, specifically:
rewrite the second objective function as the first expression, shown in formula five:
min_R ||C_1 - RX||_F^2, s.t. |R|_0 <= m ... formula five;
where C_1 = SX is a fixed value;
solve the first expression to obtain R_1, specifically:
obtain R_1 by formula six or formula seven;
where thr_m means keeping the m largest elements of the resulting matrix and setting the remaining elements to 0; R_{t+1} denotes the solution obtained at the (t+1)-th iteration of formula six and R_t the solution at the previous (t-th) iteration; when the number of iterations reaches N, the solution of formula six has converged and the iteration stops, and the R_N obtained at that point is R_1, where R_1 denotes the R obtained at the first iteration of solving the second objective function;
R_1 = thr_m(S) ... formula seven;
where R_1 denotes the R obtained at the first iteration of solving the second objective function.
Assign the initial value B_0 to B in the second objective function, assign R in the second objective function to R_1, fix B and R in the second objective function, and update S in the second objective function to obtain S_1, specifically:
rewrite the second objective function as the second expression, shown in formula eight:
min_S ||C_2 - SX||_F^2 ... formula eight;
where C_2 = (B + αRX)/(1 + α) is a fixed value;
solve the second expression to obtain S_1, specifically:
solve S_1 by formula nine:
perform singular value decomposition to obtain U and V: S_1 = VU^T ... formula nine;
where S_1 denotes the S obtained at the first iteration of solving the second objective function.
Assign S in the second objective function to S_1, assign R in the second objective function to R_1, fix S and R in the second objective function, and update B in the second objective function to obtain B_1, specifically:
rewrite the second objective function as the third expression, shown in formula ten:
min_B ||C_3 - B||_F^2 ... formula ten;
where C_3 = SX is a fixed value;
solve the third expression to obtain B_1, specifically:
obtain B_1 by formula eleven:
B = sign(C_3) = sign(SX) ... formula eleven;
where sign(*) is the sign function: when the argument * is greater than 0, sign(*) outputs 1, otherwise sign(*) outputs 0; B_1 denotes the B obtained at the first iteration of solving the second objective function, and the argument is an element of the matrix C_3.
Assign S in the second objective function to S_1, assign B in the second objective function to B_1, fix S and B, and update R to obtain R_2;
assign B in the second objective function to B_1, assign R to R_2, fix B and R, and update S to obtain S_2;
assign S in the second objective function to S_2, assign R to R_2, fix S and R, and update B to obtain B_2;
repeat the operations of assigning S to S_{m-1}, assigning B to B_{m-1}, fixing S and B, and updating R to obtain R_m; assigning B to B_{m-1}, assigning R to R_m, fixing B and R, and updating S to obtain S_m; and assigning S to S_m, assigning R to R_m, fixing S and R, and updating B to obtain B_m; until M iterations are completed. The B_M obtained at the M-th iteration is the first binary feature vector of each pixel that satisfies the first objective function; m denotes the m-th iteration, and m and M are positive integers.
The value of M can be determined according to actual needs and is not limited in this embodiment.
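The alternating updates of R, S and B described above can be sketched as follows. This is a schematic implementation under several assumptions: the R-step uses the closed-form shortcut of formula seven (hard-thresholding S to its m largest-magnitude entries) rather than the iterative formula six, the S-step solves min_S ||SX - C_2||_F^2 by singular value decomposition as in formula nine, the B-step applies formula eleven, and α, the code length c and the iteration count are illustrative parameters.

    import numpy as np

    def hard_threshold(A, m):
        """thr_m: keep the m largest-magnitude entries of A and set the rest to 0."""
        out = np.zeros_like(A)
        idx = np.argsort(np.abs(A), axis=None)[-m:]
        out.flat[idx] = A.flat[idx]
        return out

    def learn_sparse_projection(X, c, m, alpha=1.0, n_iter=50, seed=0):
        """Alternating minimization of the second objective
        ||SX - B||_F^2 + alpha * ||SX - RX||_F^2  s.t. |R|_0 <= m,
        returning the sparse projection R and the binary codes B.
        X: (d, n) pixel difference vectors; c: code length."""
        rng = np.random.default_rng(seed)
        S = rng.standard_normal((c, X.shape[0]))                       # random initial S_0
        B = np.where(rng.standard_normal((c, X.shape[1])) > 0, 1, 0)   # random initial B_0
        R = hard_threshold(S, m)                                       # formula seven
        for _ in range(n_iter):                                        # M iterations
            # S-step (formulas eight/nine): fit S to C2 = (B + alpha*R*X)/(1+alpha) via SVD
            C2 = (B + alpha * (R @ X)) / (1.0 + alpha)
            U, _, Vt = np.linalg.svd(X @ C2.T, full_matrices=False)
            S = Vt.T @ U.T                                             # S = V U^T
            # B-step (formulas ten/eleven): binarize C3 = SX with the 1/0 sign convention
            B = np.where(S @ X > 0, 1, 0)
            # R-step (formulas five/seven): sparse approximation of S
            R = hard_threshold(S, m)
        return R, B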
The derivation of formula six and formula eleven in the previous embodiment is introduced below.
First, the derivation of formula six is explained.
Because expression one is relatively difficult to solve directly, a third objective function is introduced, shown in formula twelve.
By the property of the F-norm, the additional term in formula twelve is always non-negative, so the value of the third objective function is never smaller than the value of expression one; the optimal solution of formula twelve therefore still satisfies expression one.
The third objective function is solved iteratively: S is fixed and R is updated, and the third objective function is transformed into expression four, shown in formula thirteen, where Const denotes the constant terms that do not depend on R.
Expression four can be written in vector form, whose solution can then be generalized to the matrix form given in formula fourteen.
R is solved iteratively using formula fourteen, and the resulting iterative formula is formula six.
Next, the derivation of formula eleven is explained.
Expression two is the expanded form of formula one. Because ||B||_F^2 is a constant for the binary code matrix B, the expression can be further simplified to maximizing the sum of B_ij(C_3)_ij over all entries, where n denotes the number of training samples and c denotes the dimension of the binary feature vectors. To maximize this sum, B_ij is set to 1 when (C_3)_ij is greater than or equal to 0 and to 0 when (C_3)_ij is less than 0, which converts the problem into solving formula eleven.
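For reference, the simplification can be written out explicitly. The following is a short derivation under the assumption that the code entries are taken in {-1, +1}, which makes ||B||_F^2 a constant; with the 1/0 sign convention used elsewhere in this document, the same thresholding at zero is applied after relabeling -1 as 0.

    \begin{aligned}
    \min_{B}\ \|C_3 - B\|_F^2
      &= \min_{B}\ \Big(\|C_3\|_F^2 - 2\,\mathrm{tr}\big(B^{\top}C_3\big) + \|B\|_F^2\Big) \\
      &\Longleftrightarrow\ \max_{B}\ \mathrm{tr}\big(B^{\top}C_3\big)
       \;=\; \max_{B}\ \sum_{i=1}^{c}\sum_{j=1}^{n} B_{ij}\,(C_3)_{ij},
    \end{aligned}

    and the maximum is attained entrywise by B_{ij} = sign((C_3)_{ij}), i.e. formula eleven with C_3 = SX.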
Next, the step in the above embodiment of composing the first matrix from the first binary feature vectors of each training sample, linearly reconstructing the first matrix with the words to obtain the first linear reconstruction result, and obtaining the first vector from the first linear reconstruction result, is explained.
Composing the first matrix from the first binary feature vectors of each training sample, linearly reconstructing the first matrix with the words to obtain the first linear reconstruction result, and obtaining the first vector from the first linear reconstruction result includes:
linearly reconstructing the first matrix with the words to obtain the first linear reconstruction result, whose expression is shown in formula two:
B_i = a_i1 S_1 + a_i2 S_2 + ... + a_ik S_k ... formula two;
where B_i is the first matrix composed of the first binary feature vectors of the i-th training sample, S_k is the k-th word, and a_ik is the weight of the k-th word when the first matrix corresponding to the i-th training sample is linearly reconstructed with the words; and
composing the first vector (a_i1, a_i2, ..., a_ik) from the weights corresponding to the words in the expression of the first linear reconstruction result.
Then, the step in the above embodiment of obtaining the second binary feature vector of each pixel of the image of the face to be detected, composing the second matrix from the second binary feature vectors, linearly reconstructing the second matrix with the words to obtain the second linear reconstruction result, and obtaining the second vector from the second linear reconstruction result, is explained.
Obtaining the second binary feature vector of each pixel of the image of the face to be detected, composing the second matrix from the second binary feature vectors, linearly reconstructing the second matrix with the words, obtaining the second linear reconstruction result, and obtaining the second vector from the second linear reconstruction result includes:
obtaining the second pixel difference vector of each pixel of the image of the face to be detected, and obtaining, from the second pixel difference vectors, the second binary feature vector of each pixel of the image of the face to be detected that satisfies the first objective function;
composing the second matrix from the second binary feature vectors, and linearly reconstructing the second matrix with the words to obtain the second linear reconstruction result, whose expression is shown in formula three:
B_Test = b_1 S_1 + b_2 S_2 + ... + b_k S_k ... formula three;
where B_Test is the second matrix composed of the second binary feature vectors, S_k is the k-th word, and b_k is the weight of the k-th word when the second matrix is linearly reconstructed with the words; and
composing the second vector (b_1, b_2, ..., b_k) from the weights corresponding to the words in the expression of the second linear reconstruction result.
The method of obtaining the second pixel difference vectors is the same as that of obtaining the first pixel difference vectors, specifically:
the image of the face to be detected is divided into multiple blocks;
each pixel in each block is taken as a second central pixel, the second neighborhood pixels of the second central pixel within a radius r are obtained, and the pixel value of the second central pixel is subtracted from the pixel value of each second neighborhood pixel in clockwise order, giving a second pixel difference vector of length (2 × r + 1) × (2 × r + 1) - 1.
The method of obtaining the second binary feature vectors is the same as that of obtaining the first binary feature vectors, and is not repeated here.
Fig. 4 is a structural diagram of the face recognition device based on sparse projection binary coding provided by the present invention. As shown in Fig. 4, the device of this embodiment may include: a computing unit 401, a clustering unit 402, a first-vector acquiring unit 403, a second-vector acquiring unit 404, and a recognition unit 405.
The computing unit 401 is configured to obtain the first pixel difference vector of each pixel of each training sample in the training set; each training sample is a face image, and the training set includes several different face images.
The computing unit 401 is further configured to obtain, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies the first objective function based on the sparse projection matrix.
The clustering unit 402 is configured to cluster all of the first binary feature vectors to obtain multiple cluster-center words.
The first-vector acquiring unit 403 is configured to compose, for each training sample, a first matrix from its first binary feature vectors, linearly reconstruct the first matrix with the words to obtain a first linear reconstruction result, and obtain a first vector from the first linear reconstruction result; each training sample corresponds to one first vector.
The second-vector acquiring unit 404 is configured to obtain the second binary feature vector of each pixel of the image of the face to be detected, compose a second matrix from the second binary feature vectors, linearly reconstruct the second matrix with the words to obtain a second linear reconstruction result, and obtain a second vector from the second linear reconstruction result.
The recognition unit 405 is configured to obtain a face recognition result from the first vectors and the second vector.
The device of this embodiment can be used to execute the technical solution of the method embodiment shown in Fig. 1; its implementation principle and technical effect are similar and are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that they can still modify the technical solutions described in the foregoing embodiments or replace some or all of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A face recognition method based on sparse projection binary coding, characterized by including:
obtaining a first pixel difference vector for each pixel of each training sample in a training set, where each training sample is a face image and the training set includes several different face images;
obtaining, from the first pixel difference vectors, the first binary feature vector of each pixel that satisfies a first objective function based on a sparse projection matrix;
clustering all of the first binary feature vectors to obtain multiple cluster-center words;
composing, for each training sample, a first matrix from its first binary feature vectors, linearly reconstructing the first matrix with the words to obtain a first linear reconstruction result, and obtaining a first vector from the first linear reconstruction result, where each training sample corresponds to one first vector;
obtaining the second binary feature vector of each pixel of the image of the face to be detected, composing a second matrix from the second binary feature vectors, linearly reconstructing the second matrix with the words to obtain a second linear reconstruction result, and obtaining a second vector from the second linear reconstruction result; and
obtaining a face recognition result from the first vectors and the second vector.
2. The method according to claim 1, characterized in that the first objective function is shown in formula one:
min_{R,B} ||B - RX||_F^2, s.t. |R|_0 <= m ... formula one;
where R is the sparse projection matrix, |R|_0 <= m means that the number of non-zero elements in the sparse projection matrix R is at most m, B is the first binary feature vector, the parameter m is a positive integer related to the sparsity of the sparse projection matrix R, and X is the first pixel difference vector.
3. The method according to claim 2, characterized in that composing the first matrix from the first binary feature vectors of each training sample, linearly reconstructing the first matrix with the words to obtain the first linear reconstruction result, and obtaining the first vector from the first linear reconstruction result, where each training sample corresponds to one first vector, includes:
linearly reconstructing the first matrix with the words to obtain the first linear reconstruction result, whose expression is shown in formula two:
B_i = a_i1 S_1 + a_i2 S_2 + ... + a_ik S_k ... formula two;
where B_i is the first matrix composed of the first binary feature vectors of the i-th training sample, S_k is the k-th word, and a_ik is the weight of the k-th word when the first matrix corresponding to the i-th training sample is linearly reconstructed with the words; and
composing the first vector (a_i1, a_i2, ..., a_ik) from the weights corresponding to the words in the expression of the first linear reconstruction result.
4. The method according to claim 3, characterized in that obtaining the second binary feature vector of each pixel of the image of the face to be detected, composing the second matrix from the second binary feature vectors, linearly reconstructing the second matrix with the words to obtain the second linear reconstruction result, and obtaining the second vector from the second linear reconstruction result includes:
obtaining a second pixel difference vector for each pixel of the image of the face to be detected, and obtaining, from the second pixel difference vectors, the second binary feature vector of each pixel of the image of the face to be detected that satisfies the first objective function;
composing the second matrix from the second binary feature vectors, and linearly reconstructing the second matrix with the words to obtain the second linear reconstruction result, whose expression is shown in formula three:
B_Test = b_1 S_1 + b_2 S_2 + ... + b_k S_k ... formula three;
where B_Test is the second matrix composed of the second binary feature vectors, S_k is the k-th word, and b_k is the weight of the k-th word when the second matrix is linearly reconstructed with the words; and
composing the second vector (b_1, b_2, ..., b_k) from the weights corresponding to the words in the expression of the second linear reconstruction result.
5. The method according to claim 1, characterized in that obtaining the face recognition result from the first vectors and the second vector includes:
inputting the first vectors and the second vector into a classifier;
obtaining, from the classifier, the Euclidean distance between the second vector and each first vector; and
identifying the face image corresponding to the training sample whose first vector has the shortest Euclidean distance to the second vector as the image of the face to be detected.
6. The method according to claim 4, characterized in that obtaining the first pixel difference vector corresponding to each pixel of each training sample in the training set comprises:
Dividing each training sample into a plurality of blocks;
Taking each pixel in each block as a first central pixel, obtaining the first neighborhood pixels of the first central pixel within a radius r and, in clockwise order, computing the difference between the pixel value of each first neighborhood pixel and the pixel value of the first central pixel, to obtain a first pixel difference vector of length (2×r+1)×(2×r+1)-1;
Obtaining the second pixel difference vector corresponding to each pixel of the image of the face to be detected comprises:
Dividing the image of the face to be detected into a plurality of blocks;
Taking each pixel in each block as a second central pixel, obtaining the second neighborhood pixels of the second central pixel within a radius r and, in clockwise order, computing the difference between the pixel value of each second neighborhood pixel and the pixel value of the second central pixel, to obtain a second pixel difference vector of length (2×r+1)×(2×r+1)-1;
Wherein r is a positive integer.
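A sketch of the neighbourhood differencing of claim 6, under two assumptions the claim leaves open: the neighbourhood is the full (2r+1)×(2r+1) square window around the central pixel, and the clockwise traversal proceeds ring by ring starting at the top-left corner of each ring; boundary handling is omitted.

```python
import numpy as np

def pixel_difference_vector(image, y, x, r):
    """Differences between the neighbourhood pixels within radius r of the
    central pixel (y, x) and the central pixel itself, giving
    (2r+1)*(2r+1) - 1 values. The image is assumed to be a 2-D grey-level
    array with valid indices for the whole window."""
    image = np.asarray(image, dtype=float)
    centre = image[y, x]
    diffs = []
    for k in range(1, r + 1):                                     # one square ring per radius
        ring = []
        ring += [(y - k, x + dx) for dx in range(-k, k + 1)]      # top edge, left to right
        ring += [(y + dy, x + k) for dy in range(-k + 1, k + 1)]  # right edge, top to bottom
        ring += [(y + k, x - dx) for dx in range(-k + 1, k + 1)]  # bottom edge, right to left
        ring += [(y - dy, x - k) for dy in range(-k + 1, k)]      # left edge, bottom to top
        diffs += [image[py, px] - centre for (py, px) in ring]
    return np.array(diffs)                                        # length (2r+1)^2 - 1
```

For r = 1 this reduces to the familiar 8-neighbour difference vector used by LBP-style descriptors.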
7. The method according to claim 2, characterized in that obtaining, according to the first pixel difference vector, the first binary feature vector corresponding to each pixel that satisfies the first objective function based on the sparse projection matrix comprises:
Introducing an agent (surrogate) matrix S for the sparse projection matrix R, and transforming the first objective function into a second objective function, the second objective function being as shown in formula four:
Wherein the variable α is the penalty factor of the agent matrix S, used to balance the two terms of formula four; ||S||_0 denotes the number of non-zero elements in the agent matrix S, and the parameter m is related to the degree of sparsity of the sparse projection matrix R; one term of formula four represents the quantization error between the agent matrix S and the binary code B, and the other term represents the error between the agent matrix S and R;
Solving the second objective function to obtain the first binary feature vectors.
8. The method according to claim 7, characterized in that solving the second objective function to obtain the first binary feature vectors comprises:
Randomly giving S and B in the second objective function the initial values S_0 and B_0 and, with S and B in the second objective function fixed, updating R in the second objective function to obtain R_1, specifically:
Rewriting the second objective function as a first expression, the first expression being as shown in formula five:
Wherein C_1 = SX is a fixed value;
Solving the first expression to obtain R_1, specifically:
Obtaining R_1 by formula six or formula seven;
Wherein thrm(·) denotes retaining the m largest elements of the resulting matrix and setting the remaining elements to 0; R_(t+1) denotes the solution obtained at the (t+1)-th iteration of formula six, and R_t the solution obtained at the previous, t-th, iteration of formula six; when the number of iterations reaches N, the solution obtained by formula six has converged and the iteration stops, the R obtained at that point being R_N = R_1, where R_1 denotes the R obtained in the first iteration of solving the second objective function;
R_1 = thrm(S)    (formula seven);
Wherein R_1 denotes the R obtained in the first iteration of solving the second objective function.
Giving B in the second objective function the initial value B_0, assigning R in the second objective function the value R_1 and, with B and R in the second objective function fixed, updating S in the second objective function to obtain S_1, specifically:
Rewriting the second objective function as a second expression, the second expression being as shown in formula eight:
Wherein C_2 = (B + αRX)/(1 + α) is a fixed value;
Solving the second expression to obtain S_1, specifically:
Solving for S_1 by formula nine;
Performing singular value decomposition to obtain U and V:
Wherein S_1 denotes the S obtained in the first iteration of solving the second objective function;
Assigning S in the second objective function the value S_1, assigning R in the second objective function the value R_1 and, with S and R in the second objective function fixed, updating B in the second objective function to obtain B_1, specifically:
Rewriting the second objective function as a third expression, the third expression being as shown in formula ten:
Wherein C_3 = SX is a fixed value;
Solving the third expression to obtain B_1, specifically:
Obtaining B_1 by formula eleven;
B = sign(C_3) = sign(SX)    (formula eleven);
Wherein sign(*) is the sign function, which outputs 1 when the argument * is greater than 0 and outputs 0 otherwise, the arguments being the elements of the matrix C_3; B_1 denotes the B obtained in the first iteration of solving the second objective function;
Assigning S in the second objective function the value S_1, assigning B in the second objective function the value B_1 and, with S and B in the second objective function fixed, updating R in the second objective function to obtain R_2;
Assigning B in the second objective function the value B_1, assigning R in the second objective function the value R_2 and, with B and R in the second objective function fixed, updating S in the second objective function to obtain S_2;
Assigning S in the second objective function the value S_2, assigning R in the second objective function the value R_2 and, with S and R in the second objective function fixed, updating B in the second objective function to obtain B_2;
Repeating the operations of assigning S in the second objective function the value S_(m-1), assigning B in the second objective function the value B_(m-1) and, with S and B fixed, updating R in the second objective function to obtain R_m; assigning B in the second objective function the value B_(m-1), assigning R in the second objective function the value R_m and, with B and R fixed, updating S in the second objective function to obtain S_m; and assigning S in the second objective function the value S_m, assigning R in the second objective function the value R_m and, with S and R fixed, updating B in the second objective function to obtain B_m, until M iterations are completed; the B_M obtained at the M-th iteration is the first binary feature vector corresponding to each pixel that satisfies the first objective function; wherein m denotes the m-th iteration, and m and M are positive integers.
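To make the alternating schedule of claim 8 concrete, the sketch below implements one reading of it. Only the updates stated explicitly in the text are used literally: the R-update R = thrm(S) of formula seven and the B-update B = sign(SX) of formula eleven. Formulas four, five, six, eight and nine are not reproduced here, so the S-update computes C_2 = (B + αRX)/(1 + α) as stated and then applies a Procrustes-style step based on a singular value decomposition of C_2·Xᵀ; that step, the data layout (one pixel difference vector per column of X), and the magnitude-based thresholding in thrm are assumptions rather than the claimed formulas.

```python
import numpy as np

def thr_m(A, m):
    """Keep the m largest elements of A (largest in magnitude, as an assumption)
    and set the remaining elements to 0, i.e. the thrm(.) operator of formula seven.
    Ties at the cutoff may retain a few extra elements."""
    flat = np.abs(A).ravel()
    if m >= flat.size:
        return A.copy()
    cutoff = np.sort(flat)[-m]                     # magnitude of the m-th largest element
    return np.where(np.abs(A) >= cutoff, A, 0.0)

def learn_binary_codes(X, code_len, m, alpha=1.0, M_iter=20, seed=0):
    """Alternating updates in the spirit of claim 8.

    X        : d x n array, one first pixel difference vector per column (assumed layout).
    code_len : number of bits per pixel.
    Returns B (code_len x n), the binary feature vectors after M_iter rounds.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    S = rng.standard_normal((code_len, d))                       # random initial S_0
    B = (rng.standard_normal((code_len, n)) > 0).astype(float)   # random initial B_0

    for _ in range(M_iter):
        # R-update with S, B fixed: hard thresholding of S (formula seven).
        R = thr_m(S, m)
        # S-update with B, R fixed: C_2 = (B + alpha*R*X)/(1 + alpha), then a
        # Procrustes-style step from the SVD of C_2 @ X.T (assumed form of formula nine).
        C2 = (B + alpha * (R @ X)) / (1.0 + alpha)
        U, _, Vt = np.linalg.svd(C2 @ X.T, full_matrices=False)
        S = U @ Vt
        # B-update with S, R fixed: B = sign(SX), with sign(.) mapping to {0, 1}
        # as defined after formula eleven.
        B = (S @ X > 0).astype(float)

    return B
```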
9. A face recognition device based on sparse projection binary coding, characterized by comprising:
A computing unit, configured to obtain the first pixel difference vector corresponding to each pixel of each training sample in a training set, wherein the training samples are facial images and the training set includes several different facial images;
The computing unit being further configured to obtain, according to the first pixel difference vector, the first binary feature vector corresponding to each pixel that satisfies the first objective function based on the sparse projection matrix;
A clustering unit, configured to cluster all of the first binary feature vectors to obtain a plurality of cluster centres, namely the words;
A primary vector acquiring unit, configured to compose the first binary feature vectors corresponding to each training sample into a first matrix, linearly reconstruct the first matrix using the words to obtain a first linear reconstruction result, and obtain a primary vector according to the first linear reconstruction result, wherein each training sample corresponds to one primary vector;
A secondary vector acquiring unit, configured to obtain the second binary feature vector corresponding to each pixel of the image of the face to be detected, compose the second binary feature vectors into a second matrix, linearly reconstruct the second matrix using the words to obtain a second linear reconstruction result, and obtain the secondary vector according to the second linear reconstruction result;
A recognition unit, configured to obtain the face recognition result according to the primary vectors and the secondary vector.
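As a rough illustration of the clustering unit, the snippet below obtains k cluster-centre words from the pooled first binary feature vectors. The claims do not name a clustering algorithm; k-means (via scikit-learn) is used here purely as a stand-in, and k is a free parameter.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_words(binary_features, k, seed=0):
    """Cluster all first binary feature vectors (one per row) and return the k
    cluster centres used as the words. k-means is an assumed stand-in for the
    unspecified clustering step."""
    features = np.asarray(binary_features, dtype=float)
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)
    return kmeans.cluster_centers_            # k x d matrix of words S_1 ... S_k
```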
CN201610917123.0A 2016-10-20 2016-10-20 Face identification method and device based on sparse projection binary-coding Pending CN106503648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610917123.0A CN106503648A (en) 2016-10-20 2016-10-20 Face identification method and device based on sparse projection binary-coding

Publications (1)

Publication Number Publication Date
CN106503648A true CN106503648A (en) 2017-03-15

Family

ID=58318135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610917123.0A Pending CN106503648A (en) 2016-10-20 2016-10-20 Face identification method and device based on sparse projection binary-coding

Country Status (1)

Country Link
CN (1) CN106503648A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102782708A (en) * 2009-12-02 2012-11-14 高通股份有限公司 Fast subspace projection of descriptor patches for image recognition
CN104978549A (en) * 2014-04-03 2015-10-14 北京邮电大学 Three-dimensional face image feature extraction method and system
CN105930834A (en) * 2016-07-01 2016-09-07 北京邮电大学 Face identification method and apparatus based on spherical hashing binary coding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LEI TIAN et al.: "Learning iterative quantization binary codes for face recognition", Neurocomputing *
夏炎: "Fast search of similar images in large-scale image data" (大规模图像数据中相似图像的快速搜索), China Doctoral Dissertations Full-text Database (Electronic Journal) *
范引娣: "Sparse face recognition algorithm based on high-order structural constraints" (基于高阶结构约束的稀疏人脸识别算法), Computer and Modernization (计算机与现代化) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273842A (en) * 2017-06-09 2017-10-20 北京工业大学 Selective ensemble face identification method based on CSJOGA algorithms
CN110874385A (en) * 2018-08-10 2020-03-10 阿里巴巴集团控股有限公司 Data processing method, device and system
CN110874385B (en) * 2018-08-10 2023-11-14 阿里巴巴集团控股有限公司 Data processing method, device and system
CN113536974A (en) * 2021-06-28 2021-10-22 杭州电子科技大学 Face binary feature extraction method based on sparse constraint
CN113436061A (en) * 2021-07-01 2021-09-24 中科人工智能创新技术研究院(青岛)有限公司 Face image reconstruction method and system
CN113436061B (en) * 2021-07-01 2022-08-09 中科人工智能创新技术研究院(青岛)有限公司 Face image reconstruction method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170315)