CN104951756A - Face recognition method based on compressed sensing - Google Patents

Face recognition method based on compressed sensing

Info

Publication number
CN104951756A
CN104951756A (application CN201510309822.2A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510309822.2A
Other languages
Chinese (zh)
Inventor
于爱华
李刚
常丽萍
李胜
白煌
姜倩茹
洪涛
徐智星
候北平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lover Health Science and Technology Development Co Ltd
Original Assignee
Zhejiang Lover Health Science and Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lover Health Science and Technology Development Co Ltd filed Critical Zhejiang Lover Health Science and Technology Development Co Ltd
Priority to CN201510309822.2A priority Critical patent/CN104951756A/en
Publication of CN104951756A publication Critical patent/CN104951756A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face recognition method based on compressed sensing. The method comprises: constructing a dictionary from face samples according to setting requirements and preprocessing the test face image into a column vector; designing a projection matrix according to the constructed dictionary; inputting the projection value y of the column vector under the projection matrix into a function and solving it, obtaining all ŝ_p as p traverses from 1 to P, judging the ŝ_p with a discrimination function, and outputting the judged class; and reconstructing the image data according to the judged class and rearranging it to obtain the reconstructed image. On one hand, the method relaxes the hardware requirements of high-speed sampling and large-volume data transmission; on the other hand, it can effectively improve the system recognition rate, and the effectiveness of the face recognition method is verified by experimental simulation.

Description

Face recognition method based on compressed sensing
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a face recognition method and system based on compressed sensing.
Background
The face recognition technology is receiving high attention from academia as one of the important research directions of machine vision and image processing technology. With the continuous and deep application of digital technology, the resolution of daily images is higher and higher, which puts high demands on hardware devices for image information acquisition, transmission, storage, processing and the like. To alleviate the pressure of information storage and transmission, the current solution is signal compression, such as the discrete cosine transform-based JPEG standard and the wavelet transform-based JPEG2000 standard.
The traditional pattern recognition process comprises image acquisition, image compression, image decompression, feature extraction and image recognition. The conventional recognition technique generally samples the image information at a high rate and, using the correlation among image pixels, discards most of the redundant information to obtain compressed data for transmission; the background receiving end then reconstructs the image and extracts image features for identity recognition.
Because the traditional face recognition technology needs to sample images at high speed and compress a large amount of data to transmit to a background for identity judgment, the traditional face recognition technology puts high requirements on channels and image processing hardware equipment, and the system recognition accuracy is not high in a complex environment.
Disclosure of Invention
The invention aims to provide a face recognition method and a face recognition system based on compressed sensing, and aims to solve the problems that the existing face recognition technology puts high requirements on channels and image processing hardware equipment and is low in recognition accuracy.
The invention is realized in such a way that a face recognition method based on compressed sensing comprises the following steps:
S1. Construct a dictionary Ψ = [Ψ_1, ..., Ψ_p, ..., Ψ_P] from the face samples according to the setting requirements, and preprocess the test face image x_0 into a column vector x;
S2. Design a projection matrix Φ according to the constructed dictionary Ψ;
S3. Input the projection value y of the column vector x under the projection matrix Φ into the function ŝ_p = V_p [Σ_p^(-1) ỹ_1; s̃_2] and solve for ŝ_p; as p traverses from 1 to P, all ŝ_p are obtained, where P is the number of persons in the face sample library, V_p, Σ_p and ỹ_1 are obtained from the projection matrix Φ, the dictionary Ψ and the input projection value y, and s̃_2 is an arbitrary vector of appropriate dimension. The function p̂ = arg min_p ||y − ΦΨ_p ŝ_p||_2^2, s.t. p ∈ [1, P], then discriminates the ŝ_p and outputs the judged class p̂, where Φ ∈ R^(M×N) is the projection matrix, Ψ_p is the sample set of the p-th person, and s_p is the sparse coefficient of dictionary sub-block p;
S4. According to the output judged class p̂, reconstruct the image data as x̂ = Ψ_p̂ ŝ_p̂ and rearrange it to obtain the reconstructed image.
Preferably, before step S1 the method further comprises:
S0. Input initial conditions, the initial conditions comprising a face library formed from the face samples of P persons and a test face image x_0.
Preferably, in step S1, the construction process of the dictionary library includes the following steps:
Suppose the face library stores face samples of P persons, where each person has multiple samples with different angles, expressions and illuminations, and all samples have the same size;
randomly select Q different samples for each person, form each sample image into a column vector according to the same arrangement rule, apply l2-norm normalization to each, set its size to N×1, and use it as an atom of the dictionary, thereby forming the dictionary Ψ = [Ψ_1, ..., Ψ_p, ..., Ψ_P];
for any 1 ≤ p ≤ P, the dictionary sub-block Ψ_p is the sample set of the p-th person, where L = PQ; for 1 ≤ l ≤ L, ψ_l with ||ψ_l||_2 = 1 is a column vector (atom) of the dictionary.
Preferably, in step S2, the projection matrix Φ is defined by the function:
Φ̂ = U [Σ_11  0] [V_11 0; 0 V_22]^T U_Ψ^T; where U is an orthogonal matrix of arbitrary size M×M; V_22 is an arbitrary orthogonal matrix of size (N−Ñ)×(N−Ñ); U_Ψ is the U matrix of the SVD of Ψ; and V_11 and Σ_11 are obtained from the decomposition of W_11.
Aiming at the fact that the redundant information in an image is not needed by the recognition task, the Compressed Sensing (CS) theory directly compresses the image signal while sampling it, thereby avoiding the resource waste caused by high-speed sampling followed by discarding of information. An input high-dimensional signal x ∈ R^(N×1) is linearly projected onto a matrix Φ to obtain the projection value y:
y = Φx   (1)
where Φ ∈ R^(M×N) is called the projection matrix. CS theory studies how, when M << N, the original high-dimensional signal x can be recovered from a given projection value y and projection matrix Φ. Clearly, equation (1) is an underdetermined problem: the number of equations is less than the number of unknowns, so there are infinitely many solutions, and x must be further constrained during the solution. The sparsity constraint is the key factor in CS theory; it requires that the signal x can be linearly represented by L basis vectors {ψ_l}:
x = Σ_{l=1}^{L} s_l ψ_l ≜ Ψs   (2)
where Ψ is called the dictionary (matrix) and s is a sparse vector with most elements equal to zero; if s contains K non-zero elements, x is said to be K-sparse under Ψ. Substituting (2) into (1) gives:
y = Φx = ΦΨs ≜ Ds   (3)
where D ≜ ΦΨ is called the equivalent dictionary. If the projection value y is transmitted, the receiving end uses the sparse representation coefficient s of y under the equivalent dictionary D to perform image reconstruction and classification; this is the Compressed Sensing based Classifier (CSC) studied herein, which comprises three processes: compressed sampling, sparse representation, and recognition/reconstruction.
When M < < N, the data volume of y is far less than x, which greatly reduces the pressure of transmitting data through a channel and storing and processing data in the background. However, in practical application, the projection matrix Φ has a large influence on image reconstruction and recognition effects, and therefore, the projection matrix optimization design is also one of the main contents to be studied herein. For the CS system, the projection matrix has the functions of compressing the input signal on one hand and extracting the characteristics of the signal on the other hand, and the projection matrix after the optimization design can greatly improve the accuracy of signal identification, classification and recovery.
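As a toy illustration of equations (1)-(3) (not part of the invention; all sizes and random matrices below are arbitrary choices), the following NumPy snippet builds a K-sparse signal, compresses it, and checks that the projection equals the product of the equivalent dictionary and the sparse coefficients:

```python
import numpy as np

# A toy illustration of equations (1)-(3): a K-sparse signal x = Psi @ s is
# compressed to y = Phi @ x, and the same y equals D @ s with D = Phi @ Psi.
rng = np.random.default_rng(1)
N, L, M, K = 256, 400, 64, 5

Psi = rng.standard_normal((N, L))
Psi /= np.linalg.norm(Psi, axis=0)             # unit-norm atoms

s = np.zeros(L)
s[rng.choice(L, K, replace=False)] = rng.standard_normal(K)   # K-sparse coefficients
x = Psi @ s                                    # eq. (2)

Phi = rng.standard_normal((M, N)) / np.sqrt(M) # a generic (unoptimized) projection
y = Phi @ x                                    # eq. (1)
D = Phi @ Psi                                  # equivalent dictionary, eq. (3)
assert np.allclose(y, D @ s)
```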
Based on the theory, the invention overcomes the defects of the prior art, and provides a face recognition method and a face recognition system based on compressed sensing.
Drawings
FIG. 1 is a curve of the system recognition rate versus the correction constant η for the face recognition method based on compressed sensing;
FIG. 2 is a curve of the system recognition rate versus M for the face recognition method based on compressed sensing;
FIG. 3 is a curve of the system recognition rate on the PIE library versus M for the face recognition method based on compressed sensing;
FIG. 4 is a curve of the reconstruction PSNR versus M for the face recognition method based on compressed sensing.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A face recognition method based on compressed sensing is characterized by comprising the following steps:
S1. Construct a dictionary Ψ = [Ψ_1, ..., Ψ_p, ..., Ψ_P] from the face samples according to the setting requirements, and preprocess the test face image x_0 into a column vector x.
In step S1, suppose the face library stores face samples of P persons, where each person has multiple samples with different angles, expressions and illuminations, and all samples have the same size. Randomly select Q different samples for each person, form each sample image into a column vector according to the same arrangement rule, apply l2-norm normalization to each, set its size to N×1, and use it as an atom of the dictionary, thereby forming the dictionary Ψ = [Ψ_1, ..., Ψ_p, ..., Ψ_P]. For any 1 ≤ p ≤ P, the dictionary sub-block Ψ_p is the sample set of the p-th person, and clearly L = PQ. For 1 ≤ l ≤ L, ψ_l with ||ψ_l||_2 = 1 is a column vector of the dictionary, i.e. an atom.
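As an aid to understanding step S1, the following is a minimal sketch (not part of the claimed method) of the dictionary construction in NumPy; it assumes the sample images are already cropped to a common size (e.g. 32×32 as in the later simulation), and the function and variable names are purely illustrative.

```python
import numpy as np

def build_dictionary(samples_per_person):
    """Construct the dictionary Psi from face samples.

    samples_per_person: list of length P; each entry is a list of Q grayscale
    images (2-D arrays of identical shape) for one person.
    Returns Psi of shape N x L with L = P*Q and unit l2-norm columns (atoms).
    """
    atoms = []
    for person_samples in samples_per_person:
        for img in person_samples:
            v = np.asarray(img, dtype=float).reshape(-1)   # same arrangement rule for every image
            v = v / np.linalg.norm(v)                      # l2-norm normalisation
            atoms.append(v)
    return np.column_stack(atoms)
```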
S2, designing a projection matrix phi according to the construction dictionary library psi;
s3, inputting the projection value y of the column vector x under the projection matrix phi into a function <math> <mrow> <msub> <mover> <mi>s</mi> <mo>^</mo> </mover> <mi>p</mi> </msub> <mo>=</mo> <msub> <mi>V</mi> <mi>p</mi> </msub> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <msubsup> <mi>&Sigma;</mi> <mi>p</mi> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <msub> <mover> <mi>y</mi> <mo>~</mo> </mover> <mn>1</mn> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mover> <mi>s</mi> <mo>~</mo> </mover> <mn>2</mn> </msub> </mtd> </mtr> </mtable> </mfenced> </mrow> </math> Solving forWhen P is traversed from 1 to P, allWherein P is the number of individual face samples, Vp、ΣpAndthe projection matrix phi, the dictionary base psi and the input projection value y are used to obtain,is of any size ofThe vector of (a); s4, pass function <math> <mrow> <mover> <mi>p</mi> <mo>^</mo> </mover> <mo>=</mo> <mi>arg</mi> <msubsup> <mrow> <munder> <mi>min</mi> <mi>p</mi> </munder> <mrow> <mo>|</mo> <mo>|</mo> <mi>y</mi> <mo>-</mo> <mi>&Phi;</mi> <msub> <mi>&psi;</mi> <mi>p</mi> </msub> <msub> <mover> <mi>s</mi> <mo>^</mo> </mover> <mi>p</mi> </msub> <mo>|</mo> <mo>|</mo> </mrow> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <mo>,</mo> <mi>s</mi> <mo>.</mo> <mi>t</mi> <mo>.</mo> <mi>p</mi> <mo>&Element;</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mi>P</mi> <mo>]</mo> </mrow> </math> To pairPerforming discrimination and outputting discrimination typeWherein, inReferred to as a projection matrix;is a sample set of the p-th person,sparse coefficients of dictionary sub-block p.
In step S3, for any input test sample, it is first resized and formed into an N × 1 column vector x according to the above-mentioned rule for arranging images, and then the expression equation of x under the dictionary base Ψ is:
x = Ψs + ε   (4)
where ε denotes the representation error. The invention performs CS-based face recognition: the test sample x is compressed by projection to obtain a projection signal y ∈ R^(M×1) (M < N), as follows:
y = Φx = ΦΨs + Φε ≜ Ds + e   (5)
where Φ ∈ R^(M×N) is a projection matrix designed to have certain properties, D ≜ ΦΨ is the equivalent dictionary, and e ≜ Φε is the projection-domain error. CS theory requires that the signal x be sparsely represented under the dictionary Ψ, i.e. that s contain many zero elements, so that s can be accurately reconstructed from the measurement y. For the face recognition method of the invention, the dictionary is composed of samples of P different persons, so the property of block sparsity is exploited when reconstructing s.
Define s = [s_1^T, ..., s_p^T, ..., s_P^T]^T, where each block s_p corresponds to the dictionary sub-block Ψ_p, and rewrite (5) as:
y = Φ(Ψ_1 s_1 + … + Ψ_p s_p + … + Ψ_P s_P) + e   (6)
Among all possible supports of s, the non-zero elements are required to lie in a single block s_p, all other blocks being zero. Equation (6) is therefore decomposed into the following problems:
ŝ_p = arg min_{s_p} ||y − ΦΨ_p s_p||_2^2, ∀ p = 1, 2, ..., P   (7)
After obtaining ŝ_p, the result is applied to face recognition, which forms the following problem:
p̂ = arg min_p ||y − ΦΨ_p ŝ_p||_2^2, s.t. p ∈ [1, P]   (8)
The p̂ obtained here is the method's classification result for the input x.
Given a well-designed projection matrix Φ, the difficulty of the above problem lies mainly in how to solve equation (7) accurately. For a given p, let D_p ≜ ΦΨ_p; the cost function of (7) becomes:
F(s_p) = ||y − D_p s_p||_2^2   (9)
Let the Singular Value Decomposition (SVD) of D_p be:
D_p = U_p [Σ_p 0; 0 0] V_p^T   (10)
where U_p and V_p are orthogonal matrices and Σ_p is the diagonal matrix of the non-zero singular values of D_p.
Substituting (10) into (9) gives:
F(s_p) = ||y − U_p [Σ_p 0; 0 0] V_p^T s_p||_2^2 = ||U_p^T y − [Σ_p 0; 0 0] V_p^T s_p||_2^2 ≜ ||ỹ − [Σ_p 0; 0 0] s̃||_2^2   (11)
Let
ỹ = [ỹ_1; ỹ_2], s̃ = [s̃_1; s̃_2]
be partitioned conformably with the block structure of the singular value matrix; then (11) expands to:
F(s_p) = ||ỹ_1 − Σ_p s̃_1||_2^2 + ||ỹ_2||_2^2   (12)
Note that the second term on the right-hand side of (12) is independent of s_p; therefore (12) attains its minimum when s̃_1 = Σ_p^(-1) ỹ_1. The solution of (7) is then:
ŝ_p = V_p [Σ_p^(-1) ỹ_1; s̃_2]   (13)
where V_p, Σ_p and ỹ_1 are obtained from the projection matrix, the dictionary and the input projection value, and s̃_2 is an arbitrary vector of appropriate dimension. As p traverses from 1 to P, all ŝ_p are obtained, and (8) then gives the classification result p̂ for the input x.
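The following is a minimal NumPy sketch (illustrative only, with hypothetical function names) of the classification step defined by (7), (8) and (13). It uses a minimum-norm least-squares solve per block, which coincides with (13) when the free vector s̃_2 is taken as zero.

```python
import numpy as np

def csc_classify(y, Phi, Psi_blocks):
    """Compressed-sensing classifier: solve (7) for every person block via a
    minimum-norm least-squares fit and pick the class with the smallest
    residual, as in (8). Also returns the reconstruction used in step S4."""
    best_p, best_res, best_s = -1, np.inf, None
    for p, Psi_p in enumerate(Psi_blocks):
        D_p = Phi @ Psi_p                              # equivalent sub-dictionary D_p
        s_p, *_ = np.linalg.lstsq(D_p, y, rcond=None)  # equals (13) with s~_2 = 0
        res = np.linalg.norm(y - D_p @ s_p) ** 2
        if res < best_res:
            best_p, best_res, best_s = p, res, s_p
    x_rec = Psi_blocks[best_p] @ best_s                # reconstructed column vector
    return best_p, best_s, x_rec
```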
S4. According to the output judged class p̂, reconstruct the image data as x̂ = Ψ_p̂ ŝ_p̂ and rearrange it to obtain the reconstructed image.
In step S4, since the method transmits the compressed projection value y to the background for classification, the input image, when required, can be reconstructed from the classification result as x̂ = Ψ_p̂ ŝ_p̂.
According to the CS-based face recognition method, the projection value is transmitted instead of the image itself, so that the working efficiency is improved, and the bandwidth pressure of a channel for transmitting a large amount of data is relieved.
In a further implementation, in order to further improve the accuracy of judging the input, in the embodiment of the present invention the projection matrix Φ designed from the constructed dictionary Ψ in step S2 is defined as:
Φ̂ = U [Σ_11  0] [V_11 0; 0 V_22]^T U_Ψ^T; where U is an orthogonal matrix of arbitrary size M×M; V_22 is an arbitrary orthogonal matrix of size (N−Ñ)×(N−Ñ); U_Ψ is the U matrix of the SVD of Ψ; and V_11 and Σ_11 are obtained from the decomposition of W_11.
In the embodiment of the present invention, the design process of the projection matrix Φ is as follows.
Using the notation introduced above: dictionary sub-block Ψ_p, dictionary Ψ = [Ψ_1, ..., Ψ_P], equivalent dictionary D = ΦΨ, and projection matrix Φ ∈ R^(M×N) with M < N.
The projection matrix optimization in the CS system is based on the following equation:
min_Φ ||G − G_t||_F^2, s.t. G = D^T D   (14)
where ||·||_F denotes the Frobenius norm, G is the Gram matrix of the equivalent dictionary D (for a given dictionary Ψ, G depends only on the projection matrix Φ), and G_t is a target Gram matrix. The purpose of (14) is, by designing the projection matrix, to make the Gram matrix of the equivalent dictionary approximate a given target Gram matrix having certain properties.
For signals that cannot be perfectly sparsely represented under a dictionary, such as image signals, if the projection matrix Φ is designed so that the equivalent dictionary D inherits the properties of the dictionary Ψ, the CS system performs very well; the target Gram matrix is then chosen as G_Ψ = Ψ^T Ψ. For the face image samples of the invention, the sparse representation under the dictionary Ψ is given by (4), and in general ε is not the all-zero vector, so G_Ψ can be taken as the target Gram matrix when designing the projection matrix Φ.
In the embodiment of the present invention, the dictionary Ψ is composed of the face samples of P different persons. Even samples of the same person may be poorly correlated owing to differences in angle, expression, illumination and the like, i.e. the inner products between atoms within the same sub-block Ψ_p are small. On the other hand, for two different persons, i.e. different dictionary sub-blocks, the correlation between atoms should be as small as possible. Accordingly, the target Gram matrix is modified as follows:
G_t = Ψ^T Ψ + Δ   (15)
where the correction matrix Δ can be expressed as follows: for any 1 ≤ i ≤ P and 1 ≤ j ≤ P, the block Δ_ij has the same size as the corresponding block (Ψ^T Ψ)_ij; for 1 ≤ m ≤ L and 1 ≤ n ≤ L, δ_mn is the element at the corresponding position of Δ, with atom m belonging to person block i and atom n to person block j, and:
δ_mn = −η, if i ≠ j; δ_mn = η, if i = j and m ≠ n; δ_mn = 0, if i = j and m = n   (16)
where η is a small constant greater than zero, called the correction constant. The G_t constructed by (15) not only reduces the correlation between atoms of different dictionary sub-blocks but also appropriately strengthens the correlation between atoms within the same dictionary sub-block. Note that every atom in the dictionary is normalized, i.e. ||ψ_l||_2 = 1 for any 1 ≤ l ≤ L, so the largest inter-atom inner product is 1, namely the diagonal elements of G_Ψ. To keep the physical meaning of the updated G_t clear, the invention forces the elements of G_t to be at most 1, so that the diagonal elements keep their size while the off-diagonal elements are not allowed to exceed 1 after correction; this places certain requirements on the choice of the correction constant η.
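A minimal sketch (illustrative only; function name and argument layout are assumptions) of building the corrected target Gram matrix of (15)-(16), assuming the L = P·Q atoms are ordered person by person and have unit l2 norm:

```python
import numpy as np

def corrected_gram_matrix(Psi, P, Q, eta=0.03):
    """Target Gram matrix G_t = Psi^T Psi + Delta per (15)-(16)."""
    G_psi = Psi.T @ Psi
    person = np.repeat(np.arange(P), Q)          # person index of each atom
    same = person[:, None] == person[None, :]
    Delta = np.where(same, eta, -eta)            # eta within a person block, -eta across blocks
    np.fill_diagonal(Delta, 0.0)                 # same atom: no correction
    Gt = G_psi + Delta
    return np.minimum(Gt, 1.0)                   # force entries to be at most 1
```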
Thus, the projection matrix design problem herein is formed as follows:
Φ̂ = arg min_Φ ||G − G_t||_F^2, s.t. G = Ψ^T Φ^T Φ Ψ   (17)
where G_t is defined by (15).
The SVD decomposition for the dictionary Ψ is as follows:
Ψ = U_Ψ [Σ_Ψ 0; 0 0] V_Ψ^T
where Σ_Ψ is the diagonal matrix of the Ñ non-zero singular values of Ψ (Ñ being the rank of Ψ). It is then easy to obtain:
G = V_Ψ [Σ_Ψ 0; 0 0] W [Σ_Ψ 0; 0 0] V_Ψ^T;
where W ≜ U_Ψ^T Φ^T Φ U_Ψ.
Let W_11 denote the Ñ×Ñ upper-left block of the matrix W. Then the cost function of (17) can be expanded as:
||G − G_t||_F^2 = ||V_Ψ [Σ_Ψ 0; 0 0] W [Σ_Ψ 0; 0 0] V_Ψ^T − G_t||_F^2 = ||[Σ_Ψ W_11 Σ_Ψ 0; 0 0] − V_Ψ^T G_t V_Ψ||_F^2   (18)
Let G̃_t ≜ V_Ψ^T G_t V_Ψ and let G̃_t^11 = G̃_t(1:Ñ, 1:Ñ) be the Ñ×Ñ upper-left block of G̃_t. Then (18) becomes:
||G − G_t||_F^2 = ||Σ_Ψ W_11 Σ_Ψ − G̃_t^11||_F^2 + ||G̃_t||_F^2 − ||G̃_t^11||_F^2
The last two terms on the right-hand side are independent of the projection matrix; defining W̃_11 ≜ Σ_Ψ W_11 Σ_Ψ, equation (17) is equivalent to:
Φ̂ = arg min_Φ ||W̃_11 − G̃_t^11||_F^2   (19)
Let W̃_11 = V_W Λ_W V_W^T and G̃_t^11 = V_t Λ_t V_t^T be eigenvalue decompositions, with the eigenvalues in Λ_W and Λ_t arranged in descending order. By Corollary 7.4.9.3 (p. 468) of R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 2nd edition, 2012:
||W̃_11 − G̃_t^11||_F^2 ≥ ||Λ_W − Λ_t||_F^2   (20)
Equality holds when V_W = V_t. Owing to the constraints of the projection matrix Φ, the rank of W̃_11 does not exceed M, so Λ_W contains at most M non-zero elements; when these M non-zero elements equal the M elements of Λ_t with the largest absolute values, the right-hand side of (20) attains its minimum, and the corresponding eigenvalue matrix is denoted Λ̃_W. Therefore, when:
W̃_11 = V_t Λ̃_W V_t^T;
equality holds in (20) and the minimum is attained. From this W_11 = Σ_Ψ^(-1) W̃_11 Σ_Ψ^(-1) is obtained, and performing an SVD on W_11 gives:
W_11 = V_11 [Σ_11^2 0; 0 0] V_11^T;
from which one can choose:
W = [V_11 0; 0 V_22] [Σ_11^2 0; 0 0] [V_11 0; 0 V_22]^T;
where V_22 is an orthogonal matrix of arbitrary size (N−Ñ)×(N−Ñ). Since W ≜ U_Ψ^T Φ^T Φ U_Ψ, the above discussion yields a solution of (17):
Φ̂ = U [Σ_11  0] [V_11 0; 0 V_22]^T U_Ψ^T   (21)
where U is an orthogonal matrix of arbitrary size M × M.
From the above discussion, the optimized design of the projection matrix Φ in the present invention depends only on the dictionary Ψ and the correction constant η. Therefore, for fixed Ψ and η, the system only needs to compute Φ once offline, and no projection matrix design step is required for each input test image; moreover, (21) is an analytic solution, so the computational cost is low. In addition, this result contains two degrees of freedom, U and V_22, which offers the possibility of further improving system performance.
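As an illustration of the analytic design (17)-(21), the following NumPy sketch computes Φ̂ from a dictionary Ψ and a target Gram matrix G_t. It is a sketch under assumptions, not the definitive implementation: the free factors U and V_22 are taken as identities, M is assumed not to exceed the rank Ñ of Ψ, and negative eigenvalues of the target block are clipped to zero so that the square roots exist.

```python
import numpy as np

def design_projection_matrix(Psi, Gt, M, rtol=1e-10):
    """Analytic projection matrix design following (17)-(21)."""
    N, L = Psi.shape
    # SVD of the dictionary: Psi = U_Psi [Sigma_Psi 0; 0 0] V_Psi^T
    U_Psi, s, Vt_Psi = np.linalg.svd(Psi, full_matrices=True)
    V_Psi = Vt_Psi.T
    Ntil = int(np.sum(s > rtol * s[0]))                 # numerical rank of Psi
    Sigma_Psi = s[:Ntil]

    # Rotate the target Gram matrix and keep its upper-left Ntil x Ntil block
    Gt_til = V_Psi.T @ Gt @ V_Psi
    Gt11 = Gt_til[:Ntil, :Ntil]

    # Eigendecomposition of the target block, eigenvalues in descending order
    lam, Vt_eig = np.linalg.eigh(Gt11)
    order = np.argsort(lam)[::-1]
    lam, Vt_eig = lam[order], Vt_eig[:, order]

    # Keep the M dominant eigenvalues (clipped at zero) -> Lambda~_W
    lam_W = np.zeros(Ntil)
    lam_W[:M] = np.clip(lam[:M], 0.0, None)

    # W~_11 = V_t Lambda~_W V_t^T and W_11 = Sigma_Psi^-1 W~_11 Sigma_Psi^-1
    W11_til = Vt_eig @ np.diag(lam_W) @ Vt_eig.T
    inv_S = np.diag(1.0 / Sigma_Psi)
    W11 = inv_S @ W11_til @ inv_S

    # Decompose W_11 = V_11 diag(Sigma_11^2, 0) V_11^T
    mu, V11 = np.linalg.eigh(W11)
    order = np.argsort(mu)[::-1]
    mu, V11 = mu[order], V11[:, order]
    Sigma11 = np.sqrt(np.clip(mu[:M], 0.0, None))

    # Phi = U [Sigma_11 0] [V_11 0; 0 V_22]^T U_Psi^T with U = I, V_22 = I
    V_blk = np.eye(N)
    V_blk[:Ntil, :Ntil] = V11
    mid = np.zeros((M, N))
    mid[:, :M] = np.diag(Sigma11)
    return mid @ V_blk.T @ U_Psi.T
```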
In order to verify the actual effect of the invention, in the embodiment of the invention the performance of the CS-based face recognition method and the improvement of system performance brought by the projection matrix optimization are verified through experimental simulation. The face sample libraries used in the experiments are the ORL library, the Yale library, the Yale-EXTENDED library (denoted Yale-E) and the CMU PIE library (denoted PIE).
In the simulation, a dictionary is constructed for each face library, with one sub-block per person, and the projection matrix is designed from the dictionary. Each face sample in each library is preprocessed to a size of 32×32 and formed into a 1024×1 column vector, i.e. N = 1024. For the 40 persons in the ORL library, 5 face samples are randomly selected from each person, giving 200 atoms for the dictionary, i.e. P = 40 and Q = 5; for the 15 persons of the Yale library, 8 face samples are randomly selected from each person, giving 120 atoms, i.e. P = 15 and Q = 8; for the 38 persons of the Yale-E library, 50 face samples are randomly selected from each person, giving 1900 atoms, i.e. P = 38 and Q = 50; for the 68 persons of the PIE library, 40 face samples are randomly selected from each person, giving 2720 atoms, i.e. P = 68 and Q = 40. From the remaining samples of each face library, 5 samples are randomly selected as test signals where possible; each recognition experiment is repeated 10 times, and the arithmetic mean of the 10 results is taken as the final recognition rate.
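To show how the pieces above fit together, the following is an illustrative end-to-end sketch that mirrors the ORL setting (P = 40, Q = 5, N = 1024, M = 80, η = 0.03). It relies on the hypothetical helper functions sketched earlier and uses random arrays in place of real face images, so it only demonstrates the data flow, not the reported results.

```python
import numpy as np

P, Q, N, M, eta = 40, 5, 1024, 80, 0.03
rng = np.random.default_rng(0)

# Random stand-ins for the preprocessed 32x32 face samples of each person
samples = [[rng.random((32, 32)) for _ in range(Q)] for _ in range(P)]

Psi = build_dictionary(samples)                      # N x (P*Q) dictionary of unit-norm atoms
Gt = corrected_gram_matrix(Psi, P, Q, eta=eta)       # target Gram matrix, eq. (15)
Phi = design_projection_matrix(Psi, Gt, M)           # offline projection design, eq. (21)

# Compress and classify one (random) test image
x = rng.random((32, 32)).reshape(-1)
x = x / np.linalg.norm(x)
y = Phi @ x                                          # compressed projection, eq. (5)
blocks = [Psi[:, p * Q:(p + 1) * Q] for p in range(P)]
p_hat, s_hat, x_rec = csc_classify(y, Phi, blocks)
print("judged class:", p_hat)
```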
(1) Projection matrix optimization parameter setting
A. Selecting the eta value:
The influence of the correction constant η on system performance is tested first. The compressed projection dimension is set to M = 80, i.e. a compression ratio of 1024/80; different values of η are taken, and the projection matrix Φ is designed by the optimization algorithm herein. FIG. 1 depicts the system recognition rate versus η for the different face libraries. From FIG. 1, for the Yale and Yale-E libraries the correction constant η does not improve the system recognition rate, and when η is too large (roughly η > 0.06) the recognition rate even decreases; for the ORL and PIE libraries, a properly chosen η improves the system recognition rate. On balance, the correction constant is fixed at η = 0.03 in the subsequent simulations, which yields a good recognition effect on all four face libraries.
B. Selecting an M value:
From CS theory, a larger compressed projection dimension M gives relatively higher accuracy when the system reconstructs the image, but also greater pressure on channel transmission and background processing; the trade-off therefore depends on the application scenario. Here the influence of the choice of M on the system recognition rate is verified: for different values of M, the projection matrix Φ is designed by the optimization algorithm above, and FIG. 2 depicts the system recognition rate versus M for the different face libraries. From FIG. 2, the system recognition rate does not vary monotonically with M. The recognition rate of the Yale library is essentially unchanged across different M; for the ORL and Yale-E libraries, the system recognition rate stabilizes at a satisfactory level when M = 80; for the PIE library, the recognition rate tends to increase overall, though not monotonically with M. Considering effectiveness, M is fixed at 80 in the subsequent simulations, i.e. a compression ratio of 1024/80.
(2) CSC Performance testing
The test signals are projection-compressed with the designed projection matrix, and the projection values are then classified. The classifiers compared are KNN, SVM, NNSRC and the CSC of this work; the recognition rates on the four face libraries are shown in Table 1:
TABLE 1 comparison of recognition rates of different classifiers
As can be seen from Table 1, the CSC achieves the highest system recognition rate on the three face libraries ORL, Yale and Yale-E, but performs slightly worse than NNSRC on the PIE library.
(3) Projection matrix optimization algorithm performance test
The test signals are projection-compressed with the designed projection matrix, and the projection values are then classified; different compression methods are compared, and the recognition rates on the four face libraries are shown in Table 2:
TABLE 2 comparison of recognition rates for different compression methods
The data in Table 2 show that, compared with random sampling and PCA, the projection matrix design method of the present invention achieves the highest system recognition rate, with a particularly clear improvement on the Yale and PIE libraries; however, the recognition rate on the PIE library remains low relative to the uncompressed case. The earlier simulation on the selection of M indicated that the system recognition rate on the PIE library tends to improve as M increases, so M is increased further, and the resulting recognition-rate curve for the PIE library is shown in FIG. 3. As can be seen from FIG. 3, for the PIE face library the system recognition rate already exceeds the uncompressed result in Table 2 when M = 120, and reaches 98.24% when M = 240, i.e. at a compression ratio of 1024/240.
(4) Image reconstruction effect testing
The projection matrix optimization algorithm is applied to the CSC, and the image is reconstructed in the background from the classification result. Let the test signal be x and the reconstructed signal be x̂; their Mean Square Error (MSE) is defined as:
σ_mse ≜ (1/N) ||x̂ − x||_2^2   (22)
The signal reconstruction performance is measured by the Peak Signal-to-Noise Ratio (PSNR), defined as:
σ_psnr ≜ 10 × log10[(2^r − 1)^2 / σ_mse]   (23)
where r = 8 is the number of coded bits per pixel. FIG. 4 depicts σ_psnr versus the compressed projection dimension M. For each face library, the trend of σ_psnr with M in FIG. 4 is basically consistent with the trend of the system recognition rate in FIG. 2: it does not increase monotonically with M, but the general trend is upward.
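A short sketch of the reconstruction metrics (22) and (23), assuming pixel values coded with r bits:

```python
import numpy as np

def mse(x_hat, x):
    """Mean square error, eq. (22)."""
    x_hat, x = np.ravel(x_hat).astype(float), np.ravel(x).astype(float)
    return np.sum((x_hat - x) ** 2) / x.size

def psnr(x_hat, x, r=8):
    """Peak signal-to-noise ratio in dB, eq. (23); r is the bits per pixel."""
    return 10.0 * np.log10((2 ** r - 1) ** 2 / mse(x_hat, x))
```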
Compared with the defects and shortcomings of the prior art, the invention has the following beneficial effects: the invention is based on a compressed sensing classifier, performs projection compression on input signals, transmits projection values, and a background uses sparse representation errors of the projection values to recognize and classify the input signals; in addition, the invention carries out optimization design aiming at the projection matrix of the system, defines a new measure, enables the Gram matrix of the equivalent dictionary to approach a corrected dictionary Gram matrix by designing the projection matrix, and utilizes matrix decomposition to obtain an analytic solution corresponding to the optimal projection matrix. Simulation results prove that for a proper projection matrix, the identification method can reduce the pressure of the system for processing data on one hand and can effectively improve the identification rate of the system on the other hand.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (4)

1. A face recognition method based on compressed sensing is characterized by comprising the following steps:
S1. Construct a dictionary Ψ = [Ψ_1, ..., Ψ_p, ..., Ψ_P] from the face samples according to the setting requirements, and preprocess the test face image x_0 into a column vector x;
S2. Design a projection matrix Φ according to the constructed dictionary Ψ;
S3. Input the projection value y of the column vector x under the projection matrix Φ into the function ŝ_p = V_p [Σ_p^(-1) ỹ_1; s̃_2] and solve for ŝ_p; as p traverses from 1 to P, all ŝ_p are obtained, where P is the number of persons in the face sample library, V_p, Σ_p and ỹ_1 are obtained from the projection matrix Φ, the dictionary Ψ and the input projection value y, and s̃_2 is an arbitrary vector of appropriate dimension; the function p̂ = arg min_p ||y − ΦΨ_p ŝ_p||_2^2, s.t. p ∈ [1, P], then discriminates the ŝ_p and outputs the judged class p̂, where Φ ∈ R^(M×N) is the projection matrix, Ψ_p is the sample set of the p-th person, and s_p is the sparse coefficient of dictionary sub-block p;
S4. According to the output judged class p̂, reconstruct the image data as x̂ = Ψ_p̂ ŝ_p̂ and rearrange it to obtain the reconstructed image.
2. The face recognition method based on compressed sensing according to claim 1, further comprising, before step S1:
S0. Input initial conditions, the initial conditions comprising a face library formed from the face samples of P persons and a test face image x_0.
3. The compressed sensing-based face recognition method according to claim 1, wherein in step S1, the dictionary library is constructed by the following steps:
suppose the face library stores face samples of P persons, where each person has multiple samples with different angles, expressions and illuminations, and all samples have the same size;
randomly select Q different samples for each person, form each sample image into a column vector according to the same arrangement rule, apply l2-norm normalization to each, set its size to N×1, and use it as an atom of the dictionary, thereby forming the dictionary Ψ = [Ψ_1, ..., Ψ_p, ..., Ψ_P];
for any 1 ≤ p ≤ P, the dictionary sub-block Ψ_p is the sample set of the p-th person, where L = PQ; for 1 ≤ l ≤ L, ψ_l with ||ψ_l||_2 = 1 is a column vector of the dictionary.
4. The face recognition method based on compressed sensing according to claim 2, wherein in step S2 the projection matrix Φ is defined by the function:
Φ̂ = U [Σ_11  0] [V_11 0; 0 V_22]^T U_Ψ^T; where U is an orthogonal matrix of arbitrary size M×M; V_22 is an arbitrary orthogonal matrix of size (N−Ñ)×(N−Ñ); U_Ψ is the U matrix of the SVD of Ψ; and V_11 and Σ_11 are obtained from the decomposition of W_11.
CN201510309822.2A 2015-06-08 2015-06-08 Face recognition method based on compressed sensing Pending CN104951756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510309822.2A CN104951756A (en) 2015-06-08 2015-06-08 Face recognition method based on compressed sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510309822.2A CN104951756A (en) 2015-06-08 2015-06-08 Face recognition method based on compressed sensing

Publications (1)

Publication Number Publication Date
CN104951756A true CN104951756A (en) 2015-09-30

Family

ID=54166398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510309822.2A Pending CN104951756A (en) 2015-06-08 2015-06-08 Face recognition method based on compressed sensing

Country Status (1)

Country Link
CN (1) CN104951756A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844261A (en) * 2016-04-21 2016-08-10 浙江科技学院 3D palmprint sparse representation recognition method based on optimization feature projection matrix
CN106131865A (en) * 2016-07-19 2016-11-16 浪潮软件集团有限公司 Network quality analysis method based on high-speed rail line
CN106874946A (en) * 2017-02-06 2017-06-20 浙江科技学院 A kind of novel classification recognizer based on subspace analysis
CN107766832A (en) * 2017-10-30 2018-03-06 国网浙江省电力公司绍兴供电公司 A kind of face identification method for field operation construction management
CN107944344A (en) * 2017-10-30 2018-04-20 国网浙江省电力公司绍兴供电公司 Power supply enterprise's construction mobile security supervision platform
CN107992897A (en) * 2017-12-14 2018-05-04 重庆邮电大学 Commodity image sorting technique based on convolution Laplce's sparse coding
CN109199432A (en) * 2018-06-26 2019-01-15 南京邮电大学 A kind of parallelly compressed cognitive method of Multi-path synchronous acquisition cardiechema signals
CN109800719A (en) * 2019-01-23 2019-05-24 南京大学 Low resolution face identification method based on sub-unit and compression dictionary rarefaction representation
CN110503078A (en) * 2019-08-29 2019-11-26 的卢技术有限公司 A kind of remote face identification method and system based on deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006003270A1 (en) * 2004-06-04 2006-01-12 France Telecom Method for recognising faces by means of a two-dimensional linear discriminant analysis
CN104463148A (en) * 2014-12-31 2015-03-25 南京信息工程大学 Human face recognition method based on image reconstruction and Hash algorithm

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006003270A1 (en) * 2004-06-04 2006-01-12 France Telecom Method for recognising faces by means of a two-dimensional linear discriminant analysis
CN104463148A (en) * 2014-12-31 2015-03-25 南京信息工程大学 Human face recognition method based on image reconstruction and Hash algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUANG BAI ET AL: "Alternating Optimization of Sensing Matrix and Sparsifying Dictionary for Compressed Sensing", IEEE Transactions on Signal Processing *
唐苗: "Research on Face Recognition Based on Sparse Representation" (基于稀疏表达的人脸识别研究), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844261A (en) * 2016-04-21 2016-08-10 浙江科技学院 3D palmprint sparse representation recognition method based on optimization feature projection matrix
CN106131865A (en) * 2016-07-19 2016-11-16 浪潮软件集团有限公司 Network quality analysis method based on high-speed rail line
CN106874946A (en) * 2017-02-06 2017-06-20 浙江科技学院 A kind of novel classification recognizer based on subspace analysis
CN106874946B (en) * 2017-02-06 2019-08-16 浙江科技学院 A kind of classifying identification method based on subspace analysis
CN107766832A (en) * 2017-10-30 2018-03-06 国网浙江省电力公司绍兴供电公司 A kind of face identification method for field operation construction management
CN107944344A (en) * 2017-10-30 2018-04-20 国网浙江省电力公司绍兴供电公司 Power supply enterprise's construction mobile security supervision platform
CN107992897A (en) * 2017-12-14 2018-05-04 重庆邮电大学 Commodity image sorting technique based on convolution Laplce's sparse coding
CN109199432A (en) * 2018-06-26 2019-01-15 南京邮电大学 A kind of parallelly compressed cognitive method of Multi-path synchronous acquisition cardiechema signals
CN109199432B (en) * 2018-06-26 2021-09-03 南京邮电大学 Parallel compression sensing method for multi-path synchronous acquisition of heart sound signals
CN109800719A (en) * 2019-01-23 2019-05-24 南京大学 Low resolution face identification method based on sub-unit and compression dictionary rarefaction representation
CN109800719B (en) * 2019-01-23 2020-08-18 南京大学 Low-resolution face recognition method based on sparse representation of partial component and compression dictionary
CN110503078A (en) * 2019-08-29 2019-11-26 的卢技术有限公司 A kind of remote face identification method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN104951756A (en) Face recognition method based on compressed sensing
Mandal et al. Curvelet based face recognition via dimension reduction
Wang et al. Facial expression recognition based on local phase quantization and sparse representation
Sumithra et al. A review of various linear and non linear dimensionality reduction techniques
Renna et al. Classification and reconstruction of high-dimensional signals from low-dimensional features in the presence of side information
CN103903261B (en) Spectrum image processing method based on partition compressed sensing
CN112966632B (en) Vibration signal imaging-based fault identification method and system
Bengua et al. Matrix product state for higher-order tensor compression and classification
CN106096517A (en) A kind of face identification method based on low-rank matrix Yu eigenface
CN104715266B (en) The image characteristic extracting method being combined based on SRC DP with LDA
Abdulrahman et al. Face recognition using enhancement discrete wavelet transform based on MATLAB
Vedaldi et al. Joint data alignment up to (lossy) transformations
Thiry et al. The unreasonable effectiveness of patches in deep convolutional kernels methods
Borgi et al. Regularized shearlet network for face recognition using single sample per person
CN108304833A (en) Face identification method based on MBLBP and DCT-BM2DPCA
Shiau et al. A sparse representation method with maximum probability of partial ranking for face recognition
He et al. Random combination for information extraction in compressed sensing and sparse representation-based pattern recognition
Dharani et al. Face recognition using wavelet neural network
Shejin et al. Significance of dictionary for sparse coding based face recognition
Jiang et al. Bregman iteration algorithm for sparse nonnegative matrix factorizations via alternating l 1-norm minimization
Sahu et al. Image compression methods using dimension reduction and classification through PCA and LDA: A review
CN112766081A (en) Palm print identification method and system based on principal component and sparse representation
Borgi et al. ShearFace: Efficient extraction of anisotropic features for face recognition
Shafiee et al. Efficient sparse representation classification using adaptive clustering
CN112417234B (en) Data clustering method and device and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned
AD01 Patent right deemed abandoned

Effective date of abandoning: 20200825