CN111104868B - Cross-quality face recognition method based on convolutional neural network characteristics


Info

Publication number
CN111104868B
CN111104868B (application CN201911164077.1A)
Authority
CN
China
Prior art keywords
quality
image block
low
image
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911164077.1A
Other languages
Chinese (zh)
Other versions
CN111104868A (en)
Inventor
汪焰南
高广谓
吴松松
邓松
张皖
岳东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201911164077.1A
Publication of CN111104868A
Application granted
Publication of CN111104868B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a cross-quality face recognition method based on convolutional neural network characteristics. The method first obtains image blocks at each feature point of the high-quality training sample images, the low-quality test sample images, and the high- and low-quality training dictionary sample images; secondly, it designs a deep convolutional neural network and obtains a feature vector for each feature point image block through learning of the network; it then performs linear representation on the feature vectors of the test image blocks and the feature vectors of the training image blocks; next, similarity measurement is carried out between the feature representation of each low-quality test image block and the feature representations of the high-quality training image blocks, and the class of each test image block is output; finally, for a face image divided into an image block set of S facial key points, the classification results of the image blocks at the key point positions are put to a vote, the image is assigned to the class with the largest number of votes, and the final class of the low-quality test image is output.

Description

Cross-quality face recognition method based on convolutional neural network characteristics
Technical Field
The invention relates to an image recognition method, in particular to a cross-quality face recognition method based on convolutional neural network characteristics, and belongs to the technical field of pattern recognition and biometric recognition.
Background
Face recognition is a popular research topic built on computer science, image processing and pattern recognition. In recent years, with the wide application of face recognition in many social fields, such as criminal case identification, public security systems and surveillance, face recognition technology has received more and more attention.
In face recognition, inconsistent image quality leads to low recognition accuracy, and recognition sometimes cannot be completed at all. An existing face detection method is:
[1] X. Cao, Y. Wei, F. Wen, J. Sun, "Face alignment by explicit shape regression," Int. J. Comput. Vis. 107(2) (2014), pp. 177–190.
An existing optimization method is:
[2] E. Hale, W. Yin, Y. Zhang, "Fixed-point continuation for l1-minimization: methodology and convergence," SIAM J. Optim. 19(3) (2008), pp. 1107–1130.
The defect of the existing methods is that recognition across images of different quality is not handled well, and factors such as illumination and occlusion greatly reduce recognition efficiency.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a cross-quality face recognition method based on convolutional neural network characteristics that overcomes the defects of the prior art.
The invention provides a cross-quality face recognition method based on convolutional neural network characteristics, which comprises the following steps:
s1, sampling the low-quality face image to the resolution as high-quality image, and obtaining image blocks of each feature point of the high-quality training sample image, the low-quality measurement sample image and the high-low quality training dictionary sample image by the face feature point detection technology [1 ]; go to step S2;
s2, designing a deep convolution neural network, and obtaining a feature vector for each feature point image block through learning of the neural network; go to step S3;
s3, performing linear representation on the feature vector of the low-quality measurement trial image block and the feature vector of the high-quality training image block by using a weighted sparse coding regular regression representation method; go to step S4;
s4, carrying out similarity measurement on the linear representation of the feature vector of the low-quality measurement attempt image block and the linear representation of the feature vector of the high-quality training image block, and outputting the category of each low-quality measurement attempt image block; go to step S5;
and S5, for a face image which is divided into image block sets of S face key points, voting the image block classification result of each key point position, distributing the image to the class with the largest number of votes, and outputting the class of the final low-quality test image.
By this method, the image is partitioned with the face feature point detection technique and the image blocks of the key facial parts are extracted, which avoids the influence of irrelevant regions on the recognition result and reduces the computational complexity. Feature vectors are obtained through convolutional neural network learning instead of direct extraction of pixel-level features, and linear feature representation is then performed, so that low-quality face images can be recognized.
As a further technical solution of the invention, the specific method of step S2 is as follows:
a deep convolutional neural network is designed, consisting of 10 convolutional layers, 10 normalization layers and 9 activation layers;
for each feature point image block of size 32 × 32, learning by the convolutional neural network yields a 1 × 128 feature vector.
The specific method of step S3 is as follows:
s301, for a low-quality measurement trying image block feature vector y, performing the following linear representation on the low-quality measurement trying image block by using the image block feature vector of the corresponding position on the low-quality training dictionary sample image,
y=x 1 A 1 +x 2 A 2 +...x i A i +...+x N A N +E
wherein A is i RepresentThe image block feature vector of the corresponding position on the ith low-quality training dictionary sample image, wherein i is {1,2 i Representing a coefficient corresponding to the ith element in the coefficient vector x, and E represents a residual error item; go to step S302;
s302, linear representation of the image block set of the high-quality training image block at the corresponding position on the high-quality training dictionary sample image is obtained by using a weighted sparse coding regular regression representation method.
In step S301, the representation vector of the low-quality test image block is solved as follows:
first, define
A(x) = x_1 A_1 + x_2 A_2 + ... + x_N A_N, so that y = A(x) + E.
For each low-quality test image block, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the low-quality training sample image blocks; the regression model is expressed as
min_x ||y - A(x)||_2^2 + λ||Wx||_1
wherein ||·||_2 denotes the L2 norm, λ is the regularization parameter, W is the given local similarity matrix, and x denotes the coefficient vector of the low-quality test image block;
secondly, for convenience of solving, an auxiliary variable z is introduced and the above model is rewritten as
min_{x,z} ||y - A(x)||_2^2 + λ||z||_1   s.t.   z = Wx;
finally, by introducing the Lagrange multipliers P and p, the augmented Lagrangian function of the above formula is obtained (its full expression appears only as an equation image in the original), where μ is a penalty parameter with μ > 0, P and p are both Lagrange multipliers, tr(·) is the trace operation, T denotes the matrix transpose, and F denotes the Frobenius norm of a matrix.
The specific optimization process of the parameters in the augmented Lagrangian function is as follows:
< a > optimization of z
z^{k+1} is the minimizer of the augmented Lagrangian with respect to z (the subproblem appears as an equation image in the original); it is an L1-proximal problem whose solution can be obtained by the soft-threshold method [2], where k denotes the k-th iteration;
< b > optimization of x
x^{k+1} is the minimizer of the augmented Lagrangian with respect to x, where H = [vec(A_1), vec(A_2), ..., vec(A_N)] and e^{k+1}, b^{k+1} are intermediate terms assembled from y, z^{k+1}, and the multipliers (their exact definitions appear as equation images in the original); this yields the closed-form solution
x^{k+1} = (H^T H + W^T W)^{-1} (H^T e^{k+1} + W^T b^{k+1});
< c > select a proper ε and check the convergence condition
max(||y - A(x^{k+1})||_∞, ||z^{k+1} - Wx^{k+1}||_∞) < ε;
if the maximum number of iterations is reached or the termination condition is satisfied, output x^{k+1} as x; otherwise return to step < a >.
The specific method of step S302 is as follows:
For each high-quality training image block feature vector y_1, a linear representation is performed with the image block feature vectors at the corresponding position on the high-quality training dictionary sample images,
y_1 = c_1 G_1 + c_2 G_2 + ... + c_i G_i + ... + c_N G_N + E_2
wherein G_i denotes the image block feature vector at the corresponding position on the i-th high-quality training dictionary sample image, i ∈ {1, 2, ..., N}, c_i denotes the coefficient corresponding to the i-th element of the coefficient vector c, and E_2 denotes the residual term.
In step S302, the representation coefficients of the high-quality training image block are solved as follows:
first, define
G(c) = c_1 G_1 + c_2 G_2 + ... + c_N G_N, so that y_1 = G(c) + E_2.
For each high-quality training image block feature vector, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the high-quality training dictionary image blocks; the regression model is expressed as
min_c ||y_1 - G(c)||_2^2 + λ||W_1 c||_1
wherein ||·||_2 denotes the L2 norm, λ is the regularization parameter, W_1 is the given local similarity matrix, and c denotes the representation coefficients of the high-quality training image block;
secondly, for convenience of solving, an auxiliary variable z_1 is introduced and the above model is rewritten as
min_{c,z_1} ||y_1 - G(c)||_2^2 + λ||z_1||_1   s.t.   z_1 = W_1 c;
then, by introducing the Lagrange multipliers M and m, the augmented Lagrangian function of the above formula is obtained (its full expression appears only as an equation image in the original), where μ is a penalty parameter with μ > 0, M and m are Lagrange multipliers, and tr(·) is the trace operation.
The specific optimization process of the parameters in the augmented Lagrangian function is as follows:
< a > optimization of z_1
z_1^{k+1} is the minimizer of the augmented Lagrangian with respect to z_1 (the subproblem appears as an equation image in the original); its solution can be obtained by the soft-threshold method [2];
< b > optimization of c
c^{k+1} is the minimizer of the augmented Lagrangian with respect to c, where H = [vec(G_1), vec(G_2), ..., vec(G_N)] and e^{k+1}, b^{k+1} are intermediate terms assembled from y_1, z_1^{k+1}, and the multipliers (their exact definitions appear as equation images in the original); this yields the closed-form solution
c^{k+1} = (H^T H + W_1^T W_1)^{-1} (H^T e^{k+1} + W_1^T b^{k+1});
< c > select a proper ε_1 and check the convergence condition
max(||y_1 - G(c^{k+1})||_∞, ||z_1^{k+1} - W_1 c^{k+1}||_∞) < ε_1
wherein ||·|| is the given norm; if the maximum number of iterations is reached or the termination condition is satisfied, output c^{k+1} as c; otherwise return to step < a >.
The specific method of step S4 is as follows:
for the feature vector y of a low-quality test image block, its linear representation x over the low-quality training dictionary image blocks is obtained through step S3; for the high-quality training image block set, the linear representations c = [c_1, c_2, ..., c_L] over the high-quality training dictionary image blocks are obtained, where L denotes the number of high-quality training dictionary images; the combination coefficient w is then obtained from a regularized least-squares formula (given only as an equation image in the original), in which η is the balance parameter; the reconstruction error e_i(x) of each class is then calculated (the formula likewise appears as an equation image), wherein c_i is the linear representation of the i-th class of low-quality test image blocks over the high-quality training dictionary image blocks, the function δ_i keeps the weights associated with the i-th class, and w* denotes the combination coefficient.
Compared with the prior art, the technical solution of the invention has the following effects: it resolves the recognition difficulty caused by differing image quality between images and improves the recognition rate over the prior art; and it resolves the recognition difficulty caused by factors such as occlusion and illumination, handling them effectively by extracting feature points from the face image and partitioning it into blocks.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solution of the invention is further explained in detail below with reference to the accompanying drawing. This embodiment is implemented on the premise of the technical solution of the invention and gives a detailed implementation and a specific operation process, but the protection scope of the invention is not limited to the following embodiment.
This embodiment provides a cross-quality face recognition method based on convolutional neural network characteristics, comprising the following steps:
s1, the low-quality face image is up-sampled to the resolution same as that of the high-quality image, and image blocks of each feature point of the high-quality training sample image, the low-quality measurement sample image and the high-low quality training dictionary sample image are obtained through the face feature point detection technology [1 ].
The high-quality image and the low-quality image have different resolutions, and the low-quality test image and the low-quality training dictionary sample image are up-sampled to the resolution same as the high-quality image through an up-sampling technology; effective information of the recognizable parts of the human face is less, and characteristic points such as effective parts of eyes, a nose, a mouth and the like are extracted from low-quality measurement sample images, high-quality training sample images and high-low quality training dictionary sample images by adopting a human face characteristic point detection technology; then the effective parts are partitioned.
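As an illustration of step S1, the following Python sketch up-samples a low-quality face and cuts 32 × 32 blocks at detected feature points. It is a minimal sketch under stated assumptions: dlib's 68-point landmark model stands in for the face feature point detection technique [1] (the patent cites a shape-regression aligner, not dlib), and the target resolution and the five keypoint indices used here are illustrative, not specified by the patent.

```python
# Sketch of step S1 (assumptions noted above): up-sample, detect landmarks,
# and crop one 32x32 block around each selected facial feature point.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

KEYPOINT_IDS = [36, 45, 30, 48, 54]  # eye corners, nose tip, mouth corners

def extract_blocks(img_gray, hq_size=(128, 128), block=32):
    """Return a 32x32 block centred on each selected feature point of the
    (up-sampled) face image; img_gray is a uint8 grayscale array."""
    img = cv2.resize(img_gray, hq_size, interpolation=cv2.INTER_CUBIC)
    faces = detector(img, 1)
    if not faces:
        return []
    shape = predictor(img, faces[0])
    half = block // 2
    # reflect-pad so blocks near the border keep the full 32x32 size
    padded = cv2.copyMakeBorder(img, half, half, half, half, cv2.BORDER_REFLECT)
    blocks = []
    for i in KEYPOINT_IDS:
        x, y = shape.part(i).x + half, shape.part(i).y + half
        blocks.append(padded[y - half:y + half, x - half:x + half])
    return blocks
```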
S2, a deep convolutional neural network is designed, and a feature vector is obtained for each feature point image block through learning of the neural network.
The specific method of step S2 is as follows:
a deep convolutional neural network is designed; as shown in Table 1 below, the network consists of 10 convolutional layers, 10 normalization layers and 9 activation layers;
for each feature point image block of size 32 × 32, learning by the convolutional neural network yields a 1 × 128 feature vector.
TABLE 1
(The layer-by-layer architecture appears only as images in the original; per the text above, it comprises 10 convolutional layers, 10 normalization layers and 9 activation layers.)
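Because the layer table survives only as an image, the following PyTorch sketch is one plausible realization that matches the stated counts (10 convolutional layers, 10 batch-normalization layers, 9 activations) and maps a 32 × 32 block to a 1 × 128 feature vector; the kernel sizes, channel widths and strides are assumptions, not Table 1.

```python
# Assumed realization of the step-S2 feature network: 10 conv layers,
# 10 batch-norm layers, 9 ReLU activations; input 1x32x32, output 128-d.
import torch
import torch.nn as nn

def conv_bn(cin, cout, stride=1, relu=True):
    layers = [nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
              nn.BatchNorm2d(cout)]
    if relu:
        layers.append(nn.ReLU(inplace=True))
    return layers

class PatchFeatureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            *conv_bn(1, 32),                # 32x32
            *conv_bn(32, 32, stride=2),     # 16x16
            *conv_bn(32, 64),
            *conv_bn(64, 64, stride=2),     # 8x8
            *conv_bn(64, 128),
            *conv_bn(128, 128, stride=2),   # 4x4
            *conv_bn(128, 128),
            *conv_bn(128, 128, stride=2),   # 2x2
            *conv_bn(128, 128),
            # 10th conv/BN pair; no ReLU here, so activations total 9
            nn.Conv2d(128, 128, 2), nn.BatchNorm2d(128),  # 2x2 -> 1x1
        )

    def forward(self, x):                   # x: (B, 1, 32, 32)
        return self.body(x).flatten(1)      # (B, 128) feature vectors

print(PatchFeatureNet()(torch.randn(4, 1, 32, 32)).shape)  # [4, 128]
```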
S3, the feature vectors of the low-quality test image blocks and the feature vectors of the high-quality training image blocks are linearly represented by using the weighted sparse coding regularized regression representation method.
The specific method of step S3 is as follows:
s301, for a low-quality measurement attempt image block feature vector y, the low-quality measurement attempt image block is linearly represented by using the image block feature vector of the corresponding position on the low-quality training dictionary sample image as follows,
y=x 1 A 1 +x 2 A 2 +...x i A i +...+x N A N +E
wherein A is i And representing image block feature vectors of corresponding positions on the ith low-quality training dictionary sample image, wherein i is {1, 2., N }, N represents the number of the low-quality training dictionary sample images, xi represents a coefficient corresponding to the ith element in the coefficient vector x, and E represents a residual error term.
The solution for the low quality measurement attempt block's representative vector is as follows:
First, define
A(x) = x_1 A_1 + x_2 A_2 + ... + x_N A_N, so that y = A(x) + E.
For each low-quality test image block, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the low-quality training sample image blocks; the regression model is expressed as
min_x ||y - A(x)||_2^2 + λ||Wx||_1
wherein ||·||_2 denotes the L2 norm, λ is the regularization parameter, W is the given local similarity matrix, and x denotes the representation vector of the low-quality test image block;
secondly, for convenience of solving, an auxiliary variable z is introduced and the above model is rewritten as
min_{x,z} ||y - A(x)||_2^2 + λ||z||_1   s.t.   z = Wx;
finally, by introducing the Lagrange multipliers P and p, the augmented Lagrangian function of the above formula is obtained (its full expression appears only as an equation image in the original), where μ is a penalty parameter with μ > 0, P and p are both Lagrange multipliers, tr(·) is the trace operation, T denotes the matrix transpose, and F denotes the Frobenius norm of a matrix.
The specific optimization process of the parameters in the augmented Lagrangian function is as follows:
< a > optimization of z
z^{k+1} is the minimizer of the augmented Lagrangian with respect to z (the subproblem appears as an equation image in the original); it is an L1-proximal problem whose solution can be obtained by the soft-threshold method [2], where k denotes the k-th iteration;
< b > optimization of x
x^{k+1} is the minimizer of the augmented Lagrangian with respect to x, where H = [vec(A_1), vec(A_2), ..., vec(A_N)] and e^{k+1}, b^{k+1} are intermediate terms assembled from y, z^{k+1}, and the multipliers (their exact definitions appear as equation images in the original); this yields the closed-form solution
x^{k+1} = (H^T H + W^T W)^{-1} (H^T e^{k+1} + W^T b^{k+1});
< c > select a proper ε and check the convergence condition
max(||y - A(x^{k+1})||_∞, ||z^{k+1} - Wx^{k+1}||_∞) < ε;
if the maximum number of iterations is reached or the termination condition is satisfied, output x^{k+1} as x; otherwise return to step < a >.
S302, after the representation coefficient vectors of the low-quality image block feature vectors are obtained, the linear representation of each high-quality training image block over the image blocks at the corresponding position on the high-quality training dictionary sample images is further obtained by using the weighted sparse coding regularized regression representation method.
For each high-quality training image block feature vector y_1, a linear representation is performed with the image block feature vectors at the corresponding position on the high-quality training dictionary sample images,
y_1 = c_1 G_1 + c_2 G_2 + ... + c_i G_i + ... + c_N G_N + E_2
wherein G_i denotes the image block feature vector at the corresponding position on the i-th high-quality training dictionary sample image, i ∈ {1, 2, ..., N}, c_i denotes the coefficient corresponding to the i-th element of the coefficient vector c, and E_2 denotes the residual term.
The representation coefficients of the high-quality training image block are solved as follows:
First, define
G(c) = c_1 G_1 + c_2 G_2 + ... + c_N G_N, so that y_1 = G(c) + E_2.
For each high-quality training image block feature vector, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the high-quality training dictionary image blocks; the regression model is expressed as
min_c ||y_1 - G(c)||_2^2 + λ||W_1 c||_1
wherein ||·||_2 denotes the L2 norm, λ is the regularization parameter, W_1 is the given local similarity matrix, and c denotes the representation coefficients of the high-quality training image block;
secondly, for convenience of solving, an auxiliary variable z_1 is introduced and the above model is rewritten as
min_{c,z_1} ||y_1 - G(c)||_2^2 + λ||z_1||_1   s.t.   z_1 = W_1 c;
then, by introducing the Lagrange multipliers M and m, the augmented Lagrangian function of the above formula is obtained (its full expression appears only as an equation image in the original), where μ is a penalty parameter with μ > 0, M and m are Lagrange multipliers, and tr(·) is the trace operation.
The specific optimization process of the parameters in the augmented Lagrangian function is as follows:
< a > optimization of z_1
z_1^{k+1} is the minimizer of the augmented Lagrangian with respect to z_1 (the subproblem appears as an equation image in the original); its solution can be obtained by the soft-threshold method [2];
< b > optimization of c
c^{k+1} is the minimizer of the augmented Lagrangian with respect to c, where H = [vec(G_1), vec(G_2), ..., vec(G_N)] and e^{k+1}, b^{k+1} are intermediate terms assembled from y_1, z_1^{k+1}, and the multipliers (their exact definitions appear as equation images in the original); this yields the closed-form solution
c^{k+1} = (H^T H + W_1^T W_1)^{-1} (H^T e^{k+1} + W_1^T b^{k+1});
< c > select a proper ε_1 and check the convergence condition
max(||y_1 - G(c^{k+1})||_∞, ||z_1^{k+1} - W_1 c^{k+1}||_∞) < ε_1
wherein ||·|| is the given norm; if the maximum number of iterations is reached or the termination condition is satisfied, output c^{k+1} as c; otherwise return to step < a >.
S4, the similarity between the linear representation of the feature vector of each low-quality test image block and the linear representations of the feature vectors of the high-quality training image blocks is measured, and the class of each test image block is output.
The specific method of step S4 is as follows:
for the feature vector y of a low-quality test image block, its linear representation x over the low-quality training dictionary image blocks is obtained through step S3; for the high-quality training image block set, the linear representations c = [c_1, c_2, ..., c_L] over the high-quality training dictionary image blocks are obtained, where L denotes the number of high-quality training dictionary images; the combination coefficient w is then obtained from a regularized least-squares formula (given only as an equation image in the original), in which η is the balance parameter; the reconstruction error e_i(x) of each class is then calculated (the formula likewise appears as an equation image), wherein c_i is the linear representation of the i-th class of low-quality test image blocks over the high-quality training dictionary image blocks, the function δ_i keeps the weights associated with the i-th class, and w* denotes the combination coefficient.
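The two formulas of step S4 appear only as equation images; the sketch below assumes a ridge-regression reading, w* = argmin_w ||x - Cw||_2^2 + η||w||_2^2 followed by per-class errors e_i(x) = ||x - C δ_i(w*)||_2^2, where C stacks the columns c_1, ..., c_L. The function and variable names are illustrative.

```python
# Sketch of step S4 under the assumed ridge-regression formulas above.
import numpy as np

def class_errors(x, C, labels, eta=0.01):
    """x: representation of one low-quality test block, shape (n,);
    C: high-quality dictionary representations as columns, shape (n, L);
    labels: class label of each dictionary column, shape (L,)."""
    L = C.shape[1]
    w = np.linalg.solve(C.T @ C + eta * np.eye(L), C.T @ x)  # combination w*
    errors = {}
    for cls in np.unique(labels):
        wi = np.where(labels == cls, w, 0.0)  # delta_i keeps class-i weights
        errors[cls] = float(np.sum((x - C @ wi) ** 2))
    return errors
```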
S5, for a face image divided into an image block set of S facial key points, the classification results of the image blocks at each key point position are voted on, the image is assigned to the class with the largest number of votes, and the final class of the low-quality test image is output.
Each low-quality test image is divided into an image block set of S facial key points. For each low-quality image block, the class with the minimum reconstruction error is its classification result, so each low-quality test image yields S classification results; a voting decision is then made over these S (identical or differing) results, and the class to which the largest number of image blocks is assigned is the final classification result of the low-quality test image.
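A minimal sketch of the step-S5 voting rule, assuming per-block class errors of the form produced by the step-S4 sketch above:

```python
# Sketch of step S5: each of the S keypoint blocks votes with its
# minimum-error class; the test image takes the majority class.
from collections import Counter

def classify_image(block_errors):
    """block_errors: list of S dicts {class: reconstruction error}."""
    votes = [min(e, key=e.get) for e in block_errors]  # per-block winners
    return Counter(votes).most_common(1)[0][0]
```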
The above description is only an embodiment of the invention, but the protection scope of the invention is not limited thereto. Any modification or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the invention shall fall within the protection scope of the invention, which shall therefore be defined by the appended claims.

Claims (8)

1. A cross-quality face recognition method based on convolutional neural network features, characterized by comprising the following steps:
S1, up-sampling the low-quality face images to the same resolution as the high-quality images, and obtaining image blocks at each feature point of the high-quality training sample images, the low-quality test sample images, and the high- and low-quality training dictionary sample images by a face feature point detection technique; go to step S2;
S2, designing a deep convolutional neural network, and obtaining a feature vector for each feature point image block through learning of the neural network; go to step S3;
S3, performing linear representation on the feature vectors of the low-quality test image blocks and the feature vectors of the high-quality training image blocks by using a weighted sparse coding regularized regression representation method; the specific method comprises the following steps:
S301, for the feature vector y of a low-quality test image block, performing the following linear representation of the low-quality test image block with the image block feature vectors at the corresponding position on the low-quality training dictionary sample images,
y = x_1 A_1 + x_2 A_2 + ... + x_i A_i + ... + x_N A_N + E
wherein A_i denotes the image block feature vector at the corresponding position on the i-th low-quality training dictionary sample image, i ∈ {1, 2, ..., N}, N denotes the number of low-quality training dictionary sample images, x_i denotes the coefficient corresponding to the i-th element of the coefficient vector x, and E denotes the residual term; go to step S302;
S302, obtaining the linear representation of each high-quality training image block over the image blocks at the corresponding position on the high-quality training dictionary sample images by using the weighted sparse coding regularized regression representation method; go to step S4;
S4, performing similarity measurement between the linear representation of the feature vector of each low-quality test image block and the linear representations of the feature vectors of the high-quality training image blocks, and outputting the class of each low-quality test image block; go to step S5;
S5, for a face image divided into an image block set of S facial key points, voting on the classification results of the image blocks at each key point position, assigning the image to the class with the largest number of votes, and outputting the final class of the low-quality test image.
2. The cross-quality face recognition method based on convolutional neural network characteristics as claimed in claim 1, wherein the specific method of step S2 is as follows:
a deep convolutional neural network is designed, consisting of 10 convolutional layers, 10 normalization layers and 9 activation layers;
for each feature point image block of size 32 × 32, learning by the convolutional neural network yields a 1 × 128 feature vector.
3. The cross-quality face recognition method based on convolutional neural network characteristics as claimed in claim 1, wherein in step S301, the representation vector of the low-quality test image block is solved as follows:
first, define
A(x) = x_1 A_1 + x_2 A_2 + ... + x_N A_N, so that y = A(x) + E;
for each low-quality test image block, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the low-quality training sample image blocks; the regression model is expressed as
min_x ||y - A(x)||_2^2 + λ||Wx||_1
wherein ||·||_2 denotes the L2 norm, λ is the regularization parameter, W is the given local similarity matrix, and x denotes the coefficient vector of the low-quality test image block;
secondly, for convenience of solving, an auxiliary variable z is introduced and the above model is rewritten as
min_{x,z} ||y - A(x)||_2^2 + λ||z||_1   s.t.   z = Wx;
finally, by introducing the Lagrange multipliers P and p, the augmented Lagrangian function of the above formula is obtained (its full expression appears only as an equation image in the original), where μ is a penalty parameter with μ > 0, P and p are both Lagrange multipliers, tr(·) is the trace operation, T denotes the matrix transpose, and F denotes the Frobenius norm of a matrix.
4. The cross-quality face recognition method based on convolutional neural network characteristics as claimed in claim 3, wherein the specific optimization process of the parameters in the augmented Lagrangian function is as follows:
< a > optimization of z
z^{k+1} is the minimizer of the augmented Lagrangian with respect to z (the subproblem appears as an equation image in the original); its solution can be obtained by the soft-threshold method, where k denotes the k-th iteration;
< b > optimization of x
x^{k+1} is the minimizer of the augmented Lagrangian with respect to x, where H = [vec(A_1), vec(A_2), ..., vec(A_N)] and e^{k+1}, b^{k+1} are intermediate terms assembled from y, z^{k+1}, and the multipliers (their exact definitions appear as equation images in the original); this yields the closed-form solution
x^{k+1} = (H^T H + W^T W)^{-1} (H^T e^{k+1} + W^T b^{k+1});
< c > select a proper ε and check the convergence condition
max(||y - A(x^{k+1})||_∞, ||z^{k+1} - Wx^{k+1}||_∞) < ε;
if the maximum number of iterations is reached or the termination condition is satisfied, output x^{k+1} as x; otherwise return to step < a >.
5. The cross-quality face recognition method based on convolutional neural network characteristics as claimed in claim 4, wherein the specific method in step S302 is as follows:
for each high-quality training image block feature vector y_1, a linear representation is performed with the image block feature vectors at the corresponding position on the high-quality training dictionary sample images,
y_1 = c_1 G_1 + c_2 G_2 + ... + c_i G_i + ... + c_N G_N + E_2
wherein G_i denotes the image block feature vector at the corresponding position on the i-th high-quality training dictionary sample image, i ∈ {1, 2, ..., N}, c_i denotes the coefficient corresponding to the i-th element of the coefficient vector c, and E_2 denotes the residual term.
6. The cross-quality face recognition method based on convolutional neural network characteristics as claimed in claim 5, wherein in step S302, the representation coefficients of the high-quality training image block are solved as follows:
first, define
G(c) = c_1 G_1 + c_2 G_2 + ... + c_N G_N, so that y_1 = G(c) + E_2;
for each high-quality training image block feature vector, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the high-quality training dictionary image blocks; the regression model is expressed as
min_c ||y_1 - G(c)||_2^2 + λ||W_1 c||_1
wherein ||·||_2 denotes the L2 norm, λ is the regularization parameter, W_1 is the given local similarity matrix, and c denotes the representation coefficients of the high-quality training image block;
secondly, for convenience of solving, an auxiliary variable z_1 is introduced and the above model is rewritten as
min_{c,z_1} ||y_1 - G(c)||_2^2 + λ||z_1||_1   s.t.   z_1 = W_1 c;
then, by introducing the Lagrange multipliers M and m, the augmented Lagrangian function of the above formula is obtained (its full expression appears only as an equation image in the original), where μ is a penalty parameter with μ > 0, M and m are Lagrange multipliers, and tr(·) is the trace operation.
7. The cross-quality face recognition method based on convolutional neural network characteristics as claimed in claim 6, wherein the specific optimization process of the parameters in the augmented Lagrangian function is as follows:
< a > optimization of z_1
z_1^{k+1} is the minimizer of the augmented Lagrangian with respect to z_1 (the subproblem appears as an equation image in the original); its solution can be obtained by the soft-threshold method;
< b > optimization of c
c^{k+1} is the minimizer of the augmented Lagrangian with respect to c, where H = [vec(G_1), vec(G_2), ..., vec(G_N)] and e^{k+1}, b^{k+1} are intermediate terms assembled from y_1, z_1^{k+1}, and the multipliers (their exact definitions appear as equation images in the original); this yields the closed-form solution
c^{k+1} = (H^T H + W_1^T W_1)^{-1} (H^T e^{k+1} + W_1^T b^{k+1});
< c > select a proper ε_1 and check the convergence condition
max(||y_1 - G(c^{k+1})||_∞, ||z_1^{k+1} - W_1 c^{k+1}||_∞) < ε_1
wherein ||·|| is the given norm; if the maximum number of iterations is reached or the termination condition is satisfied, output c^{k+1} as c; otherwise return to step < a >.
8. The cross-quality face recognition method based on convolutional neural network characteristics as claimed in claim 7, wherein the specific method of step S4 is as follows:
for the feature vector y of a low-quality test image block, its linear representation x over the low-quality training dictionary image blocks is obtained through step S3; for the high-quality training image block set, the linear representations c = [c_1, c_2, ..., c_L] over the high-quality training dictionary image blocks are obtained, where L denotes the number of high-quality training dictionary images; the combination coefficient w is obtained from a regularized least-squares formula (given only as an equation image in the original), in which η is the balance parameter; the reconstruction error e_i(x) of each class is then calculated (the formula likewise appears as an equation image), wherein c_i is the linear representation of the i-th class of low-quality test image blocks over the high-quality training dictionary image blocks, the function δ_i keeps the weights associated with the i-th class, and w* denotes the combination coefficient.
CN201911164077.1A 2019-11-25 2019-11-25 Cross-quality face recognition method based on convolutional neural network characteristics Active CN111104868B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911164077.1A CN111104868B (en) 2019-11-25 2019-11-25 Cross-quality face recognition method based on convolutional neural network characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911164077.1A CN111104868B (en) 2019-11-25 2019-11-25 Cross-quality face recognition method based on convolutional neural network characteristics

Publications (2)

Publication Number Publication Date
CN111104868A CN111104868A (en) 2020-05-05
CN111104868B true CN111104868B (en) 2022-08-23

Family

ID=70421226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911164077.1A Active CN111104868B (en) 2019-11-25 2019-11-25 Cross-quality face recognition method based on convolutional neural network characteristics

Country Status (1)

Country Link
CN (1) CN111104868B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381070B (en) * 2021-01-08 2021-08-31 浙江科技学院 Fast robust face recognition method


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103592A (en) * 2017-04-07 2017-08-29 南京邮电大学 Pose-variant face image quality enhancement method based on dual nuclear norm regularization
CN108520201A (en) * 2018-03-13 2018-09-11 浙江工业大学 Robust face recognition method based on weighted mixed norm regression

Also Published As

Publication number Publication date
CN111104868A (en) 2020-05-05

Similar Documents

Publication Publication Date Title
CN109522818B (en) Expression recognition method and device, terminal equipment and storage medium
CN103824054B (en) A kind of face character recognition methods based on cascade deep neural network
CN113011357B (en) Depth fake face video positioning method based on space-time fusion
CN108875459B (en) Weighting sparse representation face recognition method and system based on sparse coefficient similarity
CN106909938B (en) Visual angle independence behavior identification method based on deep learning network
CN110532925B (en) Driver fatigue detection method based on space-time graph convolutional network
CN107491729B (en) Handwritten digit recognition method based on cosine similarity activated convolutional neural network
CN110969073B (en) Facial expression recognition method based on feature fusion and BP neural network
Hu et al. Single sample face recognition under varying illumination via QRCP decomposition
CN116311483B (en) Micro-expression recognition method based on local facial area reconstruction and memory contrast learning
CN113807356B (en) End-to-end low-visibility image semantic segmentation method
CN109117795B (en) Neural network expression recognition method based on graph structure
CN110163156A (en) It is a kind of based on convolution from the lip feature extracting method of encoding model
CN114694255B (en) Sentence-level lip language recognition method based on channel attention and time convolution network
CN109002771A (en) A kind of Classifying Method in Remote Sensing Image based on recurrent neural network
CN111104868B (en) Cross-quality face recognition method based on convolutional neural network characteristics
CN114581965A (en) Training method of finger vein recognition model, recognition method, system and terminal
CN110688966A (en) Semantic-guided pedestrian re-identification method
CN114170657A (en) Facial emotion recognition method integrating attention mechanism and high-order feature representation
CN103942545A (en) Method and device for identifying faces based on bidirectional compressed data space dimension reduction
CN113850182A (en) Action identification method based on DAMR-3 DNet
CN116704585A (en) Face recognition method based on quality perception
CN114387524B (en) Image identification method and system for small sample learning based on multilevel second-order representation
CN114944002B (en) Text description-assisted gesture-aware facial expression recognition method
CN110287761A (en) A kind of face age estimation method analyzed based on convolutional neural networks and hidden variable

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant