CN111104868A - Cross-quality face recognition method based on convolutional neural network characteristics - Google Patents
- Publication number: CN111104868A
- Application number: CN201911164077.1A
- Authority: CN (China)
- Prior art keywords: quality; image block; low; image; neural network
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/171 — Image or video recognition: human faces; local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Neural networks: combinations of networks
- G06N3/08 — Neural networks: learning methods
Abstract
The invention provides a cross-quality face recognition method based on convolutional neural network features. First, image blocks are obtained at each feature point of the high-quality training sample images, the low-quality test sample images, and the high- and low-quality training dictionary sample images. Second, a deep convolutional neural network is designed, and a feature vector is learned by the network for each feature-point image block. The feature vector of each test image block is then linearly represented over the feature vectors of the training image blocks. Next, similarity is measured between the feature representation of each low-quality test image block and that of the corresponding high-quality training image block, and a class is output for each test image block. Finally, for the set of image blocks obtained by dividing a face image at S facial key points, the classification results at the key-point positions are put to a vote, the image is assigned to the class receiving the most votes, and the final class of the low-quality test image is output.
Description
Technical Field
The invention relates to an image recognition method, in particular to a cross-quality face recognition method based on convolutional neural network features, and belongs to the technical field of pattern recognition and biometric recognition.
Background
Face recognition is a popular research topic in computer science, image processing and pattern recognition. In recent years, with the wide application of face recognition in many fields of society, such as criminal investigation, public security systems and surveillance, face recognition technology has received more and more attention.
In face recognition, inconsistent image quality between face images leads to low recognition accuracy, and recognition sometimes cannot be completed at all. An existing facial feature point detection method is:
[1] X. Cao, Y. Wei, F. Wen, J. Sun, "Face alignment by explicit shape regression", Int. J. Comput. Vis. 107(2) (2014), pp. 177–190.
An existing optimization solving method is:
[2] E. Hale, W. Yin, Y. Zhang, "Fixed-point continuation for l1-minimization: methodology and convergence", SIAM J. Optim. 19(3) (2008), pp. 1107–1130.
the existing method has the defects that the identification between images with different qualities cannot be processed in time, and the identification efficiency is greatly reduced due to factors such as illumination, shielding and the like.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a cross-quality face recognition method based on convolutional neural network features, so as to overcome the defects of the prior art.
The invention provides a cross-quality face recognition method based on convolutional neural network features, which comprises the following steps:
S1, up-sample the low-quality face image to the same resolution as the high-quality image, and obtain image blocks at each feature point of the high-quality training sample images, the low-quality test sample images and the high- and low-quality training dictionary sample images by the facial feature point detection technique [1]; go to step S2;
S2, design a deep convolutional neural network, and obtain a feature vector for each feature-point image block through learning by the network; go to step S3;
S3, linearly represent the feature vector of each low-quality test image block and the feature vector of each high-quality training image block using the weighted sparse coding regularized regression representation method; go to step S4;
S4, measure the similarity between the linear representation of the feature vector of each low-quality test image block and the linear representation of the feature vectors of the high-quality training image blocks, and output the class of each low-quality test image block; go to step S5;
and S5, for a face image divided into a set of image blocks at S facial key points, vote over the classification results of the image blocks at the key-point positions, assign the image to the class with the largest number of votes, and output the final class of the low-quality test image.
With this method, the image is partitioned using the facial feature point detection technique and image blocks of the key facial parts are extracted, so that irrelevant regions do not affect the recognition result and the computational complexity is reduced. The feature vector of each block is obtained by learning with a convolutional neural network instead of directly extracting pixel-level features, and a linear feature representation is then computed, so that low-quality face images can be recognized.
As a further technical solution of the present invention, the specific method of step S2 is as follows:
designing a deep convolutional neural network, wherein the neural network consists of 10 convolutional layers, 10 normalization layers and 9 activation layers;
for each feature point image block, the block size is 32 × 32, and learning by the convolutional neural network results in a feature vector of 1 × 128.
The specific method of step S3 is as follows:
S301, for the feature vector y of a low-quality test image block, the block is linearly represented using the image block feature vectors at the corresponding position on the low-quality training dictionary sample images as follows,
y = x_1 A_1 + x_2 A_2 + ... + x_i A_i + ... + x_N A_N + E
where A_i denotes the image block feature vector at the corresponding position on the i-th low-quality training dictionary sample image, i ∈ {1, 2, ..., N}, N denotes the number of low-quality training dictionary sample images, x_i denotes the coefficient corresponding to the i-th element of the coefficient vector x, and E denotes the residual term; go to step S302;
S302, the linear representation of each high-quality training image block over the image blocks at the corresponding position on the high-quality training dictionary sample images is obtained using the weighted sparse coding regularized regression representation method.
In step S301, the representation vector of the low-quality test image block is solved as follows:
first, for each low-quality test image block, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the low-quality training dictionary image blocks; the regression model is expressed as,
where ||·||_2 denotes the L2 norm, λ is the regularization parameter, W is a given local similarity matrix, and x denotes the coefficient vector of the low-quality test image block;
second, for convenience of solving, an auxiliary variable z is introduced and the above model is rewritten as,
finally, by introducing two variables P and p, the augmented Lagrangian function of the above model is expressed as,
where μ > 0 is a penalty parameter, P and p are Lagrange multipliers, tr(·) is the trace operation, T denotes the matrix transpose, and ||·||_F is the Frobenius norm of a matrix.
The parameters in the augmented Lagrangian function are optimized as follows:
<a> optimization of z
z^{k+1} can be obtained by the soft-thresholding method [2], where k denotes the k-th iteration;
<b> optimization of x
where H = [vec(A_1), vec(A_2), ..., vec(A_N)] and k denotes the k-th iteration, giving the solution
x^{k+1} = (H^T H + W^T W)^{-1} (H^T e^{k+1} + W^T b^{k+1});
<c> select a proper ε and check the convergence condition
max(||y - A(x^{k+1})||_∞, ||z^{k+1} - W x^{k+1}||_∞) < ε
If the maximum number of iterations is reached or the termination condition is satisfied, output x^{k+1} as x; otherwise, return to step <a>.
The specific method of step S302 is as follows:
for each high-quality training image block feature vector y_1, a linear representation is performed using the image block feature vectors at the corresponding position on the high-quality training dictionary sample images,
y_1 = c_1 G_1 + c_2 G_2 + ... + c_i G_i + ... + c_N G_N + E_2
where G_i denotes the image block feature vector at the corresponding position on the i-th high-quality training dictionary sample image, i ∈ {1, 2, ..., N}, c_i denotes the coefficient corresponding to the i-th element of the coefficient vector c, and E_2 denotes the residual term.
In step S302, the representation coefficients of the high-quality training image blocks are solved as follows:
first, for each high-quality training image block feature vector, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the high-quality training dictionary image blocks; the regression model is expressed as,
where ||·||_2 denotes the L2 norm, λ is the regularization parameter, W_1 is a given local similarity matrix, and c denotes the representation coefficient vector of the high-quality training image block;
second, for convenience of solving, an auxiliary variable z_1 is introduced and the above model is rewritten as,
then, by introducing two variables M and m, the augmented Lagrangian function of the above model is expressed as,
where μ > 0 is a penalty parameter, M and m are Lagrange multipliers, and tr(·) is the trace operation.
The parameters in the augmented Lagrangian function are optimized as follows:
<a> optimization of z_1
<b> optimization of c
c^{k+1} = (H^T H + W_1^T W_1)^{-1} (H^T e^{k+1} + W_1^T b^{k+1});
<c> select an appropriate ε_1 and check the convergence condition
where ||·|| is a given norm; if the maximum number of iterations is reached or the above termination condition is satisfied, output c^{k+1} as c; otherwise, return to step <a>.
The specific method of step S4 is as follows:
for the feature vector y of a low-quality test image block, its linear representation x over the low-quality training dictionary image block feature vectors is obtained in step S3, and for the set of high-quality training image blocks, the linear representations c = [c_1, c_2, ..., c_L] over the high-quality training dictionary image blocks are obtained, where L denotes the number of high-quality training dictionary images; the combination coefficient w can then be obtained from the following formula,
where η is a balance parameter; the reconstruction error e_i(x) of each class is then calculated as,
where c_i is the linear representation associated with class i over the high-quality training dictionary image blocks, the function δ_i represents the weight associated with the i-th class, and w* denotes the combination coefficient.
Compared with the prior art, the invention adopting the above technical solution has the following technical effects: it resolves the recognition difficulty caused by differing image quality between images and achieves a higher recognition rate than the prior art; by extracting feature points of the face image and dividing them into blocks, it also effectively handles the recognition difficulty caused by factors such as occlusion and illumination.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solution of the invention is further explained in detail with reference to the accompanying drawings: the present embodiment is implemented on the premise of the technical solution of the invention, and a detailed implementation and a specific operation process are given, but the protection scope of the invention is not limited to the following embodiment.
The embodiment provides a cross-quality face recognition method based on convolutional neural network features, which comprises the following steps:
S1, the low-quality face image is up-sampled to the same resolution as the high-quality image, and image blocks at each feature point of the high-quality training sample images, the low-quality test sample images and the high- and low-quality training dictionary sample images are obtained by the facial feature point detection technique [1].
The high-quality and low-quality images have different resolutions, so the low-quality test images and the low-quality training dictionary sample images are first up-sampled to the same resolution as the high-quality images. Because only some parts of the face carry information useful for recognition, a facial feature point detection technique is used to extract the feature points of the low-quality test sample images, the high-quality training sample images and the high-quality training dictionary sample images, i.e., informative parts such as the eyes, nose and mouth; these parts are then divided into blocks.
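A minimal sketch of this step follows, assuming the feature-point coordinates are supplied by an external facial landmark detector such as [1]; the function names, the bicubic up-sampling choice and the 32 × 32 patch size at this point are illustrative assumptions, not the exact procedure of the embodiment.

```python
import numpy as np
import cv2  # OpenCV, used here only for bicubic up-sampling

def upsample_to(img, target_shape):
    """Up-sample a low-quality face image to the resolution of the high-quality images."""
    h, w = target_shape[:2]
    return cv2.resize(img, (w, h), interpolation=cv2.INTER_CUBIC)

def extract_landmark_patches(img, landmarks, patch_size=32):
    """Cut one patch_size x patch_size block centred on each facial key point.

    `landmarks` is an (S, 2) array of (x, y) coordinates assumed to come from
    a facial landmark detector; points near the border are clamped so every
    patch keeps the full size."""
    half = patch_size // 2
    h, w = img.shape[:2]
    patches = []
    for x, y in np.asarray(landmarks, dtype=int):
        x = int(np.clip(x, half, w - half))
        y = int(np.clip(y, half, h - half))
        patches.append(img[y - half:y + half, x - half:x + half])
    return patches
```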
S2, a deep convolutional neural network is designed, and a feature vector is obtained for each feature-point image block through learning by the network.
The specific method of step S2 is as follows:
designing a deep convolutional neural network, as shown in table 1 below, the neural network is composed of 10 convolutional layers, 10 normalization layers and 9 activation layers;
for each feature point image block, the block size is 32 × 32, and learning by the convolutional neural network results in a feature vector of 1 × 128.
TABLE 1
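The contents of Table 1 are not reproduced in this text. As an illustration only, the following is a minimal PyTorch-style sketch of a network with 10 convolutional layers, 10 batch-normalization layers and 9 activation layers that maps a 32 × 32 patch to a 1 × 128 feature vector; the channel widths, kernel sizes and pooling are assumptions, not the architecture of Table 1.

```python
import torch
import torch.nn as nn

class PatchFeatureNet(nn.Module):
    """Illustrative 10-conv / 10-batchnorm / 9-ReLU network: one 32x32 patch -> 128-d feature."""
    def __init__(self, in_channels=1, feat_dim=128):
        super().__init__()
        widths = [32, 32, 64, 64, 128, 128, 128, 128, 128, feat_dim]
        layers, c_in = [], in_channels
        for i, c_out in enumerate(widths):
            stride = 2 if i % 2 == 1 else 1           # every other conv halves 32x32 down to 1x1
            layers.append(nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1))
            layers.append(nn.BatchNorm2d(c_out))
            if i < len(widths) - 1:                   # 9 activation layers: none after the last conv
                layers.append(nn.ReLU(inplace=True))
            c_in = c_out
        self.body = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):                             # x: (batch, 1, 32, 32)
        f = self.pool(self.body(x))                   # (batch, 128, 1, 1)
        return f.flatten(1)                           # (batch, 128) feature vectors
```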
And S3, the feature vector of each low-quality test image block and the feature vector of each high-quality training image block are linearly represented using the weighted sparse coding regularized regression representation method.
The specific method of step S3 is as follows:
S301, for the feature vector y of a low-quality test image block, the block is linearly represented using the image block feature vectors at the corresponding position on the low-quality training dictionary sample images as follows,
y = x_1 A_1 + x_2 A_2 + ... + x_i A_i + ... + x_N A_N + E
where A_i denotes the image block feature vector at the corresponding position on the i-th low-quality training dictionary sample image, i ∈ {1, 2, ..., N}, N denotes the number of low-quality training dictionary sample images, x_i denotes the coefficient corresponding to the i-th element of the coefficient vector x, and E denotes the residual term.
The representation vector of the low-quality test image block is solved as follows:
first, for each low-quality test image block, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the low-quality training dictionary image blocks; the regression model is expressed as,
where ||·||_2 denotes the L2 norm, λ is the regularization parameter, W is a given local similarity matrix, and x denotes the representation vector of the low-quality test image block;
second, for convenience of solving, an auxiliary variable z is introduced and the above model is rewritten as,
finally, by introducing two variables P and p, the augmented Lagrangian function of the above model is expressed as,
where μ > 0 is a penalty parameter, P and p are Lagrange multipliers, tr(·) is the trace operation, T denotes the matrix transpose, and ||·||_F is the Frobenius norm of a matrix.
The parameters in the augmented Lagrangian function are optimized as follows:
<a> optimization of z
z^{k+1} can be obtained by the soft-thresholding method [2], where k denotes the k-th iteration;
<b> optimization of x
where H = [vec(A_1), vec(A_2), ..., vec(A_N)] and k denotes the k-th iteration, giving the solution
x^{k+1} = (H^T H + W^T W)^{-1} (H^T e^{k+1} + W^T b^{k+1});
<c> select a proper ε and check the convergence condition
max(||y - A(x^{k+1})||_∞, ||z^{k+1} - W x^{k+1}||_∞) < ε
If the maximum number of iterations is reached or the termination condition is satisfied, output x^{k+1} as x; otherwise, return to step <a>.
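Because the regression model and the augmented-Lagrangian equations appear only as equation images in the original and are not reproduced here, the following is a minimal NumPy sketch of a standard ADMM-style solver for a weighted sparse coding model of the assumed form min_x ½||y − Hx||²₂ + λ||Wx||₁ with the splitting z = Wx. The correspondence of the intermediate quantities to the patent's e^{k+1} and b^{k+1}, as well as the parameter values, are assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Element-wise soft-thresholding operator used for the z-update [2]."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def weighted_sparse_coding(y, H, W, lam=0.01, mu=1.0, eps=1e-6, max_iter=200):
    """ADMM-style solver for the assumed model
        min_x 0.5*||y - H x||_2^2 + lam*||W x||_1,  split as z = W x.
    y: (d,) test-patch feature; H: (d, N) dictionary of patch features;
    W: (m, N) local similarity / weighting matrix."""
    x = np.zeros(H.shape[1])
    z = np.zeros(W.shape[0])
    p = np.zeros(W.shape[0])              # Lagrange multiplier for z = W x
    lhs = H.T @ H + mu * (W.T @ W)        # normal-equation matrix, fixed across iterations
    for _ in range(max_iter):
        # x-update: H^T y and W^T (z - p/mu) play the role of the patent's
        # H^T e^{k+1} and W^T b^{k+1} under the assumption stated above
        rhs = H.T @ y + mu * (W.T @ (z - p / mu))
        x = np.linalg.solve(lhs, rhs)
        z = soft_threshold(W @ x + p / mu, lam / mu)   # soft-threshold z-update
        p = p + mu * (W @ x - z)                       # multiplier update
        # convergence check of step <c>; eps and max_iter are illustrative
        if max(np.max(np.abs(y - H @ x)), np.max(np.abs(z - W @ x))) < eps:
            break
    return x
```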
S302, after the representation coefficient vector of the low-quality image block feature vector is obtained, the weighted sparse coding regularized regression representation method is further used to obtain the linear representation of each high-quality training image block over the image blocks at the corresponding position on the high-quality training dictionary sample images.
For each high quality training image block feature vector y1Performing linear representation by using image block feature vectors at corresponding positions on the high-quality training dictionary sample image,
y1=c1G1+c2G2+...ciGi+...+cNGN+E2
where G_i denotes the image block feature vector at the corresponding position on the i-th high-quality training dictionary sample image, i ∈ {1, 2, ..., N}, c_i denotes the coefficient corresponding to the i-th element of the coefficient vector c, and E_2 denotes the residual term.
The solving method of the representation coefficients of the high-quality training image block is as follows:
first, for each high-quality training image block feature vector, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the high-quality training dictionary image blocks; the regression model is expressed as,
where ||·||_2 denotes the L2 norm, λ is the regularization parameter, W_1 is a given local similarity matrix, and c denotes the representation coefficient vector of the high-quality training image block;
second, for convenience of solving, an auxiliary variable z_1 is introduced and the above model is rewritten as,
then, by introducing two variables M and m, the augmented Lagrangian function of the above model is expressed as,
where μ > 0 is a penalty parameter, M and m are Lagrange multipliers, and tr(·) is the trace operation.
The parameters in the augmented Lagrangian function are optimized as follows:
<a> optimization of z_1
<b> optimization of c
c^{k+1} = (H^T H + W_1^T W_1)^{-1} (H^T e^{k+1} + W_1^T b^{k+1});
<c> select an appropriate ε_1 and check the convergence condition
where ||·|| is a given norm; if the maximum number of iterations is reached or the above termination condition is satisfied, output c^{k+1} as c; otherwise, return to step <a>.
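Under the same assumptions, the representation coefficients c of step S302 can be obtained by reusing the sketch above, substituting the high-quality dictionary features and the weighting matrix W_1; the variable names G and W1 below are illustrative.

```python
# c: representation coefficients of one high-quality training patch feature y1
# over the high-quality dictionary features G, weighted by W1 (names illustrative)
c = weighted_sparse_coding(y1, G, W1, lam=0.01, mu=1.0)
```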
S4, the similarity between the linear representation of the feature vector of each low-quality test image block and the linear representation of the feature vectors of the high-quality training image blocks is measured, and a class is output for each test image block.
The specific method of step S4 is as follows:
for the feature vector y of a low-quality test image block, its linear representation x over the low-quality training dictionary image block feature vectors is obtained in step S3, and for the set of high-quality training image blocks, the linear representations c = [c_1, c_2, ..., c_L] over the high-quality training dictionary image blocks are obtained, where L denotes the number of high-quality training dictionary images; the combination coefficient w can then be obtained from the following formula,
where η is a balance parameter; the reconstruction error e_i(x) of each class is then calculated as,
where c_i is the linear representation associated with class i over the high-quality training dictionary image blocks, the function δ_i represents the weight associated with the i-th class, and w* denotes the combination coefficient.
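The combination-coefficient formula with the balance parameter η appears only as an equation image in the original and is not reproduced here; as an illustration, the following sketch uses the plain per-class reconstruction error of sparse-representation classification to decide the class of one test image block, with the role of δ_i realized by zeroing out all coefficients that do not belong to class i. The function name and the use of x alone (without w*) are simplifying assumptions.

```python
import numpy as np

def classify_patch(y, H, x, labels):
    """Pick the class whose dictionary atoms best reconstruct the test patch y.
    y: (d,) test-patch feature; H: (d, N) dictionary features;
    x: (N,) representation coefficients from step S3; labels: (N,) class of each atom."""
    labels = np.asarray(labels)
    errors = {}
    for cls in np.unique(labels):
        x_cls = np.where(labels == cls, x, 0.0)    # role of delta_i(x): keep class-i coefficients
        errors[cls] = np.linalg.norm(y - H @ x_cls)
    best = min(errors, key=errors.get)             # class with minimum reconstruction error
    return best, errors
```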
And S5, for a face image divided into a set of image blocks at S facial key points, the classification results of the image blocks at the key-point positions are put to a vote, the image is assigned to the class with the largest number of votes, and the final class of the low-quality test image is output.
Each low-quality test image is divided into a set of image blocks at S facial key points. For each low-quality image block, the class with the minimum reconstruction error is its classification result, so each low-quality test image yields S classification results. A voting decision is then made over these S (identical or different) results, and the class to which the largest number of image blocks is assigned is the final classification result of the low-quality test image.
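A minimal sketch of this majority-vote decision is given below; it assumes each of the S key-point blocks has already been classified (for example with classify_patch above), and the tie-breaking rule is an assumption.

```python
from collections import Counter

def vote_over_patches(patch_predictions):
    """Majority vote over the S per-key-point classification results of one test face."""
    return Counter(patch_predictions).most_common(1)[0][0]

# e.g. five key-point patches classified individually:
# vote_over_patches([3, 3, 7, 3, 1]) -> 3
```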
The above description is only one embodiment of the present invention, but the scope of the present invention is not limited thereto; any modification or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention falls within the scope of the present invention, which should therefore be defined by the protection scope of the claims.
Claims (9)
1. A cross-quality face recognition method based on convolutional neural network features is characterized by comprising the following steps:
S1, up-sample the low-quality face image to the same resolution as the high-quality image, and obtain image blocks at each feature point of the high-quality training sample images, the low-quality test sample images and the high- and low-quality training dictionary sample images by the facial feature point detection technique; go to step S2;
S2, design a deep convolutional neural network, and obtain a feature vector for each feature-point image block through learning by the network; go to step S3;
S3, linearly represent the feature vector of each low-quality test image block and the feature vector of each high-quality training image block using the weighted sparse coding regularized regression representation method; go to step S4;
S4, measure the similarity between the linear representation of the feature vector of each low-quality test image block and the linear representation of the feature vectors of the high-quality training image blocks, and output the class of each low-quality test image block; go to step S5;
and S5, for a face image divided into a set of image blocks at S facial key points, vote over the classification results of the image blocks at the key-point positions, assign the image to the class with the largest number of votes, and output the final class of the low-quality test image.
2. The cross-quality face recognition method based on the convolutional neural network feature of claim 1, wherein the specific method of step S2 is as follows:
designing a deep convolutional neural network, wherein the neural network consists of 10 convolutional layers, 10 normalization layers and 9 activation layers;
for each feature point image block, the block size is 32 × 32, and learning by the convolutional neural network results in a feature vector of 1 × 128.
3. The cross-quality face recognition method based on the convolutional neural network feature as claimed in claim 2, wherein the specific method of step S3 is as follows:
S301, for the feature vector y of a low-quality test image block, the block is linearly represented using the image block feature vectors at the corresponding position on the low-quality training dictionary sample images as follows,
y = x_1 A_1 + x_2 A_2 + ... + x_i A_i + ... + x_N A_N + E
where A_i denotes the image block feature vector at the corresponding position on the i-th low-quality training dictionary sample image, i ∈ {1, 2, ..., N}, N denotes the number of low-quality training dictionary sample images, x_i denotes the coefficient corresponding to the i-th element of the coefficient vector x, and E denotes the residual term; go to step S302;
S302, the linear representation of each high-quality training image block over the image blocks at the corresponding position on the high-quality training dictionary sample images is obtained using the weighted sparse coding regularized regression representation method.
4. The cross-quality face recognition method based on convolutional neural network features as claimed in claim 3, wherein in step S301, the representation vector of the low-quality test image block is solved as follows:
first, for each low-quality test image block, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the low-quality training dictionary image blocks; the regression model is expressed as,
where ||·||_2 denotes the L2 norm, λ is the regularization parameter, W is a given local similarity matrix, and x denotes the coefficient vector of the low-quality test image block;
second, for convenience of solving, an auxiliary variable z is introduced and the above model is rewritten as an equivalent problem subject to z = Wx;
finally, by introducing two variables P and p, the augmented Lagrangian function of the above model is expressed as,
where μ > 0 is a penalty parameter, P and p are both Lagrange multipliers, tr(·) is the trace operation, T denotes the matrix transpose, and ||·||_F is the Frobenius norm of a matrix.
5. The cross-quality face recognition method based on the convolutional neural network characteristics as claimed in claim 4, wherein the specific optimization process of parameters in the augmented Lagrangian function is as follows:
<a> optimization of z
z^{k+1} can be obtained by the soft-thresholding method, where k denotes the k-th iteration;
<b> optimization of x
where H = [vec(A_1), vec(A_2), ..., vec(A_N)] and k denotes the k-th iteration, giving the solution
x^{k+1} = (H^T H + W^T W)^{-1} (H^T e^{k+1} + W^T b^{k+1});
<c> select a proper ε and check the convergence condition
max(||y - A(x^{k+1})||_∞, ||z^{k+1} - W x^{k+1}||_∞) < ε
If the maximum number of iterations is reached or the termination condition is satisfied, output x^{k+1} as x; otherwise, return to step <a>.
6. The cross-quality face recognition method based on the convolutional neural network feature as claimed in claim 5, wherein the specific method in step S302 is as follows:
for each high-quality training image block feature vector y_1, a linear representation is performed using the image block feature vectors at the corresponding position on the high-quality training dictionary sample images,
y_1 = c_1 G_1 + c_2 G_2 + ... + c_i G_i + ... + c_N G_N + E_2
where G_i denotes the image block feature vector at the corresponding position on the i-th high-quality training dictionary sample image, i ∈ {1, 2, ..., N}, c_i denotes the coefficient corresponding to the i-th element of the coefficient vector c, and E_2 denotes the residual term.
7. The cross-quality face recognition method based on the convolutional neural network feature as claimed in claim 6, wherein in step S302, the method for solving the representation coefficients of the high-quality training image block is as follows:
first, for each high-quality training image block feature vector, the weighted sparse coding regularized regression representation method is used to obtain its linear representation over the high-quality training dictionary image blocks; the regression model is expressed as,
where ||·||_2 denotes the L2 norm, λ is the regularization parameter, W_1 is a given local similarity matrix, and c denotes the representation coefficient vector of the high-quality training image block;
second, for convenience of solving, an auxiliary variable z_1 is introduced and the above model is rewritten as,
then, by introducing two variables M and m, the augmented Lagrangian function of the above model is expressed as,
where μ > 0 is a penalty parameter, M and m are Lagrange multipliers, and tr(·) is the trace operation.
8. The cross-quality face recognition method based on the convolutional neural network characteristics as claimed in claim 7, wherein the specific optimization process of parameters in the augmented Lagrangian function is as follows:
<a> optimization of z_1
<b> optimization of c
c^{k+1} = (H^T H + W_1^T W_1)^{-1} (H^T e^{k+1} + W_1^T b^{k+1});
<c> select an appropriate ε_1 and check the convergence condition
where ||·|| is a given norm; if the maximum number of iterations is reached or the above termination condition is satisfied, output c^{k+1} as c; otherwise, return to step <a>.
9. The cross-quality face recognition method based on the convolutional neural network feature of claim 8, wherein the specific method of step S4 is as follows:
for the feature vector y of a low-quality test image block, its linear representation x over the low-quality training dictionary image block feature vectors is obtained in step S3, and for the set of high-quality training image blocks, the linear representations c = [c_1, c_2, ..., c_L] over the high-quality training dictionary image blocks are obtained, where L denotes the number of high-quality training dictionary images; the combination coefficient w can then be obtained from the following formula,
where η is a balance parameter; the reconstruction error e_i(x) of each class is then calculated as,
where c_i is the linear representation associated with class i over the high-quality training dictionary image blocks, the function δ_i represents the weight associated with the i-th class, and w* denotes the combination coefficient.
Priority Applications (1)
- CN201911164077.1A (granted as CN111104868B), priority date 2019-11-25, filing date 2019-11-25: Cross-quality face recognition method based on convolutional neural network characteristics
Publications (2)
- CN111104868A (application publication): 2020-05-05
- CN111104868B (granted publication): 2022-08-23
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112381070A (en) * | 2021-01-08 | 2021-02-19 | 浙江科技学院 | Fast robust face recognition method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107103592A (en) * | 2017-04-07 | 2017-08-29 | 南京邮电大学 | A kind of Face Image with Pose Variations quality enhancement method based on double-core norm canonical |
CN108520201A (en) * | 2018-03-13 | 2018-09-11 | 浙江工业大学 | A kind of robust human face recognition methods returned based on weighted blend norm |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant