CN114821703A - Distance adaptive thermal infrared face recognition method - Google Patents

Distance adaptive thermal infrared face recognition method

Info

Publication number
CN114821703A
CN114821703A (application CN202210253355.6A)
Authority
CN
China
Prior art keywords
thermal infrared
net
network
image
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210253355.6A
Other languages
Chinese (zh)
Other versions
CN114821703B (en)
Inventor
殷光强
李超
米尔卡米力江·亚森
郑雨晴
刘亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202210253355.6A
Publication of CN114821703A
Application granted
Publication of CN114821703B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image recognition, and in particular to a distance-adaptive thermal infrared face recognition method. The method constructs and trains an improved thermal infrared image super-resolution enhancement network, Retinex-CNN, and uses the trained network to process the thermal infrared image. Guided by prior information obtained from the near infrared, local features are extracted from the near-infrared and the processed thermal infrared images with different feature algorithms, the extracted features are combined by full-feature fusion, the fused features are reduced in dimension, and the dimension-reduced features are then classified and recognized. The method effectively addresses the difficulty of face recognition caused by the strong dependence of thermal infrared imaging on distance when the distance changes.

Description

Distance adaptive thermal infrared face recognition method
Technical Field
The invention relates to the technical field of image recognition, in particular to a distance adaptive thermal infrared face recognition method.
Background
Image enhancement is one of the key preprocessing steps in face recognition. For the problem of face images whose resolution is too low because of distance, current image enhancement methods fall into indirect and direct approaches. Indirect methods handle the low resolution by applying super-resolution enhancement; most super-resolution algorithms, however, aim at improving the visual quality of the image and neglect recognition rate as an evaluation criterion for image enhancement. Direct methods extract stable face features that can distinguish different faces directly from the low-resolution face image, and are divided into feature-based and structure-based extraction. For example, robust face features can be extracted from color features, texture features and image subspace information, and structural features can be extracted by coupled locality preserving mappings; however, the feature dimensions of low-resolution images do not match, and direct feature extraction is difficult.
The training sets of existing thermal infrared face recognition models are neither large enough nor sufficiently representative. Although high recognition rates are achieved on datasets processed in the laboratory, the models generalize poorly and the recognition rate drops when they face real images acquired by infrared equipment.
Disclosure of Invention
To solve the above technical problems, the invention provides a distance-adaptive thermal infrared face recognition method. It effectively addresses the difficulty of face recognition caused by the strong dependence of thermal infrared imaging on distance when the distance changes, and it alleviates the low recognition rate caused by poor generalization when facing real images acquired by infrared equipment.
The invention is realized by adopting the following technical scheme:
a distance adaptive thermal infrared face recognition method is characterized in that: the method comprises the following steps:
S1: constructing and training an improved thermal infrared image super-resolution enhancement network Retinex-CNN, and processing the thermal infrared image with the trained network; the improved Retinex-CNN comprises a decomposition network Decompose-Net and a denoising enhancement network Enhance-Net for the reflectance component; Decompose-Net decomposes the low-illumination image into its reflectance and illumination components;
S2: based on prior information obtained from the near infrared, extracting local features from the near-infrared image and the processed thermal infrared image with different feature algorithms, performing full-feature fusion on the extracted features, reducing the dimension of the fused features, and then classifying and recognizing the dimension-reduced features.
In step S1, constructing and training the improved thermal infrared image super-resolution enhancement network Retinex-CNN specifically comprises: constructing and training the decomposition network Decompose-Net, and constructing the denoising enhancement network Enhance-Net for the reflectance component.
Constructing and training the decomposition network Decompose-Net specifically comprises the following steps:
S11: constructing a twin decomposition network Decompose-Net of depth 5, composed of several convolutional layers and activation layers;
S12: inputting the normally illuminated face image S_normal and the low-illumination face image S_low into Decompose-Net respectively, and decomposing them under the guidance of Retinex theory to obtain the corresponding reflectance components R_normal, R_low and illumination components I_normal, I_low;
S13: from the results of step S12, calculating the three components of the loss function of Decompose-Net in the training stage: the reconstruction loss component L_recon, the reflectance consistency component L_ir, and the illumination smoothness loss component L_is;
S14: obtaining the loss function of Decompose-Net from the preset weight parameters and the three loss components computed in the training stage, and training with this loss function to obtain the trained decomposition network Decompose-Net.
In step S13, the reconstruction loss component L_recon is calculated as:
$L_{recon} = \sum_{i=low,normal}\sum_{j=low,normal} \lambda_{ij}\,\lVert R_i \cdot I_j - S_j \rVert_1$
The reflectance consistency component L_ir is calculated as:
$L_{ir} = \lVert R_{low} - R_{normal} \rVert_1$
the luminance smoothing loss component L is The calculation method comprises the following steps:
Figure BDA0003547554280000021
wherein the content of the first and second substances,
Figure BDA0003547554280000022
meaning that the illumination components are graded,
Figure BDA0003547554280000023
representing the gradient, λ, of the reflected component g Is a preset weight parameter.
In step S14, the loss function of the decomposition network Decompose-Net is the weighted sum of the three loss components:
$L = \lambda_{ij} L_{recon} + \lambda_{ir} L_{ir} + \lambda_{is} L_{is}$
where $\lambda_{ij}$ is the weight of the reconstruction loss component L_recon, $\lambda_{ir}$ is the weight of the reflectance consistency component L_ir, and $\lambda_{is}$ is the weight of the illumination smoothness loss component L_is.
In step S1, constructing the denoising enhancement network Enhance-Net for the reflectance component specifically means: constructing, for the low-illumination reflectance component, a denoising enhancement network Enhance-Net composed of BM3D denoising and fractional-order differential enhancement.
In step S1, processing the thermal infrared image with the trained improved thermal infrared image super-resolution enhancement network Retinex-CNN specifically means: inputting the thermal infrared image into the trained decomposition network Decompose-Net, decomposing it to obtain the low-illumination reflectance component R_low, and inputting R_low into the denoising enhancement network Enhance-Net of the reflectance component to generate the final reflectance component R_final, thereby obtaining the processed thermal infrared image.
Step S2 specifically comprises the following steps:
S21: selecting the three feature algorithms SIFT, LBP and HOG, and extracting local features from the near-infrared image and the processed thermal infrared image respectively;
S22: using a weighted continuous feature combination algorithm, fusing two or more good features extracted from the near-infrared image and from the processed thermal infrared image respectively to generate a new feature;
S23: finding the projection matrix from each view to the common subspace with the multi-view smooth discriminant analysis subspace method;
S24: projecting the fused new features of the near-infrared and thermal infrared images into the subspace simultaneously and performing dimension reduction;
S25: classifying and recognizing with an NN classifier;
S26: performing face recognition.
The good features in step S22 are specifically features that are independent of the other features and carry information.
Compared with the prior art, the invention has the beneficial effects that:
1. The method starts from the principle of thermal infrared face recognition. Because thermal infrared images have low pixel resolution, pixel loss is severe when the distance changes and effective features are difficult to extract, so near-infrared images are introduced for auxiliary training in the training stage. The improved thermal infrared image super-resolution enhancement network Retinex-CNN enhances image quality and facilitates further feature extraction. Then, based on prior information obtained from the near infrared, a full-feature fusion method achieves fast detection and efficient recognition of thermal infrared face images within a certain range of distances, ultimately improving the discrimination rate and recognition accuracy of thermal infrared face images when the distance changes.
2. In the invention, image enhancement of the reflectance component R_low decomposed from the low-illumination face image S_low to obtain R_final strengthens the edge information of the image and removes the influence of the illumination component, thereby enhancing image details.
3. The image is decomposed under the guidance of Retinex theory: the original image is the product of a reflectance component R and an illumination component I. The reflectance component can be regarded as a high-frequency component that better describes the intrinsic information and edge characteristics of the image, while the illumination component is a low-frequency component affected by lighting. Under low illumination the illumination component degrades the image feature representation, so separating the two components and recovering the reflectance component from the original image removes the influence of uneven illumination and achieves the image enhancement effect.
4. The method uses near-infrared feature extraction as prior information. With the prior information of the near-infrared image, recognition performance is improved by extracting, according to an invariant feature extraction algorithm, features that are insensitive to the difference between the two modalities.
5. The invention projects the fused new features of the near-infrared and thermal infrared images into the subspace simultaneously, which realizes dimension reduction. This reduces the amount of computation without losing effective information, still characterizes the properties of the original sample space to a certain extent, and facilitates better classification and recognition at a later stage.
Drawings
The invention is described in further detail below with reference to the accompanying drawings and the detailed description, in which:
FIG. 1 is a schematic structural diagram of an improved thermal infrared image super-resolution enhancement network Retinex-CNN in the invention;
FIG. 2 is a schematic flow chart of full feature fusion in the present invention;
fig. 3 is a schematic flow chart of the SIFT feature algorithm in the present invention.
Detailed Description
Example 1
As a basic embodiment, the invention comprises a distance-adaptive thermal infrared face recognition method with the following steps:
S1: construct and train the improved thermal infrared image super-resolution enhancement network Retinex-CNN, and process the thermal infrared image with the trained network.
The improved thermal infrared image super-resolution enhancement network Retinex-CNN comprises a decomposition network Decompose-Net and a denoising enhancement network Enhance-Net for the reflectance component. Decompose-Net decomposes the low-illumination image into its reflectance and illumination components.
S2: based on the prior information obtained from the near infrared, recognition performance is improved by extracting, according to an invariant feature extraction algorithm, features that are insensitive to the difference between the two modalities; that is, different feature algorithms are used to extract local features from the near-infrared and processed thermal infrared images respectively, the extracted features are fused by full-feature fusion, the fused features are reduced in dimension, and the dimension-reduced features are classified and recognized.
Example 2
As a preferred embodiment of the present invention, the present invention includes a distance adaptive thermal infrared face recognition method, including the steps of:
S1: construct and train the improved thermal infrared image super-resolution enhancement network Retinex-CNN, which specifically comprises constructing and training the decomposition network Decompose-Net and constructing the denoising enhancement network Enhance-Net for the reflectance component.
The method for constructing and training the decomposition network Decompose-Net specifically comprises the following steps:
S11: construct a twin decomposition network Decompose-Net of depth 5, composed of several convolutional layers and activation layers.
S12: input the normally illuminated face image S_normal and the low-illumination face image S_low into Decompose-Net respectively, and decompose them under the guidance of Retinex theory to obtain the corresponding reflectance components R_normal, R_low and illumination components I_normal, I_low.
S13: from the results of step S12, calculate the three components of the loss function of Decompose-Net in the training stage: the reconstruction loss component L_recon, the reflectance consistency component L_ir, and the illumination smoothness loss component L_is.
S14: obtain the loss function of Decompose-Net from the preset weight parameters and the three loss components computed in the training stage, and train with this loss function to obtain the trained decomposition network Decompose-Net.
A denoising enhancement network Enhance-Net composed of BM3D denoising and fractional-order differential enhancement is then constructed for the low-illumination reflectance component.
Processing the thermal infrared image with the trained improved Retinex-CNN specifically comprises: inputting the thermal infrared image into the trained decomposition network Decompose-Net, decomposing it to obtain the low-illumination reflectance component R_low, and inputting R_low into the denoising enhancement network Enhance-Net of the reflectance component to generate the final reflectance component R_final, thereby obtaining the processed thermal infrared image.
S2: based on the prior information obtained from the near infrared, extract local features from the near-infrared image and the processed thermal infrared image with different feature algorithms, perform full-feature fusion on the extracted features, reduce the dimension of the fused features, and classify and recognize the dimension-reduced features.
Example 3
As another preferred embodiment of the present invention, the present invention includes a distance adaptive thermal infrared face recognition method, including the steps of:
S1: construct and train the improved thermal infrared image super-resolution enhancement network Retinex-CNN, and process the thermal infrared image with the trained network. The improved Retinex-CNN comprises a decomposition network Decompose-Net and a denoising enhancement network Enhance-Net for the reflectance component; Decompose-Net decomposes the low-illumination image into its reflectance and illumination components.
S2: based on the prior information obtained from the near infrared, extract local features from the near-infrared image and the processed thermal infrared image with different feature algorithms, perform full-feature fusion on the extracted features, reduce the dimension of the fused features, and classify and recognize the dimension-reduced features.
Step S2 specifically comprises the following steps:
S21: select the three feature algorithms SIFT, LBP and HOG, and extract local features from the near-infrared image and the processed thermal infrared image respectively.
S22: using a weighted continuous feature combination algorithm, fuse two or more good features extracted from the near-infrared image and from the processed thermal infrared image respectively to generate a new feature.
S23: find the projection matrix from each view to the common subspace with the multi-view smooth discriminant analysis subspace method.
S24: project the fused new features of the near-infrared and thermal infrared images into the subspace simultaneously and perform dimension reduction.
S25: classify and recognize with the NN classifier.
S26: perform face recognition.
Example 4
As the best embodiment of the invention, the invention comprises a distance-adaptive thermal infrared face recognition method, where distance-adaptive specifically means enhancing the resolution of the thermal infrared face image and the recognition accuracy when the distance changes. The method comprises the following steps:
S1: referring to Fig. 1 of the specification, an improved thermal infrared image super-resolution enhancement network Retinex-CNN is constructed and trained; it comprises a decomposition network Decompose-Net and a denoising enhancement network Enhance-Net for the reflectance component, and Decompose-Net decomposes the low-illumination image into its reflectance and illumination components.
Constructing and training the improved thermal infrared image super-resolution enhancement network Retinex-CNN achieves the purpose of image enhancement, and specifically comprises constructing and training the decomposition network Decompose-Net and constructing the denoising enhancement network Enhance-Net for the reflectance component.
The constructing and training of the decomposition network Decompose-Net specifically may include:
S11: construct a twin decomposition network Decompose-Net of depth 5, composed of several convolutional layers and activation layers.
S12: input the normally illuminated face image S_normal and the low-illumination face image S_low into Decompose-Net respectively, and decompose them under the guidance of Retinex theory to obtain the corresponding reflectance components R_normal, R_low and illumination components I_normal, I_low.
In this embodiment, following Retinex theory, the original image is the product of a reflectance component R and an illumination component I. The reflectance component can be regarded as a high-frequency component that better describes the intrinsic information and edge characteristics of the image, while the illumination component is a low-frequency component affected by lighting. Under low illumination the illumination component degrades the image feature representation, so the two components are separated and the reflectance component is recovered from the original image, removing the influence of uneven illumination and achieving the image enhancement effect.
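For illustration only, a minimal PyTorch sketch of such a twin decomposition network is shown below. The layer width, kernel size, single-channel input and sigmoid output are assumptions, since the embodiment only specifies a depth-5 stack of convolutional and activation layers shared between the normal-light and low-light inputs.

    import torch
    import torch.nn as nn

    class DecomposeNet(nn.Module):
        # Depth-5 convolutional decomposition network; the same weights are
        # applied to the normal-light and the low-light image (twin usage).
        def __init__(self, channels=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 2, 3, padding=1),  # two output maps: R and I
            )

        def forward(self, s):
            out = torch.sigmoid(self.features(s))
            return out[:, 0:1], out[:, 1:2]  # reflectance map, illumination map

    # Twin usage: one shared network decomposes both exposures.
    net = DecomposeNet()
    s_normal = torch.rand(1, 1, 128, 128)  # normally illuminated face (placeholder)
    s_low = torch.rand(1, 1, 128, 128)     # low-illumination face (placeholder)
    r_normal, i_normal = net(s_normal)
    r_low, i_low = net(s_low)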
S13: from the results of step S12, calculate the three components of the loss function of Decompose-Net in the training stage: the reconstruction loss component L_recon, the reflectance consistency component L_ir, and the illumination smoothness loss component L_is.
The reconstruction loss component L_recon is calculated as:
$L_{recon} = \sum_{i=low,normal}\sum_{j=low,normal} \lambda_{ij}\,\lVert R_i \cdot I_j - S_j \rVert_1$
This component measures the error produced when the reflectance components R_low and R_normal learned from S_low and S_normal are multiplied by the corresponding illumination components to reconstruct the input images.
The reflectance consistency component L_ir is calculated as:
$L_{ir} = \lVert R_{low} - R_{normal} \rVert_1$
The illumination smoothness loss component L_is uses the smoothness assumption on the illumination image and combines total-variation minimization with a gradient weight on the total variation of the image, reducing the loss of image texture and boundaries as much as possible while ensuring that the recovered image is smooth. It is calculated as:
$L_{is} = \sum_{i=low,normal} \lVert \nabla I_i \cdot \exp(-\lambda_g\,\nabla R_i) \rVert_1$
where $\nabla I_i$ denotes the gradient of the illumination component, $\nabla R_i$ denotes the gradient of the reflectance component, and $\lambda_g$ is a preset weight parameter.
S14: obtain the loss function of Decompose-Net from the preset weight parameters and the three loss components computed in the training stage. The loss function of Decompose-Net is the weighted sum of the three components:
$L = \lambda_{ij} L_{recon} + \lambda_{ir} L_{ir} + \lambda_{is} L_{is}$
where $\lambda_{ij}$ is the weight of the reconstruction loss component L_recon, $\lambda_{ir}$ is the weight of the reflectance consistency component L_ir, and $\lambda_{is}$ is the weight of the illumination smoothness loss component L_is.
The network is then trained with this loss function to obtain the trained decomposition network Decompose-Net.
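As an illustration, a minimal PyTorch sketch of the three loss terms is given below. The L1 norms are averaged over pixels, the gradients are simple finite differences, and a single reconstruction weight is used for all (i, j) pairs; these choices and the default weight values are assumptions not fixed by the embodiment.

    import torch
    import torch.nn.functional as F

    def grad_xy(t):
        # Finite-difference gradient magnitudes along x and y (padded to input size).
        dx = torch.abs(t[:, :, :, 1:] - t[:, :, :, :-1])
        dy = torch.abs(t[:, :, 1:, :] - t[:, :, :-1, :])
        return F.pad(dx, (0, 1, 0, 0)), F.pad(dy, (0, 0, 0, 1))

    def decompose_loss(r_low, i_low, s_low, r_normal, i_normal, s_normal,
                       lam_recon=1.0, lam_ir=0.01, lam_is=0.1, lam_g=10.0):
        comps = {"low": (r_low, i_low, s_low),
                 "normal": (r_normal, i_normal, s_normal)}
        # Reconstruction loss: every reflectance paired with every illumination.
        l_recon = sum(torch.mean(torch.abs(comps[i][0] * comps[j][1] - comps[j][2]))
                      for i in comps for j in comps)
        # Reflectance consistency loss.
        l_ir = torch.mean(torch.abs(r_low - r_normal))
        # Illumination smoothness loss, down-weighted where the reflectance has edges.
        l_is = 0.0
        for r, i, _ in comps.values():
            ix, iy = grad_xy(i)
            rx, ry = grad_xy(r)
            l_is = l_is + torch.mean(ix * torch.exp(-lam_g * rx)
                                     + iy * torch.exp(-lam_g * ry))
        return lam_recon * l_recon + lam_ir * l_ir + lam_is * l_is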
Constructing the denoising enhancement network Enhance-Net for the reflectance component specifically means: construct, for the low-illumination reflectance component, a denoising enhancement network Enhance-Net composed of BM3D denoising and fractional-order differential enhancement.
The thermal infrared image is processed with the trained improved Retinex-CNN as follows: the thermal infrared image is input into the trained decomposition network Decompose-Net and decomposed to obtain the low-illumination reflectance component R_low; R_low is then input into the Enhance-Net of the reflectance component, where it is denoised with the BM3D algorithm and the denoised output is enhanced by fractional-order differentiation. The BM3D algorithm strengthens the high-frequency edge information of the image and nonlinearly preserves the medium- and low-frequency information of texture details and smooth regions. The generated final reflectance component R_final enhances the edge information of the image and removes the influence of the illumination component, thereby enhancing image details; the enhanced result R_final is the processed thermal infrared image used for the feature extraction of step S2.
The principle of denoising the low-illumination reflectance component with the BM3D algorithm is as follows: the image is first divided into small patches; after a reference patch is selected, patches similar to the reference patch are collected and stacked into a 3D block. All similar blocks are then 3D-transformed, the transformed coefficients are shrunk by thresholding (which removes the noise), and the 3D inverse transform is applied. Finally, all 3D blocks are aggregated back into the image by weighted averaging.
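A sketch of this Enhance-Net stage is given below, assuming the third-party bm3d Python package for the denoising step and a simple 3x3 fractional-order differential mask built from the first two Grünwald-Letnikov coefficients; the noise level, the fractional order v and the mask construction are illustrative choices, not values taken from the embodiment.

    import numpy as np
    from scipy.ndimage import convolve
    import bm3d  # third-party package (pip install bm3d), assumed available

    def fractional_diff_enhance(img, v=0.5):
        # 3x3 mask with centre coefficient 1 and -v spread over the 8 neighbours:
        # for 0 < v < 1 it boosts high-frequency detail while retaining roughly a
        # (1 - v) share of the low-frequency content (an illustrative operator).
        mask = np.full((3, 3), -v / 8.0, dtype=np.float64)
        mask[1, 1] = 1.0
        out = convolve(img.astype(np.float64), mask, mode="reflect")
        return np.clip(out, 0.0, 1.0)

    def enhance_net(r_low, sigma=0.1, v=0.5):
        # Enhance-Net stand-in: BM3D denoising of the low-illumination reflectance
        # component followed by fractional-order differential enhancement.
        denoised = bm3d.bm3d(r_low, sigma_psd=sigma)
        return fractional_diff_enhance(denoised, v=v)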
S2: referring to Fig. 2 of the specification, based on the prior information obtained from the near infrared, local features are extracted from the near-infrared and processed thermal infrared images with different feature algorithms, the extracted features are fused by full-feature fusion, the fused features are reduced in dimension, and the dimension-reduced features are classified and recognized. This specifically comprises the following steps:
S21: the prior information of the near-infrared image is used, that is, the features extracted from the near-infrared image serve as prior information, and recognition performance is improved by extracting, according to an invariant feature extraction algorithm, features that are insensitive to the difference between the two modalities. The three feature algorithms SIFT, LBP and HOG are selected, and local features are extracted from the near-infrared image and the processed thermal infrared image respectively. SIFT, LBP and HOG are conventional techniques in the field; the extraction process of the SIFT feature algorithm is shown in Fig. 3 of the specification.
Local features are features extracted from local regions of the image, including edges, corners, lines, curves, regions with special attributes, and so on. Because of the different characteristics of SIFT, LBP and HOG, the extracted local features differ: for example, SIFT extracts prominent points that are stable under illumination and noise, such as corner points, edge points and dark regions, while LBP extracts local texture. A sketch of the three extractors is given below.
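A minimal sketch of the three extractors with OpenCV and scikit-image follows; the SIFT descriptor pooling, the LBP radius and points, and the HOG cell sizes are illustrative defaults rather than values from the embodiment.

    import cv2
    import numpy as np
    from skimage.feature import hog, local_binary_pattern

    def extract_local_features(img_gray):
        # img_gray: uint8 grayscale face image (near-infrared or enhanced thermal).
        sift = cv2.SIFT_create()
        _, sift_desc = sift.detectAndCompute(img_gray, None)   # N x 128 descriptors
        sift_vec = (sift_desc.mean(axis=0) if sift_desc is not None
                    else np.zeros(128))                        # pool to a fixed length
        lbp = local_binary_pattern(img_gray, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        hog_vec = hog(img_gray, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2), feature_vector=True)
        return sift_vec, lbp_hist, hog_vec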
S22: using a weighted continuous feature combination algorithm, two or more good features extracted from the near-infrared image and from the processed thermal infrared image are fused respectively to generate a new feature, which strengthens the ability to extract features from the thermal infrared image. A good feature here is specifically a feature that is independent of the other features and carries information. A weighted-concatenation sketch follows.
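The weighted continuous feature combination algorithm is not spelled out above; as an assumption, the sketch below simply L2-normalizes each selected feature vector and concatenates the weighted blocks into one continuous vector.

    import numpy as np

    def weighted_feature_fusion(features, weights):
        # features: list of 1-D feature vectors (e.g. SIFT, LBP, HOG of one image)
        # weights:  one scalar weight per feature vector
        fused = []
        for f, w in zip(features, weights):
            f = np.asarray(f, dtype=np.float64)
            f = f / (np.linalg.norm(f) + 1e-12)   # normalize each block
            fused.append(w * f)                   # apply the block weight
        return np.concatenate(fused)              # continuous (concatenated) feature

    # Example: new_feat = weighted_feature_fusion([sift_vec, lbp_hist, hog_vec],
    #                                             [0.4, 0.3, 0.3])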
S23: the projection matrices from the respective views into a common subspace are found using multi-view smooth discriminant analysis (MSDA).
The objective function of the MSDA method maximizes the between-class scatter of the projected features while minimizing their within-class scatter, subject to a discrete Laplacian smoothness penalty. Here S_w denotes the within-class scatter matrix, S_b denotes the between-class scatter matrix, J(α) denotes the discrete Laplacian regularization function, and λ is the parameter controlling smoothness, with 0 ≤ λ ≤ 1.
The Laplacian penalty J measures the smoothness of a function defined on the ROI. For the application to face images, the discrete Laplacian smoothing function is selected. According to the discrete Laplacian regularization method,
$J(\alpha) = \lVert \Lambda \alpha \rVert^{2} = \alpha^{T} \Lambda^{T} \Lambda\, \alpha$
where Λ denotes the eigenvalue diagonal matrix. Finally, the objective function of the MSDA method can be solved with the eigenvalues obtained by a generalized eigendecomposition.
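Because the full multi-view formulation is not reproduced above, the sketch below shows only a simplified single-view stand-in: a regularized discriminant projection solved by a generalized eigendecomposition with SciPy, where the smoothness penalty matrix plays the role of Λ^T Λ. The penalty construction, the regularization weight and the output dimensionality are assumptions.

    import numpy as np
    from scipy.linalg import eigh

    def smooth_discriminant_projection(X, y, penalty, lam=0.1, n_dims=32):
        # X: (n_samples, n_features) fused features; y: class labels;
        # penalty: (n_features, n_features) positive semi-definite smoothness term.
        d = X.shape[1]
        mean_all = X.mean(axis=0)
        s_w = np.zeros((d, d))
        s_b = np.zeros((d, d))
        for c in np.unique(y):
            Xc = X[y == c]
            mean_c = Xc.mean(axis=0)
            s_w += (Xc - mean_c).T @ (Xc - mean_c)       # within-class scatter
            diff = (mean_c - mean_all)[:, None]
            s_b += Xc.shape[0] * (diff @ diff.T)         # between-class scatter
        # Generalized eigenproblem: S_b a = eta (S_w + lam * penalty) a.
        evals, evecs = eigh(s_b, s_w + lam * penalty + 1e-8 * np.eye(d))
        order = np.argsort(evals)[::-1]
        return evecs[:, order[:n_dims]]                  # columns form the projection

    # Dimension reduction: Z = X @ W projects the fused features into the subspace.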
S24: the fused new features of the near-infrared and thermal infrared images are projected into the subspace simultaneously for dimension reduction; a better classification effect can be achieved after the features are reduced in dimension.
S25: classification and recognition are performed with the NN classifier.
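Reading the NN classifier as a nearest-neighbour classifier over the projected features (an interpretation, since the abbreviation is not expanded above), a minimal scikit-learn sketch:

    from sklearn.neighbors import KNeighborsClassifier

    def fit_and_identify(z_gallery, labels_gallery, z_probe):
        # z_gallery / z_probe: dimension-reduced gallery and probe features.
        clf = KNeighborsClassifier(n_neighbors=1)   # 1-NN matching against the gallery
        clf.fit(z_gallery, labels_gallery)
        return clf.predict(z_probe)                 # predicted identities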
S26: face recognition is performed.
In summary, after reading the present disclosure, those skilled in the art can make various other modifications, without creative effort, according to the technical solutions and concepts of the invention, and such modifications fall within the protection scope of the invention.

Claims (9)

1. A distance adaptive thermal infrared face recognition method is characterized in that: the method comprises the following steps:
S1: constructing and training an improved thermal infrared image super-resolution enhancement network Retinex-CNN, and processing the thermal infrared image with the trained network; the improved Retinex-CNN comprises a decomposition network Decompose-Net and a denoising enhancement network Enhance-Net for the reflectance component; Decompose-Net decomposes the low-illumination image into its reflectance and illumination components;
S2: based on prior information obtained from the near infrared, extracting local features from the near-infrared image and the processed thermal infrared image with different feature algorithms, performing full-feature fusion on the extracted features, reducing the dimension of the fused features, and then classifying and recognizing the dimension-reduced features.
2. The distance adaptive thermal infrared face recognition method according to claim 1, characterized in that: in step S1, constructing and training the improved thermal infrared image super-resolution enhancement network Retinex-CNN specifically comprises: constructing and training the decomposition network Decompose-Net, and constructing the denoising enhancement network Enhance-Net for the reflectance component.
3. The distance adaptive thermal infrared face recognition method according to claim 2, characterized in that: constructing and training the decomposition network Decompose-Net specifically comprises the following steps:
S11: constructing a twin decomposition network Decompose-Net of depth 5, composed of several convolutional layers and activation layers;
S12: inputting the normally illuminated face image S_normal and the low-illumination face image S_low into Decompose-Net respectively, and decomposing them under the guidance of Retinex theory to obtain the corresponding reflectance components R_normal, R_low and illumination components I_normal, I_low;
S13: from the results of step S12, calculating the three components of the loss function of Decompose-Net in the training stage: the reconstruction loss component L_recon, the reflectance consistency component L_ir, and the illumination smoothness loss component L_is;
S14: obtaining the loss function of Decompose-Net from the preset weight parameters and the three loss components computed in the training stage, and training with this loss function to obtain the trained decomposition network Decompose-Net.
4. The distance adaptive thermal infrared face recognition method according to claim 3, characterized in that: in step S13, the reconstruction loss component L_recon is calculated as:
$L_{recon} = \sum_{i=low,normal}\sum_{j=low,normal} \lambda_{ij}\,\lVert R_i \cdot I_j - S_j \rVert_1$
the reflectance consistency component L_ir is calculated as:
$L_{ir} = \lVert R_{low} - R_{normal} \rVert_1$
and the illumination smoothness loss component L_is is calculated as:
$L_{is} = \sum_{i=low,normal} \lVert \nabla I_i \cdot \exp(-\lambda_g\,\nabla R_i) \rVert_1$
where $\nabla I_i$ denotes the gradient of the illumination component, $\nabla R_i$ denotes the gradient of the reflectance component, and $\lambda_g$ is a preset weight parameter.
5. The distance adaptive thermal infrared face recognition method according to claim 4, characterized in that: in step S14, the loss function of the decomposition network Decompose-Net is the weighted sum of the three loss components:
$L = \lambda_{ij} L_{recon} + \lambda_{ir} L_{ir} + \lambda_{is} L_{is}$
where $\lambda_{ij}$ is the weight of the reconstruction loss component L_recon, $\lambda_{ir}$ is the weight of the reflectance consistency component L_ir, and $\lambda_{is}$ is the weight of the illumination smoothness loss component L_is.
6. The distance adaptive thermal infrared face recognition method according to claim 2, characterized in that: in step S1, constructing the denoising enhancement network Enhance-Net for the reflectance component specifically means: constructing, for the low-illumination reflectance component, a denoising enhancement network Enhance-Net composed of BM3D denoising and fractional-order differential enhancement.
7. The distance adaptive thermal infrared face recognition method according to claim 1, characterized in that: in step S1, processing the thermal infrared image with the trained improved thermal infrared image super-resolution enhancement network Retinex-CNN specifically means: inputting the thermal infrared image into the trained decomposition network Decompose-Net, decomposing it to obtain the low-illumination reflectance component R_low, and inputting R_low into the denoising enhancement network Enhance-Net of the reflectance component to generate the final reflectance component R_final, thereby obtaining the processed thermal infrared image.
8. The distance adaptive thermal infrared face recognition method according to claim 1, characterized in that: step S2 specifically comprises the following steps:
S21: selecting the three feature algorithms SIFT, LBP and HOG, and extracting local features from the near-infrared image and the processed thermal infrared image respectively;
S22: using a weighted continuous feature combination algorithm, fusing two or more good features extracted from the near-infrared image and from the processed thermal infrared image respectively to generate a new feature;
S23: finding the projection matrix from each view to the common subspace with the multi-view smooth discriminant analysis subspace method;
S24: projecting the fused new features of the near-infrared and thermal infrared images into the subspace simultaneously and performing dimension reduction;
S25: classifying and recognizing with an NN classifier;
S26: performing face recognition.
9. The distance adaptive thermal infrared face recognition method according to claim 8, characterized in that: the good features in step S22 are specifically features that are independent of the other features and carry information.
CN202210253355.6A 2022-03-15 2022-03-15 Distance self-adaptive thermal infrared face recognition method Active CN114821703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210253355.6A CN114821703B (en) 2022-03-15 2022-03-15 Distance self-adaptive thermal infrared face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210253355.6A CN114821703B (en) 2022-03-15 2022-03-15 Distance self-adaptive thermal infrared face recognition method

Publications (2)

Publication Number Publication Date
CN114821703A true CN114821703A (en) 2022-07-29
CN114821703B CN114821703B (en) 2023-07-28

Family

ID=82528546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210253355.6A Active CN114821703B (en) 2022-03-15 2022-03-15 Distance self-adaptive thermal infrared face recognition method

Country Status (1)

Country Link
CN (1) CN114821703B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116749A (en) * 2013-03-12 2013-05-22 上海洪剑智能科技有限公司 Near-infrared face identification method based on self-built image library
CN106845450A (en) * 2017-02-22 2017-06-13 武汉科技大学 Dark surrounds face identification method based near infrared imaging Yu deep learning
CN109243030A (en) * 2018-09-13 2019-01-18 浙江工业大学 A kind of control method and system of night contactless access control system
EP3751458A1 (en) * 2019-06-14 2020-12-16 Sobolt B.V. Method for thermographic analysis using a hybrid convolutional neural network
CN110378234A (en) * 2019-06-20 2019-10-25 合肥英威晟光电科技有限公司 Convolutional neural networks thermal imagery face identification method and system based on TensorFlow building
CN111340698A (en) * 2020-02-17 2020-06-26 北京航空航天大学 Multispectral image spectral resolution enhancement method based on neural network
CN112287839A (en) * 2020-10-29 2021-01-29 广西科技大学 SSD infrared image pedestrian detection method based on transfer learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIPING YANG et al.: "Adversarial reconstruction CNN for illumination robust frontal face image recovery and recognition", vol. 15, no. 2, pages 18-33 *
马乐; 陈峰; 李敏: "Infrared image super-resolution reconstruction based on an improved generative adversarial network", vol. 50, no. 02, pages 246-251 *

Also Published As

Publication number Publication date
CN114821703B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
Wang et al. An experimental-based review of image enhancement and image restoration methods for underwater imaging
Zhao et al. Detail-preserving image denoising via adaptive clustering and progressive PCA thresholding
CN108510499B (en) Image threshold segmentation method and device based on fuzzy set and Otsu
Bhandari et al. A novel fuzzy clustering-based histogram model for image contrast enhancement
Xiao et al. Fast closed-form matting using a hierarchical data structure
Malladi et al. Image denoising using superpixel-based PCA
Chakraborty PRNU-based image manipulation localization with discriminative random fields
Hu et al. Single image dehazing algorithm based on sky segmentation and optimal transmission maps
Zhang Two-step non-local means method for image denoising
Kumar et al. Automatic image segmentation using wavelets
Liu et al. Deep neural network with deformable convolution and side window convolution for image denoising
Shahdoosti et al. A new compressive sensing based image denoising method using block-matching and sparse representations over learned dictionaries
Liu et al. Face hallucination via multiple feature learning with hierarchical structure
Huo et al. Two-stage image denoising algorithm based on noise localization
Shirai et al. Character shape restoration of binarized historical documents by smoothing via geodesic morphology
Agam et al. Degraded document image enhancement
CN114821703B (en) Distance self-adaptive thermal infrared face recognition method
Li et al. Super‐Resolution Reconstruction of Underwater Image Based on Image Sequence Generative Adversarial Network
Vijaya et al. A simple algorithm for image denoising based on ms segmentation
Drira et al. Mean-Shift segmentation and PDE-based nonlinear diffusion: toward a common variational framework for foreground/background document image segmentation
Zhao et al. Face Restoration Based on GANs and NST
Yu et al. EPLL image denoising with multi-feature dictionaries
Patel et al. Review of digital image forgery detection
Mičušík et al. Steerable semi-automatic segmentation of textured images
Tseng et al. Maximum-a-posteriori estimation for global spatial coherence recovery based on matting Laplacian

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant