CN110084110B - Near-infrared face image recognition method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110084110B
Authority
CN
China
Prior art keywords
depth feature
face image
dimensional depth
image
feature representation
Prior art date
Legal status
Active
Application number
CN201910208630.0A
Other languages
Chinese (zh)
Other versions
CN110084110A (en)
Inventor
彭春蕾 (Peng Chunlei)
王楠楠 (Wang Nannan)
高新波 (Gao Xinbo)
李洁 (Li Jie)
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201910208630.0A priority Critical patent/CN110084110B/en
Publication of CN110084110A publication Critical patent/CN110084110A/en
Application granted granted Critical
Publication of CN110084110B publication Critical patent/CN110084110B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/168: Feature extraction; Face representation


Abstract

The invention discloses a near-infrared face image recognition method and device, electronic equipment and a storage medium. The recognition method comprises the following steps: obtaining a first high-dimensional depth feature representation from the near-infrared face image to be tested; correspondingly obtaining M second high-dimensional depth feature representations from M visible light face images; and performing face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations. Because the near-infrared and visible light face images are represented by the first and second high-dimensional depth feature representations respectively, and these representations have stronger representation capability than the feature representations adopted by existing methods, the recognition accuracy for near-infrared face images is improved.

Description

Near-infrared face image recognition method and device, electronic equipment and storage medium
Technical Field
The invention belongs to the technical field of artificial intelligence and pattern recognition, and particularly relates to a near-infrared face image recognition method and device, electronic equipment and a storage medium.
Background
Because near-infrared imaging equipment is not affected by illumination or night-time conditions, recognition between near-infrared and visible light face images has wide application value in the field of public safety. For example, near-infrared imaging equipment can be used in security monitoring to acquire face images free of illumination interference; the acquired near-infrared face image can then be compared with the visible light face images in an identity database to confirm the identity of a target person. However, because the imaging mechanisms of near-infrared and visible light face images differ, the collected images differ greatly in texture, color and the like, which makes near-infrared to visible light face recognition difficult. As more and more monitoring cameras support near-infrared imaging, research on near-infrared and visible light face recognition is of great significance.
Most existing near-infrared and visible light face recognition methods match directly in a feature space. The basic idea is as follows: for the near-infrared and visible light face images, first extract robust face representation features from each; then design a feature distance metric to measure the distance between the extracted representation features; finally, take the near-infrared and visible light face image pair with the minimum distance as the recognition result.
However, such methods only compare the near-infrared and visible light images directly in the feature space and use the initial comparison distance as the recognition criterion, which limits the recognition accuracy for near-infrared face images.
Disclosure of Invention
In order to solve the above problems in the prior art, the invention provides a near-infrared face image recognition method, a near-infrared face image recognition device, an electronic device and a storage medium. The technical problem to be solved by the invention is realized by the following technical scheme:
a near-infrared face image recognition method comprises the following steps:
obtaining a first high-dimensional depth feature representation according to the near-infrared face image to be tested;
correspondingly obtaining M second high-dimensional depth feature representations according to the M visible light face images;
and carrying out face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations.
In an embodiment of the present invention, obtaining a first high-dimensional depth feature representation according to a near-infrared face image to be tested includes:
dividing the near-infrared face image to be tested into a plurality of first image blocks with the same size, wherein two adjacent first image blocks along a first direction are mutually overlapped according to a first preset size;
based on the first image block, according to the trained model, obtaining a depth feature representation of the first image block;
and splicing the depth feature representations of all the first image blocks in sequence to obtain the first high-dimensional depth feature representation.
In an embodiment of the present invention, obtaining a depth feature representation of the first image block according to a trained model based on the first image block includes:
and inputting each first image block into the trained convolutional neural network model to correspondingly obtain the depth characteristic representation of the first image block.
In an embodiment of the present invention, obtaining M second high-dimensional depth feature representations correspondingly according to M visible light face images includes:
dividing each visible light face image into a plurality of second image blocks with the same size, wherein two adjacent second image blocks along a first direction are mutually overlapped according to a second preset size;
based on the second image block, according to the trained model, obtaining the depth feature representation of the second image block;
and splicing the depth feature representations of all the second image blocks of each visible light face image in sequence to obtain a second high-dimensional depth feature representation.
In an embodiment of the present invention, performing face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and M second high-dimensional depth feature representations includes:
calculating Euclidean distances of the first high-dimensional depth feature representation and each second high-dimensional depth feature representation to obtain an initial comparison distance matrix;
obtaining a first reordering weight and a second reordering weight according to the initial comparison distance matrix;
updating the initial comparison distance matrix according to the first reordering weight and the second reordering weight until a reordered comparison distance matrix is obtained;
and selecting the minimum matrix element value from the reordered comparison distance matrix, obtaining a final visible light face image according to the matrix element value, and finishing the identification of the near-infrared face image to be tested.
In an embodiment of the present invention, obtaining the first re-ordering weight and the second re-ordering weight according to the initial comparison distance matrix includes:
selecting K matrix elements from the initial comparison distance matrix according to the sequence from small to large;
based on the K matrix elements, utilizing a first reordering weight calculation formula to obtain a first reordering weight;
and calculating a formula by using a second reordering weight based on the K matrix elements to obtain a second reordering weight.
In an embodiment of the present invention, updating the initial comparison distance matrix according to the first reordering weight and the second reordering weight until a reordered comparison distance matrix is obtained includes:
based on the first reordering weight and the second reordering weight, utilizing an Euclidean distance calculation formula to obtain an updated Euclidean distance;
and updating the initial comparison distance matrix according to the updated Euclidean distance until a reordered comparison distance matrix is obtained.
An embodiment of the present invention further provides a near-infrared face image recognition apparatus, including:
the first processing module is used for obtaining a first high-dimensional depth feature representation according to the near-infrared face image to be tested;
the second processing module is used for correspondingly obtaining M second high-dimensional depth feature representations according to the M visible light face images;
and the recognition module is used for carrying out face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations.
An embodiment of the present invention further provides an electronic device, including a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of the above embodiments when executing the program stored in the memory.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the method steps of any of the above embodiments.
The invention has the beneficial effects that:
according to the method, the near-infrared face image and the visible light face image are respectively represented by the first high-dimensional depth feature representation and the second high-dimensional depth feature representation, and the first high-dimensional depth feature representation and the second high-dimensional depth feature representation have stronger representation capability than feature representations adopted by the existing method, so that the identification accuracy of the near-infrared face image is improved.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a schematic flow chart of a near-infrared face image recognition method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another near-infrared face image recognition method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a near-infrared face image recognition apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the specification, reference to the description of the term "one embodiment", "some embodiments", "an example", "a specific example", or "some examples", etc., means that a particular feature or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, various embodiments or examples described in this specification can be combined and combined by those skilled in the art.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a near-infrared face image recognition method according to an embodiment of the present invention. As shown in fig. 1, the identification method of the present embodiment includes:
step 1, obtaining a first high-dimensional depth feature representation according to a near-infrared face image to be tested;
in step 1, the near-infrared face image to be tested is the image to be identified. The first high-dimensional depth feature representation is obtained by dividing this image into several parts, processing each part with a trained model to obtain a per-part result, and finally splicing all the results together. The first high-dimensional depth feature representation of this embodiment can therefore represent the features of the near-infrared face image to be tested more accurately and has stronger representation capability.
Step 2, correspondingly obtaining M second high-dimensional depth feature representations according to M visible light face images, wherein M is a positive integer larger than zero;
in step 2, the M visible light face images form a data set, which may be an identity database used for face comparison; the database may differ according to the resources available to the user and is not specifically limited here. The near-infrared face image to be tested is identified by comparing it against this data set: if the data set contains the same person as appears in the near-infrared face image to be tested, the recognition method of this embodiment can determine that person's identity.
The second high-dimensional depth feature representation is obtained by dividing the visible light face image into a plurality of parts, processing each part by using the trained model to respectively obtain processing results, and finally splicing all the processing results, so that the second high-dimensional depth feature representation of the embodiment can more accurately represent the features of the visible light face image and has stronger representation capability.
The trained models in the steps 1 and 2 can be pre-trained convolutional neural network models, and can also be intelligent algorithm models such as other neural network models.
And 3, carrying out face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations.
The method compares the obtained first high-dimensional depth feature representation of the near-infrared face image to be tested with the second high-dimensional depth feature representations corresponding to the M visible light face images, for example by calculating the Euclidean distance between the first high-dimensional depth feature representation and each second high-dimensional depth feature representation and then selecting the second representation with the minimum Euclidean distance; the visible light face image corresponding to that representation is then obtained from the M visible light face images, completing the identification of the near-infrared face image to be tested.
The first high-dimensional depth feature representation and the second high-dimensional depth feature representation both have strong characterization capability, so that the identification accuracy can be improved.
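As a minimal sketch of this comparison step (the feature vectors, their dimensionality, and the helper names are illustrative, not taken from the patent), matching reduces to computing the Euclidean distance from the probe representation to each gallery representation and taking the minimum:

```python
import math

def euclidean(a, b):
    # Euclidean (L2) distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(probe_feat, gallery_feats):
    # probe_feat: first high-dimensional depth feature representation
    # gallery_feats: M second high-dimensional depth feature representations
    dists = [euclidean(probe_feat, g) for g in gallery_feats]
    best = min(range(len(dists)), key=dists.__getitem__)  # index of minimum distance
    return best, dists

# toy gallery of 3 feature vectors; the probe is closest to index 1
gallery = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
probe = [0.9, 1.1]
idx, dists = recognize(probe, gallery)
```

With real data, the probe and gallery entries would be the spliced high-dimensional depth feature representations produced in steps 1 and 2.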
Referring to fig. 2, in an embodiment of the present invention, step 1 may specifically include step 1.1, step 1.2 and step 1.3, wherein,
1.1, dividing the near-infrared face image to be tested into a plurality of first image blocks with the same size, wherein two adjacent first image blocks along a first direction are mutually overlapped according to a first preset size;
specifically, the near-infrared face image to be tested in step 1 is divided into a plurality of first image blocks, all the first image blocks have the same size, and meanwhile, two adjacent first image blocks along the first direction are overlapped with each other according to a first preset size.
For example, the first image blocks are each 3 × 3 in size.
Preferably, the first preset size is 50% of one first image block.
Step 1.2, based on the first image block, according to the trained model, obtaining a depth feature representation of the first image block;
all the first image blocks obtained in the step 1.1 are respectively input into the trained model, and the output of the trained model is the depth feature representation corresponding to each first image block, that is, each first image block corresponds to one depth feature representation, so that the characterization capability of the depth feature representation obtained by the first image blocks in the step 1.1 is stronger.
For example, the trained model may be a pre-trained convolutional neural network model, or may be an intelligent algorithm model such as other neural network models.
For example, the initial convolutional neural network model consists of seven convolutional units, each containing a convolutional layer, a normalization layer and an activation layer. The convolutional neural network model is constructed using the method of "F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 815-823". The data set used to train the initial convolutional neural network model consists of 400 million image blocks of size 64×64, all cropped from images of natural scenes such as the Statue of Liberty in New York, USA, and Yosemite National Park, obtained following "M. Brown, G. Hua, and S. Winder. Discriminative learning of local image descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1): 43-57, 2011". The initial convolutional neural network model is trained on this data set to obtain the trained convolutional neural network model.
And inputting each first image block into the trained convolutional neural network model, taking a result output by the convolutional neural network model as a depth feature representation, and correspondingly obtaining a depth feature representation for each first image block.
And step 1.3, splicing the depth feature representations of all the first image blocks in sequence to obtain the first high-dimensional depth feature representation.
The depth feature representations of all the first image blocks from step 1.2 are directly spliced in order; the result of this splicing is the first high-dimensional depth feature representation. The order may be left to right and top to bottom as the near-infrared face image to be tested is viewed upright; of course, the splicing may also follow other orders, which is not specifically limited here.
In this embodiment, the near-infrared face image to be tested is divided into several first image blocks, with adjacent blocks partially overlapping, so that each feature of the image can be represented more accurately by the depth feature representations of the blocks. Splicing these depth feature representations then yields a first high-dimensional depth feature representation that characterizes the near-infrared face image to be tested more accurately and has stronger representation capability.
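The block division and splicing of steps 1.1 to 1.3 can be sketched as follows; the 8×8 toy image, the block size, the 50% overlap, and the flatten-the-block stand-in for the trained CNN are all illustrative assumptions, not the patent's actual model:

```python
def split_into_blocks(img, block, overlap):
    # img: 2-D list of pixel rows; adjacent blocks along each direction
    # overlap by `overlap` pixels (stride = block - overlap)
    stride = block - overlap
    h, w = len(img), len(img[0])
    blocks = []
    for r in range(0, h - block + 1, stride):
        for c in range(0, w - block + 1, stride):
            blocks.append([row[c:c + block] for row in img[r:r + block]])
    return blocks

def block_feature(blk):
    # stand-in for the trained CNN: flatten the block into a vector
    return [float(p) for row in blk for p in row]

def high_dim_representation(img, block=4, overlap=2):
    # left-to-right, top-to-bottom splicing of the per-block depth features
    feats = []
    for blk in split_into_blocks(img, block, overlap):
        feats.extend(block_feature(blk))
    return feats

img = [[r * 8 + c for c in range(8)] for r in range(8)]
rep = high_dim_representation(img)  # 9 overlapping 4x4 blocks, 16 values each
```

A 50% overlap (overlap = block // 2) matches the preferred preset size described above; the same routine applies unchanged to the second image blocks of each visible light face image.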
In one embodiment of the invention, step 2 may specifically comprise step 2.1, step 2.2 and step 2.3, wherein,
step 2.1, dividing each visible light face image into a plurality of second image blocks with the same size, wherein two second image blocks adjacent to each other along the first direction are mutually overlapped according to a second preset size;
specifically, each visible light face image in step 2 is processed as follows: the image is divided into several second image blocks of the same size, and two second image blocks adjacent along the first direction overlap each other by a second preset size. Overlapping adjacent second image blocks lets them share feature information, which strengthens the representation capability of the second high-dimensional depth feature representation. The first direction may be the direction from left to right, or from top to bottom, as the visible light face image is viewed upright.
For example, the second image blocks are each 3 × 3 in size.
Preferably, the second preset size is 50% of one second image block.
2.2, based on the second image block, according to the trained model, obtaining a depth feature representation of the second image block;
all the second image blocks obtained in the step 2.1 are respectively input into the trained model, and the output of the trained model is the depth feature representation corresponding to each second image block, that is, each second image block corresponds to one depth feature representation, so that the depth feature representation obtained by the second image blocks in the step 2.1 has stronger representation capability.
Each visible light face image corresponds to a plurality of second image blocks, namely each visible light face image can correspondingly obtain a plurality of depth feature representations according to the plurality of divided second image blocks.
For example, the trained model may be a pre-trained convolutional neural network model, or may be an intelligent algorithm model such as other neural network models, and the method is shown in step 1.2 and is not described herein again.
Each second image block is input into the trained convolutional neural network model, and the model's output is taken as a depth feature representation; one depth feature representation is obtained for each second image block, until the depth feature representations corresponding to every visible light face image are obtained.
This embodiment does not limit the specific model used: any model capable of producing a depth feature representation can be applied to the method of this embodiment, and those skilled in the art can readily process the images with other trained models. Similarly, the structure of the convolutional neural network model illustrated in this embodiment is not unique, and its specific structure is not limited here.
And 2.3, splicing the depth feature representations of all the second image blocks of each visible light face image in sequence to obtain a second high-dimensional depth feature representation.
The depth feature representations of all the second image blocks of each visible light face image from step 2.2 are directly spliced in order; the result is the second high-dimensional depth feature representation of that visible light face image. Each visible light face image corresponds to one second high-dimensional depth feature representation, so the M visible light face images correspond to M second high-dimensional depth feature representations. The order may be left to right and top to bottom as the visible light face image is viewed upright; of course, other orders may also be used, which is not specifically limited here.
In this embodiment, each visible light face image is divided into several second image blocks, with adjacent blocks partially overlapping, so that each feature of the visible light face image can be represented more accurately by the depth feature representations of the blocks. Splicing these depth feature representations then yields a second high-dimensional depth feature representation that characterizes the visible light face image more accurately and has stronger representation capability.
In one embodiment of the invention, step 3 may specifically comprise step 3.1, step 3.2, step 3.3 and step 3.4, wherein,
step 3.1, calculating Euclidean distances represented by the first high-dimensional depth features and each second high-dimensional depth feature to obtain an initial comparison distance matrix;
specifically, the Euclidean distance between the first high-dimensional depth feature representation obtained in step 1.3 for the near-infrared face image to be tested and the second high-dimensional depth feature representation obtained in step 2.3 for each visible light face image is calculated, yielding M Euclidean distances; the set of all these distances is taken as the initial comparison distance matrix, recorded as
D_n = { d(f(y_n), f(x_1)), d(f(y_n), f(x_2)), …, d(f(y_n), f(x_M)) }
Denote the near-infrared face image to be tested as y_n and its first high-dimensional depth feature representation as f(y_n); denote the v-th visible light face image as x_v and its second high-dimensional depth feature representation as f(x_v). Then
d(f(y_n), f(x_v)) = || f(y_n) − f(x_v) ||_2
is the Euclidean distance between the first high-dimensional depth feature representation f(y_n) of the near-infrared face image y_n to be tested and the second high-dimensional depth feature representation f(x_v) of the v-th visible light face image x_v, where v is a positive integer greater than zero and less than or equal to M.
In this embodiment, a final recognition result may be obtained through the obtained initial comparison distance matrix, that is, a value with the minimum euclidean distance in the initial comparison distance matrix is selected, and a visible light face image corresponding to the euclidean distance is the recognition result.
In order to more accurately identify the near-infrared face image to be tested, the identification method can be further optimized through the step 3.2, the step 3.3 and the step 3.4, and the identification accuracy is improved.
Step 3.2, obtaining a first re-ordering weight and a second re-ordering weight according to the initial comparison distance matrix;
To further improve recognition accuracy, this embodiment sets a first reordering weight and a second reordering weight. The first reordering weight is obtained from the first high-dimensional depth feature representation of the near-infrared face image to be tested and the second high-dimensional depth feature representation of a visible light face image. The second reordering weight is obtained from the first high-dimensional depth feature representation of the near-infrared face image corresponding to a given visible light face image and the second high-dimensional depth feature representation of that visible light face image, where the near-infrared face image corresponding to a visible light face image means that the two images depict the same person.
In particular, step 3.2 may comprise step 3.2.1, step 3.2.2 and step 3.2.3, wherein,
3.2.1, selecting K matrix elements from the initial comparison distance matrix according to the sequence from small to large;
select K matrix elements from the initial comparison distance matrix in ascending order of Euclidean distance. The matrix elements are the individual Euclidean distances in the initial comparison distance matrix; each matrix element corresponds to one visible light face image, so the K matrix elements correspond to K visible light face images, whose set is denoted {x_1, x_2, …, x_k, …, x_K}. K is a positive integer greater than zero and less than or equal to M, and k is a positive integer greater than zero and less than or equal to K.
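The ascending selection of K nearest gallery images can be sketched with an argsort (the patent does not prescribe an implementation; names here are illustrative):

```python
import numpy as np

def k_nearest_gallery(distances, K):
    """Indices of the K smallest Euclidean distances, in ascending order.

    distances: (M,) one probe's row of the initial comparison distance matrix.
    Returns the indices of the K visible-light images {x_1, ..., x_K}.
    """
    return np.argsort(distances)[:K]

d = np.array([0.9, 0.2, 1.5, 0.4, 0.7])
print(k_nearest_gallery(d, 3).tolist())  # [1, 3, 4]
```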
3.2.2, based on the K matrix elements, utilizing a first reordering weight calculation formula to obtain a first reordering weight;
specifically, the first reordering weight is

w_n = {w_n,1, w_n,2, …, w_n,k, …, w_n,K}

where w_n denotes the first reordering weight vector and w_n,k is its component associated with the k-th selected visible light face image x_k (the near-infrared face image corresponding to x_k depicts the same person as x_k).

The first reordering weight calculation formula for w_n [the formula itself appears only as an image in the original] is computed from f(y_n), the first high-dimensional depth feature representation of the near-infrared face image y_n to be tested, and f(x_k), the second high-dimensional depth feature representation of the k-th visible light face image x_k among the K matrix elements.
3.2.3, based on the K matrix elements, obtaining a second reordering weight by using the second reordering weight calculation formula;
specifically, the second reordering weight is

w_v = {w_v,1, w_v,2, …, w_v,k, …, w_v,K}

where w_v denotes the second reordering weight vector and w_v,k is its component associated with the k-th selected visible light face image x_k.

The second reordering weight calculation formula for w_v [the formula itself appears only as an image in the original] is computed from f(y_v), the first high-dimensional depth feature representation of the near-infrared face image corresponding to the v-th visible light face image x_v (the two images depict the same person), and f(x_k), the second high-dimensional depth feature representation of the k-th visible light face image x_k among the K matrix elements.
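Both weight vectors have the same shape: a K-vector over the selected neighbours, computed from one reference feature and the K neighbour features. Since the patent renders both calculation formulas only as images, the Gaussian kernel below is purely an assumed stand-in, not the patented formula:

```python
import numpy as np

def reorder_weights(ref_feat, neighbor_feats, sigma=1.0):
    """K-dim weight vector over the selected gallery neighbours.

    ref_feat: f(y_n) for the first weight w_n, or f(y_v) -- the feature of
    the NIR image paired with x_v -- for the second weight w_v.
    neighbor_feats: (K, D) array of f(x_1), ..., f(x_K).
    ASSUMPTION: a Gaussian kernel of the Euclidean distance; the patent's
    own formula is only available as an image.
    """
    d = np.linalg.norm(neighbor_feats - ref_feat, axis=1)
    return np.exp(-d ** 2 / (2 * sigma ** 2))  # each weight in (0, 1]

w = reorder_weights(np.zeros(4), np.vstack([np.zeros(4), np.ones(4)]))
print(np.round(w, 3).tolist())  # [1.0, 0.135]
```

Any monotone decreasing function of the distance would serve the same role: nearer neighbours receive larger weights.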
3.3, updating the initial comparison distance matrix according to the first reordering weight and the second reordering weight until a reordered comparison distance matrix is obtained;
in this embodiment, the Euclidean distance values of the matrix elements in the initial comparison distance matrix are recalculated using the first and second reordering weights obtained in step 3.2, and the original values are replaced by the newly obtained ones. This updates and optimizes the initial comparison distance matrix, remedies the shortcoming that existing methods ignore the effective and valuable information contained in the initial comparison distance matrix, and improves the accuracy of identifying a near-infrared face image against visible light face images.
In particular, step 3.3 may comprise step 3.3.1 and step 3.3.2, wherein,
3.3.1, based on the first reordering weight and the second reordering weight, obtaining an updated Euclidean distance by using a Euclidean distance calculation formula;

Specifically, the calculation formula for updating the matrix elements of the initial comparison distance matrix [the formula itself appears only as an image in the original] produces the updated Euclidean distance d′(f(y_n), f(x_v)) between the first high-dimensional depth feature representation f(y_n) of the near-infrared face image y_n to be tested and the second high-dimensional depth feature representation f(x_v) of the v-th visible light face image x_v.
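With the update formula unavailable (it appears only as an image), one plausible reading of step 3.3.1 is that the updated distance compares the two reordering weight vectors over the shared K neighbours. The sketch below is an assumption for illustration, not the patented update:

```python
import numpy as np

def reranked_distance(w_n, w_v):
    """ASSUMED update rule: Euclidean distance between the first
    reordering weight vector w_n (probe side) and the second reordering
    weight vector w_v (gallery side), both over the same K neighbours.
    Probe and gallery images that agree on their neighbourhood get a
    small updated distance."""
    return float(np.linalg.norm(w_n - w_v))

print(reranked_distance(np.array([1.0, 0.5]), np.array([1.0, 0.0])))  # 0.5
```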
And 3.3.2, updating the initial comparison distance matrix according to the updated Euclidean distance until the reordered comparison distance matrix is obtained.

Specifically, the updated Euclidean distance d′(f(y_n), f(x_v)) is substituted for the original distance d(f(y_n), f(x_v)) in the initial comparison distance matrix, completing the update of that matrix element. All matrix elements of the initial comparison distance matrix are updated according to steps 3.3.1 and 3.3.2 until every element has been updated, yielding the reordered comparison distance matrix.
In step 3.3, the Euclidean distances between the near-infrared face image to be tested and the visible light face images are recalculated using the first and second reordering weights obtained in step 3.2, yielding a reordered comparison distance matrix. The reordered comparison distance matrix reflects the valuable information in the near-infrared face image to be tested and the visible light face images more effectively, and therefore improves the recognition accuracy for the near-infrared face image.
Step 3.4, select the minimum matrix element value from the reordered comparison distance matrix, obtain the final visible light face image according to that matrix element value, and complete the identification of the near-infrared face image to be tested.

In this embodiment, the smallest matrix element value, i.e. the minimum updated Euclidean distance, is selected from the reordered comparison distance matrix, and the visible light face image corresponding to it is the recognition result. Because this minimum matrix element value is derived from richer and more valuable information in the near-infrared face image to be tested and the visible light face images, the result is more accurate.
In order to better evaluate the recognition accuracy of the face recognition method provided by the embodiment, the recognition accuracy can be calculated.
Specifically, for testing, M near-infrared face images can be taken from a near-infrared-visible light face image set to form a near-infrared face image sample set, and the M visible light face images corresponding one-to-one to those near-infrared face images are taken out to form a visible light face image sample set. One-to-one correspondence means that the m-th visible light face image and the m-th near-infrared face image depict the same person, where m is a positive integer greater than zero and less than or equal to M; a near-infrared-visible light face image set is a paired set in which the numbers of near-infrared and visible light face images are equal and the images correspond one-to-one. The identification method above is used to identify the n-th of the M near-infrared face images: the matrix elements of the reordered comparison distance matrix are arranged in ascending order and the minimum distance is found, say the element corresponding to the h-th visible light face image. If h = n, the recognition is correct and the statistical counter l is incremented by 1. Proceeding in this way until all M near-infrared face images have been processed, the near-infrared-visible light face recognition rate t is calculated according to the following formula:

t = l / M
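From the surrounding description, the recognition rate is the fraction of probes whose top-ranked gallery index h equals the probe index n, i.e. t = l / M. A minimal check (illustrative helper, not part of the patent):

```python
def recognition_rate(matched_indices, true_indices):
    """t = l / M, where l counts probes whose top-ranked gallery index h
    equals the ground-truth index n."""
    l = sum(1 for h, n in zip(matched_indices, true_indices) if h == n)
    return l / len(true_indices)

# 3 of 4 probes retrieve the correct gallery image.
print(recognition_rate([0, 2, 2, 3], [0, 1, 2, 3]))  # 0.75
```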
the effects of the present invention can be further explained by the following simulation experiments.
1. Simulation conditions
The simulation was carried out with MATLAB 2015a, developed by MathWorks (USA), on a system with an Intel(R) Core(TM) i7-4790 3.60 GHz CPU and an NVIDIA Titan X GPU, running Ubuntu 16.04. The database used is the Oulu-CASIA database of the Institute of Automation, Chinese Academy of Sciences.
The methods compared in the experiment were as follows:
one is a method based on conventional characterization, noted KPS in the experiments, reference b.klare and a.jain.heterogenous surface registration using a kernel protocol models.ieee Transactions on Pattern Analysis and Machine Analysis, 35(6): 1410. sup. 1422, 2013.
The other is a method based on invariant deep feature representation, denoted IDR in the experiments. Reference: R. He, X. Wu, and Z. Sun, "Learning invariant deep representation for NIR-VIS face recognition," in Proceedings of the AAAI Conference on Artificial Intelligence, 2017, pp. 2000-2006.
2. Simulation content
According to the embodiment of the invention, the near-infrared-visible light face recognition rate is calculated and compared with the recognition rates of the KPS and IDR methods; the results are shown in Table 1.
Table 1. Near-infrared-visible light face recognition rate

Method            KPS      IDR      The invention
Recognition rate  62.2%    94.3%    98.9%
As can be seen from Table 1, reordering the initial comparison distance matrix using high-dimensional depth features fully mines the valuable information in that matrix and achieves higher recognition accuracy, which verifies the advancement of the invention.
Example two
Referring to fig. 3, fig. 3 is a schematic structural diagram of a near-infrared face image recognition apparatus according to an embodiment of the present invention. As shown in fig. 3, the identification apparatus includes:
the first processing module is used for obtaining a first high-dimensional depth feature representation according to the near-infrared face image to be tested;
the second processing module is used for correspondingly obtaining M second high-dimensional depth feature representations according to the M visible light face images;
and the recognition module is used for carrying out face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations.
In an embodiment of the present invention, the first processing module is specifically configured to divide the near-infrared face image to be tested into a plurality of first image blocks with the same size, and two adjacent first image blocks along a first direction are overlapped with each other according to a first preset size; based on the first image block, according to the trained model, obtaining a depth feature representation of the first image block; and splicing the depth feature representations of all the first image blocks in sequence to obtain the first high-dimensional depth feature representation.
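The overlapping block split performed by the first processing module can be sketched as follows (block size, step, and all names are illustrative; the patent only fixes that adjacent blocks have the same size and overlap by a preset amount):

```python
import numpy as np

def split_overlapping_blocks(img, block, step):
    """Split a face image into equal-size square blocks; adjacent blocks
    along each direction overlap by (block - step) pixels."""
    h, w = img.shape[:2]
    return [img[top:top + block, left:left + block]
            for top in range(0, h - block + 1, step)
            for left in range(0, w - block + 1, step)]

img = np.arange(36).reshape(6, 6)
blocks = split_overlapping_blocks(img, block=4, step=2)
print(len(blocks), blocks[0].shape)  # 4 (4, 4)
```

Each block would then be fed to the trained convolutional neural network, and the per-block depth features concatenated in order to form the first high-dimensional depth feature representation.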
In an embodiment of the present invention, the first processing module is further specifically configured to input each of the first image blocks to a trained convolutional neural network model, and correspondingly obtain a depth feature representation of the first image block.
In an embodiment of the present invention, the second processing module is specifically configured to divide each visible light face image into a plurality of second image blocks with the same size, and two adjacent second image blocks along the first direction are overlapped with each other according to a second preset size; based on the second image block, according to the trained model, obtaining the depth feature representation of the second image block; and splicing the depth feature representations of all the second image blocks of each visible light face image in sequence to obtain a second high-dimensional depth feature representation.
In an embodiment of the present invention, the identification module is specifically configured to calculate a euclidean distance between the first high-dimensional depth feature representation and each of the second high-dimensional depth feature representations, so as to obtain an initial comparison distance matrix; obtaining a first reordering weight and a second reordering weight according to the initial comparison distance matrix; updating the initial comparison distance matrix according to the first reordering weight and the second reordering weight until a reordered comparison distance matrix is obtained; and selecting the minimum matrix element value from the reordered comparison distance matrix, obtaining a final visible light face image according to the matrix element value, and finishing the identification of the near-infrared face image to be tested.
In an embodiment of the present invention, the identification module is further specifically configured to select K matrix elements from the initial comparison distance matrix according to a descending order; based on the K matrix elements, utilizing a first reordering weight calculation formula to obtain a first reordering weight; and calculating a formula by using a second reordering weight based on the K matrix elements to obtain a second reordering weight.
In an embodiment of the present invention, the identifying module is further specifically configured to obtain an updated euclidean distance by using a euclidean distance calculation formula based on the first reordering weight and the second reordering weight; and updating the initial comparison distance matrix according to the updated Euclidean distance until a reordered comparison distance matrix is obtained.
The face recognition device provided by the embodiment of the invention can execute the above method embodiments; the implementation principle and technical effects are similar and are not repeated here.
EXAMPLE III
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 4, the electronic device 1100 includes: the system comprises a processor 1101, a communication interface 1102, a memory 1103 and a communication bus 1104, wherein the processor 1101, the communication interface 1102 and the memory 1103 are communicated with each other through the communication bus 1104;
a memory 1103 for storing a computer program;
the processor 1101 is configured to implement the above-mentioned method steps when executing the program stored in the memory 1103.
The processor 1101, when executing the computer program, implements the steps of: obtaining a first high-dimensional depth feature representation according to the near-infrared face image to be tested; correspondingly obtaining M second high-dimensional depth feature representations according to the M visible light face images; and carrying out face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations.
The electronic device provided by the embodiment of the present invention can execute the above method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
Example four
Yet another embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
obtaining a first high-dimensional depth feature representation according to the near-infrared face image to be tested;
correspondingly obtaining M second high-dimensional depth feature representations according to the M visible light face images;
and carrying out face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations.
The computer-readable storage medium provided by the embodiment of the present invention may implement the above method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus (device), or computer program product. Accordingly, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "module" or "system. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program stored/distributed on a suitable medium supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (9)

1. A near-infrared face image recognition method is characterized by comprising the following steps:
obtaining a first high-dimensional depth feature representation according to the near-infrared face image to be tested;
correspondingly obtaining M second high-dimensional depth feature representations according to the M visible light face images;
performing face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations;
the face recognition of the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations comprises the following steps:
calculating Euclidean distances of the first high-dimensional depth feature representation and each second high-dimensional depth feature representation to obtain an initial comparison distance matrix;
obtaining a first reordering weight and a second reordering weight according to the initial comparison distance matrix;
updating the initial comparison distance matrix according to the first reordering weight and the second reordering weight until a reordered comparison distance matrix is obtained;
and selecting the minimum matrix element value from the reordered comparison distance matrix, obtaining a final visible light face image according to the matrix element value, and finishing the identification of the near-infrared face image to be tested.
2. The recognition method of claim 1, wherein deriving a first high-dimensional depth feature representation from the near-infrared face image to be tested comprises:
dividing the near-infrared face image to be tested into a plurality of first image blocks with the same size, wherein two adjacent first image blocks along a first direction are mutually overlapped according to a first preset size;
based on the first image block, according to the trained model, obtaining a depth feature representation of the first image block;
and splicing the depth feature representations of all the first image blocks in sequence to obtain the first high-dimensional depth feature representation.
3. The method according to claim 2, wherein obtaining the depth feature representation of the first image block according to the trained model based on the first image block comprises:
and inputting each first image block into the trained convolutional neural network model to correspondingly obtain the depth characteristic representation of the first image block.
4. The identification method according to claim 1, wherein correspondingly obtaining M second high-dimensional depth feature representations according to M visible light face images comprises:
dividing each visible light face image into a plurality of second image blocks with the same size, wherein two adjacent second image blocks along a first direction are mutually overlapped according to a second preset size;
based on the second image block, according to the trained model, obtaining the depth feature representation of the second image block;
and splicing the depth feature representations of all the second image blocks of each visible light face image in sequence to obtain a second high-dimensional depth feature representation.
5. The identification method according to claim 1, wherein obtaining a first re-ordering weight and a second re-ordering weight according to the initial comparison distance matrix comprises:
selecting K matrix elements from the initial comparison distance matrix according to the sequence from small to large;
based on the K matrix elements, utilizing a first reordering weight calculation formula to obtain a first reordering weight;
and calculating a formula by using a second reordering weight based on the K matrix elements to obtain a second reordering weight.
6. The identification method according to claim 1, wherein updating the initial alignment distance matrix according to the first reordering weight and the second reordering weight until a reordered alignment distance matrix is obtained comprises:
based on the first reordering weight and the second reordering weight, utilizing an Euclidean distance calculation formula to obtain an updated Euclidean distance;
and updating the initial comparison distance matrix according to the updated Euclidean distance until a reordered comparison distance matrix is obtained.
7. A near-infrared human face image recognition device is characterized by comprising:
the first processing module is used for obtaining a first high-dimensional depth feature representation according to the near-infrared face image to be tested;
the second processing module is used for correspondingly obtaining M second high-dimensional depth feature representations according to the M visible light face images;
the recognition module is used for carrying out face recognition on the near-infrared face image to be tested according to the first high-dimensional depth feature representation and the M second high-dimensional depth feature representations;
the identification module is specifically configured to calculate an euclidean distance between the first high-dimensional depth feature representation and each of the second high-dimensional depth feature representations to obtain an initial comparison distance matrix; obtaining a first reordering weight and a second reordering weight according to the initial comparison distance matrix; updating the initial comparison distance matrix according to the first reordering weight and the second reordering weight until a reordered comparison distance matrix is obtained; and selecting the minimum matrix element value from the reordered comparison distance matrix, obtaining a final visible light face image according to the matrix element value, and finishing the identification of the near-infrared face image to be tested.
8. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method of any one of claims 1 to 6 when executing a program stored in a memory.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 6.
CN201910208630.0A 2019-03-19 2019-03-19 Near-infrared face image recognition method and device, electronic equipment and storage medium Active CN110084110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910208630.0A CN110084110B (en) 2019-03-19 2019-03-19 Near-infrared face image recognition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110084110A CN110084110A (en) 2019-08-02
CN110084110B true CN110084110B (en) 2020-12-08

Family

ID=67413316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910208630.0A Active CN110084110B (en) 2019-03-19 2019-03-19 Near-infrared face image recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110084110B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205939B (en) * 2022-07-14 2023-07-25 北京百度网讯科技有限公司 Training method and device for human face living body detection model, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102521609A (en) * 2011-12-02 2012-06-27 湖南大学 Near-infrared and visible light face image recognition method based on distributed compression sensing theory
CN102831379A (en) * 2011-06-14 2012-12-19 汉王科技股份有限公司 Face image recognition method and device
CN108596110A (en) * 2018-04-26 2018-09-28 北京京东金融科技控股有限公司 Image-recognizing method and device, electronic equipment, storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN102930258B (en) * 2012-11-13 2016-05-25 重庆大学 A kind of facial image recognition method
CN105608450B (en) * 2016-03-01 2018-11-27 天津中科智能识别产业技术研究院有限公司 Heterogeneous face identification method based on depth convolutional neural networks
CN107248143B (en) * 2017-04-26 2020-12-25 中山大学 Depth image restoration method based on image segmentation
CN109376679A (en) * 2018-11-05 2019-02-22 绍兴文理学院 A kind of face identification system and method based on deep learning


Also Published As

Publication number Publication date
CN110084110A (en) 2019-08-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant