CN110084109A - A kind of low-resolution face image recognition methods, device, electronic equipment and storage medium - Google Patents

Low-resolution face image recognition method, apparatus, electronic device and storage medium

Info

Publication number
CN110084109A
CN110084109A (application number CN201910208628.3A)
Authority
CN
China
Prior art keywords
face
image
training
resolution
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910208628.3A
Other languages
Chinese (zh)
Inventor
彭春蕾 (Peng Chunlei)
王楠楠 (Wang Nannan)
高新波 (Gao Xinbo)
李洁 (Li Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201910208628.3A priority Critical patent/CN110084109A/en
Publication of CN110084109A publication Critical patent/CN110084109A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a low-resolution face image recognition method, apparatus, electronic device and storage medium, comprising: establishing a trained model; obtaining a first deep local feature representation from a low-resolution face image to be tested by using the trained model; obtaining L corresponding second deep local feature representations from L high-resolution face images by using the trained model; and performing face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations. The invention represents the low-resolution face image to be tested and the high-resolution face images with the first and second deep local feature representations respectively, which improves the robustness and accuracy of recognition.

Description

Low-resolution face image recognition method, apparatus, electronic device and storage medium
Technical field
The invention belongs to the technical fields of artificial intelligence and pattern recognition, and in particular relates to a low-resolution face image recognition method, apparatus, electronic device and storage medium.
Background art
Because the resolutions of image acquisition devices such as surveillance cameras differ, recognition between low-resolution face images and high-resolution face images has broad significance and application value in the field of public security. For example, limited by the resolution of its imaging device and the long distance to the target, a surveillance camera can often capture only a low-resolution image of a target face. How to compare the acquired low-resolution face image with the high-resolution face images in a police identity database is then of great significance for helping the police confirm the identity of the target person in the surveillance image. Because the collected face images differ in resolution, there are large differences in facial texture and detail, which makes it difficult to recognize a low-resolution face image using high-resolution face images. Therefore, with the popularization of surveillance cameras, face recognition from low-resolution surveillance images is of great importance.
The basic idea of existing recognition methods that identify a low-resolution face image against high-resolution face images is as follows: first design a face feature descriptor, represent the low-resolution face image and the high-resolution face images with this descriptor, and then use the distance between the descriptors as the measure of similarity between two faces to realize recognition.
However, existing recognition methods use only hand-crafted features or distance metric matrices in the face recognition process, which limits recognition accuracy.
Summary of the invention
In order to solve the above problems in the prior art, the present invention provides a low-resolution face image recognition method, apparatus, electronic device and storage medium. The technical problem to be solved by the present invention is achieved through the following technical solutions:
A low-resolution face image recognition method, comprising:
establishing a trained model;
obtaining a first deep local feature representation from a low-resolution face image to be tested by using the trained model;
obtaining L corresponding second deep local feature representations from L high-resolution face images by using the trained model;
performing face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations.
In one embodiment of the invention, establishing the trained model comprises:
obtaining M low-resolution face training images and M high-resolution face training images, wherein M is a positive integer greater than zero;
dividing each low-resolution face training image into several first image blocks of identical size to obtain a first image block set;
dividing each high-resolution face training image into several second image blocks of identical size to obtain a second image block set;
training an initial model with the first image block set and the second image block set to obtain the trained model.
In one embodiment of the invention, training the initial model with the first image block set and the second image block set to obtain the trained model comprises:
inputting the first image block set and the second image block set into the initial model, and processing the initial model with a stochastic gradient descent algorithm so that the cross-modal loss function is minimized;
obtaining the trained model according to the minimized cross-modal loss function.
In one embodiment of the invention, the cross-modal loss function is as follows:
wherein L is the cross-modal loss function; yi denotes the i-th first image block; xi denotes the second image block taken from the same position of the same person as yi; xj denotes the j-th second image block, with j ≠ i; f(yi) denotes the deep local feature representation of the first image block yi; [·]+ returns its argument when the argument is greater than zero and zero when the argument is less than or equal to zero; fr(yi) denotes the r-th element of f(yi); the overbarred term denotes the mean of the deep local feature representations of all first image blocks; and λ takes the value 0.0001.
In one embodiment of the invention, obtaining the first deep local feature representation from the low-resolution face image to be tested by using the trained model comprises:
dividing the low-resolution face image to be tested into several third image blocks of identical size;
obtaining the deep local feature representation of each third image block from the trained model;
concatenating the deep local feature representations of all third image blocks in order to obtain the first deep local feature representation.
In one embodiment of the invention, obtaining the L corresponding second deep local feature representations from the L high-resolution face images by using the trained model comprises:
dividing each high-resolution face image into several fourth image blocks of identical size;
obtaining the deep local feature representation of each fourth image block from the trained model;
concatenating the deep local feature representations of all fourth image blocks of each high-resolution face image in order, so as to obtain the second deep local feature representation of each high-resolution face image.
In one embodiment of the invention, performing face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations comprises:
calculating the Euclidean distance between the first deep local feature representation and each second deep local feature representation, and selecting the smallest Euclidean distance to complete the recognition of the low-resolution face image to be tested.
An embodiment of the invention also provides a low-resolution face image recognition apparatus, comprising:
a model building module, configured to establish a trained model;
a first processing module, configured to obtain a first deep local feature representation from a low-resolution face image to be tested by using the trained model;
a second processing module, configured to obtain L corresponding second deep local feature representations from L high-resolution face images by using the trained model;
a recognition module, configured to perform face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations.
An embodiment of the invention also provides an electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any of the above embodiments when executing the program stored in the memory.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method steps of any of the above embodiments.
Beneficial effects of the present invention:
The present invention represents the low-resolution face image to be tested and the high-resolution face images with the first deep local feature representation and the second deep local feature representations respectively, which improves the robustness and accuracy of recognition.
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a low-resolution face image recognition method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another low-resolution face image recognition method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a low-resolution face image recognition apparatus provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed description of embodiments
The present invention is described in further detail below in combination with specific embodiments, but the embodiments of the present invention are not limited thereto.
In the description of the present invention, it should be understood that the terms "first" and "second" are used for description purposes only and cannot be interpreted as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present invention, "plurality" means two or more, unless otherwise specifically defined.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like means that specific features described in connection with the embodiment or example are included in at least one embodiment or example of the invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. In addition, those skilled in the art may join and combine the different embodiments or examples described in this specification.
Embodiment one
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a low-resolution face image recognition method provided by an embodiment of the present invention. As shown in Fig. 1, the low-resolution face image recognition method of this embodiment comprises:
Step 1: establish a trained model;
In step 1, a low-resolution/high-resolution face image pair set may be selected, and M low-resolution face training images and M high-resolution face training images are selected from the pair set, wherein the M low-resolution face training images correspond one to one to the M high-resolution face training images, i.e., for any low-resolution face training image there is one corresponding high-resolution face training image among the M high-resolution face training images; in other words, the low-resolution and high-resolution face training images of the same person are acquired. The M low-resolution face training images and the M high-resolution face training images are used to train an initial model, and the model obtained after training is completed is the trained model, wherein M is a positive integer greater than zero.
The trained model may be, for example, a pre-trained convolutional neural network model, or another machine-learning model such as a different type of neural network.
This embodiment does not specifically limit the values of the low resolution and the high resolution; "low resolution" and "high resolution" express the relative sizes of two resolutions rather than fixed numerical values. Any application scenario in which a low-resolution image is recognized using high-resolution images is suitable for the method of this embodiment; for example, the low resolution may be 64 × 64 and the high resolution 256 × 256.
Step 2: obtain a first deep local feature representation from the low-resolution face image to be tested by using the trained model;
In step 2, the low-resolution face image to be tested is the image to be recognized. The first deep local feature representation is obtained by dividing the low-resolution face image to be tested into several parts, processing each part with the trained model obtained in step 1 to obtain a result for each part, and finally concatenating all the results. Therefore the first deep local feature representation of this embodiment can characterize the low-resolution face image to be tested more accurately and has stronger representation ability and robustness.
Step 3: obtain L corresponding second deep local feature representations from L high-resolution face images by using the trained model, wherein L is a positive integer greater than zero;
In step 3, the L high-resolution face images form a data set, which may be an identity database used for face comparison, such as a police identity database; the identity database may differ according to the resources possessed by the user and is not specifically limited here. By comparing the low-resolution face image to be tested with this data set, the low-resolution face image to be tested can be recognized: if the person in the low-resolution face image to be tested also appears in the data set, the recognition method of this embodiment can identify that person and thereby determine the identity of the person in the low-resolution face image to be tested.
Each high-resolution face image corresponds to one second deep local feature representation. The second deep local feature representation is obtained by dividing the high-resolution face image into several parts, processing each part with the trained model to obtain a result for each part, and finally concatenating all the results. Therefore the second deep local feature representation of this embodiment can characterize the high-resolution face image more accurately and has stronger representation ability and robustness.
Step 4: perform face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations;
The present invention compares the obtained first deep local feature representation of the low-resolution face image to be tested with the second deep local feature representations corresponding to the L high-resolution face images, for example by calculating the Euclidean distance between the first deep local feature representation and each second deep local feature representation and then selecting the second deep local feature representation corresponding to the smallest Euclidean distance; the corresponding high-resolution face image can then be obtained from the L high-resolution face images, thereby completing the recognition of the low-resolution face image to be tested.
Since both the first deep local feature representation and the second deep local feature representations have strong representation ability, the accuracy and robustness of recognition can be improved.
Referring to Fig. 2, in one embodiment of the invention, step 1 may specifically include step 1.1, step 1.2, step 1.3 and step 1.4, wherein
Step 1.1: obtain M low-resolution face training images and M high-resolution face training images;
In step 1.1, a low-resolution/high-resolution face image pair set is chosen, wherein the pair set includes several low-resolution face training images and high-resolution face training images; a low-resolution face training image is a low-resolution image containing a face, and a high-resolution face training image is a high-resolution image containing a face. M low-resolution face training images are taken from the pair set to form a low-resolution face training sample set, and the M high-resolution face training images corresponding one to one to these low-resolution face training images are taken to form a high-resolution face training sample set, wherein one-to-one correspondence between a low-resolution face training image and a high-resolution face training image means that the two images are acquired from the same person. For ease of understanding, the m-th low-resolution face training image in the low-resolution face training sample set and the m-th high-resolution face training image in the high-resolution face training sample set are taken to be acquired from the same person, wherein m is a positive integer greater than zero and 0 < m ≤ M.
Step 1.2: divide each low-resolution face training image into several first image blocks of identical size to obtain a first image block set;
Specifically, each low-resolution face training image in the low-resolution face training sample set is divided into C face image blocks of identical size, each denoted as a first image block. The set of the first image blocks corresponding to all low-resolution face training images is denoted as the low-resolution face image block set, which is the first image block set, and the first image block set is denoted as {y1, y2, …, yn, …, yN}, wherein N = M × C, yn denotes the n-th first image block, C, N and n are positive integers greater than zero, and 0 < n ≤ N.
For example, the size of the first image blocks is 3 × 3.
Step 1.3: divide each high-resolution face training image into several second image blocks of identical size to obtain a second image block set;
Specifically, each high-resolution face training image in the high-resolution face training sample set is divided into C face image blocks of identical size, each denoted as a second image block. The set of the second image blocks corresponding to all high-resolution face training images is denoted as the high-resolution face image block set, which is the second image block set, and the second image block set is denoted as {x1, x2, …, xn, …, xN}, wherein N = M × C and xn denotes the n-th second image block. To facilitate training of the initial model, yn and xn are taken from the same position in the low-resolution face training image and the high-resolution face training image of the same person.
For example, the size of the second image blocks is 3 × 3.
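As an illustration of steps 1.2 and 1.3, the following sketch shows one way the paired block sets {y1, …, yN} and {x1, …, xN} could be built. The helper names, the NumPy implementation and the block-grid parameter blocks_per_side are assumptions introduced here for illustration only; the patent requires only that yn and xn come from the same position of the low-resolution and high-resolution images of the same person.

    # Sketch of building the first (low-resolution) and second (high-resolution)
    # image block sets; the block layout and sizes are illustrative assumptions.
    import numpy as np

    def split_into_blocks(image, blocks_per_side):
        """Divide a face image into blocks_per_side**2 equally sized blocks,
        ordered left to right, top to bottom."""
        h, w = image.shape[:2]
        bh, bw = h // blocks_per_side, w // blocks_per_side
        return [image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                for r in range(blocks_per_side)
                for c in range(blocks_per_side)]

    def build_paired_block_sets(lr_images, hr_images, blocks_per_side=8):
        """lr_images[m] and hr_images[m] are assumed to show the same person, so
        y_set[n] and x_set[n] come from the same position of the same person."""
        y_set, x_set = [], []
        for lr, hr in zip(lr_images, hr_images):
            y_set.extend(split_into_blocks(lr, blocks_per_side))
            x_set.extend(split_into_blocks(hr, blocks_per_side))
        return y_set, x_set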
Step 1.4: train the initial model with the first image block set and the second image block set to obtain the trained model;
In step 1.4, the first image block set obtained in step 1.2 and the second image block set obtained in step 1.3 are used to train the initial model, thereby obtaining the trained model. Because the low-resolution and high-resolution face training images used to train the initial model are divided into several image blocks, the initial model can learn the features of each image block, which improves the robustness and recognition accuracy of the trained model.
In an embodiment of the invention, step 1.4 may specifically include step 1.4.1 and step 1.4.2, wherein
Step 1.4.1: input the first image block set and the second image block set into the initial model, and process the initial model with a stochastic gradient descent algorithm so that the cross-modal loss function is minimized;
Specifically, the first image block set and the second image block set are input into the initial model, and a cross-modal loss function is constructed from the first image block set and the second image block set to train the initial model; the initial model is processed with a stochastic gradient descent algorithm, and when the cross-modal loss function reaches its minimum, the processing of the initial model is stopped.
The cross-modal loss function can be expressed as follows:
wherein L is the cross-modal loss function; yi denotes the i-th first image block; xi denotes the second image block taken from the same position of the same person as yi; xj denotes the j-th second image block, with j ≠ i; f(yi) denotes the deep local feature representation of the first image block yi; [·]+ returns its argument when the argument is greater than zero and zero when the argument is less than or equal to zero; fr(yi) denotes the r-th element of f(yi); the overbarred term denotes the mean of the deep local feature representations of all first image blocks; and λ takes the value 0.0001.
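The equation itself is not reproduced above. Based solely on the symbols defined here — a hinge [·]+ over matched pairs (yi, xi) and non-matched pairs (yi, xj), plus a λ-weighted term built from fr(yi) and the mean descriptor — one plausible triplet-style form is sketched below; the margin α and the exact shape of the λ term are assumptions, not taken from the patent.

    % Hedged reconstruction of the cross-modal loss; \alpha and the form of the
    % \lambda term are assumptions based on the symbol definitions in the text.
    L = \sum_{i}\Big[\,\lVert f(y_i)-f(x_i)\rVert_2^2
          -\lVert f(y_i)-f(x_j)\rVert_2^2+\alpha\,\Big]_{+}
        \;+\;\lambda\sum_{i}\sum_{r}\big(f_r(y_i)-\bar{f}_r(y)\big)^2,
    \qquad \lambda = 0.0001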
Step 1.4.2: obtain the trained model according to the minimized cross-modal loss function;
When the initial model has been processed with the stochastic gradient descent algorithm and the cross-modal loss function reaches its minimum, the resulting model is the trained model; at this point the robustness and recognition accuracy of the obtained model are highest.
The trained model may be, for example, a pre-trained convolutional neural network model, or another machine-learning model such as a different type of neural network.
For example, the initial model is a convolutional neural network model mainly composed of seven convolution units, wherein each convolution unit includes a convolutional layer, a normalization layer and an activation layer. The convolutional neural network model is constructed using the method disclosed in "F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 815-823."
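A minimal PyTorch sketch of such a descriptor network is given below: seven convolution units, each consisting of a convolutional layer, a normalization layer and an activation layer, followed by pooling and a linear layer that maps an image block to its deep local feature f(·). The channel width, kernel size, embedding dimension and pooling choice are illustrative assumptions; the patent fixes only the structure of the convolution units.

    # Sketch of the seven-convolution-unit descriptor network described above.
    # Channel width, kernel size, embedding dimension and pooling are assumptions.
    import torch
    import torch.nn as nn

    class ConvUnit(nn.Module):
        """One convolution unit: convolutional layer + normalization layer + activation layer."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            self.norm = nn.BatchNorm2d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.norm(self.conv(x)))

    class LocalDescriptorNet(nn.Module):
        """Maps an image block to its deep local feature representation f(.)."""
        def __init__(self, in_ch=3, width=64, embed_dim=128):
            super().__init__()
            chans = [in_ch] + [width] * 7                 # seven convolution units
            self.units = nn.Sequential(*[ConvUnit(chans[i], chans[i + 1]) for i in range(7)])
            self.pool = nn.AdaptiveAvgPool2d(1)           # fixed-length output per block
            self.fc = nn.Linear(width, embed_dim)

        def forward(self, blocks):                        # blocks: (B, C, H, W)
            h = self.pool(self.units(blocks)).flatten(1)
            return self.fc(h)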
In an embodiment of the invention, step 2 may specifically include step 2.1, step 2.2 and step 2.3, wherein
Step 2.1: divide the low-resolution face image to be tested into several third image blocks of identical size;
Specifically, the low-resolution face image to be tested in step 2.1 is divided into D face image blocks of identical size, each denoted as a third image block, wherein D is a positive integer greater than zero. This enhances the representation ability of the first deep local feature representation.
For example, the size of the third image blocks is 3 × 3.
Step 2.2: obtain the deep local feature representation of each third image block from the trained model;
All third image blocks obtained in step 2.1 are separately input into the trained model obtained in step 1; the output of the trained model is the deep local feature representation corresponding to each third image block, i.e., each third image block corresponds to one deep local feature representation. The deep local feature representations obtained for the third image blocks of step 2.1 therefore have stronger representation ability.
For example, each third image block is input into the trained convolutional neural network model, and the result output by the convolutional neural network model is taken as the deep local feature representation; each third image block thus corresponds to one deep local feature representation, until the deep local feature representations corresponding to all third image blocks of the low-resolution face image to be tested are obtained.
This embodiment does not limit the specific model used; any model that can produce deep local feature representations can be applied in the method of this embodiment, so a person skilled in the art can readily understand how the trained model processes the images. Likewise, the structure of the convolutional neural network model illustrated in this embodiment is not unique, and the specific structure of the convolutional neural network model is not limited here.
Step 2.3: concatenate the deep local feature representations of all third image blocks in order to obtain the first deep local feature representation.
Specifically, the deep local feature representations of all third image blocks in step 2.3 are directly concatenated in order, and the result of the concatenation is the first deep local feature representation of the low-resolution face image to be tested. The order may be from left to right and from top to bottom as the low-resolution face image to be tested is viewed frontally; of course, the concatenation may also be carried out in another order, which is not specifically limited here.
In this embodiment the low-resolution face image to be tested is divided into several third image blocks, so that each feature of the low-resolution face image to be tested can be characterized more accurately by the deep local feature representation corresponding to a third image block; these deep local feature representations are then concatenated, so that the finally obtained first deep local feature representation can characterize the features of the low-resolution face image to be tested more accurately and has stronger representation ability.
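The flow of steps 2.1 to 2.3 can be summarized with the following sketch, which reuses the illustrative split_into_blocks helper and LocalDescriptorNet defined above (hypothetical names, not from the patent): divide the image to be tested into ordered blocks, compute the deep local feature of each block with the trained model, and concatenate the block features in order.

    # Sketch of steps 2.1-2.3: per-block deep local features, concatenated in block order.
    # Assumes HWC float images and the illustrative helpers defined earlier.
    import torch

    def deep_local_representation(image, model, blocks_per_side=8, device="cpu"):
        blocks = split_into_blocks(image, blocks_per_side)      # ordered left-to-right, top-to-bottom
        batch = torch.stack([torch.as_tensor(b, dtype=torch.float32).permute(2, 0, 1)
                             for b in blocks]).to(device)
        with torch.no_grad():
            feats = model(batch)                                # (num_blocks, embed_dim)
        return feats.flatten()                                  # concatenation in block order

The same routine, applied to each of the L high-resolution face images in step 3, yields the L second deep local feature representations.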
In an embodiment of the invention, step 3 may specifically include step 3.1, step 3.2 and step 3.3, wherein
Step 3.1: divide each high-resolution face image into several fourth image blocks of identical size;
Specifically, each high-resolution face image in step 3.1 is divided into E face image blocks of identical size, each denoted as a fourth image block, wherein E is a positive integer greater than zero and E and D may be equal or unequal. This enhances the representation ability of the second deep local feature representation.
For example, the size of the fourth image blocks is 3 × 3.
Step 3.2: obtain the deep local feature representation of each fourth image block from the trained model;
All fourth image blocks obtained in step 3.1 are separately input into the trained model obtained in step 1; the output of the trained model is the deep local feature representation corresponding to each fourth image block, i.e., each fourth image block corresponds to one deep local feature representation. The deep local feature representations obtained for the fourth image blocks of step 3.1 therefore have stronger representation ability.
For example, each fourth image block is input into the trained convolutional neural network model, and the result output by the convolutional neural network model is taken as the deep local feature representation; each fourth image block thus corresponds to one deep local feature representation, until the deep local feature representations corresponding to all fourth image blocks of the high-resolution face image are obtained.
This embodiment does not limit the specific model used; any model that can produce deep local feature representations can be applied in the method of this embodiment, so a person skilled in the art can readily understand how the trained model processes the images. Likewise, the structure of the convolutional neural network model illustrated in this embodiment is not unique, and the specific structure of the convolutional neural network model is not limited here.
Step 3.3: concatenate the deep local feature representations of all fourth image blocks of each high-resolution face image in order, so as to obtain the second deep local feature representation of each high-resolution face image;
Specifically, the deep local feature representations of all fourth image blocks corresponding to each high-resolution face image in step 3.2 are directly concatenated in order, and the result of the concatenation is the second deep local feature representation of that high-resolution face image; each high-resolution face image corresponds to one second deep local feature representation, i.e., the L high-resolution face images correspond to L second deep local feature representations. The order may be from left to right and from top to bottom as the high-resolution face image is viewed frontally; of course, the concatenation may also be carried out in another order, which is not specifically limited here.
In this embodiment each high-resolution face image is divided into several fourth image blocks, so that each feature of the high-resolution face image can be characterized more accurately by the deep local feature representation corresponding to a fourth image block; these deep local feature representations are then concatenated, so that the finally obtained second deep local feature representation of the high-resolution face image can characterize the features of the high-resolution face image more accurately and has stronger representation ability.
In an embodiment of the invention, step 4 may specifically be: calculating the Euclidean distance between the first deep local feature representation and each second deep local feature representation, and selecting the smallest Euclidean distance to complete the recognition of the low-resolution face image to be tested.
Specifically, the Euclidean distance between the first deep local feature representation of the low-resolution face image to be tested obtained in step 2 and the second deep local feature representation of each high-resolution face image in step 3 is calculated separately, yielding L Euclidean distances; the minimum of the L Euclidean distances is selected, and the high-resolution face image corresponding to this minimum is the recognition result.
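Under the same assumptions, step 4 reduces to nearest-neighbour matching by Euclidean distance, as in the sketch below; probe_feat stands for the first deep local feature representation and gallery_feats for the L second deep local feature representations produced as above.

    # Sketch of step 4: pick the gallery image whose second deep local feature
    # representation is closest to the probe representation in Euclidean distance.
    import torch

    def recognize(probe_feat, gallery_feats):
        """probe_feat: (d,), gallery_feats: (L, d). Returns the index of the
        high-resolution face image with the smallest Euclidean distance."""
        dists = torch.linalg.norm(gallery_feats - probe_feat.unsqueeze(0), dim=1)
        return int(torch.argmin(dists))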
Because the embodiment of the present invention uses the first deep local feature representation to represent the low-resolution face image to be tested and the second deep local feature representations to represent the high-resolution face images, compared with conventional methods the recognition method of this embodiment can model the distribution of the face image data itself and can improve robustness and recognition accuracy during the feature representation of the face image blocks, overcoming the deficiency of existing recognition methods, which ignore the latent data distribution of the face images themselves, and improving the accuracy of recognizing a low-resolution face image with high-resolution face images.
In order to better evaluate the recognition accuracy of the face recognition method provided by this embodiment, the recognition accuracy can be calculated.
Specifically, during testing, K low-resolution face test images may be taken from the low-resolution/high-resolution face image pair set to form a low-resolution face test sample set, and the K high-resolution face test images corresponding one to one to these low-resolution face test images are taken to form a high-resolution face test sample set, wherein K is a positive integer greater than zero and the k-th low-resolution face test image in the low-resolution face test sample set and the k-th high-resolution face test image in the high-resolution face test sample set are acquired from the same person, wherein k is a positive integer greater than zero and 0 < k ≤ K. The above recognition method is used to recognize the n-th low-resolution face test image among the K low-resolution face test images, and the recognition result is the h-th of the K high-resolution face test images; if n = h, the recognition is correct and the counter l is increased by 1. This procedure is repeated until all K low-resolution face test images have been processed, and the recognition rate t is then calculated according to the following formula:
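The formula for t is not reproduced above; a form consistent with the counting procedure just described (l correct matches out of K test images) is:

    t = \frac{l}{K} \times 100\%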
The effect of the present invention can be further illustrated by the following simulation experiment.
1. Simulation conditions
The simulation of the present invention was carried out in MATLAB 2015a, developed by MathWorks (USA), on a machine with an Intel(R) Core(TM) i7-4790 3.60 GHz CPU and an NVIDIA Titan X GPU running the Ubuntu 16.04 operating system. The database used is the NJU-ID database of Nanjing University.
The methods compared in the experiment are as follows:
One is a method based on sparse matrix representation, denoted ESCM in the experiment; reference: J. Huo, Y. Gao, Y. Shi, W. Yang, H. Yin. Ensemble of sparse cross-modal metrics for heterogeneous face recognition. In: ACM International Conference on Multimedia, 2016, pp. 1405-1414.
The other is a method based on cross-modal distance learning, denoted CML in the experiment; reference: J. Huo, Y. Gao, Y. Shi, H. Yin. Cross-modal metric learning for AUC optimization. IEEE Transactions on Neural Networks and Learning Systems, 29(10): 4844-4856, 2018.
2. Simulation content
According to the specific embodiment of the present invention described above, the low-resolution/high-resolution face recognition accuracy was calculated and compared with the recognition accuracies of the ESCM method and the CML method; the results are shown in Table 1.
Table 1. Low-resolution/high-resolution face recognition accuracy
Method                 ESCM     CML      The present invention
Recognition accuracy   20.8%    30.9%    43.5%
As can be seen from Table 1, because the deep local feature representation method of the present invention takes the latent data distribution of the face images themselves into account during feature representation, it achieves a higher recognition rate, which demonstrates the advancement of the present invention.
Embodiment two
Referring to Fig. 3, Fig. 3 is a schematic structural diagram of a low-resolution face image recognition apparatus provided by an embodiment of the present invention. As shown in Fig. 3, the low-resolution face image recognition apparatus comprises:
a model building module, configured to establish a trained model;
a first processing module, configured to obtain a first deep local feature representation from a low-resolution face image to be tested by using the trained model;
a second processing module, configured to obtain L corresponding second deep local feature representations from L high-resolution face images by using the trained model;
a recognition module, configured to perform face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations.
In one embodiment of the invention, the model building module is specifically configured to obtain M low-resolution face training images and M high-resolution face training images, wherein M is a positive integer greater than zero; divide each low-resolution face training image into several first image blocks of identical size to obtain a first image block set; divide each high-resolution face training image into several second image blocks of identical size to obtain a second image block set; and train an initial model with the first image block set and the second image block set to obtain the trained model.
In one embodiment of the invention, the model building module is further configured to input the first image block set and the second image block set into the initial model and process the initial model with a stochastic gradient descent algorithm so that the cross-modal loss function is minimized, and to obtain the trained model according to the minimized cross-modal loss function.
The cross-modal loss function is:
wherein L is the cross-modal loss function; yi denotes the i-th first image block; xi denotes the second image block taken from the same position of the same person as yi; xj denotes the j-th second image block, with j ≠ i; f(yi) denotes the deep local feature representation of the first image block yi; [·]+ returns its argument when the argument is greater than zero and zero when the argument is less than or equal to zero; fr(yi) denotes the r-th element of f(yi); the overbarred term denotes the mean of the deep local feature representations of all first image blocks; and λ takes the value 0.0001.
In one embodiment of the invention, the first processing module is specifically configured to divide the low-resolution face image to be tested into several third image blocks of identical size, obtain the deep local feature representation of each third image block from the trained model, and concatenate the deep local feature representations of all third image blocks in order to obtain the first deep local feature representation.
In one embodiment of the invention, the second processing module is specifically configured to divide each high-resolution face image into several fourth image blocks of identical size, obtain the deep local feature representation of each fourth image block from the trained model, and concatenate the deep local feature representations of all fourth image blocks of each high-resolution face image in order, so as to obtain the second deep local feature representation of each high-resolution face image.
In one embodiment of the invention, the recognition module is specifically configured to calculate the Euclidean distance between the first deep local feature representation and each second deep local feature representation and select the smallest Euclidean distance to complete the recognition of the low-resolution face image to be tested.
The recognition apparatus provided by the embodiment of the present invention can execute the above method embodiments; its implementation principle and technical effects are similar and are not described in detail here.
Embodiment three
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in Fig. 4, the electronic device 1100 comprises: a processor 1101, a communication interface 1102, a memory 1103 and a communication bus 1104, wherein the processor 1101, the communication interface 1102 and the memory 1103 communicate with one another through the communication bus 1104;
the memory 1103 is configured to store a computer program;
the processor 1101 is configured to implement the above method steps when executing the program stored in the memory 1103.
When executing the computer program, the processor 1101 implements the following steps: establishing a trained model; obtaining a first deep local feature representation from a low-resolution face image to be tested by using the trained model; obtaining L corresponding second deep local feature representations from L high-resolution face images by using the trained model; and performing face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations.
The electronic device provided by the embodiment of the present invention can execute the above method embodiments; its implementation principle and technical effects are similar and are not described in detail here.
Embodiment four
Another embodiment of the invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the following steps:
establishing a trained model;
obtaining a first deep local feature representation from a low-resolution face image to be tested by using the trained model;
obtaining L corresponding second deep local feature representations from L high-resolution face images by using the trained model;
performing face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations.
The computer-readable storage medium provided by the embodiment of the present invention can execute the above method embodiments; its implementation principle and technical effects are similar and are not described in detail here.
Those skilled in the art will understand that the embodiments of the present application may be provided as a method, an apparatus (device) or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all of which are referred to here as a "module" or "system". Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code. The computer program may be stored in or distributed on a suitable medium, provided together with other hardware or as a part of the hardware, and may also be distributed in other forms, for example via the Internet or other wired or wireless telecommunication systems.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions can also be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A low-resolution face image recognition method, characterized by comprising:
establishing a trained model;
obtaining a first deep local feature representation from a low-resolution face image to be tested by using the trained model;
obtaining L corresponding second deep local feature representations from L high-resolution face images by using the trained model;
performing face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations.
2. The recognition method according to claim 1, characterized in that establishing the trained model comprises:
obtaining M low-resolution face training images and M high-resolution face training images, wherein M is a positive integer greater than zero;
dividing each low-resolution face training image into several first image blocks of identical size to obtain a first image block set;
dividing each high-resolution face training image into several second image blocks of identical size to obtain a second image block set;
training an initial model with the first image block set and the second image block set to obtain the trained model.
3. The recognition method according to claim 2, characterized in that training the initial model with the first image block set and the second image block set to obtain the trained model comprises:
inputting the first image block set and the second image block set into the initial model and processing the initial model with a stochastic gradient descent algorithm so that the cross-modal loss function is minimized;
obtaining the trained model according to the minimized cross-modal loss function.
4. The recognition method according to claim 3, characterized in that the cross-modal loss function is as follows:
wherein L is the cross-modal loss function; yi denotes the i-th first image block; xi denotes the second image block taken from the same position of the same person as yi; xj denotes the j-th second image block, with j ≠ i; f(yi) denotes the deep local feature representation of the first image block yi; [·]+ returns its argument when the argument is greater than zero and zero when the argument is less than or equal to zero; fr(yi) denotes the r-th element of f(yi); the overbarred term denotes the mean of the deep local feature representations of all first image blocks; and λ takes the value 0.0001.
5. The recognition method according to claim 1, characterized in that obtaining the first deep local feature representation from the low-resolution face image to be tested by using the trained model comprises:
dividing the low-resolution face image to be tested into several third image blocks of identical size;
obtaining the deep local feature representation of each third image block from the trained model;
concatenating the deep local feature representations of all third image blocks in order to obtain the first deep local feature representation.
6. The recognition method according to claim 1, characterized in that obtaining the L corresponding second deep local feature representations from the L high-resolution face images by using the trained model comprises:
dividing each high-resolution face image into several fourth image blocks of identical size;
obtaining the deep local feature representation of each fourth image block from the trained model;
concatenating the deep local feature representations of all fourth image blocks of each high-resolution face image in order, so as to obtain the second deep local feature representation of each high-resolution face image.
7. The recognition method according to claim 1, characterized in that performing face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations comprises:
calculating the Euclidean distance between the first deep local feature representation and each second deep local feature representation, and selecting the smallest Euclidean distance to complete the recognition of the low-resolution face image to be tested.
8. A low-resolution face image recognition apparatus, characterized by comprising:
a model building module, configured to establish a trained model;
a first processing module, configured to obtain a first deep local feature representation from a low-resolution face image to be tested by using the trained model;
a second processing module, configured to obtain L corresponding second deep local feature representations from L high-resolution face images by using the trained model;
a recognition module, configured to perform face recognition on the low-resolution face image to be tested according to the first deep local feature representation and the L second deep local feature representations.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1 to 7 when executing the program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method steps of any one of claims 1 to 7 are implemented.
CN201910208628.3A 2019-03-19 2019-03-19 Low-resolution face image recognition method, apparatus, electronic device and storage medium Pending CN110084109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910208628.3A CN110084109A (en) 2019-03-19 2019-03-19 Low-resolution face image recognition method, apparatus, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910208628.3A CN110084109A (en) 2019-03-19 2019-03-19 Low-resolution face image recognition method, apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN110084109A true CN110084109A (en) 2019-08-02

Family

ID=67413315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910208628.3A Pending CN110084109A (en) Low-resolution face image recognition method, apparatus, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN110084109A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751108A (en) * 2013-12-31 2015-07-01 汉王科技股份有限公司 Face image recognition device and face image recognition method
CN107169455A (en) * 2017-05-16 2017-09-15 中山大学 Face character recognition methods based on depth local feature

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chunlei Peng et al.: "DLFace: Deep local descriptor for cross-modality face recognition", ScienceDirect *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022127112A1 (en) * 2020-12-14 2022-06-23 奥比中光科技集团股份有限公司 Cross-modal face recognition method, apparatus and device, and storage medium
CN112883925A (en) * 2021-03-23 2021-06-01 杭州海康威视数字技术股份有限公司 Face image processing method, device and equipment
CN112883925B (en) * 2021-03-23 2023-08-29 杭州海康威视数字技术股份有限公司 Face image processing method, device and equipment

Similar Documents

Publication Publication Date Title
CN109101930A (en) A kind of people counting method and system
CN111709409A (en) Face living body detection method, device, equipment and medium
CN105975959A (en) Face characteristic extraction modeling method based on neural network, face identification method, face characteristic extraction modeling device and face identification device
CN106203242A (en) A kind of similar image recognition methods and equipment
CN106469302A (en) A kind of face skin quality detection method based on artificial neural network
CN106548159A (en) Reticulate pattern facial image recognition method and device based on full convolutional neural networks
CN109359539A (en) Attention appraisal procedure, device, terminal device and computer readable storage medium
CN111401156B (en) Image identification method based on Gabor convolution neural network
CN114332545B (en) Image data classification method and device based on low-bit pulse neural network
CN109726746A (en) A kind of method and device of template matching
CN108961308A (en) A kind of residual error depth characteristic method for tracking target of drift detection
CN110097029A (en) Identity identifying method based on Highway network multi-angle of view Gait Recognition
CN105893947A (en) Bi-visual-angle face identification method based on multi-local correlation characteristic learning
CN106447707A (en) Image real-time registration method and system
CN108985200A (en) A kind of In vivo detection algorithm of the non-formula based on terminal device
CN107992783A (en) Face image processing process and device
CN109492594A (en) Classroom participant's head-up rate detection method based on deep learning network
CN110084109A (en) A kind of low-resolution face image recognition methods, device, electronic equipment and storage medium
CN109815823A (en) Data processing method and Related product
CN112200263A (en) Self-organizing federal clustering method applied to power distribution internet of things
CN106650573B (en) A kind of face verification method and system across the age
CN113255701B (en) Small sample learning method and system based on absolute-relative learning framework
CN110390307A (en) Expression recognition method, Expression Recognition model training method and device
CN110288026A (en) A kind of image partition method and device practised based on metric relation graphics
CN110020597A (en) It is a kind of for the auxiliary eye method for processing video frequency examined of dizziness/dizziness and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190802

RJ01 Rejection of invention patent application after publication