CN107958247A - Method and apparatus for facial image recognition - Google Patents
Method and apparatus for facial image recognition
- Publication number
- CN107958247A (application CN201810045569.8A)
- Authority
- CN
- China
- Prior art keywords
- facial image
- sample
- face
- face feature
- binarization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
Embodiments of the present application disclose a method and apparatus for facial image recognition. One embodiment of the method includes: acquiring a facial image to be recognized; importing the facial image into a pre-established face feature generation model to obtain a binarized face feature of the image, where the face feature generation model characterizes the correspondence between facial images and binarized face features; performing similarity measurement between that binarized face feature and the binarized face feature of each facial image in a pre-established facial image set, where each facial image in the set is annotated with the identity of the face object it depicts; and determining, according to the similarity measurement results, the identity of the face object contained in the facial image to be recognized. This embodiment improves the efficiency of facial image recognition.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, in particular to the field of image processing technology, and more particularly to a method and apparatus for facial image recognition.
Background technology
Face recognition is a biometric technology that performs identification based on a person's facial feature information. At present, recognition may be performed by first extracting an image feature from the facial image to be recognized and then comparing it against the image feature of each known-identity facial image in a face database, with the identity of the face contained in the image determined from the comparison results. Although this approach can achieve face recognition, as the number of facial images in the face database grows, the number of stored image features grows with it, and the comparison stage suffers from problems such as high memory consumption and long comparison times.
Summary
The embodiments of the present application propose a method and apparatus for facial image recognition.
In a first aspect, an embodiment of the present application provides a method for facial image recognition, including: acquiring a facial image to be recognized; importing the facial image into a pre-established face feature generation model to obtain a binarized face feature of the image, where the face feature generation model characterizes the correspondence between facial images and binarized face features; performing similarity measurement between that binarized face feature and the binarized face feature of each facial image in a pre-established facial image set, where each facial image in the set is annotated with the identity of the face object it depicts; and determining, according to the similarity measurement results, the identity of the face object contained in the facial image to be recognized.
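The first-aspect flow — generate a binary code for the query image, then measure similarity against the annotated gallery codes — can be sketched as follows. This is a minimal illustration under stated assumptions: the trained feature generation model is replaced by precomputed codes, and all names and thresholds are hypothetical, not from the patent.

```python
from typing import Dict, Optional

def hamming_distance(a: str, b: str) -> int:
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def identify(query_code: str, gallery: Dict[str, str], max_dist: int = 2) -> Optional[str]:
    """Return the identity whose stored binarized face feature is closest
    to the query code, or None if no entry is within max_dist bits."""
    best_id, best_dist = None, max_dist + 1
    for identity, code in gallery.items():
        d = hamming_distance(query_code, code)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id

# Gallery of identity -> binarized face feature (toy 8-bit codes):
gallery = {"person_a": "10110010", "person_b": "01101101"}
print(identify("10110011", gallery))  # one bit from person_a -> "person_a"
```

The gallery codes would in practice be produced once by the face feature generation model and stored, so recognition only ever touches short binary strings.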
In certain embodiments, the method further includes a step of training the face feature generation model, including: acquiring a first sample facial image, a second sample facial image, and a third sample facial image, where the face object depicted in the first sample facial image is the same as that depicted in the second, and the face object depicted in the second sample facial image is different from that depicted in the third; and using a machine learning method, training the face feature generation model on the first, second, and third sample facial images based on a pre-built loss function, where the loss function is derived from the feature vectors of the three sample facial images.
In certain embodiments, the face feature generation model includes a feature extraction network and a binarization coding layer, where the feature extraction network is a convolutional neural network; and the training step includes: inputting the first, second, and third sample facial images into an initial convolutional neural network to obtain their respective feature vectors; inputting those feature vectors into the binarization coding layer to obtain the binarized face features of the three sample facial images, where the binarization coding layer converts a feature vector into a fixed-length binary code based on a hash algorithm; and updating the network parameters of the initial convolutional neural network based on the three feature vectors and the loss function, to obtain the face feature generation model.
In certain embodiments, the loss function is a weighted sum of a first loss function and a second loss function. The first loss function is the difference between a first distance and a second distance, where the first distance is the distance between the feature vectors of the first and second sample facial images, and the second distance is the distance between the feature vectors of the first and third sample facial images. The second loss function is derived from the per-component correlations of the feature vectors of the first, second, and third sample facial images.
In certain embodiments, importing the facial image to be recognized into the pre-established face feature generation model to obtain its binarized face feature includes: inputting the facial image into the feature extraction network to obtain its feature vector; and inputting that feature vector into the binarization coding layer for binarization coding, obtaining the binarized face feature of the facial image to be recognized.
In certain embodiments, the similarity measurement between the binarized face feature and those of the facial images in the pre-established facial image set includes, for each facial image in the set: importing the facial image into the face feature generation model to obtain its binarized face feature; and computing the Hamming distance between the binarized face feature of the facial image to be recognized and that of the facial image.
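Because the binarized features are fixed-length codes, the Hamming distance step reduces to an XOR and a bit count when codes are stored as integers — a sketch of why this comparison is cheap relative to comparing real-valued feature vectors:

```python
def hamming(a: int, b: int) -> int:
    # XOR leaves a 1 exactly where the two codes differ;
    # counting those set bits gives the Hamming distance.
    return bin(a ^ b).count("1")

# Two 8-bit codes differing in two positions:
print(hamming(0b10110010, 0b10010011))  # -> 2
```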
In a second aspect, an embodiment of the present application provides an apparatus for facial image recognition, including: an acquisition unit for acquiring a facial image to be recognized; an importing unit for importing the facial image into a pre-established face feature generation model to obtain its binarized face feature, where the face feature generation model characterizes the correspondence between facial images and binarized face features; a computing unit for performing similarity measurement between that binarized face feature and the binarized face feature of each facial image in a pre-established facial image set, where each facial image in the set is annotated with the identity of the face object it depicts; and a determination unit for determining, according to the similarity measurement results, the identity of the face object contained in the facial image to be recognized.
In certain embodiments, the apparatus further includes a model training unit comprising: a sample acquisition subunit for acquiring a first, a second, and a third sample facial image, where the face object depicted in the first sample facial image is the same as that depicted in the second, and the face object depicted in the second sample facial image is different from that depicted in the third; and a model training subunit for training, using a machine learning method, the face feature generation model on the three sample facial images based on a pre-built loss function, where the loss function is derived from the feature vectors of the three sample facial images.
In certain embodiments, the face feature generation model includes a feature extraction network and a binarization coding layer, the feature extraction network being a convolutional neural network; and the model training subunit is further configured to: input the first, second, and third sample facial images into an initial convolutional neural network to obtain their respective feature vectors; input those feature vectors into the binarization coding layer to obtain the binarized face features of the three sample facial images, the binarization coding layer converting a feature vector into a fixed-length binary code based on a hash algorithm; and update the network parameters of the initial convolutional neural network based on the three feature vectors and the loss function, obtaining the face feature generation model.
In certain embodiments, the loss function is a weighted sum of a first loss function and a second loss function. The first loss function is the difference between a first distance and a second distance, where the first distance is the distance between the feature vectors of the first and second sample facial images, and the second distance is the distance between the feature vectors of the first and third sample facial images. The second loss function is derived from the per-component correlations of the feature vectors of the first, second, and third sample facial images.
In certain embodiments, the importing unit is further configured to: input the facial image to be recognized into the feature extraction network to obtain its feature vector; and input that feature vector into the binarization coding layer for binarization coding, obtaining the binarized face feature of the facial image to be recognized.
In certain embodiments, the computing unit is further configured to, for each facial image in the facial image set: import the facial image into the face feature generation model to obtain its binarized face feature; and compute the Hamming distance between the binarized face feature of the facial image to be recognized and that of the facial image.
In a third aspect, an embodiment of the present application provides an electronic device including: one or more processors; and a storage apparatus storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for facial image recognition provided by the embodiments of the present application first import the acquired facial image to be recognized into a pre-established face feature generation model to obtain its binarized face feature, then perform similarity measurement between that feature and the binarized feature of each facial image in a pre-established facial image set, and finally determine the identity of the face object contained in the facial image according to the similarity measurement results. Measuring similarity on the binarized features of the query image and of the gallery images, rather than on raw image features, reduces computational complexity and improves the efficiency of facial image recognition.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of a method for facial image recognition according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for facial image recognition according to the present application;
Fig. 4 is a structural schematic diagram of one embodiment of an apparatus for facial image recognition according to the present application;
Fig. 5 is a structural schematic diagram of a computer system suitable for implementing a terminal device of embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method or apparatus for facial image recognition of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages. Various client applications may be installed on the terminal devices 101, 102, 103, such as web browser applications, image processing applications, search applications, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support facial image recognition, including but not limited to smartphones, tablet computers, laptop portable computers, and desktop computers.
The server 105 may be a server providing various services, for example a background server that supports the facial image recognition results displayed on the terminal devices 101, 102, 103. The background server may perform processing such as face recognition on an acquired facial image to be recognized and feed the processing results back to the terminal devices.
It should be noted that the method for facial image recognition provided by the embodiments of the present application may be performed by the terminal devices 101, 102, 103, or by the server 105. Correspondingly, the apparatus for facial image recognition may be arranged in the terminal devices 101, 102, 103, or in the server 105. The present application does not limit this.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of a method for facial image recognition according to the present application is shown. The method comprises the following steps:
Step 201: acquire a facial image to be recognized.
In this embodiment, the electronic device on which the method runs (for example, the terminal devices 101, 102, 103 shown in Fig. 1) may acquire the facial image to be recognized locally or from another electronic device used to store facial images to be recognized.
Step 202: import the facial image to be recognized into a pre-established face feature generation model to obtain its binarized face feature.
In this embodiment, based on the facial image acquired in step 201, the electronic device may import it into the pre-established face feature generation model, thereby obtaining the binarized face feature of the facial image to be recognized. As an example, the binarized face feature may be a fixed-length binary value. Here, the face feature generation model may be used to characterize the correspondence between facial images and binarized face features. As one example, the model may be a correspondence table, pre-compiled by technicians based on statistics over a large number of facial images and binarized face features, that stores the correspondences between multiple facial images and their binarized face features. In that case, the electronic device may compare the facial image to be recognized with the facial images in the table in turn; if a facial image in the table is identical or similar to the image to be recognized, the binarized face feature associated with that facial image in the table is taken as the binarized face feature of the facial image to be recognized.
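The correspondence-table variant above amounts to a lookup keyed by (near-)matching images. A toy sketch, with stand-in string keys in place of actual facial images and an exact-match rule in place of the "identical or similar" comparison (both assumptions for illustration):

```python
# Correspondence table: stand-in image keys -> binarized face features.
table = {
    "face_image_1": "1011001010110010",
    "face_image_2": "0110110101101101",
}

def lookup(image_key: str) -> str:
    # Compare the query against each stored entry in turn; on a match,
    # return the associated binarized face feature.
    for stored_key, code in table.items():
        if stored_key == image_key:
            return code
    raise KeyError(image_key)

print(lookup("face_image_2"))  # -> "0110110101101101"
```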
In some optional implementations of this embodiment, the method may further include a step of training the face feature generation model, which may specifically include the following. The electronic device, or another electronic device used for training the model, may first acquire a first, a second, and a third sample facial image, where the face object depicted in the first sample facial image is the same as that depicted in the second, and the face object depicted in the second is different from that depicted in the third. Afterwards, a machine learning method may be used to train the face feature generation model on the three sample facial images based on a pre-built loss function, where the loss function may be derived from the feature vectors of the three sample facial images. As an example, the face feature generation model may be a neural network model, and the specific training process may include the following. First, an initial neural network model for generating binarized face features from facial images is built; its network parameters may be randomly generated. It should be understood that the initial model may have as many layers as actually needed, and each layer may be a convolutional layer, a pooling layer, an activation layer, a fully connected layer, and so on; the present application does not limit this. Afterwards, the first, second, and third sample facial images are each imported into the initial neural network model to obtain their binarized face features, and it is judged whether the pre-built loss function satisfies a preset convergence condition (for example, whether it is below a preset threshold). Here, the loss function may be derived from the feature vectors of the three sample facial images, where a feature vector may be the binarized face feature finally output by the initial model, or the output of a certain layer (for example, the last fully connected layer) of the model. For example, the loss function may be the difference between a first Euclidean distance and a second Euclidean distance, the first being the Euclidean distance between the feature vectors of the first and second sample facial images, and the second being the Euclidean distance between the feature vectors of the first and third sample facial images. Then, if it is determined that the loss function does not satisfy the convergence condition, the network parameters of the initial neural network model are updated by gradient descent based on the loss function; if it is determined that the loss function satisfies the convergence condition, the initial neural network model is determined to be the trained face feature generation model. It should be noted that the training process above merely illustrates the adjustment of a neural network model's parameters: the initial neural network model may be regarded as the network before parameter adjustment, the adjustment is not limited to a single pass, and it may be repeated according to the degree of optimization of the network and actual needs.
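The evaluate-loss / check-convergence / gradient-descent cycle just described can be skeletonized as follows. This is a sketch under stated assumptions: the quadratic toy objective stands in for the real loss over sample feature vectors, and the parameter dictionary stands in for actual network weights.

```python
def train(params, loss_and_grad, lr=0.1, threshold=1e-6, max_iters=10_000):
    """Repeat: compute loss; stop once the convergence condition
    (loss below a preset threshold) holds; otherwise take a
    gradient-descent step on every parameter."""
    for _ in range(max_iters):
        loss, grad = loss_and_grad(params)
        if loss < threshold:  # preset convergence condition
            break
        params = {k: v - lr * grad[k] for k, v in params.items()}
    return params

# Toy objective (w - 3)^2 with gradient 2(w - 3); minimum at w = 3.
toy = lambda p: ((p["w"] - 3.0) ** 2, {"w": 2.0 * (p["w"] - 3.0)})
print(round(train({"w": 0.0}, toy)["w"], 2))  # converges near 3.0
```

The real procedure would compute the triplet-based loss from the three sample images' feature vectors at each step, but the control flow is the same.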
In some optional implementations, above-mentioned face characteristic generation model can include feature extraction network and two-value
Change coding layer, wherein, features described above extraction network can be convolutional neural networks;And using machine learning method, utilize
The same this facial image, the second sample facial image and the 3rd sample facial image, based on the loss function training built in advance
Face characteristic generation model is obtained, can be specifically included:It is possible, firstly, to by above-mentioned first sample facial image, the second sample people
Face image and the 3rd sample facial image input initial convolutional neural networks and obtain above-mentioned first sample facial image, second respectively
The feature vector of sample facial image and the 3rd sample facial image;Afterwards, can be by above-mentioned first sample facial image, second
The feature vectors of the first sample facial image, the second sample facial image and the third sample facial image may then be separately input into the above-mentioned binarization coding layer, to obtain the binarized face features of the first sample facial image, the second sample facial image and the third sample facial image, wherein the binarization coding layer may be used to convert a feature vector into a binary code of fixed length based on a hash algorithm. Finally, the network parameters of the above-mentioned initial convolutional neural network may be updated based on the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image and on the loss function, to obtain the above-mentioned face feature generation model. Here, the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image may refer to the feature vectors output by the initial convolutional neural network. As an example, during training it may be determined whether the loss function constructed in advance satisfies a preset convergence condition (for example, whether it is less than a preset threshold). If it is determined that the loss function does not satisfy the convergence condition, the network parameters of the initial convolutional neural network are updated by gradient descent based on the loss function; if it is determined that the loss function satisfies the convergence condition, the initial convolutional neural network is taken as the trained feature extraction network. Here, the initial convolutional neural network may be obtained in various ways, for example, by randomly generating network parameters based on an existing convolutional neural network. It should be noted that the above merely illustrates the adjustment process of the network parameters of the feature extraction network: the initial convolutional neural network may be regarded as the network before parameter adjustment and the feature extraction network as the network after parameter adjustment, and the adjustment of the network parameters is not limited to a single pass but may be repeated according to the degree of optimization of the network, actual needs, and the like.
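The convergence check and gradient-descent update described above can be sketched in a few lines. The following is a minimal illustration using a hypothetical single-parameter model and a toy quadratic loss, not the actual convolutional network of the embodiment:

```python
def train_until_converged(param, loss_fn, grad_fn, lr=0.1, threshold=1e-4, max_steps=1000):
    """Update `param` by gradient descent until the loss falls below `threshold`."""
    for _ in range(max_steps):
        loss = loss_fn(param)
        if loss < threshold:        # preset convergence condition is satisfied
            break                   # training of the feature extraction network is complete
        param = param - lr * grad_fn(param)  # gradient-descent parameter update
    return param

# Toy example: minimize (p - 3)^2, whose gradient is 2 * (p - 3).
final = train_until_converged(0.0, lambda p: (p - 3.0) ** 2, lambda p: 2.0 * (p - 3.0))
```

In the embodiment, `param` would stand for the full set of network parameters of the initial convolutional neural network, and `loss_fn` for the loss function constructed in advance.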
Optionally, the above-mentioned loss function may be a weighted sum of a first loss function and a second loss function, wherein the first loss function is the difference between a first distance and a second distance, the first distance being the distance (for example, a Euclidean distance or a cosine distance) between the feature vector of the first sample facial image and the feature vector of the second sample facial image, and the second distance being the distance (for example, a Euclidean distance or a cosine distance) between the feature vector of the first sample facial image and the feature vector of the third sample facial image. The second loss function is obtained based on the correlations among the components of the feature vector of the first sample facial image, the correlations among the components of the feature vector of the second sample facial image, and the correlations among the components of the feature vector of the third sample facial image. Here, the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image may refer to the feature vectors output by the above-mentioned initial convolutional neural network. As an example, suppose the loss function is L, and the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image each comprise N+1 components; then

L = L1 + λ·L2,
L1 = ||p_i - p_j||2 - ||p_i - p_k||2,

where L1 denotes the first loss function, L2 denotes the second loss function, λ denotes the weight, p_i denotes the feature vector of the first sample facial image, p_j denotes the feature vector of the second sample facial image, and p_k denotes the feature vector of the third sample facial image. ||p_i - p_j||2 denotes the 2-norm of p_i - p_j, used to represent the Euclidean distance between p_i and p_j, and ||p_i - p_k||2 denotes the 2-norm of p_i - p_k, used to represent the Euclidean distance between p_i and p_k. q_m1 denotes the m1-th component of the feature vector of the first sample facial image and q_n1 denotes the n1-th component of that feature vector, where m1 ≠ n1; q_m2 denotes the m2-th component of the feature vector of the second sample facial image and q_n2 denotes the n2-th component of that feature vector, where m2 ≠ n2; q_m3 denotes the m3-th component of the feature vector of the third sample facial image and q_n3 denotes the n3-th component of that feature vector, where m3 ≠ n3. Here, λ may be set according to actual needs.
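A minimal plain-Python sketch of this triplet-style loss follows. Note that the text defines L2 only as being derived from component-wise correlations without giving an explicit formula, so `second_loss` below uses an assumed placeholder (the mean absolute product over distinct component pairs of each feature vector) purely for illustration, not the patent's actual L2:

```python
def euclidean(p, q):
    """Euclidean distance (2-norm of the difference) between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def first_loss(p_i, p_j, p_k):
    # L1 = ||p_i - p_j||2 - ||p_i - p_k||2: pulls the same-identity pair together
    # while pushing the different-identity pair apart.
    return euclidean(p_i, p_j) - euclidean(p_i, p_k)

def second_loss(*vectors):
    # Assumed decorrelation-style penalty over distinct component pairs (m != n);
    # the exact form of L2 is not reproduced in the text.
    total, count = 0.0, 0
    for v in vectors:
        for m in range(len(v)):
            for n in range(len(v)):
                if m != n:
                    total += abs(v[m] * v[n])
                    count += 1
    return total / max(count, 1)

def total_loss(p_i, p_j, p_k, lam=0.1):
    # L = L1 + lambda * L2, with lambda set according to actual needs.
    return first_loss(p_i, p_j, p_k) + lam * second_loss(p_i, p_j, p_k)
```

When the anchor is close to its positive sample and far from its negative sample, L1 is negative, which drives the overall loss down.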
Optionally, the above-mentioned step 202 may specifically include: first, the above-mentioned electronic device may input the facial image to be identified into the above-mentioned feature extraction network to obtain the feature vector of the facial image to be identified; afterwards, the electronic device may input the feature vector of the facial image to be identified into the above-mentioned binarization coding layer for binarization coding, to obtain the binarized face feature of the facial image to be identified.
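One simple way to realize such a binarization coding layer is sign thresholding, a common hashing scheme. This is an illustrative assumption: the text only states that the layer converts a feature vector into a fixed-length binary code based on a hash algorithm, without naming a particular one.

```python
def binarize(feature_vector, threshold=0.0):
    """Convert a real-valued feature vector into a fixed-length binary code.

    Each component above `threshold` maps to 1 and the rest to 0, so the code
    length always equals the feature-vector length.
    """
    return [1 if x > threshold else 0 for x in feature_vector]

code = binarize([0.7, -1.2, 0.05, -0.3])
```

Because the code length is fixed by the feature dimension, every facial image in the set yields a comparable binary code.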
Step 203: perform a similarity calculation between the binarized face feature and the binarized face feature of each facial image in a pre-established facial image set.

In the present embodiment, the above-mentioned electronic device may perform a similarity calculation between the binarized face feature of the facial image to be identified obtained in step 202 and the binarized face feature of each facial image in the pre-established facial image set. As an example, the electronic device may perform the similarity calculation by computing the distance (for example, a cosine distance, a Euclidean distance or a Jaccard distance) between the binarized face feature of the facial image to be identified and the binarized face feature of each facial image in the facial image set. Here, each facial image in the facial image set is annotated with the identity of the face object it shows, for example, the name of the face object, an identity card number, or another identifier.
In some optional implementations of the present embodiment, the above-mentioned step 203 may specifically include, for each facial image in the facial image set, performing the following steps: first, the electronic device may import the facial image into the above-mentioned face feature generation model to obtain the binarized face feature of the facial image; then, the electronic device may calculate the Hamming distance between the binarized face feature of the facial image to be identified and the binarized face feature of the facial image. The Hamming distance may be used to characterize the similarity between the facial image to be identified and the facial image: the smaller the distance, the more similar the two images.
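On fixed-length binary codes the Hamming distance is simply the number of differing bit positions, which is why matching binarized features is cheap compared with real-valued distances; a minimal sketch:

```python
def hamming_distance(code_a, code_b):
    """Number of positions at which two equal-length binary codes differ."""
    if len(code_a) != len(code_b):
        raise ValueError("binary codes must have the same length")
    return sum(a != b for a, b in zip(code_a, code_b))

d = hamming_distance([1, 0, 1, 1], [1, 1, 1, 0])  # codes differ at positions 1 and 3
```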
Step 204: determine, according to the result of the similarity calculation, the identity of the face object contained in the facial image to be identified.

In the present embodiment, the above-mentioned electronic device may determine, according to the similarity calculation result of step 203, the identity of the face object contained in the facial image to be identified. As an example, the electronic device may, according to the similarity calculation result, take the identity annotated on the facial image in the facial image set that has the highest similarity to the facial image to be identified as the identity of the facial image to be identified.
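Combining the preceding steps, identification reduces to a nearest-neighbor search over the binary codes of the facial image set. In the following sketch the gallery contents and the inline `hamming` helper are illustrative, not taken from the text:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes."""
    return sum(x != y for x, y in zip(a, b))

def identify(query_code, gallery):
    """Return the annotated identity whose binary code is closest to `query_code`.

    `gallery` maps identity labels to binarized face features; the smallest
    Hamming distance corresponds to the highest similarity.
    """
    return min(gallery, key=lambda identity: hamming(query_code, gallery[identity]))

gallery = {"alice": [1, 0, 1, 0], "bob": [0, 1, 1, 1]}
who = identify([1, 0, 1, 1], gallery)  # one differing bit from alice, two from bob
```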
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for facial image recognition according to the present embodiment. In the application scenario of Fig. 3, the terminal device first obtains a facial image 301 to be identified; afterwards, the facial image 301 to be identified may be imported into the pre-established face feature generation model 302 to obtain the binarized face feature of the facial image 301 to be identified; then, a similarity calculation may be performed between the binarized face feature of the facial image 301 to be identified and the binarized face feature of each facial image in the pre-established facial image set, wherein each facial image in the facial image set is annotated with the identity of the face object it shows; finally, the identity of the face object contained in the facial image 301 to be identified is determined according to the result of the similarity calculation.

The method provided by the above embodiment of the application performs the similarity calculation using the binarized face feature of the facial image to be identified and the binarized feature of each facial image in the facial image set, thereby reducing computational complexity and improving the efficiency of facial image recognition.
With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the application provides an embodiment of an apparatus for facial image recognition. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.

As shown in Fig. 4, the apparatus 400 for facial image recognition of the present embodiment includes an acquiring unit 401, an importing unit 402, a computing unit 403 and a determining unit 404. The acquiring unit 401 is configured to obtain a facial image to be identified; the importing unit 402 is configured to import the facial image to be identified into a pre-established face feature generation model to obtain the binarized face feature of the facial image to be identified, wherein the face feature generation model is used to characterize the correspondence between facial images and binarized face features; the computing unit 403 is configured to perform a similarity calculation between the binarized face feature and the binarized face feature of each facial image in a pre-established facial image set, wherein each facial image in the facial image set is annotated with the identity of the face object it shows; and the determining unit 404 is configured to determine, according to the result of the similarity calculation, the identity of the face object contained in the facial image to be identified.

In the present embodiment, for the specific processing of the acquiring unit 401, the importing unit 402, the computing unit 403 and the determining unit 404 of the apparatus 400 for facial image recognition, and the technical effects thereof, reference may be made to the descriptions of step 201, step 202, step 203 and step 204 in the embodiment corresponding to Fig. 2, respectively; details are not repeated here.
In some optional implementations of the present embodiment, the apparatus 400 may further include a model training unit (not shown in the figure). The model training unit may include: a sample acquiring unit (not shown), configured to obtain a first sample facial image, a second sample facial image and a third sample facial image, wherein the face object indicated by the first sample facial image is the same as the face object indicated by the second sample facial image, and the face object indicated by the second sample facial image is different from the face object indicated by the third sample facial image; and a model training subunit (not shown), configured to train, using a machine learning method and using the first sample facial image, the second sample facial image and the third sample facial image, the face feature generation model based on a loss function constructed in advance, wherein the loss function is obtained based on the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image.
In some optional implementations of the present embodiment, the face feature generation model may include a feature extraction network and a binarization coding layer, wherein the feature extraction network is a convolutional neural network; and the model training subunit is further configured to: input the first sample facial image, the second sample facial image and the third sample facial image respectively into an initial convolutional neural network to obtain the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image; separately input the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image into the binarization coding layer to obtain the binarized face features of the first sample facial image, the second sample facial image and the third sample facial image, wherein the binarization coding layer is used to convert a feature vector into a binary code of fixed length based on a hash algorithm; and update, based on the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image and on the loss function, the network parameters of the initial convolutional neural network to obtain the face feature generation model.
In some optional implementations of the present embodiment, the loss function may be a weighted sum of a first loss function and a second loss function, wherein the first loss function may be the difference between a first distance and a second distance, the first distance may be the distance between the feature vector of the first sample facial image and the feature vector of the second sample facial image, the second distance may be the distance between the feature vector of the first sample facial image and the feature vector of the third sample facial image, and the second loss function may be obtained based on the correlations among the components of the feature vector of the first sample facial image, the correlations among the components of the feature vector of the second sample facial image, and the correlations among the components of the feature vector of the third sample facial image.
In some optional implementations of the present embodiment, the importing unit may be further configured to: input the facial image to be identified into the feature extraction network to obtain the feature vector of the facial image to be identified; and input the feature vector of the facial image to be identified into the binarization coding layer for binarization coding, to obtain the binarized face feature of the facial image to be identified.

In some optional implementations of the present embodiment, the computing unit may be further configured to, for each facial image in the facial image set, perform the following steps: import the facial image into the face feature generation model to obtain the binarized face feature of the facial image; and calculate the Hamming distance between the binarized face feature of the facial image to be identified and the binarized face feature of the facial image.
Referring now to Fig. 5, it shows a structural schematic diagram of a computer system 500 of a terminal device suitable for implementing the embodiments of the application. The terminal device shown in Fig. 5 is only an example and should not impose any limitation on the functions or the scope of use of the embodiments of the application.

As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, the ROM 502 and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse and the like; an output portion 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker and the like; a storage portion 508 including a hard disk and the like; and a communications portion 509 including a network interface card such as a local area network (LAN) card or a modem. The communications portion 509 performs communication processes via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom is installed into the storage portion 508 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communications portion 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-mentioned functions defined in the method of the application are performed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus or device. In the application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless means, a wire, an optical cable, RF, or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the application may be implemented by software or by hardware. The described units may also be arranged in a processor; for example, a processor may be described as including an acquiring unit, an importing unit, a computing unit and a determining unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for obtaining a facial image to be identified".
As another aspect, the application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: obtain a facial image to be identified; import the facial image to be identified into a pre-established face feature generation model to obtain the binarized face feature of the facial image to be identified, wherein the face feature generation model is used to characterize the correspondence between facial images and binarized face features; perform a similarity calculation between the binarized face feature and the binarized face feature of each facial image in a pre-established facial image set, wherein each facial image in the facial image set is annotated with the identity of the face object it shows; and determine, according to the result of the similarity calculation, the identity of the face object contained in the facial image to be identified.
The above description is only a preferred embodiment of the application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the application is not limited to technical solutions formed by the particular combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the foregoing inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the application.
Claims (14)
1. A method for facial image recognition, comprising:
obtaining a facial image to be identified;
importing the facial image to be identified into a pre-established face feature generation model to obtain a binarized face feature of the facial image to be identified, wherein the face feature generation model is used to characterize the correspondence between facial images and binarized face features;
performing a similarity calculation between the binarized face feature and the binarized face feature of each facial image in a pre-established facial image set, wherein each facial image in the facial image set is annotated with the identity of the face object it shows; and
determining, according to the result of the similarity calculation, the identity of the face object contained in the facial image to be identified.
2. The method according to claim 1, wherein the method further comprises a step of training the face feature generation model, including:
obtaining a first sample facial image, a second sample facial image and a third sample facial image, wherein the face object indicated by the first sample facial image is the same as the face object indicated by the second sample facial image, and the face object indicated by the second sample facial image is different from the face object indicated by the third sample facial image; and
training, using a machine learning method and using the first sample facial image, the second sample facial image and the third sample facial image, the face feature generation model based on a loss function constructed in advance, wherein the loss function is obtained based on the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image.
3. The method according to claim 2, wherein the face feature generation model comprises a feature extraction network and a binarization coding layer, the feature extraction network being a convolutional neural network; and
the training, using a machine learning method and using the first sample facial image, the second sample facial image and the third sample facial image, the face feature generation model based on a loss function constructed in advance comprises:
inputting the first sample facial image, the second sample facial image and the third sample facial image respectively into an initial convolutional neural network to obtain the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image;
separately inputting the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image into the binarization coding layer to obtain the binarized face features of the first sample facial image, the second sample facial image and the third sample facial image, wherein the binarization coding layer is used to convert a feature vector into a binary code of fixed length based on a hash algorithm; and
updating, based on the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image and on the loss function, the network parameters of the initial convolutional neural network to obtain the face feature generation model.
4. The method according to claim 3, wherein the loss function is a weighted sum of a first loss function and a second loss function, the first loss function being the difference between a first distance and a second distance, the first distance being the distance between the feature vector of the first sample facial image and the feature vector of the second sample facial image, the second distance being the distance between the feature vector of the first sample facial image and the feature vector of the third sample facial image, and the second loss function being obtained based on the correlations among the components of the feature vector of the first sample facial image, the correlations among the components of the feature vector of the second sample facial image, and the correlations among the components of the feature vector of the third sample facial image.
5. The method according to claim 4, wherein the importing the facial image to be identified into a pre-established face feature generation model to obtain a binarized face feature of the facial image to be identified comprises:
inputting the facial image to be identified into the feature extraction network to obtain the feature vector of the facial image to be identified; and
inputting the feature vector of the facial image to be identified into the binarization coding layer for binarization coding, to obtain the binarized face feature of the facial image to be identified.
6. The method according to claim 1, wherein the performing a similarity calculation between the binarized face feature and the binarized face feature of each facial image in a pre-established facial image set comprises:
for each facial image in the facial image set, performing the following steps:
importing the facial image into the face feature generation model to obtain the binarized face feature of the facial image; and
calculating the Hamming distance between the binarized face feature of the facial image to be identified and the binarized face feature of the facial image.
7. An apparatus for facial image recognition, comprising:
an acquiring unit, configured to obtain a facial image to be identified;
an importing unit, configured to import the facial image to be identified into a pre-established face feature generation model to obtain a binarized face feature of the facial image to be identified, wherein the face feature generation model is used to characterize the correspondence between facial images and binarized face features;
a computing unit, configured to perform a similarity calculation between the binarized face feature and the binarized face feature of each facial image in a pre-established facial image set, wherein each facial image in the facial image set is annotated with the identity of the face object it shows; and
a determining unit, configured to determine, according to the result of the similarity calculation, the identity of the face object contained in the facial image to be identified.
8. The apparatus according to claim 7, wherein the apparatus further comprises a model training unit, the model training unit comprising:
a sample acquiring unit, configured to obtain a first sample facial image, a second sample facial image and a third sample facial image, wherein the face object indicated by the first sample facial image is the same as the face object indicated by the second sample facial image, and the face object indicated by the second sample facial image is different from the face object indicated by the third sample facial image; and
a model training subunit, configured to train, using a machine learning method and using the first sample facial image, the second sample facial image and the third sample facial image, the face feature generation model based on a loss function constructed in advance, wherein the loss function is obtained based on the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image.
9. The apparatus according to claim 8, wherein the face feature generation model comprises a feature extraction network and a binarization coding layer, the feature extraction network being a convolutional neural network; and
the model training subunit is further configured to:
input the first sample facial image, the second sample facial image and the third sample facial image respectively into an initial convolutional neural network to obtain the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image;
separately input the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image into the binarization coding layer to obtain the binarized face features of the first sample facial image, the second sample facial image and the third sample facial image, wherein the binarization coding layer is used to convert a feature vector into a binary code of fixed length based on a hash algorithm; and
update, based on the feature vectors of the first sample facial image, the second sample facial image and the third sample facial image and on the loss function, the network parameters of the initial convolutional neural network to obtain the face feature generation model.
10. The device according to claim 9, wherein the loss function is a weighted sum of a first loss function and a second loss function; the first loss function is the difference between a first distance and a second distance, the first distance being the distance between the feature vector of the first sample facial image and the feature vector of the second sample facial image, and the second distance being the distance between the feature vector of the first sample facial image and the feature vector of the third sample facial image; and the second loss function is obtained based on the correlation of the feature vector of the first sample facial image on each component, the correlation of the feature vector of the second sample facial image on each component, and the correlation of the feature vector of the third sample facial image on each component.
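The first loss of claim 10 is a triplet-style distance difference and can be written down directly; the claim does not give a concrete formula for the per-component correlation term, so the outer-product off-diagonal penalty below is an assumed illustration (the names `combined_loss`, `w1`, and `w2` are hypothetical):

```python
import numpy as np

def combined_loss(anchor, positive, negative, w1=1.0, w2=0.1):
    """Weighted sum of the two losses described in claim 10 (a sketch).

    anchor/positive/negative correspond to the first/second/third sample
    facial image feature vectors.
    """
    anchor, positive, negative = (
        np.asarray(v, dtype=np.float64) for v in (anchor, positive, negative)
    )
    # First loss: distance(anchor, positive) - distance(anchor, negative);
    # minimizing it pulls the same-person pair together and pushes the
    # different-person pair apart.
    first = np.linalg.norm(anchor - positive) - np.linalg.norm(anchor - negative)

    def component_correlation(v):
        # Assumed proxy for "correlation on each component": mean absolute
        # off-diagonal entry of the vector's outer product.
        c = np.outer(v, v)
        return np.abs(c - np.diag(np.diag(c))).mean()

    # Second loss: sum of the per-vector correlation terms.
    second = sum(component_correlation(v) for v in (anchor, positive, negative))
    return w1 * first + w2 * second
```

A decorrelation term of this kind is commonly added when features are later binarized, so that the bits of the resulting code carry less redundant information.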
11. The device according to claim 10, wherein the import unit is further configured to:
input the facial image to be identified into the feature extraction network, to obtain the feature vector of the facial image to be identified; and
input the feature vector of the facial image to be identified into the binarization coding layer for binarization coding, to obtain the binarized face feature of the facial image to be identified.
12. The device according to claim 7, wherein the computing unit is further configured to perform, for each facial image in the facial image set, the following steps:
import the facial image into the face feature generation model, to obtain the binarized face feature of the facial image; and
calculate the Hamming distance between the binarized face feature of the facial image to be identified and the binarized face feature of the facial image.
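The Hamming distance in claim 12 simply counts the bit positions at which two equal-length binary codes differ; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Hamming distance between two equal-length binary codes.

    A smaller distance means the two binarized face features are more
    similar, so the closest match in the face image set minimizes it.
    """
    code_a = np.asarray(code_a, dtype=np.uint8)
    code_b = np.asarray(code_b, dtype=np.uint8)
    if code_a.shape != code_b.shape:
        raise ValueError("codes must have the same length")
    # Count positions where the bits disagree.
    return int(np.count_nonzero(code_a != code_b))

print(hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]))  # 2
```

Comparing fixed-length bit codes this way is much cheaper than comparing real-valued feature vectors, which is the usual motivation for binarizing features before retrieval.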
13. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810045569.8A CN107958247A (en) | 2018-01-17 | 2018-01-17 | Method and apparatus for facial image identification |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107958247A true CN107958247A (en) | 2018-04-24 |
Family
ID=61956317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810045569.8A Pending CN107958247A (en) | 2018-01-17 | 2018-01-17 | Method and apparatus for facial image identification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107958247A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101030244A (en) * | 2006-03-03 | 2007-09-05 | 中国科学院自动化研究所 | Automatic identity discriminating method based on human-body physiological image sequencing estimating characteristic |
CN101145261A (en) * | 2007-10-11 | 2008-03-19 | 中国科学院长春光学精密机械与物理研究所 | ATM system automatic recognition device |
CN105608450A (en) * | 2016-03-01 | 2016-05-25 | 天津中科智能识别产业技术研究院有限公司 | Heterogeneous face identification method based on deep convolutional neural network |
CN106203356A (en) * | 2016-07-12 | 2016-12-07 | 中国计量大学 | Face recognition method based on convolutional network feature extraction |
CN106682233A (en) * | 2017-01-16 | 2017-05-17 | 华侨大学 | Method for Hash image retrieval based on deep learning and local feature fusion |
CN106777349A (en) * | 2017-01-16 | 2017-05-31 | 广东工业大学 | Face retrieval system and method based on deep learning |
CN106897667A (en) * | 2017-01-17 | 2017-06-27 | 桂林电子科技大学 | Face retrieval method and system |
CN107918636A (en) * | 2017-09-07 | 2018-04-17 | 北京飞搜科技有限公司 | Fast face retrieval method and system |
Non-Patent Citations (3)
Title |
---|
HANJIANG LAI ET AL.: "Simultaneous Feature Learning and Hash Coding with Deep Neural Networks", 《2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
RENLIANG WENG ET AL.: "Learning Cascaded Deep Auto-Encoder Networks for Face Alignment", 《IEEE TRANSACTIONS ON MULTIMEDIA》 * |
ZHU, XIANG: "Internet Image Face Retrieval System Based on Deep Learning", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108447142A (en) * | 2018-04-24 | 2018-08-24 | 上德智能科技(武汉)有限公司 | Attendance processing method and device based on face recognition |
CN110516513A (en) * | 2018-05-22 | 2019-11-29 | 深圳云天励飞技术有限公司 | Face recognition method and device |
CN110516513B (en) * | 2018-05-22 | 2022-03-25 | 深圳云天励飞技术有限公司 | Face recognition method and device |
CN109685106A (en) * | 2018-11-19 | 2019-04-26 | 深圳博为教育科技有限公司 | Image recognition method, face attendance method, device and system |
CN111275060A (en) * | 2018-12-04 | 2020-06-12 | 北京嘀嘀无限科技发展有限公司 | Recognition model updating processing method and device, electronic equipment and storage medium |
CN111275060B (en) * | 2018-12-04 | 2023-12-08 | 北京嘀嘀无限科技发展有限公司 | Identification model updating processing method and device, electronic equipment and storage medium |
CN110070046A (en) * | 2019-04-23 | 2019-07-30 | 北京市商汤科技开发有限公司 | Facial image recognition method and device, electronic equipment and storage medium |
CN110070046B (en) * | 2019-04-23 | 2024-05-24 | 北京市商汤科技开发有限公司 | Face image recognition method and device, electronic equipment and storage medium |
CN111091080A (en) * | 2019-12-06 | 2020-05-01 | 贵州电网有限责任公司 | Face recognition method and system |
CN111353526A (en) * | 2020-02-19 | 2020-06-30 | 上海小萌科技有限公司 | Image matching method and device and related equipment |
CN113065495A (en) * | 2021-04-13 | 2021-07-02 | 深圳技术大学 | Image similarity calculation method, target object re-identification method and system |
CN113065495B (en) * | 2021-04-13 | 2023-07-14 | 深圳技术大学 | Image similarity calculation method, target object re-recognition method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107958247A (en) | Method and apparatus for facial image identification | |
CN108038469B (en) | Method and apparatus for detecting human body | |
CN108427939B (en) | Model generation method and device | |
CN107908789A (en) | Method and apparatus for generating information | |
CN107766940A (en) | Method and apparatus for generating a model | |
US20190102605A1 (en) | Method and apparatus for generating information |
CN109508681A (en) | Method and apparatus for generating a human body key point detection model | |
CN108898186A (en) | Method and apparatus for extracting images | |
CN107491534A (en) | Information processing method and device | |
CN108898185A (en) | Method and apparatus for generating an image recognition model | |
CN107644209A (en) | Face detection method and device | |
CN108052613A (en) | Method and apparatus for generating a page | |
CN108038880A (en) | Method and apparatus for processing images | |
CN107273503A (en) | Method and apparatus for generating parallel text in the same language | |
CN108229419A (en) | Method and apparatus for clustering images | |
CN107609506A (en) | Method and apparatus for generating images | |
CN108460365B (en) | Identity authentication method and device | |
CN107729928A (en) | Information acquisition method and device | |
CN108197652A (en) | Method and apparatus for generating information | |
CN107590255A (en) | Information pushing method and device | |
CN107578034A (en) | Information generation method and device | |
CN107910060A (en) | Method and apparatus for generating information | |
CN107731229A (en) | Method and apparatus for recognizing voice | |
CN108171191A (en) | Method and apparatus for detecting faces | |
CN108984399A (en) | Method for detecting interface differences, electronic device and computer-readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180424 |