CN105631403B - Face identification method and device - Google Patents

Face identification method and device

Info

Publication number
CN105631403B
CN105631403B · CN201510955509.6A · CN201510955509A
Authority
CN
China
Prior art keywords
face
group
feature
photo
neural networks
Prior art date
Application number
CN201510955509.6A
Other languages
Chinese (zh)
Other versions
CN105631403A (en
Inventor
张涛
张旭华
张胜凯
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司
Priority to CN201510955509.6A
Publication of CN105631403A
Application granted
Publication of CN105631403B


Abstract

The disclosure relates to a face identification method and device. The method includes: extracting a first group of features of a first face region in a photo through a trained first convolutional neural network, the first group of features characterizing the facial features in the photo; extracting a second group of features of a second face region in the photo through a trained second convolutional neural network, the second face region being determined from a second area where the face in the photo is located, the second group of features characterizing the clothing features in the photo; merging the first group of features and the second group of features, and performing dimension reduction on the merged feature combination to obtain a third group of features; and determining, according to the cosine distance between the third group of features and an extracted reference facial feature, whether the face in the photo and the face corresponding to the reference facial feature are the same face. The disclosed technical scheme can combine the clothing decoration around a user's face region with the user's facial features to perform face identification, greatly improving the accuracy of face identification.

Description

Face identification method and device

Technical field

This disclosure relates to the technical field of image recognition, and in particular to a face identification method and device.

Background technique

When classifying a user's photo album by face, it is first necessary to obtain all of the photos uploaded from the user's mobile phone, perform face detection on all of the photos to extract facial features, and measure the similarity between each extracted facial feature and the already-classified faces in turn; photos with similar features are placed into the same face album. When a user shoots multiple photos in quick succession, the faces in, say, 7 of the photos may be relatively clear, while the other 3 photos cannot yield facial features directly because of factors such as pose and illumination. In this case, face identification fails for those 3 photos, which lowers the face recognition rate.
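The album-classification loop described above can be sketched as follows. This is a minimal illustration with made-up two-dimensional features and an arbitrary threshold, not the patent's actual implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign_to_albums(features, threshold=0.8):
    """Greedily place each photo's feature into the first album whose
    representative feature is similar enough, else open a new album."""
    albums = []  # list of (representative_feature, [photo indices])
    for i, feat in enumerate(features):
        for rep, members in albums:
            if cosine_similarity(feat, rep) > threshold:
                members.append(i)
                break
        else:
            albums.append((feat, [i]))
    return albums

# Toy features: photos 0 and 1 depict the same face, photo 2 a different one.
feats = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
albums = assign_to_albums(feats)
```

A photo whose feature cannot be extracted never reaches this loop, which is the failure mode the disclosure sets out to reduce.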

Summary of the invention

To overcome the problems in the related art, the embodiments of the present disclosure provide a face identification method and device for improving the accuracy of face identification.

According to a first aspect of the embodiments of the present disclosure, a face identification method is provided, including:

extracting a first group of features of a first face region in a photo through a trained first convolutional neural network, the first group of features characterizing the facial features in the photo, the first face region being determined from a first area where the face in the photo is located;

extracting a second group of features of a second face region in the photo through a trained second convolutional neural network, the second face region being determined from a second area where the face in the photo is located, the second area being obtained by expanding the first area outward by a set pixel width, the second group of features characterizing the clothing features in the photo;

merging the first group of features and the second group of features, and performing dimension reduction on the merged feature combination to obtain a third group of features, the dimension of the third group of features being smaller than that of the merged feature combination; and

determining, according to the cosine distance between the third group of features and an extracted reference facial feature, whether the face in the photo and the face corresponding to the reference facial feature are the same face.

In one embodiment, the method may further include:

inputting a set quantity of a first group of labelled face samples into an untrained first convolutional neural network, and training at least one convolutional layer and at least one fully connected layer of the untrained first convolutional neural network;

when the optimal weight parameters of the connections between the nodes in the untrained first convolutional neural network are determined, obtaining the trained first convolutional neural network;

expanding the periphery of each of the set quantity of the first group of labelled face samples by a region of the set pixel width to obtain a second group of labelled face samples;

inputting the set quantity of the second group of labelled face samples into an untrained second convolutional neural network, and training at least one convolutional layer and at least one fully connected layer of the untrained second convolutional neural network; and

when the optimal weight parameters of the connections between the nodes in the untrained second convolutional neural network are determined, obtaining the trained second convolutional neural network.

In one embodiment, the method may further include:

extracting, through the trained first convolutional neural network, feature parameters of a first set length from the first group of labelled face samples;

extracting, through the trained second convolutional neural network, feature parameters of a second set length from the second group of labelled face samples; and

merging the feature parameters of the first set length and the feature parameters of the second set length, and performing linear discriminant analysis (LDA) training on the merged feature parameters to obtain a projection matrix of a third set length for the LDA.
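The LDA training step can be illustrated with a small NumPy sketch that builds the within- and between-class scatter matrices and keeps the leading eigenvectors as the projection matrix. The sample counts and dimensions here are toy values, not the patent's:

```python
import numpy as np

def lda_projection(X, y, out_dim):
    """Fit an LDA projection matrix of shape (n_features, out_dim)
    from labelled feature vectors X of shape (n_samples, n_features)."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n_feat = X.shape[1]
    Sw = np.zeros((n_feat, n_feat))  # within-class scatter
    Sb = np.zeros((n_feat, n_feat))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    # Eigenvectors of pinv(Sw) @ Sb, largest eigenvalues first.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs.real[:, order[:out_dim]]

rng = np.random.default_rng(0)
# Two users, 20 merged feature vectors each, 10-dimensional.
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(3, 1, (20, 10))])
y = np.array([0] * 20 + [1] * 20)
W = lda_projection(X, y, 1)
Z = X @ W  # reduced features
```

Applying `X @ W` reduces each merged feature vector to `out_dim` values, which is what makes the later cosine-distance computation cheaper.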

In one embodiment, determining, according to the cosine distance between the third group of features and the extracted reference facial feature, whether the face in the photo and the face corresponding to the reference facial feature are the same face may include:

comparing the cosine distance between the third group of features and the extracted reference facial feature with a preset threshold;

if the cosine distance is greater than the preset threshold, determining that the face in the photo and the face corresponding to the reference facial feature are the same face; and

if the cosine distance is less than or equal to the preset threshold, determining that the face in the photo and the face corresponding to the reference facial feature are different faces.

In one embodiment, the method may further include:

detecting feature points of the face in the photo;

determining, according to the feature points of the face, the first area where the face is located from the photo, and expanding the first area outward by the set pixel width to obtain the second area;

performing an affine transformation on the first area according to preset reference feature points to obtain the first face region, the size of the first face region being identical to the dimension of the input layer of the first convolutional neural network; and

performing an affine transformation on the second area according to the preset reference feature points to obtain the second face region, the size of the second face region being identical to the dimension of the input layer of the second convolutional neural network.
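The alignment step can be sketched as a least-squares affine fit from the detected feature points onto the preset reference points. The landmark coordinates below are invented for illustration; the patent does not specify which or how many feature points are used:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts onto dst_pts.
    Both arrays have shape (n, 2) with n >= 3 non-collinear landmarks."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])        # (n, 3) homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # (3, 2) transform
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

# Detected landmarks (e.g. eyes, nose tip) and canonical reference
# positions for a hypothetical 64x64 network input.
detected = np.array([[120.0, 80.0], [160.0, 82.0], [140.0, 110.0]])
reference = np.array([[20.0, 24.0], [44.0, 24.0], [32.0, 44.0]])
M = fit_affine(detected, reference)
aligned = apply_affine(M, detected)
```

The same transform would then be used to resample the pixels of the first or second area into a crop of the fixed input-layer size.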

According to a second aspect of the embodiments of the present disclosure, a face identification device is provided, including:

a first extraction module configured to extract a first group of features of a first face region in a photo through a trained first convolutional neural network, the first group of features characterizing the facial features in the photo, the first face region being determined from a first area where the face in the photo is located;

a second extraction module configured to extract a second group of features of a second face region in the photo through a trained second convolutional neural network, the second face region being determined from a second area where the face in the photo is located, the second area being obtained by expanding the first area outward by a set pixel width, the second group of features characterizing the clothing features in the photo;

a first processing module configured to merge the first group of features extracted by the first extraction module and the second group of features extracted by the second extraction module, and perform dimension reduction on the merged feature combination to obtain a third group of features, the dimension of the third group of features being smaller than that of the merged feature combination; and

a first determining module configured to determine, according to the cosine distance between the third group of features obtained by the first processing module and an extracted reference facial feature, whether the face in the photo and the face corresponding to the reference facial feature are the same face.

In one embodiment, the device may further include:

a first training module configured to input a set quantity of a first group of labelled face samples into an untrained first convolutional neural network, and train at least one convolutional layer and at least one fully connected layer of the untrained first convolutional neural network;

a second determining module configured to, when the optimal weight parameters of the connections between the nodes in the untrained first convolutional neural network are determined, determine that the trained first convolutional neural network is obtained through the first training module;

a region expansion module configured to expand the periphery of each of the set quantity of the first group of labelled face samples by a region of the set pixel width to obtain a second group of labelled face samples;

a second training module configured to input the set quantity of the second group of labelled face samples into an untrained second convolutional neural network, and train at least one convolutional layer and at least one fully connected layer of the untrained second convolutional neural network; and

a third determining module configured to, when the optimal weight parameters of the connections between the nodes in the untrained second convolutional neural network are determined, determine that the trained second convolutional neural network is obtained through the second training module.

In one embodiment, the device may further include:

a third extraction module configured to extract, through the trained first convolutional neural network determined by the second determining module, feature parameters of a first set length from the first group of labelled face samples;

a fourth extraction module configured to extract, through the trained second convolutional neural network, feature parameters of a second set length from the second group of labelled face samples; and

a second processing module configured to merge the feature parameters of the first set length extracted by the third extraction module and the feature parameters of the second set length extracted by the fourth extraction module, and perform linear discriminant analysis (LDA) training on the merged feature parameters to obtain a projection matrix of a third set length for the LDA.

In one embodiment, the first determining module may include:

a comparison submodule configured to compare the cosine distance between the third group of features obtained by the first processing module and the extracted reference facial feature with a preset threshold;

a first determining submodule configured to, if the comparison result of the comparison submodule indicates that the cosine distance is greater than the preset threshold, determine that the face in the photo and the face corresponding to the reference facial feature are the same face; and

a second determining submodule configured to, if the comparison result of the comparison submodule indicates that the cosine distance is less than or equal to the preset threshold, determine that the face in the photo and the face corresponding to the reference facial feature are different faces.

In one embodiment, the device may further include:

a detection module configured to detect feature points of the face in the photo;

a fourth determining module configured to determine, according to the feature points of the face detected by the detection module, the first area where the face is located from the photo, and expand the first area outward by the set pixel width to obtain the second area;

a first transformation module configured to perform an affine transformation, according to preset reference feature points, on the first area determined by the fourth determining module to obtain the first face region, the size of the first face region being identical to the dimension of the input layer of the first convolutional neural network; and

a second transformation module configured to perform an affine transformation, according to the preset reference feature points, on the second area determined by the fourth determining module to obtain the second face region, the size of the second face region being identical to the dimension of the input layer of the second convolutional neural network.

According to a third aspect of the embodiments of the present disclosure, a face identification device is provided, including:

a processor; and

a memory for storing instructions executable by the processor;

wherein the processor is configured to:

extract a first group of features of a first face region in a photo through a trained first convolutional neural network, the first group of features characterizing the facial features in the photo, the first face region being determined from a first area where the face in the photo is located;

extract a second group of features of a second face region in the photo through a trained second convolutional neural network, the second face region being determined from a second area where the face in the photo is located, the second area being obtained by expanding the first area outward by a set pixel width, the second group of features characterizing the clothing features in the photo;

merge the first group of features and the second group of features, and perform dimension reduction on the merged feature combination to obtain a third group of features, the dimension of the third group of features being smaller than that of the merged feature combination; and

determine, according to the cosine distance between the third group of features and an extracted reference facial feature, whether the face in the photo and the face corresponding to the reference facial feature are the same face.

The technical schemes provided by the embodiments of the present disclosure may include the following beneficial effects:

Since the first CNN and the second CNN are obtained by training on a massive number of labelled face samples, the first group of features extracted by the first CNN characterizes the facial features in the photo, and the second group of features extracted by the second CNN characterizes the clothing features in the photo, the disclosure fuses the facial features with information outside the face, such as clothing decoration. The clothing decoration around a user's face region can thus be combined with the user's facial features to perform face identification, greatly improving the accuracy of face identification; and the dimension reduction that yields the third group of features greatly reduces the computational complexity of the face identification process.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.

Detailed description of the invention

The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention.

Fig. 1A is a flowchart of a face identification method according to an exemplary embodiment.

Fig. 1B is a scene diagram of a face identification method according to an exemplary embodiment.

Fig. 2A is a schematic diagram of how a convolutional neural network is trained according to exemplary embodiment one.

Fig. 2B is a structural schematic diagram of a convolutional neural network according to exemplary embodiment one.

Fig. 3 is a flowchart of a face identification method according to exemplary embodiment two.

Fig. 4 is a flowchart of how the first face region is determined according to exemplary embodiment three.

Fig. 5 is a block diagram of a face identification device according to an exemplary embodiment.

Fig. 6 is a block diagram of another face identification device according to an exemplary embodiment.

Fig. 7 is a block diagram of another face identification device according to an exemplary embodiment.

Fig. 8 is a block diagram of a device applicable to face identification according to an exemplary embodiment.

Specific embodiment

Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of devices and methods consistent with some aspects of the invention as detailed in the appended claims.

Fig. 1A is a flowchart of a face identification method according to an exemplary embodiment, and Fig. 1B is a scene diagram of a face identification method according to an exemplary embodiment. The face identification method may be applied on a server, and may also be applied on an electronic device (for example, a smartphone or tablet computer with a camera function). As shown in Fig. 1A, the face identification method includes the following steps S101–S104:

In step S101, a first group of features of a first face region in a photo is extracted through a trained first convolutional neural network, the first group of features characterizing the facial features in the photo, the first face region being determined from a first area where the face in the photo is located.

In one embodiment, the photo may be captured by a camera module on an electronic device and stored on the electronic device, with the electronic device performing face identification by executing the embodiments described in the disclosure; in another embodiment, the photo may be uploaded by the electronic device to a server and stored on the server, with the server performing face identification by executing the embodiments described in the disclosure. In one embodiment, the first convolutional neural network (CNN) may include at least one convolutional layer and at least one fully connected layer; the local image corresponding to the first face region in the photo is input to the input layer of the trained first convolutional neural network, the convolutional layers and fully connected layers of the trained first convolutional neural network extract from the first face region the first group of features representing the face, and the first group of features is obtained from the output layer of the trained first convolutional neural network.

In step S102, a second group of features of a second face region in the photo is extracted through a trained second convolutional neural network, the second face region being determined from a second area where the face in the photo is located, the second area being obtained by expanding the first area outward by a set pixel width, the second group of features characterizing the clothing features in the photo.

In one embodiment, the structure of the second CNN may be similar to that of the first CNN described above; the local image corresponding to the second face region is input to the input layer of the trained second CNN, the convolutional layers and fully connected layers of the trained second CNN extract the second group of features from the second face region, and the second group of features is obtained from the output layer of the trained second CNN.

In step S103, the first group of features and the second group of features are merged, and dimension reduction is performed on the merged feature combination to obtain a third group of features, the dimension of the third group of features being smaller than that of the merged feature combination.

In one embodiment, if, for example, the first group of features output by the first CNN is a one-dimensional vector of length 4096 and the second group of features output by the second CNN is likewise a one-dimensional vector of length 4096, the two vectors of length 4096 are directly concatenated into a one-dimensional vector of length 8192, which forms the merged feature combination. In one embodiment, the dimension reduction may be performed through trained linear discriminant analysis (LDA): feature parameters of a first set length are extracted from the first group of labelled face samples through the trained first convolutional neural network, feature parameters of a second set length are extracted from the second group of labelled face samples through the trained second convolutional neural network, the feature parameters of the first set length and of the second set length are merged, and LDA training is performed on the merged feature parameters to obtain a projection matrix of a third set length for the LDA. For example, if the first group of labelled face samples yields feature parameters of a first set length of 500 from the first CNN, and the second group of labelled face samples yields feature parameters of a second set length of 500 from the second CNN, the merged feature parameters form a one-dimensional vector of length 1000; after LDA training on this vector of length 1000, the trained LDA can reduce it to feature parameters of a third set length of 200, thereby lowering the computational complexity of calculating the cosine distance.
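The merging and dimension-reduction step with the example sizes above can be sketched as follows. The projection matrix here is random stand-in data purely to make the shapes concrete; in the method it would come from LDA training:

```python
import numpy as np

def reduce_merged_features(face_feat, clothing_feat, projection):
    """Concatenate the two CNN feature vectors and apply the trained
    projection matrix to obtain the low-dimensional third group of features."""
    merged = np.concatenate([face_feat, clothing_feat])
    return merged @ projection

# Illustrative sizes from the text: 4096 + 4096 -> 8192, reduced to 200.
rng = np.random.default_rng(1)
face_feat = rng.normal(size=4096)
clothing_feat = rng.normal(size=4096)
projection = rng.normal(size=(8192, 200))  # stand-in for the trained LDA matrix
third = reduce_merged_features(face_feat, clothing_feat, projection)
```

Comparing 200-dimensional vectors instead of 8192-dimensional ones is what reduces the cost of the cosine-distance step that follows.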

In step S104, it is determined, according to the cosine distance between the third group of features and the extracted reference facial feature, whether the face in the photo and the face corresponding to the reference facial feature are the same face.

In one embodiment, the cosine distance between the third group of features and the extracted reference facial feature may be compared with a preset threshold: if the cosine distance is greater than the preset threshold, the face in the photo and the face corresponding to the reference facial feature are determined to be the same face; if the cosine distance is less than or equal to the preset threshold, they are determined to be different faces. In one embodiment, the reference facial feature may be extracted in the same way as the third group of features. As shown in Fig. 1B, for example, the face albums contain a face album of user A and a face album of user B, and the extracted reference facial features include the facial feature of user A and the facial feature of user B, both stored in the storage module 15. The face region locating module 11 performs face locating on the acquired photo to obtain the first face region and the second face region, which are input into the trained first CNN 121 and the trained second CNN 122, respectively. After the first CNN 121 and the second CNN 122 perform feature extraction on the first face region and the second face region, respectively, the first group of features and the second group of features are obtained; the merging module 13 merges the first group of features and the second group of features to obtain the third group of features; and the distance calculating module 14 calculates the cosine distances between the third group of features and the facial features of user A and of user B, and compares them with the preset threshold, thereby determining whether the face in the photo is that of user A or of user B. If the face in the photo is that of user A, the photo can be stored into the face album of user A.
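The comparison against the stored reference features of user A and user B can be sketched as follows. The feature values and threshold are made up, and the patent's "cosine distance" is treated as the cosine value itself, since a larger value indicates a closer match:

```python
import numpy as np

def cosine_distance(a, b):
    # The patent compares this value against a threshold; larger means closer.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_reference(third_feat, references, threshold=0.5):
    """Return the name of the best-matching reference face whose cosine
    value exceeds the threshold, or None if no reference matches."""
    best_name, best_score = None, threshold
    for name, ref in references.items():
        score = cosine_distance(third_feat, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

refs = {"user A": np.array([1.0, 0.1, 0.0]),
        "user B": np.array([0.0, 0.1, 1.0])}
photo_feat = np.array([0.9, 0.2, 0.1])
result = match_reference(photo_feat, refs)  # matches "user A"
```

A `None` result corresponds to the "different faces" branch, in which case the photo would not be filed into either existing album.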

In the present embodiment, since the first CNN and the second CNN are obtained by training on a massive number of labelled face samples, the first group of features extracted by the first CNN characterizes the facial features in the photo, and the second group of features extracted by the second CNN characterizes the clothing features in the photo, the disclosure fuses the facial features with information outside the face, such as clothing decoration. The clothing decoration around a user's face region can thus be combined with the user's facial features to perform face identification, greatly improving the accuracy of face identification; and the dimension reduction that yields the third group of features greatly reduces the computational complexity of the face identification process.

In one embodiment, the method may further include:

inputting a set quantity of a first group of labelled face samples into an untrained first convolutional neural network, and training at least one convolutional layer and at least one fully connected layer of the untrained first convolutional neural network;

when the optimal weight parameters of the connections between the nodes in the untrained first convolutional neural network are determined, obtaining the trained first convolutional neural network;

expanding the periphery of each of the set quantity of the first group of labelled face samples by a region of the set pixel width to obtain a second group of labelled face samples;

inputting the set quantity of the second group of labelled face samples into an untrained second convolutional neural network, and training at least one convolutional layer and at least one fully connected layer of the untrained second convolutional neural network; and

when the optimal weight parameters of the connections between the nodes in the untrained second convolutional neural network are determined, obtaining the trained second convolutional neural network.

In one embodiment, the method may further include:

extracting, through the trained first convolutional neural network, feature parameters of a first set length from the first group of labelled face samples;

extracting, through the trained second convolutional neural network, feature parameters of a second set length from the second group of labelled face samples; and

merging the feature parameters of the first set length and the feature parameters of the second set length, and performing linear discriminant analysis (LDA) training on the merged feature parameters to obtain a projection matrix of a third set length for the LDA.

In one embodiment, determining, according to the cosine distance between the third group of features and the extracted reference facial feature, whether the face in the photo and the face corresponding to the reference facial feature are the same face may include:

comparing the cosine distance between the third group of features and the extracted reference facial feature with a preset threshold;

if the cosine distance is greater than the preset threshold, determining that the face in the photo and the face corresponding to the reference facial feature are the same face; and

if the cosine distance is less than or equal to the preset threshold, determining that the face in the photo and the face corresponding to the reference facial feature are different faces.

In one embodiment, the method may further include:

detecting feature points of the face in the photo;

determining, according to the feature points of the face, the first area where the face is located from the photo, and expanding the first area outward by the set pixel width to obtain the second area;

performing an affine transformation on the first area according to preset reference feature points to obtain the first face region, the size of the first face region being identical to the dimension of the input layer of the first convolutional neural network; and

performing an affine transformation on the second area according to the preset reference feature points to obtain the second face region, the size of the second face region being identical to the dimension of the input layer of the second convolutional neural network.

For the details of how face identification is implemented, please refer to the subsequent embodiments.

Thus, the above method provided by the embodiments of the present disclosure can combine the clothing decoration around a user's face region with the user's facial features to perform face identification, greatly improving the accuracy of face identification; and the dimension reduction that yields the third group of features greatly reduces the computational complexity of the face identification process.

The technical solutions provided by the embodiments of the present disclosure are described below through specific embodiments.

Fig. 2A is a schematic diagram of how a convolutional neural network is trained according to exemplary embodiment one, and Fig. 2B is a structural schematic diagram of a convolutional neural network according to exemplary embodiment one. This embodiment uses the above method provided by the embodiments of the present disclosure to illustrate how the first convolutional neural network and the second convolutional neural network are trained with labelled face samples. As shown in Fig. 2A, it includes the following steps:

In step S201, a set quantity of a first group of labelled face samples is input into an untrained first convolutional neural network, and at least one convolutional layer and at least one fully connected layer of the untrained first convolutional neural network are trained.

In step S202, when the optimal weight parameters of the connections between the nodes in the untrained first convolutional neural network are determined, the trained first convolutional neural network is obtained.

In step S203, the periphery of each of the set quantity of the first group of labelled face samples is expanded by a region of the set pixel width to obtain a second group of labelled face samples.

In step S204, the set quantity of the second group of labelled face samples is input into an untrained second convolutional neural network, and at least one convolutional layer and at least one fully connected layer of the untrained second convolutional neural network are trained.

In step S205, when the optimal weight parameters of the connections between the nodes in the untrained second convolutional neural network are determined, the trained second convolutional neural network is obtained.

Before the convolutional neural networks are trained, a set number of face samples needs to be prepared (which may reach the order of tens of thousands or more), for example 50,000 face samples. The respective first regions are identified from the 50,000 face samples, and each first region is subjected to an affine transformation to obtain the first face region; each first region is also expanded to obtain the respective second region. For example, if the resolution of a face sample is 1000×1000 and the first region is a 100×100 region within the face sample, expanding 20 pixels outward from the center of the first region yields a 120×120 second region, ensuring that information such as the user's collar and accessories is contained in the second region. Each second region is then subjected to an affine transformation to obtain the second face region, so that the sizes of the first face region and the second face region are identical to the dimensions of the input layers of the corresponding first CNN and second CNN. These massive face samples are labeled; for example, all face samples of user E are labeled 1, all face samples of user F are labeled 2, and so on. 50,000 face images of 1,000 users may be prepared, with 50 face samples per user, so that the number of face samples reaches 50,000, and the first CNN and the second CNN are trained with these 50,000 face samples.
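The first-region-to-second-region expansion described above can be sketched as a small helper; the function name, the clamping to the photo bounds, and the per-side growth of 10 pixels (so that a 100×100 box becomes the 120×120 box of the example) are illustrative assumptions rather than details fixed by the disclosure:

```python
def expand_region(x, y, w, h, grow, img_w, img_h):
    """Grow a detected face box by `grow` pixels on each side, clamped
    to the photo bounds. Hypothetical helper for the first-region ->
    second-region step; the clamping policy is an assumption."""
    x2, y2 = max(0, x - grow), max(0, y - grow)
    right = min(img_w, x + w + grow)
    bottom = min(img_h, y + h + grow)
    return x2, y2, right - x2, bottom - y2

# A 100x100 first region in a 1000x1000 photo grown by 10 px per side
# gives the 120x120 second region of the example.
print(expand_region(450, 450, 100, 100, 10, 1000, 1000))  # (440, 440, 120, 120)
```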

The structures of the first CNN and the second CNN may refer to the illustration in Fig. 2B. As shown in Fig. 2B, the convolutional neural network includes an input layer, a first convolutional layer, a second convolutional layer, a third convolutional layer, a fourth convolutional layer, a fifth convolutional layer, a first fully connected layer, a second fully connected layer, and an output layer. The above 50,000 face samples are input to the convolutional neural network as training samples, and according to the classification results output by the convolutional neural network, the weight parameters of the connections between the nodes at each layer of the network are continuously adjusted. During this continuous adjustment, as the convolutional neural network is trained on the input training samples, the accuracy of its output classification results, compared with the classification results calibrated by the user, gradually improves. Meanwhile, the user may preset an accuracy threshold; during the continuous adjustment, once the accuracy of the classification results output by the convolutional neural network, compared with the classification results calibrated by the user, reaches the preset accuracy threshold, the weight parameters of the connections between the nodes at each layer of the network at that point are the optimal weight parameters, and the convolutional neural network may be considered trained.
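The stopping criterion described above, keep adjusting the weights until the classification accuracy against the calibrated labels reaches the preset accuracy threshold, can be illustrated with a deliberately tiny stand-in model (a softmax classifier on synthetic two-class data, not the CNN of Fig. 2B); everything except the threshold-based stop is an assumption made for the sake of a runnable sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled samples: two well-separated classes.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 8)),
               rng.normal(+1.0, 1.0, (200, 8))])
y = np.array([0] * 200 + [1] * 200)

W = np.zeros((8, 2))      # connection weights being adjusted
b = np.zeros(2)
acc_threshold = 0.95      # the preset accuracy threshold

for step in range(1000):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    acc = float((p.argmax(axis=1) == y).mean())
    if acc >= acc_threshold:
        # Accuracy against the calibrated labels has reached the preset
        # threshold: treat the current weights as optimal and stop.
        break
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0          # softmax cross-entropy grad
    W -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.sum(axis=0) / len(y)

print(f"stopped after {step} adjustment rounds, accuracy {acc:.3f}")
```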

In this embodiment, the convolutional neural networks are trained on labeled face samples whose classes have been calibrated by a classifier. When the number of labeled face samples reaches a certain order of magnitude, the trained convolutional neural networks can recognize the features in a photo that are conducive to face recognition; even when facial features cannot be extracted directly due to factors such as pose and illumination, the user can still be identified from information such as clothing in the photo, ensuring the accuracy of face recognition.

Fig. 3 is a flowchart of a face recognition method according to exemplary embodiment two. This embodiment uses the above method provided by the embodiments of the present disclosure to illustrate how face recognition is performed by means of the cosine distance. As shown in Fig. 3, the method includes the following steps:

In step S301, a first group of features of a first face region in a photo is extracted by a trained first convolutional neural network, the first group of features representing the facial features in the photo.

In step S302, a second group of features of a second face region in the photo is extracted by a trained second convolutional neural network, the second face region being the region formed by expanding the first face region outward by a set pixel width, and the second group of features representing the clothing features in the photo.

In step S303, the first group of features and the second group of features are merged, and dimensionality reduction is performed on the merged feature combination to obtain a third group of features, wherein the dimension of the third group of features is smaller than the dimension of the merged feature combination.
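Step S303 reduces to a concatenation followed by one matrix product with the LDA projection matrix; in the sketch below, the feature lengths (256 per CNN, reduced to 128) are invented stand-ins the disclosure does not fix, and a random matrix stands in for the trained projection:

```python
import numpy as np

rng = np.random.default_rng(1)

f1 = rng.standard_normal(256)       # face features from the first CNN
f2 = rng.standard_normal(256)       # clothing features from the second CNN
merged = np.concatenate([f1, f2])   # merged 512-dimensional combination

# Stand-in for the projection matrix produced by LDA training:
# rows = reduced (third) length, columns = merged length.
P = rng.standard_normal((128, 512))
third = P @ merged                  # dimension-reduced third feature group

print(merged.shape, third.shape)    # (512,) (128,)
```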

For related descriptions of steps S301 to S303, refer to the description of the embodiment shown in Fig. 1A, which will not be repeated here.

In step S304, the cosine distance between the third group of features and the extracted reference facial features is compared with a preset threshold; if the cosine distance is greater than the preset threshold, step S305 is executed, and if the cosine distance is less than or equal to the preset threshold, step S306 is executed.

In step S305, if the cosine distance is greater than the preset threshold, it is determined that the face in the photo and the face corresponding to the reference facial features are the same face.

In step S306, if the cosine distance is less than or equal to the preset threshold, it is determined that the face in the photo and the face corresponding to the reference facial features are different faces.
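Steps S304 to S306 amount to a single thresholded comparison. A minimal sketch, assuming the "cosine distance" of the disclosure is the usual cosine similarity (larger means more alike) and using an arbitrary placeholder threshold of 0.5:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine similarity, which the disclosure calls 'cosine distance'
    (a value where larger means more alike)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_face(feat, ref_feat, threshold=0.5):
    # Greater than the preset threshold -> same face (step S305),
    # otherwise a different face (step S306).
    return cosine_distance(feat, ref_feat) > threshold

print(same_face([1, 0, 1], [1, 0, 1]))  # True: identical feature vectors
print(same_face([1, 0, 0], [0, 1, 0]))  # False: orthogonal feature vectors
```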

In step S304, a suitable preset threshold can be obtained by training on a large number of face samples in a sample database, and the preset threshold may correspond to a recognition error rate acceptable to the user. For example, if the sample database contains 100,000 intra-class sample pairs and 1,000,000 inter-class sample pairs, then to maintain a recognition error rate of one in a thousand, the cosine distance can be computed for each pair, yielding a value between 0 and 1: 100,000 cosine distance values for the intra-class pairs and 1,000,000 for the inter-class pairs, i.e., 1,100,000 cosine distance values in total. A suitable preset threshold is then determined from these 1,100,000 values in combination with the recognition error rate.
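The threshold-selection procedure of this paragraph — score every intra-class and inter-class pair, then pick the threshold consistent with the acceptable error rate — might look as follows; the score distributions are synthetic stand-ins and the pair counts are scaled down from the 100,000/1,000,000 of the example:

```python
import numpy as np

def pick_threshold(inter_scores, target_far=1e-3):
    """Smallest threshold that keeps the fraction of inter-class
    (different-person) pairs scoring above it at or below target_far."""
    inter = np.sort(np.asarray(inter_scores, float))
    k = int(np.floor(target_far * len(inter)))  # pairs allowed above t
    return float(inter[max(0, len(inter) - k - 1)])

rng = np.random.default_rng(0)
intra = rng.normal(0.8, 0.05, 1_000)    # same-face cosine values (stand-in)
inter = rng.normal(0.2, 0.10, 10_000)   # different-face values (stand-in)

t = pick_threshold(inter)               # one-in-a-thousand error rate
far = float((inter > t).mean())         # achieved false-accept rate
recall = float((intra > t).mean())      # same-face pairs still accepted
print(t, far, recall)
```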

On the basis of the advantageous effects of the above embodiments, this embodiment recognizes faces by the cosine distance between the third group of features and the extracted reference facial features. Since the preset threshold can be obtained by training on a large number of face samples in combination with a recognition error rate acceptable to the user, the accuracy of face recognition is improved to a certain extent.

Fig. 4 is a flowchart showing how the first face region is determined according to exemplary embodiment three. This embodiment uses the above method provided by the embodiments of the present disclosure to illustrate how the first face region is determined. As shown in Fig. 4, the method includes the following steps:

In step S401, the feature points of the face in the photo are detected.

In step S402, the first region where the face is located is determined from the photo according to the feature points of the face, and the first region is expanded outward by a set pixel width to obtain the second region.

For example, if the resolution of the photo is 1000×1000 and the first region where the face is located, obtained by a face detection technique in the related art, is 100×100, then expanding the first region outward by a width of 20 pixels yields a 120×120 second region.

In step S403, the first region is subjected to an affine transformation according to preset reference feature points to obtain the first face region, the size of the first face region being identical to the dimension of the input layer of the first convolutional neural network.

In step S404, the second region is subjected to an affine transformation according to the preset reference feature points to obtain the second face region, the size of the second face region being identical to the dimension of the input layer of the second convolutional neural network.

In one embodiment, a sample database containing a massive number of face samples can be established, the resolution of each face sample after scaling being identical to the dimension of the input layer of the convolutional neural network. Face detection is performed on each face sample in the sample database to detect four feature points of the face, such as the two eye centers, the nose, and the mouth, and the preset reference feature points are obtained from the eye-center, nose, and mouth feature points across the massive face samples. In one embodiment, since the size of the first region input to the first CNN may differ from the dimension of the input layer of the first CNN, the detected first region can be subjected to an affine transformation to obtain the first face region, thereby ensuring that first regions of different sizes, after the affine transformation, match the dimension of the input layer of the convolutional neural network. For example, a 100×100 first region cropped from the photo becomes a 224×224 first face region after the affine transformation, so that the first face region matches the dimension of the input layer of the convolutional neural network, ensuring that the image information of the first face region can be accurately input to the input layer of the convolutional neural network shown in Fig. 2B. The manner in which the second face region is obtained from the second region may refer to the above description of obtaining the first face region from the first region, and will not be detailed here.
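Fitting the affine transformation that maps detected landmarks (eye centers, nose, mouth) onto the preset reference feature points can be done by least squares; the landmark coordinates and the pure-scaling reference below are invented for illustration (warping the pixels themselves, e.g. to 224×224, would additionally need an image library):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping detected landmark
    coordinates onto the preset reference feature points."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solves A @ M ~= dst
    return M.T                                    # 2x3 affine matrix

# Hypothetical eye-center, nose and mouth landmarks in a face crop,
# and reference points that here are simply a 2.24x scaling of them
# (e.g. mapping a 100x100 crop onto a 224x224 network input).
src = [[30.0, 40.0], [70.0, 40.0], [50.0, 60.0], [50.0, 80.0]]
ref = [[2.24 * x, 2.24 * y] for x, y in src]
M = fit_affine(src, ref)
print(np.round(M, 2))
```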

In this embodiment, the first region is subjected to an affine transformation according to the preset reference feature points to obtain a first face region that matches the input layer of the convolutional neural network, ensuring that the image information of the first face region can be accurately input to the input layer of the trained convolutional neural network.

Fig. 5 is a block diagram of a face recognition device according to an exemplary embodiment. As shown in Fig. 5, the face recognition device includes:

a first extraction module 51, configured to extract, by a trained first convolutional neural network, a first group of features of a first face region in a photo, the first group of features representing the facial features in the photo, and the first face region being determined from the first region where the face in the photo is located;

a second extraction module 52, configured to extract, by a trained second convolutional neural network, a second group of features of a second face region in the photo, the second face region being determined from the second region where the face in the photo is located, the second region being obtained by expanding the first region outward by a set pixel width, and the second group of features representing the clothing features in the photo;

a first processing module 53, configured to merge the first group of features extracted by the first extraction module 51 and the second group of features extracted by the second extraction module 52, and to perform dimensionality reduction on the merged feature combination to obtain a third group of features, wherein the dimension of the third group of features is smaller than the dimension of the merged feature combination;

a first determining module 54, configured to determine, according to the cosine distance between the third group of features merged by the first processing module 53 and the extracted reference facial features, whether the face in the photo and the face corresponding to the reference facial features are the same face.

Fig. 6 is a block diagram of another face recognition device according to an exemplary embodiment. As shown in Fig. 6, on the basis of the embodiment shown in Fig. 5, in one embodiment the device may further include:

a first training module 55, configured to input a first group of a set number of labeled face samples to the untrained first convolutional neural network, and to train at least one convolutional layer and at least one fully connected layer of the untrained convolutional neural network;

a second determining module 56, configured to, when the optimal weight parameters of the connections between the nodes of the untrained first convolutional neural network have been determined, obtain the first convolutional neural network trained by the first training module 55, the first extraction module 51 extracting the first group of features of the first face region in the photo by the first convolutional neural network trained by the first training module 55;

a region expansion module 57, configured to expand each sample of the first group of the set number of labeled face samples outward by a region of a set pixel width, obtaining a second group of labeled face samples;

a second training module 58, configured to input the second group of the set number of labeled face samples obtained by the region expansion module 57 to the untrained second convolutional neural network, and to train at least one convolutional layer and at least one fully connected layer of the untrained second convolutional neural network;

a third determining module 59, configured to, when the optimal weight parameters of the connections between the nodes of the untrained second convolutional neural network have been determined, obtain the second convolutional neural network trained by the second training module 58, the second extraction module 52 extracting the second group of features of the second face region in the photo by the second convolutional neural network trained by the second training module 58.

In one embodiment, the device may further include:

a third extraction module 60, configured to extract, by the trained first convolutional neural network determined by the second determining module 56, feature parameters of a first set length from the first group of labeled face samples;

a fourth extraction module 61, configured to extract, by the trained second convolutional neural network, feature parameters of a second set length from the second group of labeled face samples;

a second processing module 62, configured to merge the feature parameters of the first set length extracted by the third extraction module 60 and the feature parameters of the second set length extracted by the fourth extraction module 61, and to perform linear discriminant analysis (LDA) training on the merged feature parameters to obtain an LDA projection matrix of a third set length, the first processing module 53 performing dimensionality reduction on the merged feature combination by the projection matrix obtained by the second processing module 62.

Fig. 7 is a block diagram of yet another face recognition device according to an exemplary embodiment. As shown in Fig. 7, on the basis of the embodiment shown in Fig. 5 and/or Fig. 6, in one embodiment the first determining module 54 may include:

a comparison submodule 541, configured to compare the cosine distance between the third group of features merged by the first processing module and the extracted reference facial features with a preset threshold;

a first determining submodule 542, configured to determine, if the comparison result of the comparison submodule 541 indicates that the cosine distance is greater than the preset threshold, that the face in the photo and the face corresponding to the reference facial features are the same face;

a second determining submodule 543, configured to determine, if the comparison result of the comparison submodule 541 indicates that the cosine distance is less than or equal to the preset threshold, that the face in the photo and the face corresponding to the reference facial features are different faces.

In one embodiment, the device may further include:

a detection module 63, configured to detect the feature points of the face in the photo;

a fourth determining module 64, configured to determine, according to the feature points of the face detected by the detection module 63, the first region where the face is located from the photo, and to expand the first region outward by a set pixel width to obtain the second region;

a first transformation module 65, configured to subject the first region determined by the fourth determining module 64 to an affine transformation according to preset reference feature points to obtain the first face region, the size of the first face region being identical to the dimension of the input layer of the first convolutional neural network;

a second transformation module 66, configured to subject the second region determined by the fourth determining module 64 to an affine transformation according to the preset reference feature points to obtain the second face region, the size of the second face region being identical to the dimension of the input layer of the second convolutional neural network.

With regard to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.

Fig. 8 is a block diagram of a device suitable for face recognition according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and the like.

Referring to Fig. 8, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.

The processing component 802 typically controls the overall operations of the device 800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the above methods. In addition, the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.

The memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data include instructions for any application or method operated on the device 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.

The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.

The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.

The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.

The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.

The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor component 814 can detect the open/closed state of the device 800 and the relative positioning of components, such as the display and the keypad of the device 800; the sensor component 814 can also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.

In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, the above instructions being executable by the processor 820 of the device 800 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional techniques in the art not disclosed in the present disclosure. The specification and examples are to be considered exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.

It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. A face recognition method, characterized in that the method comprises:
extracting, by a trained first convolutional neural network, a first group of features of a first face region in a photo, the first group of features representing the facial features in the photo, and the first face region being determined from the first region where the face in the photo is located;
extracting, by a trained second convolutional neural network, a second group of features of a second face region in the photo, the second face region being determined from the second region where the face in the photo is located, the second region being obtained by expanding the first region outward by a set pixel width, and the second group of features representing the clothing features in the photo;
merging the first group of features and the second group of features, and performing dimensionality reduction on the merged feature combination to obtain a third group of features, wherein the dimension of the third group of features is smaller than the dimension of the merged feature combination;
determining, according to the cosine distance between the third group of features and the extracted reference facial features, whether the face in the photo and the face corresponding to the reference facial features are the same face;
the method further comprising:
detecting the feature points of the face in the photo;
determining, according to the feature points of the face, the first region where the face is located from the photo, and expanding the first region outward by a set pixel width to obtain the second region;
subjecting the first region to an affine transformation according to preset reference feature points to obtain the first face region, the size of the first face region being identical to the dimension of the input layer of the first convolutional neural network;
subjecting the second region to an affine transformation according to the preset reference feature points to obtain the second face region, the size of the second face region being identical to the dimension of the input layer of the second convolutional neural network.
2. The method according to claim 1, characterized in that the method further comprises:
inputting a first group of a set number of labeled face samples to the untrained first convolutional neural network, and training at least one convolutional layer and at least one fully connected layer of the untrained convolutional neural network;
when the optimal weight parameters of the connections between the nodes of the untrained first convolutional neural network have been determined, obtaining the trained first convolutional neural network;
expanding each sample of the first group of the set number of labeled face samples outward by a region of the set pixel width, obtaining a second group of labeled face samples;
inputting the second group of the set number of labeled face samples to the untrained second convolutional neural network, and training at least one convolutional layer and at least one fully connected layer of the untrained second convolutional neural network;
when the optimal weight parameters of the connections between the nodes of the untrained second convolutional neural network have been determined, obtaining the trained second convolutional neural network.
3. The method according to claim 2, characterized in that the method further comprises:
extracting, by the trained first convolutional neural network, feature parameters of a first set length from the first group of labeled face samples;
extracting, by the trained second convolutional neural network, feature parameters of a second set length from the second group of labeled face samples;
merging the feature parameters of the first set length and the feature parameters of the second set length, and performing linear discriminant analysis (LDA) training on the merged feature parameters to obtain an LDA projection matrix of a third set length.
4. The method according to claim 1, characterized in that determining, according to the cosine distance between the third group of features and the extracted reference facial features, whether the face in the photo and the face corresponding to the reference facial features are the same face comprises:
comparing the cosine distance between the third group of features and the extracted reference facial features with a preset threshold;
if the cosine distance is greater than the preset threshold, determining that the face in the photo and the face corresponding to the reference facial features are the same face;
if the cosine distance is less than or equal to the preset threshold, determining that the face in the photo and the face corresponding to the reference facial features are different faces.
5. A face recognition device, wherein the device comprises:
a first extraction module configured to extract a first group of features of a first face region in a photo through a trained first convolutional neural network, the first group of features representing facial features in the photo, the first face region being determined from a first area where the face in the photo is located;
a second extraction module configured to extract a second group of features of a second face region in the photo through a trained second convolutional neural network, the second face region being determined from a second area where the face in the photo is located, the second area being obtained by expanding the first area outward by a set pixel width, the second group of features representing clothing features in the photo;
a first processing module configured to merge the first group of features extracted by the first extraction module and the second group of features extracted by the second extraction module, and to perform dimension reduction on the merged feature combination to obtain a third group of features, wherein the dimension of the third group of features is less than the dimension of the merged feature combination;
a first determining module configured to determine, according to the cosine distance between the third group of features obtained by the first processing module and an extracted reference face feature, whether the face in the photo and the face corresponding to the reference face feature are the same face;
the device further comprising:
a detection module configured to detect feature points of the face in the photo;
a fourth determining module configured to determine, according to the feature points detected by the detection module, the first area where the face is located in the photo, and to expand the first area outward by the set pixel width to obtain the second area;
a first conversion module configured to perform an affine transformation on the first area determined by the fourth determining module according to preset reference feature points to obtain the first face region, the size of the first face region being identical to the dimension of the input layer of the first convolutional neural network;
a second conversion module configured to perform an affine transformation on the second area determined by the fourth determining module according to the preset reference feature points to obtain the second face region, the size of the second face region being identical to the dimension of the input layer of the second convolutional neural network.
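The region handling in claim 5 — expanding the detected face area outward by a set pixel width, then aligning each area to preset reference feature points by an affine transformation — can be sketched as below. All names and the least-squares fitting choice are assumptions; the patent does not specify how the affine transform is estimated or applied.

```python
import numpy as np

def estimate_affine(src_pts, ref_pts):
    """Least-squares 2x3 affine transform mapping detected face feature
    points (src_pts, Nx2) onto preset reference feature points
    (ref_pts, Nx2), as the conversion modules would need. N >= 3
    non-collinear points are required."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])    # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, ref_pts, rcond=None)
    return M.T                                    # [[a, b, tx], [c, d, ty]]

def expand_region(x, y, w, h, pad):
    """Expand the first area outward by a set pixel width (pad) to
    obtain the second area, per claim 5's region expansion."""
    return x - pad, y - pad, w + 2 * pad, h + 2 * pad
```

Cropping and warping the pixels with the estimated matrix (e.g. via an image library) would follow, resized to match each network's input-layer dimension.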
6. The device according to claim 5, wherein the device further comprises:
a first training module configured to input a first group of a set quantity of labeled face samples into an untrained first convolutional neural network, and to train at least one convolutional layer and at least one fully connected layer of the untrained first convolutional neural network;
a second determining module configured to, when optimal weight parameters for the connections between nodes in the untrained first convolutional neural network have been determined, determine that the first training module has obtained the trained first convolutional neural network;
a region expansion module configured to expand the surroundings of the first group of the set quantity of labeled face samples by a region of the set pixel width, to obtain a second group of labeled face samples;
a second training module configured to input the second group of the set quantity of labeled face samples into an untrained second convolutional neural network, and to train at least one convolutional layer and at least one fully connected layer of the untrained second convolutional neural network;
a third determining module configured to, when optimal weight parameters for the connections between nodes in the untrained second convolutional neural network have been determined, determine that the second training module has obtained the trained second convolutional neural network.
7. The device according to claim 6, wherein the device further comprises:
a third extraction module configured to extract, through the trained first convolutional neural network determined by the second determining module, feature parameters of a first set length from the first group of labeled face samples;
a fourth extraction module configured to extract, through the trained second convolutional neural network, feature parameters of a second set length from the second group of labeled face samples;
a second processing module configured to merge the feature parameters of the first set length extracted by the third extraction module with the feature parameters of the second set length extracted by the fourth extraction module, and to perform linear discriminant analysis (LDA) training on the merged feature parameters to obtain an LDA projection matrix of a third set length.
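The merge-and-project step of claim 7 amounts to concatenating the two fixed-length feature vectors and multiplying by the learned LDA projection matrix. A minimal sketch, assuming the projection matrix has already been obtained by LDA training as the claim describes (the function name and shapes are illustrative):

```python
import numpy as np

def merge_and_reduce(face_feat, clothing_feat, lda_projection):
    """Concatenate the first-set-length face features and the
    second-set-length clothing features, then project with the LDA
    projection matrix down to the third (smaller) set length.

    lda_projection has shape (first_len + second_len, third_len)."""
    combined = np.concatenate([face_feat, clothing_feat])
    return combined @ lda_projection    # shape: (third_len,)
```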
8. The device according to claim 5, wherein the first determining module comprises:
a comparison submodule configured to compare the cosine distance between the third group of features obtained by the first processing module and the extracted reference face feature with a preset threshold;
a first determining submodule configured to, if the comparison result of the comparison submodule indicates that the cosine distance is greater than the preset threshold, determine that the face in the photo and the face corresponding to the reference face feature are the same face;
a second determining submodule configured to, if the comparison result of the comparison submodule indicates that the cosine distance is less than or equal to the preset threshold, determine that the face in the photo and the face corresponding to the reference face feature are different faces.
9. A face recognition device, wherein the device comprises:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
extract a first group of features of a first face region in a photo through a trained first convolutional neural network, the first group of features representing facial features in the photo, the first face region being determined from a first area where the face in the photo is located;
extract a second group of features of a second face region in the photo through a trained second convolutional neural network, the second face region being determined from a second area where the face in the photo is located, the second area being obtained by expanding the first area outward by a set pixel width, the second group of features representing clothing features in the photo;
merge the first group of features and the second group of features, and perform dimension reduction on the merged feature combination to obtain a third group of features, wherein the dimension of the third group of features is less than the dimension of the merged feature combination;
determine, according to the cosine distance between the third group of features and an extracted reference face feature, whether the face in the photo and the face corresponding to the reference face feature are the same face;
the processor being further configured to:
detect feature points of the face in the photo;
determine, according to the feature points of the face, the first area where the face is located in the photo, and expand the first area outward by the set pixel width to obtain the second area;
perform an affine transformation on the first area according to preset reference feature points to obtain the first face region, the size of the first face region being identical to the dimension of the input layer of the first convolutional neural network;
perform an affine transformation on the second area according to the preset reference feature points to obtain the second face region, the size of the second face region being identical to the dimension of the input layer of the second convolutional neural network.
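The processor steps recited in claim 9 can be sketched end to end as below. This is an illustration under stated assumptions only: the two trained networks are stood in for by arbitrary callables, the dimension-reduction matrix is passed in precomputed, and the threshold and all names are hypothetical.

```python
import numpy as np

def recognize(region1, region2, cnn1, cnn2, projection,
              reference_feature, threshold):
    """Sketch of claim 9's pipeline: extract one feature group from each
    aligned face region with its own trained network, merge the groups,
    reduce dimension with a projection matrix, and decide by cosine
    similarity against the reference face feature.

    cnn1/cnn2 are stand-ins for the trained convolutional networks and
    must return 1-D feature vectors."""
    f1 = cnn1(region1)                      # face features (first group)
    f2 = cnn2(region2)                      # clothing features (second group)
    merged = np.concatenate([f1, f2])
    third = merged @ projection             # dimension reduction
    cos = float(np.dot(third, reference_feature) /
                (np.linalg.norm(third) * np.linalg.norm(reference_feature)))
    return cos > threshold
```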
CN201510955509.6A 2015-12-17 2015-12-17 Face identification method and device CN105631403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510955509.6A CN105631403B (en) 2015-12-17 2015-12-17 Face identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510955509.6A CN105631403B (en) 2015-12-17 2015-12-17 Face identification method and device

Publications (2)

Publication Number Publication Date
CN105631403A CN105631403A (en) 2016-06-01
CN105631403B true CN105631403B (en) 2019-02-12

Family

ID=56046316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510955509.6A CN105631403B (en) 2015-12-17 2015-12-17 Face identification method and device

Country Status (1)

Country Link
CN (1) CN105631403B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106095876A (en) * 2016-06-06 2016-11-09 北京小米移动软件有限公司 Image processing method and device
CN106203333A (en) * 2016-07-08 2016-12-07 乐视控股(北京)有限公司 Face identification method and system
CN106295526B (en) * 2016-07-28 2019-10-18 浙江宇视科技有限公司 The method and device of Car image matching
CN106407912B (en) * 2016-08-31 2019-04-02 腾讯科技(深圳)有限公司 A kind of method and device of face verification
CN106407982B (en) * 2016-09-23 2019-05-14 厦门中控智慧信息技术有限公司 A kind of data processing method and equipment
CN108229263A (en) * 2016-12-22 2018-06-29 深圳光启合众科技有限公司 The recognition methods of target object and device, robot
CN107578029A (en) * 2017-09-21 2018-01-12 北京邮电大学 Method, apparatus, electronic equipment and the storage medium of area of computer aided picture certification
CN110309691A (en) * 2018-03-27 2019-10-08 腾讯科技(深圳)有限公司 A kind of face identification method, device, server and storage medium
CN108629747B (en) * 2018-04-25 2019-12-10 腾讯科技(深圳)有限公司 Image enhancement method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408404A (en) * 2014-10-31 2015-03-11 小米科技有限责任公司 Face identification method and apparatus
CN104899579A (en) * 2015-06-29 2015-09-09 小米科技有限责任公司 Face recognition method and face recognition device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7783135B2 (en) * 2005-05-09 2010-08-24 Like.Com System and method for providing objectified image renderings using recognition information from images


Also Published As

Publication number Publication date
CN105631403A (en) 2016-06-01

Similar Documents

Publication Publication Date Title
US9082235B2 (en) Using facial data for device authentication or subject identification
KR101906827B1 (en) Apparatus and method for taking a picture continously
CN104125396B (en) Image capturing method and device
WO2017181769A1 (en) Facial recognition method, apparatus and system, device, and storage medium
WO2016029641A1 (en) Photograph acquisition method and apparatus
WO2016011747A1 (en) Skin color adjustment method and device
CN104408426B (en) Facial image glasses minimizing technology and device
CN106295566B (en) Facial expression recognizing method and device
US8879803B2 (en) Method, apparatus, and computer program product for image clustering
CN104102927B (en) Execute the display device and method thereof of user's checking
CN105430262B (en) Filming control method and device
CN105426857B (en) Human face recognition model training method and device
CN103955481B (en) image display method and device
CN104753766B (en) Expression sending method and device
CN105809704B (en) Identify the method and device of image definition
CN105654420A (en) Face image processing method and device
CN106572299A (en) Camera switching-on method and device
CN103688273B (en) Amblyopia user is aided in carry out image taking and image review
CN105354543A (en) Video processing method and apparatus
CN106355573B (en) The localization method and device of object in picture
CN105224924A (en) Living body faces recognition methods and device
US20180357501A1 (en) Determining user authenticity with face liveness detection
CN104850828B (en) Character recognition method and device
CN105389304B (en) Event Distillation method and device
WO2019105285A1 (en) Facial attribute recognition method, electronic device, and storage medium

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
GR01 Patent grant