CN106326867B - Face recognition method and mobile terminal - Google Patents
Face recognition method and mobile terminal
- Publication number: CN106326867B (application CN201610736293.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- feature vector
- vector
- matching characteristic
- recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V40/161 — Human faces: Detection; Localisation; Normalisation
- G06V20/64 — Scenes: Three-dimensional objects
- G06V40/168 — Human faces: Feature extraction; Face representation

(All under G—Physics › G06—Computing; calculating or counting › G06V—Image or video recognition or understanding.)
Abstract
The present invention provides a face recognition method and a mobile terminal. The method comprises: acquiring two-dimensional image data and three-dimensional depth data of a user's face; extracting a two-dimensional feature vector from the two-dimensional image data; extracting, from the three-dimensional depth data, the feature vectors of preset matching feature points on the face to form a three-dimensional feature vector; calculating the similarity between a combined feature vector and a matching feature vector; and judging whether the similarity is greater than or equal to a similarity threshold, and if so, determining that recognition succeeds. Face recognition on the mobile terminal thus achieves both high recognition accuracy and fast recognition speed.
Description
Technical field
The present invention relates to the field of communications, and in particular to a face recognition method and a mobile terminal.
Background art
With the development of mobile communication technology, mobile terminals have become indispensable communication devices in daily life, and their confidentiality and security are receiving increasing attention. Because it is easy to operate and highly secure, face recognition technology is gradually being applied in mobile terminal systems, for example for system unlocking, secure payment and application login.

Take system unlocking as an example. Current face-based unlocking relies mainly on either two-dimensional or three-dimensional face recognition. Two-dimensional face recognition is fast, but its accuracy drops under unfavorable external conditions, for example strong or weak light or changes in facial expression and pose. Three-dimensional face recognition remains accurate under such conditions, but because it computes over the depth data of all feature points on the face, the computation is heavy and recognition is slow. Existing face recognition therefore cannot achieve high recognition accuracy and fast recognition speed at the same time.
Summary of the invention
Embodiments of the present invention provide a face recognition method and a mobile terminal, to solve the problem that existing face recognition cannot achieve high recognition accuracy and fast recognition speed at the same time.

In a first aspect, an embodiment of the present invention provides a face recognition method, comprising:

acquiring two-dimensional image data and three-dimensional depth data of a user's face;

extracting a two-dimensional feature vector from the two-dimensional image data;

extracting, from the three-dimensional depth data, the feature vectors of preset matching feature points on the face to form a three-dimensional feature vector, wherein the matching feature points are a subset of the feature points on the face;

calculating the similarity between a combined feature vector and a matching feature vector, wherein the combined feature vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching feature vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching feature points;

judging whether the similarity is greater than or equal to a similarity threshold, and if so, determining that face recognition succeeds.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, comprising:

a data acquisition module, configured to acquire two-dimensional image data and three-dimensional depth data of a user's face;

a two-dimensional feature vector extraction module, configured to extract a two-dimensional feature vector from the two-dimensional image data;

a three-dimensional feature vector extraction module, configured to extract, from the three-dimensional depth data, the feature vectors of preset matching feature points on the face to form a three-dimensional feature vector, wherein the matching feature points are a subset of the feature points on the face;

a similarity calculation module, configured to calculate the similarity between a combined feature vector and a matching feature vector, wherein the combined feature vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching feature vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching feature points;

a recognition verification module, configured to judge whether the similarity is greater than or equal to a similarity threshold, and if so, to determine that verification succeeds.
In this way, in the embodiments of the present invention, two-dimensional image data and three-dimensional depth data of the user's face are acquired; a two-dimensional feature vector is extracted from the two-dimensional image data; the feature vectors of preset matching feature points on the face are extracted from the three-dimensional depth data to form a three-dimensional feature vector; the similarity between a combined feature vector and a matching feature vector is calculated; and if the similarity is greater than or equal to the similarity threshold, verification succeeds. Face recognition on the mobile terminal thus achieves both high recognition accuracy and fast recognition speed.
Detailed description of the invention
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the description are briefly introduced below. The drawings described here are only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a face recognition method provided in an embodiment of the present invention;
Fig. 2 is a schematic diagram of calculating a depth value by triangulation in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another face recognition method provided in an embodiment of the present invention;
Fig. 4 is a schematic flowchart of another face recognition method provided in an embodiment of the present invention;
Fig. 5 is a schematic flowchart of another face recognition method provided in an embodiment of the present invention;
Fig. 6 is a schematic flowchart of another face recognition method provided in an embodiment of the present invention;
Fig. 7 is a schematic flowchart of another face recognition method provided in an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a mobile terminal provided in an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of another mobile terminal provided in an embodiment of the present invention;
Figs. 10 to 13 are schematic structural diagrams of the three-dimensional feature vector extraction module in further mobile terminals provided in embodiments of the present invention;
Fig. 14 is a schematic structural diagram of another mobile terminal provided in an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a face recognition method provided in an embodiment of the present invention, comprising the following steps:

Step 101: acquire two-dimensional image data and three-dimensional depth data of the user's face.
In this embodiment, the face image of the user, including a two-dimensional image and a three-dimensional depth image, may be captured by multiple front cameras of the mobile terminal, and the two-dimensional image data and the three-dimensional depth data are obtained from the face image. The two-dimensional image data may be the gray value of each pixel in the two-dimensional image; the three-dimensional depth data, i.e. the depth value of each position on the face, may be computed by a depth algorithm. For example, the depth values in the three-dimensional depth image may be computed by triangulation. As shown in Fig. 2, suppose the three-dimensional depth image is captured by front cameras M and M' of the mobile terminal, the distance between M and M' is 2L, and the position to be measured on the face is point N. The processor of the mobile terminal can detect the imaging angle α of camera M and compute, by triangulation, the perpendicular distance d from point N to the line between M and M'. Since the nose tip has the smallest perpendicular distance to that line of any point on the face, it is taken as the reference point; let its perpendicular distance be d'. The depth value of the position to be measured is then H = d − d'.
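The patent describes the geometry only briefly (it names a single imaging angle α at camera M); a common two-angle triangulation consistent with Fig. 2 can be sketched as follows. This is an illustration under our own assumptions, not the patent's exact algorithm, and the function names are ours:

```python
import math

def perpendicular_distance(baseline: float, alpha: float, beta: float) -> float:
    """Perpendicular distance d from a face point N to the camera baseline,
    given the viewing angles alpha (at M) and beta (at M'), both measured
    from the baseline of length 2L (here: `baseline`)."""
    return baseline * math.tan(alpha) * math.tan(beta) / (
        math.tan(alpha) + math.tan(beta))

def depth_value(d: float, d_nose: float) -> float:
    """H = d - d': depth of the point relative to the nose-tip reference."""
    return d - d_nose
```

With symmetric 45° viewing angles, a point sits at half the baseline distance: `perpendicular_distance(2.0, math.pi / 4, math.pi / 4)` gives `1.0`.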
Step 102: extract a two-dimensional feature vector from the two-dimensional image data.

The two-dimensional feature vector may be extracted by a feature extraction algorithm, which may be any of a principal component analysis algorithm, a local texture feature algorithm, a support vector machine algorithm, a Gabor filter feature extraction algorithm, and the like. The algorithm converts the gray value of each pixel in the two-dimensional image into feature values, which form the two-dimensional feature vector. This filters out redundant information in the raw two-dimensional image data, reducing computation and speeding up recognition.
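As an illustration of one of the listed options (principal component analysis, the "eigenfaces" approach), a minimal sketch follows. The function names are ours, and a real system would fit the basis on an enrollment set of face images:

```python
import numpy as np

def fit_pca(images: np.ndarray, n_components: int):
    """Fit an eigenface basis on flattened grayscale images (rows = samples):
    center the data, then take the top principal axes from an SVD."""
    mean = images.mean(axis=0)
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:n_components]          # mean face, principal axes

def two_d_feature_vector(image: np.ndarray, mean: np.ndarray,
                         basis: np.ndarray) -> np.ndarray:
    """Project one flattened grayscale image onto the basis: this projection
    is the compact two-dimensional feature vector."""
    return basis @ (image - mean)
```

The projection discards directions of low variance, which is the "filtering out redundancy" the paragraph above describes.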
Step 103: extract, from the three-dimensional depth data, the feature vectors of preset matching feature points on the face to form a three-dimensional feature vector, wherein the matching feature points are a subset of the feature points on the face.

In this embodiment, the matching feature points are pre-selected feature points on the face that reflect the user's facial features, for example the nose, eyes, mouth, ears and forehead. Face recognition is affected by factors such as lighting: when the light is weak, for instance, the detected depth values of feature points with large depth values may deviate, which affects recognition accuracy. Depending on whether the recognition influence factors are normal, different numbers of feature points on the face may therefore be selected as the matching feature points. For example, when the light is normal, the nose and eyes alone accurately reflect the user's facial features and suffice to identify the user, so the nose and eyes are selected as the matching feature points; when the light is strong or weak, the nose and eyes alone cannot identify the user accurately, and further feature points beyond the nose and eyes are added as matching feature points. The matching feature points may thus be any one or more of the feature points above.
Since facial features differ between users, for example in face contour and in the proportions of facial parts, the matching feature points can be detected in the user's face, i.e. their positions located, through the two-dimensional image or the three-dimensional depth image.

The feature vectors of the matching feature points may be extracted with the feature extraction algorithm of step 102 to form the three-dimensional feature vector. For example, the algorithm may first convert the depth values of all pixels of the user's face in the depth data into feature values, forming the feature vector of the three-dimensional depth image, and then select from it the feature vectors corresponding to the matching feature points and combine them into the three-dimensional feature vector.
In this embodiment, step 102 is performed before step 103. It should be noted, however, that step 102 may also be performed after step 103, which is not repeated here.
Step 104: calculate the similarity between a combined feature vector and a matching feature vector, wherein the combined feature vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching feature vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching feature points.

For example, suppose the two-dimensional feature vector is A, the matching feature points are the nose and eyes, the nose corresponds to feature vector B and the eyes to feature vector C. Then the three-dimensional feature vector is D = [B C], and the combined feature vector is E = [A D] = [A B C].
The matching feature vector may be formed by combining a preset two-dimensional feature vector, which corresponds to the two-dimensional feature vector, and a preset three-dimensional feature vector, which corresponds to the matching feature points. For example, when the combined feature vector is E = [A B C] above, the corresponding matching feature vector may be E' = [A' D'], where A' is the preset two-dimensional feature vector and D' is the preset three-dimensional feature vector; since D' corresponds to the nose and eyes, D' = [B' C'] and E' = [A' B' C']. The preset two-dimensional and three-dimensional feature vectors may be extracted in advance from two-dimensional image data and three-dimensional depth data of the user's face.

In this embodiment, the similarity between the combined feature vector and the matching feature vector may be calculated by any algorithm that measures the similarity of two vectors, for example one based on the Euclidean distance, the Mahalanobis distance or the Minkowski distance.
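A Euclidean-distance sketch of the comparison follows. The mapping of distance to a similarity score in (0, 1] is our illustrative choice, not fixed by the patent, and the names are ours:

```python
import numpy as np

def combined_vector(a: np.ndarray, d: np.ndarray) -> np.ndarray:
    """E = [A D]: concatenate the 2D feature vector A with the 3D feature vector D."""
    return np.concatenate([a, d])

def similarity(e: np.ndarray, e_template: np.ndarray) -> float:
    """Similarity from Euclidean distance, mapped into (0, 1];
    identical vectors give exactly 1.0."""
    return 1.0 / (1.0 + float(np.linalg.norm(e - e_template)))

def recognized(e: np.ndarray, e_template: np.ndarray, threshold: float) -> bool:
    """Step 105: recognition succeeds when similarity >= threshold."""
    return similarity(e, e_template) >= threshold
```

Comparing a vector with itself gives similarity 1.0, so recognition succeeds at any threshold up to 1.0; a distant template drives the score toward 0 and recognition fails.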
Step 105: judge whether the similarity is greater than or equal to a similarity threshold; if so, determine that face recognition succeeds.

The similarity threshold may be a value preset according to the performance of the mobile terminal. When the similarity is greater than or equal to the threshold, the face image acquired by the mobile terminal matches the preset face image of the user, and it is determined that recognition succeeds; when the similarity is below the threshold, the acquired face image and the preset face image do not match, and it is determined that verification fails. When verification fails, the steps above may of course be repeated, a limited number of times or indefinitely.
The face recognition method of steps 101 to 105 can be applied to unlocking, payment and application login on a mobile terminal.
Optionally, as shown in Fig. 3, after step 105 the method may further include:

Step 106: unlock the mobile terminal.

The mobile terminal may be unlocked by the face recognition method above. When verification succeeds, the terminal is unlocked so that the user can use it normally; the unlocking process is simple and highly secure.
In the embodiments of the present invention, the mobile terminal may be any mobile device with a biometric acquisition component and a screen, for example a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID) or a wearable device.
The face recognition method of this embodiment acquires two-dimensional image data and three-dimensional depth data of the user's face; extracts a two-dimensional feature vector from the two-dimensional image data; extracts the feature vectors of the preset matching feature points from the three-dimensional depth data to form a three-dimensional feature vector; calculates the similarity between the combined feature vector and the matching feature vector; and determines that verification succeeds if the similarity is greater than or equal to the similarity threshold. Face recognition on the mobile terminal thus achieves both high recognition accuracy and fast recognition speed.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of another face recognition method provided in an embodiment of the present invention, comprising the following steps:

Step 401: acquire two-dimensional image data and three-dimensional depth data of the user's face.
In this embodiment, the face image of the user, including a two-dimensional image and a three-dimensional depth image, may be captured by multiple front cameras of the mobile terminal. The two-dimensional image data may be the gray value of each pixel in the two-dimensional image, and the three-dimensional depth data may be obtained by computing the depth value of each position on the face by triangulation.

The acquired two-dimensional image data and three-dimensional depth data may be preprocessed, for example by denoising and normalization, to enhance the face image and improve the accuracy of the data, and thereby the accuracy of recognition.
Step 402: extract a two-dimensional feature vector from the two-dimensional image data.

In this embodiment, the two-dimensional feature vector may be extracted from the two-dimensional image data by a Gabor filter feature extraction algorithm. Gabor filters have good resolving power in both the spatial and frequency domains, with good orientation selectivity in the spatial domain and good frequency selectivity in the frequency domain, so they can extract features that highlight different scales, frequencies and texture orientations, improving recognition accuracy.
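One real-valued Gabor kernel can be built as below; in practice, features come from convolving the image with a bank of such kernels at several orientations and wavelengths. This is a generic sketch, not the patent's specific parameterization:

```python
import numpy as np

def gabor_kernel(size: int, sigma: float, theta: float, lambd: float,
                 psi: float = 0.0) -> np.ndarray:
    """Real part of a Gabor filter: a Gaussian envelope (width sigma)
    multiplying a cosine carrier at orientation theta and wavelength lambd."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / lambd + psi)
```

The kernel peaks at its center (envelope and carrier are both 1 there for `psi = 0`); sweeping `theta` over, say, 8 orientations gives the orientation selectivity described above.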
Step 403: detect key feature points in the user's face, wherein the key feature points are a subset of the feature points on the face.

In this embodiment, the key feature points may be designated main facial parts, such as the nose, eyes, mouth, chin, forehead and ears. To keep recognition both fast and accurate, different numbers of main parts may be pre-designated as key feature points according to the performance of the mobile terminal: a terminal with a fast processor may designate five or six main parts as key feature points, while a terminal with a slow processor may designate only three or four.
The key feature points may be located in the user's face through the two-dimensional image or the three-dimensional depth image. For example, when the light is normal, each designated main part can be identified accurately from the two-dimensional image, and its position is located as a key feature point; when the light is abnormal, the main parts need to be identified from the differences in depth values across positions in the three-dimensional depth image, and their positions are then located as the key feature points.
Step 404: extract the feature vectors of the key feature points from the three-dimensional depth data.

The feature vectors of the key feature points may be extracted by converting only the depth values corresponding to the key feature points into feature values with the Gabor filter feature extraction algorithm. There is no need to convert all depth values in the depth data into feature values, which reduces the computation on the mobile terminal and speeds up recognition.
Step 405: judge whether the face recognition influence factors are normal.

If the current influence factors differ from those present when the preset face image was acquired, the currently acquired two-dimensional and three-dimensional image data can differ considerably from the preset data, which affects recognition accuracy.

Optionally, the face recognition influence factors include illumination intensity, face pose and facial expression.
To judge whether the illumination intensity is normal, the acquired illumination intensity may be compared with a preset illumination intensity range: if it falls within the range, the illumination intensity is normal; if it falls outside the range, the illumination intensity is abnormal, i.e. the light is too strong or too weak, which affects the acquired depth data.
The face pose may be the deflection angle relative to the frontal face. To judge whether the pose is normal, the deflection angle of the acquired face may be compared with a deflection angle threshold: if it is less than or equal to the threshold, the pose is normal; if it exceeds the threshold, the pose is abnormal, i.e. the face is turned too far, which affects the acquired depth data.
To judge whether the facial expression is normal, the expression in the acquired face image is compared with that in the preset face image by an expression recognition method: if the expression has not changed, it is normal; if it has changed, it is abnormal. The expression recognition method may be facial action unit analysis, a method based on independent component analysis, or the like, which is not detailed here.
When the illumination intensity, face pose and facial expression are all normal, the face recognition influence factors are determined to be normal, and the acquired depth data is unaffected and highly accurate. When any one of the illumination intensity, face pose and facial expression is abnormal, the influence factors are determined to be abnormal, and the acquired depth data is affected and less accurate.
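The three checks combine with a logical AND, which can be sketched as follows (the parameter names and the use of lux and degrees as units are our assumptions; the patent only specifies a range, a threshold and a changed/unchanged comparison):

```python
def factors_normal(lux: float, lux_range: tuple,
                   yaw_deg: float, yaw_threshold: float,
                   expression_changed: bool) -> bool:
    """All three influence factors must be normal: illumination within its
    preset range, pose deflection within the threshold, expression unchanged."""
    illumination_ok = lux_range[0] <= lux <= lux_range[1]
    pose_ok = abs(yaw_deg) <= yaw_threshold
    return illumination_ok and pose_ok and not expression_changed
```

A single abnormal factor is enough to make the overall result abnormal, matching the "any one of them" rule above.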
Step 406: if the face recognition influence factors are normal, select a subset of the key feature points as the matching feature points, and form the three-dimensional feature vector from the feature vectors of the matching feature points.

When the influence factors are normal, the acquired depth data is highly accurate, and a subset of the key feature points suffices to reflect the user's facial features accurately. A subset of the key feature points is therefore selected as the matching feature points, and their feature vectors are taken from the feature vectors of the key feature points to form the three-dimensional feature vector. To keep recognition both fast and accurate, the number of key feature points used as matching feature points may be pre-designated according to the performance of the mobile terminal.
For example, suppose four facial parts, the nose, eyes, mouth and ears, are designated as the key feature points, and the number of matching feature points is set to two. When the illumination intensity, face pose and facial expression are all normal, only a subset of the key feature points, for example the nose and eyes (or any other two parts), is taken as the matching feature points, and the corresponding feature vectors of the nose and eyes are combined to form the three-dimensional feature vector.
Optionally, as shown in Fig. 5, the method may further include:

Step 407: if the face recognition influence factors are abnormal, select all of the key feature points as the matching feature points, and form the three-dimensional feature vector from the feature vectors of the matching feature points.

When the influence factors are abnormal, the accuracy of the acquired depth data is reduced, and all of the key feature points are needed to reflect the user's facial features accurately. All of the key feature points are therefore selected as the matching feature points, and all of their feature vectors form the three-dimensional feature vector.
Step 408: calculate the similarity between the combined feature vector and the matching feature vector, wherein the combined feature vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching feature vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching feature points.

Step 409: judge whether the similarity is greater than or equal to the similarity threshold; if so, determine that face recognition succeeds.
In the face recognition method of this embodiment, the key feature points in the user's face are detected and only their feature vectors are extracted from the three-dimensional depth data; depending on whether the recognition influence factors are normal, the feature vectors of some or all of the key feature points are selected to form the three-dimensional feature vector. This reduces the computation during verification while keeping recognition accuracy high, speeding up recognition.
Referring to Fig. 6, Fig. 6 is a flow diagram of another method of face recognition provided in an embodiment of the present invention. As shown in Fig. 6, the method comprises the following steps:
Step 601, acquiring the two-dimensional image data and range data of the user's face.
Step 602, extracting the two-dimensional feature vector of the two-dimensional image data.
Step 603, detecting the key feature points in the user's face, wherein the key feature points are some of the characteristic points in the face.
Step 604, judging whether the recognition of face influence factor is normal.
Step 605, if the recognition of face influence factor is normal, selecting some of the characteristic points of the key feature points as the matching characteristic points, and extracting the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
In the embodiment of the present invention, when the recognition of face influence factor is normal, some of the characteristic points in the key feature points can be selected as the matching characteristic points, and the feature vectors of the matching characteristic points can be extracted directly from the range data and combined to form the three-dimensional feature vector, without extracting the three-dimensional vectors of all the characteristic points of the key feature points, thereby further reducing the amount of calculation in the face recognition process and making the recognition speed faster.
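The selection logic described above can be sketched as follows. The normality test on the influence factors (intensity of illumination, human face posture, countenance) and the 50% partial-point ratio are hypothetical stand-ins: the patent does not specify how normality is judged or how many key feature points a partial selection contains.

```python
def influence_factors_normal(illumination_lux, pose_deg, expression_neutral,
                             min_lux=50.0, max_pose_deg=15.0):
    # Hypothetical normality check over the three influence factors the
    # text names; the numeric thresholds are illustrative only.
    return (illumination_lux >= min_lux
            and abs(pose_deg) <= max_pose_deg
            and expression_neutral)

def select_matching_points(key_points, factors_normal, partial_ratio=0.5):
    # Normal conditions: a subset of the key feature points suffices,
    # shrinking the later similarity computation. Abnormal conditions:
    # fall back to all key feature points for accuracy.
    if factors_normal:
        count = max(1, int(len(key_points) * partial_ratio))
        return key_points[:count]
    return list(key_points)

key_points = ["eye_l", "eye_r", "nose_tip", "mouth_l", "mouth_r", "chin"]
print(len(select_matching_points(key_points, True)))   # prints 3 (partial)
print(len(select_matching_points(key_points, False)))  # prints 6 (all)
```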
Optionally, as shown in Fig. 7, after step 604, the method can also include:
Step 606, if the recognition of face influence factor is abnormal, selecting all of the characteristic points of the key feature points as the matching characteristic points, and extracting the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
Wherein, when the recognition of face influence factor is abnormal, the accuracy of the acquired range data is reduced, and all of the characteristic points in the key feature points are needed to accurately reflect the facial features of the user; therefore, it is necessary to select all of the characteristic points in the key feature points as the matching characteristic points, and to form the three-dimensional feature vector from all the feature vectors of the key feature points.
Step 607, calculating the similarity between the assemblage characteristic vector and the matching characteristic vector, wherein the assemblage characteristic vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching characteristic vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching characteristic points.
Optionally, the calculating of the similarity between the assemblage characteristic vector and the matching characteristic vector comprises:
extracting the preset three-dimensional feature vector corresponding to the matching characteristic points from preset range data, combining the preset three-dimensional feature vector with the preset two-dimensional feature vector to form the matching characteristic vector, and calculating the similarity between the assemblage characteristic vector and the matching characteristic vector.
In the present embodiment, when the recognition of face influence factor is normal, the preset two-dimensional image data and preset range data of the user may first be acquired in advance through a plurality of front cameras of the mobile terminal. When calculating the similarity between the assemblage characteristic vector and the matching characteristic vector, the preset two-dimensional feature vector corresponding to the two-dimensional feature vector is extracted from the preset two-dimensional image data by a feature extraction algorithm, such as a Gabor filtering feature extraction algorithm, and the preset three-dimensional feature vector corresponding to the matching characteristic points is extracted from the preset range data.
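As a rough illustration of the Gabor filtering feature extraction mentioned above, the sketch below builds a small bank of real-valued Gabor kernels with NumPy and summarizes the filter responses into a feature vector. Kernel size, wavelength and orientations are arbitrary choices for the sketch, not values taken from the patent.

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, wavelength=4.0):
    # Real part of a Gabor filter: a Gaussian envelope modulating a
    # cosine carrier rotated by angle theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * x_rot / wavelength)

def gabor_features(image, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # One scalar per orientation: mean absolute filter response over the
    # valid region. A naive double loop keeps the sketch dependency-free.
    feats = []
    h, w = image.shape
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        kh, kw = k.shape
        resp = np.empty((h - kh + 1, w - kw + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        feats.append(np.mean(np.abs(resp)))
    return np.array(feats)

patch = np.random.default_rng(0).random((32, 32))  # stand-in face patch
print(gabor_features(patch).shape)  # prints (4,)
```

A production implementation would normally use an optimized library convolution and several scales per orientation rather than this single-scale loop.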
Step 608, judging whether the similarity is greater than or equal to a similarity threshold, and if the similarity is greater than or equal to the similarity threshold, determining that the identification is verified.
In the present embodiment, by detecting the key feature points in the user's face and, according to whether the recognition of face influence factor is normal, selecting some or all of the characteristic points in the key feature points as the matching characteristic points, the feature vectors of the matching characteristic points are extracted directly from the three-dimensional depth data as the three-dimensional feature vector, which further reduces the amount of calculation in the identification verification process while keeping the recognition accuracy high, and accelerates the recognition speed.
Referring to Fig. 8, Fig. 8 is a structural schematic diagram of a mobile terminal provided in an embodiment of the present invention. As shown in Fig. 8, the mobile terminal 800 includes a data acquisition module 801, a two-dimensional feature vector extraction module 802, a three-dimensional feature vector extraction module 803, a similarity calculation module 804 and an identification authentication module 805:
Data acquisition module 801, for acquiring the two-dimensional image data and range data of user's face;
Two-dimensional feature vector extraction module 802, for extracting the two-dimensional feature vector of the two-dimensional image data;
Three-dimensional feature vector extraction module 803, for extracting, by the range data, the feature vectors of preset matching characteristic points in the face to form a three-dimensional feature vector, wherein the matching characteristic points are some of the characteristic points in the face;
Similarity calculation module 804, for calculating the similarity between the assemblage characteristic vector and the matching characteristic vector, wherein the assemblage characteristic vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching characteristic vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching characteristic points;
Identification authentication module 805, for judging whether the similarity is greater than or equal to a similarity threshold, and if the similarity is greater than or equal to the similarity threshold, determining that the face recognition passes.
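The way the five modules of mobile terminal 800 hand data to one another can be sketched as plain function composition. The callables below are stand-ins for the modules described above, and the 0.9 threshold is hypothetical.

```python
def face_recognition_pipeline(acquire, extract_2d, extract_3d,
                              similarity, threshold=0.9):
    # Chains data acquisition (801), 2D extraction (802), 3D extraction
    # (803), similarity calculation (804) and identification
    # authentication (805) into one pass.
    image_2d, depth_3d = acquire()
    vec_2d = extract_2d(image_2d)
    vec_3d = extract_3d(depth_3d)
    return similarity(vec_2d, vec_3d) >= threshold

# Trivial stubs exercising the control flow only.
passed = face_recognition_pipeline(
    acquire=lambda: ("image", "depth"),
    extract_2d=lambda img: [0.4, 0.6],
    extract_3d=lambda depth: [0.7],
    similarity=lambda v2, v3: 0.95,
)
print(passed)  # prints True
```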
Optionally, as shown in Fig. 9, the mobile terminal 800 further includes:
Unlocking module 806, for unlocking the mobile terminal.
Optionally, as shown in Figure 10, the three-dimensional feature vector extraction module 803 may include:
First detection unit 8031, for detecting the key feature points in the user's face, wherein the key feature points are some of the characteristic points in the face;
Feature vector extraction unit 8032, for extracting the feature vectors of the key feature points by the range data;
First judging unit 8033, for judging whether recognition of face influence factor is normal;
First selecting unit 8034, for, if the recognition of face influence factor is normal, selecting some of the characteristic points in the key feature points as the matching characteristic points, and forming the three-dimensional feature vector from the feature vectors of the matching characteristic points.
Optionally, as shown in figure 11, the three-dimensional feature vector extraction module 803 can also include:
Second selecting unit 8035, for, if the recognition of face influence factor is abnormal, selecting all of the characteristic points of the key feature points as the matching characteristic points, and forming the three-dimensional feature vector from all the feature vectors of the key feature points.
Optionally, the recognition of face influence factor includes intensity of illumination, human face posture and countenance.
Optionally, as shown in figure 12, the three-dimensional feature vector extraction module 803 may include:
Second detection unit 8036, for detecting the key feature points in the user's face, wherein the key feature points are some of the characteristic points in the face;
Second judgment unit 8037, for judging whether the recognition of face influence factor is normal;
Third selecting unit 8038, for, if the recognition of face influence factor is normal, selecting some of the characteristic points of the key feature points as the matching characteristic points, and extracting the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
Optionally, as shown in figure 13, the three-dimensional feature vector extraction module 803 can also include:
Fourth selecting unit 8039, for, if the recognition of face influence factor is abnormal, selecting all of the characteristic points of the key feature points as the matching characteristic points, and extracting the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
Optionally, the similarity calculation module 804 can be further used for extracting the preset three-dimensional feature vector corresponding to the matching characteristic points from preset range data, combining the preset three-dimensional feature vector with the preset two-dimensional feature vector to form the matching characteristic vector, and calculating the similarity between the assemblage characteristic vector and the matching characteristic vector.
The mobile terminal 800 can realize each process realized by the mobile terminal in the method embodiments of Fig. 1 to Fig. 7; to avoid repetition, details are not described herein again.
The mobile terminal 800 of the embodiment of the present invention acquires the two-dimensional image data and range data of the user's face; extracts the two-dimensional feature vector of the two-dimensional image data; extracts the feature vectors of the matching characteristic points in the face by the range data to form the three-dimensional feature vector; calculates the similarity between the assemblage characteristic vector and the matching characteristic vector; and judges whether the similarity is greater than or equal to the similarity threshold, and if the similarity is greater than or equal to the similarity threshold, determines that the identification is verified. Thereby, the face recognition of the mobile terminal satisfies both high recognition accuracy and fast recognition speed.
Referring to Figure 14, Figure 14 is a structural schematic diagram of another mobile terminal provided in an embodiment of the present invention. As shown in Figure 14, the mobile terminal 1400 includes: at least one processor 1401, a memory 1402, at least one network interface 1404 and a user interface 1403. The various components in the mobile terminal 1400 are coupled through a bus system 1405. It is understood that the bus system 1405 is used to realize connection and communication between these components. In addition to a data bus, the bus system 1405 further includes a power bus, a control bus and a status signal bus. However, for the sake of clear explanation, the various buses are all designated as the bus system 1405 in Figure 14.
Wherein, the user interface 1403 may include a display, a keyboard or a pointing device (for example, a mouse, a trackball, a touch-sensitive plate or a touch screen, etc.).
It is appreciated that the memory 1402 in the embodiment of the present invention can be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. Wherein, the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM) or a flash memory. The volatile memory can be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of exemplary but not restrictive explanation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDRSDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM) and direct rambus random access memory (Direct Rambus RAM, DRRAM). The memory 1402 of the system and method described herein is intended to include, but is not limited to, these and any other suitable types of memory.
In some embodiments, the memory 1402 stores the following elements, executable modules or data structures, or a subset thereof, or a superset thereof: an operating system 14021 and an application program 14022.
Wherein, the operating system 14021 includes various system programs, such as a framework layer, a core library layer, a driver layer, etc., for realizing various basic services and processing hardware-based tasks. The application program 14022 includes various application programs, such as a media player (Media Player), a browser (Browser), etc., for realizing various application services. A program realizing the method of the embodiment of the present invention may be contained in the application program 14022.
In the embodiment of the present invention, by calling a program or instruction stored in the memory 1402, which specifically can be a program or instruction stored in the application program 14022, the processor 1401 is used for:
Acquire the two-dimensional image data and range data of user's face;
Extract the two-dimensional feature vector of the two-dimensional image data;
Extract, by the range data, the feature vectors of preset matching characteristic points in the face to form a three-dimensional feature vector, wherein the matching characteristic points are some of the characteristic points in the face;
Calculate the similarity between the assemblage characteristic vector and the matching characteristic vector, wherein the assemblage characteristic vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching characteristic vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching characteristic points;
Judge whether the similarity is greater than or equal to a similarity threshold, and if the similarity is greater than or equal to the similarity threshold, determine that the face recognition passes.
The methods disclosed in the embodiments of the present invention can be applied in the processor 1401, or realized by the processor 1401. The processor 1401 may be an integrated circuit chip with signal processing capability. In the realization process, each step of the above methods can be completed by an integrated logic circuit of hardware in the processor 1401 or by instructions in the form of software. The above processor 1401 can be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components, and may implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor can be a microprocessor, or the processor can also be any conventional processor, etc. The steps of the methods disclosed in conjunction with the embodiments of the present invention can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium mature in this field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1402, and the processor 1401 reads the information in the memory 1402 and completes the steps of the above methods in conjunction with its hardware.
It is understood that the embodiments described herein can be realized with hardware, software, firmware, middleware, microcode or a combination thereof. For hardware realization, the processing unit may be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for executing the functions described herein, or combinations thereof.
For software realization, the techniques described herein can be realized by modules (such as processes, functions, etc.) executing the functions described herein. The software code can be stored in a memory and executed by a processor. The memory can be realized in the processor or outside the processor.
Optionally, the processor 1401 executing the extracting, by the range data, of the feature vectors of the preset matching characteristic points in the face to form the three-dimensional feature vector comprises:
Detect the key feature points in the user's face;
Extract the feature vectors of the key feature points by the range data;
Judge whether the recognition of face influence factor is normal;
If the recognition of face influence factor is normal, select some of the characteristic points in the key feature points as the matching characteristic points, and form the three-dimensional feature vector from the feature vectors of the matching characteristic points.
Optionally, the processor 1401 executing the extracting, by the range data, of the feature vectors of the preset matching characteristic points in the face to form the three-dimensional feature vector further includes:
If the recognition of face influence factor is abnormal, select all of the characteristic points of the key feature points as the matching characteristic points, and form the three-dimensional feature vector from the feature vectors of the matching characteristic points.
Optionally, the processor 1401 executing the extracting, by the range data, of the feature vectors of the preset matching characteristic points in the face to form the three-dimensional feature vector comprises:
Detect the key feature points in the user's face, wherein the key feature points are some of the characteristic points in the face;
Judge whether the recognition of face influence factor is normal;
If the recognition of face influence factor is normal, select some of the characteristic points of the key feature points as the matching characteristic points, and extract the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
Optionally, the processor 1401 executing the extracting, by the range data, of the feature vectors of the preset matching characteristic points in the face to form the three-dimensional feature vector further includes:
If the recognition of face influence factor is abnormal, select all of the characteristic points of the key feature points as the matching characteristic points, and extract the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
Optionally, the processor 1401 executing the calculating of the similarity between the assemblage characteristic vector and the matching characteristic vector comprises:
Extract the preset three-dimensional feature vector corresponding to the matching characteristic points from preset range data, combine the preset three-dimensional feature vector with the preset two-dimensional feature vector to form the matching characteristic vector, and calculate the similarity between the assemblage characteristic vector and the matching characteristic vector.
Optionally, after the processor 1401 executes the determining that the identification is verified, the method further includes:
Unlock the mobile terminal.
The mobile terminal 1400 can realize each process realized by the mobile terminal in the foregoing embodiments; to avoid repetition, details are not described herein again.
The mobile terminal 1400 of the embodiment of the present invention acquires the two-dimensional image data and range data of the user's face; extracts the two-dimensional feature vector of the two-dimensional image data; extracts the feature vectors of the matching characteristic points in the face by the range data to form the three-dimensional feature vector; calculates the similarity between the assemblage characteristic vector and the matching characteristic vector; and judges whether the similarity is greater than or equal to the similarity threshold, and if the similarity is greater than or equal to the similarity threshold, determines that the identification is verified. Thereby, the face recognition of the mobile terminal satisfies both high recognition accuracy and fast recognition speed.
Those of ordinary skill in the art may be aware that the units and algorithm steps described in conjunction with the examples disclosed in the embodiments of the present disclosure can be realized with electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Professional technicians can use different methods to realize the described functions for each specific application, but such realization shall not be considered to exceed the scope of the present invention.
It is apparent to those skilled in the art that, for convenience and simplicity of description, for the specific working processes of the systems, devices and units described above, reference can be made to the corresponding processes in the foregoing method embodiments, and details are not described herein.
In the embodiments provided in the present application, it should be understood that the disclosed devices and methods can be realized in other ways. For example, the device embodiments described above are merely exemplary; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed can be indirect coupling or communication connection through some interfaces, devices or units, and can be electrical, mechanical or in other forms.
The units illustrated as separate members may or may not be physically separated, and the components displayed as units may or may not be physical units; they can be located in one place, or may be distributed over multiple network units. Some or all of the units can be selected according to actual needs to realize the purpose of the embodiments of the present invention.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated in one unit.
If the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes some instructions used to cause a computer device (which can be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a ROM, a RAM, a magnetic disk or an optical disk.
The above description is merely specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto; any person familiar with the art can easily think of changes or replacements within the technical scope disclosed by the present invention, which shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (12)
1. A method of face recognition, characterized by comprising:
Acquire the two-dimensional image data and range data of user's face;
Extract the two-dimensional feature vector of the two-dimensional image data;
Extract, by the range data, the feature vectors of preset matching characteristic points in the face to form a three-dimensional feature vector, wherein the matching characteristic points are some of the characteristic points in the face;
Calculate the similarity between the assemblage characteristic vector and the matching characteristic vector, wherein the assemblage characteristic vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching characteristic vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching characteristic points;
Judge whether the similarity is greater than or equal to a similarity threshold, and if the similarity is greater than or equal to the similarity threshold, determine that the face recognition passes;
Wherein the extracting, by the range data, of the feature vectors of the preset matching characteristic points in the face to form the three-dimensional feature vector comprises:
Detect the key feature points in the user's face, wherein the key feature points are some of the characteristic points in the face;
The feature vector of the key feature points is extracted by the range data;
Judge whether recognition of face influence factor is normal;
If the recognition of face influence factor is normal, select some of the characteristic points of the key feature points as the matching characteristic points, and form the three-dimensional feature vector from the feature vectors of the matching characteristic points;
The extracting, by the range data, of the feature vectors of the preset matching characteristic points in the face to form the three-dimensional feature vector further includes:
If the recognition of face influence factor is abnormal, select all of the characteristic points of the key feature points as the matching characteristic points, and form the three-dimensional feature vector from the feature vectors of the matching characteristic points.
2. The method as described in claim 1, characterized in that the recognition of face influence factor includes intensity of illumination, human face posture and countenance.
3. The method as described in claim 1, characterized in that the extracting, by the range data, of the feature vectors of the preset matching characteristic points in the face to form the three-dimensional feature vector further includes:
Detect the key feature points in the user's face, wherein the key feature points are some of the characteristic points in the face;
Judge whether the recognition of face influence factor is normal;
If the recognition of face influence factor is normal, select some of the characteristic points of the key feature points as the matching characteristic points, and extract the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
4. The method as claimed in claim 3, characterized in that the extracting, by the range data, of the feature vectors of the preset matching characteristic points in the face to form the three-dimensional feature vector further includes:
If the recognition of face influence factor is abnormal, select all of the characteristic points of the key feature points as the matching characteristic points, and extract the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
5. The method as described in any one of claims 1 to 4, characterized in that the calculating of the similarity between the assemblage characteristic vector and the matching characteristic vector comprises:
Extract the preset three-dimensional feature vector corresponding to the matching characteristic points from preset range data, combine the preset three-dimensional feature vector with the preset two-dimensional feature vector to form the matching characteristic vector, and calculate the similarity between the assemblage characteristic vector and the matching characteristic vector.
6. The method as described in any one of claims 1 to 4, characterized in that, after the determining that the face recognition passes, the method further includes:
Unlock the mobile terminal.
7. A mobile terminal, characterized by comprising:
Data acquisition module, for acquiring the two-dimensional image data and range data of user's face;
Two-dimensional feature vector extraction module, for extracting the two-dimensional feature vector of the two-dimensional image data;
Three-dimensional feature vector extraction module, for extracting, by the range data, the feature vectors of preset matching characteristic points in the face to form a three-dimensional feature vector, wherein the matching characteristic points are some of the characteristic points in the face;
Similarity calculation module, for calculating the similarity between the assemblage characteristic vector and the matching characteristic vector, wherein the assemblage characteristic vector is formed by combining the two-dimensional feature vector and the three-dimensional feature vector, and the matching characteristic vector is formed by combining a preset two-dimensional feature vector and a preset three-dimensional feature vector corresponding to the matching characteristic points;
Identification authentication module, for judging whether the similarity is greater than or equal to a similarity threshold, and if the similarity is greater than or equal to the similarity threshold, determining that the face recognition passes;
The three-dimensional feature vector extraction module includes:
First detection unit, for detecting the key feature points in the user's face, wherein the key feature points are some of the characteristic points in the face;
Feature vector extraction unit, for extracting the feature vectors of the key feature points by the range data;
First judging unit, for judging whether recognition of face influence factor is normal;
First selecting unit, for, if the recognition of face influence factor is normal, selecting some of the characteristic points in the key feature points as the matching characteristic points, and forming the three-dimensional feature vector from the feature vectors of the matching characteristic points;
The three-dimensional feature vector extraction module further includes:
Second selecting unit, for, if the recognition of face influence factor is abnormal, selecting all of the characteristic points of the key feature points as the matching characteristic points, and forming the three-dimensional feature vector from the feature vectors of the matching characteristic points.
8. The mobile terminal as claimed in claim 7, characterized in that the recognition of face influence factor includes intensity of illumination, human face posture and countenance.
9. mobile terminal as claimed in claim 7, which is characterized in that the three-dimensional feature vector extraction module includes:
Second detection unit, for detecting the key feature points in the user's face, wherein the key feature points are some of the characteristic points in the face;
Second judgment unit, for judging whether recognition of face influence factor is normal;
Third selecting unit selects the Partial Feature of the key feature points if normal for the recognition of face influence factor
Point is used as the matching characteristic point, and the feature vector formation institute of the matching characteristic point is extracted from the range data
State three-dimensional feature vector.
10. The mobile terminal of claim 8, wherein the three-dimensional feature vector extraction module further comprises:
a fourth selecting unit, configured to select all of the key feature points as the matching characteristic points if the face recognition influence factors are abnormal, and to extract the feature vectors of the matching characteristic points from the range data to form the three-dimensional feature vector.
11. The mobile terminal of any one of claims 7 to 9, wherein the similarity calculation module is further configured to extract the preset three-dimensional feature vector corresponding to the matching characteristic points from preset range data, combine the preset three-dimensional feature vector with the preset two-dimensional feature vector to form the matching feature vector, and calculate the similarity between the combined feature vector and the matching feature vector.
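The claims leave the similarity measure itself open. A common choice is cosine similarity over the concatenated (2-D + 3-D) vectors, sketched here; the function names and the 0.9 threshold are assumptions, not taken from the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def recognition_passes(two_d_vec, three_d_vec,
                       preset_two_d_vec, preset_three_d_vec,
                       similarity_threshold=0.9):
    # Combined feature vector: 2-D features concatenated with 3-D features.
    combined = list(two_d_vec) + list(three_d_vec)
    # Matching feature vector: preset 2-D and preset 3-D features, likewise.
    matching = list(preset_two_d_vec) + list(preset_three_d_vec)
    return cosine_similarity(combined, matching) >= similarity_threshold
```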
12. The mobile terminal of any one of claims 7 to 9, further comprising:
an unlocking module, configured to unlock the mobile terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610736293.9A CN106326867B (en) | 2016-08-26 | 2016-08-26 | A kind of method and mobile terminal of recognition of face |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610736293.9A CN106326867B (en) | 2016-08-26 | 2016-08-26 | A kind of method and mobile terminal of recognition of face |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106326867A CN106326867A (en) | 2017-01-11 |
CN106326867B true CN106326867B (en) | 2019-06-07 |
Family
ID=57790850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610736293.9A Active CN106326867B (en) | 2016-08-26 | 2016-08-26 | A kind of method and mobile terminal of recognition of face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106326867B (en) |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106791440A (en) * | 2017-01-20 | 2017-05-31 | 奇酷互联网络科技(深圳)有限公司 | Control the method and device of Face datection function |
CN106991377B (en) * | 2017-03-09 | 2020-06-05 | Oppo广东移动通信有限公司 | Face recognition method, face recognition device and electronic device combined with depth information |
CN107392137B (en) * | 2017-07-18 | 2020-09-08 | 艾普柯微电子(上海)有限公司 | Face recognition method and device |
CN107506752A (en) * | 2017-09-18 | 2017-12-22 | 艾普柯微电子(上海)有限公司 | Face identification device and method |
CN109557999B (en) * | 2017-09-25 | 2022-08-26 | 北京小米移动软件有限公司 | Bright screen control method and device and storage medium |
CN107613550B (en) * | 2017-09-27 | 2020-12-29 | Oppo广东移动通信有限公司 | Unlocking control method and related product |
CN108875491B (en) | 2017-10-11 | 2021-03-23 | 北京旷视科技有限公司 | Data updating method, authentication equipment and system for face unlocking authentication and nonvolatile storage medium |
CN108009483A (en) * | 2017-11-28 | 2018-05-08 | 信利光电股份有限公司 | A kind of image collecting device, method and intelligent identifying system |
CN109948314A (en) * | 2017-12-20 | 2019-06-28 | 宁波盈芯信息科技有限公司 | A kind of the face 3D unlocking method and device of smart phone |
CN108280422B (en) * | 2018-01-22 | 2022-06-14 | 百度在线网络技术(北京)有限公司 | Method and device for recognizing human face |
TWI693560B (en) * | 2018-04-12 | 2020-05-11 | 合盈光電科技股份有限公司 | Face recognition method |
CN110472459B (en) * | 2018-05-11 | 2022-12-27 | 华为技术有限公司 | Method and device for extracting feature points |
CN110516516A (en) * | 2018-05-22 | 2019-11-29 | 北京京东尚科信息技术有限公司 | Robot pose measurement method and device, electronic equipment, storage medium |
CN108958613A (en) * | 2018-07-03 | 2018-12-07 | 佛山市影腾科技有限公司 | A kind of terminal unlock method based on recognition of face, device and terminal |
CN108960841A (en) | 2018-07-16 | 2018-12-07 | 阿里巴巴集团控股有限公司 | Method of payment, apparatus and system |
CN109344703B (en) * | 2018-08-24 | 2021-06-25 | 深圳市商汤科技有限公司 | Object detection method and device, electronic equipment and storage medium |
CN109614238B (en) * | 2018-12-11 | 2023-05-12 | 深圳市网心科技有限公司 | Target object identification method, device and system and readable storage medium |
CN111325078A (en) * | 2018-12-17 | 2020-06-23 | 航天信息股份有限公司 | Face recognition method, face recognition device and storage medium |
CN109685014A (en) * | 2018-12-25 | 2019-04-26 | 努比亚技术有限公司 | Face recognition method, device, mobile terminal and storage medium |
CN109670487A (en) * | 2019-01-30 | 2019-04-23 | 汉王科技股份有限公司 | A kind of face identification method, device and electronic equipment |
CN109710371A (en) * | 2019-02-20 | 2019-05-03 | 北京旷视科技有限公司 | Font adjusting method, apparatus and system |
CN109977794A (en) * | 2019-03-05 | 2019-07-05 | 北京超维度计算科技有限公司 | A method of recognition of face is carried out with deep neural network |
CN110378209B (en) * | 2019-06-11 | 2021-12-17 | 深圳市锐明技术股份有限公司 | Driver identity verification method and device |
CN110619239A (en) * | 2019-08-30 | 2019-12-27 | 捷开通讯(深圳)有限公司 | Application interface processing method and device, storage medium and terminal |
CN110889373B (en) * | 2019-11-27 | 2022-04-08 | 中国农业银行股份有限公司 | Block chain-based identity recognition method, information storage method and related device |
CN112766086A (en) * | 2021-01-04 | 2021-05-07 | 深圳阜时科技有限公司 | Identification template registration method and storage medium |
SG10202102048QA (en) * | 2021-03-01 | 2021-08-30 | Alipay Labs Singapore Pte Ltd | A User Authentication Method and System |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103678984A (en) * | 2013-12-20 | 2014-03-26 | 湖北微模式科技发展有限公司 | Method for achieving user authentication by utilizing camera |
CN104598878A (en) * | 2015-01-07 | 2015-05-06 | 深圳市唯特视科技有限公司 | Multi-modal face recognition device and method based on multi-layer fusion of gray level and depth information |
CN104899579A (en) * | 2015-06-29 | 2015-09-09 | 小米科技有限责任公司 | Face recognition method and face recognition device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8593452B2 (en) * | 2011-12-20 | 2013-11-26 | Apple Inc. | Face feature vector construction |
- 2016-08-26: Application CN201610736293.9A filed in China; patent CN106326867B granted, status Active
Non-Patent Citations (2)
Title |
---|
"A Survey of Face Image Detection and Recognition Methods"; Wang Kejun et al.; Techniques of Automation and Applications; 2004-12-31; Vol. 23, No. 12; pp. 5-9 |
"Face Recognition Technology"; Zhang Huisen et al.; Computer Engineering and Design; 2006-06-30; Vol. 27, No. 11; pp. 1923-1928 |
Also Published As
Publication number | Publication date |
---|---|
CN106326867A (en) | 2017-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106326867B (en) | A kind of method and mobile terminal of recognition of face | |
CN107844748B (en) | Auth method, device, storage medium and computer equipment | |
Spreeuwers | Fast and accurate 3D face recognition: using registration to an intrinsic coordinate system and fusion of multiple region classifiers | |
US9922237B2 (en) | Face authentication system | |
US9646193B2 (en) | Fingerprint authentication using touch sensor data | |
Wang et al. | Face liveness detection using 3D structure recovered from a single camera | |
KR101956071B1 (en) | Method and apparatus for verifying a user | |
US9530066B2 (en) | Ear-scan-based biometric authentication | |
WO2020135096A1 (en) | Method and device for determining operation based on facial expression groups, and electronic device | |
US10063541B2 (en) | User authentication method and electronic device performing user authentication | |
WO2015146101A1 (en) | Face comparison device, method, and recording medium | |
US20240087358A1 (en) | Liveness test method and apparatus and biometric authentication method and apparatus | |
KR102476016B1 (en) | Apparatus and method for determining position of eyes | |
US9495580B2 (en) | Face recognition apparatus and method using plural face images | |
US20200302041A1 (en) | Authentication verification using soft biometric traits | |
TW202006630A (en) | Payment method, apparatus, and system | |
KR20190060671A (en) | Iris recognition based user authentication apparatus and method thereof | |
EP3642756A1 (en) | Detecting artificial facial images using facial landmarks | |
US10445546B2 (en) | Authentication method and authentication apparatus using synthesized code for iris | |
CN111274602A (en) | Image characteristic information replacement method, device, equipment and medium | |
TW202217611A (en) | Authentication method | |
Baraki et al. | Authentication of a user using a combination of hand gesture and online signature | |
Bae et al. | Automated 3D Face authentication & recognition | |
CN113837053A (en) | Biological face alignment model training method, biological face alignment method and device | |
Schouten et al. | Non-intrusive face verification by a virtual mirror interface using fractal codes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||