CN105787416A - Mobile terminal-based face recognition method and system - Google Patents


Info

Publication number
CN105787416A
Authority
CN
China
Prior art keywords
extreme point
local feature
mobile terminal
face
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410807233.2A
Other languages
Chinese (zh)
Inventor
谭颖璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Corp
Original Assignee
TCL Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Corp filed Critical TCL Corp
Priority to CN201410807233.2A priority Critical patent/CN105787416A/en
Publication of CN105787416A publication Critical patent/CN105787416A/en
Pending legal-status Critical Current

Abstract

The invention discloses a mobile terminal-based face recognition method and system. In the method, an image of a user's face is first acquired by the built-in camera of a mobile terminal; normalization, local feature vector extraction, and local facial subspace training are then performed in sequence, and the resulting local feature vector is stored. During face recognition, the local feature vector of the current user's face image is obtained and matched against the stored local feature vector, thereby identifying the current user's face information. The technical scheme performs face information acquisition and processing entirely with the storage and computing capacity of the mobile terminal, so the terminal can recognize a face even when disconnected from the network and without any server. The method also takes the limited CPU processing capacity of the mobile terminal into account: by extracting only the local feature vectors that characterize the facial features, it reduces the amount of information to be processed and improves recognition efficiency.

Description

Mobile terminal-based face recognition method and system
Technical field
The present invention relates to the technical field of face recognition, and in particular to a mobile terminal-based face recognition method and system.
Background technology
The development of embedded technology has pushed mobile devices into a brand-new era. The mobile devices people use today offer not only voice and browsing functions but also photography, video recording, and transaction functions, and they increasingly play a role in daily life similar to that of the PC.
With the popularization of the mobile Internet, applications such as mobile financing, mobile banking, and mobile shopping have been accepted by more and more people for their convenience and efficiency. At the same time, they bring safety problems. To improve security, complex passwords or pass strings are currently the common means of transaction authentication; however, reusing the same password across different mobile devices, while convenient, also increases the potential safety hazard. Adding biometric authentication to mobile devices has therefore become an inevitable trend of development.
Face recognition is contactless and easy to capture, and in recent years attempts have been made to apply it to camera-equipped PDAs, mobile phones, tablets, and other mobile devices, verifying the identity of the user through facial features and thereby securing the use of the device.
In the prior art, mobile face recognition usually follows a pattern in which the face image is captured on the terminal, uploaded to a server for processing, and the result is finally returned to the terminal. Because face capture and face feature extraction are performed on separate devices, this pattern places low performance requirements on the mobile device and scales well, but it depends on the Internet: if the mobile device cannot connect, face recognition cannot be carried out, and the pattern places high demands on network performance.
Therefore, the prior art has yet to be improved and developed.
Summary of the invention
In view of the above deficiencies in the prior art, the object of the present invention is to provide a mobile terminal-based face recognition method and system, intended to solve the problem that existing face recognition technology must capture the face image and perform identification on separate devices.
The technical scheme of the invention is as follows:
A mobile terminal-based face recognition method, comprising the steps of:
A. acquiring a user's face image in advance through the built-in camera of a mobile terminal, performing normalization, local feature vector extraction, and local facial subspace training in sequence, and obtaining and storing a local feature vector;
B. during face recognition, obtaining the local feature vector of the current user's face image and matching it against the stored local feature vector, thereby identifying the current user's face information.
In the method, the local feature vector extraction in step A specifically comprises:
A1. constructing a difference-of-Gaussian scale space, then detecting the extreme points in the scale space;
A2. screening the extreme points and rejecting the interference brought by edge response;
A3. determining the gradient direction of each extreme point;
A4. rotating the coordinate axes to the principal direction of the extreme point, then taking a neighborhood space centered on the extreme point and calculating the accumulated values of the gradient directions, forming a local facial feature image.
In step A4, an 8×8 neighborhood space centered on the extreme point is taken and divided into 4 subspaces; the accumulated value of the gradient directions is calculated in each subspace, forming the local facial feature image.
In step A3, the direction of each extreme point is calculated and angular intervals are set; the center of the angular interval containing the most feature points is taken as the gradient direction of the extreme point.
Step A2 specifically comprises:
A21. taking the derivative of the Taylor expansion of the DoG function in the scale space, setting it to 0, and solving for the correction value;
A22. substituting the correction value into the expansion; if the calculated absolute value is less than a predetermined threshold, judging the extreme point to be a low-contrast extreme point and discarding it;
A23. screening out interfering extreme points according to the magnitudes of the principal curvatures of the DoG function across the edge and perpendicular to the edge.
A mobile terminal-based face recognition system, comprising:
a storage module, configured to acquire a user's face image in advance through the built-in camera of a mobile terminal, perform normalization, local feature vector extraction, and local facial subspace training in sequence, and obtain and store a local feature vector;
an identification module, configured to obtain, during face recognition, the local feature vector of the current user's face image and match it against the stored local feature vector, thereby identifying the current user's face information.
The storage module specifically comprises:
a detection unit, configured to construct the difference-of-Gaussian scale space and then detect the extreme points in the scale space;
a screening unit, configured to screen the extreme points and reject the interference brought by edge response;
a direction calculating unit, configured to determine the gradient direction of each extreme point;
a generating unit, configured to rotate the coordinate axes to the principal direction of the extreme point, then take a neighborhood space centered on the extreme point and calculate the accumulated values of the gradient directions, forming a local facial feature image.
In the generating unit, an 8×8 neighborhood space centered on the extreme point is taken and divided into 4 subspaces; the accumulated value of the gradient directions is calculated in each subspace, forming the local facial feature image.
In the direction calculating unit, the direction of each extreme point is calculated and angular intervals are set; the center of the angular interval containing the most feature points is taken as the gradient direction of the extreme point.
The screening unit specifically comprises:
a derivation subunit, configured to take the derivative of the Taylor expansion of the DoG function in the scale space, set it to 0, and solve for the correction value;
a first screening unit, configured to substitute the correction value into the expansion and, if the calculated absolute value is less than a predetermined threshold, judge the extreme point to be a low-contrast extreme point and discard it;
a second screening unit, configured to screen out interfering extreme points according to the magnitudes of the principal curvatures of the DoG function across the edge and perpendicular to the edge.
Beneficial effects: the present invention utilizes the storage and computing capability of the mobile terminal to acquire and process face information without a server, so that the mobile terminal can perform face recognition even without a network connection. The method fully takes into account the limited CPU processing capability of the mobile terminal: since a whole face is composed of multiple local features, the invention adopts a local feature vector extraction method, extracting and matching only the local feature vectors that characterize the facial features, which reduces the amount of information to be processed and improves recognition efficiency.
Description of the drawings
Fig. 1 is a flow chart of a preferred embodiment of the mobile terminal-based face recognition method of the present invention.
Fig. 2 is a detailed flow chart of step S101 in the method shown in Fig. 1.
Fig. 3 is an architecture diagram of the Gaussian pyramid in the present invention.
Fig. 4 is a detailed flow chart of step S202 in the flow shown in Fig. 2.
Fig. 5 is a flow chart of a preferred embodiment in which the method of the present invention is applied to the field of payment.
Fig. 6 is a flow chart of a preferred embodiment of the mobile terminal-based face recognition system of the present invention.
Fig. 7 is a detailed structural block diagram of the storage module in the system shown in Fig. 6.
Fig. 8 is a detailed structural block diagram of the screening unit in the module shown in Fig. 7.
Detailed description of the invention
The present invention provides a mobile terminal-based face recognition method and system. To make the purpose, technical scheme, and effect of the present invention clearer, the invention is described in more detail below. It should be appreciated that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
Referring to Fig. 1, Fig. 1 is a flow chart of a preferred embodiment of the mobile terminal-based face recognition method of the present invention. As shown, it includes the steps:
S101. acquiring a user's face image in advance through the built-in camera of the mobile terminal, performing normalization, local feature vector extraction, and local facial subspace training in sequence, and obtaining and storing a local feature vector;
S102. during face recognition, obtaining the local feature vector of the current user's face image and matching it against the stored local feature vector, thereby identifying the current user's face information.
In the embodiment of the present invention, the mobile terminal is used to acquire and process face information without the participation of a server, so that the mobile terminal can perform face recognition even without a network connection, reducing the requirements on network performance. Since a whole face is composed of multiple local features, the present invention adopts a local feature vector extraction method, extracting and matching the local feature vectors that characterize the facial features, which improves recognition efficiency.
Specifically, in step S101, a user's face image is first acquired through the built-in camera of the mobile terminal, and the acquired image is then preprocessed, i.e., normalized. Local feature vector extraction and local facial subspace learning are then performed in sequence.
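Purely as an illustrative sketch of the normalization step mentioned above (the description does not fix a target size or colour handling; the 64×64 output, the luma weights, and the nearest-neighbour resampling below are assumptions added here):

```python
import numpy as np

def normalize_face(image, size=(64, 64)):
    """Assumed preprocessing: convert to grey scale, resample to a fixed
    size with nearest-neighbour sampling, and stretch intensities to [0, 1]."""
    img = np.asarray(image, dtype=np.float64)
    if img.ndim == 3:                                  # RGB -> grey (luma weights)
        img = img @ np.array([0.299, 0.587, 0.114])
    ys = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
    img = img[np.ix_(ys, xs)]                          # nearest-neighbour resize
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros(size)
```

A fixed-size, intensity-normalized image makes the later scale-space construction independent of camera resolution and exposure.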
The process of local feature vector extraction is described in detail below. As shown in Fig. 2, it can be refined into the following steps:
S201. constructing a difference-of-Gaussian scale space, then detecting the extreme points in the scale space;
The purpose of scale-space theory is to simulate the multi-scale features of image data. The Gaussian convolution kernel is the only linear kernel that realizes scale change, and the scale space of an image is defined as formula (1):

L(x, y, σ) = G(x, y, σ) * I(x, y)    (1)

where (x, y) are the spatial coordinates and σ is the scale coordinate: the larger the scale coordinate, the closer to the global features; the smaller the scale coordinate, the closer to the detail features.
G(x, y, σ) is a variable-scale Gaussian function, defined as formula (2):

G(x, y, σ) = (1 / (2πσ²)) · exp( −(x² + y²) / (2σ²) )    (2)
The DoG (Difference of Gaussian) function is applied to construct the difference-of-Gaussian scale space, defined as formula (3):

D(x, y, σ) = ( G(x, y, kσ) − G(x, y, σ) ) * I(x, y) = L(x, y, kσ) − L(x, y, σ)    (3)

where k is a constant. Each layer of the DoG pyramid is obtained by taking the difference of two adjacent scales of the layer above, up to the top of the DoG pyramid. The difference-of-Gaussian scale space describes the variation of the pixel values; if a pixel at the same position does not change, it is considered to carry no feature, as shown in Fig. 3.
After the scale space of the image is obtained, the extreme points in the scale space are detected.
Each sample point must be compared with a total of 26 neighborhood points: its 8 neighbors in its own scale, and the 9 neighbors in each of the scales above and below. If the sample point is greater than, or less than, all 26 neighborhood points, it is considered an extreme point of the image at that scale.
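The 26-neighbour comparison can be sketched as below, where `dog` is a list of same-sized DoG layers ordered by scale; boundary handling is omitted for brevity:

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """Check whether dog[s][y, x] is strictly greater or strictly smaller
    than all 26 neighbours: the 3x3 patches at scales s-1, s, and s+1,
    excluding the point itself (it must be the unique max or min)."""
    value = dog[s][y, x]
    cube = np.stack([layer[y - 1:y + 2, x - 1:x + 2] for layer in dog[s - 1:s + 2]])
    if value > 0:
        return bool(value == cube.max() and (cube == value).sum() == 1)
    return bool(value == cube.min() and (cube == value).sum() == 1)
```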
S202. screening the extreme points and rejecting the interference brought by edge response;
This step removes extreme points with low contrast or poor stability, in order to strengthen the stability and noise resistance of the matching. The extreme points are screened by fitting a quadratic function and examining the principal curvature magnitudes.
Specifically, as shown in Fig. 4, step S202 can be refined into the following steps:
S301. taking the derivative of the Taylor expansion of the DoG function in the scale space, setting it to 0, and solving for the correction value;
The Taylor expansion of the DoG function in the scale space is formula (4):

D(X) = D + (∂D/∂X)ᵀ · X + (1/2) · Xᵀ · (∂²D/∂X²) · X    (4)

where X = (x, y, σ)ᵀ. Taking the derivative of the expansion and setting it to 0, the correction value is obtained as formula (5):

X̂ = −(∂²D/∂X²)⁻¹ · (∂D/∂X)    (5)
S302. substituting the correction value into the expansion; if the calculated absolute value is less than a predetermined threshold, the extreme point is judged to be a low-contrast extreme point and discarded;
Substituting the correction value into the expansion gives formula (6):

D(X̂) = D + (1/2) · (∂D/∂X)ᵀ · X̂    (6)

If |D(X̂)| is less than the predetermined threshold, the extreme point is a low-contrast extreme point and is therefore discarded.
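Steps S301 and S302 together can be sketched as follows; the 3×3×3 sampling cube, the finite-difference derivatives, and the 0.03 default threshold are assumptions (the description only speaks of "a predetermined threshold"):

```python
import numpy as np

def keep_by_contrast(cube, threshold=0.03):
    """cube: 3x3x3 DoG values (scale, y, x) centred on a candidate point.
    Fit the Taylor expansion (4), solve for the offset X_hat (formula (5)),
    substitute it back (formula (6)), and keep the point only when
    |D(X_hat)| reaches the threshold."""
    c = cube[1, 1, 1]
    grad = 0.5 * np.array([cube[1, 1, 2] - cube[1, 1, 0],    # dD/dx
                           cube[1, 2, 1] - cube[1, 0, 1],    # dD/dy
                           cube[2, 1, 1] - cube[0, 1, 1]])   # dD/dsigma
    dxx = cube[1, 1, 2] - 2 * c + cube[1, 1, 0]
    dyy = cube[1, 2, 1] - 2 * c + cube[1, 0, 1]
    dss = cube[2, 1, 1] - 2 * c + cube[0, 1, 1]
    dxy = 0.25 * (cube[1, 2, 2] - cube[1, 2, 0] - cube[1, 0, 2] + cube[1, 0, 0])
    dxs = 0.25 * (cube[2, 1, 2] - cube[2, 1, 0] - cube[0, 1, 2] + cube[0, 1, 0])
    dys = 0.25 * (cube[2, 2, 1] - cube[2, 0, 1] - cube[0, 2, 1] + cube[0, 0, 1])
    hess = np.array([[dxx, dxy, dxs], [dxy, dyy, dys], [dxs, dys, dss]])
    x_hat = -np.linalg.solve(hess, grad)      # correction value, formula (5)
    d_hat = c + 0.5 * grad @ x_hat            # contrast at the corrected point
    return abs(d_hat) >= threshold
```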
S303. screening out interfering extreme points according to the magnitudes of the principal curvatures of the DoG function across the edge and perpendicular to the edge.
Since the DoG operator has a strong edge response, the interference brought by the edge response must also be rejected. The DoG function has a large principal curvature in the direction across the edge and a small principal curvature in the direction perpendicular to the edge, so the interfering extreme points can be screened out by judging the magnitudes of the principal curvatures.
First, the 2×2 Hessian matrix at the location and scale of the extreme point is calculated, with the derivatives estimated by adjacent differences around the extreme point. The matrix H is formula (7):

H = [ Dxx  Dxy ]
    [ Dxy  Dyy ]    (7)

where Dxx denotes the second derivative of the image in the x direction at a certain scale of the DoG pyramid, and Dyy and Dxy are defined analogously.
The principal curvatures of D are proportional to the eigenvalues of H. Let α be the maximum eigenvalue and β the minimum eigenvalue; then:

Tr(H) = Dxx + Dyy = α + β,   Det(H) = Dxx · Dyy − (Dxy)² = α · β    (8)

Let α = γ · β; then:

Tr(H)² / Det(H) = (α + β)² / (α · β) = (γ + 1)² / γ    (9)

If formula (10) is satisfied:

Tr(H)² / Det(H) > (γ + 1)² / γ    (10)

the extreme point is rejected. Considering the storage and computing capability of the mobile terminal, γ is taken here as 10. This finally yields a group of stable extreme points, i.e., the representative feature points of the image.
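The edge-response test of formulas (7) through (10) can be sketched as below; `patch` is the 3×3 DoG neighbourhood of the extreme point at its own scale, and γ = 10 follows the value chosen above:

```python
import numpy as np

def passes_edge_test(patch, gamma=10.0):
    """Compute the 2x2 Hessian by finite differences (formula (7)) and keep
    the point only when Tr(H)^2 / Det(H) stays below (gamma+1)^2 / gamma,
    i.e. when the two principal curvatures are of comparable magnitude."""
    c = patch[1, 1]
    dxx = patch[1, 2] - 2 * c + patch[1, 0]
    dyy = patch[2, 1] - 2 * c + patch[0, 1]
    dxy = (patch[2, 2] - patch[2, 0] - patch[0, 2] + patch[0, 0]) / 4.0
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    if det <= 0:                 # curvatures of opposite sign or degenerate: reject
        return False
    return bool(tr * tr / det < (gamma + 1) ** 2 / gamma)
```

A blob-like extremum has two strong curvatures and a small ratio; a point lying on an edge has one near-zero curvature, which drives the ratio up and rejects it.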
S203. determining the gradient direction of each extreme point;
The gradient direction of each extreme point is determined first; rotating according to the direction of the gradient gives the feature descriptor (the local facial feature image) rotational invariance. The gradient of an extreme point is formula (11):

∇L = ( L(x+1, y) − L(x−1, y),  L(x, y+1) − L(x, y−1) )    (11)

The gradient magnitude is formula (12):

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )    (12)

The gradient direction is formula (13):

θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )    (13)

The direction of each extreme point is calculated and angular intervals are set; the center direction of the angular interval containing the most extreme points is taken as the gradient direction of the extreme point. Depending on the required precision, the number of angular intervals is typically set to 36, i.e., one interval every 10 degrees.
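The orientation-histogram voting described above can be sketched as follows, assuming the gradient magnitudes and angles of the points around the extreme point have already been computed with formulas (12) and (13):

```python
import numpy as np

def dominant_orientation(magnitudes, angles_deg, num_bins=36):
    """Accumulate gradient magnitudes into 10-degree bins (36 bins over
    360 degrees) and return the centre of the bin with the largest vote
    as the principal direction of the extreme point."""
    hist = np.zeros(num_bins)
    bin_width = 360.0 / num_bins
    for m, a in zip(magnitudes, angles_deg):
        hist[int((a % 360.0) // bin_width) % num_bins] += m
    best = int(np.argmax(hist))
    return best * bin_width + bin_width / 2.0   # bin centre, in degrees
```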
S204. rotating the coordinate axes to the principal direction of the extreme point, then taking a neighborhood space centered on the extreme point and calculating the accumulated values of the gradient directions, forming the local facial feature image.
The coordinate axes are rotated to the principal direction of the extreme point, and an 8×8 neighborhood space centered on the extreme point is taken. The space is then divided into 4 subspaces, and the accumulated value of the gradient directions is calculated in each subspace, forming the local facial feature image (the feature descriptor).
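A minimal sketch of the descriptor formation: the 8×8 neighbourhood is split into four 4×4 sub-regions as described above; the 8 orientation bins per sub-region and the final normalization are assumptions added for illustration:

```python
import numpy as np

def local_descriptor(patch, num_bins=8):
    """patch: 8x8 grey neighbourhood around an extreme point (already
    rotated to its principal direction).  Split it into four 4x4
    sub-regions and accumulate an orientation histogram per sub-region,
    giving a 4 * num_bins = 32-dimensional vector."""
    gy, gx = np.gradient(np.asarray(patch, dtype=np.float64))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    desc = []
    for ys in (slice(0, 4), slice(4, 8)):
        for xs in (slice(0, 4), slice(4, 8)):
            hist = np.zeros(num_bins)
            for m, a in zip(mag[ys, xs].ravel(), ang[ys, xs].ravel()):
                hist[int(a // (360.0 / num_bins)) % num_bins] += m
            desc.extend(hist)
    v = np.array(desc)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v   # normalise for illumination robustness
```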
The local facial feature image is then projected into the local facial subspace for training, and the resulting local feature vector is stored. The local facial subspace is an image storage unit deployed in the mobile terminal which stores local facial feature images captured under different conditions; the acquired local facial feature image is projected into this subspace for training, finally giving the local feature vector.
In step S102, the local feature vector of the current user's face image is obtained by the same method as above and is then matched against the stored local feature vector, thereby identifying the current user's face information.
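The matching criterion itself is not specified in the description; one common choice (an assumption here, not the claimed method) is a nearest-neighbour search with a distance-ratio test:

```python
import numpy as np

def match_user(query_vecs, stored_vecs, ratio=0.8):
    """Match each query descriptor to its nearest stored descriptor and
    accept it when the nearest distance is below `ratio` times the
    second-nearest (a Lowe-style ratio test, used here as an assumption).
    Returns the fraction of query descriptors that found a match."""
    matched = 0
    for q in query_vecs:
        d = np.linalg.norm(stored_vecs - q, axis=1)
        if len(d) >= 2:
            first, second = np.partition(d, 1)[:2]
            if second == 0 or first < ratio * second:
                matched += 1
        elif len(d) == 1 and d[0] < 1e-6:
            matched += 1
    return matched / max(len(query_vecs), 1)
```

The caller would then compare the returned fraction against a decision threshold to accept or reject the current user.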
The method of the present invention has a wide range of applications, for instance in various interactive fields such as unlocking and payment.
The application of the method of the present invention to face payment is described in detail below.
A payment account must first be created:
A bank card, credit card, or third-party payment account is entered, and the system checks whether the corresponding payment account already exists; if it exists, the operation ends. If it does not exist, the user is prompted whether to perform facial information extraction: if the user agrees, face information extraction is carried out and a unique PayID is assigned to the payment account; otherwise the operation ends. The PayID is a string of ciphertext obtained by encrypting the payment account and a serial number.
The face information extraction here is identical to the process of step S101 and is not repeated. In addition, the obtained local feature vector is bound to the PayID and stored in a transaction information database, which stores the PayIDs, the local feature vectors, and their correspondence.
During transaction verification:
The face information of the current user is obtained through the built-in camera of the mobile terminal, the local feature vector is extracted, the local feature vector corresponding to the current PayID is looked up in the transaction information database and matched against the local feature vector of the current user, and the result is returned.
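The transaction lookup can be sketched as below; the description only says the PayID is a string of ciphertext obtained by encrypting the payment account and a serial number, so the SHA-256 construction and the in-memory dictionary standing in for the transaction information database are assumptions:

```python
import hashlib

def make_pay_id(account, serial, secret="demo-key"):
    """Illustrative PayID: a ciphertext-like string derived from the
    payment account and a serial number.  The actual cipher is not
    specified in the description; keyed SHA-256 is an assumption here."""
    return hashlib.sha256(f"{account}:{serial}:{secret}".encode()).hexdigest()

def verify_payment(transaction_db, pay_id, query_vec, match_fn):
    """Look up the stored local feature vector bound to pay_id and run
    the supplied matcher; returns True when the current user matches."""
    stored = transaction_db.get(pay_id)
    return stored is not None and match_fn(query_vec, stored)
```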
If the match succeeds, the subsequent operation is carried out; if not, the user is prompted to verify again. The whole process, as shown in Fig. 5, includes the steps:
Start;
S401. facial information acquisition;
S402. normalization;
S403. local feature vector extraction;
S404. local facial subspace training;
S405. local feature vector lookup and matching;
S406. judging whether the match succeeds; if so, proceeding to step S407, otherwise proceeding to step S408;
S407. payment;
S408. prompting whether to verify again; if the user chooses yes, returning to step S401, otherwise ending;
End.
Based on the above method, the present invention also provides a preferred embodiment of a mobile terminal-based face recognition system. As shown in Fig. 6, it comprises:
a storage module 100, configured to acquire a user's face image in advance through the built-in camera of the mobile terminal, perform normalization, local feature vector extraction, and local facial subspace training in sequence, and obtain and store a local feature vector;
an identification module 200, configured to obtain, during face recognition, the local feature vector of the current user's face image and match it against the stored local feature vector, thereby identifying the current user's face information.
Further, as shown in Fig. 7, the storage module 100 specifically comprises:
a detection unit 110, configured to construct the difference-of-Gaussian scale space and then detect the extreme points in the scale space;
a screening unit 120, configured to screen the extreme points and reject the interference brought by edge response;
a direction calculating unit 130, configured to determine the gradient direction of each extreme point;
a generating unit 140, configured to rotate the coordinate axes to the principal direction of the extreme point, then take a neighborhood space centered on the extreme point and calculate the accumulated values of the gradient directions, forming the local facial feature image.
In the generating unit 140, an 8×8 neighborhood space centered on the extreme point is taken and divided into 4 subspaces; the accumulated value of the gradient directions is calculated in each subspace, forming the local facial feature image.
In the direction calculating unit 130, the direction of each extreme point is calculated and angular intervals are set; the center of the angular interval containing the most feature points is taken as the gradient direction of the extreme point.
Further, as shown in Fig. 8, the screening unit 120 specifically comprises:
a derivation subunit 121, configured to take the derivative of the Taylor expansion of the DoG function in the scale space, set it to 0, and solve for the correction value;
a first screening unit 122, configured to substitute the correction value into the expansion and, if the calculated absolute value is less than a predetermined threshold, judge the extreme point to be a low-contrast extreme point and discard it;
a second screening unit 123, configured to screen out interfering extreme points according to the magnitudes of the principal curvatures of the DoG function across the edge and perpendicular to the edge.
The technical details of the above modules and units have already been described in the method above and are not repeated here.
In summary, the present invention utilizes the storage and computing capability of the mobile terminal to acquire and process face information without a server, so that the mobile terminal can perform face recognition even without a network connection. The method fully takes into account the limited CPU processing capability of the mobile terminal: since a whole face is composed of multiple local features, the invention adopts a local feature vector extraction method, extracting and matching the local feature vectors that characterize the facial features, which improves recognition efficiency.
It should be appreciated that the application of the present invention is not limited to the above examples; those of ordinary skill in the art can make improvements or transformations in accordance with the above description, and all such improvements and transformations shall fall within the protection scope of the appended claims of the present invention.

Claims (10)

1. A mobile terminal-based face recognition method, characterized by comprising the steps of:
A. acquiring a user's face image in advance through the built-in camera of a mobile terminal, performing normalization, local feature vector extraction, and local facial subspace training in sequence, and obtaining and storing a local feature vector;
B. during face recognition, obtaining the local feature vector of the current user's face image and matching it against the stored local feature vector, thereby identifying the current user's face information.
2. The mobile terminal-based face recognition method according to claim 1, characterized in that, in step A, the local feature vector extraction specifically comprises:
A1. constructing a difference-of-Gaussian scale space, then detecting the extreme points in the scale space;
A2. screening the extreme points and rejecting the interference brought by edge response;
A3. determining the gradient direction of each extreme point;
A4. rotating the coordinate axes to the principal direction of the extreme point, then taking a neighborhood space centered on the extreme point and calculating the accumulated values of the gradient directions, forming a local facial feature image.
3. The mobile terminal-based face recognition method according to claim 2, characterized in that, in step A4, an 8×8 neighborhood space centered on the extreme point is taken and divided into 4 subspaces; the accumulated value of the gradient directions is calculated in each subspace, forming the local facial feature image.
4. The mobile terminal-based face recognition method according to claim 2, characterized in that, in step A3, the direction of each extreme point is calculated and angular intervals are set; the center of the angular interval containing the most feature points is taken as the gradient direction of the extreme point.
5. The mobile terminal-based face recognition method according to claim 2, characterized in that step A2 specifically comprises:
A21. taking the derivative of the Taylor expansion of the DoG function in the scale space, setting it to 0, and solving for the correction value;
A22. substituting the correction value into the expansion; if the calculated absolute value is less than a predetermined threshold, judging the extreme point to be a low-contrast extreme point and discarding it;
A23. screening out interfering extreme points according to the magnitudes of the principal curvatures of the DoG function across the edge and perpendicular to the edge.
6. A mobile terminal-based face recognition system, characterized by comprising:
a storage module, configured to acquire a user's face image in advance through the built-in camera of a mobile terminal, perform normalization, local feature vector extraction, and local facial subspace training in sequence, and obtain and store a local feature vector;
an identification module, configured to obtain, during face recognition, the local feature vector of the current user's face image and match it against the stored local feature vector, thereby identifying the current user's face information.
7. The mobile terminal-based face recognition system according to claim 6, characterized in that the storage module specifically comprises:
a detection unit, configured to construct the difference-of-Gaussian scale space and then detect the extreme points in the scale space;
a screening unit, configured to screen the extreme points and reject the interference brought by edge response;
a direction calculating unit, configured to determine the gradient direction of each extreme point;
a generating unit, configured to rotate the coordinate axes to the principal direction of the extreme point, then take a neighborhood space centered on the extreme point and calculate the accumulated values of the gradient directions, forming a local facial feature image.
8. The mobile terminal-based face recognition system according to claim 7, characterized in that, in the generating unit, an 8×8 neighborhood space centered on the extreme point is taken and divided into 4 subspaces; the accumulated value of the gradient directions is calculated in each subspace, forming the local facial feature image.
9. The mobile terminal-based face recognition system according to claim 7, characterized in that, in the direction calculating unit, the direction of each extreme point is calculated and angular intervals are set; the center of the angular interval containing the most feature points is taken as the gradient direction of the extreme point.
10. The mobile terminal-based face recognition system according to claim 7, characterized in that the screening unit specifically comprises:
a derivation subunit, configured to take the derivative of the Taylor expansion of the DoG function in the scale space, set it to 0, and solve for the correction value;
a first screening unit, configured to substitute the correction value into the expansion and, if the calculated absolute value is less than a predetermined threshold, judge the extreme point to be a low-contrast extreme point and discard it;
a second screening unit, configured to screen out interfering extreme points according to the magnitudes of the principal curvatures of the DoG function across the edge and perpendicular to the edge.
CN201410807233.2A 2014-12-23 2014-12-23 Mobile terminal-based face recognition method and system Pending CN105787416A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410807233.2A CN105787416A (en) 2014-12-23 2014-12-23 Mobile terminal-based face recognition method and system


Publications (1)

Publication Number Publication Date
CN105787416A true CN105787416A (en) 2016-07-20

Family

ID=56386415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410807233.2A Pending CN105787416A (en) 2014-12-23 2014-12-23 Mobile terminal-based face recognition method and system

Country Status (1)

Country Link
CN (1) CN105787416A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106384126A (en) * 2016-09-07 2017-02-08 东华大学 Clothes pattern identification method based on contour curvature feature points and support vector machine
CN106407418A (en) * 2016-09-23 2017-02-15 Tcl集团股份有限公司 A face identification-based personalized video recommendation method and recommendation system
CN107316028A (en) * 2017-06-30 2017-11-03 广东工业大学 The quick location tracking method of Given Face and device in crowd based on mobile terminal
WO2019192217A1 (en) * 2018-04-04 2019-10-10 北京市商汤科技开发有限公司 Identity authentication, unlocking and payment methods and apparatuses, storage medium, product and device
CN111160098A (en) * 2019-11-21 2020-05-15 长春理工大学 Expression change face recognition method based on SIFT features

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510257A (en) * 2009-03-31 2009-08-19 华为技术有限公司 Human face similarity degree matching method and device
CN101635028A (en) * 2009-06-01 2010-01-27 北京中星微电子有限公司 Image detecting method and image detecting device
CN103034982A (en) * 2012-12-19 2013-04-10 南京大学 Image super-resolution rebuilding method based on variable focal length video sequence
CN103413119A (en) * 2013-07-24 2013-11-27 中山大学 Single sample face recognition method based on face sparse descriptors

Similar Documents

Publication Publication Date Title
TWI687879B (en) Server, client, user verification method and system
CN105787416A (en) Mobile terminal-based face recognition method and system
CN103390153B (en) For the method and system of the textural characteristics of biological characteristic validation
CN108345779B (en) Unlocking control method and related product
CN108021912B (en) Fingerprint identification method and device
CN107590430A (en) Biopsy method, device, equipment and storage medium
CN103793642B (en) Mobile internet palm print identity authentication method
CN105518709A (en) Method, system and computer program product for identifying human face
CN103514446A (en) Outdoor scene recognition method fused with sensor information
CN109829370A (en) Face identification method and Related product
JP6532523B2 (en) Management of user identification registration using handwriting
CN107194833A (en) Hotel management method, system and storage medium based on recognition of face
CN106303599A (en) A kind of information processing method, system and server
EP2701096A2 (en) Image processing device and image processing method
CN107408207A (en) Fingerprint localizes
CN104794386A (en) Data processing method and device based on face recognition
CN107832598B (en) Unlocking control method and related product
CN113515988A (en) Palm print recognition method, feature extraction model training method, device and medium
CN107038462A (en) Equipment control operation method and system
CN109816543A (en) A kind of image lookup method and device
CN111178129B (en) Multi-mode personnel identification method based on human face and gesture
CN113642639B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN108647640A (en) The method and electronic equipment of recognition of face
CN109089102A (en) A kind of robotic article method for identifying and classifying and system based on binocular vision
CN105262758A (en) Identity authentication method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160720