CN103593598B - Online user authentication method and system based on liveness detection and face recognition - Google Patents

Online user authentication method and system based on liveness detection and face recognition

Info

Publication number
CN103593598B
CN103593598B (application CN201310602042.8A)
Authority
CN
China
Prior art keywords
face
user
photo
result
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310602042.8A
Other languages
Chinese (zh)
Other versions
CN103593598A (en)
Inventor
张珅哲
尹科奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI JUNYU DIGITAL TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI JUNYU DIGITAL TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI JUNYU DIGITAL TECHNOLOGY Co Ltd
Priority to CN201310602042.8A
Publication of CN103593598A
Application granted
Publication of CN103593598B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2221/00: Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/21: Indexing scheme relating to G06F 21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/2133: Verifying human interaction, e.g., Captcha

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention discloses an online user authentication method and system based on liveness detection and face recognition. The method includes an online user registration step and an online user authentication step. The authentication step comprises a liveness detection step, an image processing step, a feature extraction step, a face comparison step and a result processing step. The liveness detection step confirms whether the user being authenticated is a live person and obtains a face photo; the image processing step processes the captured face photo; the feature extraction step extracts facial component features from the processed photo; the face comparison step compares the extracted feature data of the captured face image with the corresponding face data in the user face feature database and, when the similarity exceeds a set threshold, outputs the matching result. The invention prevents authentication from being spoofed with a video containing a face, improving system security, while also shortening recognition time and improving recognition accuracy.

Description

Online user authentication method and system based on liveness detection and face recognition
Technical field
The invention belongs to the fields of computer technology and face recognition, and relates to an online authentication and identification method, in particular to an online user authentication method based on liveness detection and face recognition; it further relates to an online user authentication system based on liveness detection and face recognition.
Background technology
In today's information society, information technology, represented by computers and networks, penetrates almost every aspect of life. But while we enjoy the benefits of information, it also casts a shadow of insecurity, so in various activities people often need to authenticate their personal identity in order to guarantee information security. Because of its naturalness, stability and ease of capture, the human face has been applied to identity authentication.
The existing authentication process, shown in Fig. 1, mainly includes two stages. During online registration, the user's face photo is captured, face features are extracted and stored in a feature database. During online authentication, a face photo of the user is captured; face detection, image processing and face feature extraction are then performed, and the extracted face features are compared with the corresponding features in the database to obtain a result.
The existing authentication method has shortcomings. First, the photo-capture module at the camera cannot distinguish an image or video from a real person. Second, its recognition speed and recognition rate do not meet users' real demands.
In view of this, there is an urgent need to design a new online authentication scheme to overcome the above drawbacks of existing authentication methods.
Summary of the invention
The technical problem to be solved by the present invention is to provide an online user authentication method based on liveness detection and face recognition that prevents authentication from being spoofed with a video containing a face, improving system security, while shortening recognition time and improving recognition accuracy.
In addition, the present invention also provides an online user authentication system based on liveness detection and face recognition, which likewise prevents authentication from being spoofed with a video containing a face, improves system security, shortens recognition time and improves recognition accuracy.
To solve the above technical problem, the present invention adopts the following technical scheme:
An online user authentication method based on liveness detection and face recognition, the method comprising:
Step S10, online user registration: the user fills in information online; the corresponding second-generation ID card photo is retrieved over the public security intranet according to the submitted ID card number, face feature values are extracted from the photo, and a user face feature database is established;
Step S20, online user authentication, comprising a liveness detection step, an image processing step, a feature extraction step, a face comparison step and a result processing step; specifically:
- Step S21, liveness detection: confirm whether the user is a live person and obtain the optimal face photo. A decision program uses the head rotation direction as its input command: only if the head rotation drives a ball to the specified position within a set time is the user judged to be live and allowed to authenticate. Meanwhile, the photo with the smallest rotation angle is chosen as the optimal face photo. The liveness detection step includes face detection and face pose detection.
In the face detection step, it is determined whether a face is present and the facial features are located. Each frame captured by the camera undergoes grayscale transformation and filtering to obtain a high-quality grayscale image. Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost-Cascade classifier, which determines whether the frame contains a face. According to the geometric layout of the face, the AdaBoost-Cascade classifier then locates the eyes, both eyebrows, the nose, the mouth and the jaw within the face window, determining the positions of the facial features.
In the face pose detection step, a pose estimation method based on an elliptical template and facial feature positions is used. Face detection first determines the positions of the eyes, nose and mouth; an ellipse is then fitted to the boundary of the detected face connected region to obtain the elliptical template, and the location parameters of the eyes, mouth and nose within the template are computed. Finally, the location parameters are fed to a three-layer neural network to obtain a rough estimate of the pose parameters. To improve the precision of the pose estimation, the input image is processed according to the rough estimate and fed to the corresponding linear correlation filter, yielding a more accurate face pose estimation result.
- Step S22, image processing: the face photo captured at the optimal pose undergoes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening, in preparation for feature extraction;
- Step S23, feature extraction: facial component features, including the bare face, eyebrows, eyes and mouth, are extracted from the processed face photo using the principal component method;
- Step S24, face comparison: the feature data extracted from the captured face image is compared with the second-generation ID card face data; a threshold is set, and when the similarity exceeds this threshold the matching result is output;
- Step S25, result processing: if the result matches, "authentication succeeded" is prompted, and the face feature values are extracted and saved to the server's user face database; if the result does not match, "re-authenticate" is prompted and photo capture restarts.
An online user authentication method based on liveness detection and face recognition, the method comprising:
Step S10, online user registration: the user fills in information online, the corresponding face feature values are obtained, and a user face feature database is established;
Step S20, online user authentication, comprising a liveness detection step, an image processing step, a feature extraction step, a face comparison step and a result processing step; specifically:
- Step S21, liveness detection: confirm whether the user being authenticated is a live person and obtain a face photo;
- Step S22, image processing: process the captured face photo;
- Step S23, feature extraction: extract facial component features from the processed face photo;
- Step S24, face comparison: compare the feature data extracted from the captured face image with the corresponding face data in the user face feature database; a threshold is set, and when the similarity exceeds this threshold the matching result is output;
- Step S25, result processing: take the corresponding action according to the face comparison result.
As a preferred embodiment of the present invention, the method for judging liveness in step S21 is: a decision program uses the head rotation direction as its input command; only if the head rotation drives an object to the specified position is the user judged to be live; meanwhile, the photo with the smallest rotation angle is chosen as the optimal face photo.
As a preferred embodiment of the present invention, the liveness detection step in step S21 includes face detection and face pose detection.
In the face detection step, it is determined whether a face is present and the facial features are located; each frame captured by the camera undergoes grayscale transformation and filtering to obtain a high-quality grayscale image;
Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost-Cascade classifier, which determines whether the frame contains a face;
according to the geometric layout of the face, the AdaBoost-Cascade classifier locates the eyes, both eyebrows, the nose, the mouth and the jaw within the face window, determining the positions of the facial features.
In the face pose detection step, a pose estimation method based on an elliptical template and facial feature positions is used;
face detection first determines the positions of the eyes, nose and mouth; an ellipse is then fitted to the boundary of the detected face connected region to obtain the elliptical template, the location parameters of the eyes, mouth and nose within the template are computed, and the location parameters are fed to a three-layer neural network to obtain a rough estimate of the pose parameters;
to improve the precision of the pose estimation, the input image is processed according to the rough estimate and fed to the corresponding linear correlation filter, yielding a more accurate face pose estimation result.
As a preferred embodiment of the present invention, in step S22 the face photo captured at the optimal pose undergoes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening, in preparation for feature extraction;
in step S23, the extracted facial component features include the bare face, eyebrows, eyes and mouth, and the principal component method is used to extract them;
in step S25, if the result matches, "authentication succeeded" is prompted and the face feature values are extracted and saved to the server's user face database; if the result does not match, "re-authenticate" is prompted and photo capture restarts.
An online user authentication system based on liveness detection and face recognition, the system comprising:
an online user registration module, for the user to fill in information online; it retrieves the corresponding second-generation ID card photo over the public security intranet according to the submitted ID card number, extracts face feature values from the photo, and establishes a user face feature database;
an online user authentication module, comprising a liveness detection unit, an image processing unit, a feature extraction unit, a face comparison unit and a result processing unit.
The liveness detection unit confirms whether the user is a live person and obtains the optimal face photo. The liveness detection unit includes a head-rotation-direction acquisition subunit, a position generation subunit and an object motion driving subunit: the head-rotation-direction acquisition subunit obtains video of the head rotating and derives the rotation direction of the head; the position generation subunit randomly generates the position of an object, as well as the specified position the object must reach; the object motion driving subunit drives the object according to the head rotation direction, and only if the head rotation drives the object to the specified position is the user judged to be live; meanwhile, the photo with the smallest rotation angle is chosen as the optimal face photo. The liveness detection unit also includes a face detection subunit and a face pose detection subunit.
The face detection subunit determines whether a face is present and locates the facial features: each frame captured by the camera undergoes grayscale transformation and filtering to obtain a high-quality grayscale image; Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost-Cascade classifier, which determines whether the frame contains a face; according to the geometric layout of the face, the AdaBoost-Cascade classifier locates the eyes, both eyebrows, the nose, the mouth and the jaw within the face window, determining the positions of the facial features.
The face pose detection subunit performs pose detection using a pose estimation method based on an elliptical template and facial feature positions: face detection determines the positions of the eyes, nose and mouth; an ellipse is fitted to the boundary of the detected face connected region to obtain the elliptical template; the location parameters of the eyes, mouth and nose within the template are computed and fed to a three-layer neural network to obtain a rough estimate of the pose parameters; to improve the precision of the pose estimation, the input image is processed according to the rough estimate and fed to the corresponding linear correlation filter, yielding a more accurate face pose estimation result.
The image processing unit performs light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening on the face photo captured at the optimal pose, in preparation for feature extraction.
The feature extraction unit extracts facial component features, including the bare face, eyebrows, eyes and mouth, from the processed face photo using the principal component method.
The face comparison unit compares the feature data extracted from the captured face image with the second-generation ID card face data; a threshold is set, and when the similarity exceeds this threshold the matching result is output.
The result processing unit takes the corresponding action according to the comparison result of the face comparison unit: if the result matches, "authentication succeeded" is prompted and the face feature values are extracted and saved to the server's user face database; if the result does not match, "re-authenticate" is prompted and photo capture restarts.
An online user authentication system based on liveness detection and face recognition, the system comprising:
- an online user registration module, for the user to fill in information online; it obtains the corresponding face feature values and establishes a user face feature database;
- an online user authentication module, comprising a liveness detection unit, an image processing unit, a feature extraction unit, a face comparison unit and a result processing unit.
The liveness detection unit confirms whether the user being authenticated is a live person and obtains a face photo.
The image processing unit processes the captured face photo.
The feature extraction unit extracts facial component features from the processed face photo.
The face comparison unit compares the feature data extracted from the captured face image with the corresponding face data in the user face feature database; a threshold is set, and when the similarity exceeds this threshold the matching result is output.
The result processing unit takes the corresponding action according to the face comparison result.
As a preferred embodiment of the present invention, the online user registration module retrieves the corresponding second-generation ID card photo over the public security intranet according to the submitted ID card number, extracts face feature values from the photo, and establishes the user face feature database.
As a preferred embodiment of the present invention, the liveness detection unit confirms whether the user is a live person and obtains the optimal face photo. The liveness detection unit includes a head-rotation-direction acquisition subunit, a position generation subunit and an object motion driving subunit: the head-rotation-direction acquisition subunit obtains video of the head rotating and derives the rotation direction of the head; the position generation subunit randomly generates the position of an object, as well as the specified position the object must reach; the object motion driving subunit drives the object according to the head rotation direction, and only if the head rotation drives the object to the specified position is the user judged to be live; meanwhile, the photo with the smallest rotation angle is chosen as the optimal face photo. The liveness detection unit also includes a face detection subunit and a face pose detection subunit.
The face detection subunit determines whether a face is present and locates the facial features: each frame captured by the camera undergoes grayscale transformation and filtering to obtain a high-quality grayscale image; Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost-Cascade classifier, which determines whether the frame contains a face; according to the geometric layout of the face, the AdaBoost-Cascade classifier locates the eyes, both eyebrows, the nose, the mouth and the jaw within the face window, determining the positions of the facial features.
The face pose detection subunit performs pose detection using a pose estimation method based on an elliptical template and facial feature positions: face detection determines the positions of the eyes, nose and mouth; an ellipse is fitted to the boundary of the detected face connected region to obtain the elliptical template; the location parameters of the eyes, mouth and nose within the template are computed and fed to a three-layer neural network to obtain a rough estimate of the pose parameters; to improve the precision of the pose estimation, the input image is processed according to the rough estimate and fed to the corresponding linear correlation filter, yielding a more accurate face pose estimation result.
As a preferred embodiment of the present invention, the image processing unit performs light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening on the face photo captured at the optimal pose, in preparation for feature extraction.
The feature extraction unit extracts facial component features, including the bare face, eyebrows, eyes and mouth, from the processed face photo using the principal component method.
The face comparison unit compares the feature data extracted from the captured face image with the second-generation ID card face data; a threshold is set, and when the similarity exceeds this threshold the matching result is output.
The result processing unit takes the corresponding action according to the comparison result of the face comparison unit: if the result matches, "authentication succeeded" is prompted and the face feature values are extracted and saved to the server's user face database; if the result does not match, "re-authenticate" is prompted and photo capture restarts.
The beneficial effects of the present invention are as follows: the proposed online user authentication method and system based on liveness detection and face recognition prevent authentication from being spoofed with a video containing a face, improving system security. Through game-like command operation, the invention ensures that the captured photo is of the user in person; meanwhile, the feature values of successfully authenticated face photos are saved to the user face database, so the one-to-many comparison greatly shortens recognition time and improves recognition accuracy.
Brief description of the drawings
Fig. 1 is a flow chart of the existing online authentication method.
Fig. 2 is a flow chart of the online authentication method of the present invention.
Fig. 3 is a schematic diagram of the composition of the online authentication system of the present invention.
Detailed description of the invention
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment one
Referring to Fig. 2, the present invention discloses an online user authentication method based on liveness detection and face recognition, the method comprising:
[Step S10] Online user registration: the user fills in information online; the corresponding second-generation ID card photo is retrieved over the public security intranet according to the submitted ID card number, face feature values are extracted from the photo, and a user face feature database is established.
[Step S20] Online user authentication (for example, login authentication), comprising a liveness detection step, an image processing step, a feature extraction step, a face comparison step and a result processing step. Specifically:
- Step S21, liveness detection: confirm whether the user is a live person and obtain the optimal face photo. First, a video of a moving ball is generated on the screen, and the trajectory of the ball at each time point is recorded; if the sequence of captured face poses matches the sequence of ball motion directions, the user is judged to be live, otherwise the input is a photo or video. Meanwhile, the photo with the smallest rotation angle is chosen as the optimal face photo. An alternative liveness detection mode is a decision program that uses the head rotation direction as its input command: only if the head rotation drives the ball to the specified position within a set time is the user judged to be live and allowed to authenticate.
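The head-rotation-driven ball check described above can be sketched as a minimal decision loop. Everything here is a hypothetical illustration, not the patent's actual implementation: the function name `run_liveness_check`, the per-frame step size, the frame-count stand-in for the time budget, and the use of yaw angle as the "rotation direction" are all assumptions.

```python
def run_liveness_check(yaw_per_frame, target=20, step=5, time_limit=30):
    """Hypothetical sketch of the ball-driving liveness game.

    yaw_per_frame: estimated head yaw (degrees) for each captured frame;
    the on-screen ball moves `step` units in the sign of the yaw each frame.
    Returns (is_live, best_frame_index), where best_frame_index is the frame
    with the smallest absolute rotation angle (the "optimal" face photo).
    """
    ball = 0
    # The optimal face photo is the frame with the smallest rotation angle.
    best_frame = min(range(len(yaw_per_frame)),
                     key=lambda i: abs(yaw_per_frame[i]))
    for i, yaw in enumerate(yaw_per_frame):
        if i >= time_limit:          # time budget exhausted: not judged live
            return False, best_frame
        if yaw > 0:
            ball += step
        elif yaw < 0:
            ball -= step
        if abs(ball - target) < step:  # ball reached the specified position
            return True, best_frame
    return False, best_frame
```

A replayed static photo produces no head rotation, so the ball never moves and the check fails, which is the spoofing case the step is designed to reject.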
The liveness detection step includes face detection and face pose detection.
In the face detection step, it is determined whether a face is present and the facial features are located. Each frame captured by the camera undergoes grayscale transformation and filtering to obtain a high-quality grayscale image. Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost-Cascade classifier, which determines whether the frame contains a face. According to the geometric layout of the face, the AdaBoost-Cascade classifier locates the eyes, both eyebrows, the nose, the mouth and the jaw within the face window, determining the positions of the facial features.
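The integral-image trick this step relies on, constant-time rectangle sums that make Haar-like features cheap to evaluate at every window position, can be sketched in plain Python. The helper names and the simple two-rectangle feature are illustrative only; the trained AdaBoost-Cascade classifier itself is out of scope.

```python
def integral_image(gray):
    """Summed-area table: ii[y][x] = sum of gray[0..y][0..x]."""
    h, w = len(gray), len(gray[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += gray[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left (x, y), in O(1): d - b - c + a."""
    a = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y + h - 1][x + w - 1]
    return d - b - c + a

def haar_two_rect_horizontal(ii, x, y, w, h):
    """Two-rectangle Haar-like feature value: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because every rectangle sum costs four lookups regardless of its size, a cascade can evaluate thousands of such features per window fast enough for the per-frame detection the step requires.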
In the face pose detection step, a pose estimation method based on an elliptical template and facial feature positions is used. Face detection first determines the positions of the eyes, nose and mouth; an ellipse is then fitted to the boundary of the detected face connected region to obtain the elliptical template, and the location parameters of the eyes, mouth and nose within the template are computed. Finally, the location parameters are fed to a three-layer neural network to obtain a rough estimate of the pose parameters. To improve the precision of the pose estimation, the input image is processed according to the rough estimate and fed to the corresponding linear correlation filter, yielding a more accurate face pose estimation result.
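One plausible reading of "location parameters in the template" is landmark coordinates re-expressed in the fitted ellipse's own frame (translated to its centre, rotated by its orientation, scaled by its semi-axes). The function name and parameterization below are assumptions, and the ellipse fitting, three-layer neural network and correlation-filter stages are omitted:

```python
import math

def location_params(landmarks, cx, cy, a, b, theta):
    """Normalize landmark coordinates into the fitted face ellipse's frame.

    (cx, cy): ellipse centre; a, b: semi-axes; theta: orientation in radians.
    Returns one (u, v) pair per landmark, each roughly in [-1, 1] for points
    inside the ellipse -- a pose-sensitive input for the estimation network.
    """
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    params = []
    for (x, y) in landmarks:
        dx, dy = x - cx, y - cy
        u = (dx * cos_t + dy * sin_t) / a   # coordinate along the major axis
        v = (-dx * sin_t + dy * cos_t) / b  # coordinate along the minor axis
        params.append((round(u, 6), round(v, 6)))
    return params
```

When the head turns, the eyes and mouth shift systematically within the ellipse, so these normalized coordinates vary with pose while staying invariant to the face's absolute position and scale in the image.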
- Step S22, image processing: the face photo captured at the optimal pose undergoes light compensation, grayscale transformation, histogram equalization, normalization, geometric correction, filtering and sharpening, in preparation for feature extraction;
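Of the preprocessing operations listed, histogram equalization is the most self-contained; a plain-Python sketch for an 8-bit grayscale image represented as a list of pixel rows (the patent does not specify an implementation, so this is the textbook CDF-remapping form):

```python
def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale image (list of rows)."""
    flat = [p for row in gray for p in row]
    n = len(flat)
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    # Cumulative distribution of gray levels, remapped to the full 0..255 range.
    cdf, total = [0] * 256, 0
    for level in range(256):
        total += hist[level]
        cdf[level] = total
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((cdf[level] - cdf_min) * 255 / (n - cdf_min)) if n > cdf_min else 0
           for level in range(256)]
    return [[lut[p] for p in row] for row in gray]
```

Spreading the gray levels over the full range this way reduces the effect of uneven lighting before features are extracted, which is why it sits alongside light compensation in the pipeline.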
- Step S23, feature extraction: facial component features, including the bare face, eyebrows, eyes and mouth, are extracted from the processed face photo using the principal component method;
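The principal component method here presumably projects each flattened facial component image onto axes trained offline (as in eigenfaces). A sketch of only the projection step, with the mean vector and component vectors assumed precomputed; the training of those components is not shown:

```python
def pca_project(vector, mean, components):
    """Project a flattened face-component image onto trained principal axes.

    mean: the training-set mean vector; components: list of principal
    component vectors (assumed computed offline, e.g. eigenface-style).
    Returns the low-dimensional feature vector used for face comparison.
    """
    centered = [v - m for v, m in zip(vector, mean)]
    return [sum(c * x for c, x in zip(comp, centered)) for comp in components]
```

Projecting each component (eyes, mouth, and so on) separately yields a compact feature vector per component, which keeps the subsequent database comparison fast.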
- Step S24, face comparison: the feature data extracted from the captured face image is compared with the second-generation ID card face data; a threshold is set, and when the similarity exceeds this threshold the matching result is output;
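The threshold comparison can be sketched with cosine similarity between feature vectors. Both the similarity measure and the 0.9 threshold are assumptions for illustration; the patent only says that a similarity score is compared against a set threshold:

```python
import math

def cosine_similarity(f1, f2):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2)

def match_face(live_features, enrolled_features, threshold=0.9):
    """Compare the captured face's features against the enrolled (ID-photo)
    features; report a match only when similarity exceeds the threshold."""
    score = cosine_similarity(live_features, enrolled_features)
    return score > threshold, score
```

Raising the threshold trades false accepts for false rejects, so in practice it would be tuned on verification data rather than fixed at a value like the 0.9 assumed here.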
- Step S25, result processing: if the result matches, "authentication succeeded" is prompted, and the face feature values are extracted and saved to the server's user face database; if the result does not match, "re-authenticate" is prompted and photo capture restarts.
The above describes the flow of the online user authentication method based on liveness detection and face recognition. Along with the method, the present invention also discloses an online user authentication system based on liveness detection and face recognition. The system includes an online user registration module and an online user authentication module.
The online user registration module is for the user to fill in information online; it retrieves the corresponding second-generation ID card photo (other face photos may of course be used) over the public security intranet according to the submitted ID card number, extracts face feature values from the photo, and establishes the user face feature database.
The online user authentication module includes a liveness detection unit, an image processing unit, a feature extraction unit, a face comparison unit and a result processing unit.
The liveness detection unit confirms whether the user is a live person and obtains the optimal face photo. The liveness detection unit includes an object motion presentation subunit that generates a video of a moving ball on the screen; the liveness detection unit records the ball's trajectory at each time point and judges from the changes in face pose whether the captured face photos come from a live person: if the sequence of captured face poses matches the sequence of ball motion directions, the user is judged to be live, otherwise the input is a photo or video. Meanwhile, the photo with the smallest rotation angle is chosen as the optimal face photo. Liveness can also be detected in other ways; for example, the liveness detection unit may include a head-rotation-direction acquisition subunit, a position generation subunit and an object motion driving subunit, where the head-rotation-direction acquisition subunit obtains video of the head rotating and derives the rotation direction of the head, the position generation subunit randomly generates the position of an object as well as the specified position the object must reach, and the object motion driving subunit drives the object according to the head rotation direction, the user being judged live only if the head rotation drives the object to the specified position.
The liveness detection unit includes a face detection subunit and a head pose detection subunit.
The face detection subunit determines whether a face is present and locates the facial features. Each frame captured by the camera undergoes grayscale conversion and filtering to obtain a high-quality grayscale image. Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost cascade classifier, which determines whether the frame contains a face. Using the geometric layout of the facial features together with the AdaBoost cascade classifier, the eyes, eyebrows, nose, mouth, and chin are located within the face window, determining the positions of the facial features.
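The integral-image trick this subunit relies on can be shown in a few lines: once the cumulative sums are precomputed, the pixel sum of any rectangle, and hence any two-rectangle Haar-like feature, costs a constant number of lookups regardless of rectangle size. A NumPy sketch of the feature computation (not the patent's trained classifier):

```python
import numpy as np

def integral_image(gray):
    # Zero-padded cumulative sum so rect_sum needs no boundary checks.
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    # Sum of pixels in the h x w rectangle with top-left corner (r, c):
    # four lookups, independent of the rectangle's size.
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_horizontal(ii, r, c, h, w):
    # Two-rectangle Haar-like feature: left half minus right half
    # (w must be even).
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
feature = haar_two_rect_horizontal(ii, 0, 0, 4, 4)
```

An AdaBoost cascade evaluates thousands of such features per window, so the constant-time rectangle sums are what make frame-rate detection feasible.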
The head pose detection subunit performs pose detection using a pose estimation method based on an elliptical template and facial-feature positions. Face detection first determines the positions of the eyes, nose, and mouth; an ellipse is then fitted to the boundary of the detected face connected region to obtain the elliptical template, and the position parameters of the eyes, mouth, and nose within the template are computed. These position parameters are fed into a three-layer neural network to obtain a rough estimate of the pose parameters. To improve the precision of the pose estimate, the input image is preprocessed and, according to the rough estimate, fed into the corresponding linear correlation filter to obtain a more accurate head pose estimation result.
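The ellipse-fitting stage can be approximated with second moments: fitting an ellipse to the face region's boundary points amounts to taking the mean and covariance of those points, whose principal axis gives the template's orientation. A rough sketch under that simplification (the patent's neural-network and correlation-filter refinement stages are omitted; all names here are illustrative):

```python
import numpy as np

def fit_ellipse_template(points):
    """Fit an ellipse to 2-D boundary points via their second moments.
    Returns the center, axis half-lengths, and in-plane tilt angle,
    which serves as a coarse in-plane pose estimate."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    major = evecs[:, 1]                  # principal (major) axis direction
    angle = np.degrees(np.arctan2(major[1], major[0]))
    axes = np.sqrt(evals)                # proportional axis half-lengths
    return center, axes, angle % 180.0

# Synthetic face contour: an axis-aligned ellipse, taller than wide,
# so the fitted major axis should be vertical (about 90 degrees).
t = np.linspace(0.0, 2.0 * np.pi, 200)
contour = np.stack([10.0 * np.cos(t), 20.0 * np.sin(t)], axis=1)
center, axes, angle = fit_ellipse_template(contour)
```

In the patent's pipeline, the positions of the eyes, nose, and mouth relative to such a template would then be the inputs to the coarse pose network.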
The image processing unit applies light compensation, grayscale conversion, histogram equalization, normalization, geometric correction, filtering, and sharpening to the optimal-pose face photo, preparing it for feature extraction.
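Of the preprocessing steps listed, histogram equalization is representative and compact enough to show: the cumulative distribution of the grayscale histogram becomes a lookup table that spreads pixel intensities across the full range. A NumPy sketch of the classic formulation (one plausible implementation, not the patent's exact pipeline):

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image: map each intensity
    through the normalized cumulative histogram so the output spans
    the full 0..255 range more evenly."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Classic equalization formula, rescaled to 0..255.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]

# A low-contrast image confined to intensities 100..103 gets
# stretched to use the whole dynamic range.
img = np.repeat(np.arange(100, 104, dtype=np.uint8), 16).reshape(8, 8)
out = equalize_histogram(img)
```

Equalization helps here because ID card photos and webcam captures are taken under very different lighting, and matching their contrast before feature extraction makes the comparison more stable.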
The feature extraction unit extracts facial component features from the processed face photo, including the bare face, eyebrows, eyes, and mouth, using principal component analysis.
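The principal component extraction can be sketched as an eigenfaces-style projection: training patches are flattened to vectors, the top principal directions of the centered data are kept, and each facial component is described by its projection coefficients. A minimal NumPy illustration on toy data (not the patent's trained model):

```python
import numpy as np

def fit_pca(samples, n_components):
    """Learn a PCA basis from flattened image patches (one row per
    sample) using the SVD of the centered data matrix."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Right singular vectors span the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def extract_features(patch, mean, basis):
    """Project a flattened patch onto the learned principal components;
    the coefficients serve as that component's feature vector."""
    return basis @ (patch - mean)

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 64))   # e.g. 20 flattened 8x8 eye patches
mean, basis = fit_pca(train, n_components=5)
feat = extract_features(train[0], mean, basis)
```

In practice one such basis would be trained per component (eyes, eyebrows, mouth, bare face), and the concatenated coefficients form the facial feature value stored in the database.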
The face comparison unit compares the feature data extracted from the captured face image with the second-generation ID card face data; a threshold is set, and when the similarity exceeds this threshold the match result is output.
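Together with the result handling that follows, the thresholded comparison reduces to scoring the live feature vector against the registered one and checking the score against a fixed cutoff; on success the live features are also cached in the server-side database, as the patent describes. A sketch using cosine similarity (the patent does not specify a similarity measure, so that choice, and all names here, are assumptions):

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(live_feat, registered_feat, face_db, user_id, threshold=0.8):
    """Compare a live feature vector against the registered one; on a
    match, append the live features to the user's gallery so later
    comparisons get faster and more accurate."""
    score = cosine_similarity(live_feat, registered_feat)
    if score > threshold:
        face_db.setdefault(user_id, []).append(live_feat)
        return "authentication successful"
    return "re-authenticate"   # caller restarts photo capture

db = {}
result = authenticate([1.0, 0.0, 1.0], [0.9, 0.1, 1.1], db, "user42")
```

Growing the per-user gallery on each successful login is what makes the one-to-many comparison claimed in the summary both faster and more accurate over time.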
The result processing unit acts on the comparison result of the face comparison unit. If the result is a match, "authentication successful" is displayed, and the facial feature values are extracted and saved to the server's user face database. If the result is not a match, "re-authenticate" is displayed and photo capture restarts.
In summary, the user online authentication method and system based on liveness detection and face recognition proposed by the present invention prevent authentication fraud using videos containing a face, improving system security. Through game-like command operations, the present invention ensures that the captured photo is a photo of the user; at the same time, the feature values of successfully authenticated face photos are saved to the user face database, so this one-to-many comparison greatly shortens recognition time and improves recognition accuracy.
The description and applications of the invention given here are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and replacement of the embodiments' components with various equivalents is known to those skilled in the art. Those skilled in the art should appreciate that, without departing from the spirit or essential characteristics of the present invention, the invention may be realized in other forms, structures, arrangements, and proportions, and with other assemblies, materials, and components. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the present invention.

Claims (8)

1. A user online authentication method based on liveness detection and face recognition, characterized in that the method comprises:
Step S10, a user online registration step: the user fills in information online; the corresponding second-generation ID card photo is retrieved from the public security intranet according to the submitted ID card number; facial feature values are extracted from the photo; and a user facial feature value database is built;
Step S20, a user online authentication step, comprising a liveness detection step, an image processing step, a feature extraction step, a face comparison step, and a result processing step, specifically including:
- Step S21, a liveness detection step: confirming whether the subject is a living body and obtaining the optimal face photo; the head rotation direction serves as the command input, and only if the small ball is driven to the specified position by head rotation within a set time is the subject judged to be a living body and allowed to authenticate; at the same time the face photo with the optimal pose is selected; the liveness detection step includes face detection and head pose detection;
In the face detection step, it is determined whether a face is present and the facial features are located; each frame captured by the camera undergoes grayscale conversion and filtering to obtain a high-quality grayscale image; Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost cascade classifier, which determines whether the frame contains a face; using the geometric layout of the facial features together with the AdaBoost cascade classifier, the eyes, eyebrows, nose, mouth, and chin are located within the face window, determining the positions of the facial features;
In the head pose detection step, a pose estimation method based on an elliptical template and facial-feature positions is used; face detection first determines the positions of the eyes, nose, and mouth; an ellipse is then fitted to the boundary of the detected face connected region to obtain the elliptical template; the position parameters of the eyes, mouth, and nose within the template are computed; finally the position parameters are fed into a three-layer neural network to obtain a rough estimate of the pose parameters; to improve the precision of the pose estimate, the input image is preprocessed and, according to the rough estimate, fed into a neighborhood weighting filter to obtain a more accurate head pose estimation result;
- Step S22, an image processing step: applying light compensation, grayscale conversion, histogram equalization, normalization, geometric correction, filtering, and sharpening to the optimal-pose face photo, preparing it for feature extraction;
- Step S23, a feature extraction step: extracting facial component features from the processed face photo, including the bare face, eyebrows, eyes, and mouth, using principal component analysis;
- Step S24, a face comparison step: comparing the feature data extracted from the captured face image with the second-generation ID card face data; a threshold is set, and when the similarity exceeds this threshold the match result is output;
- Step S25, a result processing step: if the result is a match, "authentication successful" is displayed, and the facial feature values are extracted and saved to the server's user face database; if the result is not a match, "re-authenticate" is displayed and photo capture restarts.
2. A user online authentication method based on liveness detection and face recognition, characterized in that the method comprises:
Step S10, a user online registration step: the user fills in information online, the corresponding facial feature values are obtained, and a user facial feature value database is built;
Step S20, a user online authentication step, comprising a liveness detection step, an image processing step, a feature extraction step, a face comparison step, and a result processing step, specifically including:
- Step S21, a liveness detection step: confirming whether the authenticating user is a living body and obtaining a face photo;
The liveness detection step in step S21 includes face detection and head pose detection;
In the face detection step, it is determined whether a face is present and the facial features are located; each frame captured by the camera undergoes grayscale conversion and filtering to obtain a high-quality grayscale image;
Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost cascade classifier, which determines whether the frame contains a face;
Using the geometric layout of the facial features together with the AdaBoost cascade classifier, the eyes, eyebrows, nose, mouth, and chin are located within the face window, determining the positions of the facial features;
In the head pose detection step, a pose estimation method based on an elliptical template and facial-feature positions is used;
Face detection first determines the positions of the eyes, nose, and mouth; an ellipse is then fitted to the boundary of the detected face connected region to obtain the elliptical template; the position parameters of the eyes, mouth, and nose within the template are computed; finally the position parameters are fed into a three-layer neural network to obtain a rough estimate of the pose parameters;
To improve the precision of the pose estimate, the input image is preprocessed and, according to the rough estimate, fed into the corresponding linear correlation filter to obtain a more accurate head pose estimation result;
- Step S22, an image processing step: processing the captured face photo;
- Step S23, a feature extraction step: extracting facial component features from the processed face photo;
- Step S24, a face comparison step: comparing the feature data extracted from the captured face image with the corresponding face data in the user facial feature value database; a threshold is set, and when the similarity exceeds this threshold the match result is output;
- Step S25, a result processing step: handling the face comparison result accordingly.
3. The user online authentication method based on liveness detection and face recognition according to claim 2, characterized in that:
In step S21, the method for judging a living body is: the head rotation direction serves as the command input, and only if the object is driven to the specified position by head rotation is the subject judged to be a living body; at the same time the photo with the smallest rotation angle is selected as the optimal face photo.
4. The user online authentication method based on liveness detection and face recognition according to claim 2, characterized in that:
In step S22, light compensation, grayscale conversion, histogram equalization, normalization, geometric correction, filtering, and sharpening are applied to the optimal-pose face photo, preparing it for feature extraction;
In step S23, the extracted facial component features include the bare face, eyebrows, eyes, and mouth, and principal component analysis is used to extract the facial component features;
In step S25, if the result is a match, "authentication successful" is displayed, and the facial feature values are extracted and saved to the server's user face database; if the result is not a match, "re-authenticate" is displayed and photo capture restarts.
5. A user online authentication system based on liveness detection and face recognition, characterized in that the system comprises:
a user online registration module, which collects the information the user fills in online, retrieves the corresponding second-generation ID card photo from the public security intranet according to the submitted ID card number, extracts facial feature values from the photo, and builds a user facial feature value database;
a user online authentication module, comprising a liveness detection unit, an image processing unit, a feature extraction unit, a face comparison unit, and a result processing unit;
wherein the liveness detection unit confirms whether the subject is a living body and obtains the optimal face photo; the liveness detection unit includes a head-rotation-direction acquisition subunit, a position generation subunit, and an object-motion driving subunit; the head-rotation-direction acquisition subunit captures video of the head turning and extracts the rotation direction; the position generation subunit randomly generates the object's position and the target position the object must reach; the object-motion driving subunit moves the object according to the head rotation direction, and only if the object is driven to the specified position by head rotation is the subject judged to be a living body; at the same time the photo with the smallest rotation angle is selected as the optimal face photo; the liveness detection unit includes a face detection subunit and a head pose detection subunit;
the face detection subunit determines whether a face is present and locates the facial features; each frame captured by the camera undergoes grayscale conversion and filtering to obtain a high-quality grayscale image; Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost cascade classifier, which determines whether the frame contains a face; using the geometric layout of the facial features together with the AdaBoost cascade classifier, the eyes, eyebrows, nose, mouth, and chin are located within the face window, determining the positions of the facial features;
the head pose detection subunit performs pose detection using a pose estimation method based on an elliptical template and facial-feature positions; face detection determines the positions of the eyes, nose, and mouth; an ellipse is fitted to the boundary of the detected face connected region to obtain the elliptical template; the position parameters of the eyes, mouth, and nose within the template are computed; the position parameters are fed into a three-layer neural network to obtain a rough estimate of the pose parameters; to improve the precision of the pose estimate, the input image is preprocessed and, according to the rough estimate, fed into the corresponding linear correlation filter to obtain a more accurate head pose estimation result;
the image processing unit applies light compensation, grayscale conversion, histogram equalization, normalization, geometric correction, filtering, and sharpening to the optimal-pose face photo, preparing it for feature extraction;
the feature extraction unit extracts facial component features from the processed face photo, including the bare face, eyebrows, eyes, and mouth, using principal component analysis;
the face comparison unit compares the feature data extracted from the captured face image with the second-generation ID card face data; a threshold is set, and when the similarity exceeds this threshold the match result is output;
the result processing unit acts on the comparison result of the face comparison unit: if the result is a match, "authentication successful" is displayed, and the facial feature values are extracted and saved to the server's user face database; if the result is not a match, "re-authenticate" is displayed and photo capture restarts.
6. A user online authentication system based on liveness detection and face recognition, characterized in that the system comprises:
- a user online registration module, which collects the information the user fills in online, obtains the corresponding facial feature values, and builds a user facial feature value database;
- a user online authentication module, comprising a liveness detection unit, an image processing unit, a feature extraction unit, a face comparison unit, and a result processing unit;
wherein the liveness detection unit confirms whether the authenticating user is a living body and obtains a face photo; the liveness detection unit includes a head-rotation-direction acquisition subunit, a position generation subunit, and an object-motion driving subunit; the head-rotation-direction acquisition subunit captures video of the head turning and extracts the rotation direction; the position generation subunit randomly generates the object's position and the target position the object must reach; the object-motion driving subunit moves the object according to the head rotation direction, and only if the object is driven to the specified position by head rotation is the subject judged to be a living body; at the same time the photo with the smallest rotation angle is selected as the optimal face photo; the liveness detection unit includes a face detection subunit and a head pose detection subunit;
the face detection subunit determines whether a face is present and locates the facial features; each frame captured by the camera undergoes grayscale conversion and filtering to obtain a high-quality grayscale image; Haar-like wavelet feature values are computed quickly from the grayscale image using an integral image and fed to an offline-trained AdaBoost cascade classifier, which determines whether the frame contains a face; using the geometric layout of the facial features together with the AdaBoost cascade classifier, the eyes, eyebrows, nose, mouth, and chin are located within the face window, determining the positions of the facial features;
the head pose detection subunit performs pose detection using a pose estimation method based on an elliptical template and facial-feature positions; face detection determines the positions of the eyes, nose, and mouth; an ellipse is fitted to the boundary of the detected face connected region to obtain the elliptical template; the position parameters of the eyes, mouth, and nose within the template are computed; the position parameters are fed into a three-layer neural network to obtain a rough estimate of the pose parameters; to improve the precision of the pose estimate, the input image is preprocessed and, according to the rough estimate, fed into the corresponding linear correlation filter to obtain a more accurate head pose estimation result;
the image processing unit processes the captured face photo;
the feature extraction unit extracts facial component features from the processed face photo;
the face comparison unit compares the feature data extracted from the captured face image with the corresponding face data in the user facial feature value database; a threshold is set, and when the similarity exceeds this threshold the match result is output;
the result processing unit handles the face comparison result accordingly.
7. The user online authentication system based on liveness detection and face recognition according to claim 6, characterized in that:
the user online registration module retrieves the corresponding second-generation ID card photo from the public security intranet according to the submitted ID card number, extracts facial feature values from the photo, and builds the user facial feature value database.
8. The user online authentication system based on liveness detection and face recognition according to claim 6, characterized in that:
the image processing unit applies light compensation, grayscale conversion, histogram equalization, normalization, geometric correction, filtering, and sharpening to the optimal-pose face photo, preparing it for feature extraction;
the feature extraction unit extracts facial component features from the processed face photo, including the bare face, eyebrows, eyes, and mouth, using principal component analysis;
the face comparison unit compares the feature data extracted from the captured face image with the corresponding face data in the user facial feature value database; a threshold is set, and when the similarity exceeds this threshold the match result is output;
the result processing unit acts on the comparison result of the face comparison unit: if the result is a match, "authentication successful" is displayed, and the facial feature values are extracted and saved to the server's user face database; if the result is not a match, "re-authenticate" is displayed and photo capture restarts.
CN201310602042.8A 2013-11-25 2013-11-25 User's on-line authentication method and system based on In vivo detection and recognition of face Active CN103593598B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310602042.8A CN103593598B (en) 2013-11-25 2013-11-25 User's on-line authentication method and system based on In vivo detection and recognition of face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310602042.8A CN103593598B (en) 2013-11-25 2013-11-25 User's on-line authentication method and system based on In vivo detection and recognition of face

Publications (2)

Publication Number Publication Date
CN103593598A CN103593598A (en) 2014-02-19
CN103593598B true CN103593598B (en) 2016-09-21

Family

ID=50083734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310602042.8A Active CN103593598B (en) 2013-11-25 2013-11-25 User's on-line authentication method and system based on In vivo detection and recognition of face

Country Status (1)

Country Link
CN (1) CN103593598B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188684A (en) * 2019-05-30 2019-08-30 湖南城市学院 A kind of face identification device and method

Families Citing this family (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125396B (en) * 2014-06-24 2018-06-08 小米科技有限责任公司 Image capturing method and device
CN105407069B (en) * 2014-06-30 2019-02-15 阿里巴巴集团控股有限公司 Living body authentication method, apparatus, client device and server
CN105468950B (en) 2014-09-03 2020-06-30 阿里巴巴集团控股有限公司 Identity authentication method and device, terminal and server
CN104361326A (en) * 2014-11-18 2015-02-18 新开普电子股份有限公司 Method for distinguishing living human face
CN115457664A (en) * 2015-01-19 2022-12-09 创新先进技术有限公司 Living body face detection method and device
CN105989263A (en) * 2015-01-30 2016-10-05 阿里巴巴集团控股有限公司 Method for authenticating identities, method for opening accounts, devices and systems
CN107995979B (en) * 2015-04-16 2021-12-07 托比股份公司 System, method and machine-readable medium for authenticating a user
CN106161030B (en) * 2015-04-23 2020-04-03 腾讯科技(深圳)有限公司 Account registration verification request based on image identification and registration verification method and device
CN104834853B (en) * 2015-04-30 2017-11-21 北京立思辰计算机技术有限公司 A kind of personal identification method, device and information security type duplicator
CN106295288B (en) * 2015-06-10 2019-04-16 阿里巴巴集团控股有限公司 A kind of information calibration method and device
CN106295287B (en) * 2015-06-10 2019-04-09 阿里巴巴集团控股有限公司 Biopsy method and device and identity identifying method and device
CN105095715A (en) * 2015-06-30 2015-11-25 国网山东莒县供电公司 Identity authentication method of electric power system network
WO2017000217A1 (en) * 2015-06-30 2017-01-05 北京旷视科技有限公司 Living-body detection method and device and computer program product
CN105243357A (en) * 2015-09-15 2016-01-13 深圳市环阳通信息技术有限公司 Identity document-based face recognition method and face recognition device
CN105426827B (en) * 2015-11-09 2019-03-08 北京市商汤科技开发有限公司 Living body verification method, device and system
CN105243378B (en) * 2015-11-13 2019-03-01 清华大学 Living body faces detection method and device based on eye information
CN106778454B (en) * 2015-11-25 2019-09-20 腾讯科技(深圳)有限公司 The method and apparatus of recognition of face
CN105553947A (en) * 2015-12-08 2016-05-04 腾讯科技(深圳)有限公司 Methods and devices for finding account back, protecting account security and preventing account theft
CN107085822B (en) * 2016-02-16 2020-09-04 北京小米移动软件有限公司 Face image processing method and device
CN105930709B (en) * 2016-04-21 2018-07-24 深圳泰首智能技术有限公司 Face recognition technology is applied to the method and device of testimony of a witness consistency check
CN105868611A (en) * 2016-05-04 2016-08-17 广东欧珀移动通信有限公司 Biological-information authentication method and device and mobile terminal
CN106127870A (en) * 2016-06-30 2016-11-16 中相(海南)信息科技有限公司 Remote mobile authentication ids system and method during a kind of express delivery addressee
CN106101136B (en) * 2016-07-22 2019-04-12 飞天诚信科技股份有限公司 A kind of authentication method and system of biological characteristic comparison
US10289822B2 (en) * 2016-07-22 2019-05-14 Nec Corporation Liveness detection for antispoof face recognition
CN106529414A (en) * 2016-10-14 2017-03-22 国政通科技股份有限公司 Method for realizing result authentication through image comparison
CN106778797A (en) * 2016-10-31 2017-05-31 江苏濠汉信息技术有限公司 A kind of identity intelligent identification Method
CN106682578B (en) * 2016-11-21 2020-05-05 北京交通大学 Weak light face recognition method based on blink detection
CN106778607A (en) * 2016-12-15 2017-05-31 国政通科技股份有限公司 A kind of people based on recognition of face and identity card homogeneity authentication device and method
CN106778653A (en) * 2016-12-27 2017-05-31 北京光年无限科技有限公司 Towards the exchange method and device based on recognition of face Sample Storehouse of intelligent robot
CN110114777B (en) * 2016-12-30 2023-10-20 托比股份公司 Identification, authentication and/or guidance of a user using gaze information
CN106874857B (en) * 2017-01-19 2020-12-01 腾讯科技(上海)有限公司 Living body distinguishing method and system based on video analysis
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN106851224A (en) * 2017-03-29 2017-06-13 宁夏凯速德科技有限公司 Intelligent video frequency monitoring method and system based on user behavior recognition
CN108875452A (en) * 2017-05-11 2018-11-23 北京旷视科技有限公司 Face identification method, device, system and computer-readable medium
CN108133129A (en) * 2017-06-22 2018-06-08 广东网金云计算有限公司 A kind of unlocking method of application program, device and mobile terminal
CN107463875A (en) * 2017-07-03 2017-12-12 金讯系统管理有限公司 A kind of method and apparatus for judging personnel identity
CN107358187A (en) * 2017-07-04 2017-11-17 四川云物益邦科技有限公司 A kind of certificate photograph recognition methods
CN108875331B (en) * 2017-08-01 2022-08-19 北京旷视科技有限公司 Face unlocking method, device and system and storage medium
CN107980131A (en) * 2017-08-21 2018-05-01 深圳市汇顶科技股份有限公司 Identity identifying method, device and electronic equipment based on multi-biological characteristic sensor
CN108229120B (en) * 2017-09-07 2020-07-24 北京市商汤科技开发有限公司 Face unlocking method, face unlocking information registration device, face unlocking information registration equipment, face unlocking program and face unlocking information registration medium
US10579785B2 (en) * 2017-09-29 2020-03-03 General Electric Company Automatic authentification for MES system using facial recognition
CN107766807A (en) * 2017-09-30 2018-03-06 平安科技(深圳)有限公司 Electronic installation, insure livestock recognition methods and computer-readable recording medium
CN107657248A (en) * 2017-10-26 2018-02-02 广州云从信息科技有限公司 A kind of infrared binocular In vivo detections of Android based on recognition of face certification
CN108062673B (en) * 2017-11-15 2021-06-01 平安科技(深圳)有限公司 Payment method, terminal device and computer-readable storage medium
CN107944380B (en) * 2017-11-20 2022-11-29 腾讯科技(深圳)有限公司 Identity recognition method and device and storage equipment
CN108021892B (en) * 2017-12-06 2021-11-19 上海师范大学 Human face living body detection method based on extremely short video
CN108052902A (en) * 2017-12-12 2018-05-18 途客思科技(天津)有限公司 User identification method and electronic equipment
CN107992842B (en) * 2017-12-13 2020-08-11 深圳励飞科技有限公司 Living body detection method, computer device, and computer-readable storage medium
CN108038456B (en) * 2017-12-19 2024-01-26 中科视拓(北京)科技有限公司 Anti-deception method in face recognition system
CN110610113A (en) * 2018-06-14 2019-12-24 北京华泰科捷信息技术股份有限公司 AI chip-based high-density dynamic face recognition device and method
CN109063671A (en) * 2018-08-20 2018-12-21 三星电子(中国)研发中心 Method and device for intelligent cosmetic
CN109067767B (en) * 2018-08-31 2021-02-19 上海艾融软件股份有限公司 Face recognition authentication method and system
CN109145884B (en) * 2018-10-10 2020-11-24 百度在线网络技术(北京)有限公司 Method, device, terminal and computer-readable storage medium for searching target person
CN109492555A (en) * 2018-10-26 2019-03-19 平安科技(深圳)有限公司 Newborn identity identifying method, electronic device and computer readable storage medium
CN109472228A (en) * 2018-10-29 2019-03-15 上海交通大学 A kind of yawn detection method based on deep learning
CN109635021A (en) * 2018-10-30 2019-04-16 平安科技(深圳)有限公司 A kind of data information input method, device and equipment based on human testing
CN109522877A (en) * 2018-12-14 2019-03-26 睿云联(厦门)网络通讯技术有限公司 A kind of offline plurality of human faces recognition methods and computer equipment based on Android device
CN109871755A (en) * 2019-01-09 2019-06-11 中国平安人寿保险股份有限公司 A kind of auth method based on recognition of face
CN112307817B (en) * 2019-07-29 2024-03-19 中国移动通信集团浙江有限公司 Face living body detection method, device, computing equipment and computer storage medium
CN110647823A (en) * 2019-09-02 2020-01-03 中国建设银行股份有限公司 Method and device for optimizing human face base
TWI722872B (en) * 2020-04-17 2021-03-21 技嘉科技股份有限公司 Face recognition device and face recognition method
CN111539351B (en) * 2020-04-27 2023-11-03 广东电网有限责任公司广州供电局 Multi-task cascading face frame selection comparison method
CN111680616A (en) * 2020-06-04 2020-09-18 中国建设银行股份有限公司 Qualification authentication method, device, equipment and medium for subsidy retriever
CN112069904A (en) * 2020-08-07 2020-12-11 武汉天喻聚联科技有限公司 System and method for determining online picture attribution
CN112329727A (en) * 2020-11-27 2021-02-05 四川长虹电器股份有限公司 Living body detection method and device
CN113569622A (en) * 2021-06-09 2021-10-29 北京旷视科技有限公司 Living body detection method, device and system based on webpage and electronic equipment
CN114241588B (en) * 2022-02-24 2022-05-20 北京锐融天下科技股份有限公司 Self-adaptive face comparison method and system
WO2023159462A1 (en) * 2022-02-25 2023-08-31 百果园技术(新加坡)有限公司 Identity authentication method and apparatus, terminal, storage medium and program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3841482B2 (en) * 1996-06-18 2006-11-01 松下電器産業株式会社 Face image recognition device
CN101159016A (en) * 2007-11-26 2008-04-09 清华大学 Living body detecting method and system based on human face physiologic moving
CN102789572A (en) * 2012-06-26 2012-11-21 五邑大学 Living body face safety certification device and living body face safety certification method
CN103116763A (en) * 2013-01-30 2013-05-22 宁波大学 Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100438841B1 (en) * 2002-04-23 2004-07-05 삼성전자주식회사 Method for verifying users and updating the data base, and face verification system using thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188684A (en) * 2019-05-30 2019-08-30 湖南城市学院 A kind of face identification device and method
CN110188684B (en) * 2019-05-30 2021-04-06 湖南城市学院 Face recognition device and method

Also Published As

Publication number Publication date
CN103593598A (en) 2014-02-19

Similar Documents

Publication Publication Date Title
CN103593598B (en) User's on-line authentication method and system based on In vivo detection and recognition of face
CN105574518B (en) Method and device for detecting living human face
CN106203294B (en) Identity verification method unifying person and certificate based on face attribute analysis
CN105740779B (en) Method and device for detecting living human face
CN101710383B (en) Method and device for identity authentication
CN108229427A (en) A kind of identity security verification method and system based on identity certificate and face recognition
CN101542503B (en) Procedure for identifying a person by eyelash analysis
CN105740780B (en) Method and device for detecting living human face
CN107590452A (en) A kind of personal identification method and device based on gait and face fusion
CN105740781B (en) Three-dimensional human face living body detection method and device
CN106446754A (en) Image identification method, metric learning method, image source identification method and devices
EP1477924A3 (en) Gesture recognition apparatus, method and program
CN105631272A (en) Multi-safeguard identity authentication method
CN107346422A (en) A kind of living body face recognition method based on blink detection
CN110516649B (en) Face recognition-based alumni authentication method and system
CN102542242B (en) Biometric feature region positioning method and device for contactless image collection
JP6265592B2 (en) Facial feature extraction apparatus and face authentication system
CN107293002B (en) A kind of intelligent access control system based on recognition of face
CN101344914A (en) Human face recognition method based on characteristic point
CN108898108B (en) User abnormal behavior monitoring system and method based on sweeping robot
CN101533466B (en) Image processing method for positioning eyes
CN102184016B (en) Noncontact type mouse control method based on video sequence recognition
CN105320948A (en) Image based gender identification method, apparatus and system
CN106778636A (en) Auth method and device
CN103907122A (en) Detecting of fraud for access control system of biometric type

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant