WO2018036389A1 - User identity verification method, apparatus and system - Google Patents

User identity verification method, apparatus and system

Info

Publication number
WO2018036389A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
face image
eye pattern
preset
image
Prior art date
Application number
PCT/CN2017/096987
Other languages
English (en)
French (fr)
Inventor
何乐
庹宇鲲
李亮
黄冕
陈继东
杨文波
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP2019510864A priority Critical patent/JP6756037B2/ja
Priority to SG11201901519XA priority patent/SG11201901519XA/en
Priority to PL17842812T priority patent/PL3506589T3/pl
Priority to AU2017314341A priority patent/AU2017314341B2/en
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Priority to MYPI2019000894A priority patent/MY193941A/en
Priority to CA3034612A priority patent/CA3034612C/en
Priority to KR1020197007983A priority patent/KR102084900B1/ko
Priority to ES17842812T priority patent/ES2879682T3/es
Priority to EP17842812.4A priority patent/EP3506589B1/en
Publication of WO2018036389A1 publication Critical patent/WO2018036389A1/zh
Priority to US16/282,102 priority patent/US10467490B2/en
Priority to PH12019500383A priority patent/PH12019500383B1/en
Priority to ZA2019/01506A priority patent/ZA201901506B/en
Priority to US16/587,376 priority patent/US10997443B2/en
Priority to AU2019101579A priority patent/AU2019101579A4/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40Network security protocols
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • The present invention relates to the field of information technology, and in particular to a user identity verification method, apparatus and system.
  • At present, user identity verification usually combines face image recognition with face liveness verification: while the captured face image is verified, facial action parameters are issued to the user, who must perform these actions to complete action-based liveness verification.
  • Since fairly realistic 3D face images can now be synthesized and a user's motions and expressions can be simulated, existing user identity verification methods have low accuracy and reliability and cannot guarantee the security of the user's application (APP).
  • The embodiments of the present invention provide a user identity verification method, apparatus and system, whose main purpose is to solve the problem that user identity verification methods in the prior art have low accuracy and reliability.
  • The present invention provides the following technical solutions:
  • In one aspect, an embodiment of the present invention provides a user identity verification method, including:
  • sending verification success information to the client.
  • In another aspect, an embodiment of the present invention provides another user identity verification method.
  • In still another aspect, an embodiment of the present invention provides a server, including:
  • a receiving unit configured to receive a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client;
  • a comparison unit configured to compare the face image with a preset face image, and to compare the eyeprint pair images with a preset eyeprint template;
  • a sending unit configured to send verification success information to the client if the comparison results of both the face image and the eyeprint pair images meet preset conditions.
  • In yet another aspect, an embodiment of the present invention provides a client, including:
  • an acquisition unit configured to capture a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count;
  • a sending unit configured to send the face image and the eyeprint pair images to the server, so that the server performs identity verification on the verification target.
  • In yet another aspect, an embodiment of the present invention provides a user identity verification system, including:
  • a server configured to send, upon receiving a user identity verification request, a face quality score threshold and an eyeprint acquisition step count corresponding to the current verification mode to a client;
  • a client configured to acquire a face image according to the face quality score threshold and to acquire eyeprint pair images corresponding to the eyeprint acquisition step count;
  • the server further configured to receive the face image and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client; to compare the face image with a preset face image and the eyeprint pair images with a preset eyeprint template; and, if the comparison results of both meet preset conditions, to send verification success information to the client.
  • The technical solutions provided by the embodiments of the present invention have at least the following advantages:
  • With the user identity verification method, apparatus and system provided by the embodiments of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and the eyeprint pair images with the preset eyeprint template; if the comparison results of both meet preset conditions, verification success information is sent to the client.
  • The embodiments of the present invention implement user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • FIG. 1 is a flowchart of a user identity verification method according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of another user identity verification method according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of still another user identity verification method according to an embodiment of the present invention;
  • FIG. 4 is a flowchart of yet another user identity verification method according to an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a server according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of another server according to an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of a client according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of another client according to an embodiment of the present invention;
  • FIG. 9 is a schematic diagram of a user identity verification system according to an embodiment of the present invention;
  • FIG. 10 is a schematic flowchart of a user identity verification scenario according to an embodiment of the present invention.
  • An embodiment of the present invention provides a user identity verification method. As shown in FIG. 1, the method includes:
  • The larger the eyeprint acquisition step count, the longer the eyeprint acquisition takes. Therefore, when the verification target has a sufficient number of eyeprint templates, the eyeprint acquisition step count can be relatively small; when the number of eyeprint templates is small, the step count can be relatively larger so that eyeprint pair images can be collected and accumulated as eyeprint templates for the verification target.
  • Configuring different eyeprint acquisition step counts for different situations can further improve the accuracy of user identity verification.
  • The server may exchange data with the client via communication means such as a mobile cellular network or a Wi-Fi network, which is not limited in the embodiment of the present invention.
  • The preset face image may be a photo that the user has registered with the public security network or a face photo that has previously passed user identity verification, which is not limited in the embodiment of the present invention.
  • The preset eyeprint template may be multiple groups of eyeprint pair images that have passed security verification.
  • The comparison operation may specifically be detecting whether the matching degree between the images meets a preset requirement, which is not limited in the embodiment of the present invention.
  • The preset conditions may be a face comparison score threshold, an eyeprint comparison score threshold, and the like, which are not limited in the embodiment of the present invention.
  • If the comparison results of both the face image and the eyeprint pair images meet the preset conditions, the verification is confirmed to be successful, thereby implementing multi-dimensional user identity verification based on the face image and the eyeprint pair images.
  • This multi-dimensional verification can further improve the accuracy of the user identity verification method.
  • With the user identity verification method provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and the eyeprint pair images with the preset eyeprint template; if the comparison results of both meet preset conditions, verification success information is sent to the client.
  • The embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • An embodiment of the present invention provides another user identity verification method. As shown in FIG. 2, the method includes:
  • The preset storage location stores eyeprint templates corresponding to different verification targets.
  • When the user needs to perform security verification such as login authentication or payment authentication, a user identity verification request is sent to the client.
  • The method may further include: if the number of eyeprint templates is less than the preset threshold, determining the current verification mode as an eyeprint enrollment mode; sending, to the client, the face quality score threshold and the eyeprint acquisition step count corresponding to the eyeprint enrollment mode, so that the client acquires a face image according to the face quality score threshold and acquires eyeprint pair images corresponding to the eyeprint acquisition step count; and saving the eyeprint pair images to the preset storage location as eyeprint templates corresponding to the verification target.
  • When the number of eyeprint templates is less than the preset threshold, the current verification mode is determined as the eyeprint enrollment mode.
  • The client can then enroll qualifying eyeprints in real time as eyeprint templates of the verification target, accumulating eyeprint templates; when the number of eyeprint templates reaches the preset threshold, the mode is switched to the identity verification mode, which can further improve the accuracy and reliability of user identity verification.
  • The current mode may specifically include an eyeprint enrollment mode, an identity verification mode, and the like, and is associated with the number of eyeprint templates saved for the user in the preset storage location, which is not limited in the embodiment of the present invention.
  • In the eyeprint enrollment mode, the required quality of the acquired face and eyeprint images is relatively high so that the server can accumulate eyeprint templates; in the identity verification mode, ordinary image quality suffices, because the server can perform eyeprint comparison against the previously accumulated eyeprint templates.
  • The face quality score threshold indicates the required quality of the face image acquired by the client.
  • Because a larger eyeprint acquisition step count means a longer acquisition time, different step counts are configured for different verification modes.
  • For the identity verification mode, the step count can be relatively small; for the eyeprint enrollment mode, whose main purpose is to acquire eyeprint pair images to accumulate as eyeprint templates, the step count can be configured relatively larger.
  • Configuring different eyeprint acquisition step counts for different verification modes can further improve the accuracy of user identity verification.
  • The client acquires a face image according to the face quality score threshold and acquires eyeprint pair images corresponding to the eyeprint acquisition step count.
  • The client can be deployed on a mobile device having a camera and a microphone, including but not limited to a smartphone or a tablet, and can capture the relevant images through the camera.
  • The server may exchange data via communication means such as a mobile cellular network or a Wi-Fi network, which is not limited in the embodiment of the present invention.
  • The preset face image may be a photo that the user has registered with the public security network or a face photo that has previously passed user identity verification, which is not limited in the embodiment of the present invention.
  • The preset eyeprint template may be multiple groups of eyeprint pair images that have passed security verification.
  • Comparing the face image with the preset face image may specifically include: using the face image and the preset face image as inputs of a preset face algorithm to obtain the face comparison score corresponding to the verification target. Comparing the eyeprint pair images with the preset eyeprint template includes: using the eyeprint pair images and the eyeprint templates corresponding to the verification target as inputs of a preset eyeprint algorithm to obtain multiple eyeprint liveness scores corresponding to the eyeprint acquisition step count and an eyeprint comparison score.
  • The preset face algorithm and the preset eyeprint algorithm may be convolutional neural network algorithms, multi-layer neural network algorithms, and the like, which are not limited in the embodiment of the present invention.
  • The face comparison score reflects the matching degree between the face image of the verification target and the preset face image.
  • Step 206 may include: if the face comparison score, the multiple eyeprint liveness scores, and the eyeprint comparison score are all greater than preset thresholds, sending verification success information to the client.
  • When all of these scores exceed the preset thresholds, the verification is confirmed to be successful, thereby improving the accuracy and reliability of the user identity verification method.
  • Updating the eyeprint templates corresponding to the verification target further ensures the accuracy of the eyeprint templates saved in the preset storage location, thereby further improving the accuracy of the user identity verification method.
  • A specific application scenario may proceed as shown in FIG. 10, but is not limited thereto.
  • The scenario may include the following: first, the server obtains, through the configured decision module FEArbitrator, that the number of eyeprint templates corresponding to the verification target is 10, which is greater than the preset template count threshold of 9, so the current verification mode is determined as the identity verification mode Verify; the face quality score threshold QT and the eyeprint acquisition step count corresponding to the mode Verify are then sent to the client. The client then acquires the face image and the eyeprint pairs.
  • The server compares the face image with the verified preset face image using the preset face algorithm to obtain a face comparison score FX, and compares the acquired eyeprint pairs with the preset eyeprint template using the preset eyeprint algorithm to obtain an eyeprint liveness score LK and an eyeprint comparison score MX. If FX is greater than or equal to the preset face comparison score threshold FT, LK is greater than or equal to the preset eyeprint liveness score threshold LT, and MX is greater than or equal to the preset eyeprint comparison score threshold, verification success information is sent to the client, and the preset eyeprint template is updated according to the acquired eyeprint pairs. If the number of eyeprint templates corresponding to the verification target is less than 9, the client is instructed to acquire eyeprint pair images until the number of eyeprint templates corresponding to the verification target is greater than or equal to 9, at which point the mode is switched to the identity verification mode.
  • With the user identity verification method provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and the eyeprint pair images with the preset eyeprint template; if the comparison results of both meet preset conditions, verification success information is sent to the client.
  • The embodiment of the present invention combines face image verification with eyeprint image verification and eyeprint liveness verification.
  • This multi-dimensional verification implements user identity verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • An embodiment of the present invention provides still another user identity verification method. As shown in FIG. 3, the method includes:
  • The execution subject of this embodiment may be a client, which may be deployed on a mobile device having a camera and a microphone, including but not limited to a smartphone or a tablet.
  • When the client receives a user request such as account login or payment, it captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, so that the server can perform security verification on the user, such as identity verification or payment identity verification.
  • The face image and the eyeprint pair images are sent to the server so that the server performs identity verification on the verification target.
  • The method may further include: the client preprocesses the captured face image and eyeprint pair images; the preprocessing may include image optimization, image segmentation, image compression, face image quality computation and eyeprint liveness computation.
  • With the user identity verification method provided by this embodiment, the client first captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, and then sends the face image and the eyeprint pair images to the server so that the server performs identity verification on the verification target.
  • The embodiment of the present invention combines face image verification with eyeprint image verification and eyeprint liveness verification; this multi-dimensional verification implements user identity verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • An embodiment of the present invention provides yet another user identity verification method. As shown in FIG. 4, the method includes:
  • The execution subject of this embodiment may be a client, which may be deployed on a mobile device having a camera and a microphone, including but not limited to a smartphone or a tablet.
  • When the user requests an operation such as account login or payment, a user identity verification request is sent to the server, so that the server performs security verification such as identity verification or payment authentication.
  • The user identity verification request may carry the user's identification information, so that the server can extract the user's preset face image, preset eyeprint template and other information used for subsequent user identity verification.
  • The client may exchange data with the server via communication means such as a mobile cellular network or a Wi-Fi network, which is not limited in the embodiment of the present invention.
  • For the verification mode, the face quality score threshold and the eyeprint acquisition step count, refer to the explanation of the corresponding part of step 101; details are not described herein again.
  • The client may capture the face image and the eyeprint pair images of the current verification target through a preset camera, which is not limited in the embodiment of the present invention.
  • The preset eyeprint liveness condition is used to reflect the authenticity of the eyeprint pair images.
  • Before the face image and the eyeprint pair images are sent to the server, detecting whether the image quality of the currently acquired face image is greater than or equal to the face quality score threshold, and whether the eyeprint pair images meet the preset eyeprint liveness condition, ensures the authenticity and accuracy of the images sent to the server for verification, thereby ensuring the accuracy of user identity verification.
  • The images are sent to the server only after these checks, so that the server performs identity verification on the user based on genuine and accurate images, which further ensures the accuracy of user identity verification.
  • With the user identity verification method provided by this embodiment, the client first captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, and then sends the face image and the eyeprint pair images to the server so that the server performs identity verification on the verification target.
  • The embodiment of the present invention combines face image verification with eyeprint image verification and eyeprint liveness verification; this multi-dimensional verification implements user identity verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • An embodiment of the present invention provides a server.
  • The server may include: a receiving unit 51, a comparison unit 52 and a sending unit 53.
  • The receiving unit 51 is configured to receive a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client;
  • the comparison unit 52 is configured to compare the face image with a preset face image, and to compare the eyeprint pair images with a preset eyeprint template;
  • the sending unit 53 is configured to send verification success information to the client if the comparison results of both the face image and the eyeprint pair images meet preset conditions.
  • This apparatus embodiment corresponds to the foregoing method embodiment.
  • For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
  • With the server provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and the eyeprint pair images with the preset eyeprint template; if the comparison results of both meet preset conditions, verification success information is sent to the client.
  • The embodiment of the present invention combines face image verification with eyeprint image verification and eyeprint liveness verification; this multi-dimensional verification implements user identity verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • An embodiment of the present invention provides another server.
  • The server may include: a receiving unit 61, a comparison unit 62, a sending unit 63, an obtaining unit 64, a determining unit 65, a saving unit 66 and an updating unit 67.
  • The receiving unit 61 is configured to receive a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client;
  • the comparison unit 62 is configured to compare the face image with a preset face image, and to compare the eyeprint pair images with a preset eyeprint template;
  • the sending unit 63 is configured to send verification success information to the client if the comparison results of both the face image and the eyeprint pair images meet preset conditions.
  • The server further includes:
  • the obtaining unit 64, configured to obtain, upon receiving a user identity verification request, the number of eyeprint templates corresponding to the verification target from a preset storage location, where the preset storage location stores eyeprint templates respectively corresponding to different verification targets;
  • the determining unit 65, configured to determine the current mode as the identity verification mode if the number of eyeprint templates is greater than or equal to a preset threshold;
  • the sending unit 63 is further configured to send, to the client, the face quality score threshold and the eyeprint acquisition step count corresponding to the identity verification mode, so that the client acquires a face image according to the face quality score threshold and acquires eyeprint pair images corresponding to the eyeprint acquisition step count.
  • The server further includes: a saving unit 66.
  • The determining unit 65 is further configured to determine the current verification mode as the eyeprint enrollment mode if the number of eyeprint templates is less than the preset threshold;
  • the sending unit 63 is further configured to send, to the client, the face quality score threshold and the eyeprint acquisition step count corresponding to the eyeprint enrollment mode, so that the client acquires a face image according to the face quality score threshold and acquires eyeprint pair images corresponding to the eyeprint acquisition step count;
  • the saving unit 66 is configured to save the eyeprint pair images to the preset storage location as eyeprint templates corresponding to the verification target.
  • The comparison unit 62 is specifically configured to use the face image and the preset face image as inputs of a preset face algorithm to obtain the face comparison score corresponding to the verification target;
  • the sending unit 63 is specifically configured to send verification success information to the client if the face comparison score, the multiple eyeprint liveness scores, and the eyeprint comparison score are all greater than preset thresholds.
  • The server further includes:
  • the updating unit 67, configured to update, when it is confirmed that the user identity verification is successful, the eyeprint templates corresponding to the verification target saved in the preset storage location according to the eyeprint pair images acquired by the client.
  • This apparatus embodiment corresponds to the foregoing method embodiment.
  • For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
  • With the other server provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and the eyeprint pair images with the preset eyeprint template; if the comparison results of both meet preset conditions, verification success information is sent to the client.
  • The embodiment of the present invention combines face image verification with eyeprint image verification and eyeprint liveness verification; this multi-dimensional verification implements user identity verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • An embodiment of the present invention provides a client.
  • The client may include: an acquisition unit 71 and a sending unit 72.
  • The acquisition unit 71 is configured to capture a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count;
  • the sending unit 72 is configured to send the face image and the eyeprint pair images to the server, so that the server performs identity verification on the verification target.
  • This apparatus embodiment corresponds to the foregoing method embodiment.
  • For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
  • With the client provided by the embodiment of the present invention, the client first captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, and then sends the face image and the eyeprint pair images to the server so that the server performs identity verification on the verification target.
  • The embodiment of the present invention combines face image verification with eyeprint image verification and eyeprint liveness verification; this multi-dimensional verification implements user identity verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • An embodiment of the present invention provides another client.
  • The client may include: an acquisition unit 81, a sending unit 82, a receiving unit 83 and a detecting unit 84.
  • The acquisition unit 81 is configured to capture a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count;
  • the sending unit 82 is configured to send the face image and the eyeprint pair images to the server, so that the server performs identity verification on the verification target.
  • The client further includes: a receiving unit 83;
  • the sending unit 82 is further configured to send a user identity verification request to the server;
  • the receiving unit 83 is configured to receive the face quality score threshold and the eyeprint acquisition step count corresponding to the current verification mode sent by the server;
  • the acquisition unit 81 is specifically configured to acquire the face image corresponding to the verification target according to the face quality score threshold and to acquire eyeprint pair images corresponding to the eyeprint acquisition step count.
  • The client further includes: a detecting unit 84;
  • the detecting unit 84 is configured to detect whether the image quality of the currently acquired face image is greater than or equal to the face quality score threshold;
  • the sending unit 82 is specifically configured to send the face image to the server if it is.
  • The detecting unit 84 is further configured to detect whether the eyeprint pair images meet a preset eyeprint liveness condition;
  • the sending unit 82 is specifically further configured to send the eyeprint pair images to the server if they do.
  • This apparatus embodiment corresponds to the foregoing method embodiment.
  • For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
  • With the other client provided by the embodiment of the present invention, the client first captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, and then sends the face image and the eyeprint pair images to the server so that the server performs identity verification on the verification target.
  • The embodiment of the present invention combines face image verification with eyeprint image verification and eyeprint liveness verification; this multi-dimensional verification implements user identity verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • An embodiment of the present invention provides a user identity verification system.
  • The user identity verification system includes: a server 91 and a client 92.
  • The server 91 is configured to send, upon receiving a user identity verification request, the face quality score threshold and the eyeprint acquisition step count corresponding to the current verification mode to the client;
  • the client 92 is configured to acquire a face image according to the face quality score threshold and to acquire eyeprint pair images corresponding to the eyeprint acquisition step count;
  • the server 91 is further configured to receive the face image and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client; to compare the face image with a preset face image and the eyeprint pair images with a preset eyeprint template; and, if the comparison results of both meet preset conditions, to send verification success information to the client.
  • This system embodiment corresponds to the foregoing method embodiment.
  • For ease of reading, this system embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the system in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
  • With the user identity verification system provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and the eyeprint pair images with the preset eyeprint template; if the comparison results of both meet preset conditions, verification success information is sent to the client.
  • The embodiment of the present invention combines face image verification with eyeprint image verification and eyeprint liveness verification; this multi-dimensional verification implements user identity verification, which improves the accuracy and reliability of the user identity verification method and ensures the security of users when using the application.
  • The user identity verification apparatus includes a processor and a memory. The above units are stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
  • The processor contains a kernel, and the kernel fetches the corresponding program unit from the memory.
  • One or more kernels may be provided, and the problem of the low accuracy of existing user identity verification methods is addressed by adjusting kernel parameters.
  • The memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
  • The present application also provides a computer program product which, when executed on a data processing device, is adapted to execute program code initialized with the following method steps:
  • a server configured to send, upon receiving a user identity verification request, a face quality score threshold and an eyeprint acquisition step count corresponding to the current verification mode to a client;
  • a client configured to acquire a face image according to the face quality score threshold and to acquire eyeprint pair images corresponding to the eyeprint acquisition step count;
  • the server further configured to receive the face image and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client; to compare the face image with a preset face image and the eyeprint pair images with a preset eyeprint template; and, if the comparison results of both meet preset conditions, to send verification success information to the client.
  • The embodiments of the present application may be provided as a method, a system, or a computer program product.
  • The present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • The present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • The computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus.
  • The instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing.
  • The instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
  • The memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory.
  • Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • The information may be computer-readable instructions, data structures, program modules or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention discloses a user identity verification method, apparatus and system, relating to the field of information technology, and is mainly intended to solve the problem that existing user identity verification methods have low accuracy and reliability. The method comprises: first receiving a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client; then comparing the face image with a preset face image and comparing the eyeprint pair images with a preset eyeprint template; and, if the comparison results of both the face image and the eyeprint pair images meet preset conditions, sending verification success information to the client.

Description

User identity verification method, apparatus and system
This application claims priority to Chinese Patent Application No. 201610717080.1, filed on August 24, 2016 and entitled "User identity verification method, apparatus and system", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of information technology, and in particular to a user identity verification method, apparatus and system.
Background
With the continuous development of information technology and the Internet, a wide variety of applications have emerged. Among them, more and more financial institutions provide users with applications (APPs) for handling financial business. To protect users' information security, a user who handles financial business through an APP needs to be verified, that is, subjected to security verification operations such as identity authentication and real-name authentication.
At present, user identity verification is usually performed by combining face image recognition with face liveness verification: while the captured face image is verified, a set of facial action parameters is issued to the user, who must perform these actions to complete action-based liveness verification. However, since fairly realistic 3D face images can now be synthesized and a user's motions and expressions can be simulated, existing user identity verification methods have low accuracy and reliability and cannot guarantee the security of users of the APP.
Summary
In view of this, embodiments of the present invention provide a user identity verification method, apparatus and system, with the main purpose of solving the problem that user identity verification methods in the prior art have low accuracy and reliability.
To achieve the above purpose, the present invention provides the following technical solutions:
In one aspect, an embodiment of the present invention provides a user identity verification method, comprising:
receiving a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client;
comparing the face image with a preset face image, and comparing the eyeprint pair images with a preset eyeprint template;
if the comparison results of both the face image and the eyeprint pair images meet preset conditions, sending verification success information to the client.
In another aspect, an embodiment of the present invention provides another user identity verification method, comprising:
capturing a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count;
sending the face image and the eyeprint pair images to a server, so that the server performs identity verification on the verification target.
In still another aspect, an embodiment of the present invention provides a server, comprising:
a receiving unit configured to receive a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client;
a comparison unit configured to compare the face image with a preset face image, and to compare the eyeprint pair images with a preset eyeprint template;
a sending unit configured to send verification success information to the client if the comparison results of both the face image and the eyeprint pair images meet preset conditions.
In yet another aspect, an embodiment of the present invention provides a client, comprising:
an acquisition unit configured to capture a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count;
a sending unit configured to send the face image and the eyeprint pair images to a server, so that the server performs identity verification on the verification target.
In yet another aspect, an embodiment of the present invention provides a user identity verification system, comprising:
a server configured to send, upon receiving a user identity verification request, a face quality score threshold and an eyeprint acquisition step count corresponding to the current verification mode to a client;
a client configured to acquire a face image according to the face quality score threshold and to acquire eyeprint pair images corresponding to the eyeprint acquisition step count;
the server further configured to receive the face image and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client; to compare the face image with a preset face image and the eyeprint pair images with a preset eyeprint template; and, if the comparison results of both the face image and the eyeprint pair images meet preset conditions, to send verification success information to the client.
By means of the above technical solutions, the technical solutions provided by the embodiments of the present invention have at least the following advantages:
With the user identity verification method, apparatus and system provided by the embodiments of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and compares the eyeprint pair images with the preset eyeprint template; if the comparison results of both the face image and the eyeprint pair images meet preset conditions, verification success information is sent to the client. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiments of the present invention implement user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
The above description is only an overview of the technical solutions of the present invention. In order to understand the technical means of the present invention more clearly so that they can be implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present invention more apparent and comprehensible, specific embodiments of the present invention are set forth below.
Brief Description of the Drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the present invention. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
FIG. 1 is a flowchart of a user identity verification method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another user identity verification method according to an embodiment of the present invention;
FIG. 3 is a flowchart of still another user identity verification method according to an embodiment of the present invention;
FIG. 4 is a flowchart of yet another user identity verification method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a server according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another server according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a client according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of another client according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a user identity verification system according to an embodiment of the present invention;
FIG. 10 is a schematic flowchart of a user identity verification scenario according to an embodiment of the present invention.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
An embodiment of the present invention provides a user identity verification method. As shown in FIG. 1, the method includes:
101. Receive a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client.
The larger the eyeprint acquisition step count, the longer the eyeprint acquisition takes. Therefore, when the verification target already has a sufficient number of eyeprint templates, the eyeprint acquisition step count can be configured relatively small; when the verification target has few eyeprint templates, the step count can be configured relatively large so that eyeprint pair images can be collected and accumulated as eyeprint templates for the verification target. For the embodiment of the present invention, configuring different eyeprint acquisition step counts for different situations can further improve the accuracy of user identity verification. The server may exchange data with the client via communication means such as a mobile cellular network or a Wi-Fi network, which is not limited in the embodiment of the present invention.
102. Compare the face image with a preset face image, and compare the eyeprint pair images with a preset eyeprint template.
The preset face image may be a photo that the user has registered with the public security network or a face photo that has previously passed user identity verification, which is not limited in the embodiment of the present invention. The preset eyeprint template may be multiple groups of eyeprint pair images that have passed security verification. The comparison operation may specifically be detecting whether the matching degree between the images meets a preset requirement, which is not limited in the embodiment of the present invention.
103. If the comparison results of both the face image and the eyeprint pair images meet preset conditions, send verification success information to the client.
The preset conditions may be a face comparison score threshold, an eyeprint comparison score threshold, and the like, which are not limited in the embodiment of the present invention. For the embodiment of the present invention, when the comparison results of both the face image and the eyeprint pair images meet the preset conditions, the verification is confirmed to be successful, thereby implementing multi-dimensional user identity verification based on both the face image and the eyeprint pair images, which can further improve the accuracy of the user identity verification method.
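As a minimal illustration of step 103, the snippet below expresses the "both comparison results meet the preset conditions" decision as two score thresholds. The score range and the threshold values are assumptions made for illustration; the description only names a face comparison score threshold and an eyeprint comparison score threshold as examples of the preset conditions.

```python
# Minimal sketch of the step 103 decision: both comparison results must
# meet their preset conditions before verification success is reported.
# Score ranges and threshold values are assumed, not taken from the patent.

FACE_COMPARISON_THRESHOLD = 0.8      # assumed preset face comparison score threshold
EYEPRINT_COMPARISON_THRESHOLD = 0.8  # assumed preset eyeprint comparison score threshold


def verification_succeeds(face_comparison_score, eyeprint_comparison_score):
    """Return True only if both comparison results meet the preset conditions."""
    return (face_comparison_score >= FACE_COMPARISON_THRESHOLD
            and eyeprint_comparison_score >= EYEPRINT_COMPARISON_THRESHOLD)


if __name__ == "__main__":
    print(verification_succeeds(0.91, 0.87))  # True -> send verification success information
    print(verification_succeeds(0.91, 0.42))  # False -> verification fails
```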
With the user identity verification method provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and compares the eyeprint pair images with the preset eyeprint template; if the comparison results of both the face image and the eyeprint pair images meet preset conditions, verification success information is sent to the client. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
Further, an embodiment of the present invention provides another user identity verification method. As shown in FIG. 2, the method includes:
201. Upon receiving a user identity verification request, obtain, from a preset storage location, the number of eyeprint templates corresponding to the verification target.
The preset storage location stores eyeprint templates respectively corresponding to different verification targets. When the user needs to perform security verification such as login identity verification or payment identity verification, a user identity verification request is sent to the client.
For the embodiment of the present invention, after step 201 the method may further include: if the number of eyeprint templates is less than the preset threshold, determining the current verification mode as an eyeprint enrollment mode; sending, to the client, the face quality score threshold and the eyeprint acquisition step count corresponding to the eyeprint enrollment mode, so that the client acquires a face image according to the face quality score threshold and acquires eyeprint pair images corresponding to the eyeprint acquisition step count; and saving the eyeprint pair images to the preset storage location as eyeprint templates corresponding to the verification target.
It should be noted that when the number of eyeprint templates is less than the preset threshold, the templates are too few to guarantee the accuracy of eyeprint verification. In this case, determining the current verification mode as the eyeprint enrollment mode allows the client to enroll, in real time, eyeprints that meet the requirements as eyeprint templates of the verification target, thereby accumulating eyeprint templates. When the number of eyeprint templates reaches the preset threshold, the mode is switched to the identity verification mode, which can further improve the accuracy and reliability of user identity verification.
202. If the number of eyeprint templates is greater than or equal to the preset threshold, determine the current mode as the identity verification mode.
The current mode may specifically include an eyeprint enrollment mode, an identity verification mode, and the like, and is associated with the number of eyeprint templates saved for the user in the preset storage location, which is not limited in the embodiment of the present invention. It should be noted that in the eyeprint enrollment mode the required quality of the acquired face and eyeprint images is relatively high, so that the server can accumulate eyeprint templates; in the identity verification mode, ordinary image quality is sufficient, because the server can perform eyeprint comparison against the previously accumulated eyeprint templates. The face quality score threshold indicates the required quality of the face image acquired by the client: the larger the threshold, the higher the required quality. The eyeprint acquisition step count indicates how many groups of eyeprint pairs the client acquires at a time; for example, if the step count is 5, the client needs to acquire 5 groups of eyeprint pairs.
It should be noted that, because a larger eyeprint acquisition step count means a longer acquisition time, different step counts are configured for different verification modes. For example, for the identity verification mode, the eyeprint templates are sufficient, so the step count can be configured relatively small; for the eyeprint enrollment mode, whose main purpose is to acquire eyeprint pair images to accumulate as templates, the step count can be configured relatively large. For the embodiment of the present invention, configuring different eyeprint acquisition step counts for different verification modes can further improve the accuracy of user identity verification.
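The following sketch illustrates how a decision module such as the FEArbitrator mentioned in the FIG. 10 scenario might map the stored eyeprint template count to a verification mode, a face quality score threshold and an eyeprint acquisition step count. The class-free structure, the quality thresholds and the enrollment step count are assumptions made for illustration; the description only gives a template count threshold of 9 and a step count of 1 for the verification mode.

```python
# Illustrative sketch of the mode decision described in steps 201-203.
# The numeric quality thresholds and the enrollment step count are assumed;
# FIG. 10 only specifies a template count threshold of 9 and a step count
# of 1 for the identity verification mode.

TEMPLATE_COUNT_THRESHOLD = 9  # preset threshold on the number of eyeprint templates


def decide_mode(template_count):
    """Return (mode, face_quality_score_threshold, eyeprint_step_count)."""
    if template_count >= TEMPLATE_COUNT_THRESHOLD:
        # Enough templates: identity verification mode, ordinary quality, few steps.
        return ("verify", 0.6, 1)
    # Too few templates: eyeprint enrollment mode, higher quality, more steps,
    # so eyeprint pair images can be accumulated as templates.
    return ("enroll", 0.8, 5)


if __name__ == "__main__":
    print(decide_mode(10))  # ('verify', 0.6, 1)
    print(decide_mode(3))   # ('enroll', 0.8, 5)
```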
203. Send, to the client, the face quality score threshold and the eyeprint acquisition step count corresponding to the identity verification mode.
Further, the client then acquires a face image according to the face quality score threshold and acquires eyeprint pair images corresponding to the eyeprint acquisition step count. The client may be deployed on a mobile device having a camera and a microphone, including but not limited to a smartphone or a tablet computer, and the client may capture the relevant images through the camera.
204. Receive the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client.
The server may exchange data via communication means such as a mobile cellular network or a Wi-Fi network, which is not limited in the embodiment of the present invention.
205. Compare the face image with the preset face image, and compare the eyeprint pair images with the preset eyeprint template.
The preset face image may be a photo that the user has registered with the public security network or a face photo that has previously passed user identity verification, which is not limited in the embodiment of the present invention. The preset eyeprint template may be multiple groups of eyeprint pair images that have passed security verification.
For the embodiment of the present invention, if the current mode is determined as the identity verification mode, comparing the face image with the preset face image may specifically include: using the face image and the preset face image as inputs of a preset face algorithm to obtain the face comparison score corresponding to the verification target. Comparing the eyeprint pair images with the preset eyeprint template includes: using the eyeprint pair images and the eyeprint templates corresponding to the verification target as inputs of a preset eyeprint algorithm to obtain multiple eyeprint liveness scores corresponding to the eyeprint acquisition step count and an eyeprint comparison score.
The preset face algorithm and the preset eyeprint algorithm may specifically be convolutional neural network algorithms, multi-layer neural network algorithms, and the like, which are not limited in the embodiment of the present invention. The face comparison score reflects the matching degree between the face image of the verification target and the preset face image: the higher the face comparison score, the higher the matching degree. The eyeprint liveness score reflects the authenticity of the currently acquired eyeprint pair images of the verification target: the higher the liveness score, the more likely the eyeprint pair is genuine. The eyeprint comparison score reflects the matching degree between the eyeprint pair images of the verification target and the preset eyeprint template: the higher the comparison score, the higher the matching degree.
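Because the description leaves the face and eyeprint algorithms open (convolutional and multi-layer neural networks are given only as examples), the sketch below shows one common way such comparison scores could be produced: extracting fixed-length embeddings and scoring them by cosine similarity. The embedding extractor, the pass-through liveness scores and the use of cosine similarity are assumptions for illustration, not the patent's prescribed method.

```python
# Illustrative sketch of producing a face comparison score and eyeprint
# comparison/liveness scores from fixed-length embeddings (step 205).
# Cosine similarity is just one possible "matching degree"; the embeddings
# are assumed to come from some separate feature-extraction model.
import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def face_comparison_score(face_embedding, preset_face_embedding):
    # Face comparison score: matching degree to the preset face image.
    return cosine_similarity(face_embedding, preset_face_embedding)


def eyeprint_scores(eyeprint_pair_embeddings, template_embeddings, liveness_scores):
    """Return (per-pair eyeprint liveness scores, overall eyeprint comparison score).

    `liveness_scores` would come from a separate liveness model run on each
    acquired eyeprint pair; here they are simply passed through unchanged.
    """
    comparison_score = max(
        cosine_similarity(pair, template)
        for pair in eyeprint_pair_embeddings
        for template in template_embeddings
    )
    return liveness_scores, comparison_score
```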
206. If the comparison results of both the face image and the eyeprint pair images meet preset conditions, send verification success information to the client.
For the embodiment of the present invention, step 206 may specifically include: if the face comparison score, the multiple eyeprint liveness scores, and the eyeprint comparison score are all greater than the preset thresholds, sending verification success information to the client. In the embodiment of the present invention, when the face comparison score, the multiple eyeprint liveness scores, and the eyeprint comparison score are all greater than the preset thresholds, the verification is confirmed to be successful, which can improve the accuracy and reliability of the user identity verification method.
207. When it is confirmed that the user identity verification is successful, update the eyeprint templates corresponding to the verification target saved in the preset storage location according to the eyeprint pair images acquired by the client.
For the embodiment of the present invention, when the user identity verification is confirmed to be successful, the eyeprint pair images acquired by the client are known to be genuine and reliable. Updating the eyeprint templates corresponding to the verification target saved in the preset storage location according to these eyeprint pair images can further ensure the accuracy of the saved eyeprint templates, thereby further improving the accuracy of the user identity verification method.
For the embodiment of the present invention, a specific application scenario may proceed as shown in FIG. 10, but is not limited thereto, and includes the following. First, the server obtains, through the configured decision module FEArbitrator, that the number of eyeprint templates corresponding to the verification target is 10, which is greater than the preset template count threshold of 9; it therefore determines that the current verification mode is the identity verification mode Verify, and then sends to the client the face quality score threshold QT and the eyeprint acquisition step count 1 corresponding to the mode Verify. The client acquires a face image and one group of eyeprint pairs; after determining that the quality of the acquired face image is greater than or equal to QT, it performs preprocessing such as optimization and compression on the acquired face image and eyeprint pair images and sends them to the server. The server compares the face image with the verified preset face image using the preset face algorithm to obtain a face comparison score FX, and compares the acquired eyeprint pairs with the preset eyeprint template using the preset eyeprint algorithm to obtain an eyeprint liveness score LK and an eyeprint comparison score MX. If FX is greater than or equal to the preset face comparison score threshold FT, LK is greater than or equal to the preset eyeprint liveness score threshold LT, and MX is greater than or equal to the preset eyeprint comparison score threshold, verification success information is sent to the client, and the preset eyeprint template is updated according to the acquired eyeprint pairs. If the number of eyeprint templates corresponding to the verification target is less than 9, the client is instructed to acquire eyeprint pair images until the number of eyeprint templates corresponding to the verification target is greater than or equal to 9, at which point the mode is switched to the identity verification mode.
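As a worked illustration of the FIG. 10 decision in steps 206-207, the snippet below uses the threshold names FX, LK, MX, FT and LT from the scenario above (the eyeprint comparison score threshold, unnamed there, is called MT here). The numeric threshold values and example scores are assumptions chosen only to make the snippet runnable.

```python
# Worked illustration of the FIG. 10 decision (steps 206-207). The threshold
# values and the example scores FX, LK, MX are assumed for illustration.

FT = 0.80  # assumed preset face comparison score threshold
LT = 0.90  # assumed preset eyeprint liveness score threshold
MT = 0.85  # assumed preset eyeprint comparison score threshold


def decide_and_update(FX, LK, MX, templates, acquired_eyeprint_pairs):
    """Return True (verification success) and update the templates if all scores pass."""
    if FX >= FT and LK >= LT and MX >= MT:
        # Step 207: the acquired eyeprint pairs are considered genuine,
        # so they are used to update the stored eyeprint templates.
        templates.extend(acquired_eyeprint_pairs)
        return True
    return False


if __name__ == "__main__":
    templates = ["template_1", "template_2"]  # placeholder template store
    ok = decide_and_update(FX=0.92, LK=0.95, MX=0.88,
                           templates=templates,
                           acquired_eyeprint_pairs=["new_pair"])
    print(ok, len(templates))  # True 3
```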
With the other user identity verification method provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and compares the eyeprint pair images with the preset eyeprint template; if the comparison results of both the face image and the eyeprint pair images meet preset conditions, verification success information is sent to the client. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
Further, an embodiment of the present invention provides still another user identity verification method. As shown in FIG. 3, the method includes:
301. Capture a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count.
The execution subject of the embodiment of the present invention may be a client, which may be deployed on a mobile device having a camera and a microphone, including but not limited to a smartphone or a tablet computer. When the client receives a user request such as account login or payment, it captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, so that the server can perform security verification on the user, such as identity verification or payment identity verification.
302. Send the face image and the eyeprint pair images to the server.
Further, this is done so that the server performs identity verification on the verification target.
For the embodiment of the present invention, before step 302 the method may further include: the client preprocesses the captured face image and eyeprint pair images; the preprocessing may include image optimization, image segmentation, image compression, face image quality computation, eyeprint liveness computation, and the like, which is not limited in the embodiment of the present invention. Preprocessing the captured face image and eyeprint pair images ensures the authenticity and accuracy of the images used for verification at the server, thereby ensuring the accuracy of user identity verification.
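The preprocessing steps are only named, not specified, in the description. The sketch below shows one plausible client-side pass, assuming OpenCV is available: a Laplacian-variance sharpness measure stands in for the face image quality computation, and JPEG encoding stands in for image compression. These concrete choices are assumptions, not the patent's method.

```python
# Hypothetical client-side preprocessing (before step 302), assuming OpenCV.
# Laplacian variance is used only as a simple stand-in for the face image
# quality computation; JPEG encoding stands in for image compression.
import cv2


def face_quality_score(image_bgr):
    """Rough sharpness proxy: variance of the Laplacian of the grayscale image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def compress_image(image_bgr, jpeg_quality=80):
    """Compress the image to JPEG bytes before uploading to the server."""
    ok, buf = cv2.imencode(".jpg", image_bgr, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    if not ok:
        raise ValueError("JPEG encoding failed")
    return buf.tobytes()


def preprocess(face_image_bgr, eyeprint_pairs_bgr, quality_threshold):
    """Return compressed payloads, or None if the face image quality is too low."""
    if face_quality_score(face_image_bgr) < quality_threshold:
        return None  # below the face quality score threshold; re-capture
    return {
        "face": compress_image(face_image_bgr),
        "eyeprints": [compress_image(img) for img in eyeprint_pairs_bgr],
    }
```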
With the still another user identity verification method provided by the embodiment of the present invention, the client first captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, and then sends the face image and the eyeprint pair images to the server so that the server performs identity verification on the verification target. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
Further, an embodiment of the present invention provides yet another user identity verification method. As shown in FIG. 4, the method includes:
401. Send a user identity verification request to the server.
The execution subject of the embodiment of the present invention may be a client, which may be deployed on a mobile device having a camera and a microphone, including but not limited to a smartphone or a tablet computer. When the user requests an operation such as account login or payment, a user identity verification request is sent to the server so that the server performs security verification on the user, such as identity verification or payment identity verification. The user identity verification request may carry the user's identification information, so that the server can extract from a database the user's preset face image, preset eyeprint template and other information used for subsequent user identity verification.
402. Receive the face quality score threshold and the eyeprint acquisition step count corresponding to the current verification mode sent by the server.
The client may exchange data with the server via communication means such as a mobile cellular network or a Wi-Fi network, which is not limited in the embodiment of the present invention. For specific explanations of the verification mode, the face quality score threshold and the eyeprint acquisition step count, refer to the explanation of the corresponding part of step 101; details are not described herein again.
403. Acquire a face image according to the face quality score threshold and acquire eyeprint pair images corresponding to the eyeprint acquisition step count.
Specifically, the client may capture the face image and the eyeprint pair images of the current verification target through a preset camera, which is not limited in the embodiment of the present invention.
404. Detect whether the image quality of the currently acquired face image is greater than or equal to the face quality score threshold, and detect whether the eyeprint pair images meet a preset eyeprint liveness condition.
The preset eyeprint liveness condition is used to reflect the authenticity of the eyeprint pair images. For the embodiment of the present invention, before the face image and the eyeprint pair images are sent to the server, detecting whether the image quality of the currently acquired face image is greater than or equal to the face quality score threshold and whether the eyeprint pair images meet the preset eyeprint liveness condition ensures the authenticity and accuracy of the images sent to the server for verification, thereby ensuring the accuracy of user identity verification.
405. If the image quality of the currently acquired face image is greater than or equal to the face quality score threshold and the eyeprint pair images meet the preset eyeprint liveness condition, send the face image and the eyeprint pair images to the server.
Further, this is done so that the server performs identity verification on the user. For the embodiment of the present invention, sending the images to the server only after confirming that the quality of the captured face image and of the eyeprint pair images meets the requirements ensures the authenticity and accuracy of the images used for verification at the server, thereby ensuring the accuracy of user identity verification.
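Putting steps 401-405 together, the sketch below outlines the client-side flow. The server object, the capture functions and the liveness check are supplied by the caller, and the demo at the bottom uses trivial stand-ins so the sketch runs as-is; none of these names, nor the transport between client and server, come from the patent.

```python
# Hypothetical outline of the client-side flow in FIG. 4 (steps 401-405).
# The concrete capture, quality and liveness implementations are injected;
# the network exchange is represented by plain method calls.

def run_client_verification(server, user_id, capture_face, capture_eyeprint_pair,
                            image_quality, liveness_ok):
    # Steps 401/402: send the verification request, receive the face quality
    # score threshold and eyeprint acquisition step count for the current mode.
    quality_threshold, step_count = server.start_verification(user_id)

    # Step 403: capture a face image and `step_count` groups of eyeprint pairs.
    face_image = capture_face()
    eyeprint_pairs = [capture_eyeprint_pair() for _ in range(step_count)]

    # Step 404: local quality and liveness checks before upload.
    if image_quality(face_image) < quality_threshold:
        return "recapture_face"
    if not liveness_ok(eyeprint_pairs):
        return "recapture_eyeprints"

    # Step 405: send the images so the server can verify the user.
    return server.verify(user_id, face_image, eyeprint_pairs)


class _DemoServer:
    def start_verification(self, user_id):
        return 0.5, 1  # assumed (face quality score threshold, step count)

    def verify(self, user_id, face_image, eyeprint_pairs):
        return "verification_success"


if __name__ == "__main__":
    result = run_client_verification(
        _DemoServer(), "user-123",
        capture_face=lambda: "face-image",
        capture_eyeprint_pair=lambda: "eyeprint-pair",
        image_quality=lambda img: 0.9,
        liveness_ok=lambda pairs: True,
    )
    print(result)  # verification_success
```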
With the yet another user identity verification method provided by the embodiment of the present invention, the client first captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, and then sends the face image and the eyeprint pair images to the server so that the server performs identity verification on the verification target. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
Further, as a specific implementation of the method shown in FIG. 1, an embodiment of the present invention provides a server. As shown in FIG. 5, the server may include: a receiving unit 51, a comparison unit 52 and a sending unit 53.
The receiving unit 51 is configured to receive a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client;
the comparison unit 52 is configured to compare the face image with a preset face image, and to compare the eyeprint pair images with a preset eyeprint template;
the sending unit 53 is configured to send verification success information to the client if the comparison results of both the face image and the eyeprint pair images meet preset conditions.
It should be noted that this apparatus embodiment corresponds to the foregoing method embodiment. For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
With the server provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and compares the eyeprint pair images with the preset eyeprint template; if the comparison results of both the face image and the eyeprint pair images meet preset conditions, verification success information is sent to the client. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
Further, as a specific implementation of the method shown in FIG. 2, an embodiment of the present invention provides another server. As shown in FIG. 6, the server may include: a receiving unit 61, a comparison unit 62, a sending unit 63, an obtaining unit 64, a determining unit 65, a saving unit 66 and an updating unit 67.
The receiving unit 61 is configured to receive a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count, both sent by a client;
the comparison unit 62 is configured to compare the face image with a preset face image, and to compare the eyeprint pair images with a preset eyeprint template;
the sending unit 63 is configured to send verification success information to the client if the comparison results of both the face image and the eyeprint pair images meet preset conditions.
Further, the server further includes:
the obtaining unit 64, configured to obtain, upon receiving a user identity verification request, the number of eyeprint templates corresponding to the verification target from a preset storage location, where the preset storage location stores eyeprint templates respectively corresponding to different verification targets;
the determining unit 65, configured to determine the current mode as the identity verification mode if the number of eyeprint templates is greater than or equal to a preset threshold;
the sending unit 63 is further configured to send, to the client, the face quality score threshold and the eyeprint acquisition step count corresponding to the identity verification mode, so that the client acquires a face image according to the face quality score threshold and acquires eyeprint pair images corresponding to the eyeprint acquisition step count.
Further, the server further includes: a saving unit 66.
The determining unit 65 is further configured to determine the current verification mode as the eyeprint enrollment mode if the number of eyeprint templates is less than the preset threshold;
the sending unit 63 is further configured to send, to the client, the face quality score threshold and the eyeprint acquisition step count corresponding to the eyeprint enrollment mode, so that the client acquires a face image according to the face quality score threshold and acquires eyeprint pair images corresponding to the eyeprint acquisition step count;
the saving unit 66 is configured to save the eyeprint pair images to the preset storage location as eyeprint templates corresponding to the verification target.
Further, the comparison unit 62 is specifically configured to use the face image and the preset face image as inputs of a preset face algorithm to obtain the face comparison score corresponding to the verification target;
and to use the eyeprint pair images and the eyeprint templates corresponding to the verification target as inputs of a preset eyeprint algorithm to obtain multiple eyeprint liveness scores and an eyeprint comparison score corresponding to the eyeprint acquisition step count.
Further, the sending unit 63 is specifically configured to send verification success information to the client if the face comparison score, the multiple eyeprint liveness scores, and the eyeprint comparison score are all greater than preset thresholds.
Further, the server further includes:
the updating unit 67, configured to update, when it is confirmed that the user identity verification is successful, the eyeprint templates corresponding to the verification target saved in the preset storage location according to the eyeprint pair images acquired by the client.
It should be noted that this apparatus embodiment corresponds to the foregoing method embodiment. For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
With the other server provided by the embodiment of the present invention, upon receiving a user identity verification request, the server first receives the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count sent by the client, then compares the face image with the preset face image and compares the eyeprint pair images with the preset eyeprint template; if the comparison results of both the face image and the eyeprint pair images meet preset conditions, verification success information is sent to the client. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
Further, as a specific implementation of the method shown in FIG. 3, an embodiment of the present invention provides a client. As shown in FIG. 7, the client may include: an acquisition unit 71 and a sending unit 72.
The acquisition unit 71 is configured to capture a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count;
the sending unit 72 is configured to send the face image and the eyeprint pair images to the server, so that the server performs identity verification on the verification target.
It should be noted that this apparatus embodiment corresponds to the foregoing method embodiment. For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
With the client provided by the embodiment of the present invention, the client first captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, and then sends the face image and the eyeprint pair images to the server so that the server performs identity verification on the verification target. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
Further, as a specific implementation of the method shown in FIG. 4, an embodiment of the present invention provides another client. As shown in FIG. 8, the client may include: an acquisition unit 81, a sending unit 82, a receiving unit 83 and a detecting unit 84.
The acquisition unit 81 is configured to capture a face image corresponding to a verification target and eyeprint pair images corresponding to an eyeprint acquisition step count;
the sending unit 82 is configured to send the face image and the eyeprint pair images to the server, so that the server performs identity verification on the verification target.
Further, the client further includes: a receiving unit 83.
The sending unit 82 is further configured to send a user identity verification request to the server;
the receiving unit 83 is configured to receive the face quality score threshold and the eyeprint acquisition step count corresponding to the current verification mode sent by the server;
the acquisition unit 81 is specifically configured to acquire the face image corresponding to the verification target according to the face quality score threshold and to acquire eyeprint pair images corresponding to the eyeprint acquisition step count.
Further, the client further includes: a detecting unit 84.
The detecting unit 84 is configured to detect whether the image quality of the currently acquired face image is greater than or equal to the face quality score threshold;
the sending unit 82 is specifically configured to send the face image to the server if it is.
The detecting unit 84 is further configured to detect whether the eyeprint pair images meet a preset eyeprint liveness condition;
the sending unit 82 is specifically further configured to send the eyeprint pair images to the server if they do.
It should be noted that this apparatus embodiment corresponds to the foregoing method embodiment. For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents of the foregoing method embodiment.
With the other client provided by the embodiment of the present invention, the client first captures the face image corresponding to the verification target and the eyeprint pair images corresponding to the eyeprint acquisition step count, and then sends the face image and the eyeprint pair images to the server so that the server performs identity verification on the verification target. Compared with the current practice of performing user identity verification by combining face image recognition with action-based face liveness verification, the embodiment of the present invention implements user identity verification through multi-dimensional verification that combines face image verification with eyeprint image verification and eyeprint liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the security of users when using the application.
Still further, as a specific implementation of the methods shown in FIG. 1 and FIG. 3, an embodiment of the present invention provides a user identity verification system. As shown in FIG. 9, the user identity verification system includes a server 91 and a client 92.
The server 91 is configured to send, when a user identity verification request is received, a face quality score threshold and an eye pattern collection step count corresponding to the current verification mode to the client.
The client 92 is configured to obtain a face image according to the face quality score threshold and to obtain eye pattern pair images whose number corresponds to the eye pattern collection step count.
The server 91 is further configured to receive the face image and the eye pattern pair images corresponding to the eye pattern collection step count sent by the client; compare the face image with a preset face image, and compare the eye pattern pair images with preset eye pattern templates; and send identity verification success information to the client if the comparison results of both the face image and the eye pattern pair images satisfy preset conditions.
It should be noted that this system embodiment corresponds to the foregoing method embodiments. For ease of reading, the details of the foregoing method embodiments are not repeated here one by one, but it should be understood that the system in this embodiment can correspondingly implement all of the content of the foregoing method embodiments.
According to the user identity verification system provided by this embodiment of the present invention, when a user identity verification request is received, the server first receives the face image corresponding to the identity verification object and the eye pattern pair images corresponding to the eye pattern collection step count sent by the client, then compares the face image with the preset face image and compares the eye pattern pair images with the preset eye pattern templates, and, if the comparison results of both the face image and the eye pattern pair images satisfy the preset conditions, sends identity verification success information to the client. Compared with the current practice of performing user identity verification by combining face image recognition with face image liveness verification, this embodiment of the present invention performs user identity verification through multi-dimensional verification that combines face image verification with eye pattern image verification and eye pattern liveness verification, thereby improving the accuracy and reliability of the user identity verification method and ensuring the user's security when using an application.
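By way of illustration only, the end-to-end message flow of FIG. 9 is sketched below as a single Python function; the in-memory data, the assumed mode parameters, and the toy matching logic only demonstrate the order of the interactions between the server 91 and the client 92.

```python
# Illustrative sketch only; values and matching logic are assumptions.
def demo_flow() -> str:
    # Server 91 state: one stored face and enough eye templates for authentication mode.
    preset_face = b"face-template"
    eye_templates = [b"eye-1", b"eye-2", b"eye-3"]

    # Step 1: the server answers the verification request with mode parameters.
    face_quality_threshold, collection_steps = 0.8, 2  # assumed values

    # Step 2: the client 92 collects images that satisfy those parameters.
    face_image, face_quality = b"face-template", 0.92  # pretend capture + quality score
    if face_quality < face_quality_threshold:
        return "client keeps retrying the capture"
    eye_pairs = [b"eye-1", b"eye-2"][:collection_steps]

    # Step 3: the server compares against its presets and reports the result.
    face_ok = face_image == preset_face
    eyes_ok = all(p in eye_templates for p in eye_pairs)
    return "verification succeeded" if face_ok and eyes_ok else "verification failed"


if __name__ == "__main__":
    print(demo_flow())
```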
The user identity verification apparatus includes a processor and a memory. The above units are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
The processor contains a kernel, and the kernel retrieves the corresponding program units from the memory. One or more kernels may be provided, and the relatively low accuracy of existing user identity verification methods is addressed by adjusting kernel parameters.
The memory may include a non-persistent memory, a random access memory (RAM), and/or a non-volatile memory in a computer-readable medium, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory includes at least one memory chip.
The present application also provides a computer program product which, when executed on a data processing device, is adapted to execute program code initialized with the following steps:
a server sends, when a user identity verification request is received, a face quality score threshold and an eye pattern collection step count corresponding to the current verification mode to a client;
the client obtains a face image according to the face quality score threshold and obtains eye pattern pair images whose number corresponds to the eye pattern collection step count; and
the server receives the face image and the eye pattern pair images corresponding to the eye pattern collection step count sent by the client, compares the face image with a preset face image and compares the eye pattern pair images with preset eye pattern templates, and sends identity verification success information to the client if the comparison results of both the face image and the eye pattern pair images satisfy preset conditions.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) that contain computer-usable program code.
The present application is described with reference to the flowcharts and/or block diagrams of the user identity verification method, apparatus, system, and computer program product according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include a non-persistent memory, a random access memory (RAM), and/or a non-volatile memory in a computer-readable medium, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
The above are merely embodiments of the present application and are not intended to limit the present application. Various modifications and variations of the present application will occur to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (21)

  1. A user identity verification method, comprising:
    receiving a face image corresponding to an identity verification object and eye pattern pair images corresponding to an eye pattern collection step count, both sent by a client;
    comparing the face image with a preset face image, and comparing the eye pattern pair images with preset eye pattern templates; and
    sending identity verification success information to the client if comparison results of both the face image and the eye pattern pair images satisfy preset conditions.
  2. The method according to claim 1, wherein before the receiving of the face image corresponding to the identity verification object and the eye pattern pair images corresponding to the eye pattern collection step count sent by the client, the method further comprises:
    when a user identity verification request is received, acquiring the number of eye pattern templates corresponding to the identity verification object from a preset storage location, wherein the preset storage location stores eye pattern templates respectively corresponding to different identity verification objects;
    if the number of eye pattern templates is greater than or equal to a preset threshold, determining the current mode as an authentication mode; and
    sending a face quality score threshold and an eye pattern collection step count corresponding to the authentication mode to the client, so that the client obtains the face image according to the face quality score threshold and obtains eye pattern pair images whose number corresponds to the eye pattern collection step count.
  3. The method according to claim 2, wherein after the acquiring of the number of eye pattern templates corresponding to the identity verification object from the preset storage location, the method further comprises:
    if the number of eye pattern templates is less than the preset threshold, determining the current verification mode as an eye pattern enrollment mode;
    sending a face quality score threshold and an eye pattern collection step count corresponding to the eye pattern enrollment mode to the client, so that the client obtains the face image according to the face quality score threshold and obtains eye pattern pair images whose number corresponds to the eye pattern collection step count; and
    saving the eye pattern pair images to the preset storage location as the eye pattern templates corresponding to the identity verification object.
  4. The method according to claim 2, wherein the comparing of the face image with the preset face image comprises:
    taking the face image and the preset face image as inputs to a preset face algorithm, to obtain a face comparison score corresponding to the identity verification object; and
    the comparing of the eye pattern pair images with the preset eye pattern templates comprises:
    taking the eye pattern pair images and the eye pattern templates corresponding to the identity verification object as inputs to a preset eye pattern algorithm, to obtain multiple eye pattern liveness scores and an eye pattern comparison score corresponding to the eye pattern collection step count.
  5. The method according to claim 4, wherein the sending of identity verification success information to the client if the comparison results of both the face image and the eye pattern pair images satisfy the preset conditions comprises:
    sending identity verification success information to the client if the face comparison score, the multiple eye pattern liveness scores, and the eye pattern comparison score are all greater than the corresponding preset thresholds.
  6. The method according to claim 4, further comprising:
    when it is confirmed that the user identity verification succeeds, updating the eye pattern templates corresponding to the identity verification object stored in the preset storage location according to the eye pattern pair images collected by the client.
  7. A user identity verification method, comprising:
    collecting a face image corresponding to an identity verification object and eye pattern pair images corresponding to an eye pattern collection step count; and
    sending the face image and the eye pattern pair images to a server, so that the server performs identity verification on the identity verification object.
  8. The user identity verification method according to claim 7, wherein before the collecting of the face image corresponding to the identity verification object and the eye pattern pair images corresponding to the eye pattern collection step count, the method further comprises:
    sending a user identity verification request to the server; and
    receiving a face quality score threshold and an eye pattern collection step count corresponding to the current verification mode, sent by the server;
    wherein the collecting of the face image corresponding to the identity verification object and the eye pattern pair images corresponding to the eye pattern collection step count comprises:
    obtaining the face image corresponding to the identity verification object according to the face quality score threshold and obtaining eye pattern pair images whose number corresponds to the eye pattern collection step count.
  9. The user identity verification method according to claim 8, wherein before the sending of the face image to the server, the method further comprises:
    detecting whether the image quality of the currently obtained face image is greater than or equal to the face quality score threshold;
    wherein the sending of the face image to the server comprises:
    if it is, sending the face image to the server.
  10. The user identity verification method according to claim 7, wherein before the sending of the eye pattern pair images to the server, the method further comprises:
    detecting whether the eye pattern pair images satisfy a preset eye pattern liveness condition;
    wherein the sending of the eye pattern pair images to the server comprises:
    if the condition is satisfied, sending the eye pattern pair images to the server.
  11. A server, comprising:
    a receiving unit, configured to receive a face image corresponding to an identity verification object and eye pattern pair images corresponding to an eye pattern collection step count, both sent by a client;
    a comparison unit, configured to compare the face image with a preset face image and to compare the eye pattern pair images with preset eye pattern templates; and
    a sending unit, configured to send identity verification success information to the client if comparison results of both the face image and the eye pattern pair images satisfy preset conditions.
  12. The server according to claim 11, further comprising an acquisition unit and a determination unit, wherein:
    the acquisition unit is configured to acquire, when a user identity verification request is received, the number of eye pattern templates corresponding to the identity verification object from a preset storage location, wherein the preset storage location stores eye pattern templates respectively corresponding to different identity verification objects;
    the determination unit is configured to determine the current mode as an authentication mode if the number of eye pattern templates is greater than or equal to a preset threshold; and
    the sending unit is further configured to send a face quality score threshold and an eye pattern collection step count corresponding to the authentication mode to the client, so that the client obtains the face image according to the face quality score threshold and obtains eye pattern pair images whose number corresponds to the eye pattern collection step count.
  13. The server according to claim 12, further comprising a saving unit, wherein:
    the determination unit is further configured to determine the current verification mode as an eye pattern enrollment mode if the number of eye pattern templates is less than the preset threshold;
    the sending unit is further configured to send a face quality score threshold and an eye pattern collection step count corresponding to the eye pattern enrollment mode to the client, so that the client obtains the face image according to the face quality score threshold and obtains eye pattern pair images whose number corresponds to the eye pattern collection step count; and
    the saving unit is configured to save the eye pattern pair images to the preset storage location as the eye pattern templates corresponding to the identity verification object.
  14. The server according to claim 12, wherein
    the comparison unit is specifically configured to take the face image and the preset face image as inputs to a preset face algorithm, to obtain a face comparison score corresponding to the identity verification object; and
    to take the eye pattern pair images and the eye pattern templates corresponding to the identity verification object as inputs to a preset eye pattern algorithm, to obtain multiple eye pattern liveness scores and an eye pattern comparison score corresponding to the eye pattern collection step count.
  15. The server according to claim 14, wherein
    the sending unit is specifically configured to send identity verification success information to the client if the face comparison score, the multiple eye pattern liveness scores, and the eye pattern comparison score are all greater than the corresponding preset thresholds.
  16. The server according to claim 14, further comprising:
    an updating unit, configured to update, when it is confirmed that the user identity verification succeeds, the eye pattern templates corresponding to the identity verification object stored in the preset storage location according to the eye pattern pair images collected by the client.
  17. A client, comprising:
    a collection unit, configured to collect a face image corresponding to an identity verification object and eye pattern pair images corresponding to an eye pattern collection step count; and
    a sending unit, configured to send the face image and the eye pattern pair images to a server, so that the server performs identity verification on the identity verification object.
  18. The client according to claim 17, further comprising a receiving unit, wherein:
    the sending unit is further configured to send a user identity verification request to the server;
    the receiving unit is configured to receive a face quality score threshold and an eye pattern collection step count corresponding to the current verification mode, sent by the server; and
    the collection unit is specifically configured to obtain the face image corresponding to the identity verification object according to the face quality score threshold and to obtain eye pattern pair images whose number corresponds to the eye pattern collection step count.
  19. The client according to claim 18, further comprising a detection unit, wherein:
    the detection unit is configured to detect whether the image quality of the currently obtained face image is greater than or equal to the face quality score threshold; and
    the sending unit is specifically configured to send the face image to the server if it is.
  20. The client according to claim 19, wherein
    the detection unit is further configured to detect whether the eye pattern pair images satisfy a preset eye pattern liveness condition; and
    the sending unit is further specifically configured to send the eye pattern pair images to the server if the condition is satisfied.
  21. A user identity verification system, comprising the server according to any one of claims 11 to 16 and the client according to any one of claims 17 to 20.
PCT/CN2017/096987 2016-08-24 2017-08-11 用户核身方法、装置及系统 WO2018036389A1 (zh)

Priority Applications (14)

Application Number Priority Date Filing Date Title
CA3034612A CA3034612C (en) 2016-08-24 2017-08-11 User identity verification method, apparatus and system
PL17842812T PL3506589T3 (pl) 2016-08-24 2017-08-11 Sposób, urządzenie i układ do weryfikacji tożsamości użytkownika
AU2017314341A AU2017314341B2 (en) 2016-08-24 2017-08-11 User identity verification method, apparatus and system
ES17842812T ES2879682T3 (es) 2016-08-24 2017-08-11 Método, aparato y sistema de verificación de identidad de usuario
MYPI2019000894A MY193941A (en) 2016-08-24 2017-08-11 User identity verification method, apparatus and system
SG11201901519XA SG11201901519XA (en) 2016-08-24 2017-08-11 User identity verification method, apparatus and system
KR1020197007983A KR102084900B1 (ko) 2016-08-24 2017-08-11 사용자 신원 검증 방법, 장치 및 시스템
JP2019510864A JP6756037B2 (ja) 2016-08-24 2017-08-11 ユーザ本人確認の方法、装置及びシステム
EP17842812.4A EP3506589B1 (en) 2016-08-24 2017-08-11 User identity verification method, apparatus and system
US16/282,102 US10467490B2 (en) 2016-08-24 2019-02-21 User identity verification method, apparatus and system
PH12019500383A PH12019500383B1 (en) 2016-08-24 2019-02-22 User identity verification method, apparatus and system
ZA2019/01506A ZA201901506B (en) 2016-08-24 2019-03-11 User identity verification method, apparatus and system
US16/587,376 US10997443B2 (en) 2016-08-24 2019-09-30 User identity verification method, apparatus and system
AU2019101579A AU2019101579A4 (en) 2016-08-24 2019-12-13 User identity verification method, apparatus and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610717080.1A CN106899567B (zh) 2016-08-24 2016-08-24 用户核身方法、装置及系统
CN201610717080.1 2016-08-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/282,102 Continuation US10467490B2 (en) 2016-08-24 2019-02-21 User identity verification method, apparatus and system

Publications (1)

Publication Number Publication Date
WO2018036389A1 true WO2018036389A1 (zh) 2018-03-01

Family

ID=59191133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/096987 WO2018036389A1 (zh) 2016-08-24 2017-08-11 用户核身方法、装置及系统

Country Status (15)

Country Link
US (2) US10467490B2 (zh)
EP (1) EP3506589B1 (zh)
JP (1) JP6756037B2 (zh)
KR (1) KR102084900B1 (zh)
CN (1) CN106899567B (zh)
AU (2) AU2017314341B2 (zh)
CA (1) CA3034612C (zh)
ES (1) ES2879682T3 (zh)
MY (1) MY193941A (zh)
PH (1) PH12019500383B1 (zh)
PL (1) PL3506589T3 (zh)
SG (1) SG11201901519XA (zh)
TW (2) TWI752418B (zh)
WO (1) WO2018036389A1 (zh)
ZA (1) ZA201901506B (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022544349A (ja) * 2019-08-14 2022-10-18 グーグル エルエルシー デバイスのネットワーク全体での人物認識可能性を使用するシステムおよび方法
CN116756719A (zh) * 2023-08-16 2023-09-15 北京亚大通讯网络有限责任公司 基于指纹生物识别及uwb协议的身份验证系统

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10614204B2 (en) 2014-08-28 2020-04-07 Facetec, Inc. Facial recognition authentication system including path parameters
US10698995B2 (en) 2014-08-28 2020-06-30 Facetec, Inc. Method to verify identity using a previously collected biometric image/data
CN106899567B (zh) * 2016-08-24 2019-12-13 阿里巴巴集团控股有限公司 用户核身方法、装置及系统
CN107277265A (zh) * 2017-07-13 2017-10-20 广东欧珀移动通信有限公司 解锁控制方法及相关产品
CN107808127B (zh) * 2017-10-11 2020-01-14 Oppo广东移动通信有限公司 人脸识别方法及相关产品
CN108573038A (zh) * 2018-04-04 2018-09-25 北京市商汤科技开发有限公司 图像处理、身份验证方法、装置、电子设备和存储介质
CN109190509B (zh) 2018-08-13 2023-04-25 创新先进技术有限公司 一种身份识别方法、装置和计算机可读存储介质
CN109359972B (zh) * 2018-08-15 2020-10-30 创新先进技术有限公司 核身产品推送及核身方法和系统
CN109583348A (zh) * 2018-11-22 2019-04-05 阿里巴巴集团控股有限公司 一种人脸识别方法、装置、设备及系统
CN109543635A (zh) * 2018-11-29 2019-03-29 北京旷视科技有限公司 活体检测方法、装置、系统、解锁方法、终端及存储介质
WO2020125773A1 (zh) * 2018-12-20 2020-06-25 云丁网络技术(北京)有限公司 一种身份确认方法和系统
CN109871811A (zh) * 2019-02-22 2019-06-11 中控智慧科技股份有限公司 一种基于图像的活体目标检测方法、装置及系统
CN110196924B (zh) * 2019-05-31 2021-08-17 银河水滴科技(宁波)有限公司 特征信息库的构建、目标对象的追踪方法及装置
TWI742447B (zh) * 2019-10-11 2021-10-11 沅聖科技股份有限公司 系統對接方法、電子裝置及存儲介質
US11651371B2 (en) * 2019-11-21 2023-05-16 Rockspoon, Inc Zero-step user recognition and biometric access control
US11528267B2 (en) * 2019-12-06 2022-12-13 Bank Of America Corporation System for automated image authentication and external database verification
EP4246454A3 (en) * 2020-04-09 2023-11-29 Identy Inc. Liveliness detection using a device comprising an illumination source
KR20210149542A (ko) * 2020-06-02 2021-12-09 삼성에스디에스 주식회사 이미지 촬영 및 판독 방법, 이를 위한 장치
CN111881431B (zh) 2020-06-28 2023-08-22 百度在线网络技术(北京)有限公司 人机验证方法、装置、设备及存储介质
CN111898498A (zh) * 2020-07-16 2020-11-06 北京市商汤科技开发有限公司 匹配阈值确定方法、身份验证方法、装置及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140032923A1 (en) * 2012-07-30 2014-01-30 Eka A/S System and device for authenticating a user
US20140118520A1 (en) * 2012-10-29 2014-05-01 Motorola Mobility Llc Seamless authorized access to an electronic device
US20140341441A1 (en) * 2013-05-20 2014-11-20 Motorola Mobility Llc Wearable device user authentication
CN105825102A (zh) * 2015-01-06 2016-08-03 中兴通讯股份有限公司 一种基于眼纹识别的终端解锁方法和装置
CN106899567A (zh) * 2016-08-24 2017-06-27 阿里巴巴集团控股有限公司 用户核身方法、装置及系统

Family Cites Families (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5024012B1 (zh) 1970-12-26 1975-08-12
JPS59190739A (ja) 1983-04-13 1984-10-29 Nec Corp チヤネル指定方式
US6247813B1 (en) 1999-04-09 2001-06-19 Iritech, Inc. Iris identification system and method of identifying a person through iris recognition
JP4037009B2 (ja) * 1999-06-16 2008-01-23 沖電気工業株式会社 アイリス認識装置及び本人認証システム
JP3825222B2 (ja) * 2000-03-24 2006-09-27 松下電器産業株式会社 本人認証装置および本人認証システムならびに電子決済システム
KR100373850B1 (ko) 2000-10-07 2003-02-26 주식회사 큐리텍 홍채를 이용한 신원 확인 시스템 및 방법과 그 방법에대한 컴퓨터 프로그램 소스를 저장한 기록매체
KR100453943B1 (ko) 2001-12-03 2004-10-20 주식회사 세넥스테크놀로지 개인 식별을 위한 홍채 영상의 처리 및 인식방법과 시스템
JP2004118627A (ja) * 2002-09-27 2004-04-15 Toshiba Corp 人物認証装置および人物認証方法
JP2004164483A (ja) 2002-11-15 2004-06-10 Matsushita Electric Ind Co Ltd 目画像認証装置ならびにそれを用いた入退出管理装置および情報処理装置
JP4374904B2 (ja) * 2003-05-21 2009-12-02 株式会社日立製作所 本人認証システム
JP2005141678A (ja) * 2003-11-10 2005-06-02 Sharp Corp 顔画像照合システム及びicカード
JP3945474B2 (ja) 2003-11-28 2007-07-18 松下電器産業株式会社 眼画像入力装置および認証装置ならびに画像処理方法
JP2005202732A (ja) * 2004-01-16 2005-07-28 Toshiba Corp 生体照合装置、生体照合方法および通行制御装置
US7336806B2 (en) 2004-03-22 2008-02-26 Microsoft Corporation Iris-based biometric identification
US7697735B2 (en) * 2004-06-21 2010-04-13 Google Inc. Image based multi-biometric system and method
JP2006107028A (ja) * 2004-10-04 2006-04-20 Glory Ltd 個人認証装置および個人認証方法
US7298873B2 (en) * 2004-11-16 2007-11-20 Imageware Systems, Inc. Multimodal biometric platform
KR100629550B1 (ko) 2004-11-22 2006-09-27 아이리텍 잉크 다중스케일 가변영역분할 홍채인식 방법 및 시스템
CA2600388C (en) * 2005-03-17 2017-08-08 Imageware Systems, Inc. Multimodal biometric analysis
US20080052527A1 (en) * 2006-08-28 2008-02-28 National Biometric Security Project method and system for authenticating and validating identities based on multi-modal biometric templates and special codes in a substantially anonymous process
JP2008117333A (ja) 2006-11-08 2008-05-22 Sony Corp 情報処理装置、情報処理方法、個人識別装置、個人識別装置における辞書データ生成・更新方法および辞書データ生成・更新プログラム
JP5674473B2 (ja) 2007-11-27 2015-02-25 ウェイヴフロント・バイオメトリック・テクノロジーズ・ピーティーワイ・リミテッド 眼を使用した生体認証
JP5024012B2 (ja) * 2007-12-10 2012-09-12 沖電気工業株式会社 扉装置および扉システム
US8317325B2 (en) 2008-10-31 2012-11-27 Cross Match Technologies, Inc. Apparatus and method for two eye imaging for iris identification
WO2010099475A1 (en) * 2009-02-26 2010-09-02 Kynen Llc User authentication system and method
AU2011252761B2 (en) * 2010-05-13 2016-12-15 Iomniscient Pty Ltd Automatic identity enrolment
US8351662B2 (en) * 2010-09-16 2013-01-08 Seiko Epson Corporation System and method for face verification using video sequence
US9135500B2 (en) 2011-02-18 2015-09-15 Google Inc. Facial recognition
EP2710514A4 (en) * 2011-05-18 2015-04-01 Nextgenid Inc REGISTRATION TERMINAL HAVING MULTIPLE BIOMETRIC APPARATUSES INCLUDING BIOMETRIC INSCRIPTION AND VERIFICATION SYSTEMS, FACIAL RECOGNITION AND COMPARISON OF FINGERPRINTS
US9082011B2 (en) 2012-03-28 2015-07-14 Texas State University—San Marcos Person identification using ocular biometrics with liveness detection
US8369595B1 (en) * 2012-08-10 2013-02-05 EyeVerify LLC Texture features for biometric authentication
US8856541B1 (en) 2013-01-10 2014-10-07 Google Inc. Liveness detection
US20140358954A1 (en) * 2013-03-15 2014-12-04 Ideal Innovations Incorporated Biometric Social Network
US9367676B2 (en) * 2013-03-22 2016-06-14 Nok Nok Labs, Inc. System and method for confirming location using supplemental sensor and/or location data
US10270748B2 (en) 2013-03-22 2019-04-23 Nok Nok Labs, Inc. Advanced authentication techniques and applications
US9286528B2 (en) * 2013-04-16 2016-03-15 Imageware Systems, Inc. Multi-modal biometric database searching methods
US20140354405A1 (en) * 2013-05-31 2014-12-04 Secure Planet, Inc. Federated Biometric Identity Verifier
US9553859B2 (en) * 2013-08-08 2017-01-24 Google Technology Holdings LLC Adaptive method for biometrically certified communication
US9189686B2 (en) * 2013-12-23 2015-11-17 King Fahd University Of Petroleum And Minerals Apparatus and method for iris image analysis
JP6417676B2 (ja) * 2014-03-06 2018-11-07 ソニー株式会社 情報処理装置、情報処理方法、アイウェア端末および認証システム
EP3140780B1 (en) 2014-05-09 2020-11-04 Google LLC Systems and methods for discerning eye signals and continuous biometric identification
TWI528213B (zh) * 2014-05-30 2016-04-01 由田新技股份有限公司 手持式身分驗證裝置、身分驗證方法與身分驗證系統
US9563998B2 (en) * 2014-06-11 2017-02-07 Veridium Ip Limited System and method for facilitating user access to vehicles based on biometric information
US20160019420A1 (en) 2014-07-15 2016-01-21 Qualcomm Incorporated Multispectral eye analysis for identity authentication
TWI524215B (zh) * 2014-10-15 2016-03-01 由田新技股份有限公司 基於眼動追蹤的網路身分認證方法及系統
CN105678137A (zh) * 2014-11-19 2016-06-15 中兴通讯股份有限公司 一种身份识别的方法和装置
CN104778396B (zh) * 2015-04-29 2019-01-29 惠州Tcl移动通信有限公司 一种基于环境筛选帧的眼纹识别解锁方法及系统
CN105848306A (zh) * 2016-05-30 2016-08-10 努比亚技术有限公司 一种终端之间的连接建立方法及终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140032923A1 (en) * 2012-07-30 2014-01-30 Eka A/S System and device for authenticating a user
US20140118520A1 (en) * 2012-10-29 2014-05-01 Motorola Mobility Llc Seamless authorized access to an electronic device
US20140341441A1 (en) * 2013-05-20 2014-11-20 Motorola Mobility Llc Wearable device user authentication
CN105825102A (zh) * 2015-01-06 2016-08-03 中兴通讯股份有限公司 一种基于眼纹识别的终端解锁方法和装置
CN106899567A (zh) * 2016-08-24 2017-06-27 阿里巴巴集团控股有限公司 用户核身方法、装置及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3506589A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022544349A (ja) * 2019-08-14 2022-10-18 グーグル エルエルシー デバイスのネットワーク全体での人物認識可能性を使用するシステムおよび方法
CN116756719A (zh) * 2023-08-16 2023-09-15 北京亚大通讯网络有限责任公司 基于指纹生物识别及uwb协议的身份验证系统
CN116756719B (zh) * 2023-08-16 2023-10-24 北京亚大通讯网络有限责任公司 基于指纹生物识别及uwb协议的身份验证系统

Also Published As

Publication number Publication date
AU2017314341A1 (en) 2019-03-14
JP6756037B2 (ja) 2020-09-16
CA3034612A1 (en) 2018-03-01
JP2019525358A (ja) 2019-09-05
PH12019500383A1 (en) 2019-10-28
ES2879682T3 (es) 2021-11-22
CN106899567B (zh) 2019-12-13
EP3506589A1 (en) 2019-07-03
CN106899567A (zh) 2017-06-27
KR102084900B1 (ko) 2020-03-04
US20200026940A1 (en) 2020-01-23
PL3506589T3 (pl) 2021-12-06
EP3506589A4 (en) 2020-04-01
PH12019500383B1 (en) 2019-10-28
CA3034612C (en) 2020-07-14
US20190188509A1 (en) 2019-06-20
TW201807635A (zh) 2018-03-01
SG11201901519XA (en) 2019-03-28
ZA201901506B (en) 2020-08-26
MY193941A (en) 2022-11-02
TWI752418B (zh) 2022-01-11
TW202026984A (zh) 2020-07-16
US10467490B2 (en) 2019-11-05
AU2017314341B2 (en) 2020-05-07
TWI687879B (zh) 2020-03-11
US10997443B2 (en) 2021-05-04
AU2019101579A4 (en) 2020-01-23
EP3506589B1 (en) 2021-06-23
KR20190038923A (ko) 2019-04-09

Similar Documents

Publication Publication Date Title
WO2018036389A1 (zh) 用户核身方法、装置及系统
US11163982B2 (en) Face verifying method and apparatus
US10387714B2 (en) Face verifying method and apparatus
US10198615B2 (en) Fingerprint enrollment method and apparatus
KR20190072563A (ko) 얼굴 라이브니스 검출 방법 및 장치, 그리고 전자 디바이스
US9740932B2 (en) Cross-sensor iris matching
CN111095246B (zh) 用于认证用户的方法和电子设备
US20180165517A1 (en) Method and apparatus to recognize user
JP2013506192A (ja) 短縮虹彩コードを発生させるための方法、システム、およびコンピュータ読み取り可能プログラムを含むコンピュータ読み取り可能記憶媒体(短縮虹彩コードを発生させ使用するためのシステムおよび方法)
EP3038317A1 (en) User authentication for resource transfer based on mapping of physiological characteristics
US11681883B2 (en) Systems and methods of identification verification using near-field communication and optical authentication
US20180211089A1 (en) Authentication method and authentication apparatus using synthesized code for iris
WO2015103970A1 (en) Method, apparatus and system for authenticating user
TWM591664U (zh) 用以進行身分註冊程序的電子裝置
JP6349062B2 (ja) 認証システム、クライアント端末、認証サーバー、端末プログラム及びサーバープログラム
CN110321758B (zh) 生物特征识别的风险管控方法及装置
CN116824314A (zh) 信息获取方法及系统
CN114722372A (zh) 基于人脸识别的登录验证方法及装置、处理器和电子设备
CN116451195A (zh) 一种活体识别方法和系统
CN116797228A (zh) 支付方法、装置、设备以及存储介质
EP3651063A1 (fr) Procédé de reconnaissance biométrique

Legal Events

Code  Title / Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17842812; Country of ref document: EP; Kind code of ref document: A1)
ENP  Entry into the national phase (Ref document number: 3034612; Country of ref document: CA)
ENP  Entry into the national phase (Ref document number: 2019510864; Country of ref document: JP; Kind code of ref document: A)
NENP  Non-entry into the national phase (Ref country code: DE)
ENP  Entry into the national phase (Ref document number: 2017314341; Country of ref document: AU; Date of ref document: 20170811; Kind code of ref document: A)
ENP  Entry into the national phase (Ref document number: 20197007983; Country of ref document: KR; Kind code of ref document: A)
ENP  Entry into the national phase (Ref document number: 2017842812; Country of ref document: EP; Effective date: 20190325)