WO2019001253A1 - Iris living body detection method and related products - Google Patents

Iris living body detection method and related products

Info

Publication number
WO2019001253A1
WO2019001253A1 · PCT/CN2018/090646 · CN2018090646W
Authority
WO
WIPO (PCT)
Prior art keywords
iris
image
living body
body detection
quality evaluation
Prior art date
Application number
PCT/CN2018/090646
Other languages
English (en)
French (fr)
Inventor
周意保
张学勇
周海涛
唐城
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to EP18824914.8A (EP3627382A4)
Publication of WO2019001253A1
Priority to US16/725,539 (US11200437B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/18: Eye characteristics, e.g. of the iris (under G06V 40/10, Human or animal bodies; body parts)
    • G06V 40/19: Sensors therefor
    • G06V 40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships (under G06V 40/16, Human faces)
    • G06V 40/45: Detection of the body part being alive (under G06V 40/40, Spoof detection, e.g. liveness detection)
    • G06V 10/143: Sensing or illuminating at different wavelengths (under G06V 10/14, Optical characteristics of the acquisition device or illumination arrangements)
    • G06V 10/30: Noise filtering (under G06V 10/20, Image preprocessing)
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Definitions

  • the present application relates to the field of electronic device technologies, and in particular, to an iris living body detecting method and related products.
  • At present, iris recognition is increasingly favored by electronic device manufacturers, and the security of iris recognition is one of their key concerns. For security, living body detection is usually performed on the iris before iris recognition, but the accuracy of current iris living body detection is not high.
  • the embodiment of the present application provides an iris living body detecting method and related products, so as to improve the accuracy of the iris living body detection.
  • In a first aspect, an embodiment of the present application provides an iris living body detection method, including:
  • acquiring an iris image;
  • dividing the iris image into K area images, where K is an integer greater than 1;
  • performing living body detection on the K area images by using P iris living body detection algorithms to obtain K detection results, where P is an integer greater than 1 and not greater than K;
  • determining, according to the K detection results, whether the iris image is from a living iris.
  • In a second aspect, an embodiment of the present application provides an iris living body detection apparatus, including:
  • an obtaining unit configured to acquire an iris image;
  • a dividing unit configured to divide the iris image into K area images, where K is an integer greater than 1;
  • a detecting unit configured to perform a living body detection on the K area images by using a P-type iris living body detection algorithm, to obtain K detection results, wherein the P is an integer greater than 1 and not greater than the K;
  • a determining unit configured to determine, according to the K detection results, whether the iris image is from a living iris.
  • In a third aspect, an embodiment of the present application provides an electronic device, including an application processor (AP) and a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, the programs including instructions for performing some or all of the steps described in the first aspect of the embodiments of the present application.
  • In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.
  • In a fifth aspect, an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.
  • the computer program product can be a software installation package.
  • In the embodiments of the present application, an iris image is acquired and divided into K area images, where K is an integer greater than 1; living body detection is performed on the K area images by using P iris living body detection algorithms to obtain K detection results, where P is an integer greater than 1 and not greater than K; and whether the iris image is from a living iris is determined according to the K detection results. The iris image can thus be divided into multiple regions, an iris living body detection algorithm is selected for each region to perform living body detection on it, multiple detection results are obtained, and whether the iris image is from a living iris is determined from these results. This reduces the false detection rate caused by a single iris living body detection algorithm and improves the accuracy of iris living body detection.
  • FIG. 1A is a schematic structural diagram of an example smart phone provided by an embodiment of the present application.
  • FIG. 1B is a schematic flow chart of an iris living body detecting method disclosed in an embodiment of the present application.
  • FIG. 2 is a schematic flow chart of another iris living body detecting method disclosed in an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 4A is a schematic structural view of an iris living body detecting device according to an embodiment of the present application.
  • FIG. 4B is a schematic structural diagram of a detecting unit of the iris living body detecting device described in FIG. 4A according to an embodiment of the present application;
  • FIG. 4C is a schematic structural diagram of a determining unit of the iris living body detecting device described in FIG. 4A according to an embodiment of the present application;
  • FIG. 4D is a schematic structural diagram of an acquiring unit of the iris living body detecting device described in FIG. 4A according to an embodiment of the present application;
  • FIG. 4E is another schematic structural diagram of the iris living body detecting device described in FIG. 4A provided by the embodiment of the present application;
  • FIG. 4F is another schematic structural view of the iris living body detecting device described in FIG. 4A provided by the embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application.
  • References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive with other embodiments. Those skilled in the art will explicitly and implicitly understand that the embodiments described herein can be combined with other embodiments.
  • the electronic device involved in the embodiments of the present application may include various handheld devices having wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to the wireless modem, and various forms of user devices (User Equipment, UE), mobile station (MS), terminal device, and the like.
  • the devices mentioned above are collectively referred to as electronic devices.
  • As shown in FIG. 1A, an example smartphone 100 is provided. The iris recognition device of the smartphone 100 may include an infrared fill light 21 and an infrared camera 22. During operation of the iris recognition device, light from the infrared fill light 21 strikes the iris and is reflected back to the infrared camera 22, and the iris recognition device collects the iris image. In addition, the camera 23 may be a front camera.
  • Referring to FIG. 1B, FIG. 1B is a schematic flowchart of an embodiment of an iris living body detection method provided by an embodiment of the present application, which can be applied to the smartphone described in FIG. 1A.
  • the iris living body detecting method described in this embodiment includes the following steps:
  • The iris image in the embodiments of the present application may be an image of only the iris region, or an image that contains the iris region (for example, an image of a human eye). For example, when a user uses the electronic device, the iris image can be acquired by the iris recognition device.
  • Optionally, the foregoing step 101 may include the following steps:
  • 11. acquiring a test image;
  • 12. extracting the iris image from the test image.
  • Since the test image contains not only the iris but also other regions, image segmentation can be performed on the test image to segment the iris image from it.
  • The iris image can be divided into K area images, where each area image exists independently, that is, there is no overlapping area between the K area images, and K is an integer greater than 1. Optionally, the iris image may be divided into K area images according to a preset grid, or divided into K area images of equal area. The preset grid may be set by the user or defaulted by the system, for example, taken from a pre-stored grid template.
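As an illustration of the equal-area division just described, the following sketch splits an iris image into a rows x cols grid of non-overlapping tiles, so K = rows * cols. Python and NumPy, the grid shape, and the function name are assumptions for illustration only; the application does not prescribe an implementation.

```python
import numpy as np

def divide_into_regions(iris_image: np.ndarray, rows: int = 2, cols: int = 2):
    """Split an image into rows*cols non-overlapping area images (K = rows*cols).

    Minimal sketch of the equal-area grid division; the 2x2 default is illustrative.
    """
    h, w = iris_image.shape[:2]
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    regions = []
    for i in range(rows):
        for j in range(cols):
            regions.append(iris_image[row_edges[i]:row_edges[i + 1],
                                      col_edges[j]:col_edges[j + 1]])
    return regions  # K area images with no overlapping area


if __name__ == "__main__":
    dummy = np.zeros((128, 128), dtype=np.uint8)   # stand-in for an iris image
    print(len(divide_into_regions(dummy)))         # prints 4, i.e. K = 4
```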
  • Optionally, between the foregoing step 101 and step 102, the following step may be further included:
  • performing image enhancement processing on the iris image.
  • The image enhancement processing may include, but is not limited to, image denoising (for example, wavelet-transform-based denoising), image restoration (for example, Wiener filtering), and dark-vision enhancement algorithms (for example, histogram equalization and grayscale stretching). After the image enhancement processing, the quality of the iris image can be improved to some extent. Further, in performing step 102, the enhanced iris image may be divided into the K area images.
  • Optionally, between the foregoing step 101 and step 102, the following steps may be further included:
  • A1. performing image quality evaluation on the iris image to obtain an image quality evaluation value;
  • A2. performing image enhancement processing on the iris image when the image quality evaluation value is lower than a preset quality threshold.
  • The preset quality threshold may be set by the user or defaulted by the system. Image quality evaluation may first be performed on the iris image to obtain an image quality evaluation value, and whether the quality of the iris image is good or poor is judged from that value. When the image quality evaluation value is greater than or equal to the preset quality threshold, the iris image quality is considered good; when it is less than the preset quality threshold, the iris image quality may be considered poor, and the iris image may then be subjected to image enhancement processing. Further, in performing step 102, the enhanced iris image may be divided into the K area images.
  • In step A1, at least one image quality evaluation index may be used to evaluate the iris image, thereby obtaining the image quality evaluation value. Multiple image quality evaluation indexes may be used, each corresponding to a weight; each index then yields an evaluation result for the iris image, and a weighted operation over these results gives the final image quality evaluation value. Image quality evaluation indexes may include, but are not limited to, mean, standard deviation, entropy, sharpness, signal-to-noise ratio, and so on.
  • It should be noted that evaluating image quality with a single index has limitations, so multiple indexes may be used. Of course, more indexes are not always better: the more indexes used, the higher the computational complexity of the evaluation, and the result is not necessarily better. Where higher evaluation accuracy is required, 2 to 10 image quality evaluation indexes may be used. The number of indexes and which indexes to select depend on the specific implementation, and the indexes should also be chosen with the scene in mind; the indexes selected for a dark environment and for a bright environment may differ.
  • Optionally, where high evaluation accuracy is not required, a single image quality evaluation index may be used. For example, when entropy is used to evaluate the image, a larger entropy indicates better image quality, and a smaller entropy indicates worse image quality.
  • Optionally, where higher evaluation accuracy is required, multiple image quality evaluation indexes may be used. A weight is set for each index, each index yields an image quality evaluation value, and the final value is obtained from these values and their corresponding weights. For example, suppose three indexes are used: index A with weight a1, index B with weight a2, and index C with weight a3. When A, B, and C are used to evaluate a given image, the evaluation value corresponding to A is b1, that corresponding to B is b2, and that corresponding to C is b3; the final image quality evaluation value is then a1*b1 + a2*b2 + a3*b3. In general, a larger image quality evaluation value indicates better image quality.
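The weighted combination a1*b1 + a2*b2 + a3*b3 can be sketched as follows. The entropy and gradient-based sharpness measures, their normalization, and the equal weights are common stand-ins and are only assumptions, since the application leaves the choice of indexes open.

```python
import numpy as np

def entropy_index(gray: np.ndarray) -> float:
    """Shannon entropy of an 8-bit grayscale image, normalized to [0, 1]."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / 8.0)  # 8 bits is the maximum entropy

def sharpness_index(gray: np.ndarray) -> float:
    """Mean gradient magnitude as a crude sharpness measure, squashed to [0, 1]."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return float(np.tanh(np.hypot(gx, gy).mean() / 32.0))

def image_quality_score(gray: np.ndarray, weights=(0.5, 0.5)) -> float:
    """Weighted image quality evaluation value: sum of weight_i * index_i."""
    values = (entropy_index(gray), sharpness_index(gray))
    return float(sum(a * b for a, b in zip(weights, values)))

# A larger score is taken to indicate better image quality; enhancement could be
# triggered when the score falls below a preset quality threshold, e.g. 0.4 (assumed).
```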
  • The above P iris living body detection algorithms may be any living body detection algorithms. Of course, since no iris living body detection algorithm achieves error-free detection, each algorithm is correct only with a certain probability, that is, each living body detection algorithm corresponds to a credibility. In a specific implementation, each of the P iris living body detection algorithms performs living body detection on at least one of the K area images. When P = K, each algorithm performs living body detection on exactly one of the K area images; when P is less than K, some algorithms perform living body detection on multiple area images. The value of P may be set by the user or defaulted by the system, and may be determined according to the actual situation.
  • Optionally, in step 103, performing living body detection on the K area images by using the P iris living body detection algorithms to obtain the K detection results may include the following steps:
  • 31. determining an area characteristic of each of the K area images to obtain K area characteristics;
  • 32. determining, according to a preset mapping relationship between area characteristics and iris living body detection algorithms, the living body detection algorithms corresponding to the K area characteristics to obtain the P iris living body detection algorithms;
  • 33. performing living body detection on the K area images according to the P iris living body detection algorithms to obtain the K detection results.
  • The area characteristic may be at least one of the following: the number of feature points, an image quality evaluation value, sharpness, entropy, average brightness, and so on. The mapping relationship between area characteristics and iris living body detection algorithms can be obtained experimentally. For example, when the area characteristic is the number of feature points, different iris living body detection algorithms can be selected according to that number: 0 to 5 feature points correspond to iris living body detection algorithm A, 6 to 10 feature points correspond to algorithm B, 11 to 15 feature points correspond to algorithm C, and so on.
  • The area characteristic of each of the K area images can thus be determined, yielding K area characteristics, and the iris living body detection algorithms corresponding to them can be determined according to the preset mapping relationship, yielding the P algorithms; of course, several of the K area characteristics may correspond to the same algorithm. The K area images can then be subjected to living body detection according to the P algorithms, yielding K detection results, each corresponding to one area image. In this way, living body detection can be performed on different regions in a targeted manner according to their characteristics, which reduces the false detection rate to some extent.
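A minimal sketch of the bucketed mapping described above (feature-point count to detection algorithm) might look like the following; the bucket boundaries follow the example in the text, while the three placeholder algorithms and their return values are purely hypothetical.

```python
from typing import Callable, List, Tuple
import numpy as np

# Placeholder liveness detectors; each returns 1 (living) or 0 (non-living).
def algorithm_a(region: np.ndarray) -> int: return 1
def algorithm_b(region: np.ndarray) -> int: return 1
def algorithm_c(region: np.ndarray) -> int: return 0

# Preset mapping: (upper bound on feature-point count, detector), checked in order.
FEATURE_POINT_BUCKETS: List[Tuple[int, Callable[[np.ndarray], int]]] = [
    (5, algorithm_a),    # 0-5 feature points  -> algorithm A
    (10, algorithm_b),   # 6-10 feature points -> algorithm B
    (15, algorithm_c),   # 11-15 feature points -> algorithm C
]

def select_algorithm(feature_point_count: int) -> Callable[[np.ndarray], int]:
    for upper, algo in FEATURE_POINT_BUCKETS:
        if feature_point_count <= upper:
            return algo
    return FEATURE_POINT_BUCKETS[-1][1]  # fall back to the last bucket

def detect_regions(regions: List[np.ndarray], feature_counts: List[int]) -> List[int]:
    """One detection result per area image, using the algorithm its characteristic maps to."""
    return [select_algorithm(n)(r) for r, n in zip(regions, feature_counts)]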
  • The iris living body detection algorithm may be at least one of the following: a support vector machine (SVM)-based iris living body detection algorithm, a neural-network-based iris living body detection algorithm, a genetic-algorithm-based iris living body detection algorithm, and so on.
  • Some of the K detection results may indicate that the iris image is from a living iris, and of course some may indicate that it is from a non-living iris. When most of the K detection results indicate that the iris image is from a living iris, it can be confirmed that the iris image is from a living iris. The non-living iris may be one of the following: a 3D-printed iris, an iris in a photograph, or the iris of a person without vital signs, which is not limited herein.
  • Optionally, in step 104, determining whether the iris image is from a living iris according to the K detection results may include the following steps:
  • 41. obtaining the credibilities corresponding to the P iris living body detection algorithms to obtain P credibilities;
  • 42. performing a weighted operation according to the K detection results and the P credibilities to obtain a target detection result;
  • 43. confirming that the iris image is from a living iris when the target detection result is greater than a preset threshold.
  • The preset threshold may be set by the user or defaulted by the system. Each of the P iris living body detection algorithms corresponds to a credibility, that is, the probability that it performs iris living body detection correctly. The credibility can be obtained empirically, for example, by testing 1000 iris images and taking the ratio of the number of correct judgments to the total number of judgments (1000) as the credibility. Each detection result corresponds to a credibility, so a weighted operation can be performed over the K detection results and their credibilities to obtain the target detection result. Specifically, a detection result of "living" may be defined as 1 and a detection result of "non-living" as 0. For example, suppose there are 5 detection results A, B, C, D, and E, where A and B correspond to iris living body detection algorithm a with credibility k1, C and D correspond to algorithm b with credibility k2, and E corresponds to algorithm c with credibility k3. If A is living, B is non-living, C and D are living, and E is non-living, the target detection result is k1 + 2*k2. When the target detection result is greater than the preset threshold, the iris image can be considered to be from a living iris; when it is less than or equal to the preset threshold, the iris image can be considered to be from a non-living iris.
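The weighted fusion of the K detection results can be sketched as below; the numerical credibility values and the 1.5 threshold are illustrative numbers chosen to reproduce the k1 + 2*k2 example, not values given in the application.

```python
def fuse_detections(results, credibilities, threshold):
    """Weighted decision over per-region results.

    results: list of 1 (living) / 0 (non-living), one per area image.
    credibilities: credibility of the algorithm that produced each result.
    Returns True when the weighted sum exceeds the preset threshold.
    """
    target = sum(r * c for r, c in zip(results, credibilities))
    return target > threshold

# Worked example from the text: A and B use algorithm a (credibility k1),
# C and D use algorithm b (k2), E uses algorithm c (k3).
k1, k2, k3 = 0.9, 0.8, 0.7            # assumed credibilities
results = [1, 0, 1, 1, 0]             # A living, B non-living, C/D living, E non-living
credibilities = [k1, k1, k2, k2, k3]
# Weighted sum = k1 + 2*k2 = 2.5; with an assumed threshold of 1.5 the iris is judged living.
print(fuse_detections(results, credibilities, threshold=1.5))  # True
```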
  • It can be seen that, in the embodiments of the present application, an iris image is acquired and divided into K area images, where K is an integer greater than 1; living body detection is performed on the K area images by using P iris living body detection algorithms to obtain K detection results, where P is an integer greater than 1 and not greater than K; and whether the iris image is from a living iris is determined according to the K detection results. The iris image can thus be divided into multiple regions, an iris living body detection algorithm is selected for each region, multiple detection results are obtained, and whether the iris image is from a living iris is determined from these results, which reduces the false detection rate caused by a single iris living body detection algorithm and improves the accuracy of iris living body detection.
  • FIG. 2 is a schematic flow chart of an embodiment of a method for detecting an iris living body according to an embodiment of the present application.
  • the iris living body detecting method described in this embodiment includes the following steps:
  • A camera of the electronic device can be used to capture a face image, and the face image is used as the test image. Of course, the test image may also be a human eye image.
  • In step 202, extracting the iris image from the test image may include the following steps:
  • 221. performing human eye recognition on the test image to obtain a human eye image;
  • 222. performing contour extraction on the human eye image to obtain a contour image;
  • 223. determining an iris contour from the contour image according to the structure of the human eye;
  • 224. extracting the iris image from the human eye image according to the iris contour.
  • The human eye recognition may use a classifier, which may be at least one of the following: a support vector machine (SVM), an Adaboost classifier, a Bayesian classifier, and so on, which is not limited herein. For example, an Adaboost classifier can be used to perform recognition on the test image to obtain the human eye image. Contour extraction can then be performed on the human eye image to obtain a contour image; the contour extraction method may be at least one of the following: the Hough transform, the Haar operator, the Canny operator, and so on, which is not limited herein. The structure of the human eye determines the position and shape of the iris contour, so the iris contour can be determined from the contour image according to the human eye structure; further, the image of the corresponding region, that is, the iris image, can be selected from the human eye image according to the iris contour.
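One way to realize the eye-recognition, contour-extraction, and iris-cropping chain above is sketched below with OpenCV. The Haar eye cascade, the Canny thresholds, and the Hough circle parameters are assumptions chosen only to illustrate the flow; they are not the classifier or operators the application requires.

```python
import cv2
import numpy as np

def extract_iris(test_image_bgr: np.ndarray):
    """Rough sketch: eye detection -> contour image -> circular iris contour -> crop."""
    gray = cv2.cvtColor(test_image_bgr, cv2.COLOR_BGR2GRAY)

    # 1. Human eye recognition with a pretrained Haar cascade (one possible classifier).
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        return None
    x, y, w, h = eyes[0]
    eye = gray[y:y + h, x:x + w]

    # 2. Contour extraction (Canny is one of the operators mentioned in the text).
    edges = cv2.Canny(eye, 50, 150)

    # 3. Determine the iris contour as a circle consistent with the eye structure
    #    (Hough circle transform applied directly to the edge map here).
    circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1.5, minDist=w,
                               param1=150, param2=20,
                               minRadius=w // 8, maxRadius=w // 2)
    if circles is None:
        return None
    cx, cy, r = np.round(circles[0, 0]).astype(int)

    # 4. Extract the iris image as the region bounded by that circle.
    y0, y1 = max(cy - r, 0), min(cy + r, eye.shape[0])
    x0, x1 = max(cx - r, 0), min(cx + r, eye.shape[1])
    return eye[y0:y1, x0:x1]
```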
  • It can be seen that, in the embodiments of the present application, a test image is acquired, an iris image is extracted from it, and the iris image is divided into K area images, where K is an integer greater than 1; living body detection is performed on the K area images by using P iris living body detection algorithms to obtain K detection results, where P is an integer greater than 1 and not greater than K; and whether the iris image is from a living iris is determined according to the K detection results. The iris image can thus be divided into multiple regions, an iris living body detection algorithm is selected for each region, multiple detection results are obtained, and whether the iris image is from a living iris is determined from these results, which reduces the false detection rate caused by a single iris living body detection algorithm and improves the accuracy of iris living body detection.
  • Referring to FIG. 3, FIG. 3 shows an electronic device according to an embodiment of the present application, including an application processor (AP) and a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, the programs including instructions for performing the following steps: acquiring an iris image; dividing the iris image into K area images, where K is an integer greater than 1; performing living body detection on the K area images by using P iris living body detection algorithms to obtain K detection results, where P is an integer greater than 1 and not greater than K; and determining, according to the K detection results, whether the iris image is from a living iris.
  • In one possible example, in performing living body detection on the K area images by using the P iris living body detection algorithms to obtain the K detection results, the programs include instructions for: determining an area characteristic of each of the K area images to obtain K area characteristics; determining, according to a preset mapping relationship between area characteristics and iris living body detection algorithms, the iris living body detection algorithms corresponding to the K area characteristics to obtain the P iris living body detection algorithms; and performing living body detection on the K area images according to the P algorithms to obtain the K detection results.
  • In one possible example, the area characteristic is at least one of the following: number of feature points, image quality evaluation value, sharpness, entropy, and average brightness.
  • In one possible example, in determining whether the iris image is from a living iris according to the K detection results, the programs include instructions for: obtaining the credibilities corresponding to the P iris living body detection algorithms to obtain P credibilities; performing a weighted operation according to the K detection results and the P credibilities to obtain a target detection result; and confirming that the iris image is from a living iris when the target detection result is greater than a preset threshold.
  • In one possible example, in acquiring the iris image, the programs include instructions for: acquiring a test image; and extracting the iris image from the test image.
  • In one possible example, in extracting the iris image from the test image, the programs include instructions for: performing human eye recognition on the test image to obtain a human eye image; performing contour extraction on the human eye image to obtain a contour image; determining an iris contour from the contour image according to the human eye structure; and extracting the iris image from the human eye image according to the iris contour.
  • In one possible example, the programs include instructions for performing image enhancement processing on the iris image, and, in dividing the iris image into K area images, instructions for dividing the iris image after the image enhancement processing into the K area images.
  • In one possible example, the programs further include instructions for: performing image quality evaluation on the iris image to obtain an image quality evaluation value; and performing the step of image enhancement processing on the iris image when the image quality evaluation value is lower than a preset quality threshold.
  • In one possible example, in performing image quality evaluation on the iris image to obtain the image quality evaluation value, the programs include instructions for performing image quality evaluation on the iris image by using at least one image quality evaluation index to obtain the image quality evaluation value.
  • FIG. 4A is a schematic structural diagram of an iris living body detecting apparatus according to the embodiment.
  • the iris living body detecting device is applied to an electronic device, and the iris living body detecting device includes an obtaining unit 401, a dividing unit 402, a detecting unit 403, and a determining unit 404, wherein
  • An obtaining unit 401 configured to acquire an iris image
  • a dividing unit 402 configured to divide the iris image into K area images, where K is an integer greater than 1;
  • the detecting unit 403 is configured to perform a living body detection on the K area images by using a P-type iris living body detection algorithm to obtain K detection results, where P is an integer greater than 1 and not greater than the K;
  • the determining unit 404 is configured to determine, according to the K detection results, whether the iris image is from a living iris.
  • Optionally, as shown in FIG. 4B, FIG. 4B shows a specific detailed structure of the detecting unit 403 of the iris living body detection apparatus described in FIG. 4A. The detecting unit 403 includes a first determining module 4031 and a detecting module 4032, as follows:
  • the first determining module 4031 is configured to determine an area characteristic of each of the K area images to obtain K area characteristics;
  • the first determining module 4031 is further configured to determine, according to a preset mapping relationship between area characteristics and iris living body detection algorithms, the iris living body detection algorithms corresponding to the K area characteristics to obtain the P iris living body detection algorithms;
  • the detecting module 4032 is configured to perform living body detection on the K area images according to the P iris living body detection algorithms to obtain the K detection results.
  • the regional characteristic is at least one of the following: a number of feature points, an image quality evaluation value, a sharpness, an entropy, and an average brightness.
  • Optionally, as shown in FIG. 4C, FIG. 4C shows a specific detailed structure of the determining unit 404 of the iris living body detection apparatus described in FIG. 4A. The determining unit 404 may include a first obtaining module 4041, a calculating module 4042, and a second determining module 4043, as follows:
  • the first obtaining module 4041 is configured to obtain the credibility corresponding to the P-type iris living body detection algorithm, and obtain P credibility degrees;
  • the calculating module 4042 is configured to perform a weighting operation according to the K detection results and the P reliability levels to obtain a target detection result;
  • the second determining module 4043 is configured to confirm that the iris image is from the living body iris when the target detection result is greater than a preset threshold.
  • FIG. 4D is a specific detail structure of the acquiring unit 401 of the iris living body detecting device described in FIG. 4A, and the acquiring unit 401 may include: a second acquiring module 4011 and an extracting module 4012, as follows:
  • a second obtaining module 4011 configured to acquire a test image
  • An extraction module 4012 is configured to extract the iris image from the test image.
  • In terms of extracting the iris image from the test image, the extracting module 4012 is specifically configured to: perform human eye recognition on the test image to obtain a human eye image; perform contour extraction on the human eye image to obtain a contour image; determine an iris contour from the contour image according to the human eye structure; and extract the iris image from the human eye image according to the iris contour.
  • FIG. 4E is a modified structure of the iris living body detecting device described in FIG. 4A, and the device may further include: a processing unit 405, as follows:
  • the processing unit 405 is configured to perform image enhancement processing on the iris image, and divide the iris image after the image enhancement processing into K area images by the dividing unit.
  • FIG. 4F is a modified structure of the iris living body detecting device described in FIG. 4A, and the device may further include: an evaluating unit 406, as follows:
  • the evaluation unit 406 is configured to perform image quality evaluation on the iris image to obtain an image quality evaluation value, and when the image quality evaluation value is lower than a preset quality threshold, performing, by the processing unit 405, the iris image The steps of image enhancement processing.
  • the evaluation unit 406 is specifically configured to:
  • the image quality of the iris image is evaluated by using at least one image quality evaluation index to obtain an image quality evaluation value.
  • The embodiment of the present application further provides another electronic device. As shown in FIG. 5, for convenience of description, only the parts related to the embodiment of the present application are shown; for specific technical details not disclosed here, refer to the method part of the embodiments of the present application. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point of sale) terminal, an in-vehicle computer, and the like; the following description takes a mobile phone as an example.
  • FIG. 5 is a block diagram showing a partial structure of a mobile phone related to an electronic device provided by an embodiment of the present application.
  • Referring to FIG. 5, the mobile phone includes a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor AP 980, a power supply 990, and other components. Those skilled in the art will understand that the structure shown in FIG. 5 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine some components, or arrange components differently.
  • the input unit 930 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset.
  • the input unit 930 may include a touch display screen 933, an iris recognition device 931, and other input devices 932.
  • the iris recognition device 931 is coupled to the touch display screen 933, and the iris recognition area of the iris recognition device 931 is located in the first area of the touch display screen 933.
  • the input unit 930 can also include other input devices 932.
  • other input devices 932 may include, but are not limited to, one or more of physical buttons, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the iris recognition device 931 is configured to acquire an iris image
  • The AP 980 is configured to perform the following steps: dividing the iris image into K area images, where K is an integer greater than 1; performing living body detection on the K area images by using P iris living body detection algorithms to obtain K detection results, where P is an integer greater than 1 and not greater than K; and determining, according to the K detection results, whether the iris image is from a living iris.
  • The AP 980 is the control center of the mobile phone. It connects the various parts of the entire handset through various interfaces and lines, and performs the various functions of the handset and processes data by running or executing software programs and/or modules stored in the memory 920 and invoking data stored in the memory 920, thereby monitoring the mobile phone as a whole. Optionally, the AP 980 may include one or more processing units. Preferably, the AP 980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the AP 980.
  • memory 920 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the RF circuit 910 can be used for receiving and transmitting information.
  • RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • RF circuitry 910 can also communicate with the network and other devices via wireless communication.
  • the above wireless communication may use any communication standard or protocol, including but not limited to global system of mobile communication (GSM), general packet radio service (GPRS), code division multiple access (code division) Multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), and the like.
  • the handset may also include at least one type of sensor 950, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the touch display screen according to the brightness of the ambient light, and the proximity sensor can turn off the touch display when the mobile phone moves to the ear. And / or backlight.
  • As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (usually three axes), and can detect the magnitude and direction of gravity when stationary. It can be used in applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait orientation, related games, and magnetometer attitude calibration) and in vibration-recognition-related functions (such as a pedometer or tap detection). The mobile phone may further be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here.
  • An audio circuit 960, a speaker 961, and a microphone 962 can provide an audio interface between the user and the handset.
  • The audio circuit 960 can convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data. The audio data is then processed by the AP 980 and sent via the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
  • WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although FIG. 5 shows the WiFi module 970, it can be understood that it is not an essential part of the mobile phone and can be omitted as needed without changing the essence of the invention.
  • The mobile phone also includes a power supply 990 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically connected to the AP 980 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • In the embodiments shown in FIG. 1B and FIG. 2, the method flow of each step can be implemented based on the structure of the mobile phone.
  • In the embodiments shown in FIG. 3 and FIG. 4A to FIG. 4F, the function of each unit can be implemented based on the structure of the mobile phone.
  • The embodiment of the present application further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any iris living body detection method described in the foregoing method embodiments.
  • The embodiment of the present application further provides a computer program product, including a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps of any iris living body detection method described in the foregoing method embodiments.
  • In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software program module.
  • If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

Abstract

An iris living body detection method and related products. The method includes: acquiring an iris image (101); dividing the iris image into K area images, where K is an integer greater than 1 (102); performing living body detection on the K area images by using P iris living body detection algorithms to obtain K detection results, where P is an integer greater than 1 and not greater than K (103); and determining, according to the K detection results, whether the iris image is from a living iris (104). The iris image is divided into multiple regions, and an iris living body detection algorithm is selected for each region to perform living body detection on it, so that multiple detection results are obtained and whether the iris image is from a living iris is determined from these results, which reduces the false detection rate caused by a single iris living body detection algorithm and improves the accuracy of iris living body detection.

Description

虹膜活体检测方法及相关产品
本申请要求2017年6月30日递交的发明名称为“虹膜活体检测方法及相关产品”的申请号201710523091.0的在先申请优先权,上述在先申请的内容以引入的方式并入本文本中。
技术领域
本申请涉及电子设备技术领域,具体涉及一种虹膜活体检测方法及相关产品。
背景技术
随着电子设备(手机、平板电脑等)的大量普及应用,电子设备能够支持的应用越来越多,功能越来越强大,电子设备向着多样化、个性化的方向发展,成为用户生活中不可缺少的电子用品。
目前来看,虹膜识别越来越受到电子设备生产厂商的青睐,虹膜识别的安全性也是其关注的重要问题之一。出于安全性考虑,通常情况下,会在虹膜识别之前,先对虹膜进行活体检测,但是目前的虹膜活体检测准确性并不高。
发明内容
本申请实施例提供了一种虹膜活体检测方法及相关产品,以期提高虹膜活体检测的准确性。
第一方面,本申请实施例提供一种虹膜活体检测方法,包括:
获取虹膜图像;
将所述虹膜图像划分为K个区域图像,所述K为大于1的整数;
采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到K个检测结果,所述P为大于1且不大于所述K的整数;
根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜。
第二方面,本申请实施例提供了一种虹膜活体检测装置,包括:
获取单元,用于获取虹膜图像;
划分单元,用于将所述虹膜图像划分为K个区域图像,所述K为大于1 的整数;
检测单元,用于采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到K个检测结果,所述P为大于1且不大于所述K的整数;
判断单元,用于根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜。
第三方面,本申请实施例提供了一种电子设备,包括:应用处理器AP和存储器;以及一个或多个程序,所述一个或多个程序被存储在所述存储器中,并且被配置成由所述AP执行,所述程序包括用于执行如本申请实施例第一方面中所描述的部分或全部步骤的指令。
第四方面,本申请实施例提供了一种计算机可读存储介质,其中,所述计算机可读存储介质存储用于电子数据交换的计算机程序,其中,所述计算机程序使得计算机执行如本申请实施例第一方面中所描述的部分或全部步骤。
第五方面,本申请实施例提供了一种计算机程序产品,其中,所述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,所述计算机程序可操作来使计算机执行如本申请实施例第一方面中所描述的部分或全部步骤。该计算机程序产品可以为一个软件安装包。
实施本申请实施例,具有如下有益效果:
可以看出,本申请实施例中,获取虹膜图像,将虹膜图像划分为K个区域图像,K为大于1的整数,采用P种虹膜活体检测算法对K个区域图像进行活体检测,得到K个检测结果,P为大于1且不大于K的整数,根据K个检测结果判断虹膜图像是否来自活体虹膜,从而,可将虹膜图像划分为多个区域,并对每一区域选择一种虹膜活体检测算法对其进行活体检测,进而可得到多个检测结果,并依据这些检测结果确定虹膜图像是否来自活体虹膜,可降低单种虹膜活体检测算法带来的误检测率,提高了虹膜活体检测的准确性。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A是本申请实施例提供的一种示例智能手机的架构示意图;
图1B是本申请实施例公开的一种虹膜活体检测方法的流程示意图;
图2是本申请实施例公开的另一种虹膜活体检测方法的流程示意图;
图3是本申请实施例提供的一种电子设备的结构示意图;
图4A是本申请实施例提供的一种虹膜活体检测装置的结构示意图;
图4B是本申请实施例提供的图4A所描述的虹膜活体检测装置的检测单元的结构示意图;
图4C是本申请实施例提供的图4A所描述的虹膜活体检测装置的确定单元的结构示意图;
图4D是本申请实施例提供的图4A所描述的虹膜活体检测装置的获取单元的结构示意图;
图4E是本申请实施例提供的图4A所描述的虹膜活体检测装置的另一结构示意图;
图4F是本申请实施例提供的图4A所描述的虹膜活体检测装置的另一结构示意图;
图5是本申请实施例公开的另一种电子设备的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性 可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
本申请实施例所涉及到的电子设备可以包括各种具有无线通信功能的手持设备、车载设备、可穿戴设备、计算设备或连接到无线调制解调器的其他处理设备,以及各种形式的用户设备(User Equipment,UE),移动台(Mobile Station,MS),终端设备(terminal device)等等。为方便描述,上面提到的设备统称为电子设备。下面对本申请实施例进行详细介绍。如图1A所示的一种示例智能手机100,该智能手机100的虹膜识别装置可以包括红外补光灯21和红外摄像头22,在虹膜识别装置工作过程中,红外补光灯21的光线打到虹膜上之后,经过虹膜反射回红外摄像头22,虹膜识别装置采集虹膜图像,另外,摄像头23可为前置摄像头。
请参阅图1,为本申请实施例提供的一种虹膜活体检测方法的实施例流程示意图,可应用于如图1A所描述的智能手机。本实施例中所描述的虹膜活体检测方法,包括以下步骤:
101、获取虹膜图像。
其中,本申请实施例中的虹膜图像可为单指虹膜区域的图像,或者,包含虹膜区域的图像(例如,一只人眼图像)。例如,在用户使用电子设备时,可通过虹膜识别装置获取虹膜图像。
可选地,上述步骤101,可包括如下步骤:
11、获取测试图像。
12、从所述测试图像中提取虹膜图像。
其中,测试图像的话,不仅只有虹膜,当然还包括其他区域,因此,可对测试图像进行图像分割,以从中分割出虹膜图像。
102、将所述虹膜图像划分为K个区域图像,所述K为大于1的整数。
其中,可将虹膜图像划分为K个区域图像,每一区域图像独立存在,即K个区域图像之间没有重叠区域,K为大于1的整数,可选地,可以按照预设网格将虹膜图像划分为K个区域图像,又或者,将虹膜图像划分为等面积的K 个区域图像,上述预设网格可由用户自行设置,或者,系统默认,例如,来自于预先存储的网格模板。
可选的,在上述步骤101与步骤102之间,还可以包含如下步骤:
对所述虹膜图像进行图像增强处理。
其中,图像增强处理可包括但不仅限于:图像去噪(例如,小波变换进行图像去噪)、图像复原(例如,维纳滤波)、暗视觉增强算法(例如,直方图均衡化、灰度拉伸等等),在对虹膜图像进行图像增强处理之后,虹膜图像的质量可在一定程度上得到提升。进一步地,在执行步骤102的过程中,可对增强处理之后的虹膜图像划分为K个区域图像。
可选地,在上述步骤101与步骤102之间,还可以包含如下步骤:
A1、对所述虹膜图像进行图像质量评价,得到图像质量评价值;
A2、在所述图像质量评价值低于预设质量阈值时,对所述虹膜图像进行图像增强处理。
其中,上述预设质量阈值可由用户自行设置或者系统默认,可先对虹膜图像进行图像质量评价,得到一个图像质量评价值,通过该图像质量评价值判断该虹膜图像的质量是好还是坏,在图像质量评价值大于或等于预设质量阈值时,可认为虹膜图像质量好,在图像质量评价值小于预设质量阈值时,可认为虹膜图像质量差,进而,可对虹膜图像进行图像增强处理。进一步地,在执行步骤102的过程中,可对增强处理之后的虹膜图像划分为K个区域图像。
其中,上述步骤A1中,可采用至少一个图像质量评价指标对虹膜图像进行图像质量评价,从而,得到图像质量评价值。
可包含多个图像质量评价指标,每一图像质量评价指标也对应一个权重,如此,每一图像质量评价指标对虹膜图像进行图像质量评价时,均可得到一个评价结果,最终,进行加权运算,也就得到最终的图像质量评价值。图像质量评价指标可包括但不仅限于:均值、标准差、熵、清晰度、信噪比等等。
需要说明的是,由于采用单一评价指标对图像质量进行评价时,具有一定的局限性,因此,可采用多个图像质量评价指标对图像质量进行评价,当然,对图像质量进行评价时,并非图像质量评价指标越多越好,因为图像质量评价指标越多,图像质量评价过程的计算复杂度越高,也不见得图像质量评价效果越好,因此,在对图像质量评价要求较高的情况下,可采用2~10个图像质量 评价指标对图像质量进行评价。具体地,选取图像质量评价指标的个数及哪个指标,依据具体实现情况而定。当然,也得结合具体地场景选取图像质量评价指标,在暗环境下进行图像质量评价和亮环境下进行图像质量评价选取的图像质量指标可不一样。
可选地,在对图像质量评价精度要求不高的情况下,可用一个图像质量评价指标进行评价,例如,以熵对待处理图像进行图像质量评价值,可认为熵越大,则说明图像质量越好,相反地,熵越小,则说明图像质量越差。
可选地,在对图像质量评价精度要求较高的情况下,可以采用多个图像质量评价指标对图像进行评价,在多个图像质量评价指标对图像进行图像质量评价时,可设置该多个图像质量评价指标中每一图像质量评价指标的权重,可得到多个图像质量评价值,根据该多个图像质量评价值及其对应的权重可得到最终的图像质量评价值,例如,三个图像质量评价指标分别为:A指标、B指标和C指标,A的权重为a1,B的权重为a2,C的权重为a3,采用A、B和C对某一图像进行图像质量评价时,A对应的图像质量评价值为b1,B对应的图像质量评价值为b2,C对应的图像质量评价值为b3,那么,最后的图像质量评价值=a1b1+a2b2+a3b3。通常情况下,图像质量评价值越大,说明图像质量越好。
103、采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到K个检测结果,所述P为大于1且不大于所述K的整数。
其中,上述P种虹膜活体检测算法可为任一活体检测算法,当然,由于每一虹膜活体检测算法并非可以实现无误检测,因而,其具有一定的概率,即每一种活体检测算法对应一个可信度。具体实现中,P种虹膜活体检测算法中的每种虹膜活体检测算法至少对K个区域图像中的一个区域图像进行活体检测,在P=K时,则每一种虹膜活体检测算法可对K个区域图像中的唯一一个区域图像进行活体检测,在P小于K时,则会出现有的虹膜活体检测算法对多个区域图像进行活体检测。当然,上述P可由用户自行设置或者系统默认,具体可依据实际情况而定。
可选地,上述步骤103中,所述采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到K个检测结果,可包括如下步骤:
31、确定所述K个区域图像中每一区域图像的区域特性,得到K个区域 特性;
32、按照预设的区域特性与虹膜活体检测算法之间的映射关系,确定所述K个区域特性对应的活体检测算法,得到所述P种虹膜活体检测算法;
33、根据所述P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到所述K个检测结果。
其中,上述区域特性可为以下至少一种:特征点个数、图像质量评价值、清晰度、熵、平均亮度等等。上述区域特性与虹膜活体检测算法之间的映射关系,可依据实验得到,例如,在区域特性为特征点个数的时候,可依据特征点个数选择不同的虹膜活体检测算法,例如,0~5个特征点,对应虹膜活体检测算法A,6~10个特征点,对应虹膜活体检测算法B,11~15个特征点,对应虹膜活体检测算法C,以此类推。可确定K个区域图像中每一区域的区域特性,可得到K个区域特性,进而,可按照上述预设的区域特性与虹膜活体检测算法之间的映射关系,确定K个区域特性对应的虹膜活体检测算法,得到P种虹膜活体检测算法,当然,有可能K个区域特性中某几个区域特性对应一种虹膜活体检测算法。进而,可根据该P种虹膜活体检测算法对K个区域图像进行活体检测,得到K个检测结果,每一检测结果对应一个区域图像。如此,可依据区域特性,有针对性地对不同区域进行活体检测,可在一定程度上降低误检测率。
可选地,上述虹膜活体检测算法可为以下至少一种:基于支持向量机(Support Vector Machine,SVM)的虹膜活体检测算法,基于神经网络的虹膜活体检测算法、基于遗传算法的虹膜活体检测算法等等。
104、根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜。
其中,上述K个检测结果中有的可能是显示虹膜图像来自于活体虹膜,当然,也可能有显示虹膜图像来自于非活体虹膜。在K个检测结果中大部分检测结果显示虹膜图像来自于活体虹膜的时候,可确认虹膜图像来自于活体虹膜。上述非活体虹膜可为以下一种:3D打印的虹膜、照片中的虹膜、没有生命特征的人的虹膜,在此不做限定。
可选地,上述步骤104中,根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜,可包括如下步骤:
41、获取所述P种虹膜活体检测算法对应的可信度,得到P个可信度;
42、根据所述K个检测结果和所述P个可信度进行加权运算,得到目标检测结果;
43、在所述目标检测结果大于预设阈值时,则确认所述虹膜图像来自活体虹膜。
其中,上述预设阈值可由用户自行设置或者系统默认。上述P种虹膜活体检测算法,每一虹膜活体检测算法均对应一个可信度,即其正确实现虹膜活体检测的概率,该可信度可由经验得到,例如,测试1000个虹膜图像,其中,判断正确的次数与总次数(1000)之间的比值作为可信度。每一检测结果可对应一个可信度,因而,可根据K个检测结果其对应的可信度进行加权运算,可得到目标检测结果,具体地,可将检测结果为活体的检测结果定义为1,检测结果为非活体的检测结果定义为0,进而,5个检测结果,A、B、C、D和E,其中,A、B对应虹膜活体检测算法a,其可信度为k1,C、D对应虹膜活体检测算法b,其可信度为k2,E对应虹膜活体检测算法c,其可信度对应k3,若A检测结果为活体,B检测结果为非活体,C、D检测结果为活体,E检测结果为非活体,则其目标检测结果为k1+2*k2。在目标检测结果大于预设阈值的时候,可认为虹膜图像来自活体虹膜。在目标检测结果小于或等于预设阈值的时候,可认为虹膜图像来自非活体虹膜。
可以看出,本申请实施例中,获取虹膜图像,将虹膜图像划分为K个区域图像,K为大于1的整数,采用P种虹膜活体检测算法对K个区域图像进行活体检测,得到K个检测结果,P为大于1且不大于K的整数,根据K个检测结果判断虹膜图像是否来自活体虹膜,从而,可将虹膜图像划分为多个区域,并对每一区域选择一种虹膜活体检测算法对其进行活体检测,进而可得到多个检测结果,并依据这些检测结果确定虹膜图像是否来自活体虹膜,可降低单种虹膜活体检测算法带来的误检测率,提高了虹膜活体检测的准确性。
请参阅图2,为本申请实施例提供的一种虹膜活体检测方法的实施例流程示意图。本实施例中所描述的虹膜活体检测方法,包括以下步骤:
201、获取测试图像。
其中,可利用电子设备的摄像头获取人脸图像,将该人脸图像作为测试图像。当然,测试图像还可以为人眼图像。
202、从所述测试图像中提取虹膜图像。
其中,上述步骤202,从所述测试图像中提取虹膜图像,可包括如下步骤:
221、对所述测试图像进行人眼识别,得到人眼图像;
222、对所述人眼图像进行轮廓提取,得到轮廓图像;
223、依据人眼结构从所述轮廓图像中确定出虹膜轮廓;
224、依据虹膜轮廓从所述人眼图像中提取出虹膜图像。
其中,上述人眼识别可以采用分类器,分类器可以为以下至少一种:支持向量机(support vector machine,SVM)、Adaboost分类器、贝叶斯分类器等等,在此不做限定,例如,可采用Adaboost分类器对测试图像进行人脸识别,得到人眼图像,可进一步对人眼图像进行轮廓提取,得到轮廓图像,轮廓提取的方式可以为以下至少一种:霍夫变换、haar算子、Canny算子等等,在此不做限定。人眼结构决定了虹膜轮廓的位置、以及虹膜轮廓形状,因此,可以依据人眼结构从轮廓图像中确定出虹膜轮廓,进而,可以依据虹膜轮廓从人眼图像中选取对应区域的图像,即虹膜图像。
203、将所述虹膜图像划分为K个区域图像,所述K为大于1的整数。
204、采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到K个检测结果,所述P为大于1且不大于所述K的整数。
205、根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜。
其中,上述步骤202-步骤205的具体描述可参照图1B所描述的虹膜活体检测方法的对应步骤,在此不再赘述。
可以看出,本申请实施例中,获取测试图像,并从其中提取虹膜图像,将虹膜图像划分为K个区域图像,K为大于1的整数,采用P种虹膜活体检测算法对K个区域图像进行活体检测,得到K个检测结果,P为大于1且不大于K的整数,根据K个检测结果判断虹膜图像是否来自活体虹膜,从而,可将虹膜图像划分为多个区域,并对每一区域选择一种虹膜活体检测算法对其进行活体检测,进而可得到多个检测结果,并依据这些检测结果确定虹膜图像是否来自活体虹膜,可降低单种虹膜活体检测算法带来的误检测率,提高了虹膜活体检测的准确性。
请参阅图3,图3是本申请实施例提供的一种电子设备,包括:应用处理 器AP和存储器;以及一个或多个程序,所述一个或多个程序被存储在所述存储器中,并且被配置成由所述AP执行,所述程序包括用于执行以下步骤的指令:
获取虹膜图像;
将所述虹膜图像划分为K个区域图像,所述K为大于1的整数;
采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到K个检测结果,所述P为大于1且不大于所述K的整数;
根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜。
在一个可能的示例中,在所述采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到K个检测结果方面,所述程序包括用于执行以下步骤的指令:
确定所述K个区域图像中每一区域图像的区域特性,得到所述K个区域特性;
按照预设的区域特性与虹膜活体检测算法之间的映射关系,确定所述K个区域特性对应的虹膜活体检测算法,得到所述P种虹膜活体检测算法;
根据所述P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到所述K个检测结果。
在一个可能的示例中,所述区域特性为以下至少一种:特征点个数、图像质量评价值、清晰度、熵和平均亮度。
在一个可能的示例中,在所述根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜方面,所述程序包括用于执行以下步骤的指令:
获取所述P种虹膜活体检测算法对应的可信度,得到P个可信度;
根据所述K个检测结果和所述P个可信度进行加权运算,得到目标检测结果;
在所述目标检测结果大于预设阈值时,则确认所述虹膜图像来自活体虹膜。
在一个可能的示例中,在所述获取虹膜图像方面,所述程序包括用于执行以下步骤的指令:
获取测试图像;
从所述测试图像中提取所述虹膜图像。
在一个可能的示例中,在所述从所述测试图像中提取虹膜图像方面,所述程序包括用于执行以下步骤的指令:
对所述测试图像进行人眼识别,得到人眼图像;
对所述人眼图像进行轮廓提取,得到轮廓图像;
依据人眼结构从所述轮廓图像中确定出虹膜轮廓;
依据虹膜轮廓从所述人眼图像中提取出虹膜图像。
在一个可能的示例中,所述程序包括用于执行以下步骤的指令:
对所述虹膜图像进行图像增强处理;在所述将所述虹膜图像划分为K个区域图像方面,所述程序包括用于执行以下步骤的指令:
将所述图像增强处理后的虹膜图像划分为K个区域图像。
在一个可能的示例中,所述程序还包括用于执行以下步骤的指令:
对所述虹膜图像进行图像质量评价,得到图像质量评价值;
在所述图像质量评价值低于预设质量阈值时,执行对所述虹膜图像进行图像增强处理的步骤。
在一个可能的示例中,所述对所述虹膜图像进行图像质量评价,得到图像质量评价值,所述程序包括用于执行以下步骤的指令:
采用至少一个图像质量评价指标对虹膜图像进行图像质量评价,得到图像质量评价值。
请参阅图4A,图4A是本实施例提供的一种虹膜活体检测装置的结构示意图。该虹膜活体检测装置应用于电子设备,虹膜活体检测装置包括获取单元401、划分单元402、检测单元403和判断单元404,其中,
获取单元401,用于获取虹膜图像;
划分单元402,用于将所述虹膜图像划分为K个区域图像,所述K为大于1的整数;
检测单元403,用于采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到K个检测结果,所述P为大于1且不大于所述K的整数;
判断单元404,用于根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜。
可选地,如图4B,图4B是图4A所描述的虹膜活体检测装置的检测单元 403的具体细节结构,所述检测单元403包括:第一确定模块4031和检测模块4032,具体如下;
第一确定模块4031,用于确定所述K个区域图像中每一区域图像的区域特性,得到所述K个区域特性;
所述第一确定模块4031,还用于:
按照预设的区域特性与虹膜活体检测算法之间的映射关系,确定所述K个区域特性对应的虹膜活体检测算法,得到所述P种虹膜活体检测算法;
检测模块4032,用于根据所述P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到所述K个检测结果。
可选地,所述区域特性为以下至少一种:特征点个数、图像质量评价值、清晰度、熵和平均亮度。
可选地,如图4C,图4C是图4A所描述的虹膜活体检测装置的判断单元404的具体细节结构,所述判断单元404可包括:第一获取模块4041、计算模块4042和第二确定模块4043,具体如下:
第一获取模块4041,用于获取所述P种虹膜活体检测算法对应的可信度,得到P个可信度;
计算模块4042,用于根据所述K个检测结果和所述P个可信度进行加权运算,得到目标检测结果;
第二确定模块4043,用于在所述目标检测结果大于预设阈值时,则确认所述虹膜图像来自活体虹膜。
可选地,如图4D,图4D是图4A所描述的虹膜活体检测装置的获取单元401的具体细节结构,所述获取单元401可包括:第二获取模块4011和提取模块4012,具体如下:
第二获取模块4011,用于获取测试图像;
提取模块4012,用于从所述测试图像中提取所述虹膜图像。
进一步可选地,在所述从所述测试图像中提取虹膜图像方面,所述提取模块4012具体用于:
对所述测试图像进行人眼识别,得到人眼图像;
对所述人眼图像进行轮廓提取,得到轮廓图像;
依据人眼结构从所述轮廓图像中确定出虹膜轮廓;
依据虹膜轮廓从所述人眼图像中提取出虹膜图像。
可选地,如图4E,图4E为图4A所描述的虹膜活体检测装置的变型结构,所述装置还可包括:处理单元405,具体如下:
处理单元405,用于对所述虹膜图像进行图像增强处理,并由所述划分单元将所述图像增强处理后的虹膜图像划分为K个区域图像。
可选地,如图4F,图4F为图4A所描述的虹膜活体检测装置的变型结构,所述装置还可包括:评价单元406,具体如下:
评价单元406,用于对所述虹膜图像进行图像质量评价,得到图像质量评价值,在所述图像质量评价值低于预设质量阈值时,由所述处理单元405执行对所述虹膜图像进行图像增强处理的步骤。
进一步可选地,在所述对所述虹膜图像进行图像质量评价,得到图像质量评价值方面,所述评价单元406具体用于:
采用至少一个图像质量评价指标对虹膜图像进行图像质量评价,得到图像质量评价值。
可以理解的是,本实施例的虹膜活体检测装置的各程序模块的功能可根据上述方法实施例中的方法具体实现,其具体实现过程可以参照上述方法实施例的相关描述,此处不再赘述。
本申请实施例还提供了另一种电子设备,如图5所示,为了便于说明,仅示出了与本申请实施例相关的部分,具体技术细节未揭示的,请参照本申请实施例方法部分。该电子设备可以为包括手机、平板电脑、PDA(personal digital assistant,个人数字助理)、POS(point of sales,销售终端)、车载电脑等任意终端设备,以电子设备为手机为例:
图5示出的是与本申请实施例提供的电子设备相关的手机的部分结构的框图。参考图5,手机包括:射频(radio frequency,RF)电路910、存储器920、输入单元930、传感器950、音频电路960、无线保真(wireless fidelity,WiFi)模块970、应用处理器AP980、以及电源990等部件。本领域技术人员可以理解,图5中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图5对手机的各个构成部件进行具体的介绍:
输入单元930可用于接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地,输入单元930可包括触控显示屏933、虹膜识别装置931以及其他输入设备932。虹膜识别装置931结合至触控显示屏933,虹膜识别装置931的虹膜识别区域位于触控显示屏933的第一区域。输入单元930还可以包括其他输入设备932。具体地,其他输入设备932可以包括但不限于物理按键、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
其中,所述虹膜识别装置931用于获取虹膜图像;
所述AP980用于执行如下步骤:
将所述虹膜图像划分为K个区域图像,所述K为大于1的整数;
采用P种虹膜活体检测算法对所述K个区域图像进行活体检测,得到所述K个检测结果,所述P为大于1且不大于所述K的整数;
根据所述K个检测结果判断所述虹膜图像是否来自活体虹膜。
AP980是手机的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器920内的软件程序和/或模块,以及调用存储在存储器920内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,AP980可包括一个或多个处理单元;优选的,AP980可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到AP980中。
此外,存储器920可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
RF电路910可用于信息的接收和发送。通常,RF电路910包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(low noise amplifier,LNA)、双工器等。此外,RF电路910还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(global system of mobile communication,GSM)、通用分组无线服务(general packet radio service,GPRS)、码分多址(code division multiple access,CDMA)、宽带码分多址(wideband code division multiple access,WCDMA)、长期演进(long term evolution,LTE)、电子邮件、短消息服务(short messaging  service,SMS)等。
手机还可包括至少一种传感器950,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节触控显示屏的亮度,接近传感器可在手机移动到耳边时,关闭触控显示屏和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路960、扬声器961,传声器962可提供用户与手机之间的音频接口。音频电路960可将接收到的音频数据转换后的电信号,传输到扬声器961,由扬声器961转换为声音信号播放;另一方面,传声器962将收集的声音信号转换为电信号,由音频电路960接收后转换为音频数据,再将音频数据播放AP980处理后,经RF电路910以发送给比如另一手机,或者将音频数据播放至存储器920以便进一步处理。
WiFi属于短距离无线传输技术,手机通过WiFi模块970可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图5示出了WiFi模块970,但是可以理解的是,其并不属于手机的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
手机还包括给各个部件供电的电源990(比如电池),优选的,电源可以通过电源管理系统与AP980逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出,手机还可以包括摄像头、蓝牙模块等,在此不再赘述。
前述图1B、图2所示的实施例中,各步骤方法流程可以基于该手机的结构实现。
前述图3、图4A~图4F所示的实施例中,各单元功能可以基于该手机的结构实现。
本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质存储用于电子数据交换的计算机程序,该计算机程序使得计算机执行如上述方法实 施例中记载的任何一种虹膜活体检测方法的部分或全部步骤。
本申请实施例还提供一种计算机程序产品,所述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,所述计算机程序可操作来使计算机执行如上述方法实施例中记载的任何一种虹膜活体检测方法的部分或全部步骤。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件程序模块的形式实现。
所述集成的单元如果以软件程序模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储器中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部 或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储器包括:U盘、只读存储器(read-only memory,ROM)、随机存取存储器(RAM,random access memory,RAM)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储器中,存储器可以包括:闪存盘、ROM、RAM、磁盘或光盘等。
以上对本申请实施例进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. An iris living body detection method, comprising:
    acquiring an iris image;
    dividing the iris image into K area images, wherein K is an integer greater than 1;
    performing living body detection on the K area images by using P iris living body detection algorithms to obtain K detection results, wherein P is an integer greater than 1 and not greater than K;
    determining, according to the K detection results, whether the iris image is from a living iris.
  2. The method according to claim 1, wherein the performing living body detection on the K area images by using P iris living body detection algorithms to obtain K detection results comprises:
    determining an area characteristic of each of the K area images to obtain K area characteristics;
    determining, according to a preset mapping relationship between area characteristics and iris living body detection algorithms, the iris living body detection algorithms corresponding to the K area characteristics to obtain the P iris living body detection algorithms;
    performing living body detection on the K area images according to the P iris living body detection algorithms to obtain the K detection results.
  3. The method according to claim 2, wherein the area characteristic is at least one of the following: a number of feature points, an image quality evaluation value, sharpness, entropy, and average brightness.
  4. The method according to any one of claims 1 to 3, wherein the determining, according to the K detection results, whether the iris image is from a living iris comprises:
    obtaining credibilities corresponding to the P iris living body detection algorithms to obtain P credibilities;
    performing a weighted operation according to the K detection results and the P credibilities to obtain a target detection result;
    confirming that the iris image is from a living iris when the target detection result is greater than a preset threshold.
  5. The method according to any one of claims 1 to 4, wherein the acquiring an iris image comprises:
    acquiring a test image;
    extracting the iris image from the test image.
  6. The method according to claim 5, wherein the extracting the iris image from the test image comprises:
    performing human eye recognition on the test image to obtain a human eye image;
    performing contour extraction on the human eye image to obtain a contour image;
    determining an iris contour from the contour image according to a human eye structure;
    extracting the iris image from the human eye image according to the iris contour.
  7. The method according to any one of claims 1 to 6, wherein the method further comprises:
    performing image enhancement processing on the iris image;
    and the dividing the iris image into K area images comprises:
    dividing the iris image after the image enhancement processing into the K area images.
  8. The method according to claim 7, wherein the method further comprises:
    performing image quality evaluation on the iris image to obtain an image quality evaluation value;
    performing the step of image enhancement processing on the iris image when the image quality evaluation value is lower than a preset quality threshold.
  9. The method according to claim 8, wherein the performing image quality evaluation on the iris image to obtain an image quality evaluation value comprises:
    performing image quality evaluation on the iris image by using at least one image quality evaluation index to obtain the image quality evaluation value.
  10. An iris living body detection apparatus, comprising:
    an obtaining unit configured to acquire an iris image;
    a dividing unit configured to divide the iris image into K area images, wherein K is an integer greater than 1;
    a detecting unit configured to perform living body detection on the K area images by using P iris living body detection algorithms to obtain K detection results, wherein P is an integer greater than 1 and not greater than K;
    a determining unit configured to determine, according to the K detection results, whether the iris image is from a living iris.
  11. The apparatus according to claim 10, wherein the detecting unit comprises:
    a first determining module configured to determine an area characteristic of each of the K area images to obtain K area characteristics;
    the first determining module being further configured to determine, according to a preset mapping relationship between area characteristics and iris living body detection algorithms, the iris living body detection algorithms corresponding to the K area characteristics to obtain the P iris living body detection algorithms;
    a detecting module configured to perform living body detection on the K area images according to the P iris living body detection algorithms to obtain the K detection results.
  12. The apparatus according to claim 11, wherein the area characteristic is at least one of the following: a number of feature points, an image quality evaluation value, sharpness, entropy, and average brightness.
  13. The apparatus according to any one of claims 10 to 12, wherein the determining unit comprises:
    a first obtaining module configured to obtain credibilities corresponding to the P iris living body detection algorithms to obtain P credibilities;
    a calculating module configured to perform a weighted operation according to the K detection results and the P credibilities to obtain a target detection result;
    a second determining module configured to confirm that the iris image is from a living iris when the target detection result is greater than a preset threshold.
  14. The apparatus according to any one of claims 10 to 13, wherein the obtaining unit comprises:
    a second obtaining module configured to acquire a test image;
    an extracting module configured to extract the iris image from the test image.
  15. The apparatus according to claim 14, wherein, in terms of extracting the iris image from the test image, the extracting module is specifically configured to:
    perform human eye recognition on the test image to obtain a human eye image;
    perform contour extraction on the human eye image to obtain a contour image;
    determine an iris contour from the contour image according to a human eye structure;
    extract the iris image from the human eye image according to the iris contour.
  16. The apparatus according to any one of claims 10 to 15, wherein the apparatus further comprises:
    a processing unit configured to perform image enhancement processing on the iris image, wherein the dividing unit divides the iris image after the image enhancement processing into the K area images.
  17. The apparatus according to claim 16, wherein the apparatus further comprises:
    an evaluating unit configured to perform image quality evaluation on the iris image to obtain an image quality evaluation value, wherein, when the image quality evaluation value is lower than a preset quality threshold, the processing unit performs the step of performing image enhancement processing on the iris image.
  18. An electronic device, comprising an application processor (AP) and a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing the method according to any one of claims 1 to 9.
  19. A computer readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to any one of claims 1 to 9.
  20. A computer program product, comprising a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to execute the method according to any one of claims 1 to 9.
PCT/CN2018/090646 2017-06-30 2018-06-11 Iris living body detection method and related products WO2019001253A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18824914.8A EP3627382A4 (en) 2017-06-30 2018-06-11 METHOD OF DETECTING IRIS LIVENESS AND RELATED PRODUCT
US16/725,539 US11200437B2 (en) 2017-06-30 2019-12-23 Method for iris-based living body detection and related products

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710523091.0 2017-06-30
CN201710523091.0A CN107358183B (zh) 2017-06-30 2017-06-30 虹膜活体检测方法及相关产品

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/725,539 Continuation US11200437B2 (en) 2017-06-30 2019-12-23 Method for iris-based living body detection and related products

Publications (1)

Publication Number Publication Date
WO2019001253A1 (zh)

Family

ID=60273429

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090646 WO2019001253A1 (zh) 2017-06-30 2018-06-11 Iris living body detection method and related products

Country Status (4)

Country Link
US (1) US11200437B2 (zh)
EP (1) EP3627382A4 (zh)
CN (1) CN107358183B (zh)
WO (1) WO2019001253A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358183B * 2017-06-30 2020-09-01 Oppo广东移动通信有限公司 Iris living body detection method and related products
CN111932537B * 2020-10-09 2021-01-15 腾讯科技(深圳)有限公司 Object deformation detection method and apparatus, computer device, and storage medium


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8437513B1 (en) * 2012-08-10 2013-05-07 EyeVerify LLC Spoof detection for biometric authentication
US9767358B2 (en) * 2014-10-22 2017-09-19 Veridium Ip Limited Systems and methods for performing iris identification and verification using mobile devices
CN105447443B * 2015-06-16 2019-03-19 北京眼神智能科技有限公司 Method and device for improving recognition accuracy of an iris recognition device
CN105608409B * 2015-07-16 2019-01-11 宇龙计算机通信科技(深圳)有限公司 Fingerprint identification method and device
CN105320939B * 2015-09-28 2019-01-25 深圳爱酷智能科技有限公司 Iris living body detection method and device
US9594969B1 (en) * 2015-11-19 2017-03-14 Intel Corporation Iris recognition including liveness testing
CN105550661A * 2015-12-29 2016-05-04 北京无线电计量测试研究所 Iris feature extraction method based on the Adaboost algorithm
CN105740914A * 2016-02-26 2016-07-06 江苏科海智能系统有限公司 License plate recognition method and system based on nearest-neighbor multi-classifier ensemble
CN106408591B 2016-09-09 2019-04-05 南京航空航天大学 Occlusion-resistant target tracking method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833646A * 2009-03-11 2010-09-15 中国科学院自动化研究所 Iris living body detection method
CN101916361A * 2010-05-28 2010-12-15 深圳大学 Iris feature design method and system based on 2D-DCT transform
CN101923640A * 2010-08-04 2010-12-22 中国科学院自动化研究所 Method for discriminating forged iris images based on robust texture features and machine learning
US20150098629A1 (en) * 2013-10-08 2015-04-09 Sri International Iris biometric recognition module and access control assembly
CN104240205A * 2014-09-26 2014-12-24 北京无线电计量测试研究所 Iris image enhancement method based on matrix completion
CN107358183A * 2017-06-30 2017-11-17 广东欧珀移动通信有限公司 Iris living body detection method and related products

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3627382A4 *

Also Published As

Publication number Publication date
CN107358183A (zh) 2017-11-17
US20200143187A1 (en) 2020-05-07
US11200437B2 (en) 2021-12-14
EP3627382A4 (en) 2020-09-02
CN107358183B (zh) 2020-09-01
EP3627382A1 (en) 2020-03-25

Similar Documents

Publication Publication Date Title
WO2019011099A1 (zh) 虹膜活体检测方法及相关产品
WO2019020014A1 (zh) 解锁控制方法及相关产品
WO2019011206A1 (zh) 活体检测方法及相关产品
AU2018299524B2 (en) Iris-based living-body detection method, mobile terminal and storage medium
US11055547B2 (en) Unlocking control method and related products
CN107590461B (zh) 人脸识别方法及相关产品
RU2731370C1 (ru) Способ распознавания живого организма и терминальное устройство
WO2019052329A1 (zh) 人脸识别方法及相关产品
CN107657218B (zh) 人脸识别方法及相关产品
WO2019024717A1 (zh) 防伪处理方法及相关产品
WO2019011098A1 (zh) 解锁控制方法及相关产品
WO2019001254A1 (zh) 虹膜活体检测方法及相关产品
CN107451454B (zh) 解锁控制方法及相关产品
CN107784271B (zh) 指纹识别方法及相关产品
WO2019015418A1 (zh) 解锁控制方法及相关产品
CN107506697B (zh) 防伪处理方法及相关产品
CN107545163B (zh) 解锁控制方法及相关产品
US10706282B2 (en) Method and mobile terminal for processing image and storage medium
WO2019015574A1 (zh) 解锁控制方法及相关产品
US11200437B2 (en) Method for iris-based living body detection and related products
WO2019015432A1 (zh) 识别虹膜活体的方法及相关产品
CN117876326A (zh) 标志物识别方法、装置、设备及存储介质
CN117135399A (zh) 视频卡顿检测方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18824914

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018824914

Country of ref document: EP

Effective date: 20191218

NENP Non-entry into the national phase

Ref country code: DE