WO2019011099A1 - Iris living body detection method and related products - Google Patents

Iris living body detection method and related products

Info

Publication number
WO2019011099A1
Authority
WO
WIPO (PCT)
Prior art keywords
iris image
training result
image
iris
feature set
Prior art date
Application number
PCT/CN2018/091082
Other languages
English (en)
French (fr)
Inventor
周意保
唐城
张学勇
周海涛
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019011099A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/19 Sensors therefor
    • G06V 40/193 Preprocessing; Feature extraction
    • G06V 40/197 Matching; Classification
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive

Definitions

  • The present application relates to the field of electronic device technologies, and in particular, to an iris living body detection method and related products.
  • Iris recognition is increasingly favored by electronic device manufacturers, and the security of iris recognition is one of the important issues of concern.
  • A living body check is usually performed on the iris before iris recognition, but the accuracy of current iris living body detection is not high.
  • The embodiments of the present application provide an iris living body detection method and related products, so as to improve the accuracy of iris living body detection.
  • an embodiment of the present application provides an iris living body detecting method, where the method includes:
  • an embodiment of the present application provides an electronic device, including a visible light camera, an infrared camera, and an application processor AP, where
  • the visible light camera is configured to acquire a first iris image
  • the infrared camera is configured to acquire a second iris image, wherein the first iris image and the second iris image are from the same target;
  • the AP is configured to perform feature extraction on the first iris image to obtain a first type feature set
  • the AP is further configured to perform feature extraction on the second iris image to obtain a second type feature set, and determine whether the target is a living body according to the first type feature set and the second type feature set.
  • an embodiment of the present application provides an iris living body detecting apparatus, including:
  • a first acquiring unit configured to acquire a first iris image by using a visible light camera
  • a second acquiring unit configured to acquire a second iris image by using an infrared camera, wherein the first iris image and the second iris image are from the same target;
  • An extracting unit configured to perform feature extraction on the first iris image to obtain a first type feature set
  • the extracting unit is further configured to perform feature extraction on the second iris image to obtain a second type feature set
  • a determining unit configured to determine, according to the first type of feature set and the second type of feature set, whether the target is a living body.
  • An embodiment of the present application provides an electronic device, including a visible light camera, an infrared camera, an application processor AP, and a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing some or all of the steps described in the first aspect of the embodiments of the present application.
  • the embodiment of the present application provides a computer readable storage medium, wherein the computer readable storage medium is used to store a computer program, wherein the computer program causes a computer to perform the first aspect of the embodiment of the present application.
  • An embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer readable storage medium storing a computer program, the computer program being operative to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.
  • the computer program product can be a software installation package.
  • In the embodiments of the present application, the first iris image is acquired by the visible light camera, and the second iris image is acquired by the infrared camera, wherein the first iris image and the second iris image are from the same target. Feature extraction is performed on the first iris image to obtain a first type feature set, feature extraction is performed on the second iris image to obtain a second type feature set, and whether the target is a living body is determined according to the first type feature set and the second type feature set. It can be seen that the visible light camera and the infrared camera each acquire an iris image, features are extracted from both iris images, and whether the iris is from a living body is judged according to both sets of features; the iris can thus be checked in multiple dimensions, which can improve the accuracy of living body detection.
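The overall flow described above can be sketched as follows. All function names are hypothetical stand-ins for the components the patent describes (capture, feature extraction, and the living body decision), not an API defined by the patent:

```python
# Hypothetical end-to-end sketch of the described method. The capture and
# feature-extraction functions are stand-ins; a real implementation would
# drive the two cameras and run e.g. Harris/SIFT feature extraction.

def capture_visible():            # first iris image (visible light camera)
    return [[10, 12], [11, 13]]

def capture_infrared():           # second iris image (infrared camera)
    return [[200, 201], [199, 202]]

def extract_features(image):
    # Stand-in: flatten the image; a real system would extract corner or
    # texture features (the first / second type feature set).
    return [px for row in image for px in row]

def is_living_body(features_visible, features_infrared):
    # Stand-in decision: the patent determines liveness from both feature
    # sets; here we merely check that both sets are non-empty.
    return bool(features_visible) and bool(features_infrared)

first_iris = capture_visible()
second_iris = capture_infrared()          # same target as first_iris
f1 = extract_features(first_iris)         # first type feature set
f2 = extract_features(second_iris)        # second type feature set
print(is_living_body(f1, f2))             # True
```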
  • FIG. 1A is a schematic structural diagram of an example smart phone provided by an embodiment of the present application.
  • FIG. 1B is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 1C is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 1D is a schematic flow chart of a method for detecting an iris living body according to an embodiment of the present application;
  • FIG. 1E is a comparison diagram of visible light and infrared iris images provided by an embodiment of the present application.
  • FIG. 2 is a schematic flow chart of another iris living body detecting method provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 4A is a schematic structural view of an iris living body detecting device according to an embodiment of the present application.
  • FIG. 4B is a schematic structural diagram of a determining unit of the iris living body detecting device described in FIG. 4A according to an embodiment of the present application;
  • FIG. 4C is a schematic structural diagram of a determining module of the determining unit described in FIG. 4B according to an embodiment of the present application;
  • FIG. 4D is a schematic structural diagram of a second acquiring unit of the iris living body detecting device described in FIG. 4A according to an embodiment of the present application;
  • FIG. 4E is a schematic structural diagram of another iris living body detecting device provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application.
  • References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application.
  • The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly understand that the embodiments described herein can be combined with other embodiments.
  • The electronic device involved in the embodiments of the present application may include various handheld devices having wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like.
  • For convenience of description, the devices mentioned above are collectively referred to as electronic devices.
  • the embodiments of the present application are described in detail below.
  • The electronic device described in the embodiments of the present application may be provided with an iris recognition device, which can integrate a visible light camera and an infrared camera, thereby acquiring a visible light iris image and an infrared iris image. The visible light camera and the infrared camera can be registered, so that the viewing angle ranges of the two are completely the same.
  • Alternatively, the visible light camera and the infrared camera can be left unregistered, in which case the two have a partially overlapping viewing angle range.
  • As shown in FIG. 1A, the iris recognition device of the smart phone 100 may include an infrared fill light 21, an infrared camera 22, and a visible light camera 23, where the visible light camera 23 may be a front camera. During iris living body detection, the iris recognition device collects an infrared iris image through the infrared camera 22 and collects a visible light iris image through the visible light camera 23, and living body detection is performed based on both images.
  • The visible light camera 23 can also be used on its own as a front camera for taking selfies. The details are described below.
  • FIG. 1B is a schematic structural diagram of an electronic device 100.
  • The electronic device 100 includes an application processor (AP) 110 and an iris recognition device 130.
  • The iris recognition device 130 can integrate an infrared camera, a visible light camera, and an infrared fill light, and the AP 110 is connected to the iris recognition device 130 via the bus 150.
  • Please refer to FIG. 1C, which is a modified structure of the electronic device 100 depicted in FIG. 1B; compared with FIG. 1B, FIG. 1C further includes an environment sensor 160.
  • the visible light camera 23 is configured to acquire a first iris image and send the first iris image to the AP 110;
  • the infrared camera 22 is configured to acquire a second iris image, and send the second iris image to the AP 110, wherein the first iris image and the second iris image are from the same target;
  • the AP 110 is configured to perform feature extraction on the first iris image to obtain a first type feature set.
  • the AP 110 is further configured to perform feature extraction on the second iris image to obtain a second type feature set, and determine whether the target is a living body according to the first type feature set and the second type feature set.
  • In determining whether the target is a living body according to the first type feature set and the second type feature set, the AP 110 is specifically configured to:
  • train the first type feature set by using a preset first living body detection classifier to obtain a first training result; train the second type feature set by using a preset second living body detection classifier to obtain a second training result; and determine whether the target is a living body according to the first training result and the second training result.
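As a minimal sketch of this step, the two "training results" can be treated as probability scores produced by two pre-trained classifiers and then fused into a decision. The scorer functions below are dummies standing in for the preset classifiers (e.g. SVMs trained on visible-light and infrared iris images), and the averaging fusion is one simple choice, not the patent's only variant:

```python
# Dummy stand-ins for the preset first/second living body detection
# classifiers; a real system would load trained models here.
def first_classifier_score(first_type_features):
    return 0.85   # first training result (probability of a live iris)

def second_classifier_score(second_type_features):
    return 0.70   # second training result

def decide_living_body(f1, f2, threshold=0.5):
    r1 = first_classifier_score(f1)
    r2 = second_classifier_score(f2)
    # Simple fusion: average the two training results and compare with a
    # threshold. (The patent also describes an environment-weighted variant.)
    return (r1 + r2) / 2.0 > threshold

print(decide_living_body([0.1, 0.2], [0.3, 0.4]))   # True
```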
  • the electronic device is provided with an environmental sensor
  • the environment sensor 160 is configured to acquire a current environment parameter, and send the current environment parameter to the AP 110.
  • the AP 110 is specifically configured to:
  • determine a target training result according to the first training result and the second training result, and when the target training result meets a preset condition, confirm that the target is a living body.
  • In acquiring the second iris image, the infrared camera 22 is specifically configured to:
  • determine region location information of the first iris image; acquire an infrared image captured by the infrared camera; and determine the second iris image from the infrared image according to the region location information.
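Assuming registered cameras and region location information expressed as a top-left corner plus a size (a hypothetical encoding; the patent does not fix a format), this step amounts to cropping the infrared frame:

```python
import numpy as np

def crop_second_iris(infrared_image, region):
    """Crop the second iris image out of the full infrared frame using the
    region location information determined from the first iris image.
    region = (top, left, height, width) -- an assumed encoding."""
    top, left, h, w = region
    return infrared_image[top:top + h, left:left + w]

infrared_frame = np.zeros((480, 640), dtype=np.uint8)  # dummy infrared image
second_iris = crop_second_iris(infrared_frame, (120, 260, 64, 96))
print(second_iris.shape)   # (64, 96)
```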
  • the infrared camera 22 and the visible light camera 23 have the same range of viewing angles.
  • the AP 110 is further specifically configured to:
  • the AP 110 is specifically configured to:
  • Feature extraction is performed on the first iris image after image enhancement processing.
  • Please refer to FIG. 1D, which is a schematic flowchart of an embodiment of an iris living body detection method according to an embodiment of the present application.
  • the method is applied to an electronic device, and the schematic diagram and structure of the electronic device can be seen in FIG. 1A-FIG. 1C, the iris living body detecting method described in this embodiment includes the following steps:
  • The electronic device may acquire the first iris image by using a visible light camera; the first iris image may be an image of only the iris region, or an image including the iris region (for example, a human eye image).
  • the iris image can be acquired by the iris recognition device.
  • The electronic device may acquire the second iris image by using an infrared camera; the second iris image may be an image of only the iris region, or an image including the iris region (for example, a human eye image).
  • the iris image can be acquired by the iris recognition device.
  • The first iris image and the second iris image may both be from the same human eye, and the target may be a human eye or a person.
  • As shown in FIG. 1E, the left image of FIG. 1E is a visible light iris image taken by a visible light camera (corresponding to the first iris image above), and the right image of FIG. 1E is an infrared iris image taken by an infrared camera (corresponding to the second iris image above).
  • It can be seen that the visible light iris image contains more detailed information than the infrared iris image; both can be used, to some extent, for iris living body detection.
  • acquiring the second iris image by using the infrared camera may include the following steps:
  • In a specific implementation, the visible light camera can determine the region location information of the first iris image and send the region location information to the infrared camera; alternatively, the infrared camera can perform image recognition on the first iris image to obtain the region location information of the first iris image.
  • The infrared image can be captured by the infrared camera, and after the infrared image is obtained, the second iris image is determined from the infrared image according to the region location information. Since the infrared image is formed from thermal radiation and is relatively blurred, the second iris image can be determined more accurately in this way.
  • The first type feature set may be a fine feature set or a rough feature set, and likewise the second type feature set may be a fine feature set or a rough feature set.
  • For example, the first type feature set may be a fine feature set while the second type feature set is a rough feature set; both may be fine feature sets; both may be rough feature sets; or the first type feature set may be a rough feature set while the second type feature set is a fine feature set.
  • The above feature extraction can be implemented by using an algorithm such as the Harris corner detection algorithm, the scale-invariant feature transform (SIFT), the SUSAN corner detection algorithm, and the like, which are not described in detail herein.
  • The above fine feature set contains more detailed features than the rough feature set, and the feature extraction algorithm for extracting the fine feature set is more complex than that for the rough feature set.
  • For a rough feature set, feature extraction can be performed directly by using the Harris corner detection algorithm.
  • For a fine feature set, the image can first be multi-scale decomposed to obtain high-frequency component images, and then the Harris corner detection algorithm is used to perform feature extraction on the high-frequency component images.
  • In a specific implementation, a multi-scale decomposition algorithm may be used to perform multi-scale decomposition on the iris image to obtain one low-frequency component image and a plurality of high-frequency component images.
  • The multi-scale decomposition algorithm may include, but is not limited to: wavelet transform, Laplacian pyramid transform, contourlet transform (CT), non-subsampled contourlet transform (NSCT), shearlet transform, etc. Taking the contourlet as an example, the contourlet transform is used to perform multi-scale decomposition on the iris image to obtain a low-frequency component image and a plurality of high-frequency component images, where the sizes of the high-frequency component images may differ from one another.
  • Taking NSCT as an example, a low-frequency component image and a plurality of high-frequency component images can be obtained, and each of the plurality of high-frequency component images has the same size. The high-frequency components contain more of the image's details.
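A minimal sketch of the fine-vs-rough distinction, using a one-level Haar split as the multi-scale decomposition and an unsmoothed Harris response as the corner detector. Both are simplifications: a real implementation would use e.g. NSCT for the decomposition and a windowed (Gaussian-smoothed) Harris detector:

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar split: returns the low-frequency approximation
    and the three high-frequency detail sub-bands (H, V, D)."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    low  = (a + b + c + d) / 4.0
    h    = (a + b - c - d) / 4.0   # horizontal detail
    v    = (a - b + c - d) / 4.0   # vertical detail
    diag = (a - b - c + d) / 4.0   # diagonal detail
    return low, (h, v, diag)

def harris_response(img, k=0.04):
    """Minimal Harris corner response (no smoothing window; illustration only)."""
    gy, gx = np.gradient(img)
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)  # dummy iris image
low, (h, v, diag) = haar_decompose(img)
fine_features = harris_response(h)      # "fine": response on a detail band
rough_features = harris_response(img)   # "rough": response on the raw image
print(fine_features.shape, rough_features.shape)  # (32, 32) (64, 64)
```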
  • Optionally, in the foregoing method, image enhancement processing may further be performed.
  • In step 103, feature extraction may be performed on the first iris image after the image enhancement processing to obtain the first type feature set; or, in step 104, feature extraction may be performed on the second iris image after the image enhancement processing to obtain the second type feature set.
  • The image enhancement processing may include, but is not limited to: image denoising (e.g., wavelet-transform-based denoising), image restoration (e.g., Wiener filtering), and dark-vision enhancement algorithms (e.g., histogram equalization, grayscale stretching). After image enhancement processing of the iris image, the quality of the iris image can be improved to some extent.
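Of the enhancement techniques listed, histogram equalization is simple to sketch directly. The sketch below assumes an 8-bit grayscale image and a non-constant histogram:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image (one of the
    dark-vision enhancement techniques mentioned above)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast dummy image (values clustered around 40):
dark = np.clip(np.random.default_rng(1).normal(40, 10, (32, 32)),
               0, 255).astype(np.uint8)
enhanced = equalize_histogram(dark)
print(enhanced.min(), enhanced.max())   # 0 255
```

After equalization the gray levels span the full 0-255 range, which is the sense in which the enhancement "improves the quality" of a dark iris image.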
  • A11 Perform image quality evaluation on the first iris image to obtain an image quality evaluation value.
  • A12. Perform image enhancement processing on the first iris image when the image quality evaluation value is lower than a first preset quality threshold, and then perform feature extraction on the first iris image after the image enhancement processing to obtain the first type feature set.
  • The first preset quality threshold may be set by the user or default to a system value. The image quality of the first iris image may first be evaluated to obtain an image quality evaluation value, and whether the quality of the iris image is good or bad is judged by the image quality evaluation value.
  • When the image quality evaluation value is greater than or equal to the first preset quality threshold, the first iris image is considered to be of good quality; when the image quality evaluation value is smaller than the first preset quality threshold, the first iris image may be considered to be of poor quality, and image enhancement processing can then be performed on the first iris image.
  • At least one image quality evaluation index may be used to perform image quality evaluation on the first iris image, thereby obtaining an image quality evaluation value.
  • Multiple image quality evaluation indicators may be used, and each image quality evaluation indicator corresponds to one weight. Thus, each image quality evaluation indicator yields an evaluation result when performing image quality evaluation on the first iris image, and finally a weighted calculation over these results gives the final image quality evaluation value.
  • Image quality evaluation indicators may include, but are not limited to, mean, standard deviation, entropy, sharpness, signal to noise ratio, and the like.
  • Image quality can be evaluated by using 2 to 10 image quality evaluation indicators. The number of indicators and which indicators are selected depend on the specific implementation. The indicators should also be chosen in combination with the specific scene: the indicators suitable for a dark environment may differ from those suitable for a bright environment.
  • In a scenario where high evaluation accuracy is not required, a single image quality evaluation indicator may be used. For example, image quality may be evaluated by entropy: the larger the entropy, the higher the image quality; the smaller the entropy, the worse the image quality.
  • In a scenario where high evaluation accuracy is required, multiple image quality evaluation indicators may be used, and a weight may be set for each indicator. Each indicator yields an image quality evaluation value, and the final image quality evaluation value is obtained from these values and their corresponding weights. For example, take three image quality evaluation indicators A, B, and C, where the weight of A is a1, the weight of B is a2, and the weight of C is a3. When A, B, and C are used to evaluate the image quality of an image, the image quality evaluation value corresponding to A is b1, the value corresponding to B is b2, and the value corresponding to C is b3; the final image quality evaluation value is a1b1 + a2b2 + a3b3.
  • Generally, the larger the image quality evaluation value, the better the image quality.
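The weighted calculation above can be written out directly. The indicator values, weights, and threshold below are illustrative only:

```python
def image_quality_value(scores, weights):
    """Final image quality evaluation value: a1*b1 + a2*b2 + a3*b3."""
    return sum(a * b for a, b in zip(weights, scores))

# Three hypothetical indicators (e.g. entropy, sharpness, SNR), each
# normalized to [0, 1], with weights summing to 1:
b = [0.9, 0.7, 0.8]           # evaluation values b1, b2, b3
a = [0.5, 0.3, 0.2]           # weights a1, a2, a3
value = image_quality_value(b, a)
print(round(value, 2))        # 0.82
needs_enhancement = value < 0.75   # assumed preset quality threshold
print(needs_enhancement)           # False
```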
  • A21 Perform image quality evaluation on the second iris image to obtain an image quality evaluation value.
  • A22. Perform image enhancement processing on the second iris image when the image quality evaluation value is lower than a second preset quality threshold, and then perform feature extraction on the second iris image after the image enhancement processing to obtain the second type feature set.
  • the first type of feature set and the second type of feature set are respectively trained to obtain two training results, and according to the two training results, whether the target is a living body is determined.
  • The training result can be a probability value. For example, if the probability value exceeds 80%, the iris image can be considered to be from a living iris; otherwise, the iris image is considered to be from a non-living iris. The non-living iris can be one of the following: a 3D-printed iris, an iris in a photo, or the iris of a person without vital signs.
  • determining whether the target is a living body according to the first type of feature set and the second type of feature set may include the following steps:
  • the first type of feature set is trained by using a preset first living body detection classifier to obtain a first training result
  • the second type of feature set is trained by using a preset second living body detection classifier to obtain a second training result
  • The preset first living body detection classifier is a classifier for performing living body detection on visible light iris images, and the preset second living body detection classifier is a classifier for performing living body detection on infrared iris images.
  • The preset first living body detection classifier or the preset second living body detection classifier may include, but is not limited to: a support vector machine (SVM), a genetic algorithm classifier, a neural network algorithm classifier, a cascade classifier (such as genetic algorithm + SVM), and so on.
  • the first class target classifier and the second class target classifier are used as a living body detection classifier.
  • When determining the preset first living body detection classifier, the living body iris images in the above steps C1-C7 are visible light iris images, and the non-living body iris images are visible light images; when determining the preset second living body detection classifier, the living body iris images in the above steps C1-C7 are infrared iris images, and the non-living body iris images are infrared images.
  • the above X and Y can be set by the user, and the larger the specific number, the better the classification effect of the classifier.
  • The positive sample set may include X positive samples, each positive sample being a living body iris image, and the negative sample set may include Y negative samples, each negative sample being a non-living body iris image. In addition, the first designated classifier and the second designated classifier may be the same classifier or different classifiers; either may include, but is not limited to: a support vector machine, a genetic algorithm classifier, a neural network algorithm classifier, a cascade classifier (such as genetic algorithm + SVM), and so on.
  • Optionally, the electronic device is provided with an environment sensor, and the current environment parameter is obtained by the environment sensor. In the above step 53, determining whether the target is a living body according to the first training result and the second training result may include the following steps:
  • The environment sensor may be at least one of the following: an ambient light sensor (for detecting ambient brightness), an ambient color temperature sensor (for detecting ambient color temperature), a temperature sensor (for detecting ambient temperature), a global positioning system (for detecting geographic location), a humidity sensor (for detecting ambient humidity), a magnetic field detection sensor (for detecting magnetic field strength), and so on.
  • the above environmental parameters may include, but are not limited to, ambient brightness, ambient color temperature, ambient temperature, ambient humidity, geographic location, magnetic field strength, and the like.
  • The correspondence between the environment parameters and the first weight, and the correspondence between the environment parameters and the second weight, may be preset. After the current environment parameter is determined, the first weight corresponding to the first training result and the second weight corresponding to the second training result are determined according to the current environment parameter. Further, the target training result may be determined according to the first weight, the second weight, the first training result, and the second training result.
  • When the target training result meets the preset condition, the target is determined to be a living body. The preset condition may be that the target training result is greater than a preset detection threshold, where the preset detection threshold may be set by the user or default to a system value; or the preset condition may be that the target training result falls within a preset range, where the preset range may be set by the user or default to a system value.
  • In general, the accuracy of iris living body detection differs in different environments; therefore, different weights can be used according to different environments.
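A sketch of the environment-weighted fusion, assuming ambient brightness as the environment parameter and a hard-coded weight lookup. The brightness cutoff, the weight values, and the detection threshold are all assumptions; the patent only states that the weights come from a preset correspondence with the environment parameters:

```python
def target_training_result(r1, r2, ambient_brightness):
    """Combine the two training results with environment-dependent weights.
    The brightness cutoff and weight values below are assumed."""
    if ambient_brightness < 50:    # dim scene: weight the infrared result more
        w1, w2 = 0.3, 0.7
    else:                          # bright scene: weight the visible result more
        w1, w2 = 0.7, 0.3
    return w1 * r1 + w2 * r2

result = target_training_result(0.9, 0.6, ambient_brightness=200)
is_living = result > 0.5           # assumed preset detection threshold
print(round(result, 2), is_living) # 0.81 True
```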
  • the infrared camera and the visible light camera have the same viewing angle range, so that the captured images are in the same scene.
  • In the embodiments of the present application, the first iris image is acquired by the visible light camera, and the second iris image is acquired by the infrared camera, wherein the first iris image and the second iris image are from the same target. Feature extraction is performed on the first iris image to obtain a first type feature set, feature extraction is performed on the second iris image to obtain a second type feature set, and whether the target is a living body is determined according to the first type feature set and the second type feature set. It can be seen that the visible light camera and the infrared camera each acquire an iris image, features are extracted from both iris images, and whether the iris is from a living body is judged according to both sets of features; the living body can thus be detected in multiple dimensions, which can improve the accuracy of living body detection.
  • FIG. 2 is a schematic flowchart of an embodiment of an iris living body detection method according to an embodiment of the present application. The method is applied to an electronic device including a visible light camera, an infrared camera, and an application processor AP; for a schematic diagram and structure of the electronic device, refer to FIG. 1A-FIG. 1C.
  • the iris living body detection method described in this embodiment includes the following steps:
  • The purpose of registering the visible light camera and the infrared camera is to make the visible light camera and the infrared camera have the same viewing angle range, so that the first iris image and the second iris image obtained subsequently can completely overlap and the effect of living body detection is better.
  • In the embodiments of the present application, the visible light camera and the infrared camera are first registered so that the viewing angle ranges of the two are the same; the first iris image is acquired by the visible light camera, and the second iris image is acquired by the infrared camera, wherein the first iris image and the second iris image are from the same target. Feature extraction is performed on the first iris image to obtain a first type feature set, and feature extraction is performed on the second iris image to obtain a second type feature set. Whether the target is a living body is determined according to the first type feature set and the second type feature set. In this way, the iris images are acquired by the visible light camera and the infrared camera respectively, features are extracted from both iris images, and whether the iris is from a living body can be judged according to both sets of features; living body detection of the iris in multiple dimensions can improve the accuracy of living body detection.
  • FIG. 3 shows an electronic device according to an embodiment of the present application, including at least an application processor (AP) and a memory. The electronic device may further include an iris recognition apparatus that includes an infrared camera, an infrared fill light, and a visible-light camera, as well as one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing the following steps: acquiring a first iris image with the visible-light camera; acquiring a second iris image with the infrared camera, wherein the first iris image and the second iris image come from the same target; performing feature extraction on the first iris image to obtain a first feature set; performing feature extraction on the second iris image to obtain a second feature set; and determining, from the first feature set and the second feature set, whether the target is a living body.
  • in one possible example, in determining from the first feature set and the second feature set whether the target is a living body, the programs include instructions for performing the following steps: training the first feature set with a preset first liveness detection classifier to obtain a first training result; training the second feature set with a preset second liveness detection classifier to obtain a second training result; and determining, from the first training result and the second training result, whether the target is a living body.
  • in one possible example, the electronic device is provided with an environment sensor, and the programs include instructions for controlling the environment sensor to acquire a current environment parameter; in determining from the first training result and the second training result whether the target is a living body, the programs include instructions for performing the following steps: determining, from the current environment parameter, a first weight for the first training result and a second weight for the second training result; determining a target training result from the first weight, the second weight, the first training result, and the second training result; and confirming that the target is a living body when the target training result meets a preset condition.
  • in one possible example, in acquiring the second iris image with the infrared camera, the programs include instructions for performing the following steps: determining region location information of the first iris image; acquiring an infrared image captured by the infrared camera; and determining the second iris image from the infrared image based on the region location information.
  • in one possible example, the infrared camera and the visible-light camera have the same field of view.
  • in one possible example, the programs further include instructions for performing image enhancement on the first iris image; in performing feature extraction on the first iris image, the programs include instructions for performing feature extraction on the first iris image after the image enhancement.
  • FIG. 4A is a schematic structural diagram of an iris liveness detection apparatus according to this embodiment. The iris liveness detection apparatus is applied to an electronic device and includes a first acquiring unit 401, a second acquiring unit 402, an extracting unit 403, and a determining unit 404, wherein
  • the first acquiring unit 401 is configured to acquire a first iris image with a visible-light camera;
  • the second acquiring unit 402 is configured to acquire a second iris image with an infrared camera, wherein the first iris image and the second iris image come from the same target;
  • the extracting unit 403 is configured to perform feature extraction on the first iris image to obtain a first feature set;
  • the extracting unit 403 is further configured to perform feature extraction on the second iris image to obtain a second feature set; and
  • the determining unit 404 is configured to determine, from the first feature set and the second feature set, whether the target is a living body.
  • FIG. 4B shows the detailed structure of the determining unit 404 of the iris liveness detection apparatus described in FIG. 4A. The determining unit 404 includes a training module 4041 and a determining module 4042, as follows:
  • the training module 4041 is configured to train the first feature set with a preset first liveness detection classifier to obtain a first training result;
  • the training module 4041 is further configured to train the second feature set with a preset second liveness detection classifier to obtain a second training result; and
  • the determining module 4042 is configured to determine, from the first training result and the second training result, whether the target is a living body.
  • FIG. 4C shows the detailed structure of the determining module 4042 of the determining unit 404 described in FIG. 4B. The determining module 4042 may include a first obtaining module 510 and a first determining module 520, as follows:
  • the first obtaining module 510 is configured to acquire a current environment parameter; and
  • the first determining module 520 is configured to determine, from the current environment parameter, a first weight for the first training result and a second weight for the second training result; determine a target training result from the first weight, the second weight, the first training result, and the second training result; and confirm that the target is a living body when the target training result meets a preset condition.
  • FIG. 4D shows the detailed structure of the second acquiring unit 402 of the iris liveness detection apparatus described in FIG. 4A. The second acquiring unit 402 may include a second determining module 4021 and a second obtaining module 4022, as follows:
  • the second determining module 4021 is configured to determine region location information of the first iris image;
  • the second obtaining module 4022 is configured to acquire an infrared image captured by an infrared camera;
  • the second determining module 4021 is further configured to determine the second iris image from the infrared image based on the region location information; and
  • the infrared camera and the visible-light camera have the same field of view.
  • FIG. 4E shows the iris liveness detection apparatus described in FIG. 4A which, compared with FIG. 4A, may further include an image enhancement unit 405 configured to perform image enhancement on the first iris image; in performing feature extraction on the first iris image, the extracting unit 403 is specifically configured to:
  • perform feature extraction on the first iris image after the image enhancement.
  • the iris liveness detection apparatus described in this embodiment of the present application can acquire a first iris image with a visible-light camera and a second iris image with an infrared camera, the two images coming from the same target; perform feature extraction on the first iris image to obtain a first feature set and on the second iris image to obtain a second feature set; and determine from the two feature sets whether the target is a living body. The two cameras thus each acquire an iris image, features are extracted from both, and whether the iris comes from a living body is judged from the two feature sets, so the iris can undergo liveness detection in multiple dimensions, which improves the accuracy of liveness detection.
  • an embodiment of the present application further provides another electronic device. As shown in FIG. 5, for convenience of description only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application.
  • the electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point-of-sale) terminal, an in-vehicle computer, and the like; a mobile phone is taken as the example below:
  • FIG. 5 is a block diagram of part of the structure of a mobile phone related to the electronic device provided by an embodiment of the present application.
  • the mobile phone includes a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor (AP) 980, a power supply 990, and other components.
  • the input unit 930 may be configured to receive entered digit or character information and to generate key signal input related to user settings and function control of the phone.
  • the input unit 930 may include a touch display screen 933, an iris recognition apparatus 931, and other input devices 932.
  • for the specific structure of the iris recognition apparatus 931, refer to FIG. 1A to FIG. 1C.
  • the other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (such as volume control keys and the power key), a trackball, a mouse, a joystick, and the like.
  • the iris recognition apparatus 931 is configured to acquire an iris image.
  • the AP 980 is configured to perform the following steps: controlling the visible-light camera to acquire a first iris image; controlling the infrared camera to acquire a second iris image, wherein the first iris image and the second iris image come from the same target; performing feature extraction on the first iris image to obtain a first feature set; performing feature extraction on the second iris image to obtain a second feature set; and determining, from the first feature set and the second feature set, whether the target is a living body.
  • the AP 980 is the control center of the phone. It connects all parts of the phone through various interfaces and lines, and performs the phone's functions and processes data by running or executing software programs and/or modules stored in the memory 920 and invoking data stored in the memory 920, thereby monitoring the phone as a whole.
  • the AP 980 may include one or more processing units; preferably, the AP 980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the AP 980.
  • the memory 920 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
  • the RF circuit 910 may be used to receive and send information.
  • the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and the like.
  • the RF circuit 910 may also communicate with networks and other devices through wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to global system for mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), and the like.
  • the phone may further include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the touch display screen according to the ambient light, and the proximity sensor can switch off the touch display screen and/or its backlight when the phone is moved to the ear.
  • as one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, at rest, the magnitude and direction of gravity; it can be used for applications that recognize the phone's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor may also be configured on the phone, and are not described here.
  • the audio circuit 960, a speaker 961, and a microphone 962 can provide an audio interface between the user and the phone.
  • the audio circuit 960 can convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data; the audio data is then processed by the AP 980 and either sent via the RF circuit 910 to, for example, another phone, or output to the memory 920 for further processing.
  • WiFi is a short-range wireless transmission technology.
  • through the WiFi module 970, the phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access.
  • although FIG. 5 shows the WiFi module 970, it can be understood that it is not an essential part of the phone and may be omitted as needed without changing the essence of the invention.
  • the phone also includes a power supply 990 (such as a battery) that powers the components.
  • the power supply may be logically connected to the AP 980 through a power management system, which implements functions such as charging, discharging, and power-consumption management.
  • the phone may further include an infrared camera, a visible-light camera, a Bluetooth module, and the like, which are not described here.
  • the method flow of each step in the foregoing embodiments can be implemented based on the structure of this phone.
  • the function of each unit in the foregoing embodiments can be implemented based on the structure of this phone.
  • an embodiment of the present application further provides a computer storage medium storing a computer program that causes a computer to perform some or all of the steps of any iris liveness detection method described in the foregoing method embodiments.
  • an embodiment of the present application further provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any iris liveness detection method described in the foregoing method embodiments.
  • the disclosed apparatus may be implemented in other ways.
  • the apparatus embodiments described above are merely illustrative.
  • the division into units is only a logical functional division; there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units, and some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in the form of hardware or in the form of a software program module.
  • if implemented in the form of a software program module and sold or used as an independent product, the integrated unit may be stored in a computer-readable memory.
  • the instructions in such a software product cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

Abstract

An iris liveness detection method and related products. The method comprises: acquiring a first iris image with a visible-light camera (101); acquiring a second iris image with an infrared camera, wherein the first iris image and the second iris image come from the same target (102); performing feature extraction on the first iris image to obtain a first feature set (103); performing feature extraction on the second iris image to obtain a second feature set (104); and determining, from the first feature set and the second feature set, whether the target is a living body (105). The method can acquire iris images with a visible-light camera and an infrared camera separately, extract features from the two iris images, and judge from the two feature sets whether the iris comes from a living body, enabling liveness detection of the iris in multiple dimensions and improving the accuracy of liveness detection.

Description

Iris Liveness Detection Method and Related Products
This application claims priority to the earlier application No. 201710576785.0, entitled "Iris liveness detection method and related products" and filed on July 14, 2017; the contents of that earlier application are incorporated herein by reference.
Technical Field
This application relates to the technical field of electronic devices, and in particular to an iris liveness detection method and related products.
Background
With the widespread adoption of electronic devices (mobile phones, tablet computers, and so on), the applications they can support keep growing and their functions keep becoming more powerful. Electronic devices are developing toward diversity and personalization and have become indispensable electronic goods in users' lives.
At present, iris recognition is increasingly favored by electronic device manufacturers, and the security of iris recognition is one of their major concerns. For security reasons, liveness detection is usually performed on the iris before iris recognition, but the accuracy of current iris liveness detection is not high.
Summary
Embodiments of the present application provide an iris liveness detection method and related products, with a view to improving the accuracy of iris liveness detection.
In a first aspect, an embodiment of the present application provides an iris liveness detection method, the method comprising:
acquiring a first iris image with a visible-light camera;
acquiring a second iris image with an infrared camera, wherein the first iris image and the second iris image come from the same target;
performing feature extraction on the first iris image to obtain a first feature set;
performing feature extraction on the second iris image to obtain a second feature set; and
determining, from the first feature set and the second feature set, whether the target is a living body.
In a second aspect, an embodiment of the present application provides an electronic device comprising a visible-light camera, an infrared camera, and an application processor (AP), wherein
the visible-light camera is configured to acquire a first iris image;
the infrared camera is configured to acquire a second iris image, wherein the first iris image and the second iris image come from the same target;
the AP is configured to perform feature extraction on the first iris image to obtain a first feature set; and
the AP is further configured to perform feature extraction on the second iris image to obtain a second feature set, and to determine, from the first feature set and the second feature set, whether the target is a living body.
In a third aspect, an embodiment of the present application provides an iris liveness detection apparatus comprising:
a first acquiring unit configured to acquire a first iris image with a visible-light camera;
a second acquiring unit configured to acquire a second iris image with an infrared camera, wherein the first iris image and the second iris image come from the same target;
an extracting unit configured to perform feature extraction on the first iris image to obtain a first feature set,
the extracting unit being further configured to perform feature extraction on the second iris image to obtain a second feature set; and
a determining unit configured to determine, from the first feature set and the second feature set, whether the target is a living body.
In a fourth aspect, an embodiment of the present application provides an electronic device comprising a visible-light camera, an infrared camera, an application processor (AP), and a memory, as well as one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing some or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
Implementing the embodiments of the present application has the following beneficial effects:
It can be seen that, in the embodiments of the present application, a first iris image is acquired with a visible-light camera and a second iris image with an infrared camera, the two images coming from the same target; feature extraction is performed on the first iris image to obtain a first feature set and on the second iris image to obtain a second feature set; and whether the target is a living body is determined from the first feature set and the second feature set. The two cameras thus each acquire an iris image, features are extracted from both images, and whether the iris comes from a living body is judged from the two feature sets, so the iris can undergo liveness detection in multiple dimensions, which improves the accuracy of liveness detection.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1A is a schematic architecture diagram of an example smartphone according to an embodiment of the present application;
FIG. 1B is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 1C is another schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 1D is a schematic flowchart of an iris liveness detection method according to an embodiment of the present application;
FIG. 1E is a comparison of visible-light and infrared iris images according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of another iris liveness detection method according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4A is a schematic structural diagram of an iris liveness detection apparatus according to an embodiment of the present application;
FIG. 4B is a schematic structural diagram of the determining unit of the iris liveness detection apparatus described in FIG. 4A;
FIG. 4C is a schematic structural diagram of the determining module of the determining unit described in FIG. 4B;
FIG. 4D is a schematic structural diagram of the second acquiring unit of the iris liveness detection apparatus described in FIG. 4A;
FIG. 4E is a schematic structural diagram of another iris liveness detection apparatus according to an embodiment of the present application; and
FIG. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
The terms "first", "second", and so on in the specification, the claims, and the drawings are used to distinguish different objects rather than to describe a particular order. Moreover, the terms "comprise" and "have", and any variants of them, are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units but optionally further comprises unlisted steps or units, or optionally further comprises other steps or units inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to independent or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The electronic device involved in the embodiments of the present application may include various handheld devices with wireless communication capability, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. For convenience of description, the devices mentioned above are collectively referred to as electronic devices. The embodiments of the present application are described in detail below.
It should be noted that the electronic device described in the embodiments of the present application may be provided with an iris recognition apparatus that integrates a visible-light camera and an infrared camera, so that it can acquire both visible-light iris images and infrared iris images. The visible-light camera and the infrared camera may be registered so that their fields of view coincide exactly; of course, they may also be unregistered, with only partially overlapping fields of view.
The embodiments of the present application are described in detail below. In the example smartphone 100 shown in FIG. 1A, the iris recognition apparatus of the smartphone 100 may include an infrared fill light 21, an infrared camera 22, and a visible-light camera 23, where the visible-light camera 23 may be a front-facing camera. During iris liveness detection, light from the infrared fill light 21 strikes the iris and is reflected back to the infrared camera 22; the iris recognition apparatus collects an infrared iris image through the infrared camera 22 and a visible-light iris image through the visible-light camera 23, and performs liveness detection using the infrared iris image and the visible-light iris image. Of course, the visible-light camera 23 can also serve on its own as a front-facing camera for taking selfies. Details are given below.
Referring to FIG. 1B, FIG. 1B is a schematic structural diagram of an electronic device 100. The electronic device 100 includes an application processor (AP) 110 and an iris recognition apparatus 130, where the iris recognition apparatus 130 may integrate an infrared camera, a visible-light camera, and an infrared fill light, and the AP 110 is connected to the iris recognition apparatus 130 through a bus 150. Further, referring to FIG. 1C, FIG. 1C is a variant of the electronic device 100 described in FIG. 1B; compared with FIG. 1B, FIG. 1C further includes an environment sensor 160.
In some possible embodiments, the visible-light camera 23 is configured to acquire a first iris image and send the first iris image to the AP 110;
the infrared camera 22 is configured to acquire a second iris image and send the second iris image to the AP 110, wherein the first iris image and the second iris image come from the same target;
the AP 110 is configured to perform feature extraction on the first iris image to obtain a first feature set; and
the AP 110 is further configured to perform feature extraction on the second iris image to obtain a second feature set, and to determine, from the first feature set and the second feature set, whether the target is a living body.
In some possible embodiments, in determining from the first feature set and the second feature set whether the target is a living body, the AP 110 is specifically configured to:
train the first feature set with a preset first liveness detection classifier to obtain a first training result; train the second feature set with a preset second liveness detection classifier to obtain a second training result; and determine, from the first training result and the second training result, whether the target is a living body.
In some possible embodiments, the electronic device is provided with an environment sensor;
the environment sensor 160 is configured to acquire a current environment parameter and send the current environment parameter to the AP 110; and
in determining from the first training result and the second training result whether the target is a living body, the AP 110 is specifically configured to:
determine, from the current environment parameter, a first weight for the first training result and a second weight for the second training result; determine a target training result from the first weight, the second weight, the first training result, and the second training result; and confirm that the target is a living body when the target training result meets a preset condition.
In some possible embodiments, in acquiring the second iris image, the infrared camera 22 is specifically configured to:
determine region location information of the first iris image; acquire an infrared image captured by the infrared camera; and determine the second iris image from the infrared image based on the region location information.
In some possible embodiments, the infrared camera 22 and the visible-light camera 23 have the same field of view.
In some possible embodiments, the AP 110 is further specifically configured to:
perform image enhancement on the first iris image; and,
in performing feature extraction on the first iris image, the AP 110 is specifically configured to:
perform feature extraction on the first iris image after the image enhancement.
Consistent with the above, referring to FIG. 1D, FIG. 1D is a schematic flowchart of an embodiment of an iris liveness detection method according to an embodiment of the present application. The method is applied to an electronic device whose schematic and structural diagrams are shown in FIG. 1A to FIG. 1C. The iris liveness detection method described in this embodiment includes the following steps:
101. Acquire a first iris image with a visible-light camera.
In the embodiments of the present application, the electronic device may acquire the first iris image with the visible-light camera. The first iris image may be an image of the iris region alone or an image containing the iris region (for example, an image of a human eye). For example, the iris image may be acquired through the iris recognition apparatus while the user is using the electronic device.
102. Acquire a second iris image with an infrared camera, wherein the first iris image and the second iris image come from the same target.
The electronic device may acquire the second iris image with the infrared camera. The second iris image may likewise be an image of the iris region alone or an image containing the iris region (for example, an image of a human eye), acquired through the iris recognition apparatus while the user is using the electronic device. The first iris image and the second iris image may come from the same human eye, and the target may be a human eye or a person. For example, both the first iris image and the second iris image may come from the same human eye.
For example, as shown in FIG. 1E, the left image of FIG. 1E is a visible-light iris image captured by the visible-light camera (corresponding to the first iris image above), and the right image is an infrared iris image captured by the infrared camera (corresponding to the second iris image above). It can be seen that the visible-light iris image contains more detail than the infrared iris image; both can be used for iris liveness detection to some extent.
Optionally, in step 102 above, acquiring the second iris image with the infrared camera may include the following steps:
21. Determine region location information of the first iris image;
22. Acquire an infrared image captured by the infrared camera;
23. Determine the second iris image from the infrared image based on the region location information.
The visible-light camera may determine the region location information of the first iris image and send it to the infrared camera; alternatively, the infrared camera may perform image recognition on the first iris image to obtain the region location information. The infrared camera then captures an infrared image, and after the infrared image is obtained, the second iris image is determined from it based on the region location information. Since an infrared image is formed from temperature and is relatively blurry, this approach allows the second iris image to be located accurately.
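The region-based cropping in steps 21 to 23 can be sketched in a few lines. This is a minimal illustration, assuming the region location information arrives as an (x, y, w, h) pixel box valid in both cameras' coordinates (the cameras having been registered); the function name and box format are hypothetical, not from the application.

```python
import numpy as np

def crop_iris_region(infrared_image, region):
    """Cut the iris patch out of a full infrared frame.

    `region` is assumed to be (x, y, w, h) in pixel coordinates shared by
    both cameras, as reported for the first (visible-light) iris image.
    """
    x, y, w, h = region
    h_img, w_img = infrared_image.shape[:2]
    # Clamp to the frame so a slightly-off region estimate never raises.
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(w_img, x + w), min(h_img, y + h)
    return infrared_image[y0:y1, x0:x1]

# Toy 120x160 infrared frame, iris reported at (60, 40) with size 32x24.
frame = np.zeros((120, 160), dtype=np.uint8)
patch = crop_iris_region(frame, (60, 40, 32, 24))
```

The clamp matters in practice: if the eye sits near the frame edge, a naive slice with negative indices would silently wrap around.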
Optionally, steps 101 and 102 above may be performed in parallel.
103. Perform feature extraction on the first iris image to obtain a first feature set.
104. Perform feature extraction on the second iris image to obtain a second feature set.
The first feature set may be a fine feature set or a coarse feature set, and likewise for the second feature set; any combination of fine and coarse is possible. The feature extraction may be implemented with algorithms such as the Harris corner detection algorithm, the scale-invariant feature transform (SIFT), or the SUSAN corner detection algorithm, which are not detailed here.
A fine feature set contains finer features than a coarse feature set, and the extraction algorithm for fine features is more complex than that for coarse features. For example, a coarse feature set may be obtained by running the Harris corner algorithm directly on the image, while a fine feature set may be obtained by first performing multi-scale decomposition on the image to obtain a high-frequency component image and then running the Harris corner detection algorithm on that high-frequency component image.
A multi-scale decomposition algorithm may be used to decompose the iris image into a low-frequency component image and multiple high-frequency component images. The multi-scale decomposition algorithm may include, but is not limited to, the wavelet transform, the Laplacian transform, the contourlet transform (CT), the non-subsampled contourlet transform (NSCT), and the shearlet transform. Taking the contourlet transform as an example, multi-scale decomposition of the iris image yields one low-frequency component image and multiple high-frequency component images of differing sizes; taking NSCT as an example, the decomposition yields one low-frequency component image and multiple high-frequency component images of identical size. The high-frequency components contain much of the image's detail information.
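The coarse-versus-fine distinction can be made concrete with a tiny multi-scale step. The sketch below implements one level of a 2-D Haar wavelet transform in NumPy (a much simpler transform than the contourlet/NSCT options listed above), producing the high-frequency sub-bands on which a corner detector would then be run; it is an illustration of the idea, not the application's decomposition.

```python
import numpy as np

def haar_decompose(img):
    """One level of a 2-D Haar wavelet transform.

    Returns the low-frequency approximation LL and the three
    high-frequency detail sub-bands (LH, HL, HH); the detail sub-bands
    carry the fine texture. Height and width are assumed even, and an
    averaging normalization is used.
    """
    img = img.astype(np.float64)
    a = img[0::2, 0::2]   # top-left of each 2x2 block
    b = img[0::2, 1::2]   # top-right
    c = img[1::2, 0::2]   # bottom-left
    d = img[1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 4.0   # coarse approximation
    lh = (a + b - c - d) / 4.0   # horizontal details
    hl = (a - b + c - d) / 4.0   # vertical details
    hh = (a - b - c + d) / 4.0   # diagonal details
    return ll, (lh, hl, hh)

# A vertical step edge inside a 2x2 block: the detail energy should
# appear only in the vertical-detail band HL.
img = np.zeros((8, 8))
img[:, 5:] = 1.0
ll, (lh, hl, hh) = haar_decompose(img)
```

Running Harris corners on `lh`/`hl`/`hh` instead of the raw image is one way to realize the "fine feature set" described above.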
Optionally, before step 103 or step 104, the method may further include the following step:
performing image enhancement on the first iris image, or performing image enhancement on the second iris image; then, in step 103, feature extraction may be performed on the enhanced first iris image to obtain the first feature set, or, in step 104, feature extraction may be performed on the enhanced second iris image to obtain the second feature set.
The image enhancement may include, but is not limited to, image denoising (for example, wavelet-based denoising), image restoration (for example, Wiener filtering), and dark-scene enhancement algorithms (for example, histogram equalization or gray-level stretching). After image enhancement, the quality of the iris image can be improved to some extent.
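Of the enhancement options listed, histogram equalization is easy to show end to end. The sketch below is a plain global equalization in NumPy, written as a minimal illustration; a production pipeline might combine it with the denoising or restoration steps mentioned above.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Spreads the occupied gray levels over the full 0..255 range so that
    low-contrast texture (e.g. a dimly lit iris) becomes easier to
    extract features from.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map each gray level through the normalized cumulative distribution.
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# A dim image occupying only gray levels 50..99 gets stretched to 0..255.
dim = np.tile(np.arange(50, 100, dtype=np.uint8), (50, 1))
bright = equalize_histogram(dim)
```

Note the function assumes the image is not perfectly uniform (a constant image would make the denominator zero).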
Optionally, before step 103 is performed, the method may further include the following steps:
A11. Perform image quality evaluation on the first iris image to obtain an image quality evaluation value;
A12. When the image quality evaluation value is below a first preset quality threshold, perform image enhancement on the first iris image, and then perform feature extraction on the enhanced first iris image to obtain the first feature set.
The preset quality threshold may be set by the user or default to a system value. Image quality evaluation is first performed on the first iris image to obtain an image quality evaluation value, which indicates whether the image quality is good or poor: when the value is greater than or equal to the first preset quality threshold, the first iris image may be considered of good quality; when it is below the threshold, the image may be considered of poor quality, and image enhancement may then be applied to it.
In step A11 above, at least one image quality evaluation index may be used to evaluate the first iris image and obtain the image quality evaluation value.
Multiple image quality evaluation indexes may be used, each with a corresponding weight; each index then yields one evaluation result, and a weighted sum of the results gives the final image quality evaluation value. The indexes may include, but are not limited to, mean, standard deviation, entropy, sharpness, and signal-to-noise ratio.
It should be noted that evaluating image quality with a single index has certain limitations, so multiple indexes may be used. Of course, more indexes are not always better: the more indexes, the higher the computational complexity of the evaluation, and the result is not necessarily better. Where high evaluation accuracy is required, 2 to 10 indexes may be used. The number of indexes and which ones to pick depend on the specific implementation and should also suit the scene: the indexes chosen for evaluation in a dark environment may differ from those chosen in a bright environment.
Optionally, where high evaluation accuracy is not required, a single index may be used; for example, when entropy is used to evaluate an image, a larger entropy may be taken to indicate better image quality and a smaller entropy poorer quality.
Optionally, where high evaluation accuracy is required, multiple indexes may be used. A weight is set for each index, each index yields an evaluation value, and the final value is obtained from the individual values and their weights. For example, with three indexes A, B, and C, having weights a1, a2, and a3 and evaluation values b1, b2, and b3 respectively, the final image quality evaluation value = a1*b1 + a2*b2 + a3*b3. In general, a larger image quality evaluation value indicates better image quality.
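The weighted evaluation a1*b1 + a2*b2 + a3*b3 can be sketched directly. The three indices below (entropy plus simple contrast and brightness proxies) and their weights are illustrative choices for the sketch, not values given in the application.

```python
import numpy as np

def entropy(img):
    """Shannon entropy of the gray-level histogram (one quality index)."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def quality_score(img, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of three quality indices, as in
    a1*b1 + a2*b2 + a3*b3. Each index is scaled to roughly [0, 1]."""
    a1, a2, a3 = weights
    b1 = entropy(img) / 8.0      # entropy (8 bits max for 256 levels)
    b2 = img.std() / 128.0       # contrast proxy
    b3 = img.mean() / 255.0      # brightness proxy
    return a1 * b1 + a2 * b2 + a3 * b3

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # rich histogram
flat = np.full((64, 64), 128, dtype=np.uint8)                # no texture
```

A textureless image scores low on both entropy and contrast, which is exactly the case where the enhancement step of A12 would be triggered.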
Optionally, before step 104 is performed, the method may further include the following steps:
A21. Perform image quality evaluation on the second iris image to obtain an image quality evaluation value;
A22. When the image quality evaluation value is below a second preset quality threshold, perform image enhancement on the second iris image, and then perform feature extraction on the enhanced second iris image to obtain the second feature set.
For details of steps A21 and A22, refer to the description of steps A11 and A12 above.
105. Determine, from the first feature set and the second feature set, whether the target is a living body.
The first feature set and the second feature set may each be trained to obtain two training results, from which it is determined whether the target is a living body. A training result may be a probability value; for example, at a probability of 80% the iris image may be considered to come from a living iris, and below that from a non-living iris, where the non-living iris may be one of the following: a 3D-printed iris, an iris in a photograph, or the iris of a person without vital signs.
Optionally, in step 105 above, determining from the first feature set and the second feature set whether the target is a living body may include the following steps:
51. Train the first feature set with a preset first liveness detection classifier to obtain a first training result;
52. Train the second feature set with a preset second liveness detection classifier to obtain a second training result;
53. Determine, from the first training result and the second training result, whether the target is a living body.
The preset first liveness detection classifier is a classifier that performs liveness detection on visible-light iris images, and the preset second liveness detection classifier is a classifier that performs liveness detection on infrared iris images. Either preset classifier may include, but is not limited to, a support vector machine (SVM), a genetic-algorithm classifier, a neural-network classifier, or a cascade classifier (for example, genetic algorithm + SVM). Both can be built by the following steps C1 to C7:
C1. Obtain a positive sample set containing X living-iris images, X being a positive integer;
C2. Obtain a negative sample set containing Y non-living-iris images, Y being a positive integer;
C3. Perform feature extraction on the positive sample set to obtain X groups of features;
C4. Perform feature extraction on the negative sample set to obtain Y groups of features;
C5. Train a first designated classifier with the X groups of features to obtain a first-class target classifier;
C6. Train a second designated classifier with the Y groups of features to obtain a second-class target classifier;
C7. Use the first-class target classifier and the second-class target classifier as the liveness detection classifier.
When the preset first liveness detection classifier is being built, the living-iris images in steps C1 to C7 are visible-light iris images and the non-living-iris images are visible-light images; when the preset second liveness detection classifier is being built, the living-iris images are infrared iris images and the non-living-iris images are infrared images. Both X and Y may be set by the user, and the larger they are, the better the classifier performs. The feature extraction in C3 and C4 may follow the description above. The positive sample set may contain X positive samples, each a living-iris image, and the negative sample set may contain Y negative samples, each a non-living-iris image. The first designated classifier and the second designated classifier may be the same classifier or different ones, and either may include, but is not limited to, a support vector machine, a genetic-algorithm classifier, a neural-network classifier, or a cascade classifier (for example, genetic algorithm + SVM).
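Steps C1 to C7 boil down to: collect positive and negative feature sets, then fit a classifier that scores liveness. The sketch below uses a nearest-centroid rule purely as a dependency-free stand-in for the SVM, genetic-algorithm, or neural-network classifiers named above; the class name and the score convention are assumptions made for the illustration.

```python
import numpy as np

class CentroidClassifier:
    """Minimal stand-in for the liveness classifier of steps C1-C7.

    `fit` takes the positive (live) and negative (non-live) feature
    groups; `score` returns a pseudo-probability of "live" in [0, 1]
    based on distances to the two class centroids.
    """
    def fit(self, live_feats, fake_feats):
        self.live_c = np.mean(live_feats, axis=0)
        self.fake_c = np.mean(fake_feats, axis=0)
        return self

    def score(self, feat):
        d_live = np.linalg.norm(feat - self.live_c)
        d_fake = np.linalg.norm(feat - self.fake_c)
        return d_fake / (d_live + d_fake + 1e-12)

# C1-C4: toy feature vectors standing in for X live and Y non-live samples.
rng = np.random.default_rng(1)
live = rng.normal(loc=1.0, scale=0.2, size=(20, 8))
fake = rng.normal(loc=-1.0, scale=0.2, size=(20, 8))
# C5-C7: one such classifier would be trained per image modality.
clf = CentroidClassifier().fit(live, fake)
```

In the scheme above, two of these would be trained: one on visible-light features (classifier 1) and one on infrared features (classifier 2).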
Optionally, the electronic device is provided with an environment sensor, through which a current environment parameter is acquired. In step 53 above, determining from the first training result and the second training result whether the target is a living body may include the following steps:
531. Determine, from the current environment parameter, a first weight for the first training result and a second weight for the second training result;
532. Determine a target training result from the first weight, the second weight, the first training result, and the second training result, and confirm that the target is a living body when the target training result meets a preset condition.
The environment sensor may be at least one of the following: an ambient light sensor (for detecting ambient brightness), an ambient color-temperature sensor (for detecting ambient color temperature), a temperature sensor (for detecting ambient temperature), a global positioning system (for detecting geographic location), a humidity sensor (for detecting ambient humidity), a magnetic-field sensor (for detecting magnetic-field strength), and so on. The environment parameter may include, but is not limited to, ambient brightness, ambient color temperature, ambient temperature, ambient humidity, geographic location, magnetic-field strength, and so on. Before the embodiments of the present application are carried out, a correspondence between environment parameters and the first weight, and a correspondence between environment parameters and the second weight, may be preset; once the current environment parameter is determined, the first weight for the first training result and the second weight for the second training result can be determined from it. The target training result can then be determined from the first weight, the second weight, the first training result, and the second training result, for example, target training result = first weight * first training result + second weight * second training result; when the target training result meets a preset condition, the target is determined to be a living body. The preset condition may be that the target training result exceeds a preset detection threshold, which may be set by the user or default to a system value; alternatively, the preset condition may be that the target training result falls within a preset range, which again may be a system default or user-set. In a specific implementation, the accuracy of iris liveness detection differs across environments, so different weights may be used for different environments.
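The fusion rule of step 532 — target = w1*r1 + w2*r2, compared against a threshold — can be written out directly. How the weights follow from the environment parameter is left open by the text; the lux cutoff below is an invented illustration (bright scenes trusting the visible-light result more, dark scenes the infrared result).

```python
def fuse_results(r1, r2, ambient_lux, threshold=0.8):
    """Combine the two training results as w1*r1 + w2*r2.

    r1: training result from the visible-light classifier, in [0, 1].
    r2: training result from the infrared classifier, in [0, 1].
    ambient_lux: the current environment parameter (ambient brightness).
    Returns (target_training_result, is_living_body).
    The 100-lux cutoff and the 0.7/0.3 weights are illustrative only.
    """
    w1 = 0.7 if ambient_lux >= 100 else 0.3   # bright: trust visible light
    w2 = 1.0 - w1                             # dark: trust infrared
    target = w1 * r1 + w2 * r2
    return target, target > threshold
```

A real implementation would replace the hard cutoff with the preset correspondence table between environment parameters and weights described above.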
Optionally, in the embodiments of the present application, the infrared camera and the visible-light camera have the same field of view, so that the images they capture show the same scene.
It can be seen that, in the embodiments of the present application, a first iris image is acquired with a visible-light camera and a second iris image with an infrared camera, the two images coming from the same target; feature extraction is performed on the first iris image to obtain a first feature set and on the second iris image to obtain a second feature set; and whether the target is a living body is determined from the two feature sets. The two cameras thus each acquire an iris image, features are extracted from both, and whether the iris comes from a living body is judged from the two feature sets, so the iris can undergo liveness detection in multiple dimensions, which improves the accuracy of liveness detection.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an embodiment of an iris liveness detection method according to an embodiment of the present application. The method is applied to an electronic device including a visible-light camera, an infrared camera, and an application processor (AP); schematic and structural diagrams of the electronic device are shown in FIG. 1A to FIG. 1C. The iris liveness detection method described in this embodiment includes the following steps:
201. Register the visible-light camera and the infrared camera with respect to the target, so that the fields of view of the visible-light camera and the infrared camera coincide.
The visible-light camera and the infrared camera may be registered so that their fields of view coincide; in this way, the first iris image and the second iris image obtained later can overlap completely, and the liveness detection works better.
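As a rough illustration of what registration computes, the sketch below fits a 2-D affine map between matched pixel coordinates of the two cameras. A real calibration would typically estimate a full homography or rectify both cameras; the affine model, point format, and function names here are simplifying assumptions, not the application's procedure.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2-D affine map taking visible-camera pixels to
    infrared-camera pixels, from matched calibration points.

    Solves A @ M ≈ dst for the 3x2 parameter matrix M, where each row
    of A is (x, y, 1).
    """
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])      # n x 3
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Map points through the fitted affine transform."""
    pts = np.asarray(pts, dtype=np.float64)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Synthetic ground truth: the second camera sees everything scaled by
# 1.1 and shifted by (5, -3) pixels.
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 20]])
dst = src * 1.1 + np.array([5, -3])
M = fit_affine(src, dst)
```

Once such a map is known, the region location information of step 21 can be carried from the visible-light frame into the infrared frame.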
202. Acquire a first iris image with the visible-light camera.
203. Acquire a second iris image with the infrared camera.
204. Perform feature extraction on the first iris image to obtain a first feature set.
205. Perform feature extraction on the second iris image to obtain a second feature set.
206. Determine, from the first feature set and the second feature set, whether the target is a living body.
For details of steps 203 to 206, refer to the corresponding steps of the iris liveness detection method described in FIG. 1D, which are not repeated here.
It can be seen that, in this embodiment of the present application, the visible-light camera and the infrared camera are first registered so that their fields of view coincide; the first iris image is acquired with the visible-light camera and the second iris image with the infrared camera, the two images coming from the same target; feature extraction is performed on the first iris image to obtain a first feature set and on the second iris image to obtain a second feature set; and whether the target is a living body is determined from the two feature sets. The two cameras thus each acquire an iris image, features are extracted from both, and whether the iris comes from a living body is judged from the two feature sets, so the iris can undergo liveness detection in multiple dimensions, which improves the accuracy of liveness detection.
Referring to FIG. 3, FIG. 3 shows an electronic device according to an embodiment of the present application, including at least an application processor (AP) and a memory. The electronic device may further include an iris recognition apparatus that includes an infrared camera, an infrared fill light, and a visible-light camera, as well as one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing the following steps:
acquiring a first iris image with the visible-light camera;
acquiring a second iris image with the infrared camera, wherein the first iris image and the second iris image come from the same target;
performing feature extraction on the first iris image to obtain a first feature set;
performing feature extraction on the second iris image to obtain a second feature set; and
determining, from the first feature set and the second feature set, whether the target is a living body.
In one possible example, in determining from the first feature set and the second feature set whether the target is a living body, the programs include instructions for performing the following steps:
training the first feature set with a preset first liveness detection classifier to obtain a first training result;
training the second feature set with a preset second liveness detection classifier to obtain a second training result; and
determining, from the first training result and the second training result, whether the target is a living body.
In one possible example, the electronic device is provided with an environment sensor, and the programs include instructions for performing the following step:
controlling the environment sensor to acquire a current environment parameter. In determining from the first training result and the second training result whether the target is a living body, the programs include instructions for performing the following steps:
determining, from the current environment parameter, a first weight for the first training result and a second weight for the second training result; and
determining a target training result from the first weight, the second weight, the first training result, and the second training result, and confirming that the target is a living body when the target training result meets a preset condition.
In one possible example, in acquiring the second iris image with the infrared camera, the programs include instructions for performing the following steps:
determining region location information of the first iris image;
acquiring an infrared image captured by the infrared camera; and
determining the second iris image from the infrared image based on the region location information.
In one possible example, the infrared camera and the visible-light camera have the same field of view.
In one possible example, the programs further include instructions for performing the following step:
performing image enhancement on the first iris image; and,
in performing feature extraction on the first iris image, the programs include instructions for performing the following step:
performing feature extraction on the first iris image after the image enhancement.
Referring to FIG. 4A, FIG. 4A is a schematic structural diagram of an iris liveness detection apparatus according to this embodiment. The iris liveness detection apparatus is applied to an electronic device and includes a first acquiring unit 401, a second acquiring unit 402, an extracting unit 403, and a determining unit 404, wherein
the first acquiring unit 401 is configured to acquire a first iris image with a visible-light camera;
the second acquiring unit 402 is configured to acquire a second iris image with an infrared camera, wherein the first iris image and the second iris image come from the same target;
the extracting unit 403 is configured to perform feature extraction on the first iris image to obtain a first feature set;
the extracting unit 403 is further configured to perform feature extraction on the second iris image to obtain a second feature set; and
the determining unit 404 is configured to determine, from the first feature set and the second feature set, whether the target is a living body.
Optionally, as shown in FIG. 4B, FIG. 4B shows the detailed structure of the determining unit 404 of the iris liveness detection apparatus described in FIG. 4A. The determining unit 404 includes a training module 4041 and a determining module 4042, as follows:
the training module 4041 is configured to train the first feature set with a preset first liveness detection classifier to obtain a first training result;
the training module 4041 is further configured to train the second feature set with a preset second liveness detection classifier to obtain a second training result; and
the determining module 4042 is configured to determine, from the first training result and the second training result, whether the target is a living body.
Optionally, as shown in FIG. 4C, FIG. 4C shows the detailed structure of the determining module 4042 of the determining unit 404 described in FIG. 4B. The determining module 4042 may include a first obtaining module 510 and a first determining module 520, as follows:
the first obtaining module 510 is configured to acquire a current environment parameter; and
the first determining module 520 is configured to determine, from the current environment parameter, a first weight for the first training result and a second weight for the second training result; determine a target training result from the first weight, the second weight, the first training result, and the second training result; and confirm that the target is a living body when the target training result meets a preset condition.
Optionally, as shown in FIG. 4D, FIG. 4D shows the detailed structure of the second acquiring unit 402 of the iris liveness detection apparatus described in FIG. 4A. The second acquiring unit 402 may include a second determining module 4021 and a second obtaining module 4022, as follows:
the second determining module 4021 is configured to determine region location information of the first iris image;
the second obtaining module 4022 is configured to acquire an infrared image captured by the infrared camera; and
the second determining module 4021 is further configured to determine the second iris image from the infrared image based on the region location information.
Optionally, the infrared camera and the visible-light camera have the same field of view.
Optionally, as shown in FIG. 4E, FIG. 4E shows the iris liveness detection apparatus described in FIG. 4A which, compared with FIG. 4A, may further include an image enhancement unit 405, as follows:
the image enhancement unit 405 is configured to perform image enhancement on the first iris image; and,
in performing feature extraction on the first iris image, the extracting unit 403 is specifically configured to:
perform feature extraction on the first iris image after the image enhancement.
It can be seen that the iris liveness detection apparatus described in this embodiment of the present application can acquire a first iris image with a visible-light camera and a second iris image with an infrared camera, the two images coming from the same target; perform feature extraction on the first iris image to obtain a first feature set and on the second iris image to obtain a second feature set; and determine from the two feature sets whether the target is a living body. The two cameras thus each acquire an iris image, features are extracted from both, and whether the iris comes from a living body is judged from the two feature sets, so the iris can undergo liveness detection in multiple dimensions, which improves the accuracy of liveness detection.
It can be understood that the functions of the program modules of the iris liveness detection apparatus of this embodiment can be specifically implemented according to the methods in the foregoing method embodiments; for the specific implementation process, refer to the relevant descriptions of the foregoing method embodiments, which are not repeated here.
An embodiment of the present application further provides another electronic device. As shown in FIG. 5, for convenience of description only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point-of-sale) terminal, an in-vehicle computer, and so on; a mobile phone is taken as the example below.
FIG. 5 is a block diagram of part of the structure of a mobile phone related to the electronic device provided by an embodiment of the present application. Referring to FIG. 5, the mobile phone includes a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor (AP) 980, a power supply 990, and other components. Those skilled in the art will understand that the phone structure shown in FIG. 5 does not limit the phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The components of the mobile phone are described in detail below with reference to FIG. 5:
The input unit 930 may be configured to receive entered digit or character information and to generate key signal input related to user settings and function control of the phone. Specifically, the input unit 930 may include a touch display screen 933, an iris recognition apparatus 931, and other input devices 932. For the specific structure of the iris recognition apparatus 931, refer to FIG. 1A to FIG. 1C. The other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (such as volume control keys and the power key), a trackball, a mouse, a joystick, and the like.
The iris recognition apparatus 931 is configured to acquire an iris image.
The AP 980 is configured to perform the following steps:
controlling the visible-light camera to acquire a first iris image and send the first iris image to the AP;
controlling the infrared camera to acquire a second iris image and send the second iris image to the AP, wherein the first iris image and the second iris image come from the same target;
performing feature extraction on the first iris image to obtain a first feature set;
performing feature extraction on the second iris image to obtain a second feature set; and
determining, from the first feature set and the second feature set, whether the target is a living body.
The AP 980 is the control center of the phone. It connects all parts of the phone through various interfaces and lines, and performs the phone's functions and processes data by running or executing software programs and/or modules stored in the memory 920 and invoking data stored in the memory 920, thereby monitoring the phone as a whole. Optionally, the AP 980 may include one or more processing units; preferably, the AP 980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the AP 980.
In addition, the memory 920 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The RF circuit 910 may be used to receive and send information. Generally, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, and so on. In addition, the RF circuit 910 may also communicate with networks and other devices through wireless communication, which may use any communication standard or protocol, including but not limited to global system for mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), and so on.
手机还可包括至少一种传感器950,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节触控显示屏的亮度,接近传感器可在手机移动到耳边时,关闭触控显示屏和/ 或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
The audio circuit 960, a speaker 961, and a microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts collected sound signals into electrical signals, which are received by the audio circuit 960 and converted into audio data; the audio data is then output to the AP980 for processing and sent via the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 5 shows the WiFi module 970, it can be understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The mobile phone further includes a power supply 990 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the AP980 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include an infrared camera, a visible-light camera, a Bluetooth module, and the like, which are not described in detail here.
In the foregoing embodiments shown in FIG. 1D and FIG. 2, the method flow of each step may be implemented based on the structure of this mobile phone.
In the foregoing embodiments shown in FIG. 3 and FIG. 4A to FIG. 4E, the function of each unit may be implemented based on the structure of this mobile phone.
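One of the embodiment steps (see claim 4 below) derives the second iris image by locating the iris region in the visible-light frame and then selecting the same region out of the infrared frame; because claim 5 states that the two cameras have the same field of view, the coordinates can transfer directly without remapping. A rough sketch under those assumptions, with images as 2-D lists and the region as a hypothetical (x, y, w, h) tuple:

```python
def crop_region(image, region):
    """Crop an (x, y, w, h) rectangle out of a 2-D list-of-lists image."""
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

def second_iris_image(ir_frame, region_from_visible):
    """Determine the second iris image from the infrared frame.

    region_from_visible is the region position information located in the
    first (visible-light) iris image; applying it unchanged to the infrared
    frame assumes identical fields of view, as claim 5 describes.
    """
    return crop_region(ir_frame, region_from_visible)
```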
An embodiment of the present application further provides a computer storage medium, where the computer storage medium is configured to store a computer program, and the computer program causes a computer to perform some or all of the steps of any iris liveness detection method described in the foregoing method embodiments.
An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any iris liveness detection method described in the foregoing method embodiments.
It should be noted that, for ease of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application, some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art will understand that all or some of the steps of the various methods of the foregoing embodiments may be completed by a program instructing related hardware; the program may be stored in a computer-readable memory, and the memory may include a flash drive, a ROM, a RAM, a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the methods and core ideas of the present application. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and application scope based on the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (20)

  1. An iris liveness detection method, characterized in that the method comprises:
    acquiring a first iris image through a visible-light camera;
    acquiring a second iris image through an infrared camera, wherein the first iris image and the second iris image come from the same target;
    performing feature extraction on the first iris image to obtain a first-type feature set;
    performing feature extraction on the second iris image to obtain a second-type feature set;
    judging, according to the first-type feature set and the second-type feature set, whether the target is a living body.
  2. The method according to claim 1, characterized in that the judging, according to the first-type feature set and the second-type feature set, whether the target is a living body comprises:
    training the first-type feature set with a preset first liveness detection classifier to obtain a first training result;
    training the second-type feature set with a preset second liveness detection classifier to obtain a second training result;
    judging, according to the first training result and the second training result, whether the target is a living body.
  3. The method according to claim 2, characterized in that the electronic device is provided with an environment sensor, and the method further comprises:
    acquiring a current environment parameter through the environment sensor;
    wherein the judging, according to the first training result and the second training result, whether the target is a living body comprises:
    determining, according to the current environment parameter, a first weight corresponding to the first training result and a second weight corresponding to the second training result;
    determining a target training result according to the first weight, the second weight, the first training result, and the second training result, and confirming that the target is a living body when the target training result meets a preset condition.
  4. The method according to any one of claims 1 to 3, characterized in that the acquiring a second iris image through an infrared camera comprises:
    determining region position information of the first iris image;
    acquiring an infrared image captured by the infrared camera;
    determining the second iris image from the infrared image according to the region position information.
  5. The method according to any one of claims 1 to 4, characterized in that the infrared camera and the visible-light camera have the same field of view.
  6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    performing image enhancement processing on the first iris image;
    wherein the performing feature extraction on the first iris image comprises:
    performing feature extraction on the first iris image after the image enhancement processing.
  7. An electronic device, characterized by comprising a visible-light camera, an infrared camera, and an application processor (AP), wherein
    the visible-light camera is configured to acquire a first iris image;
    the infrared camera is configured to acquire a second iris image, wherein the first iris image and the second iris image come from the same target;
    the AP is configured to perform feature extraction on the first iris image to obtain a first-type feature set;
    the AP is further configured to perform feature extraction on the second iris image to obtain a second-type feature set, and to judge, according to the first-type feature set and the second-type feature set, whether the target is a living body.
  8. The electronic device according to claim 7, characterized in that, in terms of judging, according to the first-type feature set and the second-type feature set, whether the target is a living body, the AP is specifically configured to:
    train the first-type feature set with a preset first liveness detection classifier to obtain a first training result; train the second-type feature set with a preset second liveness detection classifier to obtain a second training result; and judge, according to the first training result and the second training result, whether the target is a living body.
  9. The electronic device according to claim 8, characterized in that the electronic device is provided with an environment sensor;
    the environment sensor is configured to acquire a current environment parameter;
    in terms of judging, according to the first training result and the second training result, whether the target is a living body, the AP is specifically configured to:
    determine, according to the current environment parameter, a first weight corresponding to the first training result and a second weight corresponding to the second training result; determine a target training result according to the first weight, the second weight, the first training result, and the second training result; and confirm that the target is a living body when the target training result meets a preset condition.
  10. The electronic device according to any one of claims 7 to 9, characterized in that, in terms of acquiring the second iris image, the infrared camera is specifically configured to:
    determine region position information of the first iris image; acquire an infrared image captured by the infrared camera; and determine the second iris image from the infrared image according to the region position information.
  11. The electronic device according to any one of claims 7 to 10, characterized in that the infrared camera and the visible-light camera have the same field of view.
  12. The electronic device according to any one of claims 7 to 11, characterized in that the AP is further configured to:
    perform image enhancement processing on the first iris image;
    wherein, in terms of performing feature extraction on the first iris image, the AP is specifically configured to:
    perform feature extraction on the first iris image after the image enhancement processing.
  13. An iris liveness detection apparatus, characterized by comprising:
    a first acquiring unit, configured to acquire a first iris image with a visible-light camera;
    a second acquiring unit, configured to acquire a second iris image with an infrared camera, wherein the first iris image and the second iris image come from the same target;
    an extracting unit, configured to perform feature extraction on the first iris image to obtain a first-type feature set;
    the extracting unit being further configured to perform feature extraction on the second iris image to obtain a second-type feature set;
    a judging unit, configured to judge, according to the first-type feature set and the second-type feature set, whether the target is a living body.
  14. The apparatus according to claim 13, characterized in that the judging unit comprises:
    a training module, configured to train the first-type feature set with a preset first liveness detection classifier to obtain a first training result, and to train the second-type feature set with a preset second liveness detection classifier to obtain a second training result;
    a judging module, configured to judge, according to the first training result and the second training result, whether the target is a living body.
  15. The apparatus according to claim 14, characterized in that the judging module comprises:
    a first acquiring module, configured to acquire a current environment parameter through an environment sensor;
    a first determining module, configured to determine, according to the current environment parameter, a first weight corresponding to the first training result and a second weight corresponding to the second training result; to determine a target training result according to the first weight, the second weight, the first training result, and the second training result; and to confirm that the target is a living body when the target training result meets a preset condition.
  16. The apparatus according to any one of claims 13 to 15, characterized in that the second acquiring unit comprises:
    a second determining module, configured to determine region position information of the first iris image;
    a second acquiring module, configured to acquire an infrared image captured by the infrared camera;
    the second determining module being further configured to determine the second iris image from the infrared image according to the region position information.
  17. The apparatus according to any one of claims 13 to 16, characterized in that the infrared camera and the visible-light camera have the same field of view.
  18. An electronic device, characterized by comprising: a visible-light camera, an infrared camera, an application processor (AP), and a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the AP, and the programs include instructions for performing the method according to any one of claims 1 to 6.
  19. A computer-readable storage medium, characterized in that it is configured to store a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 1 to 6.
  20. A computer program product, characterized in that the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform the method according to any one of claims 1 to 6.
PCT/CN2018/091082 2017-07-14 2018-06-13 Iris liveness detection method and related product WO2019011099A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710576785.0A CN107292285B (zh) 2017-07-14 2017-07-14 Iris liveness detection method and related product
CN201710576785.0 2017-07-14

Publications (1)

Publication Number Publication Date
WO2019011099A1 true WO2019011099A1 (zh) 2019-01-17

Family

ID=60101890

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/091082 WO2019011099A1 (zh) 2017-07-14 2018-06-13 Iris liveness detection method and related product

Country Status (2)

Country Link
CN (1) CN107292285B (zh)
WO (1) WO2019011099A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914672A (zh) * 2020-07-08 2020-11-10 浙江大华技术股份有限公司 Image annotation method and device, and storage medium
CN112949353A (zh) * 2019-12-10 2021-06-11 北京眼神智能科技有限公司 Iris silent liveness detection method and device, readable storage medium, and apparatus
CN113158890A (zh) * 2021-04-15 2021-07-23 上海云从企业发展有限公司 Liveness detection system and method, and computer storage medium
CN115798002A (zh) * 2022-11-24 2023-03-14 北京的卢铭视科技有限公司 Face detection method and system, electronic device, and storage medium

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN107292285B (zh) * 2017-07-14 2020-01-14 Oppo广东移动通信有限公司 Iris liveness detection method and related product
CN108009534B (zh) * 2017-11-20 2018-06-15 上海聚虹光电科技有限公司 Liveness detection method based on pupil gray level
CN108268839A (zh) * 2018-01-05 2018-07-10 北京万相融通科技股份有限公司 Liveness verification method and system
KR102466997B1 (ko) * 2018-01-22 2022-11-14 삼성전자주식会社 Liveness test method and apparatus
CN108776786A (zh) * 2018-06-04 2018-11-09 北京京东金融科技控股有限公司 Method and device for generating a user authenticity recognition model
CN109089052B (зh) * 2018-10-18 2020-09-01 浙江宇视科技有限公司 Verification method and device for a target object
CN109840514A (зh) * 2019-03-04 2019-06-04 深圳三人行在线科技有限公司 Liveness detection method and device
CN111079576B (зh) * 2019-11-30 2023-07-28 腾讯科技(深圳)有限公司 Liveness detection method, apparatus, device, and storage medium
CN111339885B (зh) * 2020-02-19 2024-05-28 平安科技(深圳)有限公司 Method for determining user identity based on iris recognition, and related apparatus
CN111611848B (зh) * 2020-04-02 2024-02-06 北京中科虹霸科技有限公司 Cadaver iris recognition method and apparatus

Citations (8)

Publication number Priority date Publication date Assignee Title
CN103324908A (zh) * 2012-03-23 2013-09-25 桂林电子科技大学 Rapid iris capture judgment control method for iris recognition
CN103400108A (zh) * 2013-07-10 2013-11-20 北京小米科技有限责任公司 Face recognition method, device, and mobile terminal
CN103839054A (zh) * 2014-03-14 2014-06-04 北京中科虹霸科技有限公司 Multifunctional mobile intelligent terminal sensor supporting iris recognition
CN104166835A (zh) * 2013-05-17 2014-11-26 诺基亚公司 Method and apparatus for identifying a live user
CN104933419A (zh) * 2015-06-30 2015-09-23 小米科技有限责任公司 Method and device for acquiring an iris image, and iris recognition apparatus
CN105354557A (зh) * 2014-11-03 2016-02-24 倪蔚民 Biometric anti-counterfeiting liveness detection method
CN106529436A (зh) * 2016-10-25 2017-03-22 徐鹤菲 Identity consistency authentication method, device, and mobile terminal
CN107292285A (зh) * 2017-07-14 2017-10-24 广东欧珀移动通信有限公司 Iris liveness detection method and related product

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9064145B2 (en) * 2011-04-20 2015-06-23 Institute Of Automation, Chinese Academy Of Sciences Identity recognition based on multiple feature fusion for an eye image
CN106055961B (zh) * 2016-05-31 2019-02-05 Oppo广东移动通信有限公司 Fingerprint unlocking method and mobile terminal



Also Published As

Publication number Publication date
CN107292285A (zh) 2017-10-24
CN107292285B (zh) 2020-01-14

Similar Documents

Publication Publication Date Title
WO2019011099A1 (zh) Iris liveness detection method and related product
WO2019011206A1 (zh) Liveness detection method and related product
WO2019052329A1 (zh) Face recognition method and related product
RU2731370C1 (ru) Living-body recognition method and terminal device
WO2019020014A1 (zh) Unlocking control method and related product
CN107590461B (zh) Face recognition method and related product
AU2018299524B2 (en) Iris-based living-body detection method, mobile terminal and storage medium
US11055547B2 (en) Unlocking control method and related products
CN107657218B (zh) Face recognition method and related product
WO2019024717A1 (zh) Anti-counterfeiting processing method and related product
WO2019011098A1 (zh) Unlocking control method and related product
CN107784271B (zh) Fingerprint recognition method and related product
CN107451454B (zh) Unlocking control method and related product
WO2019001254A1 (zh) Iris liveness detection method and related product
WO2019015418A1 (zh) Unlocking control method and related product
CN107506697B (zh) Anti-counterfeiting processing method and related product
US10706282B2 (en) Method and mobile terminal for processing image and storage medium
WO2019015574A1 (zh) Unlocking control method and related product
US11200437B2 (en) Method for iris-based living body detection and related products

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18831242; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18831242; Country of ref document: EP; Kind code of ref document: A1)