WO2019011098A1 - Unlock control method and related products - Google Patents

Unlock control method and related products (解锁控制方法及相关产品)

Info

Publication number
WO2019011098A1
WO2019011098A1 PCT/CN2018/091073 CN2018091073W WO2019011098A1 WO 2019011098 A1 WO2019011098 A1 WO 2019011098A1 CN 2018091073 W CN2018091073 W CN 2018091073W WO 2019011098 A1 WO2019011098 A1 WO 2019011098A1
Authority
WO
WIPO (PCT)
Prior art keywords
iris
image
user
preset
human eye
Prior art date
Application number
PCT/CN2018/091073
Other languages
English (en)
French (fr)
Inventor
周意保
张海平
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019011098A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72463User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device
    • H04M1/724631User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device by limiting the access to the user interface, e.g. locking a touch-screen or a keypad
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/66Substation equipment, e.g. for use by subscribers with means for preventing unauthorised or fraudulent calling
    • H04M1/667Preventing unauthorised calls from a telephone set
    • H04M1/67Preventing unauthorised calls from a telephone set by electronic means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72463User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device

Definitions

  • the present application relates to the field of electronic device technologies, and in particular, to an unlock control method and related products.
  • multi-biometric identification is increasingly favored by electronic device manufacturers, and iris recognition in particular is valued by manufacturers for its security. In practical applications, however, especially in a strong light environment, the user is often in a blinking state, and the iris recognition success rate is then low, which reduces the efficiency of multi-biometric recognition.
  • the embodiments of the present application provide an unlocking control method and related products, so as to improve multi-biometric recognition efficiency in the blinking case.
  • an embodiment of the present application provides an electronic device, including an iris recognition device, a camera, and an application processor (AP), where:
  • the camera is configured to detect whether the user is in a blink state, and send the detection result to the AP;
  • the AP is configured to: when the detection result of the camera is that the user is in a blinking state, reduce an identification threshold corresponding to the iris recognition operation, obtain a first identification threshold, and notify the iris recognition device to perform iris collection;
  • the iris recognition device is configured to acquire an iris image and send the iris image to the AP;
  • the AP is further configured to match the iris image with a preset iris template, and when the matching value between the iris image and the preset iris template is greater than the first recognition threshold, perform the next unlocking process.
  • an embodiment of the present application provides an unlocking control method, which is applied to an electronic device including an iris recognition device, a camera, and an application processor AP, where the method includes:
  • detecting whether the user is in a blinking state; when the user is in a blinking state, reducing the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold; acquiring an iris image; and matching the iris image with a preset iris template, and performing the next unlocking process when the matching value between the iris image and the preset iris template is greater than the first recognition threshold.
  • an embodiment of the present application provides an unlocking control device, which is applied to an electronic device including an iris recognition device, a camera, and an application processor AP.
  • the unlock control device includes: a detecting unit, a reducing unit, a first acquiring unit, a matching unit, and an executing unit, wherein
  • the detecting unit is configured to detect whether the user is in a blinking state
  • the reducing unit is configured to reduce an identification threshold corresponding to the iris recognition operation when the user is in a blinking state, to obtain a first identification threshold;
  • the first acquiring unit is configured to acquire an iris image
  • the matching unit is configured to match the iris image with a preset iris template
  • the executing unit is configured to perform a next unlocking process when a matching result of the matching unit is that a matching value between the iris image and the preset iris template is greater than the first identification threshold.
  • an embodiment of the present application provides an electronic device, including an iris recognition device, a camera, an application processor AP, and a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, and the programs include instructions for performing some or all of the steps described in the second aspect.
  • an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium stores a computer program, and the computer program causes a computer to perform some or all of the steps described in the second aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the second aspect of the embodiments of the present application.
  • the computer program product can be a software installation package.
  • it can be seen that, in the embodiments of the present application, the camera is controlled to detect whether the user is in a blinking state and send the detection result to the AP; when the detection result of the camera is that the user is in a blinking state, the AP is controlled to reduce the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold and to notify the iris recognition device to perform iris collection; the iris recognition device is controlled to acquire an iris image and send the iris image to the AP; and the AP is controlled to match the iris image with the preset iris template and, when the matching value between the iris image and the preset iris template is greater than the first recognition threshold, perform the next unlocking process. Thus, in the blinking state, the iris recognition threshold can be lowered, so that the iris recognition pass rate is increased without affecting security, which improves the efficiency of multi-biometric identification.
  • FIG. 1A is a schematic structural diagram of an example smart phone provided by an embodiment of the present application.
  • FIG. 1B is a schematic diagram showing a comparison of eye states according to an embodiment of the present application.
  • 1C is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • 1D is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • 1E is a schematic flowchart of an unlocking control method disclosed in an embodiment of the present application.
  • 1F is a schematic diagram showing a closed contour of a human eye image disclosed in an embodiment of the present application.
  • 1G is a schematic flow chart of iris recognition disclosed in an embodiment of the present application.
  • 1H is a schematic flowchart of another unlocking control method disclosed in an embodiment of the present application.
  • FIG. 2 is a schematic flow chart of another unlocking control method disclosed in an embodiment of the present application.
  • FIG. 3 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • 4A is a schematic structural diagram of an unlocking control apparatus according to an embodiment of the present application.
  • FIG. 4B is a schematic structural diagram of a lowering unit of the unlocking control apparatus described in FIG. 4A according to an embodiment of the present application;
  • FIG. 4C is a schematic structural diagram of a detecting unit of the unlocking control device described in FIG. 4A according to an embodiment of the present application;
  • FIG. 4D is a schematic structural diagram of a first acquiring unit of the unlocking control apparatus described in FIG. 4A according to an embodiment of the present application;
  • 4E is another schematic structural diagram of an unlocking control apparatus according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application.
  • references to "an embodiment” herein mean that a particular feature, structure, or characteristic described in connection with the embodiments can be included in at least one embodiment of the present application.
  • the appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment that is mutually exclusive with other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
  • the electronic device involved in the embodiments of the present application may include various handheld devices having wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like.
  • the devices mentioned above are collectively referred to as electronic devices.
  • the electronic device in the embodiments of the present application may be equipped with multiple biometric recognition devices; in addition to the iris recognition device, these may include, but are not limited to, a fingerprint recognition device, a face recognition device, a vein recognition device, an electroencephalogram recognition device, an electrocardiogram recognition device, and so on. Each biometric recognition device has a corresponding recognition algorithm and recognition threshold, and each biometric recognition device has a corresponding template pre-recorded by the user; for example, the fingerprint recognition device has a corresponding preset fingerprint template.
  • further, the fingerprint recognition device can collect a fingerprint image, and when the matching value between the fingerprint image and the preset fingerprint template is greater than the corresponding recognition threshold, the recognition passes.
  • the iris image in the embodiments of the present application may be an image of only the iris region, or an image containing the iris region (for example, a human eye image).
  • the iris image can be acquired by the iris recognition device.
  • the multi-biometric mode in the embodiments of the present application may include two or more recognition steps, for example, performing fingerprint recognition first and face recognition after fingerprint recognition passes, or performing fingerprint recognition and face recognition synchronously. Compared with a single biometric mode (for example, unlocking by fingerprint recognition alone), the multi-biometric mode is more secure, and is therefore becoming more and more popular.
  • the iris recognition device of the smart phone 1000 may include an infrared fill light 21 and an infrared camera 22.
  • in the working process of the iris recognition device, the light of the infrared fill light 21 strikes the iris and is reflected back by the iris to the infrared camera 22, and the iris recognition device collects the iris image; the front camera 23 can serve as a face recognition device and, in the embodiments of the present application, can be used to detect whether the user's eyes are in a blinking state.
  • in a specific implementation, for the iris recognition device, its recognition success rate depends to a certain extent on the state of the user's eyes; for example, when the user is in a blinking state, the recognition area is small and it is difficult to accurately recognize the user's iris.
  • as shown in FIG. 1B, the eye size in the normal state and in the blinking state is obviously different; in the blinking state, the eye area is smaller and the corresponding iris area is smaller, which increases the difficulty of iris recognition.
  • FIG. 1C is a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 includes an application processor AP 110 , a camera 120 , and an iris recognition device 130 .
  • the iris recognition device 130 may be integrated with the camera 120, or the iris recognition device 130 and the camera 120 may exist independently, wherein the AP 110 is connected to the camera 120 and the iris recognition device 130 via a bus 150.
  • FIG. 1D is a variant structure of the electronic device 100 depicted in FIG. 1C; compared with FIG. 1C, it further includes an ambient light sensor 160.
  • FIG. 1E is a schematic flowchart of an embodiment of an unlock control method according to an embodiment of the present application.
  • the unlocking control method described in this embodiment is applied to an electronic device including an iris recognition device, a camera, and an application processor AP.
  • the physical map and the structure diagram can be seen in FIG. 1A to FIG. 1D, and the following steps are included:
  • the electronic device can photograph the user through the camera to acquire a face image, extract a human eye image from the face image, and determine whether the user is in a blinking state according to the human eye image; of course, in the process of performing step 101, the iris region, that is, the region where the iris is located, may also be determined.
  • detecting whether the user is in a blinking state may include the following steps:
  • A1. Determine the area size of the current human eye image;
  • A2. When the area size is in a first preset range, confirm that the user is in a blinking state.
  • the face image can be obtained, and the human eye image is extracted from the face image, and then the area size of the human eye image can be calculated.
  • specifically, contour extraction can be performed on the human eye image to obtain a closed contour, and the area of the region enclosed by the closed contour is calculated.
  • the first preset range described above may be set by the user or the system defaults. When the above-mentioned area size is in the first preset range, it is confirmed that the user is in a blinking state.
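  • For illustration only (this sketch is not part of the original disclosure), the area-based check of steps A1-A2 above could be prototyped roughly as follows; the use of OpenCV, Otsu binarization, and the placeholder preset range are all assumptions made for the example.
```python
import cv2
import numpy as np

def eye_region_area(eye_image_gray: np.ndarray) -> float:
    """Estimate the open-eye area (in pixels) from a grayscale eye crop.

    A rough stand-in for "contour extraction to obtain a closed contour":
    threshold the image, take the largest external contour, return its area.
    """
    # Otsu binarization separates the (darker) eye opening from the skin.
    _, mask = cv2.threshold(eye_image_gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # OpenCV >= 4.x API: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    largest = max(contours, key=cv2.contourArea)
    return float(cv2.contourArea(largest))

def is_blinking(eye_image_gray: np.ndarray,
                first_preset_range=(0.0, 800.0)) -> bool:
    """Confirm the blinking state when the area falls inside the first
    preset range (the range values here are placeholders)."""
    low, high = first_preset_range
    area = eye_region_area(eye_image_gray)
    return low <= area <= high
```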
  • determining the area size of the current human eye image may include the following steps:
  • A11. Acquire a face image;
  • A12. Adjust the size of the face image so that the size of the face image conforms to a preset size;
  • A13. Extract the current human eye image from the resized face image.
  • the preset size can be set by the user or the system defaults.
  • the face image can be acquired by the iris recognition device, and then the size of the face image can be adjusted.
  • here, the size of the face image can be adjusted in two ways: one is zoom processing, in which the zoom factor needed to adjust the face image to the preset size is determined and the face image is captured again according to that zoom factor; the other is image stretching, in which, after the face image is obtained, a scaling factor between the size of the face image and the preset size is obtained and the face image is stretched according to that scaling factor. The current human eye image can then be extracted from the resized face image.
  • of course, between the above steps A11 and A12, the following step may further be included: adjusting the angle of the face image; in step A12, the size of the angle-adjusted face image is then adjusted.
  • the main consideration is that different angles have a certain influence when determining the human eye image area; to eliminate the influence of this factor, the angle of the face image can be adjusted.
  • optionally, in the above step 101, detecting whether the user is in a blinking state may include the following steps:
  • B1. Acquire a current human eye image;
  • B2. Perform contour extraction on the current human eye image to obtain a closed contour, the closed contour being composed of a first arc and a second arc;
  • B3. Determine the maximum vertical distance between the first arc and the second arc;
  • B4. When the maximum vertical distance is in a second preset range, confirm that the user is in a blinking state.
  • the current human eye image can be acquired, and contour extraction is performed on the current human eye image to obtain a closed contour; the closed contour includes a first arc and a second arc, as shown in FIG. 1F, which illustrates the first arc and the second arc, where the first arc and the second arc form the peripheral contour of the human eye. The maximum vertical distance between the first arc and the second arc can then be determined; for example, the first arc and the second arc can be projected onto a coordinate system, a plurality of vertical distances can be obtained by calculation, and the maximum vertical distance is selected from them.
  • the second preset range may be set by the user or the system defaults. When the maximum vertical distance is in the second preset range, the user may be confirmed to be in a blinking state.
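  • A minimal sketch of the B1-B4 variant, assuming the two eyelid arcs have already been extracted as point arrays; the interpolation grid and the NumPy-based projection are illustrative choices, not part of the patent.
```python
import numpy as np

def max_vertical_distance(first_arc: np.ndarray,
                          second_arc: np.ndarray) -> float:
    """Project both arcs onto a common x grid and return the largest
    vertical gap between them (a proxy for the eye-opening height).

    Each arc is an (N, 2) array of (x, y) contour points; interpolation
    over the overlapping x range stands in for "projecting to the
    coordinate system and obtaining a plurality of vertical distances".
    """
    x_lo = max(first_arc[:, 0].min(), second_arc[:, 0].min())
    x_hi = min(first_arc[:, 0].max(), second_arc[:, 0].max())
    xs = np.linspace(x_lo, x_hi, num=100)
    # np.interp expects increasing x, so sort each arc by x first.
    a = first_arc[np.argsort(first_arc[:, 0])]
    b = second_arc[np.argsort(second_arc[:, 0])]
    y_a = np.interp(xs, a[:, 0], a[:, 1])
    y_b = np.interp(xs, b[:, 0], b[:, 1])
    return float(np.max(np.abs(y_a - y_b)))
```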
  • the recognition threshold corresponding to the iris recognition operation can be appropriately reduced, and the first recognition threshold is obtained.
  • optionally, in the above step 102, reducing the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold may include the following steps: calculating a ratio between the area size of the current human eye image and the area size of a preset human eye image; and reducing the recognition threshold corresponding to the iris recognition operation according to the ratio value, to obtain the first recognition threshold.
  • the preset human eye image is a human eye image pre-stored by the user before the embodiments of the present application are executed; the area size of the current human eye image and the area size of the preset human eye image can be calculated, the ratio between the two is then computed, and the recognition threshold is reduced according to the ratio value to obtain the first recognition threshold.
  • the reduced threshold can be proportional to the ratio value; for example, if the ratio value is 0.5, the recognition threshold can be reduced to half.
  • optionally, in the above step 102, reducing the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold may include the following steps: calculating a ratio between the maximum vertical distance of the current human eye image and the maximum vertical distance of a preset human eye image to obtain a second ratio value; and reducing the recognition threshold corresponding to the iris recognition operation according to the second ratio value, to obtain the first recognition threshold.
  • the preset human eye image is a human eye image pre-stored by the user before the embodiments of the present application are executed; the maximum vertical distance of the current human eye image and the maximum vertical distance of the preset human eye image can be calculated, the ratio between the two is computed to obtain a second ratio value, and the recognition threshold is reduced according to the second ratio value to obtain the first recognition threshold.
  • the reduced threshold can be proportional to the second ratio value; for example, if the ratio value is 0.5, the recognition threshold can be reduced to half.
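  • The threshold reduction described above (whether the ratio comes from eye areas or from maximum vertical distances) amounts to scaling the recognition threshold by a ratio not greater than 1. A hedged sketch follows; the clamping bounds are assumptions the patent does not specify.
```python
def reduced_recognition_threshold(current_measure: float,
                                  preset_measure: float,
                                  recognition_threshold: float) -> float:
    """Scale the iris recognition threshold by the ratio between the
    current measurement (eye area or maximum vertical distance) and the
    pre-stored, fully-open-eye measurement.

    With a ratio of 0.5 the threshold is halved, as in the example in
    the text; the clamping bounds below are illustrative assumptions.
    """
    ratio = current_measure / preset_measure
    ratio = min(max(ratio, 0.1), 1.0)   # never raise the threshold, never zero it
    return recognition_threshold * ratio
```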
  • the iris image in the embodiments of the present application may be an image of only the iris region, or an image containing the iris region (for example, a human eye image).
  • the iris image can be acquired by the iris recognition device.
  • in addition, since the human eye is divided into the left eye and the right eye, the iris is likewise divided into a left-eye iris and a right-eye iris; when acquiring the iris image, it can first be determined which iris templates are stored. If a right-eye iris template is stored, a right-eye iris image can be acquired; if a left-eye iris template is stored, a left-eye iris image can be acquired; and if templates for both the left eye and the right eye are stored, an iris image of either eye can be acquired, or the eye whose iris is to be collected can be determined according to collection factors (whether an eye is blocked, the angle, and so on), or iris images of both eyes can be acquired.
  • optionally, in the above step 103, acquiring an iris image may include the following steps: focusing on the region where the iris is located, and performing zoom processing on the focused iris region to obtain the iris image; in this way, a clearer iris image can be obtained.
  • the preset iris template can be pre-stored before the step 101 is performed, and the iris image of the user is collected by the iris recognition device, and the preset iris template can be saved in the iris template library.
  • the iris image is matched with the preset iris template, and when the matching value between the iris image and the preset iris template is greater than the first recognition threshold, the matching is successful and the next unlocking process is performed; when the matching value between the iris image and the preset iris template is less than or equal to the first recognition threshold, the entire multi-biometric recognition process may be ended, or the user may be prompted to perform multi-biometric recognition again. For example, in a fingerprint recognition + iris recognition mode, if fingerprint recognition passed but iris recognition failed, fingerprint recognition is performed again.
  • during specific matching, feature extraction may be performed on the iris image and on the preset iris template separately, and the features obtained after feature extraction are then matched against each other.
  • the above feature extraction can be implemented by using an algorithm such as a Harris corner detection algorithm, a scale invariant feature transform (SIFT), a SUSAN corner detection algorithm, and the like, and details are not described herein.
  • the iris image may be pre-processed, and the pre-processing may include, but is not limited to, image enhancement processing, binarization processing, smoothing processing, color image conversion into grayscale image, and the like.
  • feature extraction is performed on the pre-processed iris image to obtain a feature set of the iris image, and at least one iris template is selected from the iris template library (an iris template may be an original iris image or a feature set); the feature set of the iris image is then matched with the feature set of the iris template to obtain a matching result, and whether the recognition passes is determined according to the matching result.
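  • As a rough, non-authoritative sketch of the match-then-compare logic of step 104: ORB features and a Hamming-distance matcher are substituted here for the Harris/SIFT/SUSAN extraction mentioned above purely to keep the example short; the distance cutoff and the interpretation of the match count as the "matching value" are placeholders.
```python
import cv2
import numpy as np

def match_iris(iris_image: np.ndarray,
               preset_template: np.ndarray,
               first_recognition_threshold: int) -> bool:
    """Toy matching pipeline: preprocess both images, extract ORB
    features, count cross-checked descriptor matches, and compare the
    count against the lowered first recognition threshold."""
    def preprocess(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        return cv2.equalizeHist(gray)            # simple image enhancement

    orb = cv2.ORB_create(nfeatures=500)
    _, des1 = orb.detectAndCompute(preprocess(iris_image), None)
    _, des2 = orb.detectAndCompute(preprocess(preset_template), None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    good = [m for m in matches if m.distance < 40]   # ad-hoc distance cutoff
    return len(good) > first_recognition_threshold
```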
  • other types of biometric recognition are implemented on a principle similar to that of iris recognition.
  • further, after iris recognition passes, the next unlocking process may be performed; the next unlocking process may include, but is not limited to, unlocking the device to enter the main page or a page of a specified application, proceeding to the next biometric step (for example, face recognition or fingerprint recognition, as determined by the specific flow of the multi-biometric mode), prompting the user that iris recognition succeeded, and so on.
  • the above embodiments of the present application can be applied to multiple biometric recognition modes including iris recognition steps, such as fingerprint recognition + iris recognition + face recognition.
  • for example, after fingerprint recognition passes, iris recognition can be performed using the method of the above embodiments, and after iris recognition passes, face recognition can be performed. Since the embodiments of the present application can reasonably adjust the iris recognition threshold for the blinking state, the iris recognition success rate is improved, that is, the multi-biometric recognition success rate is improved.
  • if the current multi-biometric mode does not include an iris recognition step, the foregoing embodiments of the present application may not be performed; if the current multi-biometric mode includes an iris recognition step, the foregoing embodiments of the present application may be performed.
  • the embodiment of the present application can be applied to iris recognition only.
  • the iris recognition can be directly performed according to the execution process of the embodiment of the present application. After the iris recognition is successful, the unlocking operation can be performed.
  • matching the iris image with the preset iris template may include the following steps:
  • D1. Perform multi-scale decomposition on the iris image by using a multi-scale decomposition algorithm to obtain a first high-frequency component image of the iris image, and perform feature extraction on the first high-frequency component image to obtain a first feature set;
  • D2. Perform multi-scale decomposition on the preset iris template by using the multi-scale decomposition algorithm to obtain a second high-frequency component image of the preset iris template, and perform feature extraction on the second high-frequency component image to obtain a second feature set;
  • D3. Screen the first feature set and the second feature set to obtain a first stable feature set and a second stable feature set;
  • D4. Perform feature matching between the first stable feature set and the second stable feature set, and confirm that the iris image matches the preset iris template when the number of matched feature points is greater than a preset number threshold.
  • the multi-scale decomposition algorithm may be used to perform multi-scale decomposition on the iris image to obtain a low-frequency component image and a plurality of high-frequency component images
  • the first high-frequency component image may be one of a plurality of high-frequency component images
  • the multi-scale decomposition algorithm may include, but is not limited to, wavelet transform, Laplace transform, contourlet transform (CT), non-subsampled contourlet transform (NSCT), shearlet transform, and the like.
  • for example, multi-scale decomposition of the iris image by NSCT yields one low-frequency component image and a plurality of high-frequency component images, each of the high-frequency component images having the same size as the original image, and the high-frequency component images contain more of the detail information of the original image.
  • the multi-scale decomposition algorithm can be used to perform multi-scale decomposition on the preset iris template to obtain a low-frequency component image and a plurality of high-frequency component images, and the second high-frequency component image can be one of a plurality of high-frequency component images.
  • the first high-frequency component image corresponds in position to the second high-frequency component image, that is, the two high-frequency component images have the same layer position and scale position; for example, if the first high-frequency component image is located at the second layer and the third scale, the second high-frequency component image is also located at the second layer and the third scale.
  • the first feature set and the second feature set are filtered to obtain a first stable feature set and the second stable feature set, and the screening process may be as follows.
  • the first feature set may include multiple feature points.
  • the second feature set also includes a plurality of feature points; each feature point is a vector with a magnitude and a direction, and thus the modulus of each feature point can be calculated, and if the modulus is greater than a certain threshold the feature point is retained. In this way, the feature points can be screened.
  • the preset number threshold can be set by the user or defaulted by the system. The number of matched feature points between the first stable feature set and the second stable feature set can be understood as the matching value between the two, and the preset number threshold can be understood as the above first recognition threshold.
  • the main consideration is that matching the fine features between the iris image and the preset iris template improves the accuracy of iris recognition; in most cases, the finer the features, the more difficult they are to forge, so this also improves the security of multi-biometric recognition.
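  • A toy illustration of steps D1-D4, assuming a plain wavelet transform (one of the multi-scale decompositions listed above) via the PyWavelets library; the "stable feature" screening by coefficient modulus and the crude match count are simplifications made for this sketch, not the patented algorithm.
```python
import numpy as np
import pywt

def stable_feature_set(image: np.ndarray, level: int = 2,
                       keep_threshold: float = 10.0) -> np.ndarray:
    """Wavelet multi-scale decomposition, keep one high-frequency
    sub-band, and retain only coefficients whose modulus exceeds a
    threshold -- the 'stable' feature points of steps D1-D3."""
    coeffs = pywt.wavedec2(image.astype(float), 'haar', level=level)
    # coeffs[1] holds the (horizontal, vertical, diagonal) detail bands
    # of the coarsest high-frequency level; take the horizontal band.
    high_freq = coeffs[1][0]
    flat = high_freq.ravel()
    return flat[np.abs(flat) > keep_threshold]

def feature_match_count(features_a: np.ndarray,
                        features_b: np.ndarray,
                        tolerance: float = 1.0) -> int:
    """Crude count of matched feature points for step D4: pair up values
    that agree within a tolerance (a placeholder for real descriptor
    matching); the count is then compared with the preset number threshold."""
    n = min(len(features_a), len(features_b))
    a = np.sort(features_a)[:n]
    b = np.sort(features_b)[:n]
    return int(np.sum(np.abs(a - b) < tolerance))
```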
  • optionally, after the above step 103, the following steps may further be included:
  • Image enhancement processing is performed on the iris image.
  • the image enhancement processing may include, but is not limited to, image denoising (e.g., wavelet-transform denoising), image restoration (e.g., Wiener filtering), and dark vision enhancement algorithms (e.g., histogram equalization, grayscale stretching, etc.); after image enhancement processing, the quality of the iris image can be improved to some extent.
  • optionally, after the above step 103, the following steps may further be included:
  • E1. Perform image quality evaluation on the iris image to obtain an image quality evaluation value;
  • E2. Perform image enhancement processing on the iris image when the image quality evaluation value is lower than a preset quality threshold.
  • the preset quality threshold may be set by the user or the system defaults, and the image quality of the iris image may be first evaluated to obtain an image quality evaluation value, and whether the quality of the iris image is good or bad is determined by the image quality evaluation value.
  • when the image quality evaluation value is greater than or equal to the preset quality threshold, the iris image quality is considered good;
  • when the image quality evaluation value is less than the preset quality threshold, the iris image quality may be considered poor, and the iris image may then be subjected to image enhancement processing.
  • At least one image quality evaluation index may be used to perform image quality evaluation on the iris image, thereby obtaining an image quality evaluation value.
  • Image quality evaluation indicators may include, but are not limited to, mean, standard deviation, entropy, sharpness, signal to noise ratio, and the like.
  • image quality can be evaluated by using 2 to 10 image quality evaluation indicators; the number of indicators used and which indicators are selected are determined according to the specific implementation. Of course, the image quality evaluation indicators also need to be selected in combination with the specific scene, and the indicators used in a dark environment may differ from those used in a bright environment.
  • for example, when a single image quality evaluation index is used for evaluation, such as entropy, a larger entropy indicates better image quality, and a smaller entropy indicates worse image quality.
  • when multiple image quality evaluation indicators are used to evaluate an image, a weight can be set for each of them; each indicator then yields its own image quality evaluation value, and the final image quality evaluation value is obtained from the individual evaluation values and their corresponding weights. For example, suppose three image quality evaluation indicators A, B, and C are used, their weights are a1, a2, and a3 respectively, and the image quality evaluation values obtained with A, B, and C for a given image are b1, b2, and b3 respectively; then the final image quality evaluation value is a1·b1 + a2·b2 + a3·b3. Generally, the larger the image quality evaluation value, the better the image quality.
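  • The weighted combination a1·b1 + a2·b2 + a3·b3 can be sketched as below; entropy, standard deviation, and a gradient-based sharpness measure stand in for indicators A, B, and C, and the weights are placeholders rather than values given by the patent.
```python
import numpy as np

def image_quality_score(gray_image: np.ndarray,
                        weights=(0.4, 0.3, 0.3)) -> float:
    """Combine three illustrative indicators -- entropy, standard
    deviation, and gradient-based sharpness -- into a single evaluation
    value a1*b1 + a2*b2 + a3*b3."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())           # indicator A
    std_dev = float(gray_image.std())                   # indicator B
    gy, gx = np.gradient(gray_image.astype(float))
    sharpness = float(np.mean(np.hypot(gx, gy)))        # indicator C
    a1, a2, a3 = weights
    return a1 * entropy + a2 * std_dev + a3 * sharpness
```
  • An iris image would then be enhanced only when this score falls below the preset quality threshold, per step E2 above.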
  • it can be seen that, in the unlocking control method described in this embodiment, whether the user is in a blinking state is detected; when the user is in a blinking state, the recognition threshold corresponding to the iris recognition operation is reduced to obtain a first recognition threshold; an iris image is acquired and matched with the preset iris template; and when the matching value between the iris image and the preset iris template is greater than the first recognition threshold, the next unlocking process is performed. Thus, the iris recognition threshold can be lowered in the blinking state, so that the iris recognition pass rate is increased without affecting security, which improves the efficiency of multi-biometric recognition.
  • FIG. 1H illustrates the main processing procedure involved in the embodiment of the present application from the internal processing flow of the electronic device 100. For details, refer to the following steps F1-F4:
  • F1. The camera 120 detects whether the user is in a blinking state, and sends the detection result to the AP 110;
  • F2. When the detection result is that the user is in a blinking state, the AP 110 reduces the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold, and notifies the iris recognition device 130 to perform iris collection;
  • F3. The iris recognition device 130 acquires an iris image, and sends the iris image to the AP 110;
  • F4. The AP 110 matches the iris image with the preset iris template, and when the matching succeeds, the next unlocking process is performed.
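  • The F1-F4 exchange can be summarized in the following sketch; camera, iris_device, and ap are assumed interfaces introduced only for this example, not APIs defined by the patent.
```python
def unlock_flow(camera, iris_device, ap,
                base_threshold: float, preset_template) -> bool:
    """Mirror of the F1-F4 exchange between camera, AP, and iris device."""
    blinking = camera.detect_blinking()                      # F1
    threshold = (ap.reduce_threshold(base_threshold)         # F2
                 if blinking else base_threshold)
    iris_image = iris_device.capture_iris()                  # F3
    matching_value = ap.match(iris_image, preset_template)   # F4
    return matching_value > threshold
```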
  • FIG. 2 is a schematic flowchart of an embodiment of an unlock control method according to an embodiment of the present application.
  • the unlocking control method described in this embodiment is applied to an electronic device including an iris recognition device, an ambient light sensor, a camera, and an application processor AP.
  • the physical map and the structure diagram can be seen in FIG. 1A to FIG. 1D, and the following steps are included:
  • the ambient light sensor acquires an ambient light intensity value, and sends the ambient light intensity value to the AP.
  • the electronic device can be installed with an ambient light sensor, which can be used to detect the ambient light intensity, and in turn, can obtain the ambient light intensity value.
  • the AP notifies the camera to detect whether the user is in a blinking state when the ambient light intensity value is greater than a preset light intensity threshold.
  • the preset light intensity threshold can be set by the user or the system defaults. When the ambient light intensity value is less than or equal to the preset light intensity threshold, step 203 may not be performed. In a specific application, the user is likely to blink in a strong light environment. Therefore, in a strong light environment, the embodiment of the present application can be performed.
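  • A trivial sketch of this gating step; the lux value is a made-up placeholder for the preset light intensity threshold, which the patent leaves to the user or the system default.
```python
LIGHT_INTENSITY_THRESHOLD_LUX = 5000.0   # placeholder for the preset threshold

def should_check_blinking(ambient_light_lux: float) -> bool:
    """Only in a strong-light environment (intensity above the preset
    threshold) is the camera asked to check for the blinking state."""
    return ambient_light_lux > LIGHT_INTENSITY_THRESHOLD_LUX
```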
  • the camera detects whether the user is in a blinking state, and sends the detection result to the AP.
  • when the detection result of the camera is that the user is in a blinking state, the AP reduces the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold, and notifies the iris recognition device to perform iris collection.
  • the iris recognition device acquires an iris image and transmits the iris image to the AP.
  • the AP matches the iris image with a preset iris template, and performs a next unlocking process when a matching value between the iris image and the preset iris template is greater than the first identification threshold. .
  • it can be seen that, in the unlocking control method described in this embodiment, the ambient light intensity value is obtained by the ambient light sensor and sent to the AP; when the ambient light intensity value is greater than the preset light intensity threshold, the AP notifies the camera to detect whether the user is in a blinking state, and the camera sends the detection result to the AP; when the detection result is that the user is in a blinking state, the AP reduces the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold and notifies the iris recognition device to perform iris collection; the iris recognition device acquires an iris image and sends it to the AP; and the AP matches the iris image with the preset iris template and performs the next unlocking process when the matching value between the iris image and the preset iris template is greater than the first recognition threshold. Thus, in the blinking state, the iris recognition threshold can be lowered, so that the iris recognition pass rate is improved without affecting security, which increases the efficiency of multi-biometric recognition.
  • FIG. 3 is an electronic device according to an embodiment of the present application, including: an application processor AP and a memory.
  • the electronic device further includes an iris recognition device and a camera; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing the following steps:
  • detecting whether the user is in a blinking state; when the user is in a blinking state, reducing the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold; acquiring an iris image; and matching the iris image with a preset iris template, and performing the next unlocking process when the matching value between the iris image and the preset iris template is greater than the first recognition threshold.
  • the program includes instructions for performing the following steps:
  • the current human eye image area size is determined, and when the area size is in the first preset range, it is confirmed that the user is in a blinking state.
  • the program includes instructions for performing the following steps: calculating a ratio between the area size of the current human eye image and the area size of a preset human eye image; and reducing the recognition threshold corresponding to the iris recognition operation according to the ratio value, to obtain the first recognition threshold.
  • the program includes instructions for performing the following steps:
  • obtaining a current human eye image; performing contour extraction on the current human eye image to obtain a closed contour, the closed contour being composed of a first arc and a second arc; determining the maximum vertical distance between the first arc and the second arc; and confirming that the user is in a blinking state when the maximum vertical distance is in the second preset range.
  • the program includes instructions for performing the following steps:
  • the iris region is focused, and the focused iris region is subjected to a zooming process to obtain the iris image.
  • the electronic device is further provided with an ambient light sensor, and the program further includes instructions for performing the following steps: obtaining an ambient light intensity value through the ambient light sensor, and performing the step of detecting whether the user is in a blinking state when the ambient light intensity value is greater than a preset light intensity threshold.
  • FIG. 4A is a schematic structural diagram of an unlocking control apparatus according to this embodiment.
  • the unlocking control device is applied to an electronic device, and the unlocking control device includes a detecting unit 401, a reducing unit 402, a first obtaining unit 403, a matching unit 404, and an executing unit 405, wherein
  • the detecting unit 401 is configured to detect whether the user is in a blinking state
  • the reducing unit 402 is configured to reduce an identification threshold corresponding to the iris recognition operation when the user is in a blinking state, to obtain a first identification threshold;
  • the first acquiring unit 403 is configured to acquire an iris image
  • the matching unit 404 is configured to match the iris image with a preset iris template
  • the executing unit 405 is configured to perform a next unlocking process when a matching result of the matching unit is that a matching value between the iris image and the preset iris template is greater than the first identification threshold.
  • the detecting unit 401 is specifically configured to:
  • the current human eye image area size is determined, and when the area size is in the first preset range, it is confirmed that the user is in a blinking state.
  • FIG. 4B is a specific detailed structure of the lowering unit 402 of the unlocking control device described in FIG. 4A, and the reducing unit 402 may include: a calculating module 4021 and a reducing module 4022, as follows:
  • the calculating module 4021 is configured to calculate a ratio between an area size of the current human eye image and an area size of the preset human eye image;
  • the lowering module 4022 is configured to reduce the recognition threshold corresponding to the iris recognition operation according to the proportional value, to obtain a first identification threshold.
  • FIG. 4C is a specific detailed structure of the detecting unit 401 of the unlocking control device described in FIG. 4A, and the detecting unit 401 may include an obtaining module 4011, an extracting module 4012, and a determining module 4013, as follows:
  • An obtaining module 4011 configured to acquire a current human eye image
  • An extraction module 4012 configured to perform contour extraction on the current human eye image to obtain a closed contour, where the closed contour is composed of a first arc and a second arc;
  • a determining module 4013 configured to determine a maximum vertical distance between the first arc and the second arc, and confirm that the user is in a blink when the maximum vertical distance is in a second preset range status.
  • FIG. 4D is a specific detailed structure of the first acquiring unit 403 of the unlocking control device described in FIG. 4A, and the first acquiring unit 403 may include a focusing module 4031 and a processing module 4032, as follows:
  • the focusing module 4031 is configured to focus on the region where the iris is located;
  • the processing module 4032 is configured to perform zoom processing on the focused iris region to obtain the iris image.
  • FIG. 4E is a modified structure of the unlocking control device described in FIG. 4A, and the device may further include: a second acquiring unit 406, which is specifically as follows:
  • the second obtaining unit 406 is configured to obtain an ambient light intensity value by using the ambient light sensor; when the ambient light intensity value is greater than a preset light intensity threshold, the detecting unit 401 performs the step of detecting whether the user is in a blinking state.
  • it can be seen that the unlocking control device described in the embodiments of the present application detects whether the user is in a blinking state; when the user is in a blinking state, it reduces the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold, acquires an iris image, matches the iris image with the preset iris template, and performs the next unlocking process when the matching value between the iris image and the preset iris template is greater than the first recognition threshold. Thus, the iris recognition threshold is lowered in the blinking state, and the iris recognition pass rate is improved without affecting security, thereby improving the efficiency of multi-biometric recognition.
  • the embodiment of the present application further provides another electronic device. As shown in FIG. 5, for convenience of description, only the parts related to the embodiment of the present application are shown; for specific technical details that are not disclosed, refer to the method embodiments of the present application.
  • the electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point of sales), an in-vehicle computer, and the like, and the electronic device is used as a mobile phone as an example:
  • FIG. 5 is a block diagram showing a partial structure of a mobile phone related to an electronic device provided by an embodiment of the present application.
  • the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor AP 980, a power supply 990, and other components.
  • the input unit 930 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset.
  • the input unit 930 may include a touch display screen 933, a multi-biometric device 931, and other input devices 932.
  • the specific structural composition of the multi-biometric device 931 can be referred to the above description, and will not be described here.
  • the input unit 930 can also include other input devices 932.
  • other input devices 932 may include, but are not limited to, one or more of physical buttons, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the AP 980 is configured to perform the following steps:
  • when the detection result of the camera is that the user is in a blinking state, the recognition threshold corresponding to the iris recognition operation is decreased to obtain a first recognition threshold, and the iris recognition device is notified to perform iris collection;
  • the iris image is matched with the preset iris template, and when the matching value between the iris image and the preset iris template is greater than the first recognition threshold, the next unlocking process is performed.
  • the AP 980 is the control center of the handset, which utilizes various interfaces and lines to connect various portions of the entire handset, and executes the handset by running or executing software programs and/or modules stored in the memory 920, as well as invoking data stored in the memory 920. A variety of functions and processing data to monitor the phone as a whole.
  • the AP 980 may include one or more processing units, where the processing unit may be an artificial intelligence chip or a quantum chip; preferably, the AP 980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the AP 980.
  • memory 920 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the RF circuit 910 can be used for receiving and transmitting information.
  • RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • RF circuitry 910 can also communicate with the network and other devices via wireless communication.
  • the above wireless communication may use any communication standard or protocol, including but not limited to global system of mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), and the like.
  • the handset may also include at least one type of sensor 950, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the touch display screen according to the brightness of the ambient light, and the proximity sensor can turn off the touch display when the mobile phone moves to the ear. And / or backlight.
  • as one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer or tapping). The mobile phone can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here.
  • An audio circuit 960, a speaker 961, and a microphone 962 can provide an audio interface between the user and the handset.
  • the audio circuit 960 can convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for output; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which is received by the audio circuit 960 and converted into audio data; the audio data is then processed by the AP 980 and sent to another mobile phone via the RF circuit 910, or the audio data is output to the memory 920 for further processing.
  • WiFi is a short-range wireless transmission technology
  • the mobile phone can help users to send and receive emails, browse web pages, and access streaming media through the WiFi module 970, which provides users with wireless broadband Internet access.
  • although FIG. 5 shows the WiFi module 970, it can be understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
  • the mobile phone also includes a power source 990 (such as a battery) that supplies power to various components.
  • the power source can be logically connected to the AP980 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • each step method flow can be implemented based on the structure of the mobile phone.
  • each unit function can be implemented based on the structure of the mobile phone.
  • the embodiment of the present application further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program causing the computer to execute any part of the unlocking control method as described in the foregoing method embodiment. Or all steps.
  • the embodiment of the present application further provides a computer program product, comprising: a non-transitory computer readable storage medium storing a computer program, the computer program being operative to cause a computer to perform the operations as recited in the foregoing method embodiments Any or all of the steps to unlock the control method.
  • the disclosed apparatus may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software program module.
  • the integrated unit if implemented in the form of a software program module and sold or used as a standalone product, may be stored in a computer readable memory.
  • the memory stores instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing memory includes: a U disk, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, or an optical disk, and the like, which can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

An unlocking control method and related products, applied to an electronic device including an iris recognition device, a camera, and an application processor (AP). The method includes: detecting whether a user is in a blinking state (101); when the user is in a blinking state, reducing a recognition threshold corresponding to an iris recognition operation to obtain a first recognition threshold (102); acquiring an iris image (103); and matching the iris image with a preset iris template, and performing a next unlocking process when a matching value between the iris image and the preset iris template is greater than the first recognition threshold (104). In the blinking state, the iris recognition threshold can be lowered, so that the iris recognition pass rate is increased without affecting security, which improves the efficiency of multi-biometric recognition.

Description

Unlock control method and related products
This application claims priority to the prior application No. 201710557795.X, entitled "Unlock control method and related products", filed on July 10, 2017; the contents of the above prior application are incorporated herein by reference.
Technical field
The present application relates to the field of electronic device technologies, and in particular, to an unlock control method and related products.
Background
With the widespread use of electronic devices (mobile phones, tablet computers, and so on), electronic devices support more and more applications and become more and more powerful; they are developing in a diversified and personalized direction and have become indispensable electronic products in users' lives.
At present, multi-biometric recognition is increasingly favored by electronic device manufacturers, and iris recognition in particular is valued by manufacturers for its security. In practical applications, however, especially in a strong light environment, the user is in a blinking state, and the iris recognition success rate is then low, which reduces the efficiency of multi-biometric recognition.
Summary
The embodiments of the present application provide an unlock control method and related products, so as to improve the efficiency of multi-biometric recognition in the blinking case.
In a first aspect, an embodiment of the present application provides an electronic device, including an iris recognition device, a camera, and an application processor (AP), where:
the camera is configured to detect whether a user is in a blinking state, and send the detection result to the AP;
the AP is configured to, when the detection result of the camera is that the user is in a blinking state, reduce a recognition threshold corresponding to an iris recognition operation to obtain a first recognition threshold, and notify the iris recognition device to perform iris collection;
the iris recognition device is configured to acquire an iris image and send the iris image to the AP;
the AP is further configured to match the iris image with a preset iris template, and perform a next unlocking process when a matching value between the iris image and the preset iris template is greater than the first recognition threshold.
In a second aspect, an embodiment of the present application provides an unlock control method, applied to an electronic device including an iris recognition device, a camera, and an application processor AP, the method including:
detecting whether a user is in a blinking state;
when the user is in a blinking state, reducing a recognition threshold corresponding to an iris recognition operation to obtain a first recognition threshold;
acquiring an iris image;
matching the iris image with a preset iris template, and performing a next unlocking process when a matching value between the iris image and the preset iris template is greater than the first recognition threshold.
In a third aspect, an embodiment of the present application provides an unlock control apparatus, applied to an electronic device including an iris recognition device, a camera, and an application processor AP, the unlock control apparatus including: a detecting unit, a reducing unit, a first acquiring unit, a matching unit, and an executing unit, where
the detecting unit is configured to detect whether a user is in a blinking state;
the reducing unit is configured to reduce a recognition threshold corresponding to an iris recognition operation when the user is in a blinking state, to obtain a first recognition threshold;
the first acquiring unit is configured to acquire an iris image;
the matching unit is configured to match the iris image with a preset iris template;
the executing unit is configured to perform a next unlocking process when the matching result of the matching unit is that a matching value between the iris image and the preset iris template is greater than the first recognition threshold.
In a fourth aspect, an embodiment of the present application provides an electronic device, including an iris recognition device, a camera, an application processor AP, and a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, and the programs include instructions for performing some or all of the steps described in the second aspect.
In a fifth aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium stores a computer program, and the computer program causes a computer to perform some or all of the steps described in the second aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the second aspect of the embodiments of the present application. The computer program product may be a software installation package.
Implementing the embodiments of the present application has the following beneficial effects:
It can be seen that, in the embodiments of the present application, the camera is controlled to detect whether the user is in a blinking state and send the detection result to the AP; when the detection result of the camera is that the user is in a blinking state, the AP is controlled to reduce the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold and notify the iris recognition device to perform iris collection; the iris recognition device is controlled to acquire an iris image and send the iris image to the AP; and the AP is controlled to match the iris image with a preset iris template and perform the next unlocking process when the matching value between the iris image and the preset iris template is greater than the first recognition threshold. Thus, in the blinking state, the iris recognition threshold can be lowered, so that the iris recognition pass rate is increased without affecting security, which improves the efficiency of multi-biometric recognition.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1A is a schematic architectural diagram of an example smartphone according to an embodiment of the present application;
FIG. 1B is a schematic comparison of eye states according to an embodiment of the present application;
FIG. 1C is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 1D is another schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 1E is a schematic flowchart of an unlocking control method disclosed in an embodiment of the present application;
FIG. 1F is a schematic illustration of the closed contour of a human eye image disclosed in an embodiment of the present application;
FIG. 1G is a schematic flowchart of iris recognition disclosed in an embodiment of the present application;
FIG. 1H is a schematic flowchart of another unlocking control method disclosed in an embodiment of the present application;
FIG. 2 is a schematic flowchart of another unlocking control method disclosed in an embodiment of the present application;
FIG. 3 is another schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4A is a schematic structural diagram of an unlocking control apparatus according to an embodiment of the present application;
FIG. 4B is a schematic structural diagram of the lowering unit of the unlocking control apparatus described in FIG. 4A according to an embodiment of the present application;
FIG. 4C is a schematic structural diagram of the detection unit of the unlocking control apparatus described in FIG. 4A according to an embodiment of the present application;
FIG. 4D is a schematic structural diagram of the first acquisition unit of the unlocking control apparatus described in FIG. 4A according to an embodiment of the present application;
FIG. 4E is another schematic structural diagram of an unlocking control apparatus according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application.
Detailed Description
To enable a person skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification, the claims, and the above drawings of the present application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference to an "embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of the phrase at various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive with other embodiments. A person skilled in the art understands, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The electronic devices involved in the embodiments of the present application may include various handheld devices, in-vehicle devices, wearable devices, and computing devices with wireless communication capability, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. For convenience of description, the devices mentioned above are collectively referred to as electronic devices.
It should be noted that the electronic device in the embodiments of the present application may be equipped with a multi-biometric recognition arrangement, that is, multiple biometric recognition devices. In addition to the iris recognition device, these may include, but are not limited to, a fingerprint recognition device, a face recognition device, a vein recognition device, a brain-wave recognition device, an electrocardiogram recognition device, and so on. Each biometric recognition device has a corresponding recognition algorithm and recognition threshold, and each also has a corresponding template pre-enrolled by the user. For example, the fingerprint recognition device has a corresponding preset fingerprint template; it can capture a fingerprint image, and recognition passes when the match value between the fingerprint image and the preset fingerprint template is greater than its corresponding recognition threshold. The iris image in the embodiments of the present application may be an image of the iris region only, or an image containing the iris region (for example, an image of one human eye). For example, when the user is using the electronic device, the iris image may be acquired by the iris recognition device.
Further, the multi-biometric recognition mode in the embodiments of the present application may include two or more recognition steps, for example, fingerprint recognition first and then face recognition after fingerprint recognition passes, or fingerprint recognition and face recognition performed in parallel. Compared with a single-biometric mode (for example, unlocking with fingerprint recognition alone), the multi-biometric mode is more secure and is therefore increasingly popular.
The embodiments of the present application are described in detail below. In the example smartphone 1000 shown in FIG. 1A, the iris recognition device of the smartphone 1000 may include an infrared fill light 21 and an infrared camera 22. During operation of the iris recognition device, light from the infrared fill light 21 strikes the iris and is reflected back to the infrared camera 22, and the iris recognition device captures an iris image. The front camera 23 may serve as a face recognition device and, in the embodiments of the present application, may be used to detect whether the user's eyes are squinting. In a specific implementation, the recognition success rate of the iris recognition device is determined to some extent by the state of the user's eyes; for example, when the user is squinting, the recognizable area is small and it is difficult to accurately recognize the user's iris. As shown in FIG. 1B, the eye area differs significantly between the normal state and the squinting state: in the squinting state the eye area is smaller, the corresponding iris area is smaller, and iris recognition becomes more difficult.
Referring to FIG. 1C, FIG. 1C is a schematic structural diagram of an electronic device 100. The electronic device 100 includes an application processor (AP) 110, a camera 120, and an iris recognition device 130. The iris recognition device 130 may be integrated with the camera 120, or the iris recognition device and the camera 120 may exist independently. The AP 110 is connected to the camera 120 and the iris recognition device 130 through a bus 150. Further, referring to FIG. 1D, FIG. 1D is a variant of the electronic device 100 described in FIG. 1C; compared with FIG. 1C, FIG. 1D further includes an ambient light sensor 160.
Referring to FIG. 1E, FIG. 1E is a schematic flowchart of an embodiment of an unlocking control method according to an embodiment of the present application. The unlocking control method described in this embodiment is applied to an electronic device including an iris recognition device, a camera, and an application processor (AP); its physical and structural diagrams can be seen in FIG. 1A to FIG. 1D. The method includes the following steps.
101. Detect whether a user is in a squinting state.
The electronic device may photograph the user through the camera to obtain a face image, extract a human eye image from the face image, and determine from the eye image whether the user is squinting. Of course, the iris region, i.e., the region where the iris is located, may also be determined while step 101 is performed.
Optionally, in step 101, detecting whether the user is in a squinting state may include the following steps:
A1. Determine the area of the current human eye image;
A2. When the area falls within a first preset range, confirm that the user is in the squinting state.
A face image may be acquired, a human eye image may be extracted from the face image, and the area of the eye image may then be computed. Specifically, contour extraction may be performed on the eye image to obtain a closed contour, and the area of the region enclosed by the closed contour is computed. The first preset range may be set by the user or defaulted by the system. When the above area falls within the first preset range, the user is confirmed to be in the squinting state.
Optionally, in step A1, determining the area of the current human eye image may include the following steps:
A11. Acquire a face image;
A12. Adjust the size of the face image so that it conforms to a preset size;
A13. Extract the current human eye image from the resized face image.
The preset size may be set by the user or defaulted by the system. The face image may be acquired by the iris recognition device, and its size is then adjusted. Two approaches are available: one is zoom processing, i.e., determining the zoom factor needed to bring the face image to the preset size and shooting again with that zoom factor to obtain the face image; the other is image stretching, i.e., after obtaining the face image, computing the scale factor between the size of the face image and the preset size and stretching the face image according to that factor. The current human eye image can then be extracted from the resized face image. Of course, between steps A11 and A12, the following step may also be included: adjusting the angle of the face image; in step A12, the size of the angle-adjusted face image is then adjusted. The main consideration is that different angles also affect the determination of the eye image area to some extent, so the angle of the face image may be adjusted to eliminate this influence.
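The area-based check of steps A1 and A11–A13 can be illustrated with a short sketch. The code below is not part of the patent: it assumes OpenCV and NumPy, assumes the eye region of the resized face image has already been located by a detector, and uses an illustrative preset size and first preset range.

```python
import cv2
import numpy as np

PRESET_SIZE = (480, 480)           # assumed preset face-image size
FIRST_PRESET_RANGE = (0.0, 800.0)  # assumed eye-area range (px^2) that counts as squinting

def is_squinting_by_area(face_img: np.ndarray, eye_roi: tuple) -> bool:
    # A12: stretch the face image to the preset size (one of the two options in the text).
    face_img = cv2.resize(face_img, PRESET_SIZE)
    # A13: crop the eye region (located on the resized image by a detector, assumed given).
    x, y, w, h = eye_roi
    eye = cv2.cvtColor(face_img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # A1: extract a closed contour of the eye and measure the enclosed area.
    _, mask = cv2.threshold(eye, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    area = cv2.contourArea(max(contours, key=cv2.contourArea))
    lo, hi = FIRST_PRESET_RANGE
    return lo <= area <= hi        # A2: area inside the first preset range -> squinting
```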
Optionally, in step 101, detecting whether the user is in a squinting state may include the following steps:
B1. Acquire the current human eye image;
B2. Perform contour extraction on the current human eye image to obtain a closed contour, the closed contour being formed by a first arc and a second arc;
B3. Determine the maximum vertical distance between the first arc and the second arc;
B4. When the maximum vertical distance falls within a second preset range, confirm that the user is in the squinting state.
The current human eye image may be acquired and contour extraction performed on it to obtain a closed contour including a first arc and a second arc, as shown in FIG. 1F, where the first arc and the second arc form the outer contour of the human eye. The maximum vertical distance between the first arc and the second arc can then be determined; for example, the first and second arcs may be projected onto a coordinate system, multiple vertical distances computed mathematically, and the maximum selected among them. The second preset range may be set by the user or defaulted by the system; when the maximum vertical distance falls within the second preset range, the user is confirmed to be in the squinting state.
The specific implementation of B1 may refer to steps A11 to A13 above.
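A minimal sketch of the B1–B4 variant follows, assuming the closed contour has already been extracted (for example with OpenCV) as an array of (x, y) points; the second preset range is an illustrative placeholder, not a value from the patent.

```python
import numpy as np

SECOND_PRESET_RANGE = (0.0, 12.0)  # assumed max-opening range (px) that counts as squinting

def max_vertical_distance(contour: np.ndarray) -> float:
    pts = contour.reshape(-1, 2)                 # (x, y) points of the closed eye contour
    gaps = []
    for x in np.unique(pts[:, 0]):
        ys = pts[pts[:, 0] == x][:, 1]
        gaps.append(float(ys.max() - ys.min()))  # lower arc minus upper arc at this column
    return max(gaps) if gaps else 0.0

def is_squinting_by_opening(contour: np.ndarray) -> bool:
    lo, hi = SECOND_PRESET_RANGE
    return lo <= max_vertical_distance(contour) <= hi
```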
102. When the user is in the squinting state, lower the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold.
Since the iris area is small when the user is squinting and the captured iris area is limited, the recognition threshold corresponding to the iris recognition operation may be appropriately lowered to obtain the first recognition threshold.
Optionally, in step 102, lowering the recognition threshold corresponding to the iris recognition operation to obtain the first recognition threshold may include the following steps:
21. Compute the ratio between the area of the current human eye image and the area of a preset human eye image;
22. Lower the recognition threshold corresponding to the iris recognition operation according to the ratio to obtain the first recognition threshold.
The preset human eye image is an eye image stored by the user in advance, before the embodiments of the present application are executed. The area of the current eye image and the area of the preset eye image may be computed, the ratio between the two calculated, and the recognition threshold lowered according to that ratio to obtain the first recognition threshold. The ratio may be proportional to the recognition threshold; for example, if the ratio is 0.5, the recognition threshold may be lowered to half its original value.
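Steps 21–22 can be sketched directly, under the stated reading that the threshold scales in proportion to the area ratio; the clamp to [0, 1] is an added safeguard, not something the patent specifies.

```python
def lowered_threshold(base_threshold: float,
                      current_eye_area: float,
                      preset_eye_area: float) -> float:
    ratio = current_eye_area / preset_eye_area  # e.g. 0.5 when the eye is half as open
    ratio = max(0.0, min(1.0, ratio))           # safeguard: never raise the threshold
    return base_threshold * ratio               # the first recognition threshold
```

With a ratio of 0.5 this reproduces the example above: the threshold drops to half its original value.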
Optionally, in step 102, lowering the recognition threshold corresponding to the iris recognition operation to obtain the first recognition threshold may include the following steps:
C1. Compute a second ratio between the maximum vertical distance of the current human eye image and the maximum vertical distance of a preset human eye image;
C2. Lower the recognition threshold corresponding to the iris recognition operation according to the second ratio to obtain the first recognition threshold.
The preset human eye image is an eye image stored by the user in advance, before the embodiments of the present application are executed. The maximum vertical distance of the current eye image and that of the preset eye image may be computed, and the ratio between the two calculated to obtain the second ratio; the recognition threshold is lowered according to the second ratio to obtain the first recognition threshold. The second ratio may be proportional to the recognition threshold; for example, if the ratio is 0.5, the recognition threshold may be lowered to half its original value.
103. Acquire an iris image.
The iris image in the embodiments of the present application may be an image of the iris region only, or an image containing the iris region (for example, an image of one human eye). For example, when the user is using the electronic device, the iris image may be acquired by the iris recognition device. Usually, the human eyes are divided into a left eye and a right eye, so during iris image acquisition the iris is also either the left-eye iris or the right-eye iris. When acquiring the iris image, it may first be determined whether the left-eye or the right-eye iris should be acquired, by determining which eye's iris is stored in the electronic device: for example, if a right-eye iris template is stored, the right-eye iris image may be acquired; if a left-eye iris template is stored, the left-eye iris image may be acquired; if templates for both eyes are stored, either eye's iris image may be acquired, or the eye to capture may be decided according to capture factors (whether an eye is occluded, the angle, and so on), or iris images of both eyes may be acquired at the same time.
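The eye-selection logic just described can be summarized in a small, purely illustrative helper; the template store and the capture-factor flag are assumptions, not structures defined by the patent.

```python
def choose_eye(template_store: dict, right_eye_clear: bool = True) -> str:
    has_left = "left" in template_store
    has_right = "right" in template_store
    if has_left and has_right:
        # Either eye may be captured; a capture factor such as occlusion breaks the tie.
        return "right" if right_eye_clear else "left"
    if has_right:
        return "right"
    if has_left:
        return "left"
    return "none"  # no enrolled iris template; fall back to another biometric step
```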
Optionally, in step 103, acquiring the iris image may include the following steps:
31. Focus on the iris region;
32. Perform zoom processing on the focused iris region to obtain the iris image.
The region where the iris is located may be focused, and zoom processing is then performed on the focused iris region to obtain the iris image; a clearer iris image can be obtained in this way.
104. Match the iris image against a preset iris template, and perform the next unlocking procedure when the match value between the iris image and the preset iris template is greater than the first recognition threshold.
The preset iris template may be stored in advance, before step 101 is performed, by capturing the user's iris image with the iris recognition device; the preset iris template may be stored in an iris template library.
Optionally, in the course of performing step 104, the iris image is matched against the preset iris template; when the match value between the iris image and the preset iris template is greater than the first recognition threshold, matching succeeds and the subsequent unlocking procedure is performed. When the match value between the iris image and the preset iris template is less than or equal to the first recognition threshold, the whole multi-biometric recognition process may be ended, or the user may be prompted to perform multi-biometric recognition again; for example, if the original scheme was fingerprint recognition plus iris recognition and fingerprint recognition passed but iris recognition failed, fingerprint recognition is started again.
Specifically, in the course of performing step 104, feature extraction may be performed on the iris image and on the preset iris template respectively, and the extracted features are then matched against each other. The feature extraction may be implemented with algorithms such as the Harris corner detection algorithm, the scale-invariant feature transform (SIFT), or the SUSAN corner detection algorithm, which are not repeated here. As shown in FIG. 1G, in performing step 104 the iris image may first be preprocessed; preprocessing may include, but is not limited to, image enhancement, binarization, smoothing, conversion of a color image to a grayscale image, and so on. Feature extraction is then performed on the preprocessed iris image to obtain the feature set of the iris image, and at least one iris template is selected from the iris template library; the iris template may be an original iris image or a set of features. The feature set of the iris image is then matched against the feature set of the iris template to obtain a matching result, and whether matching succeeds is judged according to that result. Of course, in the multi-biometric process, fingerprint recognition, face recognition, and the like may also implement biometric recognition on a principle similar to iris recognition.
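The preprocess-then-match flow of step 104 can be illustrated with SIFT, one of the feature extractors named above. The sketch assumes OpenCV (4.4+ for SIFT_create) and NumPy arrays; the grayscale/equalization preprocessing, the Lowe-style ratio test, and counting good matches as the match value are illustrative choices, not steps mandated by the patent.

```python
import cv2

def iris_matches(iris_img, template_img, first_threshold: int) -> bool:
    def preprocess(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        return cv2.equalizeHist(gray)            # simple enhancement before feature extraction

    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(preprocess(iris_img), None)
    _, des2 = sift.detectAndCompute(preprocess(template_img), None)
    if des1 is None or des2 is None or len(des1) < 2 or len(des2) < 2:
        return False

    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]   # Lowe-style ratio test on candidate matches
    return len(good) > first_threshold           # match value compared with the first threshold
```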
When the match value between the iris image and the preset iris template is greater than the first recognition threshold, the next unlocking procedure may be performed. The next unlocking procedure may include, but is not limited to: completing unlocking to enter the home page or a designated page of an application; entering the next biometric recognition step (for example, fingerprint recognition or face recognition, as determined by the specific flow of the multi-biometric mode); or prompting the user that iris recognition succeeded, and so on.
By way of example, the above embodiments of the present application may be applied to a multi-biometric mode that includes an iris recognition step, such as fingerprint recognition + iris recognition + face recognition. After fingerprint recognition is completed, iris recognition may be performed; iris recognition may adopt the above embodiments, and after iris recognition passes, face recognition may be performed. Because the embodiments of the present application can reasonably adjust the iris recognition threshold for the squinting state, the iris recognition success rate is increased, i.e., the multi-biometric success rate is improved. In a specific implementation, if the current multi-biometric mode does not include an iris recognition step, the above embodiments need not be executed; if it does include an iris recognition step, the above embodiments may be executed.
By way of example, the above embodiments of the present application may also be applied to iris recognition alone; in this case, iris recognition may be carried out directly according to the above flow, and after iris recognition succeeds, the unlocking operation may be performed.
Optionally, in step 104, matching the iris image against the preset iris template may include the following steps:
D1. Perform multi-scale decomposition on the iris image using a multi-scale decomposition algorithm to obtain a first high-frequency component image of the iris image, and perform feature extraction on the first high-frequency component image to obtain a first feature set;
D2. Perform multi-scale decomposition on the preset iris template using the multi-scale decomposition algorithm to obtain a second high-frequency component image of the preset iris template, and perform feature extraction on the second high-frequency component image to obtain a second feature set;
D3. Screen the first feature set and the second feature set to obtain a first stable feature set and a second stable feature set;
D4. Perform feature matching between the first stable feature set and the second stable feature set, and confirm that the iris image matches the preset iris template successfully when the number of matched feature points between the first stable feature set and the second stable feature set is greater than a preset number threshold.
A multi-scale decomposition algorithm may be used to decompose the iris image into a low-frequency component image and multiple high-frequency component images; the first high-frequency component image may be one of these high-frequency component images. The multi-scale decomposition algorithm may include, but is not limited to, the wavelet transform, the Laplacian transform, the contourlet transform (CT), the non-subsampled contourlet transform (NSCT), the shearlet transform, and so on. Taking the contourlet transform as an example, decomposing the iris image with it yields one low-frequency component image and multiple high-frequency component images whose sizes differ from one another; taking NSCT as an example, decomposition yields one low-frequency component image and multiple high-frequency component images whose sizes are all the same. High-frequency component images contain much of the detail information of the original image. Similarly, a multi-scale decomposition algorithm may be used to decompose the preset iris template into a low-frequency component image and multiple high-frequency component images, the second high-frequency component image being one of them. The first and second high-frequency component images correspond in position, that is, they are at the same level and scale: for example, if the first high-frequency component image is at level 2, scale 3, the second high-frequency component image is also at level 2, scale 3. In step D3, the first feature set and the second feature set are screened to obtain the first stable feature set and the second stable feature set; the screening may proceed as follows: the first and second feature sets each contain multiple feature points, and each feature point is a vector with a magnitude and a direction, so the modulus of each feature point can be computed and the feature point retained if its modulus is greater than a certain threshold, which screens the feature points. The preset number threshold may be set by the user or defaulted by the system; the number of matched feature points between the first and second stable feature sets can be understood as the match value between the two, and the preset number threshold can be understood as the first recognition threshold. Steps D1 to D4 are mainly intended to match the fine features between the iris image and the preset iris template, which can improve the precision of iris recognition; in general, the more detailed a feature is, the harder it is to forge, so the security of multi-biometric recognition can be improved.
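Steps D1–D4 can be sketched with a single-level wavelet transform (one of the decompositions listed above) standing in for the multi-scale decomposition, ORB keypoints standing in for the feature sets, and a response-based cut standing in for the modulus screening of "stable" features. Everything in the sketch, including PyWavelets, ORB, and the preset count threshold, is an illustrative assumption rather than the patent's prescribed pipeline.

```python
import cv2
import numpy as np
import pywt

def high_freq_component(gray: np.ndarray) -> np.ndarray:
    # D1/D2: one-level 2-D DWT; keep only the detail (high-frequency) sub-bands.
    _, (h_band, v_band, d_band) = pywt.dwt2(gray.astype(np.float32), "haar")
    band = np.abs(h_band) + np.abs(v_band) + np.abs(d_band)
    return cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def stable_descriptors(gray: np.ndarray, keep_ratio: float = 0.5):
    # D3: keep only the strongest keypoints as the "stable" feature set.
    orb = cv2.ORB_create()
    kps, des = orb.detectAndCompute(high_freq_component(gray), None)
    if des is None or len(kps) == 0:
        return None
    order = np.argsort([-kp.response for kp in kps])
    keep = order[: max(1, int(len(order) * keep_ratio))]
    return des[keep]

def fine_match(iris_gray, template_gray, preset_count_threshold: int = 25) -> bool:
    # D4: count matched feature points and compare against the preset number threshold.
    d1, d2 = stable_descriptors(iris_gray), stable_descriptors(template_gray)
    if d1 is None or d2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    return len(matches) > preset_count_threshold
```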
Optionally, between step 103 and step 104, the following step may also be included:
performing image enhancement processing on the iris image.
Image enhancement processing may include, but is not limited to, image denoising (for example, wavelet-transform denoising), image restoration (for example, Wiener filtering), and dark-vision enhancement algorithms (for example, histogram equalization, gray-level stretching, and so on). After image enhancement processing is applied, the quality of the iris image can be improved to some extent.
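A small sketch of this optional enhancement step, combining denoising and histogram equalization (two of the techniques just mentioned); it assumes OpenCV and a grayscale iris image.

```python
import cv2

def enhance_iris(gray_iris):
    denoised = cv2.fastNlMeansDenoising(gray_iris, None, h=10)  # image denoising
    return cv2.equalizeHist(denoised)                           # contrast / dark-vision boost
```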
Optionally, between step 103 and step 104, the following steps may also be included:
E1. Perform image quality evaluation on the iris image to obtain an image quality evaluation value;
E2. When the image quality evaluation value is lower than a preset quality threshold, perform image enhancement processing on the iris image.
The preset quality threshold may be set by the user or defaulted by the system. Image quality evaluation may first be performed on the iris image to obtain an image quality evaluation value, by which the quality of the iris image is judged to be good or poor: when the image quality evaluation value is greater than or equal to the preset quality threshold, the iris image may be considered of good quality; when it is less than the preset quality threshold, the iris image may be considered of poor quality, and image enhancement processing may then be applied to it.
In step E1, at least one image quality evaluation indicator may be used to evaluate the quality of the iris image, thereby obtaining the image quality evaluation value.
In a specific implementation, multiple image quality evaluation indicators may be used to evaluate the iris image, each indicator corresponding to a weight; each indicator thus yields an evaluation result, and a weighted computation finally produces the overall image quality evaluation value. Image quality evaluation indicators may include, but are not limited to, mean, standard deviation, entropy, sharpness, signal-to-noise ratio, and so on.
It should be noted that, since evaluating image quality with a single indicator has certain limitations, multiple image quality evaluation indicators may be used. Of course, more indicators are not always better: the more indicators, the higher the computational complexity of the evaluation, and the evaluation effect is not necessarily better. Therefore, when high evaluation accuracy is required, 2 to 10 image quality evaluation indicators may be used. Specifically, the number of indicators and which indicators to choose depend on the specific implementation; the indicators should also be chosen in light of the specific scenario, and the indicators chosen for evaluation in a dark environment may differ from those chosen in a bright environment.
Optionally, when high evaluation accuracy is not required, a single image quality evaluation indicator may be used; for example, when evaluating the image to be processed by entropy, a larger entropy may be taken to indicate better image quality and, conversely, a smaller entropy to indicate poorer image quality.
Optionally, when higher evaluation accuracy is required, multiple image quality evaluation indicators may be used to evaluate the image. A weight may be set for each of the indicators to obtain multiple image quality evaluation values, and the final image quality evaluation value is obtained from these values and their corresponding weights. For example, with three indicators A, B, and C whose weights are a1, a2, and a3 respectively, if evaluating an image with A, B, and C yields evaluation values b1, b2, and b3, then the final image quality evaluation value = a1·b1 + a2·b2 + a3·b3. Generally, the larger the image quality evaluation value, the better the image quality.
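The weighted combination above (a1·b1 + a2·b2 + a3·b3) can be written out directly; the three indicators below (mean, standard deviation, entropy), their normalizations, the weights, and the preset quality threshold are illustrative choices, not values taken from the patent.

```python
import numpy as np

PRESET_QUALITY_THRESHOLD = 0.5       # assumed; enhance the iris image when the score is lower

def image_quality(gray: np.ndarray, weights=(0.3, 0.3, 0.4)) -> float:
    mean = gray.mean() / 255.0                                   # indicator A
    std = min(gray.std() / 128.0, 1.0)                           # indicator B
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    nz = hist[hist > 0]
    entropy = float(-np.sum(nz * np.log2(nz))) / 8.0             # indicator C, scaled to [0, 1]
    scores = (mean, std, entropy)
    return float(sum(a * b for a, b in zip(weights, scores)))    # a1*b1 + a2*b2 + a3*b3
```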
It can be seen that, in this embodiment of the present application, whether the user is in a squinting state is detected; when the user is squinting, the recognition threshold corresponding to the iris recognition operation is lowered to obtain a first recognition threshold; an iris image is acquired and matched against a preset iris template; and the next unlocking procedure is performed when the match value between the iris image and the preset iris template is greater than the first recognition threshold. The iris recognition threshold can thus be lowered while the user is squinting, so that the iris recognition pass rate is improved without compromising security, which improves the efficiency of multi-biometric recognition.
FIG. 1H illustrates the main processing of this embodiment from the perspective of the internal processing flow of the electronic device 100, as detailed in the following steps F1 to F4:
F1. The camera 120 detects whether the user is in a squinting state and sends the detection result to the AP 110;
F2. When the detection result of the camera 120 indicates that the user is squinting, the AP 110 lowers the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold and notifies the iris recognition device 130 to perform iris capture;
F3. The iris recognition device 130 acquires an iris image and sends it to the AP 110;
F4. The AP 110 matches the iris image against a preset iris template and, when matching succeeds, performs the next unlocking procedure.
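Tying F1–F4 together, the end-to-end flow can be sketched as below. The callables are placeholders for the camera, the iris recognition device, and the AP-side matcher, and the proportional lowering mirrors the ratio-based sketch given for step 102; none of these names come from the patent.

```python
def unlock_flow(detect_squint, eye_area_ratio, capture_iris, match_value,
                base_threshold: float, on_unlock) -> None:
    threshold = base_threshold
    if detect_squint():                                  # F1: camera reports the squint state
        # F2: AP lowers the threshold in proportion to the current/preset eye-area ratio.
        threshold = base_threshold * max(0.0, min(1.0, eye_area_ratio()))
    iris_img = capture_iris()                            # F3: iris device captures an iris image
    if match_value(iris_img) > threshold:                # F4: AP matches the preset template
        on_unlock()                                      # proceed to the next unlocking step
```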
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an embodiment of an unlocking control method according to an embodiment of the present application. The unlocking control method described in this embodiment is applied to an electronic device including an iris recognition device, an ambient light sensor, a camera, and an application processor (AP); its physical and structural diagrams can be seen in FIG. 1A to FIG. 1D. The method includes the following steps.
201. The ambient light sensor obtains an ambient light intensity value and sends the ambient light intensity value to the AP.
The electronic device may be equipped with an ambient light sensor, which can be used to detect the ambient light intensity and thus obtain an ambient light intensity value.
202. When the ambient light intensity value is greater than a preset light intensity threshold, the AP notifies the camera to detect whether the user is in a squinting state.
The preset light intensity threshold may be set by the user or defaulted by the system. When the ambient light intensity value is less than or equal to the preset light intensity threshold, step 203 may be skipped. In practice, the user is more likely to squint in a strong-light environment, so the embodiments of the present application may be executed in strong-light environments.
203. The camera detects whether the user is in a squinting state and sends the detection result to the AP.
204. When the detection result of the camera indicates that the user is in the squinting state, lower the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold, and notify the iris recognition device to perform iris capture.
205. The iris recognition device acquires an iris image and sends the iris image to the AP.
206. The AP matches the iris image against a preset iris template and performs the next unlocking procedure when the match value between the iris image and the preset iris template is greater than the first recognition threshold.
For detailed descriptions of steps 203 to 206, reference may be made to the corresponding steps of the unlocking control method described in FIG. 1E, which are not repeated here.
It can be seen that, in this embodiment of the present application, an ambient light intensity value is obtained by the ambient light sensor and sent to the AP; when the ambient light intensity value is greater than the preset light intensity threshold, the AP notifies the camera to detect whether the user is in a squinting state; the camera detects whether the user is squinting and sends the detection result to the AP; when the detection result indicates that the user is squinting, the AP lowers the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold and notifies the iris recognition device to perform iris capture; the iris recognition device acquires an iris image and sends it to the AP; and the AP matches the iris image against a preset iris template and performs the next unlocking procedure when the match value between them is greater than the first recognition threshold. The iris recognition threshold can thus be lowered while the user is squinting, so that the iris recognition pass rate is improved without compromising security, which improves the efficiency of multi-biometric recognition.
Referring to FIG. 3, FIG. 3 shows an electronic device provided by an embodiment of the present application, including an application processor (AP) and a memory (the electronic device, of course, also includes an iris recognition device and a camera), as well as one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing the following steps:
detecting whether a user is in a squinting state;
when the user is in the squinting state, lowering the recognition threshold corresponding to an iris recognition operation to obtain a first recognition threshold;
acquiring an iris image;
matching the iris image against a preset iris template, and performing the next unlocking procedure when the match value between the iris image and the preset iris template is greater than the first recognition threshold.
In one possible example, in detecting whether the user is in a squinting state, the programs include instructions for performing the following step:
determining the area of the current human eye image and, when the area falls within a first preset range, confirming that the user is in the squinting state.
In one possible example, in lowering the recognition threshold corresponding to the iris recognition operation to obtain the first recognition threshold, the programs include instructions for performing the following step:
computing the ratio between the area of the current human eye image and the area of a preset human eye image, and lowering the recognition threshold corresponding to the iris recognition operation according to the ratio to obtain the first recognition threshold.
In one possible example, in detecting whether the user is in a squinting state, the programs include instructions for performing the following step:
acquiring the current human eye image, performing contour extraction on the current human eye image to obtain a closed contour formed by a first arc and a second arc, determining the maximum vertical distance between the first arc and the second arc, and confirming that the user is in the squinting state when the maximum vertical distance falls within a second preset range.
In one possible example, in controlling the iris recognition device to acquire the iris image, the programs include instructions for performing the following step:
focusing on the iris region and performing zoom processing on the focused iris region to obtain the iris image.
In one possible example, the electronic device is further provided with an ambient light sensor, and the programs further include instructions for performing the following step:
obtaining an ambient light intensity value through the ambient light sensor and, when the ambient light intensity value is greater than a preset light intensity threshold, performing the step of detecting whether the user is in a squinting state.
Referring to FIG. 4A, FIG. 4A is a schematic structural diagram of an unlocking control apparatus provided by this embodiment. The unlocking control apparatus is applied to an electronic device and includes a detection unit 401, a lowering unit 402, a first acquisition unit 403, a matching unit 404, and an execution unit 405, wherein
the detection unit 401 is configured to detect whether a user is in a squinting state;
the lowering unit 402 is configured to, when the user is in the squinting state, lower the recognition threshold corresponding to an iris recognition operation to obtain a first recognition threshold;
the first acquisition unit 403 is configured to acquire an iris image;
the matching unit 404 is configured to match the iris image against a preset iris template;
the execution unit 405 is configured to perform the next unlocking procedure when the matching result of the matching unit indicates that the match value between the iris image and the preset iris template is greater than the first recognition threshold.
Optionally, the detection unit 401 is specifically configured to:
determine the area of the current human eye image and, when the area falls within a first preset range, confirm that the user is in the squinting state.
Optionally, as shown in FIG. 4B, which shows the detailed structure of the lowering unit 402 of the unlocking control apparatus described in FIG. 4A, the lowering unit 402 may include a computing module 4021 and a lowering module 4022, as follows:
the computing module 4021 is configured to compute the ratio between the area of the current human eye image and the area of a preset human eye image;
the lowering module 4022 is configured to lower the recognition threshold corresponding to the iris recognition operation according to the ratio to obtain the first recognition threshold.
Optionally, as shown in FIG. 4C, which shows the detailed structure of the detection unit 401 of the unlocking control apparatus described in FIG. 4A, the detection unit 401 may include an acquisition module 4011, an extraction module 4012, and a determination module 4013, as follows:
the acquisition module 4011 is configured to acquire the current human eye image;
the extraction module 4012 is configured to perform contour extraction on the current human eye image to obtain a closed contour formed by a first arc and a second arc;
the determination module 4013 is configured to determine the maximum vertical distance between the first arc and the second arc and, when the maximum vertical distance falls within a second preset range, confirm that the user is in the squinting state.
Optionally, as shown in FIG. 4D, which shows the detailed structure of the first acquisition unit 403 of the unlocking control apparatus described in FIG. 4A, the first acquisition unit 403 may include a focusing module 4031 and a processing module 4032, as follows:
the focusing module 4031 is configured to focus on the iris region;
the processing module 4032 is configured to perform zoom processing on the focused iris region to obtain the iris image.
Optionally, as shown in FIG. 4E, which is a variant structure of the unlocking control apparatus described in FIG. 4A, the apparatus may further include a second acquisition unit 406, as follows:
the second acquisition unit 406 is configured to obtain an ambient light intensity value through the ambient light sensor and, when the ambient light intensity value is greater than a preset light intensity threshold, to have the detection unit 401 perform the step of detecting whether the user is in a squinting state.
It can be seen that the unlocking control apparatus described in this embodiment of the present application detects whether the user is in a squinting state; when the user is squinting, it lowers the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold, acquires an iris image, matches the iris image against a preset iris template, and performs the next unlocking procedure when the match value between the iris image and the preset iris template is greater than the first recognition threshold. The iris recognition threshold can thus be lowered while the user is squinting, so that the iris recognition pass rate is improved without compromising security, which improves the efficiency of multi-biometric recognition.
It can be understood that the functions of the program modules of the unlocking control apparatus of this embodiment can be specifically implemented according to the methods in the foregoing method embodiments; for the specific implementation process, reference may be made to the relevant descriptions of the foregoing method embodiments, which are not repeated here.
An embodiment of the present application further provides another electronic device, as shown in FIG. 5. For ease of description, only the parts related to this embodiment of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point of sales) terminal, an in-vehicle computer, and so on; the following takes a mobile phone as an example.
FIG. 5 is a block diagram of a partial structure of a mobile phone related to the electronic device provided by an embodiment of the present application. Referring to FIG. 5, the mobile phone includes components such as a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor (AP) 980, and a power supply 990. A person skilled in the art can understand that the mobile phone structure shown in FIG. 5 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The components of the mobile phone are described in detail below with reference to FIG. 5.
The input unit 930 may be configured to receive input digit or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch display screen 933, a multi-biometric recognition device 931, and other input devices 932. For the specific structure of the multi-biometric recognition device 931, reference may be made to the above description, which is not repeated here. The other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (such as volume control keys and power keys), a trackball, a mouse, a joystick, and the like.
The AP 980 is configured to perform the following steps:
detecting whether a user is in a squinting state;
when the detection result of the camera indicates that the user is in the squinting state, lowering the recognition threshold corresponding to the iris recognition operation to obtain a first recognition threshold, and notifying the iris recognition device to perform iris capture;
controlling the iris recognition device to acquire an iris image and return the iris image to the AP;
matching the iris image against a preset iris template, and performing the next unlocking procedure when the match value between the iris image and the preset iris template is greater than the first recognition threshold.
The AP 980 is the control center of the mobile phone. It connects all parts of the whole phone through various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the phone as a whole. Optionally, the AP 980 may include one or more processing units, which may be artificial intelligence chips or quantum chips; preferably, the AP 980 may integrate an application processor, which mainly handles the operating system, user interface, and applications, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the AP 980.
In addition, the memory 920 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The RF circuit 910 may be used for receiving and sending information. Generally, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the global system for mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), and so on.
The mobile phone may further include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the touch display screen according to the brightness of the ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the phone's posture (such as portrait/landscape switching, related games, and magnetometer posture calibration) and in vibration-recognition-related functions (such as a pedometer and tapping); other sensors that may also be configured on the phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 960, a speaker 961, and a microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts collected sound signals into electrical signals, which are received by the audio circuit 960 and converted into audio data; after the audio data is processed by the AP 980, it is sent via the RF circuit 910 to, for example, another mobile phone, or the audio data is output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 5 shows the WiFi module 970, it can be understood that it is not an essential component of the phone and may be omitted as needed without changing the essence of the invention.
The mobile phone further includes a power supply 990 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the AP 980 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiments shown in FIG. 1E or FIG. 2 above, the method flow of each step may be implemented based on the structure of this mobile phone.
In the embodiments shown in FIG. 3 and FIG. 4A to FIG. 4E above, the functions of each unit may be implemented based on the structure of this mobile phone.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any unlocking control method described in the foregoing method embodiments.
An embodiment of the present application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any unlocking control method described in the foregoing method embodiments.
It should be noted that, for brevity, each of the foregoing method embodiments is expressed as a series of action combinations; however, a person skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in another order or simultaneously. In addition, a person skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of units is only a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as a standalone product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the foregoing embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable memory, which may include a flash drive, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to both the specific implementation and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (20)

  1. An electronic device, comprising an iris recognition device, a camera, and an application processor (AP), wherein:
    the camera is configured to detect whether a user is in a squinting state and send the detection result to the AP;
    the AP is configured to, when the detection result of the camera indicates that the user is in the squinting state, lower a recognition threshold corresponding to an iris recognition operation to obtain a first recognition threshold, and notify the iris recognition device to perform iris capture;
    the iris recognition device is configured to acquire an iris image and send the iris image to the AP;
    the AP is further configured to match the iris image against a preset iris template, and to perform the next unlocking procedure when a match value between the iris image and the preset iris template is greater than the first recognition threshold.
  2. The electronic device according to claim 1, wherein, in detecting whether the user is in a squinting state, the camera is specifically configured to:
    determine the area of a current human eye image and, when the area falls within a first preset range, confirm that the user is in the squinting state.
  3. The electronic device according to claim 2, wherein, in lowering the recognition threshold corresponding to the iris recognition operation to obtain the first recognition threshold, the AP is specifically configured to:
    compute a ratio between the area of the current human eye image and the area of a preset human eye image, and lower the recognition threshold corresponding to the iris recognition operation according to the ratio to obtain the first recognition threshold.
  4. The electronic device according to claim 1, wherein, in detecting whether the user is in a squinting state, the camera is specifically configured to:
    acquire a current human eye image, perform contour extraction on the current human eye image to obtain a closed contour formed by a first arc and a second arc, determine a maximum vertical distance between the first arc and the second arc, and confirm that the user is in the squinting state when the maximum vertical distance falls within a second preset range.
  5. The electronic device according to any one of claims 1 to 4, wherein, in acquiring the iris image, the iris recognition device is specifically configured to:
    focus on an iris region and perform zoom processing on the focused iris region to obtain the iris image.
  6. The electronic device according to any one of claims 1 to 5, wherein the electronic device further comprises an ambient light sensor;
    the ambient light sensor is specifically configured to:
    obtain an ambient light intensity value and send the ambient light intensity value to the AP, and, when the ambient light intensity value is greater than a preset light intensity threshold, notify the camera to perform the step of detecting whether the user is in a squinting state.
  7. An unlocking control method, applied to an electronic device comprising an iris recognition device, a camera, and an application processor (AP), the method comprising:
    detecting whether a user is in a squinting state;
    when the user is in the squinting state, lowering a recognition threshold corresponding to an iris recognition operation to obtain a first recognition threshold;
    acquiring an iris image;
    matching the iris image against a preset iris template, and performing the next unlocking procedure when a match value between the iris image and the preset iris template is greater than the first recognition threshold.
  8. The method according to claim 7, wherein detecting whether the user is in a squinting state comprises:
    determining the area of a current human eye image and, when the area falls within a first preset range, confirming that the user is in the squinting state.
  9. The method according to claim 8, wherein lowering the recognition threshold corresponding to the iris recognition operation to obtain the first recognition threshold comprises:
    computing a ratio between the area of the current human eye image and the area of a preset human eye image, and lowering the recognition threshold corresponding to the iris recognition operation according to the ratio to obtain the first recognition threshold.
  10. The method according to claim 7, wherein detecting whether the user is in a squinting state comprises:
    acquiring a current human eye image, performing contour extraction on the current human eye image to obtain a closed contour formed by a first arc and a second arc, determining a maximum vertical distance between the first arc and the second arc, and confirming that the user is in the squinting state when the maximum vertical distance falls within a second preset range.
  11. The method according to any one of claims 7 to 10, wherein acquiring the iris image comprises:
    focusing on an iris region and performing zoom processing on the focused iris region to obtain the iris image.
  12. The method according to any one of claims 7 to 11, wherein the electronic device is further provided with an ambient light sensor, and the method further comprises:
    obtaining an ambient light intensity value through the ambient light sensor and, when the ambient light intensity value is greater than a preset light intensity threshold, performing the step of detecting whether the user is in a squinting state.
  13. An unlocking control apparatus, applied to an electronic device comprising an iris recognition device, a camera, and an application processor (AP), the unlocking control apparatus comprising a detection unit, a lowering unit, a first acquisition unit, a matching unit, and an execution unit, wherein
    the detection unit is configured to detect whether a user is in a squinting state;
    the lowering unit is configured to, when the user is in the squinting state, lower a recognition threshold corresponding to an iris recognition operation to obtain a first recognition threshold;
    the first acquisition unit is configured to acquire an iris image;
    the matching unit is configured to match the iris image against a preset iris template;
    the execution unit is configured to perform the next unlocking procedure when the matching result of the matching unit indicates that a match value between the iris image and the preset iris template is greater than the first recognition threshold.
  14. The apparatus according to claim 13, wherein, in detecting whether the user is in a squinting state, the detection unit is specifically configured to:
    determine the area of a current human eye image and, when the area falls within a first preset range, confirm that the user is in the squinting state.
  15. The apparatus according to claim 14, wherein, in lowering the recognition threshold corresponding to the iris recognition operation to obtain the first recognition threshold, the lowering unit is specifically configured to:
    compute a ratio between the area of the current human eye image and the area of a preset human eye image, and lower the recognition threshold corresponding to the iris recognition operation according to the ratio to obtain the first recognition threshold.
  16. The apparatus according to claim 13, wherein, in detecting whether the user is in a squinting state, the detection unit is specifically configured to:
    acquire a current human eye image, perform contour extraction on the current human eye image to obtain a closed contour formed by a first arc and a second arc, determine a maximum vertical distance between the first arc and the second arc, and confirm that the user is in the squinting state when the maximum vertical distance falls within a second preset range.
  17. The apparatus according to any one of claims 13 to 16, wherein, in acquiring the iris image, the first acquisition unit is specifically configured to:
    focus on an iris region and perform zoom processing on the focused iris region to obtain the iris image.
  18. An electronic device, comprising an iris recognition device, a camera, an application processor (AP), and a memory, as well as one or more programs stored in the memory and configured to be executed by the AP, the programs comprising instructions for the method of any one of claims 7 to 12.
  19. A computer-readable storage medium, configured to store a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 7 to 12.
  20. A computer program product, comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method according to any one of claims 7 to 12.
PCT/CN2018/091073 2017-07-10 2018-06-13 解锁控制方法及相关产品 WO2019011098A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710557795.X 2017-07-10
CN201710557795.XA CN107463818B (zh) 2017-07-10 2017-07-10 解锁控制方法及相关产品

Publications (1)

Publication Number Publication Date
WO2019011098A1 true WO2019011098A1 (zh) 2019-01-17

Family

ID=60544200

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/091073 WO2019011098A1 (zh) 2017-07-10 2018-06-13 解锁控制方法及相关产品

Country Status (2)

Country Link
CN (1) CN107463818B (zh)
WO (1) WO2019011098A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036211A (zh) * 2019-06-03 2020-12-04 Oppo广东移动通信有限公司 终端解锁方法、装置、电子设备和存储介质
US20210281563A1 (en) * 2015-08-28 2021-09-09 At&T Intellectual Property I, L.P. Nullifying biometrics

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107463818B (zh) * 2017-07-10 2020-04-03 Oppo广东移动通信有限公司 解锁控制方法及相关产品
CN109962877B (zh) * 2017-12-14 2021-09-10 上海赛印信息技术股份有限公司 智能化试剂管理方法及系统
US11625473B2 (en) 2018-02-14 2023-04-11 Samsung Electronics Co., Ltd. Method and apparatus with selective combined authentication
CN108519810B (zh) * 2018-03-07 2021-04-09 Oppo广东移动通信有限公司 电子装置、脑电波解锁方法及相关产品
CN108446665B (zh) * 2018-03-30 2020-04-17 维沃移动通信有限公司 一种人脸识别方法和移动终端
CN108512986A (zh) * 2018-04-03 2018-09-07 Oppo广东移动通信有限公司 身份验证方法、电子装置及计算机可读存储介质
CN109753944A (zh) * 2019-01-15 2019-05-14 济南浪潮高新科技投资发展有限公司 一种基于深度三层次的虹膜识别方法
CN114842578B (zh) * 2022-04-26 2024-04-05 深圳市凯迪仕智能科技股份有限公司 智能锁、拍摄控制方法及相关装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104658073A (zh) * 2013-11-20 2015-05-27 鸿富锦精密工业(武汉)有限公司 虹膜钥匙及利用该虹膜钥匙对电子装置解锁的方法
CN105279492A (zh) * 2015-10-22 2016-01-27 北京天诚盛业科技有限公司 虹膜识别的方法和装置
US20160117544A1 (en) * 2014-10-22 2016-04-28 Hoyos Labs Ip Ltd. Systems and methods for performing iris identification and verification using mobile devices
CN106066957A (zh) * 2016-05-30 2016-11-02 广东欧珀移动通信有限公司 一种移动终端的解锁方法、装置和移动终端
CN107463818A (zh) * 2017-07-10 2017-12-12 广东欧珀移动通信有限公司 解锁控制方法及相关产品

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4923910B2 (ja) * 2006-09-22 2012-04-25 富士通株式会社 生体認証装置、制御方法及び制御プログラム
CN103218990A (zh) * 2013-03-29 2013-07-24 深圳市金立通信设备有限公司 一种屏幕亮度的调节方法及终端
CN106056054B (zh) * 2016-05-24 2019-08-09 青岛海信移动通信技术股份有限公司 一种进行指纹识别的方法和终端
CN105912915B (zh) * 2016-05-27 2017-10-24 广东欧珀移动通信有限公司 一种指纹解锁方法及终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104658073A (zh) * 2013-11-20 2015-05-27 鸿富锦精密工业(武汉)有限公司 虹膜钥匙及利用该虹膜钥匙对电子装置解锁的方法
US20160117544A1 (en) * 2014-10-22 2016-04-28 Hoyos Labs Ip Ltd. Systems and methods for performing iris identification and verification using mobile devices
CN105279492A (zh) * 2015-10-22 2016-01-27 北京天诚盛业科技有限公司 虹膜识别的方法和装置
CN106066957A (zh) * 2016-05-30 2016-11-02 广东欧珀移动通信有限公司 一种移动终端的解锁方法、装置和移动终端
CN107463818A (zh) * 2017-07-10 2017-12-12 广东欧珀移动通信有限公司 解锁控制方法及相关产品

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210281563A1 (en) * 2015-08-28 2021-09-09 At&T Intellectual Property I, L.P. Nullifying biometrics
US11658967B2 (en) * 2015-08-28 2023-05-23 At&T Intellectual Property I, L.P. Nullifying biometrics
CN112036211A (zh) * 2019-06-03 2020-12-04 Oppo广东移动通信有限公司 终端解锁方法、装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN107463818B (zh) 2020-04-03
CN107463818A (zh) 2017-12-12

Similar Documents

Publication Publication Date Title
WO2019011098A1 (zh) 解锁控制方法及相关产品
US11074466B2 (en) Anti-counterfeiting processing method and related products
CN107590461B (zh) 人脸识别方法及相关产品
CN108594997B (zh) 手势骨架构建方法、装置、设备及存储介质
CN107423699B (zh) 活体检测方法及相关产品
CN107657218B (zh) 人脸识别方法及相关产品
WO2019052329A1 (zh) 人脸识别方法及相关产品
WO2019020014A1 (zh) 解锁控制方法及相关产品
RU2731370C1 (ru) Способ распознавания живого организма и терминальное устройство
CN107292285B (zh) 虹膜活体检测方法及相关产品
US11055547B2 (en) Unlocking control method and related products
CN107403147B (zh) 虹膜活体检测方法及相关产品
EP3623973B1 (en) Unlocking control method and related product
CN107451454B (zh) 解锁控制方法及相关产品
US11151398B2 (en) Anti-counterfeiting processing method, electronic device, and non-transitory computer-readable storage medium
CN107506708B (zh) 解锁控制方法及相关产品
WO2019001254A1 (zh) 虹膜活体检测方法及相关产品
US10671713B2 (en) Method for controlling unlocking and related products
WO2019015574A1 (zh) 解锁控制方法及相关产品
CN107798662B (zh) 一种图像处理方法及移动终端
EP3432206A1 (en) Method and mobile terminal for processing image and storage medium
US11200437B2 (en) Method for iris-based living body detection and related products
WO2019015432A1 (zh) 识别虹膜活体的方法及相关产品
CN112989890A (zh) 图像检测方法、装置及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18832849

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18832849

Country of ref document: EP

Kind code of ref document: A1