CN112949353A - Iris silent liveness detection method and apparatus, readable storage medium, and device

Iris silent liveness detection method and apparatus, readable storage medium, and device

Info

Publication number
CN112949353A
CN112949353A
Authority
CN
China
Prior art keywords
iris
classification
living body
cnn
layer
Prior art date
Legal status
Pending
Application number
CN201911256804.7A
Other languages
Chinese (zh)
Inventor
周军
王洪
Current Assignee
Beijing Eyes Intelligent Technology Co ltd
Beijing Eyecool Technology Co Ltd
Original Assignee
Beijing Eyes Intelligent Technology Co ltd
Beijing Eyecool Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Eyes Intelligent Technology Co ltd and Beijing Eyecool Technology Co Ltd
Priority to CN201911256804.7A
Publication of CN112949353A
Legal status: Pending

Classifications

    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Eye characteristics: preprocessing; feature extraction
    • G06V40/197: Eye characteristics: matching; classification
    • G06V40/45: Spoof detection, e.g. liveness detection: detection of the body part being alive
    • G06F18/214: Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/241: Pattern recognition: classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045: Neural networks: combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an iris silent liveness detection method and apparatus, a readable storage medium, and a device, belonging to the field of iris recognition. The method comprises the following steps: acquiring an iris image and preprocessing it; classifying the iris image into one of several blur levels according to its degree of blur using an iris classification CNN; and performing liveness detection on the iris image using the liveness detection CNN corresponding to the blur level assigned to the image, to obtain a liveness detection result; wherein each blur level corresponds to one liveness detection CNN. The method performs iris liveness detection without the user's awareness, offers high accuracy, high efficiency, and user friendliness, and is well suited to wide use and deployment.

Description

Iris silent liveness detection method and apparatus, readable storage medium, and device
Technical Field
The invention relates to the field of iris recognition, and in particular to an iris silent liveness detection method and apparatus, a readable storage medium, and a device.
Background
In recent years, identity recognition technologies and products based on biometric features such as fingerprints, faces, and irises have gained ever wider consumer acceptance and market application. However, the input to these technologies is an image of the relevant body part captured by an acquisition device, which exposes them to spoofing attacks with fake body parts. Research into anti-counterfeiting techniques that judge whether a captured image comes from a real living subject is therefore essential.
Iris recognition, being unique, stable, and unalterable, is recognized as one of the most reliable and secure biometric identification technologies. As market demands evolve, iris acquisition conditions are becoming more open; with mobile-phone iris applications in particular, image quality cannot be controlled stably, and fake irises can more easily bypass traditional iris anti-counterfeiting algorithms.
At present, the anti-counterfeiting technology of iris recognition systems faces the following main threats:
1. Eye images: iris photos and videos displayed on a mobile-device screen, iris photos printed on paper, and the like.
2. Contact lenses printed with iris texture, cosmetic contact lenses, and the like.
For attacks that play pictures or videos on a mobile-device screen in front of the acquisition lens, an iris acquisition device equipped with a near-infrared fill light defeats the attack inherently: under near-infrared illumination the electronic screen images as nearly black, so the played iris picture or video cannot be displayed effectively. The most common and easily produced attacks on the market are therefore printed iris photos and contact lenses printed with iris texture.
At present, iris anti-counterfeiting techniques fall mainly into the following four categories:
1. Anti-counterfeiting based on iris tremor: driven by the nervous system, the iris undergoes involuntary tremor, and detecting this tremor indicates whether the subject is a living body.
This approach must detect very fine iris tremor over a period of time, places extremely high demands on iris image quality, requires a high-precision camera, and is prone to misjudgment caused by head movement.
2. Anti-counterfeiting relying on user cooperation, divided into passive cooperation and active cooperation.
Passive cooperation: the hardware applies illumination of different intensities or wavelengths to induce pupil contraction, and liveness is judged by detecting the pupil contraction and the change in size of the light spot formed on the iris.
Active cooperation: the system randomly issues a predefined gaze trajectory, prompts the user to move the eyes along it, and judges liveness from how closely the actual gaze trajectory matches the prescribed one.
Passive cooperation requires extra hardware support, is difficult to standardize, and is easily affected by the environment (natural illumination intensity, lens reflections, and so on); folding and shaking a printed iris photo can fake the changes in iris and light-spot size. Active cooperation is not user friendly, generally requires a long acquisition time and additional learned skill, and a printed iris can still be used to imitate the gaze trajectory.
3. Fourier spectrum analysis: the iris image is transformed to the frequency domain with the Fourier transform, and liveness is judged from the spectral distribution.
This approach is easily affected by blur: the spectral distributions of a high-definition live iris image and of a blurred capture of a fake iris are difficult to distinguish.
4. Designing a feature extractor for live and fake iris images and classifying liveness from those features.
This is the most common anti-counterfeiting approach. The usual procedure is to hand-craft texture features such as LBP and Gabor features from prior knowledge and train a live-versus-fake iris classifier with a machine-learning method, generally an SVM. Hand-crafting features is constrained by the designer's prior knowledge and requires repeated experiments and adjustment to find effective features, which takes a long time. Moreover, the appearance and quality of iris images are bound to the acquisition device: hand-crafted features usually work well only for images captured under specific conditions on specific devices; once the conditions or the device change, accuracy drops rapidly and generalization is extremely weak. A sketch of this traditional baseline follows.
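For illustration only, the traditional baseline just described (hand-crafted LBP texture features fed to an SVM) can be sketched in Python as follows; the libraries and parameters are assumptions of the sketch, not part of the invention.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_img: np.ndarray, p: int = 8, r: float = 1.0) -> np.ndarray:
    # Uniform LBP histogram as a fixed-length texture descriptor.
    lbp = local_binary_pattern(gray_img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def train_baseline(images, labels) -> SVC:
    # images: grayscale iris images; labels: 1 = live iris, 0 = fake iris.
    feats = np.stack([lbp_histogram(img) for img in images])
    clf = SVC(kernel="rbf")  # the SVM classifier mentioned above
    clf.fit(feats, labels)
    return clf

Such a pipeline must be re-tuned whenever the capture conditions or the acquisition device change, which is exactly the generalization weakness noted above.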
Disclosure of Invention
In order to solve the above technical problems, the invention provides an iris silent liveness detection method and apparatus, a readable storage medium, and a device.
The technical solution provided by the invention is as follows:
In a first aspect, the invention provides an iris silent liveness detection method, the method comprising:
acquiring an iris image and preprocessing it;
classifying the iris image into one of several blur levels according to its degree of blur using an iris classification CNN;
performing liveness detection on the iris image using the liveness detection CNN corresponding to the blur level assigned to the image, to obtain a liveness detection result; wherein each blur level corresponds to one liveness detection CNN.
Further, the iris classification CNN is obtained by the following training procedure:
acquiring a training sample set comprising a number of training samples;
classifying the training samples into a first through an Nth blur level according to each sample's distance from the focal point at capture time, the distance increasing from the first to the Nth blur level;
moving training samples that exhibit motion blur to the next blur level, with motion-blurred samples of the last blur level remaining in the last blur level;
training the iris classification CNN on the training sample set.
Further, the iris classification CNN comprises several convolution layers, activation layers, down-sampling layers, a fully connected layer, a first classification layer, and a second classification layer;
when N is 3, classifying the iris image into one of several blur levels according to its degree of blur using the iris classification CNN comprises:
passing the iris image through the convolution, activation, down-sampling, and fully connected layers of the iris classification CNN to extract blur features;
constructing a first blur score from the blur features in the first classification layer and judging whether it exceeds a first blur threshold; if so, the result of the first classification layer is 1, otherwise 0;
constructing a second blur score from the blur features in the second classification layer and judging whether it exceeds a second blur threshold; if so, the result of the second classification layer is 1, otherwise 0;
adding the results of the first and second classification layers and then adding 1 to obtain the blur level of the iris image.
Further, the liveness detection CNNs are obtained by the following training procedure:
training the liveness detection CNN corresponding to each blur level on training samples of that blur level, respectively.
Further, each liveness detection CNN comprises several convolution layers, activation layers, down-sampling layers, a fully connected layer, and a classification layer;
performing liveness detection on the iris image using the liveness detection CNN corresponding to its blur level, to obtain a liveness detection result, comprises:
passing the iris image through the convolution, activation, down-sampling, and fully connected layers of the liveness detection CNN to extract liveness features;
constructing a liveness score from the liveness features in the classification layer and judging whether it exceeds the liveness threshold of the liveness detection CNN; if so, the liveness detection result is pass, otherwise fail.
In a second aspect, the invention provides an iris silent liveness detection apparatus corresponding to the method of the first aspect, the apparatus comprising:
an acquisition module for acquiring an iris image and preprocessing it;
a classification module for classifying the iris image into one of several blur levels according to its degree of blur using an iris classification CNN;
a liveness detection module for performing liveness detection on the iris image using the liveness detection CNN corresponding to the blur level assigned to the image, to obtain a liveness detection result; wherein each blur level corresponds to one liveness detection CNN.
Further, the iris classification CNN is obtained by training with the following modules:
a sample set acquisition module for acquiring a training sample set comprising a number of training samples;
a first grading module for classifying the training samples into a first through an Nth blur level according to each sample's distance from the focal point at capture time, the distance increasing from the first to the Nth blur level;
a second grading module for moving training samples that exhibit motion blur to the next blur level, motion-blurred samples of the last blur level remaining in the last blur level;
a first training module for training the iris classification CNN on the training sample set.
Further, the iris classification CNN comprises several convolution layers, activation layers, down-sampling layers, a fully connected layer, a first classification layer, and a second classification layer;
when N is 3, the classification module comprises:
a first extraction unit for passing the iris image through the convolution, activation, down-sampling, and fully connected layers of the iris classification CNN to extract blur features;
a first classification unit for constructing a first blur score from the blur features in the first classification layer and judging whether it exceeds a first blur threshold; if so, the result of the first classification layer is 1, otherwise 0;
a second classification unit for constructing a second blur score from the blur features in the second classification layer and judging whether it exceeds a second blur threshold; if so, the result of the second classification layer is 1, otherwise 0;
an accumulation unit for adding the results of the first and second classification layers and then adding 1 to obtain the blur level of the iris image.
Further, the liveness detection CNNs are obtained by training with the following module:
a second training module for training the liveness detection CNN corresponding to each blur level on training samples of that blur level, respectively.
Further, each liveness detection CNN comprises several convolution layers, activation layers, down-sampling layers, a fully connected layer, and a classification layer;
the liveness detection module comprises:
a second extraction unit for passing the iris image through the convolution, activation, down-sampling, and fully connected layers of the liveness detection CNN to extract liveness features;
a third classification unit for constructing a liveness score from the liveness features in the classification layer and judging whether it exceeds the liveness threshold of the liveness detection CNN; if so, the liveness detection result is pass, otherwise fail.
In a third aspect, the invention provides a computer-readable storage medium for iris silent liveness detection, comprising a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the iris silent liveness detection method of the first aspect.
In a fourth aspect, the invention provides a device for iris silent liveness detection, comprising at least one processor and a memory storing computer-executable instructions; when the processor executes the instructions, the steps of the iris silent liveness detection method of the first aspect are implemented.
The invention has the following beneficial effects:
1. The method has low requirements on iris image quality: no high-precision camera is needed, and liveness detection can be performed on low-quality iris images, meeting the market demand for low-quality iris recognition; the method is especially suitable for iris liveness detection on mobile terminals such as phones.
2. The method performs silent iris liveness detection, without the user's awareness: it relies on no active or passive cooperation from the user and needs no additional hardware support, making it user friendly. The network models operate directly on a single iris image and discriminate strongly on that single image.
3. The method grades iris quality with the iris classification CNN and then applies a different liveness detection CNN per quality level, so the features extracted by each liveness detection CNN are more targeted and the judgment more accurate.
4. The invention solves iris liveness detection as a feature classification problem. Unlike traditional hand-crafted features with machine learning, it trains deep networks (the liveness detection CNNs) on large numbers of live and fake irises, automatically learning features that effectively distinguish real from fake irises. Generalization is strong, the method adapts to a variety of iris recognition devices, and application and deployment are convenient.
Drawings
FIG. 1 is a flow chart of the iris silent liveness detection method of the invention;
FIG. 2 is a schematic diagram of an example of the iris silent liveness detection method of the invention;
FIG. 3 is a schematic diagram of an example iris classification CNN;
FIG. 4 is a schematic diagram of an example liveness detection CNN;
FIG. 5 is a schematic diagram of the iris silent liveness detection apparatus of the invention;
FIG. 6 shows example iris images of levels C1, C2, and C3;
FIG. 7 is a schematic diagram of the preprocessing.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
The invention provides an iris silent liveness detection method; as shown in FIGS. 1-2, the method comprises the following steps:
Step S100: acquire an iris image and preprocess it.
This step does not limit how the iris image is obtained: an iris image may be captured with an iris acquisition device, or a frame may be selected from a video stream captured by one.
The preprocessing of the invention may include iris detection, iris localization, iris normalization, and the like, as shown in FIG. 7.
The iris image obtained from the acquisition device has a high resolution but a small iris proportion, containing mostly periocular and background information. Iris detection accurately locates the iris region, removing the interference of non-iris regions and improving algorithm efficiency. The invention preferably uses the AdaBoost algorithm for iris detection and then preferably an integro-differential operator for iris localization to obtain the outer circle of the iris, as shown in FIG. 7.
Iris normalization crops the iris around the center of the located outer circle, extending 1.5 times the outer-circle radius up, down, left, and right, with missing pixels padded with 0. The cropped iris image is finally scaled to a fixed size (preferably 100 x 100) as the final iris image, as shown in FIG. 7.
The iris detection algorithm of the invention is not limited to AdaBoost: any algorithm yielding an approximate iris position may be used, for example gray-level projection or Hough transform methods, or deep-learning methods such as RCNN, Fast RCNN, and SSD. Likewise, iris localization is not limited to the integro-differential operator and may use SDM, the Hough transform, a deep-learning method, and so on. The resulting normalized iris image serves as the data input of both the iris classification CNN and the liveness detection CNNs.
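For illustration, a minimal Python sketch of the normalization step described above: crop a square extending 1.5 times the outer-circle radius around the located iris center, pad with 0 where the crop exceeds the image, and scale to 100 x 100. It assumes detection and localization have already produced the center (cx, cy) and outer radius r; the OpenCV usage is an assumption of the sketch.

import cv2
import numpy as np

def normalize_iris(gray: np.ndarray, cx: int, cy: int, r: float,
                   out_size: int = 100) -> np.ndarray:
    half = int(round(1.5 * r))  # 1.5 times the outer radius on each side
    # Zero-pad so the crop never leaves the image (missing pixels become 0).
    padded = cv2.copyMakeBorder(gray, half, half, half, half,
                                cv2.BORDER_CONSTANT, value=0)
    # The center shifts by the padding amount; crop a 2*half square around it.
    crop = padded[cy: cy + 2 * half, cx: cx + 2 * half]
    return cv2.resize(crop, (out_size, out_size))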
Step S200: the iris image is classified into one of several blur levels according to the degree of blur using the iris classification CNN (Convolutional Neural Networks).
In this step, features are extracted from the iris image using the iris classification CNN, and a ambiguity classification is performed. For example, the iris classification CNN may classify the iris image into three classes, a first blur class, a second blur class, and a third blur class, each representing a different degree of blur (i.e., quality of the iris image) of the iris image.
Step S300: performing living body detection on the iris image by using a living body detection CNN corresponding to the fuzzy grade classified by the iris image to obtain a living body detection result; wherein each blur level corresponds to one in vivo test CNN.
If the iris classification CNN classifies the iris image into three fuzzy levels, each fuzzy level corresponds to one living body detection CNN, and the three living body detections CNN are total. The iris classification CNN and the living body detection CNN need to be trained before use, and during training, the iris classification CNN and the three living body detection CNN are trained by using respective training sets.
And if the iris classification CNN classifies the iris image into a first fuzzy grade, performing living body detection on the iris image by using the living body detection CNN corresponding to the first fuzzy grade. When the living body detection is carried out, the living body detection CNN extracts the characteristics of the iris image, and carries out the living body detection classification to obtain the living body detection result.
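A minimal sketch of this two-stage flow (steps S200-S300); the blur grader, the per-level liveness CNNs, and the thresholds are assumed given, indexed by blur level, as described above.

def detect_liveness(iris_img, classify_blur_level, liveness_cnns, thresholds):
    level = classify_blur_level(iris_img)    # e.g. 1, 2 or 3
    score = liveness_cnns[level](iris_img)   # score from that level's CNN
    return score > thresholds[level]         # True means pass (live iris)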
Grading iris images by degree of blur with the iris classification CNN, and then performing liveness detection with a different liveness detection CNN per blur level, yields the following beneficial effects:
1. The method has low requirements on iris image quality: no high-precision camera is needed, and liveness detection can be performed on low-quality iris images, meeting the market demand for low-quality iris recognition; the method is especially suitable for iris liveness detection on mobile terminals such as phones.
2. The method performs silent iris liveness detection: it relies on no active or passive cooperation from the user and needs no additional hardware support, making it user friendly. The network models operate directly on a single iris image and discriminate strongly on that single image.
3. The method grades iris quality with the iris classification CNN and then applies a different liveness detection CNN per quality level, so the features extracted by each liveness detection CNN are more targeted and the judgment more accurate.
4. The invention solves iris liveness detection as a feature classification problem. Unlike traditional hand-crafted features with machine learning, it trains deep networks (the liveness detection CNNs) on large numbers of live and fake irises, automatically learning features that effectively distinguish real from fake irises. Generalization is strong, the method adapts to a variety of iris recognition devices, and application and deployment are convenient.
As an improvement of the invention, the iris classification CNN is trained as follows:
Step S100': acquire a training sample set comprising a number of training samples.
As an example, the training sample set may include public databases, such as the Chinese Academy of Sciences public database and the LG public database, together with a self-collected database gathered with an iris acquisition device. The self-collected database contains real iris data; fake iris data printed from the public databases and from the self-collected real iris data; and a cosmetic contact lens data set containing iris texture (worn by real people).
When collecting data with the iris acquisition device, real and fake irises are treated alike: the actual usage scenario is simulated, and iris image data of several sharpness levels are collected.
Step S200': and classifying the training samples into a first fuzzy grade to an Nth fuzzy grade according to the distance between the training samples and the focus when the training samples are shot, wherein the distance between the training samples and the focus gradually increases from the first fuzzy grade to the Nth fuzzy grade.
The invention considers the property of the fixed focus lens of the iris collecting instrument, the image near the focus is clear, and the image far away from the focus is fuzzy, namely the defocusing fuzzy. In view of the above, the present invention divides the iris ambiguity into N levels. From the first fuzzy grade to the Nth fuzzy grade, the distance between the training sample and the focus is gradually increased and further gradually becomes fuzzy when the training sample is shot. The first blur level is closest to the focus and clearest, the latter blur level is farther from the focus than the former blur level, so that the latter blur level is blurry than the former blur level, and the last blur level is farthest from the focus and is most blurry.
The iris image is blurred to a certain degree, and when the iris image is preprocessed, the conditions of missed detection and unavailable positioning exist in iris detection and iris positioning, namely the iris image is unavailable, and living body detection is not needed. Thus, the present invention may set the blur degree of the last blur level such that the last blur level may be detected and located (available) by iris detection and iris localization, while iris images that are more blurred than the last blur level are not available. I.e., the last blur level, is just available, so that the inventive blur grading can cover all the range of blur levels of available iris images.
For example, when N is equal to 3, the first blur level C1 is clearest and near focus imaging. The second blur level C2 is between sharp and blurred, being imaged slightly away from focus. The third blur level C3 is the most blurred, imaged away from focus.
In the present invention, when determining the first blur level, the second blur level, …, and the nth blur level, the distance from the focus (i.e., the defocus distance, i.e., the focus vicinity, the focus slightly away, and the focus away) may be set according to actual needs.
Step S300': and classifying the training samples generating the motion blur in the previous blur level to the next blur level, and classifying the training samples generating the motion blur in the last blur level to the last blur level.
The iris image is affected by eyeball rotation, blinking and face shaking, and motion blur is easily generated. If the iris image of a certain blur level generates motion blur, its sharpness is degraded, and the latter blur level is blurred compared to the former blur level, and thus it is classified into the latter blur level. The training samples in the last blur level that produce motion blur are still classified into the last blur level, since there are no more blurred levels than the last blur level.
Taking N equal to 3 as an example, if the iris image of the first blur level C1 generates motion blur, its sharpness is reduced, and thus the motion-blurred image of the first blur level imaged near the focus is divided into the second blur level C2. Similarly, a motion-blurred image of the second blur level imaged slightly away from the focus is classified into a C3 level. The motion-blurred image of the third blur level imaged away from the focus is also classified into a C3 level.
The method uses the artificial calibration to cooperate with the defocus distance of the collected object, divides the ambiguity of the whole training sample set into N levels, and gradually increases the ambiguity from the first ambiguity level to the Nth ambiguity level.
For example, when N is 3, C1 (clear), C2 (clear), and C3 (fuzzy). And the motion-blurred image generated near the focal distance is classified into a C2 level, and the motion-blurred image farther from the focal distance is classified into a C3 level.
FIG. 6 shows an example of iris images at C1, C2, and C3 levels, a first action being a live iris and a second action being a prosthetic iris; the first column is C1 level (near focus), the second column is C2 level (near focus motion blur), the third column is C3 level (far focus), and the fourth column is C3 level (far focus motion blur).
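A minimal sketch of this labeling rule, assuming each sample carries a base level derived from its defocus distance and a motion-blur flag:

def blur_level_label(base_level: int, has_motion_blur: bool, n_levels: int = 3) -> int:
    # base_level: 1 (near focus, sharpest) .. n_levels (farthest, most blurred).
    level = base_level + 1 if has_motion_blur else base_level
    return min(level, n_levels)  # motion blur at the last level stays there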
The calibrated training samples are normalized with the preprocessing described above, and the normalized iris images serve as the input of the iris classification CNN.
Step S400': train the iris classification CNN on the training sample set.
The iris classification CNN is a lightweight convolutional neural network, fast to compute and able to meet real-time detection requirements. It comprises several convolution layers, activation layers, down-sampling layers, a fully connected layer, and a first and a second classification layer, both binary.
A preferred example of the iris classification CNN is shown in FIG. 3: three convolution layers, each followed by a BN layer, a ReLU activation layer, and a MaxPooling down-sampling layer, then a fully connected layer, and finally two softmax binary classification heads. The iris classification CNN of the invention is not limited to the structure of FIG. 3; any other realizable network structure may be used.
Based on an iris classification CNN of this structure, and taking N = 3 as an example, the invention converts the ordinal regression problem over the 3 calibrated blur levels into 2 binary classification subproblems during training.
During training, the samples are scaled to 32 x 32 as the input of the iris classification CNN; a sketch of such a network follows.
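A minimal PyTorch sketch of a classifier with the FIG. 3 layout: three conv/BN/ReLU/MaxPooling blocks, a fully connected layer, and two binary heads. The intermediate channel widths (16/32/64) are assumptions of the sketch, consistent with the feature-map sizes described in step S210 below.

import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class IrisBlurCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 16),   # 32x32 -> 16x16
            conv_block(16, 32),  # 16x16 -> 8x8
            conv_block(32, 64),  # 8x8   -> 4x4
        )
        self.fc = nn.Linear(64 * 4 * 4, 64)  # 64-dimensional blur feature
        self.head1 = nn.Linear(64, 2)        # task 1: blur level > C1 ?
        self.head2 = nn.Linear(64, 2)        # task 2: blur level > C2 ?

    def forward(self, x: torch.Tensor):
        f = self.fc(self.features(x).flatten(1))
        return self.head1(f), self.head2(f)  # softmax is applied downstream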
Each blur level k is associated with a threshold C_k. The k-th binary subtask judges whether the blur score y_k exceeds C_k, predicting a label in {0, 1}:

ŷ_k = 1 if y_k > C_k, and ŷ_k = 0 otherwise,

where y_k is the softmax output of the k-th classification task and C_k is the threshold of the k-th blur level. When training on the sample set, the first classification task judges whether an iris sample exceeds level C1: all C1 samples are labeled 0, and all C2 and C3 samples are labeled 1. The second classification task judges whether a sample exceeds level C2: all C1 and C2 samples are labeled 0, and all C3 samples are labeled 1.
During training, each binary classification subtask uses a cross-entropy loss, and the overall loss is the sum of the losses of all the binary subtasks, as sketched below.
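For example, with N = 3 the ordinal labels and the summed loss can be written as follows (a sketch matching the two-head network above):

import torch
import torch.nn.functional as F

def ordinal_targets(levels: torch.Tensor):
    # levels in {1, 2, 3} -> labels of the two "greater than Ck" subtasks.
    t1 = (levels > 1).long()  # 0 for C1 samples, 1 for C2 and C3 samples
    t2 = (levels > 2).long()  # 0 for C1 and C2 samples, 1 for C3 samples
    return t1, t2

def total_loss(logits1, logits2, levels):
    t1, t2 = ordinal_targets(levels)
    return F.cross_entropy(logits1, t1) + F.cross_entropy(logits2, t2)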
After training, the iris classification CNN is used online to grade iris blur levels.
When N is 3, step S200 comprises:
Step S210: pass the iris image through the convolution, activation, down-sampling, and fully connected layers of the iris classification CNN to extract blur features.
Taking the iris classification CNN of FIG. 3 as an example, the 32 x 32 x 1 input iris image (size 32 x 32, 1 channel) passes through a 3 x 3 x 16 convolution layer (kernel size 3 x 3, 16 channels) with stride 1 and its down-sampling to reach a 16 x 16 x 16 feature map (size 16 x 16, 16 channels). Continuing by analogy, the fully connected layer yields a 1 x 1 x 64 blur feature (size 1 x 1, 64 channels).
Step S220: construct a first blur score from the blur features in the first classification layer and judge whether it exceeds the first blur threshold; if so, the result of the first classification layer is 1, otherwise 0.
A score above the first blur threshold indicates the second or third blur level, giving 1; otherwise the image is of the first blur level, giving 0.
Step S230: construct a second blur score from the blur features in the second classification layer and judge whether it exceeds the second blur threshold; if so, the result of the second classification layer is 1, otherwise 0.
A score above the second blur threshold indicates the third blur level, giving 1; otherwise the image is of the first or second blur level, giving 0.
Step S240: add the results of the first and second classification layers and then add 1 to obtain the blur level of the iris image.
After the two classification layers, the results are summed: the first blur level gives 0, the second gives 1, and the third gives 2; adding 1 yields the blur level of the iris image.
Compared with directly training a two- or three-class blur classifier, the invention adopts an ordinal regression model that converts the three-class problem (C1, C2, C3) into two binary problems, exploiting the continuity of iris blur: the output of each binary task indicates whether the image exceeds the current blur level (0 means no, 1 means yes), and the sum of the two binary results plus 1 gives the final level, as sketched below.
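A minimal sketch of this online grading (steps S210-S240), assuming the two-head network sketched above; c1 and c2 stand for the first and second blur thresholds.

import torch

@torch.no_grad()
def classify_blur_level(model, img, c1: float = 0.5, c2: float = 0.5) -> int:
    # img: a 1 x 1 x 32 x 32 normalized iris tensor.
    logits1, logits2 = model(img)
    s1 = torch.softmax(logits1, dim=1)[0, 1].item()  # first blur score
    s2 = torch.softmax(logits2, dim=1)[0, 1].item()  # second blur score
    return int(s1 > c1) + int(s2 > c2) + 1           # 1, 2 or 3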
As another improvement of the invention, each liveness detection CNN is a lightweight convolutional neural network, fast to compute and able to meet real-time detection requirements, comprising several convolution layers, activation layers, down-sampling layers, a fully connected layer, and a classification layer. The liveness detection CNNs are trained as follows:
Step S100'': train the liveness detection CNN corresponding to each blur level on training samples of that blur level, respectively.
Based on the foregoing structure of the liveness detection CNN, step S300 comprises:
Step S310: pass the iris image through the convolution, activation, down-sampling, and fully connected layers of the liveness detection CNN to extract liveness features.
Step S320: construct a liveness score from the liveness features in the classification layer and judge whether it exceeds the liveness threshold of the liveness detection CNN; if so, the liveness detection result is pass, otherwise fail.
Illustratively, the invention has three liveness detection CNN models: iris data of each blur level is fed into the corresponding liveness detection CNN to extract liveness features, a softmax classification layer computes the liveness score from those features, and comparison with a given threshold decides whether the iris is a live iris or a fake iris.
The three liveness detection CNNs preferably use the same network structure and differ only in training data. The network input size is preferably 64 x 64. An example liveness detection CNN structure is shown in FIG. 4: four convolution layers, each followed by a BN layer, a ReLU activation layer, and a MaxPooling down-sampling layer, then one fully connected layer extracting a 128-dimensional feature vector, and finally softmax computing the liveness score. A sketch of such a network follows.
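A minimal PyTorch sketch of a network with the FIG. 4 layout: four conv/BN/ReLU/MaxPooling blocks on a 64 x 64 input, a fully connected layer producing the 128-dimensional feature vector, and a softmax liveness score. The channel widths are assumptions of the sketch; the three per-level models share this structure and differ only in training data.

import torch
import torch.nn as nn

class IrisLivenessCNN(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [1, 16, 32, 64, 128]  # assumed channel widths
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)  # 64x64 -> 4x4
        self.fc = nn.Linear(128 * 4 * 4, 128)   # 128-dimensional feature
        self.head = nn.Linear(128, 2)           # fake / live logits

    def liveness_score(self, x: torch.Tensor) -> torch.Tensor:
        f = self.fc(self.features(x).flatten(1))
        return torch.softmax(self.head(f), dim=1)[:, 1]  # P(live iris)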
The liveness detection CNN of the invention is not limited to the structure shown in FIG. 4; any other realizable network structure may be used.
Example 2:
An embodiment of the invention provides an iris silent liveness detection apparatus; as shown in FIG. 5, the apparatus comprises:
an acquisition module 10 for acquiring an iris image and preprocessing it;
a classification module 20 for classifying the iris image into one of several blur levels according to its degree of blur using the iris classification CNN;
a liveness detection module 30 for performing liveness detection on the iris image using the liveness detection CNN corresponding to the blur level assigned to the image, to obtain a liveness detection result; wherein each blur level corresponds to one liveness detection CNN.
Grading iris images by degree of blur with the iris classification CNN, and then performing liveness detection with a different liveness detection CNN per blur level, yields the following beneficial effects:
1. The apparatus has low requirements on iris image quality: no high-precision camera is needed, and liveness detection can be performed on low-quality iris images, meeting the market demand for low-quality iris recognition; the apparatus is especially suitable for iris liveness detection on mobile terminals such as phones.
2. The apparatus performs silent iris liveness detection: it relies on no active or passive cooperation from the user and needs no additional hardware support, making it user friendly. The network models operate directly on a single iris image and discriminate strongly on that single image.
3. The apparatus grades iris quality with the iris classification CNN and then applies a different liveness detection CNN per quality level, so the features extracted by each liveness detection CNN are more targeted and the judgment more accurate.
4. The invention solves iris liveness detection as a feature classification problem. Unlike traditional hand-crafted features with machine learning, it trains deep networks (the liveness detection CNNs) on large numbers of live and fake irises, automatically learning features that effectively distinguish real from fake irises. Generalization is strong, the method adapts to a variety of iris recognition devices, and application and deployment are convenient.
As an improvement of the invention, the iris classification CNN is obtained by training with the following modules:
a sample set acquisition module for acquiring a training sample set comprising a number of training samples;
a first grading module for classifying the training samples into a first through an Nth blur level according to each sample's distance from the focal point at capture time, the distance increasing from the first to the Nth blur level;
a second grading module for moving training samples that exhibit motion blur to the next blur level, motion-blurred samples of the last blur level remaining in the last blur level;
a first training module for training the iris classification CNN on the training sample set.
The iris classification CNN of the invention comprises several convolution layers, activation layers, down-sampling layers, and a fully connected layer, as well as a first and a second classification layer.
Accordingly, when N is 3, the aforementioned classification module comprises:
a first extraction unit for passing the iris image through the convolution, activation, down-sampling, and fully connected layers of the iris classification CNN to extract blur features;
a first classification unit for constructing a first blur score from the blur features in the first classification layer and judging whether it exceeds a first blur threshold; if so, the result of the first classification layer is 1, otherwise 0;
a second classification unit for constructing a second blur score from the blur features in the second classification layer and judging whether it exceeds a second blur threshold; if so, the result of the second classification layer is 1, otherwise 0;
an accumulation unit for adding the results of the first and second classification layers and then adding 1 to obtain the blur level of the iris image.
As another improvement of the invention, the liveness detection CNNs are obtained by training with the following module:
a second training module for training the liveness detection CNN corresponding to each blur level on training samples of that blur level, respectively.
Further, each liveness detection CNN comprises several convolution layers, activation layers, down-sampling layers, a fully connected layer, and a classification layer.
Accordingly, the aforementioned liveness detection module comprises:
a second extraction unit for passing the iris image through the convolution, activation, down-sampling, and fully connected layers of the liveness detection CNN to extract liveness features;
a third classification unit for constructing a liveness score from the liveness features in the classification layer and judging whether it exceeds the liveness threshold of the liveness detection CNN; if so, the liveness detection result is pass, otherwise fail.
The apparatus provided by this embodiment of the invention shares its implementation principle and technical effects with method embodiment 1; for brevity, where this apparatus embodiment is silent, refer to the corresponding content of method embodiment 1. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus and units described above may all refer to the corresponding processes in method embodiment 1 and are not repeated here.
Example 3:
The method of embodiment 1 above can implement its service logic as a computer program recorded on a storage medium; a computer reads and executes the program to achieve the effects of the solution of embodiment 1. Accordingly, the invention also provides a computer-readable storage medium for iris silent liveness detection, comprising a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the iris silent liveness detection method of embodiment 1.
The method performs iris liveness detection without the user's awareness, offers high accuracy, high efficiency, and user friendliness, and is well suited to wide use and deployment.
The storage medium may include a physical device for storing information; typically the information is digitized and then stored in electrical, magnetic, or optical media. The storage medium may include: devices that store information electrically, such as various kinds of memory (RAM, ROM, and so on); devices that store information magnetically, such as hard disks, floppy disks, magnetic tapes, magnetic-core memories, magnetic-bubble memories, and USB drives; and devices that store information optically, such as CDs or DVDs. Of course, other kinds of readable storage media exist, such as quantum memories and graphene memories.
The device described above may also include other implementations in accordance with the description of method embodiment 1. The specific implementation manner may refer to the description of the related method embodiment 1, and is not described in detail here.
Example 4:
The invention also provides a device for iris silent liveness detection, which may be a standalone computer or may comprise an actual operating device using one or more of the methods or apparatus of one or more embodiments of this specification. The device for iris silent liveness detection may comprise at least one processor and a memory storing computer-executable instructions which, when executed by the processor, implement the steps of the iris silent liveness detection method of any one or more of the embodiments 1 above.
The method performs iris liveness detection without the user's awareness, offers high accuracy, high efficiency, and user friendliness, and is well suited to wide use and deployment.
The above description of the device according to the method or apparatus embodiment may also include other implementation manners, and a specific implementation manner may refer to the description of related method embodiment 1, which is not described in detail herein.
It should be noted that, the above-mentioned apparatus or system in this specification may also include other implementation manners according to the description of the related method embodiment, and a specific implementation manner may refer to the description of the method embodiment, which is not described herein in detail. The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class, storage medium + program embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, when implementing one or more of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, etc. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
As noted above, the embodiments in this specification are described progressively, with identical or similar parts cross-referenced among embodiments and each embodiment focusing on its differences from the others. The system embodiment, being substantially similar to the method embodiment, is described more briefly; for relevant details, refer to the corresponding parts of the method embodiment. In this specification, reference to "one embodiment," "some embodiments," "an example," "a specific example," "some examples," and the like means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the specification. Such terms do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in any suitable manner in one or more embodiments or examples. Furthermore, those skilled in the art may combine different embodiments or examples described in this specification, and features thereof, provided they do not contradict one another.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present invention, given to illustrate its technical solutions rather than to limit them, and the scope of protection of the present invention is not restricted to them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the present invention and are intended to be covered by its scope of protection. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An iris silence living body detection method, the method comprising:
acquiring an iris image and preprocessing the iris image;
classifying the iris image into one of a plurality of blur levels according to its degree of blur by using an iris classification CNN;
performing living body detection on the iris image by using the living body detection CNN corresponding to the blur level into which the iris image is classified, to obtain a living body detection result; wherein each blur level corresponds to one living body detection CNN.
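
For illustration only (not part of the claims), the two-stage dispatch of claim 1 could be wired up as in the following minimal PyTorch sketch. The network bodies, the 64x64 grayscale input size, and all names (SmallCNN, detect_liveness) are assumptions rather than anything prescribed by the claim; for brevity the blur classifier here uses a three-way argmax head, whereas the two-threshold variant of claim 3 is sketched separately below.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    # Stand-in for the claimed convolution/activation/down-sampling/
    # fully-connected stack; the actual topology is left open by the claims.
    def __init__(self, num_outputs):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A 64x64 grayscale input becomes 32 channels of 16x16 after two poolings.
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_outputs),
        )

    def forward(self, x):
        return self.head(self.features(x))

def detect_liveness(image, blur_classifier, liveness_cnns, liveness_thresholds):
    # Stage 1: classify the blur level (1..N); stage 2: run the liveness CNN
    # trained for that level and compare its score with that CNN's threshold.
    with torch.no_grad():
        level = int(blur_classifier(image).argmax(dim=1).item()) + 1  # levels 1..N
        score = torch.sigmoid(liveness_cnns[level](image)).item()
    return score > liveness_thresholds[level]

# Usage with N = 3 blur levels and one 1x1x64x64 image tensor:
classifier = SmallCNN(num_outputs=3)
cnns = {level: SmallCNN(num_outputs=1) for level in (1, 2, 3)}
thresholds = {1: 0.5, 2: 0.5, 3: 0.5}
print(detect_liveness(torch.randn(1, 1, 64, 64), classifier, cnns, thresholds))
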
2. The iris silence living body detection method according to claim 1, characterized in that the iris classification CNN is trained by the following method:
acquiring a training sample set, wherein the training sample set comprises a plurality of training samples;
classifying the training samples into a first blur level to an Nth blur level according to the distance of each training sample from the focus at the time of capture, wherein the distance from the focus increases progressively from the first blur level to the Nth blur level;
reclassifying training samples that exhibit motion blur from their current blur level to the next blur level, with motion-blurred training samples in the Nth blur level remaining in the Nth blur level;
training the iris classification CNN by using the training sample set.
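
For illustration (not part of the claims), the labeling rule of claim 2 can be written out directly. The Python sketch below assumes each training sample records its capture distance from the focus and a motion-blur flag; the field names and distance boundaries are invented for the example.

from dataclasses import dataclass

@dataclass
class Sample:
    distance_from_focus: float  # capture distance from the focus (assumed recorded)
    has_motion_blur: bool       # assumed available from capture metadata

def assign_blur_level(sample, boundaries):
    # Assign blur level 1..N by distance from focus (N = len(boundaries) + 1),
    # then move motion-blurred samples up one level; level N is the ceiling.
    n_levels = len(boundaries) + 1
    level = 1
    for boundary in boundaries:
        if sample.distance_from_focus > boundary:
            level += 1
    if sample.has_motion_blur:
        level = min(level + 1, n_levels)
    return level

# Example with N = 3: distances beyond 5 and 10 units mark levels 2 and 3.
print(assign_blur_level(Sample(7.0, False), [5.0, 10.0]))  # -> 2
print(assign_blur_level(Sample(7.0, True),  [5.0, 10.0]))  # -> 3
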
3. The iris silence living body detection method according to claim 2, characterized in that the iris classification CNN comprises several convolution layers, activation layers, down-sampling layers and fully-connected layers, together with a first classification layer and a second classification layer;
when N = 3, the classifying the iris image into one of a plurality of blur levels according to its degree of blur by using the iris classification CNN comprises:
extracting blur features from the iris image through the convolution layers, activation layers, down-sampling layers and fully-connected layers of the iris classification CNN;
constructing a first blur score from the blur features in the first classification layer and judging whether the first blur score is greater than a first blur threshold, wherein if so, the classification result of the first classification layer is 1, and otherwise it is 0;
constructing a second blur score from the blur features in the second classification layer and judging whether the second blur score is greater than a second blur threshold, wherein if so, the classification result of the second classification layer is 1, and otherwise it is 0;
and adding the classification result of the first classification layer to the classification result of the second classification layer and then adding 1, to obtain the blur level of the iris image.
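
The grading arithmetic of claim 3 (two thresholded binary classification layers, their results summed and incremented by 1) can be mirrored as follows. This is a sketch only: the sigmoid scoring, the feature dimension, and the threshold values are assumptions, since the claim fixes only the two classification layers and the final sum.

import torch
import torch.nn as nn

class BlurGrader(nn.Module):
    # Two binary classification layers over shared blur features; the blur
    # level is result1 + result2 + 1, i.e. 1, 2 or 3 when N = 3.
    def __init__(self, feat_dim=64, first_threshold=0.5, second_threshold=0.5):
        super().__init__()
        self.first_layer = nn.Linear(feat_dim, 1)
        self.second_layer = nn.Linear(feat_dim, 1)
        self.t1, self.t2 = first_threshold, second_threshold

    def forward(self, blur_features):
        s1 = torch.sigmoid(self.first_layer(blur_features)).squeeze(-1)   # first blur score
        s2 = torch.sigmoid(self.second_layer(blur_features)).squeeze(-1)  # second blur score
        r1 = (s1 > self.t1).long()  # classification result of the first layer
        r2 = (s2 > self.t2).long()  # classification result of the second layer
        return r1 + r2 + 1          # blur level in {1, 2, 3}

grader = BlurGrader()
levels = grader(torch.randn(4, 64))  # four feature vectors -> four blur levels
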
4. The iris silence living body detection method according to any one of claims 1 to 3, wherein the living body detection CNNs are trained by the following method:
training the living body detection CNN corresponding to each blur level by using the training samples of that blur level only.
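
Illustratively, the training scheme of claim 4 amounts to fitting one liveness CNN per blur level on that level's samples alone. In the sketch below the data partitioning, the optimizer, the loss, and the network body are assumptions, not part of the claim.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_liveness_cnn():
    # Placeholder for the claimed conv/activation/down-sampling/fully-connected
    # stack, sized here for 64x64 grayscale inputs.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(8 * 32 * 32, 1),
    )

def train_per_level(loaders_by_level, epochs=5):
    # One liveness CNN per blur level, fitted only on that level's samples.
    models = {}
    loss_fn = nn.BCEWithLogitsLoss()
    for level, loader in loaders_by_level.items():
        model = make_liveness_cnn()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            for images, labels in loader:  # labels: 1.0 = live, 0.0 = fake
                optimizer.zero_grad()
                loss = loss_fn(model(images).squeeze(-1), labels)
                loss.backward()
                optimizer.step()
        models[level] = model
    return models

# Toy usage: random tensors standing in for the level-1 training samples.
toy = DataLoader(TensorDataset(torch.randn(8, 1, 64, 64), torch.ones(8)), batch_size=4)
models = train_per_level({1: toy}, epochs=1)
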
5. The iris silence living body detection method according to claim 4, wherein the living body detection CNN comprises several convolution layers, activation layers, down-sampling layers and fully-connected layers, together with one classification layer;
the performing living body detection on the iris image by using the living body detection CNN corresponding to the blur level of the iris image to obtain a living body detection result comprises:
extracting living body features from the iris image through the convolution layers, activation layers, down-sampling layers and fully-connected layers of the living body detection CNN;
and constructing a living body score from the living body features in the classification layer, and judging whether the living body score is greater than the living body threshold of the living body detection CNN, wherein if so, the living body detection result is pass, and otherwise it is fail.
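
Finally, the decision step of claim 5 reduces to one threshold comparison per blur level; a minimal sketch follows, in which the sigmoid mapping and the threshold value are assumptions (the claim requires only a score and a per-CNN living body threshold).

import torch

def liveness_decision(classification_layer_output, living_body_threshold):
    # Map the classification layer's raw output to a pass/fail result.
    score = torch.sigmoid(classification_layer_output).item()  # living body score
    return "pass" if score > living_body_threshold else "fail"

# Thresholds may differ per blur level, since each level has its own CNN.
print(liveness_decision(torch.tensor(1.2), 0.6))  # -> pass (sigmoid(1.2) is about 0.77)
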
6. An iris silence living body detection device, comprising:
an acquisition module, configured to acquire an iris image and preprocess the iris image;
a classification module, configured to classify the iris image into one of a plurality of blur levels according to its degree of blur by using an iris classification CNN;
a living body detection module, configured to perform living body detection on the iris image by using the living body detection CNN corresponding to the blur level into which the iris image is classified, to obtain a living body detection result; wherein each blur level corresponds to one living body detection CNN.
7. The iris silence living body detection device according to claim 6, wherein the iris classification CNN is trained by the following modules:
a sample set acquisition module, configured to acquire a training sample set, wherein the training sample set comprises a plurality of training samples;
a first grading module, configured to classify the training samples into a first blur level to an Nth blur level according to the distance of each training sample from the focus at the time of capture, wherein the distance from the focus increases progressively from the first blur level to the Nth blur level;
a second grading module, configured to reclassify training samples that exhibit motion blur from their current blur level to the next blur level, with motion-blurred training samples in the Nth blur level remaining in the Nth blur level;
a first training module, configured to train the iris classification CNN using the training sample set.
8. The iris silence living body detection device according to claim 7, characterized in that the iris classification CNN comprises several convolution layers, activation layers, down-sampling layers and fully-connected layers, together with a first classification layer and a second classification layer;
when N = 3, the classification module comprises:
a first extraction unit, configured to extract blur features from the iris image through the convolution layers, activation layers, down-sampling layers and fully-connected layers of the iris classification CNN;
a first classification unit, configured to construct a first blur score from the blur features in the first classification layer and judge whether the first blur score is greater than a first blur threshold, wherein if so, the classification result of the first classification layer is 1, and otherwise it is 0;
a second classification unit, configured to construct a second blur score from the blur features in the second classification layer and judge whether the second blur score is greater than a second blur threshold, wherein if so, the classification result of the second classification layer is 1, and otherwise it is 0;
and an accumulation unit, configured to add the classification result of the first classification layer to the classification result of the second classification layer and then add 1, to obtain the blur level of the iris image.
9. A computer readable storage medium for iris silence living body detection, storing processor executable instructions which, when executed by a processor, implement the steps of the iris silence living body detection method of any one of claims 1 to 5.
10. An apparatus for iris silence living body detection, comprising at least one processor and a memory storing computer executable instructions, wherein the processor, when executing the instructions, implements the steps of the iris silence living body detection method of any one of claims 1 to 5.
CN201911256804.7A 2019-12-10 2019-12-10 Iris silence living body detection method and device, readable storage medium and equipment Pending CN112949353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911256804.7A CN112949353A (en) 2019-12-10 2019-12-10 Iris silence living body detection method and device, readable storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911256804.7A CN112949353A (en) 2019-12-10 2019-12-10 Iris silence living body detection method and device, readable storage medium and equipment

Publications (1)

Publication Number Publication Date
CN112949353A (en) 2021-06-11

Family

ID=76225406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911256804.7A Pending CN112949353A (en) 2019-12-10 2019-12-10 Iris silence living body detection method and device, readable storage medium and equipment

Country Status (1)

Country Link
CN (1) CN112949353A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1426760A (en) * 2001-12-18 2003-07-02 中国科学院自动化研究所 Identity discriminating method based on living body iris
US20080037835A1 (en) * 2006-06-02 2008-02-14 Korea Institute Of Science And Technology Iris recognition system and method using multifocus image sequence
CN103324908A (en) * 2012-03-23 2013-09-25 桂林电子科技大学 Rapid iris collecting, judging and controlling method for iris identification
CN105320939A (en) * 2015-09-28 2016-02-10 北京天诚盛业科技有限公司 Iris biopsy method and apparatus
US20180018451A1 (en) * 2016-07-14 2018-01-18 Magic Leap, Inc. Deep neural network for iris identification
CN110199300A (en) * 2016-12-02 2019-09-03 福特全球技术公司 Indistinct Input for autocoder
KR20180065889A (en) * 2016-12-07 2018-06-18 삼성전자주식회사 Method and apparatus for detecting target
CN107133948A (en) * 2017-05-09 2017-09-05 电子科技大学 Image blurring and noise evaluating method based on multitask convolutional neural networks
WO2019011099A1 (en) * 2017-07-14 2019-01-17 Oppo广东移动通信有限公司 Iris living-body detection method and related product
CN107609494A (en) * 2017-08-31 2018-01-19 北京飞搜科技有限公司 A kind of human face in-vivo detection method and system based on silent formula
CN108921178A (en) * 2018-06-22 2018-11-30 北京小米移动软件有限公司 Obtain method, apparatus, the electronic equipment of the classification of image fog-level
CN109447099A (en) * 2018-08-28 2019-03-08 西安理工大学 A kind of Combining Multiple Classifiers based on PCA dimensionality reduction
CN109858471A (en) * 2019-04-03 2019-06-07 深圳市华付信息技术有限公司 Biopsy method, device and computer equipment based on picture quality
CN110298434A (en) * 2019-05-27 2019-10-01 湖州师范学院 A kind of integrated deepness belief network based on fuzzy division and FUZZY WEIGHTED

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
GANG WANG et al., "A new approach to intrusion detection using Artificial Neural Networks and fuzzy clustering", Expert Systems with Applications, vol. 37, no. 9, pages 6225-6232 *
R. KARUNYA et al., "A study of liveness detection in fingerprint and iris recognition systems using image quality assessment", 2015 International Conference on Advanced Computing and Communication Systems, 7 January 2015, pages 1-5, XP032807994, DOI: 10.1109/ICACCS.2015.7324134 *
宋平, "Research on iris liveness detection methods based on light-field imaging" (in Chinese), China Master's Theses Full-text Database: Information Science and Technology, no. 8, 15 August 2019, pages 1-77 *
洪洋 et al., "CAPTCHA recognition based on deep convolutional neural networks" (in Chinese), Proceedings of the 19th China Annual Conference on System Simulation Technology and its Applications (19th CCSSTA 2018), pages 394-397 *
王君瑞, "Research on anti-spoofing methods for iris recognition" (in Chinese), China Master's Theses Full-text Database: Information Science and Technology, no. 7, pages 1-69 *
王春义, "Research on contactless acquisition of high-quality palm vein images" (in Chinese), China Master's Theses Full-text Database: Information Science and Technology, no. 1, 15 January 2019, pages 1-58 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373218A (en) * 2022-03-21 2022-04-19 北京万里红科技有限公司 Method for generating convolution network for detecting living body object
CN114373218B (en) * 2022-03-21 2022-06-14 北京万里红科技有限公司 Method for generating convolution network for detecting living body object
CN115100730A (en) * 2022-07-21 2022-09-23 北京万里红科技有限公司 Iris living body detection model training method, iris living body detection method and device
CN115100730B (en) * 2022-07-21 2023-08-08 北京万里红科技有限公司 Iris living body detection model training method, iris living body detection method and device

Similar Documents

Publication Publication Date Title
CN109376603A (en) A kind of video frequency identifying method, device, computer equipment and storage medium
CN112215180B (en) Living body detection method and device
Emeršič et al. The unconstrained ear recognition challenge 2019
Benlamoudi et al. Face antispoofing based on frame difference and multilevel representation
Malgheet et al. Iris recognition development techniques: a comprehensive review
CN112949353A (en) Iris silence living body detection method and device, readable storage medium and equipment
CN115937953A (en) Psychological change detection method, device, equipment and storage medium
Das et al. An efficient deep sclera recognition framework with novel sclera segmentation, vessel extraction and gaze detection
KR20060058197A (en) Method and apparatus for eye detection
Jatain et al. Automatic human face detection and recognition based on facial features using deep learning approach
CN114663985A (en) Face silence living body detection method and device, readable storage medium and equipment
Benlamoudi Multi-modal and anti-spoofing person identification
Reddy et al. Robust subject-invariant feature learning for ocular biometrics in visible spectrum
Yin et al. Artificial neural networks for finger vein recognition: a survey
Kannoth et al. Hand Gesture Recognition Using CNN & Publication of World's Largest ASL Database
CN107315985B (en) Iris identification method and terminal
Gupta et al. Real-time face recognition: A survey
Emeršič et al. The unconstrained ear recognition challenge 2019-arxiv version with appendix
Ghosh et al. PB3C-CNN: An integrated PB3C and CNN based approach for plant leaf classification
CN117351579B (en) Iris living body detection method and device based on multi-source information fusion
Cai et al. A novel face spoofing detection method based on gaze estimation
Zabihi et al. Exploiting object features in deep gaze prediction models
Karra et al. An extensive study of facial expression recognition using artificial intelligence techniques with different datasets
Sampaio DL4Malaria: Deep Learning Approaches for the Automated Detection and Characterisation of Malaria Parasites on Thin Blood Smear Images
Singh et al. Effect of Face Tampering on Face Recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination