CN113723147A - Iris-face multi-modal liveness detection and recognition method, device, medium and equipment - Google Patents
- Publication number: CN113723147A
- Application number: CN202010453683.1A
- Authority: CN (China)
- Prior art keywords: iris, face, image, lens
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a method, device, medium and equipment for multi-modal iris-face liveness detection and recognition, belonging to the field of biometric recognition. The invention acquires a plurality of face images and a plurality of iris images collected by a face lens and an iris lens at a plurality of acquisition positions; stitches the plurality of iris images together to obtain a stitched image on the condition that the plurality of face images are judged to belong to the same face; and performs multi-step liveness detection using at least one of the face images, at least one of the iris images, and the stitched image, respectively. If the multi-step liveness detection passes, the multi-modal iris-face liveness detection passes. The invention improves the accuracy and efficiency of liveness detection and improves the accuracy of recognition.
Description
Technical Field
The invention relates to the field of biometric recognition, and in particular to a method and device for multi-modal iris-face liveness detection and recognition, a computer-readable storage medium, and equipment.
Background
Biometric identification technology combines computers with optics, acoustics, biosensors, and biostatistics to identify personal identity using the inherent physiological characteristics of the human body (such as fingerprints, faces, and irises) and behavioral characteristics (such as handwriting, voice, and gait).
Face recognition has become the most mainstream biometric technology at present; its usability and the richness of its data samples are unmatched by other biometric modalities. However, its security is comparatively weak; for example, face recognition cannot distinguish identical twins. The iris, as an important identification feature, has the advantages of lifelong uniqueness, stability, collectability, and non-invasiveness, and is an inevitable direction of identification research and application development. However, iris recognition is inferior to face recognition in usability, and the available data samples are not yet large. Combining iris recognition and face recognition into multi-modal biometric recognition therefore makes full use of the advantages of both.
Whether in face recognition, iris recognition, or multi-modal iris-face recognition, there is a risk of spoofing attacks. Liveness detection is therefore an important part of a biometric recognition system: it guards against spoofing attacks and improves the security of the system.
At present, most existing face recognition schemes use near-infrared face images (grayscale images) for liveness detection, while the images used for face recognition itself are color face images. A near-infrared face lens module therefore has to be added alongside the original color face lens module to perform liveness detection together with face recognition. If this approach is applied to a multi-modal iris-face recognition system, three lens modules are needed, because a near-infrared iris lens module is also required for iris recognition.
Adding a near-infrared face lens module not only increases the complexity and cost of the hardware system but also enlarges the device. Moreover, because the video streams collected by the three lens modules of a multi-modal iris-face recognition system must be transmitted and processed simultaneously in real time, the requirements on line transmission rate and on the real-time processing capability of the CPU are high. In addition, image resolution limits liveness detection based on the near-infrared face: the face lens modules in practical use today are generally around 2 megapixels, the minutiae in the near-infrared face images they capture are not very clear, and certain highly realistic prosthetic (spoof) materials can still deceive algorithms that perform liveness detection on such images. If the minutiae in the near-infrared face image were captured more clearly, these problems could in theory be solved.
However, a lens that captures clear minutiae while covering the whole face requires a very high pixel count, generally more than 10 megapixels. The video stream from such a lens is difficult for an ordinary computer to transmit and process in real time on its CPU; a high-performance server would be needed. This limits the application scenarios of the biometric equipment: most scenarios, such as PCs, various mobile terminals, subway station gates, access control systems, and bank security systems, cannot use it at all.
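A rough calculation illustrates why a single lens covering the whole face at this level of detail needs such a high pixel count. The sketch below assumes the iris-level pixel density of about 20 pixels/mm cited later in the description and an approximate face size of 150 mm x 200 mm; the face dimensions are assumed illustrative values, not figures from the patent.

```python
# Back-of-envelope check of the pixel-count argument (a sketch; the
# 150 mm x 200 mm face size is an assumed illustrative value).
PX_PER_MM = 20                 # iris-level detail cited in the description
FACE_W_MM, FACE_H_MM = 150, 200

width_px = FACE_W_MM * PX_PER_MM       # 3000 px
height_px = FACE_H_MM * PX_PER_MM      # 4000 px
megapixels = width_px * height_px / 1e6

print(f"{width_px} x {height_px} px ~= {megapixels:.0f} MP")  # about 12 MP, above 10 MP
```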
Disclosure of Invention
To solve the problems in the prior art that the liveness detection accuracy of images captured by an ordinary face lens is low, while the real-time transmission and processing of images captured by a high-precision lens is poor, the invention provides a multi-modal iris-face liveness detection and recognition method, device, medium, and equipment, which improve the accuracy and efficiency of liveness detection.
The technical solution provided by the invention is as follows:
In a first aspect, the invention provides a multi-modal iris-face liveness detection method, including:
acquiring a plurality of face images and a plurality of iris images collected by a face lens and an iris lens at a plurality of acquisition positions; stitching the plurality of iris images together to obtain a stitched image on the condition that the plurality of face images are judged to belong to the same face; and performing multi-step liveness detection using at least one of the face images, at least one of the iris images, and the stitched image, respectively, wherein if the multi-step liveness detection passes, the multi-modal iris-face liveness detection passes.
In a second aspect, the invention provides a multi-modal iris-face liveness detection device corresponding to the multi-modal iris-face liveness detection method of the first aspect, the device comprising:
a liveness detection module, configured to acquire a plurality of face images and a plurality of iris images collected by a face lens and an iris lens at a plurality of acquisition positions; stitch the plurality of iris images together to obtain a stitched image on the condition that the plurality of face images are judged to belong to the same face; and perform multi-step liveness detection using at least one of the face images, at least one of the iris images, and the stitched image, respectively, wherein if the multi-step liveness detection passes, the multi-modal iris-face liveness detection passes.
In a third aspect, the invention provides a computer-readable storage medium for multi-modal iris-face liveness detection, comprising a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the multi-modal iris-face liveness detection method of the first aspect.
In a fourth aspect, the invention provides equipment for multi-modal iris-face liveness detection, comprising at least one processor and a memory storing computer-executable instructions, wherein the processor implements the steps of the multi-modal iris-face liveness detection method of the first aspect when executing the instructions.
In a fifth aspect, the invention provides a multi-modal iris-face recognition method, the method comprising:
performing multi-modal iris-face liveness detection using the multi-modal iris-face liveness detection method of the first aspect;
and, on the condition that the multi-modal iris-face liveness detection passes, performing multi-modal recognition using the first iris image together with at least one of the first face image, the second face image, and the stitched image.
In a sixth aspect, the invention provides a multi-modal iris-face recognition device corresponding to the multi-modal iris-face recognition method of the fifth aspect, the device comprising:
a multi-modal liveness detection module, configured to perform multi-modal iris-face liveness detection through the multi-modal iris-face liveness detection device of the second aspect;
and a multi-modal recognition module, configured to perform multi-modal recognition using the first iris image together with at least one of the first face image, the second face image, and the stitched image on the condition that the multi-modal iris-face liveness detection passes.
In a seventh aspect, the invention provides a computer-readable storage medium for multi-modal iris-face recognition, comprising a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the multi-modal iris-face recognition method of the fifth aspect.
In an eighth aspect, the invention provides equipment for multi-modal iris-face recognition, comprising at least one processor and a memory storing computer-executable instructions, wherein the processor implements the steps of the multi-modal iris-face recognition method of the fifth aspect when executing the instructions.
The invention has the following beneficial effects:
The invention stitches together a plurality of iris images captured at different positions to obtain complete, high-definition near-infrared face information, improving the quality of the near-infrared face image (iris-level image quality and detail information) and thereby enabling high-precision liveness detection. The face images captured at the different positions are used to confirm that the multiple iris images come from the same user, preventing errors in which iris images from different users are stitched together.
In addition, by means of stitching, the invention realizes multi-step liveness detection using a face lens and an iris lens of ordinary pixel count, without an ultra-high-pixel lens, and integrates color-face liveness detection, near-infrared partial-face-region liveness detection, and high-definition complete near-infrared face liveness detection without adding extra hardware cost. The hardware cost is low, the device is small, and real-time data transmission and processing are guaranteed.
Drawings
FIG. 1 is a flowchart of an example of the multi-modal iris-face liveness detection method of the invention;
FIG. 2 is a flowchart of another example of the multi-modal iris-face liveness detection method of the invention;
FIG. 3 is a flowchart of example one of step S150 in the example of the multi-modal iris-face liveness detection method shown in FIG. 2;
FIG. 4 is a schematic diagram of a first iris image in example one;
FIG. 5 is a schematic diagram of a second iris image in example one;
FIG. 6 is a schematic diagram of a stitched image obtained by stitching the first iris image shown in FIG. 4 and the second iris image shown in FIG. 5;
FIG. 7 is a flowchart of example two of step S150 in the example of the multi-modal iris-face liveness detection method shown in FIG. 2;
FIG. 8 is a schematic diagram of a first iris image in example two;
FIG. 9 is a schematic diagram of a second iris image in example two;
FIG. 10 is a schematic diagram of a first face image or a second face image in example two;
FIG. 11 is a schematic diagram of a stitched image obtained by stitching the first iris image shown in FIG. 8 and the second iris image shown in FIG. 9;
FIG. 12 is a flowchart of step S110 in the example of the multi-modal iris-face liveness detection method shown in FIG. 2;
FIG. 13 is a schematic diagram of an example of the multi-modal iris-face liveness detection device of the invention;
FIG. 14 is a schematic diagram of another example of the multi-modal iris-face liveness detection device of the invention;
FIG. 15 is a diagram of an example of the stitching module 150 in the example of the multi-modal iris-face liveness detection device shown in FIG. 14;
FIG. 16 is a diagram of another example of the stitching module 150 in the example of the multi-modal iris-face liveness detection device shown in FIG. 14;
FIG. 17 is a diagram of an example of the first obtaining module 110 in the example of the multi-modal iris-face liveness detection device shown in FIG. 14;
FIG. 18 is a flowchart of an example of the multi-modal iris-face recognition method of the invention;
FIG. 19 is a schematic diagram of an example of the multi-modal iris-face recognition device of the invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
An embodiment of the invention provides a multi-modal iris-face liveness detection method. As shown in FIG. 1, the method comprises the following step:
S100: acquiring a plurality of face images and a plurality of iris images collected by a face lens and an iris lens at a plurality of acquisition positions; stitching the plurality of iris images together to obtain a stitched image on the condition that the plurality of face images are judged to belong to the same face; and performing multi-step liveness detection using at least one of the face images, at least one of the iris images, and the stitched image, respectively, wherein if the multi-step liveness detection passes, the multi-modal iris-face liveness detection passes.
In the invention, the face lens and the iris lens collect a face image and an iris image simultaneously at the same acquisition position, and the plurality of iris images collected at the plurality of acquisition positions must ensure that the image obtained by stitching all of the iris images contains the complete face.
The invention does not limit how the plurality of acquisition positions are obtained. Acquisition may be performed multiple times by rotating a rotatable face lens and iris lens to different positions; this mode requires only one face lens and one iris lens. Acquisition may also be performed with fixed face and iris lenses; this mode uses several iris lenses at different angles to collect iris images at different positions, while a single face lens suffices, because the field of view of the face lens is large enough to capture the whole face region in a single shot without the cooperation of multiple lenses.
The invention stitches the plurality of iris images together to obtain a stitched image only on the condition that the plurality of face images are judged to belong to the same face, thereby ensuring that the iris images come from the same face and that the stitched face image is correct. The stitched image contains a complete near-infrared face with very clear texture and other detail points, so very accurate near-infrared face liveness detection can be performed on it.
Multi-step liveness detection is then performed using at least one of the face images, at least one of the iris images, and the stitched image, respectively. The invention does not limit the order of the multi-step liveness detection: it may follow a simple-first order, in which the simpler checks are placed first and executed first, or a first-acquired-first-checked order, in which the face/iris images acquired first undergo liveness detection first.
The invention has the following beneficial effects:
The invention stitches together a plurality of iris images captured at different positions to obtain complete, high-definition near-infrared face information, improving the quality of the near-infrared face image (iris-level image quality and detail information) and thereby enabling high-precision liveness detection. The face images captured at the different positions are used to confirm that the multiple iris images come from the same user, preventing errors in which iris images from different users are stitched together.
In addition, by means of stitching, the invention realizes multi-step liveness detection using a face lens and an iris lens of ordinary pixel count, without an ultra-high-pixel lens, and integrates color-face liveness detection, near-infrared partial-face-region liveness detection, and high-definition complete near-infrared face liveness detection without adding extra hardware cost. The hardware cost is low, the device is small, and real-time data transmission and processing are guaranteed.
As a more specific implementation of the invention, as shown in FIG. 2, the foregoing S100 includes:
S110: acquiring a first face image and a first iris image simultaneously collected at a target acquisition position by a face lens and an iris lens that are both mounted on a rotating pan-tilt.
The face lens and the iris lens are mounted together on the rotating pan-tilt, which can rotate up and down and/or left and right. The field of view of the face lens is relatively large, while that of the iris lens is relatively small.
The invention is not limited to a specific structure of the rotating pan-tilt; it may be, for example, an electric two-dimensional pan-tilt controlled by a stepping motor (able to rotate both up-down and left-right), or a simple rotating shaft controlled by a stepping motor (able to rotate only up-down or only left-right).
The face lens and the iris lens may be switched on at the same time and collect face images and iris images simultaneously in real time, or a given lens may be switched on only after certain conditions are met.
The target acquisition position is determined in advance: at the target acquisition position an iris image can be collected that is complete, clear, and has the eyes in the middle of the frame. Although the target acquisition position is determined in advance, it does not correspond to a fixed position in actual space. In the iris image collected at the target acquisition position the eyes are in the middle of the frame; the target acquisition position is defined by this fixed position of the eyes in the frame, so if the user's height, distance from the lens, and so on differ, the actual spatial position corresponding to the target acquisition position also differs.
When the pan-tilt rotates to the target acquisition position, the first face image and the first iris image are collected at the target acquisition position at the same time, ensuring that the simultaneously collected first face image and first iris image come from the same user.
S120: performing a first step of liveness detection using the first face image and the first iris image, respectively.
The first face image contains the person's entire face and is a color image, through which liveness detection can be performed.
The first iris image contains the eyes and the region around the eyes, typically half of the face (since the first iris image must capture the eyes, which lie in the upper half of the face, it is generally the upper half), and is a near-infrared grayscale image. Because iris recognition demands far more accuracy from the iris texture than face recognition demands from the face texture, the iris region must be captured with a large number of pixels (about 20 pixels/mm), so the precision required of the iris lens is much higher than that of the face lens. However, the pixel count of the iris lens cannot be made very large (otherwise the requirements of real-time transmission and processing could not be met), so the iris lens has high precision but a small field of view: the image it captures covers only the part of the face containing the eyes (small in extent but high in precision, with rich texture and other details), and it cannot capture the whole face.
Liveness detection can therefore be performed on the first iris image. Because the first iris image is captured by the iris lens, whose precision is higher than that of the face lens, the near-infrared first iris image records a great number of minutiae that a conventional face infrared camera cannot capture, so the accuracy of liveness detection using the first iris image is far higher than that of conventional near-infrared face liveness detection methods.
The first step of liveness detection means that liveness detection is performed on the first face image and on the first iris image; only when both the first face image and the first iris image pass does the first step of liveness detection pass.
The invention does not limit the method used for liveness detection on the first face image; for example, a conventional deep-learning-based silent color-face liveness detection method may be used. The invention likewise does not limit the method used for liveness determination on the first iris image; for example, a deep-learning-based near-infrared iris liveness determination method may be used.
The order of execution of liveness detection on the first face image and liveness detection on the first iris image is not limited; they may be executed simultaneously or one after the other.
S130: extracting the iris region from the first iris image and performing a second step of liveness detection using the iris region.
The liveness detection using the first iris image in S120 is performed on the entire first iris image and mainly uses detail information such as the texture of the region around the iris. Because the first iris image captures not only the texture and other details of the region around the iris but also clear information inside the iris region, rapid iris liveness detection, that is, the second step of liveness detection, can be performed using the information inside the iris region; the iris region must be extracted before this second step.
The invention is likewise not limited to a particular method for the second step of liveness detection; examples include the following (see the sketch after this list):
1. Observation of the pupillary response: the enlargement and constriction of the pupil are observed in synchrony with the flicker rhythm of the white light emitted by the biometric equipment, and a live iris is indicated if the rhythm of the pupil changes matches the flicker rhythm of the white light or differs from it within a certain range.
2. The liveness state of the iris region can also be judged using a deep-learning-based silent iris-region liveness detection method.
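A minimal sketch of the pupil-response check in item 1 above is given below. It compares a per-frame pupil-radius sequence measured from the iris region with the white-light flicker signal by normalized correlation; the correlation threshold and the function interface are illustrative assumptions, not values specified in the patent.

```python
import numpy as np

def pupil_rhythm_liveness(pupil_radii, flicker_signal, min_corr=0.6):
    """Check that the pupil constricts and dilates in step with the device's
    white-light flicker. `pupil_radii` is the per-frame pupil radius measured
    from the iris region; `flicker_signal` is the per-frame light level
    (1 = on, 0 = off); `min_corr` is an illustrative threshold.
    """
    r = np.asarray(pupil_radii, dtype=float)
    f = np.asarray(flicker_signal, dtype=float)
    r = (r - r.mean()) / (r.std() + 1e-9)
    f = (f - f.mean()) / (f.std() + 1e-9)
    # A live pupil constricts when the light is on, so the radius should be
    # anti-correlated with the flicker signal.
    corr = float(np.mean(r * -f))
    return corr >= min_corr
```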
S140: and acquiring at least one second face image and at least one second iris image which are simultaneously acquired by the face lens and the iris lens when the rotating holder rotates to other acquisition positions except the target acquisition position, and ensuring that all the acquired second iris images can form a complete face after being spliced with the first iris image by the other acquisition positions.
In the first step of live body detection, live body detection is performed through the first face image, and features such as texture details of the color first face image used in the first step of live body detection are not abundant, and the accuracy of live body detection is not very high. The living body detection in the first step and the living body detection in the second step are performed with high precision, but the used images are images of eyes and the periphery thereof, other parts of the whole face image are not included, if other parts (such as the lower half face) of the face image are false, the detection cannot be performed, and the solution is to perform the living body detection on the whole face which is rich in details and contains other parts.
However, as can be seen from the background art, it is impractical to directly shoot the whole near-infrared face image with rich texture details through the lens, so the invention seeks a method for obtaining the whole near-infrared face image with rich texture details under the hardware condition of the existing lens.
The invention enables the iris lens to collect at least one near-infrared iris image at other collecting positions by rotating the rotating holder to other collecting positions outside the target collecting position, and can form a complete near-infrared face after splicing at least one near-infrared iris image (second iris image) collected at other collecting positions with the first iris image. And the face lens shoots the second face image when the second iris image is collected, and the shot second iris image and the second face image can be ensured to come from the same user.
Typically, the number of second iris images is one, for example: the first iris image is the upper half face, the second iris image is the lower half face, and the upper half face and the lower half face are spliced to obtain the whole face.
In addition, the present invention may perform the first step of the live body test and the second step of the live body test in S120 and S130 during the rotation of the rotational head. Therefore, the rotating time gap of the rotating holder can be fully utilized, processors such as a CPU (central processing unit) and the like carry out the first step of biopsy and the second step of biopsy in the time gap, the time of the whole biopsy process is fully utilized, and the processing efficiency of the CPU is improved. Of course, S120 and S130 may be completed and then the rotating platform may be rotated, which is not limited in the present invention.
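The paragraph above notes that the first and second steps of liveness detection can run while the pan-tilt is still rotating. The sketch below shows one way to overlap the two, using a worker thread for the motor-control call; `pan_tilt.rotate_to`, `step1`, and `step2` are placeholder names, not APIs defined by the patent.

```python
from concurrent.futures import ThreadPoolExecutor

def rotate_while_detecting(pan_tilt, target_angle,
                           first_face_img, first_iris_img, step1, step2):
    """Overlap pan-tilt rotation with the step-1 and step-2 liveness checks."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        rotation = pool.submit(pan_tilt.rotate_to, target_angle)  # runs in background
        passed = step1(first_face_img, first_iris_img) and step2(first_iris_img)
        rotation.result()  # wait until the lenses reach the next acquisition position
    return passed
```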
S150: and under the condition that the first face image and each second face image are judged to be the same face, splicing all the second iris images with the first iris image to obtain a spliced image containing the complete near-infrared face.
The step is used for splicing the second iris image and the first iris image to obtain the complete near-infrared face. When the rotating holder rotates to other acquisition positions, the user in front of the lens cannot be guaranteed to be the same person all the time (i.e. people may be changed for artificial fake, etc.), so that the second iris image and the first iris image cannot be guaranteed to be the same person, and errors may occur if the second iris image and the first iris image are directly spliced. And because the second iris image and the first iris image are different parts of the human face (even if a superposed similar region exists, the region is not large), the judgment of whether the second iris image and the first iris image are the same human face by only judging the second iris image and the first iris image by themselves cannot be realized.
In order to solve the above problem, the present invention determines whether the images are from the same person (i.e. whether the images are from the same face) through the first face image and the second face image. Because the field angle of the face lens is large, the first face image and the second face image both comprise complete faces, and therefore whether the first face image and the second face image are the same face can be judged.
The present invention is not limited to the method for determining whether the first face image and the second face image are the same face, and in one example, the method includes: comparing the first face image with the second face image by a face comparison method to obtain a comparison score; if the comparison score is larger than the set judgment threshold, the same face is obtained. Since the first face image and the second face image are shot at a short interval and almost at the same time, a very high judgment threshold value can be taken.
Because the first face image and the first iris image are shot at the same time, the first face image and the first iris image can be ensured to be from the same face (user), the second face image and the second iris image are shot at the same time, and the first face image and the second iris image can also be ensured to be from the same face (user). Therefore, after the first face image and the second face image are judged to be the same face, the first iris image and the second iris image can be ensured to be from the same face of the same user, and the second iris image and the first iris image can be spliced together, so that a spliced image containing the complete near-infrared face is obtained.
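A minimal sketch of the same-face check described above, assuming a generic face-embedding model and cosine-similarity comparison; the embedding function and the 0.85 threshold are illustrative assumptions (the description only states that a very strict threshold can be used because the two face images are captured almost simultaneously).

```python
import numpy as np

def same_face(face_img_a, face_img_b, embed, threshold=0.85):
    """Return True if the two face images are judged to show the same face.

    `embed` is a placeholder for any face-embedding model; the threshold is
    illustrative and would normally be set very strictly here.
    """
    a, b = embed(face_img_a), embed(face_img_b)
    score = float(np.dot(a, b) /
                  (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return score > threshold
```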
S160: and performing a third step of in-vivo detection by using the spliced image.
According to the method, the spliced image containing the complete near-infrared face is obtained through S150 splicing, and the step is carried out on the basis of the spliced image for the third step of in vivo detection. Although the spliced image is spliced, the texture and other detail points of the spliced image are very clear, and very accurate near-infrared human face living body detection can be completed.
In many cases, the previous steps including S130 can satisfy the requirement of the living body detection in many cases, and S140 to S160 mainly aim at the case where the lower half face wears a prosthetic mask (or a mask or the like).
The first step of in-vivo detection, the second step of in-vivo detection and the third step of in-vivo detection are sequentially executed, and when the first step of in-vivo detection, the second step of in-vivo detection and the third step of in-vivo detection are sequentially executed, the next step is executed only if the last step of in-vivo detection passes. The mode of the multistep living body detection executed in sequence not only improves the accuracy of the living body detection, but also ensures the rapidness and the high efficiency of the living body detection, and the execution sequence is from simple to complex and from partial to complete.
When the living body detection is carried out sequentially, if the first step living body detection, the second step living body detection and the third step living body detection all pass, the multi-mode living body detection of the iris face passes. If any one step of the living body detection fails, the iris face multi-mode living body detection fails, the prompt that the living body detection fails is sent, and the subsequent living body detection process is not executed any more.
Firstly, respectively carrying out living body detection by using an acquired colorful first face image and a first near-infrared iris image; then, carrying out living body detection on the iris area on the first iris image; and then splicing the acquired second iris image with the first iris image to obtain a complete near-infrared face image, and finally carrying out living body detection by using the complete near-infrared face image. The multi-modal live detection of the whole iris face is passed only when all live detections are passed.
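The sequential, early-exit flow summarized above can be sketched as follows; all callables are placeholders for the detectors, the same-face check, and the stitcher described in this embodiment.

```python
def multimodal_liveness(first_face, first_iris, second_face, second_iris,
                        step1, step2, same_face_fn, stitch, step3):
    """Three-step liveness pipeline with early exit (a sketch)."""
    if not step1(first_face, first_iris):           # color face + NIR iris-region checks
        return False
    if not step2(first_iris):                       # check inside the iris region
        return False
    if not same_face_fn(first_face, second_face):   # guard against a user swap during rotation
        return False
    stitched = stitch(first_iris, second_iris)      # complete near-infrared face
    return step3(stitched)                          # high-detail NIR face check
```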
The invention has the following beneficial effects:
1. The invention adopts a multi-step liveness detection method executed in sequence, from simple to complex and from partial to complete, which improves the accuracy of liveness detection while keeping it fast and efficient.
2. The invention stitches together a plurality of iris images captured at different positions to obtain complete, high-definition near-infrared face information, improving the quality of the near-infrared face image (iris-level image quality and detail information) and thereby enabling high-precision liveness detection. The face images captured at the different positions confirm that the multiple iris images come from the same user, preventing errors in which iris images from different users are stitched together.
3. By means of stitching, the invention realizes multi-step liveness detection with only two lenses (the face lens and the iris lens), saving one near-infrared face lens and requiring no ultra-high-pixel lens, and it integrates color-face liveness detection, near-infrared partial-face-region liveness detection, iris liveness detection, and high-definition complete near-infrared face liveness detection without adding extra hardware cost. The hardware cost is low, the device is small, and real-time data transmission and processing are guaranteed.
The invention does not limit the specific method by which all of the second iris images are stitched with the first iris image in S150 to obtain a stitched image containing the complete near-infrared face; two specific examples are given below.
Example one:
This example is based on stitching via the overlapping region and is suitable for the case where the second iris image and the first iris image share a relatively large overlapping region. As shown in FIG. 3, the method of this example includes:
S151: extracting feature points from the first iris image and the second iris image, and identifying feature points at the same positions in the first iris image and the second iris image.
Both the first iris image and the second iris image contain part of a face, and feature points on that part of the face can be extracted by any of various existing face localization and feature point extraction algorithms. Since there is an overlapping region of suitable size between the second iris image and the first iris image, feature points at the same positions exist within that overlapping region. For example, if feature points of the nose appear in both the first iris image and the second iris image (or in several of the second iris images), those nose feature points are feature points at the same position.
S152: performing image registration on the first iris image and the second iris image according to the feature points at the same positions, and finding the overlapping region of the first iris image and the second iris image.
Feature points at the same positions in different images occupy the same location on the whole face, so the images can be registered according to these feature points; specifically, registration can be performed by aligning the feature points at the same positions across the different images. After registration, the part where the images overlap is the overlapping region.
S153: fusing the overlapping region to obtain a stitched image containing the complete near-infrared face.
After registration, the overlapping region contains several image layers; fusing the overlapping region means merging those layers into a single layer so that the several images are stitched into one, yielding the complete stitched image.
FIG. 4 is an example of a first iris image: the upper half of a face, containing several feature points of the upper half. FIG. 5 is an example of a second iris image: the lower half of a face, containing several feature points of the lower half. Both images contain feature points of the nose region, which are feature points at the same position, and the part where the two images overlap (the region of the nose) is the overlapping region. The stitched image obtained by stitching FIG. 4 and FIG. 5 is shown in FIG. 6 and is a complete face image.
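A possible implementation of example one is sketched below using ORB features and a RANSAC homography from OpenCV. The patent does not prescribe a particular feature detector, matcher, or fusion rule, so every algorithmic choice here (ORB, brute-force Hamming matching, the maximum-intensity blend) is an assumption for illustration; in practice a feathered blend over the overlap would replace the crude maximum used here.

```python
import cv2
import numpy as np

def stitch_overlapping(iris_upper, iris_lower, min_matches=10):
    """Sketch of overlap-based stitching: register the lower-half iris image
    onto the upper-half image via matched feature points, then composite."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(iris_upper, None)
    k2, d2 = orb.detectAndCompute(iris_lower, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise ValueError("overlap too small for example one; use example two")

    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = iris_upper.shape[:2]
    canvas = cv2.warpPerspective(iris_lower, H, (w, h * 2))  # lower half in upper frame
    canvas[:h, :w] = np.maximum(canvas[:h, :w], iris_upper)  # crude blend of the overlap
    return canvas
```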
Example two:
Example one applies when there is a relatively large overlapping region between the second iris image and the first iris image. If the overlap is small or essentially absent, feature points at the same position cannot be found in the overlapping region, and the method of example one cannot perform the stitching.
This example solves that problem: it applies when the iris images do not share a large overlapping region, and of course it can also be used when they do. As shown in FIG. 7, the method of this example includes:
S151': extracting feature points from the first face image and/or the second face image, and extracting feature points from the first iris image and the second iris image.
The first iris image and the second iris image each contain part of a face, while the first face image and the second face image each contain the complete face; feature points are extracted from each image by face localization and feature point extraction algorithms.
S152': performing image registration on the first iris image and the second iris image according to their feature points and the feature points at the same positions in the first face image and/or the second face image.
Because the first face image and the second face image contain the complete face, their feature points include all feature points of the face. Since the positions of the feature points of the same user's face do not change, the same feature point in the first iris image, the second iris image, the first face image, and the second face image should be at the same location; the first iris image and the second iris image can therefore be registered using the feature points of the first face image and/or the second face image as a reference.
A specific registration method may be: aligning the feature points in the first iris image and the second iris image with the feature points at the same positions in the first face image and/or the second face image, thereby achieving image registration.
S153': if the registered first iris image and second iris image have an overlapping region, fusing the overlapping region to obtain a stitched image containing the complete near-infrared face.
FIG. 8 is an example of a first iris image: the upper half of a face, containing several feature points of the upper half. FIG. 9 is an example of a second iris image: the lower half of a face, containing several feature points of the lower half. The overlapping region of the two is small, and there are no feature points at the same position within it. FIG. 10 is an example of the first face image or the second face image: a complete face image. Using FIG. 10 as a reference, the stitched image obtained by stitching FIG. 8 and FIG. 9 is shown in FIG. 11 and is a complete face image.
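Example two can be sketched as follows: each iris image is warped into the coordinate frame of the complete face image using facial landmarks visible in both, then the warped pieces are composited. Landmark detection is not shown, and the use of `cv2.estimateAffinePartial2D` plus a maximum-intensity composite is an assumed implementation choice; in practice the face-frame canvas would be scaled up so that iris-level detail is preserved.

```python
import cv2
import numpy as np

def stitch_with_face_reference(iris_imgs, iris_landmarks, face_landmarks, canvas_hw):
    """Register each iris image to the face-image coordinate frame and composite.

    iris_imgs      : list of grayscale NIR iris images (parts of the face)
    iris_landmarks : list of Nx2 landmark arrays detected in each iris image
    face_landmarks : list of Nx2 arrays of the corresponding landmarks in the
                     reference (complete) face image, in the same order
    canvas_hw      : (height, width) of the output canvas in face-frame pixels
    """
    canvas = np.zeros(canvas_hw, dtype=np.uint8)
    for img, pts_iris, pts_face in zip(iris_imgs, iris_landmarks, face_landmarks):
        # Similarity transform mapping iris-image points onto the face frame.
        M, _ = cv2.estimateAffinePartial2D(np.float32(pts_iris), np.float32(pts_face))
        warped = cv2.warpAffine(img, M, (canvas_hw[1], canvas_hw[0]))
        canvas = np.maximum(canvas, warped)  # simple composite; overlaps fuse by max
    return canvas
```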
Examples one and two merely illustrate methods for stitching the second iris image with the first iris image; the invention is not limited to the methods described in these two examples.
In both example one and example two, integrity verification may be performed on the stitched image using the first face image and/or the second face image as a reference, to ensure that the stitched image contains the complete face.
In the process of collecting the first iris image, the user's iris information must be captured quickly and accurately, but the iris region of each person is small (about 1 cm in diameter) and must be captured with a large number of pixels (about 20 pixels/mm). A conventional iris lens therefore suffers from a small depth of field, a small capture range, and poor adaptability to the user's height and position during iris acquisition.
To solve this problem, as shown in FIG. 12, S110 of the invention includes:
S111: acquiring a preliminary face image collected by the face lens.
S112: locating the eyes in the preliminary face image to obtain eye coordinates, calculating a first rotation angle of the rotating pan-tilt from the eye coordinates and the eye reference coordinates of the target acquisition position, and rotating the pan-tilt by the first rotation angle.
Because the field of view of the face lens is large, the eyes are more easily detected in the face image. The invention therefore first obtains the preliminary face image collected by the face lens and, from the eye coordinates located on it, makes an initial calculation of the angle through which the pan-tilt must rotate, namely the first rotation angle.
The target acquisition position is determined in advance; at this position an iris image can be collected that is complete, clear, and has the eyes in the middle of the frame. The eye reference coordinates of the target acquisition position are the eye coordinates in a face image captured at the target acquisition position, determined in advance from the target acquisition position.
As described above, although the target acquisition position is determined in advance, it does not correspond to a fixed position in actual space; if the user's height, distance from the lens, and so on differ, the actual spatial position corresponding to the target acquisition position also differs.
After the eye coordinates are located in the preliminary face image, the pixel difference between the eye coordinates and the preset eye reference coordinates is converted into the angle through which the stepping motor must rotate, namely the first rotation angle; a rotation command is sent to the motor, and the motor rotates by the first rotation angle.
Specifically, when the rotating pan-tilt can rotate in two dimensions, the horizontal and vertical rotation angles are calculated from the pixel differences of the horizontal and vertical coordinates of the eye position, respectively. When the pan-tilt can rotate in only one dimension, up-down or left-right (because the invention is mainly intended to adapt to users of different heights, one-dimensional rotation is generally up-down), the horizontal or vertical rotation angle is calculated from the pixel difference between the horizontal or vertical coordinate of the eye position and the corresponding reference coordinate.
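A minimal sketch of the angle calculation in S112 for the one-dimensional (up/down) case is shown below; `deg_per_px`, the calibration constant relating face-lens pixels to pan-tilt degrees, is an assumed parameter that the patent does not specify. The two-dimensional case applies the same conversion to the horizontal coordinate as well.

```python
def first_rotation_angle(eye_y, ref_eye_y, deg_per_px=0.02):
    """Convert the vertical pixel offset between the detected eye coordinate
    and the eye reference coordinate of the target acquisition position into
    a signed stepping-motor rotation angle (degrees).
    """
    pixel_diff = eye_y - ref_eye_y      # positive: eyes below the reference position
    return pixel_diff * deg_per_px      # sign gives the rotation direction
```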
S113: and acquiring a front iris image acquired by the iris lens when the rotating holder rotates according to the first rotating angle.
When the rotating holder rotates according to the first rotating angle, the iris lens collects iris images in real time to serve as front iris images, and the front iris images are acquired.
S114: and carrying out iris positioning on the preposed iris image to obtain an iris coordinate, calculating a second rotation angle of the rotating holder according to the iris coordinate and the iris reference coordinate of the target acquisition position, and enabling the rotating holder to rotate according to the second rotation angle.
Although the field angle of the iris lens is smaller, the resolution ratio of the iris lens is higher, and the coordinate of the iris is more accurate, so that in the rotating process of the rotating holder, if the iris is found on the collected front iris image, the rotating angle is corrected according to the front iris image to obtain a second rotating angle, and the rotating holder rotates according to the second rotating angle, so that the position of the rotating holder after rotation is more accurate and closer to the target collecting position.
The target acquisition position is determined in advance, and an iris image which is complete, clear and has the human eye position in the middle of the picture can be acquired at the target acquisition position. The iris reference coordinates of the target acquisition position are iris coordinates on an iris image shot at the target acquisition position, and the coordinates are predetermined according to the target acquisition position.
The actual spatial positions (i.e. target acquisition positions) corresponding to the iris reference coordinate and the human eye reference coordinate are the same position, but the iris reference coordinate is set based on the iris image, the human eye reference coordinate is set based on the human face image, and the sizes and resolutions of the iris image and the human face image are different, so the specific numerical values of the iris reference coordinate and the human eye reference coordinate may be different.
S115: and acquiring a first face image and a first iris image which are simultaneously and respectively acquired by the face lens and the iris lens when the rotating holder rotates to the target acquisition position according to the second rotation angle.
The rotating holder rotates according to the second rotation angle, and is located at the target acquisition position after the rotation is finished, and the human face lens and the iris lens acquire a first human face image and a first iris image at the same time.
The invention utilizes the face lens and the iris lens which can rotate simultaneously, adopts a secondary positioning method, rapidly positions the face lens for the first time, accurately positions the iris lens for the second time, can rapidly and accurately realize the adaptation of the height and the position of the user, helps users with different heights/distances to rapidly finish the acquisition of the iris information, almost does not need any cooperation, ensures that the users can accurately finish the acquisition of the face and the iris information under the condition of almost no perception, and synchronously realizes the living body detection.
In some embodiments of the invention, the iris lens generally has at least 2 megapixels; an iris lens with fewer than 2 megapixels generally cannot meet the requirements of iris recognition, which needs a reasonably clear iris texture. The iris lens also generally has at most 8 megapixels, because the fewer the pixels of the iris lens, the better the real-time transmission and processing performance for the images it captures; images from a lens above 8 megapixels are difficult for an ordinary CPU to process in real time, and such images can essentially cover the whole face anyway, so stitching would not be needed.
The invention is illustrated in detail below by way of a specific example:
1. Synchronous acquisition of the first face image and the first iris image
The face lens is switched on to obtain a preliminary face image, and the face information in it is located to find the eye coordinates. The pixel difference between the vertical coordinate of the eyes and that of the eye reference coordinates of the target acquisition position is calculated and converted into a first rotation angle of the stepping motor; a rotation command is sent to the motor and the motor begins to rotate. While the motor rotates, the iris lens collects preliminary iris images in real time; as soon as iris position data are found in a preliminary iris image, the vertical pixel coordinate of the iris is obtained, the pixel difference between the iris vertical coordinate and the vertical coordinate of the iris reference coordinates is calculated a second time, and a second rotation angle of the stepping motor is set from this pixel difference. After rotation to the target acquisition position, the positions of the face and the iris are both suitable, and the first face image and the first iris image are collected synchronously.
2. First step of liveness detection on the first face image and the first iris image
Liveness detection is performed separately on the first face image (full-frame color face information) collected on site and on the first iris image (upper half of the near-infrared face, including the iris and its surrounding region). The judgment may use a conventional silent color-face liveness detection method and a deep-learning-trained near-infrared liveness judgment method.
Because the upper half of the near-infrared face is captured by the iris lens, a great number of minutiae that a conventional face infrared camera cannot capture are recorded, so the accuracy of liveness detection using the first iris image is far higher than that of conventional face liveness detection methods.
3. Second step of liveness detection on the first iris image (information inside the iris region)
Because the half-frame near-infrared face of the first iris image contains clear iris video information, iris liveness detection can be performed quickly using the information inside the iris region.
The iris liveness detection may be performed by observing the pupillary response: the enlargement and constriction of the pupil are observed in synchrony with the flicker rhythm of the white light emitted by the equipment, and a live iris is indicated if the rhythm of the pupil changes matches the flicker rhythm of the white light or differs from it within a certain range.
The liveness state of the iris region can also be judged by a deep-learning-based silent iris-region liveness detection method.
4. Splicing of multiple iris images at different acquisition positions
Because the upper half of the near-infrared face (the information of the iris and the peripheral area thereof) is only close to, whether the false body information of the lower half of the face exists cannot be completely judged. This is also a key problem point of living body detection in multi-modal iris face recognition.
Because the device adopted by the hairstyle is the self-rotating device capable of self-adapting to the height, the face lens and the iris lens on the device are not fixed, and can be rotated and controlled according to the requirements.
Therefore, after the first and second steps of living body detection are completed, the rotating pan-tilt rotates the face lens and the iris lens to the other acquisition positions, images are acquired there, and image splicing is completed.
The information captured before rotation is: a first face image (a) representing the full-width color face, and a first iris image (b) representing the upper half of the near-infrared face (the iris and its peripheral region). The information captured after rotation is: a second face image (c) representing the full-width color face, and a second iris image (d) representing the lower half of the near-infrared face (the mouth and its surrounding area).
Firstly, it must be judged whether a and c are the same face; a face recognition algorithm can be used for this judgment, and since a and c are captured at almost the same time, a very high judgment threshold can be adopted.
Then b and d are spliced; the splicing may use either of the two modes described in the first example and the second example, as sketched below.
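A hedged sketch of this step is given here: the two color face images are compared with a strict threshold before the two iris-camera images are stitched. The embedding function `face_embedding()`, the 0.8 threshold and `stitch_iris_images()` (a feature-point stitcher like the one sketched later for the splicing unit) are assumptions, not the patent's prescribed implementation.

```python
# Same-face gate before stitching; face_embedding() is a hypothetical recognition
# model returning a feature vector, and the threshold is an assumed value.
import numpy as np

def same_face(face_a, face_c, strict_threshold=0.8):
    ea, ec = face_embedding(face_a), face_embedding(face_c)
    sim = float(np.dot(ea, ec) / (np.linalg.norm(ea) * np.linalg.norm(ec) + 1e-8))
    return sim >= strict_threshold

def stitch_if_same_user(face_a, iris_b, face_c, iris_d):
    if not same_face(face_a, face_c):
        raise ValueError("face images at the two positions do not match; abort stitching")
    return stitch_iris_images(iris_b, iris_d)   # e.g. the feature-point method sketched below
```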
5. Performing the third-step living body detection on the stitched image (full-width near-infrared face)
Step 4 yields a complete high-definition near-infrared face image by splicing. Living body detection is then carried out on this near-infrared face image. Although the near-infrared face image is obtained by splicing, its minutiae are very clear, and since the splicing in step 4 uses the color faces (the first face image and the second face image) as reference objects, completeness is not a problem, so a very accurate near-infrared face living body detection can be completed.
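Tying the steps together, the following sketch shows one way the three checks could be combined into the final multi-modal result. It reuses the earlier illustrative functions; `iris_region_liveness()` and `nir_face_model` are further assumptions standing in for the second-step iris check and the full near-infrared face model.

```python
# Overall gate (illustrative): the multi-modal result passes only when the
# color/NIR check, the iris-region check and the stitched-NIR-face check all pass.
def multimodal_liveness(face_a, iris_b, face_c, iris_d,
                        color_model, nir_model, nir_face_model):
    if not first_step_liveness(face_a, iris_b, color_model, nir_model):
        return False
    if not iris_region_liveness(iris_b):      # second step, e.g. the pupil-rhythm check
        return False
    stitched = stitch_if_same_user(face_a, iris_b, face_c, iris_d)
    return nir_face_model.predict(stitched) >= 0.5   # third step on the full NIR face
```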
Example 2:
The embodiment of the invention provides an iris face multi-modal in-vivo detection device; as shown in fig. 13, the device comprises:
a living body detection module 100 for acquiring a plurality of face images and a plurality of iris images acquired by a face lens and an iris lens at a plurality of acquisition positions; splicing the plurality of iris images together to obtain a spliced image under the condition that the plurality of face images are judged to be the same face; and respectively using at least one of the face images, at least one of the iris images and the spliced image to carry out multi-step in-vivo detection, wherein if the multi-step in-vivo detection is passed, the multi-mode in-vivo detection of the iris face is passed.
The invention splices multiple iris images taken at different positions to obtain complete high-definition near-infrared face information and to improve the quality of the near-infrared face image (iris-level image quality and detail), thereby realizing high-precision living body detection. The face images shot at the different positions during splicing ensure that the multiple iris images come from the same user, preventing the error of splicing iris images that do not belong to the same user.
In addition, by means of splicing, the invention realizes multi-step living body detection with a face lens and an iris lens of ordinary resolution, without requiring an ultra-high-resolution lens, and integrates color face living body detection, near-infrared partial face-region living body detection and high-definition complete near-infrared face living body detection without adding extra hardware cost. The hardware cost is low, the volume is small, and real-time data transmission and processing are guaranteed.
As a more specific implementation manner of the present invention, as shown in fig. 14, the foregoing living body detection module 100 includes:
a first obtaining unit 110, configured to obtain a first face image and a first iris image that are simultaneously collected at a target collecting position by the face lens and the iris lens that are simultaneously mounted on the rotating pan/tilt head.
A first liveness detection unit 120 for performing a first step of liveness detection using the first face image and the first iris image.
A second living body detecting unit 130 for extracting an iris region on the first iris image and performing a second living body detection using the iris region.
And the second acquisition unit 140 is configured to acquire at least one second face image and at least one second iris image that are simultaneously acquired by the face lens and the iris lens when the rotating holder rotates to acquisition positions other than the target acquisition position, where those other acquisition positions ensure that all the acquired second iris images, after being spliced with the first iris image, form a complete face.
And the splicing unit 150 is configured to splice all the second iris images with the first iris image together to obtain a spliced image including a complete near-infrared face under the condition that it is determined that the first face image and each of the second face images are the same face.
A third living body detection unit 160 for performing the third step of living body detection using the stitched image.
And if the first step of in-vivo detection, the second step of in-vivo detection and the third step of in-vivo detection all pass, the multi-mode in-vivo detection of the iris face passes.
As shown in fig. 15, the aforementioned splicing unit 150 includes:
and a first extraction subunit 151, configured to extract feature points of the first iris image and the second iris image, and count feature points at the same position of the first iris image and the second iris image.
A first registration subunit 152, configured to perform image registration on the first iris image and the second iris image according to the feature points at the same position, and find an overlapping region in the first iris image and the second iris image.
And the first fusion subunit 153 is configured to fuse the overlapping regions to obtain a spliced image including a complete near-infrared face.
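One possible, non-authoritative realization of subunits 151-153 is sketched below with OpenCV. The choice of ORB features, RANSAC homography and a simple overwrite blend, as well as the canvas size, are implementation assumptions of this sketch rather than requirements of the specification.

```python
# Feature-point stitching sketch: extract features, register the lower NIR image to
# the upper one via a RANSAC homography, then fuse on a shared canvas.
import cv2
import numpy as np

def stitch_iris_images(upper_nir, lower_nir):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(upper_nir, None)
    k2, d2 = orb.detectAndCompute(lower_nir, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # maps lower -> upper coords

    h1, w1 = upper_nir.shape[:2]
    h2, w2 = lower_nir.shape[:2]
    canvas = cv2.warpPerspective(lower_nir, H, (max(w1, w2), h1 + h2))
    canvas[:h1, :w1] = upper_nir          # keep the upper half, overwrite the overlap
    return canvas
```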
Alternatively, as shown in fig. 16, the splicing unit 150 includes:
and a second extraction subunit 151' configured to extract feature points of the first face image and/or the second face image, and extract feature points of the first iris image and the second iris image.
A second registration subunit 152' is configured to perform image registration on the first iris image and the second iris image according to the feature points of the first iris image and the second iris image and the feature points of the same position of the first facial image and/or the second facial image.
And the second fusion subunit 153' is configured to fuse the overlapping regions if the first iris image and the second iris image after the image registration have the overlapping regions, so as to obtain a spliced image including a complete near-infrared face.
As shown in fig. 17, the first acquisition unit 110 includes:
And the first acquiring subunit 111 is configured to acquire a front face image collected by the face lens.
The first calculating subunit 112 is configured to perform human eye positioning on the front human face image to obtain human eye coordinates, calculate a first rotation angle of the rotating holder according to the human eye coordinates and human eye reference coordinates of the target acquisition position, and rotate the rotating holder according to the first rotation angle.
A second obtaining subunit 113, configured to obtain a pre-iris image captured by the iris lens when the rotating pan/tilt head rotates according to the first rotation angle.
And a second calculating subunit 114, configured to perform iris positioning on the pre-iris image to obtain an iris coordinate, calculate a second rotation angle of the rotating holder according to the iris coordinate and the iris reference coordinate of the target acquisition position, and enable the rotating holder to rotate according to the second rotation angle.
And a third obtaining subunit 115, configured to obtain a first face image and a first iris image that are simultaneously and respectively obtained by the face lens and the iris lens when the rotating platform rotates to the target obtaining position according to the second rotation angle.
The pixel count of the iris lens is between 2 million and 8 million, inclusive.
The device provided by this embodiment of the present invention has the same implementation principle and technical effect as method embodiment 1; for brevity, where this device embodiment omits a detail, reference may be made to the corresponding content in method embodiment 1. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus and units described above may refer to the corresponding processes in method embodiment 1 and are not repeated here.
Example 3:
The method of embodiment 1 provided by the present invention can implement its service logic through a computer program recorded on a storage medium, and the storage medium can be read and executed by a computer to achieve the effect of the solution described in embodiment 1 of this specification. Accordingly, the present invention also provides a computer-readable storage medium for iris face multi-modal in-vivo detection, comprising a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the iris face multi-modal in-vivo detection method of embodiment 1.
The storage medium may include a physical device for storing information; typically, the information is digitized and then stored using electrical, magnetic, or optical media. The storage medium may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, tapes, core memories, bubble memories, and USB disks; and devices that store information optically, such as CDs or DVDs. Of course, there are other kinds of readable storage media, such as quantum memory and graphene memory.
The above description of the storage medium according to method embodiment 1 may also include other implementation manners, the implementation principle and the generated technical effect of this embodiment are the same as those of method embodiment 1, and reference may be specifically made to the description of related method embodiment 1, which is not repeated here.
Example 4:
The invention also provides a device for iris face multi-modal living body detection, which can be a stand-alone computer, or an actual operating device that uses one or more of the methods or devices of the embodiments of this specification. The device for iris face multi-modal living body detection can comprise at least one processor and a memory storing computer-executable instructions, and the processor, when executing the instructions, implements the steps of the iris face multi-modal living body detection method of any one or more of the embodiments in embodiment 1.
The above-mentioned description of the device according to the method or apparatus embodiment may also include other implementation manners, the implementation principle and the generated technical effect of this embodiment are the same as those of the foregoing method embodiment 1, and specific reference may be made to the description of the related method embodiment 1, which is not described in detail herein.
Example 5:
The embodiment of the invention provides an iris face multi-modal identification method; as shown in fig. 18, the method comprises the following steps:
s100': the multi-modal in-vivo iris face detection method of embodiment 1 is used for multi-modal in-vivo iris face detection.
S200': and under the condition that the multi-modal living body detection of the iris human face passes, performing multi-modal recognition by using at least one of the first human face image, the second human face image and the spliced image and the first iris image.
After the iris face multi-modal living body detection of embodiment 1 is completed and passed, that is, once the collected iris and face of the user are confirmed to be in a valid live state, multi-modal identity recognition is performed on the color face information (the first face image a or the second face image c), the binocular iris information (the first iris image b) and the near-infrared face information (the stitched image b + d) acquired in embodiment 1.
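A hedged sketch of such a multi-modal comparison is given below, using a simple weighted score fusion over the three modalities. The matcher objects, gallery structure, weights and acceptance threshold are all assumptions introduced for illustration; the specification does not prescribe a particular fusion rule.

```python
# Illustrative score-level fusion for multi-modal identification: color face,
# binocular iris and stitched NIR face scores are combined with fixed weights.
def multimodal_identify(color_face, iris_pair, nir_face, gallery,
                        face_matcher, iris_matcher, nir_matcher,
                        weights=(0.3, 0.5, 0.2), accept=0.7):
    best_id, best_score = None, 0.0
    for person_id, templates in gallery.items():
        s = (weights[0] * face_matcher.score(color_face, templates["face"]) +
             weights[1] * iris_matcher.score(iris_pair, templates["iris"]) +
             weights[2] * nir_matcher.score(nir_face, templates["nir_face"]))
        if s > best_score:
            best_id, best_score = person_id, s
    return best_id if best_score >= accept else None
```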
This embodiment achieves the beneficial effects described in embodiment 1 and additionally realizes comparison of multi-modal feature information (visible-light face, binocular iris, near-infrared face); the multi-modal features make the recognition comparison result more accurate, and the stitched image provides more detailed features, further improving recognition accuracy.
This embodiment 5 includes all the features of the foregoing embodiment 1, and for brevity, corresponding contents in the foregoing embodiment 1 may be referred to where this embodiment 5 is not mentioned, and are not repeated herein.
Example 6:
an embodiment of the present invention provides an iris face multi-modal recognition apparatus, as shown in fig. 19, the apparatus includes:
the multi-modal in-vivo detection module 10' is used for performing multi-modal in-vivo detection on the iris face by using the iris face multi-modal in-vivo detection device described in embodiment 5.
And a multi-modal recognition module 20' for performing multi-modal recognition using the first iris image and at least one of the first face image, the second face image and the stitched image if the multi-modal living body detection of the iris face passes.
The implementation principle and the technical effects of the apparatus provided in this embodiment are the same as those of the apparatus in embodiment 5, and for the sake of brief description, no mention is made to the embodiment of the apparatus, and reference may be made to the corresponding contents in embodiment 5, and further description is omitted here.
Example 7:
The method of the above embodiment 5 provided by the present invention can implement its service logic through a computer program recorded on a storage medium, and the storage medium can be read and executed by a computer to achieve the effect of the solution described in embodiment 5 of this specification. Accordingly, the present invention also provides a computer-readable storage medium for iris face multi-modal recognition, comprising a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the iris face multi-modal recognition method of embodiment 5.
The storage medium may include a physical device for storing information, and typically, the information is digitized and then stored using an electrical, magnetic, or optical media. The specific form of the storage medium can be seen in example 3.
The above description of the storage medium according to method embodiment 5 may also include other implementations. The specific implementation manner and the beneficial effects may refer to the description of the related method embodiment 5, which is not described in detail herein.
Example 8:
The invention also provides a device for iris face multi-modal recognition, which can be a stand-alone computer, or an actual operating device that uses one or more of the methods or devices of the embodiments of this specification. The device for iris face multi-modal recognition may comprise at least one processor and a memory storing computer-executable instructions, and the processor, when executing the instructions, implements the steps of the iris face multi-modal recognition method of any one or more of the embodiments in embodiment 5.
The above-mentioned description of the device according to the method or apparatus embodiment may also include other embodiments, and specific implementation manners and beneficial effects may refer to the description of related method embodiment 5, which is not described in detail herein.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can, within the technical scope of the present disclosure, modify or readily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some technical features; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the present invention and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. An iris face multi-mode in-vivo detection method is characterized by comprising the following steps:
acquiring a plurality of face images and a plurality of iris images acquired by a face lens and an iris lens at a plurality of acquisition positions; splicing the plurality of iris images together to obtain a spliced image under the condition that the plurality of face images are judged to be the same face; and respectively using at least one of the face images, at least one of the iris images and the spliced image to carry out multi-step in-vivo detection, wherein if the multi-step in-vivo detection is passed, the multi-mode in-vivo detection of the iris face is passed.
2. The iris face multi-modal in vivo detection method as claimed in claim 1, wherein said acquiring a plurality of face images and a plurality of iris images acquired by a face lens and an iris lens at a plurality of acquisition positions; splicing the plurality of iris images together to obtain a spliced image under the condition that the plurality of face images are judged to be the same face; respectively using at least one of the face images, at least one of the iris images and the spliced image to carry out multi-step in-vivo detection, wherein if the multi-step in-vivo detection is passed, the multi-mode in-vivo detection of the iris face is passed, and the method comprises the following steps:
acquiring a first face image and a first iris image which are simultaneously acquired at a target acquisition position by the face lens and the iris lens which are simultaneously installed on the rotating holder;
performing a first step of in vivo detection using the first face image and the first iris image;
extracting an iris area from the first iris image, and performing a second-step in-vivo detection by using the iris area;
acquiring at least one second face image and at least one second iris image which are simultaneously acquired by a face lens and an iris lens when the rotating holder rotates to other acquisition positions except the target acquisition position, wherein the other acquisition positions ensure that all the acquired second iris images can form a complete face after being spliced with the first iris image;
under the condition that the first face image and each second face image are judged to be the same face, all the second iris images are spliced with the first iris image to obtain a spliced image containing a complete near-infrared face;
performing a third step of in vivo detection using the stitched image;
and if the first step of in-vivo detection, the second step of in-vivo detection and the third step of in-vivo detection all pass, the multi-mode in-vivo detection of the iris face passes.
3. The iris face multi-mode in-vivo detection method as claimed in claim 2, wherein the stitching all the second iris images with the first iris image to obtain a stitched image containing a complete near-infrared face comprises:
extracting characteristic points of the first iris image and the second iris image, and counting the characteristic points of the same positions of the first iris image and the second iris image;
carrying out image registration on the first iris image and the second iris image according to the feature points at the same position, and finding out an overlapping area in the first iris image and the second iris image;
and fusing the overlapped areas to obtain a spliced image containing the complete near-infrared face.
4. The iris face multi-mode in-vivo detection method as claimed in claim 2, wherein the stitching all the second iris images with the first iris image to obtain a stitched image containing a complete near-infrared face comprises:
extracting characteristic points of the first face image and/or the second face image, and extracting characteristic points of the first iris image and the second iris image;
carrying out image registration on the first iris image and the second iris image according to the feature points of the first iris image and the second iris image and the feature points of the same positions of the first face image and/or the second face image;
and if the first iris image and the second iris image after the image registration have an overlapping region, fusing the overlapping region to obtain a spliced image containing the complete near-infrared face.
5. An iris face multi-mode living body detection method according to any one of claims 1 to 4, wherein the acquiring of the first face image and the first iris image simultaneously acquired at the target acquisition position by the face lens and the iris lens simultaneously installed on the rotating pan-tilt comprises:
acquiring a front face image collected by a face lens;
carrying out human eye positioning on the front human face image to obtain human eye coordinates, calculating a first rotation angle of the rotating holder according to the human eye coordinates and human eye reference coordinates of a target acquisition position, and enabling the rotating holder to rotate according to the first rotation angle;
acquiring a front iris image acquired by an iris lens when the rotating holder rotates according to the first rotating angle;
performing iris positioning on the preposed iris image to obtain an iris coordinate, calculating a second rotation angle of the rotating holder according to the iris coordinate and the iris reference coordinate of the target acquisition position, and enabling the rotating holder to rotate according to the second rotation angle;
and acquiring a first face image and a first iris image which are simultaneously and respectively acquired by the face lens and the iris lens when the rotating holder rotates to the target acquisition position according to the second rotation angle.
6. An iris face multi-modal in vivo detection apparatus, the apparatus comprising:
the living body detection module is used for acquiring a plurality of face images and a plurality of iris images which are acquired by the face lens and the iris lens at a plurality of acquisition positions; splicing the plurality of iris images together to obtain a spliced image under the condition that the plurality of face images are judged to be the same face; and respectively using at least one of the face images, at least one of the iris images and the spliced image to carry out multi-step in-vivo detection, wherein if the multi-step in-vivo detection is passed, the multi-mode in-vivo detection of the iris face is passed.
7. A computer-readable storage medium for multi-modal liveness detection of an iris face, comprising a memory for storing processor-executable instructions that, when executed by the processor, perform steps comprising the multi-modal liveness detection method of an iris face according to any one of claims 1 to 5.
8. An apparatus for multi-modal liveness detection of an iris face, comprising at least one processor and a memory storing computer executable instructions, the processor implementing the steps of the iris face multi-modal liveness detection method as claimed in any one of claims 1 to 5 when executing the instructions.
9. An iris face multi-mode recognition method is characterized by comprising the following steps:
performing multi-mode living detection on the iris face by using the multi-mode living detection method of the iris face according to any one of claims 1 to 5;
and under the condition that the multi-modal living body detection of the iris human face passes, performing multi-modal recognition by using at least one of the first human face image, the second human face image and the spliced image and the first iris image.
10. An iris face multi-modal recognition device, characterized in that the device comprises:
a multi-mode in-vivo detection module, which is used for performing multi-mode in-vivo detection on the iris face by the iris face multi-mode in-vivo detection device of claim 6;
and the multi-modal recognition module is used for performing multi-modal recognition by using at least one of the first face image, the second face image and the spliced image and the first iris image under the condition that the multi-modal living body detection of the iris face passes.