CN109840461B - Identification method and device based on dynamic iris image

Identification method and device based on dynamic iris image

Info

Publication number: CN109840461B
Application number: CN201711236717.6A
Authority: CN (China)
Other versions: CN109840461A
Original language: Chinese (zh)
Inventor: 汪璇
Current and original assignee: Wuhan Zhenyuan Biological Data Co., Ltd.
Application filed by Wuhan Zhenyuan Biological Data Co., Ltd.
Published as CN109840461A; granted as CN109840461B.
Legal status: Active

Abstract

An embodiment of the invention provides an identification method and device based on dynamic iris images, belonging to the technical field of iris recognition within biometric identification. The method uses multiple iris images to find regions whose features repeat across all images, marks them as stable regions, and uses the stable-region feature sequence generated from them as a retrieval code. Because this code is highly exclusive, it rules out most iris templates in a database and greatly narrows the search range. When iris images are registered and identified, features are extracted from multiple iris images, which compensates for iris feature loss caused by changes of eyelashes, eyelids, pupils, and the like, yields a more complete iris image and comprehensive template, and improves comparison accuracy. A relationship between the pupil-change trend and the feature-change trend is also established, allowing feature validity to be judged more reliably. Finally, repeated regions are found by comparing multiple irises, and recording each region's repetition probability allows the security level to be set more flexibly.

Description

Identification method and device based on dynamic iris image
Technical Field
The invention relates to the technical field of iris recognition in the field of biological recognition, in particular to a dynamic iris image-based recognition method and a dynamic iris image-based recognition device.
Background
Existing iris comparison methods reduce the comparison range either by one-by-one comparison or by establishing an iris index coding and retrieval mechanism, and then perform iris matching within that range. However, current iris index codes extract only a few main, simplified iris features for coding: although this narrows the matching range, the codes have high repeatability. Moreover, a normalized iris image contains some invalid parts, which increases the difficulty of iris index encoding and searching. In addition, iris images acquired under the influence of eyelashes, eyelids, and pupil changes suffer iris feature loss, reducing comparison accuracy. How to solve the above problems is therefore a technical problem that urgently needs to be solved.
Disclosure of Invention
The invention provides a dynamic iris image-based identification method and a dynamic iris image-based identification device, and aims to solve the problems.
The invention provides a dynamic iris image-based identification method, which comprises the following steps: acquiring N iris images, wherein N is an integer greater than or equal to 2; screening M iris images meeting a preset rule from the N iris images, wherein M is an integer less than or equal to N; preprocessing the M iris images to generate corresponding M iris code images and recording pupil size change values corresponding to each iris code image; comparing the M iris code images to obtain effective areas appearing in all the iris code images; marking the effective region as a stable region, generating a stable region characteristic sequence, and taking the stable region characteristic sequence as a retrieval code; comparing the M iris code images to obtain a change area appearing and/or disappearing along with the change of the pupil in the iris code image; and taking the change region as a reference region, generating a reference region characteristic sequence, and taking the reference region characteristic sequence as a reference code.
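The steps above can be sketched as a toy pipeline. This is an illustrative assumption, not the claimed implementation: each iris code image is modeled as a set of feature identifiers together with its recorded pupil size change value, so the stable region is the features present in every image and the change regions are the remainder.

```python
def build_codes(code_images):
    """Sketch of the claimed steps S104-S107.

    code_images: list of (feature_set, pupil_ratio) tuples, M >= 2, each
    already preprocessed into an iris code image (modeled here as a set).
    Returns (retrieval_code, reference_code) as sorted feature sequences.
    """
    feature_sets = [feats for feats, _ in code_images]
    # Effective regions appearing in ALL iris code images -> stable region.
    stable = set.intersection(*feature_sets)
    # Regions that appear and/or disappear as the pupil changes -> reference.
    everything = set.union(*feature_sets)
    reference = everything - stable
    # Feature sequences (sorted for determinism) serve as the codes.
    return sorted(stable), sorted(reference)
```

On real images, the sets would come from segmentation and encoding rather than being given directly.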
Optionally, the comparing the M iris code images to obtain the effective regions appearing in all the iris code images includes: comparing the M iris code images and combining the change of the characteristics of the pupil and the iris to obtain a stable characteristic region in the iris image; generating a stable region feature sequence based on the stable feature region.
Optionally, the comparing the M iris code images and combining the changes of the pupil and iris characteristics to obtain the stable characteristic region objectively present in all the iris images includes: sorting the M iris code images by pupil size change value; comparing the M iris code images to obtain the features whose changes are inconsistent with the pupil change; marking those features as objectively nonexistent iris characteristic regions; recording the trend of pupil contraction or dilation against the iris feature changes; obtaining the objectively present but unstable characteristic regions in the M iris code images; obtaining the results of comparing the iris code images pairwise; screening out, from those results, the comparison result with the most characteristic regions; and taking that comparison result as the stable characteristic region.
Optionally, the method further includes, after the taking the effective region as a stable region and generating a stable region feature sequence, and taking the stable region feature sequence as a search code: comparing the M iris code images meeting the preset requirement; acquiring a union set after comparison, generating a characteristic sequence, and taking the characteristic sequence as a comprehensive template; and associating the retrieval code and the reference code with the comprehensive template and storing.
Optionally, the associating and storing the retrieval code and the reference code with the synthesis template further includes: comparing the retrieval code and the reference code with a comparison code stored in a database in advance to obtain a comprehensive template meeting the preset similarity; and comparing the comprehensive template with a comprehensive template prestored in a database to obtain a comparison result.
The invention provides a recognition device based on dynamic iris images, which comprises: the iris image acquisition module is used for acquiring N iris images, wherein N is an integer greater than or equal to 2; the first iris image preprocessing module is used for screening M iris images meeting a preset rule from the N iris images, wherein M is an integer less than or equal to N; the second iris image preprocessing module is used for preprocessing the M iris images, generating corresponding M iris code images and recording pupil size change values corresponding to each iris code image; the effective region data acquisition module is used for comparing the M iris code images and acquiring effective regions appearing in all the iris code images; a stable region marking module, configured to mark the effective region as a stable region, generate a stable region feature sequence, and use the stable region feature sequence as a search code; the change region data acquisition module is used for comparing the M iris code images and acquiring a change region which appears and/or disappears along with the change of the pupil in the iris code images; and the reference region marking module is used for taking the change region as a reference region, generating a reference region characteristic sequence and taking the reference region characteristic sequence as a reference code.
Optionally, the second iris image preprocessing module includes: the first sub-module is used for comparing the M iris code images and acquiring objective stable characteristic regions in all the iris images by combining the change of characteristics of pupils and irises; a second sub-module for generating a stable region feature sequence based on the stable feature region.
Optionally, the first sub-module is specifically configured to: sort the M iris code images by pupil size change value; compare the M iris code images to obtain the features whose changes are inconsistent with the pupil change; mark those features as objectively nonexistent iris characteristic regions; record the trend of pupil contraction or dilation against the iris feature changes; obtain the objectively present but unstable characteristic regions in the M iris code images; obtain the results of comparing the iris code images pairwise; screen out, from those results, the comparison result with the most characteristic regions; and take that comparison result as the stable characteristic region.
Optionally, the marking a stable region module further includes: the data comparison module is used for comparing the M iris code images meeting the preset requirements; the comprehensive template generating module is used for acquiring the compared union set, generating a characteristic sequence and taking the characteristic sequence as a comprehensive template; and the database module is used for associating and storing the retrieval codes and the reference codes with the comprehensive template.
Optionally, the database module further includes: the characteristic sequence acquisition module is used for comparing the retrieval code and the reference code with a comparison code stored in a database in advance to acquire a comprehensive template meeting the preset similarity; and the characteristic sequence comparison module is used for comparing the comprehensive template with a comprehensive template pre-stored in a database to obtain a comparison result.
The identification method and device based on dynamic iris images provided by the invention have the following beneficial effects. A stable feature sequence, generated by searching multiple iris images for regions whose features repeat and marking those as the stable regions of the iris code images, serves as a retrieval code; its high exclusivity rules out most iris templates in the database and greatly narrows the range. When iris images are registered and identified, the iris images are fused, which compensates for iris feature loss caused by changes of eyelashes, eyelids, pupils, and the like, yields a relatively complete iris image and comprehensive template, and improves comparison accuracy. Establishing the relationship between the pupil-change trend and the feature-change trend makes it possible to judge feature validity more reliably. Searching multiple irises for repeated regions and recording each region's repetition probability allows the security level to be set more flexibly.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present invention;
fig. 2 is a flowchart of a dynamic iris image-based recognition method according to a first embodiment of the present invention;
FIG. 3 is a diagram illustrating pupil change in the dynamic iris image-based recognition method shown in FIG. 2;
FIG. 4 is a diagram illustrating pupil change in the dynamic iris image-based recognition method shown in FIG. 2;
fig. 5 is a flowchart of a dynamic iris image-based recognition method according to a second embodiment of the present invention;
fig. 6 is a functional block diagram of a recognition apparatus based on dynamic iris image according to a third embodiment of the present invention;
fig. 7 is a functional block diagram of a recognition apparatus based on dynamic iris images according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings; obviously, the described embodiments are some, but not all, embodiments of the present invention. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention as claimed, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present invention. The electronic device 300 includes a dynamic iris image-based recognition apparatus, a memory 302, a storage controller 303, a processor 304, and a peripheral interface 305.
The memory 302, memory controller 303, processor 304 and peripheral interface 305 are electrically connected to each other, directly or indirectly, to enable data transfer or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The dynamic iris image-based recognition apparatus includes at least one software function module that may be stored in the memory 302 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the electronic device 300. The processor 304 is configured to execute an executable module stored in the memory 302, such as a software functional module or a computer program included in the dynamic iris image-based recognition apparatus.
The memory 302 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 302 is used for storing a program, and the processor 304 executes the program after receiving an execution instruction; the method performed by the server, as defined by the flows disclosed in any of the foregoing embodiments, may be applied to the processor 304 or implemented by the processor 304.
The processor 304 may be an integrated circuit chip having signal processing capabilities. The processor 304 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The peripheral interface 305 couples various input/output devices to the processor 304 and to the memory 302. In some embodiments, the peripheral interface 305, the processor 304, and the memory controller 303 may be implemented in a single chip. In other embodiments, they may each be implemented as separate chips.
Fig. 2 is a flowchart of a dynamic iris image-based recognition method according to a first embodiment of the present invention. The identification method based on the dynamic iris image is applied to a server, and a specific flow shown in fig. 2 will be described in detail below.
Step S101, acquiring N iris images, wherein N is an integer greater than or equal to 2.
S102, screening M iris images meeting a preset rule from the N iris images, wherein M is an integer less than or equal to N.
The preset rule may be a quality requirement, where meeting the quality requirement means meeting national-standard quality requirements; its content includes sharpness, the pupil-to-iris dilation ratio, and the like. This is not specifically limited here.
In this embodiment, M is an integer greater than 1 and less than or equal to N.
Step S103, preprocessing the M iris images to generate corresponding M iris code images and recording pupil size change values corresponding to each iris code image.
The preprocessing may be, but is not limited to, performing alignment, segmentation, normalization, and the like on the M iris images.
The pupil size change value can be a pupil-iris radius ratio, wherein the pupil-iris radius ratio is the ratio of the pupil radius to the iris radius, and the pupil-iris ratio eliminates the influence on the pupil radius caused by the lens distance and the zoom multiple, so that the pupil change condition can be accurately reflected.
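The pupil size change value described above can be computed directly. A minimal sketch (function name and pixel units are assumptions for illustration):

```python
def pupil_iris_ratio(pupil_radius_px, iris_radius_px):
    """Pupil size change value as the pupil-to-iris radius ratio.

    Using the ratio rather than the raw pupil radius cancels out scale
    changes from lens distance and zoom, as the text notes, so the pupil
    change condition is reflected accurately.
    """
    if iris_radius_px <= 0:
        raise ValueError("iris radius must be positive")
    return pupil_radius_px / iris_radius_px
```

For example, the same eye imaged at twice the zoom doubles both radii, so the ratio is unchanged.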
The iris code image is an image obtained by excluding occlusion factors such as eyelashes and eyelids, extracting from the image the effective information usable for identification, and encoding that information. Extraction methods include the 2D Gabor filter, wavelet zero-crossing detection, the Laplacian of Gaussian transform, and the like, and are not specifically limited here.
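As one of the encoding options mentioned, a Gabor-based code can be sketched as follows. This is a simplified illustrative variant, not the patented encoding: a 1-D complex Gabor kernel is applied along the angular axis of a normalized iris strip and the response phase is quantized to two bits per pixel; a real system would typically use 2-D Gabor filters at several scales and orientations.

```python
import numpy as np

def encode_iris(norm_strip, frequency=0.1):
    """Encode a normalized iris strip (rows = radius, cols = angle)
    into a binary code by quantizing the phase of a 1-D complex
    Gabor response along each row."""
    n = norm_strip.shape[1]
    x = np.arange(n)
    # Complex Gabor kernel: Gaussian envelope times complex sinusoid.
    kernel = np.exp(-0.5 * ((x - n // 2) / (n / 8)) ** 2) \
             * np.exp(2j * np.pi * frequency * (x - n // 2))
    rows = []
    for row in norm_strip.astype(float):
        resp = np.convolve(row - row.mean(), kernel, mode="same")
        # Two bits per pixel: signs of the real and imaginary parts.
        rows.append(np.stack([resp.real > 0, resp.imag > 0], axis=-1))
    return np.array(rows, dtype=bool)  # shape: (radii, angles, 2)
```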
As an implementation manner, the M iris images are preprocessed through an image processing algorithm and are filtered to generate corresponding M iris code images, and the pupil-iris radius ratio corresponding to each iris code image is recorded.
And step S104, comparing the M iris code images to acquire effective areas appearing in all the iris code images.
The normalized iris code image is the iris code image preprocessed in step S103, that is, the M iris code images.
The effective region refers to a region excluding all occluded pixels such as eyelashes and eyelids after iris image segmentation, namely, an effective region with the same characteristics.
As an implementation mode, firstly, comparing the M iris code images and combining the change of the characteristics of the pupil and the iris to acquire objective stable characteristic regions in all the iris images; and generating a stable region characteristic sequence based on the stable characteristic region.
Specifically, the M iris code images are sorted by pupil size change value and compared to obtain the features whose changes are inconsistent with the pupil change; those features are then marked as objectively nonexistent iris feature regions. For example, when the M iris code images are compared while the pupil contracts (or dilates), any features whose change is not consistent with the pupil change are judged, and marked, as objectively nonexistent feature regions. Consistency here means that a feature changes correspondingly with the contraction (or dilation) of the pupil, e.g. it grows (or shrinks) as the pupil contracts (or dilates).
A feature region that does not exist objectively may be caused by random noise introduced by environmental influences during acquisition, such as region B in fig. 3. The trend of pupil contraction or dilation against the iris feature changes is then recorded: for example, a feature region of 30 at pupil radius 20, or 25 at pupil radius 30, i.e. the feature region shrinks as the pupil radius grows. Next, the objectively present but unstable feature regions in the M iris code images are obtained: when the M iris code images are compared during pupil contraction or dilation, some features change consistently with the pupil yet do not appear in every iris code image; these are judged objectively present but unstable, possibly because external interference during acquisition prevented them from being captured at some moment, as shown in region C of fig. 4. Finally, a comparison result is taken as the stable feature region: after excluding the objectively nonexistent and the unstable feature regions corresponding to the M iris code images, pairs of iris code images are compared and intersected to obtain M/2 results, the result with the most feature regions is selected as the stable feature region, a stable-region feature sequence is generated from it, and that sequence is used as the retrieval code. In this way, the objectively present and stable iris features are better preserved.
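The pairwise comparison just described can be sketched with feature regions modeled as hashable identifiers. Note one assumption: the text pairs the images to obtain M/2 intersection results, whereas this sketch compares every pair (a superset of that scheme); both select the richest intersection.

```python
from itertools import combinations

def select_stable_region(code_images):
    """Pick the stable characteristic region: intersect iris code
    images pairwise and keep the result with the most feature regions.

    code_images: list of feature sets, already stripped of objectively
    nonexistent and objectively-present-but-unstable regions.
    """
    best = set()
    for a, b in combinations(code_images, 2):  # compare two by two
        inter = a & b                          # intersection of the pair
        if len(inter) > len(best):             # keep the richest result
            best = inter
    return best
```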
Here, "the most characteristic regions" refers to the pairwise comparison-and-intersection result that contains the largest number of feature regions, i.e. the result whose feature regions cover the largest total area.
It should be noted that iris feature regions near the eyelashes and eyelids do not belong to the objectively nonexistent feature regions, because the eyelashes float slightly and may occlude the iris region when blown by wind, and the degree to which the eyelid opens may likewise occlude an iris feature region.
And step S105, marking the effective region as a stable region, generating a stable region characteristic sequence, and taking the stable region characteristic sequence as a retrieval code.
And S106, comparing the M iris code images to acquire a change area which appears and/or disappears along with the change of the pupil in the iris code image.
The change area is an area which disappears or appears when the pupil changes to a certain value.
And step S107, taking the change region as a reference region, generating a reference region characteristic sequence, and taking the reference region characteristic sequence as a reference code.
In this embodiment, as an implementation manner, a change region that disappears or appears when the pupil change reaches a certain value is marked as a reference region, a reference region feature sequence is generated, and the reference region feature sequence is used as a reference code.
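This marking step can be sketched as follows. The representation is an assumption: each iris code image is a feature set with its pupil ratio, and a feature qualifies as a reference (change) region if, with the images ordered by pupil ratio, its presence flips exactly once, i.e. it appears or disappears once the pupil change reaches a certain value.

```python
def find_reference_regions(code_images):
    """Return {feature: pupil_ratio_at_which_it_flips} for features
    that appear or disappear at one pupil-change point (e.g. crypts).

    code_images: list of (feature_set, pupil_ratio) tuples.
    """
    ordered = sorted(code_images, key=lambda p: p[1])
    all_feats = set().union(*(f for f, _ in ordered))
    reference = {}
    for feat in all_feats:
        presence = [feat in f for f, _ in ordered]
        flips = sum(presence[i] != presence[i + 1]
                    for i in range(len(presence) - 1))
        if flips == 1:  # appears or disappears exactly once
            idx = next(i for i in range(len(presence) - 1)
                       if presence[i] != presence[i + 1])
            reference[feat] = ordered[idx + 1][1]
    return reference
```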
The steps above distinguish three kinds of iris features: objectively present and stable, objectively present but unstable, and objectively nonexistent. This embodiment preserves, as reference features, the iris features that are in fact stable but would otherwise be misclassified as objectively present but unstable — for example, crypts that disappear or appear once the pupil dilates or contracts to a certain degree. Crypts are important individualized reference features; excluding them during the earlier stable-region selection would not help identification accuracy.
Fig. 5 is a flowchart of a dynamic iris image-based recognition method according to a second embodiment of the present invention. The identification method based on the dynamic iris image is applied to a server, and a specific flow shown in fig. 5 will be described in detail below.
Step S201, acquiring N iris images, wherein N is an integer greater than or equal to 2.
Step S202, M iris images meeting a preset rule are screened from the N iris images, wherein M is an integer less than or equal to N.
Step S203, preprocessing the M iris images to generate corresponding M iris code images and recording the corresponding pupil-iris radius ratio of each iris code image.
And S204, comparing the M iris code images to acquire effective areas appearing in all the iris code images.
And step S205, marking the effective region as a stable region, generating a stable region characteristic sequence, and taking the stable region characteristic sequence as a retrieval code.
And S206, comparing the M iris code images to acquire a change area which appears and/or disappears along with the change of the pupil in the iris code image.
Step S207, using the variation region as a reference region, generating a reference region feature sequence, and using the reference region feature sequence as a reference code.
For the detailed implementation of steps S201 to S207, please refer to the corresponding steps in the first embodiment, which will not be described herein.
And S208, comparing the M iris code images meeting the preset requirement.
In one embodiment, the objectively nonexistent iris feature regions are excluded (for example, those marked in the processing around step S205), a union is then obtained, and all iris code images in the union are used as target iris code images. Finally, repeated regions are searched for by comparing the M iris code images, and every repeated region is marked together with its repetition probability.
And S209, acquiring the compared union set, generating a characteristic sequence, and taking the characteristic sequence as a comprehensive template.
In one embodiment, the repetition region and repetition probability obtained in step S208 are used together with the recorded pupil contraction or dilation and the trend of iris feature changes as the integrated template.
And step S210, associating and storing the retrieval code, the reference code and the comprehensive template.
The retrieval code and the reference code are associated with the comprehensive template and stored in a database. The database may be a local database or a database server; this is not specifically limited here.
And step S211, comparing the retrieval code and the reference code with a comparison code pre-stored in a database to obtain a comprehensive template meeting the preset similarity.
The preset similarity value may be 90% to 100%, and the specific selection of the preset similarity value may be selected according to actual requirements, which is not specifically limited herein.
As an embodiment, the search code and the reference code are compared with the comparison code stored in the database in advance, so that most unmatched objects are eliminated.
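This coarse filtering step can be sketched as below. The similarity measure is an assumption (a Jaccard-style overlap on feature sets); the patent does not fix a particular metric, only that templates meeting the preset similarity (90%-100%) survive.

```python
def filter_candidates(search_code, database, min_similarity=0.9):
    """Compare the query's search code against comparison codes stored
    in the database and keep only the template IDs whose similarity
    meets the preset threshold, eliminating most unmatched objects.

    database: {template_id: stored_feature_set}
    """
    survivors = []
    for template_id, stored_code in database.items():
        union = search_code | stored_code
        sim = len(search_code & stored_code) / len(union) if union else 1.0
        if sim >= min_similarity:
            survivors.append(template_id)
    return survivors
```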
Step S212, comparing the comprehensive template with a comprehensive template pre-stored in a database to obtain a comparison result.
When identification is carried out, the comparison result is obtained by comparing the pre-stored comprehensive template with the comprehensive template generated from the iris of the person to be identified.
As an implementation, the comprehensive template is compared with a comprehensive template stored in the database in advance, and the regions corresponding to a given repetition probability are selected for matching according to a preset security level. For example, at a low security level only the regions with 100% repetition are compared; as the security level rises, the regions with 100% and 90% repetition are compared, and so on — this is not specifically limited here. During comparison, effective features can be selected by establishing the relationship between the pupil-change trend and the feature-change trend, so that feature validity is judged more reliably. If certain features cannot be acquired during recognition because environmental influences change the pupil, they can be ignored or down-weighted according to the recorded trend; this, too, is not specifically limited here.
For example, feature points may be discriminated using the recorded trend of pupil contraction or dilation against the iris feature changes: when a feature in the comprehensive template to be identified follows the same feature-change trend recorded in the comprehensive template pre-stored in the database, it is judged a credible feature; when it does not, it is judged an incredible feature. Whether the comparison passes is then decided: if it passes, authority is granted; if not, it is rejected.
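The security-level example above can be sketched directly. The level names and cutoffs are illustrative assumptions (the text gives only the 100%/90% example): a higher security level admits regions with lower repetition probability into the comparison.

```python
def regions_for_security_level(repeat_regions, level):
    """Select which repeated regions to compare for a security level.

    repeat_regions: {region_id: repetition_probability in [0, 1]}
    At 'low', only regions repeated in 100% of enrolment images are
    used; higher levels also admit lower repetition probabilities.
    """
    thresholds = {"low": 1.0, "medium": 0.9, "high": 0.8}  # assumed
    cutoff = thresholds[level]
    return {r for r, prob in repeat_regions.items() if prob >= cutoff}
```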
Fig. 6 is a schematic diagram of functional modules of a recognition apparatus based on dynamic iris images according to a third embodiment of the present invention. The dynamic iris image-based recognition apparatus 400 includes an iris image acquisition module 410, a first iris image preprocessing module 420, a second iris image preprocessing module 430, an effective region data acquisition module 440, a mark stable region module 450, a change region data acquisition module 460, and a mark reference region module 470.
An iris image acquisition module 410, configured to acquire N iris images, where N is an integer greater than or equal to 2.
A first iris image preprocessing module 420, configured to screen M iris images that meet a preset rule from the N iris images, where M is an integer less than or equal to N.
And a second iris image preprocessing module 430, configured to preprocess the M iris images, generate corresponding M iris code images, and record a pupil-iris radius ratio corresponding to each iris code image.
Wherein the second iris image preprocessing module 430 includes: a first submodule 431 and a second submodule 432.
The first submodule 431 is configured to compare the M iris code images and, by combining the changes of pupil and iris characteristics, objectively obtain the stable feature regions present in all the iris images.
The first submodule 431 is specifically configured to: sort the M iris code images by the pupil-iris radius ratio; compare the M iris code images to obtain the feature quantities whose changes are inconsistent with the change of the pupil; mark those feature quantities as objectively non-existent iris feature regions; record the trend relating pupil contraction or dilation to iris feature change; obtain the objectively existing but unstable feature regions in the M iris code images; obtain the result of comparing each pair of iris code images; select from these results the comparison result with the most feature regions; and take that comparison result as the stable feature region.
A second submodule 432, configured to generate a stable region feature sequence based on the stable feature region.
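The first submodule's pipeline — sort by pupil-iris radius ratio, intersect each pair of code images, and keep the richest intersection — might be sketched as follows. The set-of-feature-ids encoding of an iris code image is an assumption for illustration (real iris codes are binary maps), and the trend-based exclusion steps are omitted for brevity:

```python
# Hypothetical sketch of the stable-region search. Each coded image is reduced
# to a (pupil_iris_ratio, set_of_feature_ids) pair.
from itertools import combinations

def stable_region(codes):
    """Return the richest pairwise intersection of the images' feature sets."""
    ordered = sorted(codes, key=lambda c: c[0])         # sort by radius ratio
    feature_sets = [feats for _, feats in ordered]
    pairwise = [a & b for a, b in combinations(feature_sets, 2)]
    return max(pairwise, key=len)                       # most shared features

codes = [(0.30, {"f1", "f2", "f3"}),
         (0.35, {"f1", "f2", "f4"}),
         (0.50, {"f1", "f5"})]
```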
An effective region data obtaining module 440, configured to compare the M iris code images, and obtain effective regions appearing in all the iris code images.
A mark stable region module 450, configured to mark the effective region as a stable region, generate a stable region feature sequence, and use the stable region feature sequence as a search code.
A change region data obtaining module 460, configured to compare the M iris code images, and obtain a change region that appears and/or disappears along with pupil change in the iris code image.
A mark reference region module 470, configured to use the variation region as a reference region, generate a reference region feature sequence, and use the reference region feature sequence as a reference code.
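A minimal sketch of how the two sequences above could be serialized into a search code and a reference code — the sorted-tuple encoding is an assumption, since the patent does not specify a format:

```python
# Hypothetical sketch: the stable region's features become the search code, the
# change region's features the reference code; each is serialized as a sorted
# tuple so that equal regions always yield equal codes.

def make_codes(stable_region, change_region):
    search_code = tuple(sorted(stable_region))
    reference_code = tuple(sorted(change_region))
    return search_code, reference_code
```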
Fig. 7 is a schematic diagram of functional modules of a recognition apparatus based on dynamic iris images according to a fourth embodiment of the present invention. The dynamic iris image-based recognition device 500 comprises an iris image acquisition module 510, a first iris image preprocessing module 520, a second iris image preprocessing module 530, an effective region data acquisition module 540, a stable region marking module 550, a changed region data acquisition module 560, a reference region marking module 570, a data comparison module 580, a comprehensive template generation module 590, a database module 591, a feature sequence acquisition module 592 and a feature sequence comparison module 593.
An iris image acquisition module 510, configured to acquire N iris images, where N is an integer greater than or equal to 2.
A first iris image preprocessing module 520, configured to screen M iris images that meet a preset rule from the N iris images, where M is an integer less than or equal to N.
A second iris image preprocessing module 530, configured to preprocess the M iris images, generate corresponding M iris code images, and record a pupil-iris radius ratio corresponding to each iris code image.
Wherein the second iris image preprocessing module 530 comprises: a first sub-module 531 and a second sub-module 532.
The first sub-module 531 is configured to compare the M iris code images and, by combining the changes of pupil and iris characteristics, objectively obtain the stable feature regions present in all the iris images.
The first sub-module 531 is specifically configured to: sort the M iris code images by the pupil-iris radius ratio; compare the M iris code images to obtain the feature quantities whose changes are inconsistent with the change of the pupil; mark those feature quantities as objectively non-existent iris feature regions; record the trend relating pupil contraction or dilation to iris feature change; obtain the objectively existing but unstable feature regions in the M iris code images; obtain the result of comparing each pair of iris code images; select from these results the comparison result with the most feature regions; and take that comparison result as the stable feature region.
A second sub-module 532 for generating a stable region feature sequence based on the stable feature region.
An effective region data obtaining module 540, configured to compare the M iris code images, and obtain effective regions appearing in all the iris code images.
And a mark stable region module 550, configured to mark the effective region as a stable region, generate a stable region feature sequence, and use the stable region feature sequence as a search code.
A change region data acquiring module 560, configured to compare the M iris code images, and acquire a change region that appears and/or disappears along with pupil change in the iris code image;
a mark reference region module 570, configured to use the variation region as a reference region, generate a reference region feature sequence, and use the reference region feature sequence as a reference code.
And the data comparison module 580 is configured to compare the M iris code images meeting preset requirements.
The data comparison module 580 is specifically configured to: excluding the objectively non-existent iris feature region; and searching for a repetition region by comparing the M iris code images, and marking all the repetition regions and the repetition probability.
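The repetition-probability marking done by the data comparison module, together with the security-level selection described earlier, might look like the following sketch; the level-to-threshold mapping and the set encoding of features are illustrative assumptions:

```python
# Hypothetical sketch: features from the M coded images are pooled, each feature
# is annotated with the fraction of images in which it repeats, and a preset
# security level then decides which repetition probabilities take part in the
# comparison.
from collections import Counter

SECURITY_THRESHOLDS = {"low": 1.0, "medium": 0.9, "high": 0.8}  # assumed mapping

def repeat_probabilities(feature_sets):
    """Map each feature in the union to the fraction of images containing it."""
    counts = Counter(f for feats in feature_sets for f in feats)
    m = len(feature_sets)
    return {feature: n / m for feature, n in counts.items()}

def select_for_level(probabilities, level):
    """Keep only the features whose repetition probability meets the level."""
    threshold = SECURITY_THRESHOLDS[level]
    return {f for f, p in probabilities.items() if p >= threshold}
```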
A comprehensive template generating module 590, configured to obtain the compared iris feature union set, generate a feature sequence, and use the feature sequence as a comprehensive template.
And the database module 591 is used for associating and storing the retrieval codes and the reference codes with the comprehensive template.
A feature sequence acquisition module 592, configured to compare the search code and the reference code with comparison codes pre-stored in a database, and obtain a comprehensive template meeting a preset similarity.
And the characteristic sequence comparison module 593 is used for comparing the comprehensive template with a comprehensive template stored in the database in advance to obtain a comparison result.
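Putting modules 592 and 593 together, the two-stage matching could be sketched as below. The Jaccard similarity and both thresholds are illustrative assumptions — the patent fixes neither a similarity measure nor its parameters:

```python
# Hypothetical two-stage sketch: the search code first prunes the database to
# candidate records above a similarity threshold, and only those candidates'
# full comprehensive templates are then compared.

def jaccard(a, b):
    """Assumed set-similarity measure; the patent does not specify one."""
    return len(a & b) / len(a | b) if a | b else 0.0

def identify(search_code, template, database,
             sim_threshold=0.5, match_threshold=0.8):
    # Stage 1: retrieval code excludes most templates in the database.
    candidates = [rec for rec in database
                  if jaccard(search_code, rec["search_code"]) >= sim_threshold]
    # Stage 2: only candidates are compared against the full template.
    for rec in candidates:
        if jaccard(template, rec["template"]) >= match_threshold:
            return rec["user"]
    return None
```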
In summary, the identification method and device based on dynamic iris images provided by the invention have the following beneficial effects. The stable-region feature sequence, generated by searching a plurality of iris images for feature repetition regions and used as a retrieval code, is highly exclusive, so most iris templates in the database can be excluded and the search range greatly narrowed. When iris images are registered and identified, fusing a plurality of iris images compensates for iris feature loss caused by changes in eyelashes, eyelids, pupils and the like, yielding relatively complete iris images and comprehensive templates and improving comparison accuracy. Establishing the relationship between the pupil variation trend and the feature variation trend allows the validity of features to be judged more reliably. And by searching for repetition regions through comparison of a plurality of irises and recording each region's repetition probability, the security level can be set more flexibly.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code. It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (6)

1. A recognition method based on dynamic iris images is characterized by comprising the following steps:
acquiring N iris images, wherein N is an integer greater than or equal to 2;
screening M iris images meeting a preset rule from the N iris images, wherein M is an integer less than or equal to N;
preprocessing the M iris images to generate corresponding M iris code images and recording pupil size change values corresponding to each iris code image;
comparing the M iris code images to obtain effective areas appearing in all the iris code images in the M iris code images;
taking the effective region as a stable region, generating a stable region characteristic sequence, and taking the stable region characteristic sequence as a retrieval code;
comparing the M iris code images to obtain a change area appearing and/or disappearing along with the change of the pupil in the iris code image;
taking the change region as a reference region, generating a reference region characteristic sequence, and taking the reference region characteristic sequence as a reference code;
comparing the M iris code images meeting the preset requirement, wherein the comparison of the M iris code images meeting the preset requirement comprises the following steps: eliminating marked iris characteristic regions which do not exist objectively, and then solving a union set; all iris code images in the union set are used as target iris code images; searching for a repeated region by comparing the M iris code images, marking all the repeated regions and the repeated probability, and obtaining a compared iris feature union set;
acquiring the compared iris feature union set, generating a feature sequence, and taking the feature sequence as a comprehensive template;
associating and storing the retrieval code, the reference code and the comprehensive template;
comparing the retrieval code and the reference code with a comparison code stored in a database in advance to obtain a comprehensive template meeting the preset similarity;
comparing the comprehensive template with a comprehensive template pre-stored in a database to obtain a comparison result;
wherein the objectively non-existent iris feature region is obtained by:
determining a feature quantity that does not change correspondingly with the contraction or dilation of the pupil to be the objectively non-existent iris feature region, and marking the feature quantity as the objectively non-existent iris feature region.
2. The method according to claim 1, wherein said comparing said M iris-coded images to obtain valid regions appearing in all said iris-coded images comprises:
comparing the M iris code images pairwise and combining the change of characteristics of pupils and irises to obtain stable characteristic regions in all the iris images;
and generating a stable region feature sequence based on the stable feature region, wherein the stable feature region represents a region of the iris image from which the same features appearing due to external interference are removed.
3. The method according to claim 2, wherein said comparing said M iris-coded images and combining the pupil-iris feature changes to obtain all the stable feature regions objectively existing in said iris images, comprises:
sequencing the M iris code images according to the size of the pupil size change value;
comparing the M iris code images to obtain a characteristic quantity of inconsistent change between the M iris code images and the pupil, wherein the comparing the M iris code images comprises: comparing the change of the characteristics in the M iris code images with the change of pupil contraction or expansion, wherein the characteristic quantity with inconsistent change is the characteristic quantity which does not generate corresponding change along with the contraction or expansion change of the pupil;
labeling the feature quantity as an objectively absent iris feature region, wherein the labeling the feature quantity as an objectively absent iris feature region includes: judging the characteristic quantity which does not change correspondingly with the contraction or expansion of the pupil into the objective nonexistent iris characteristic region, and marking the objective nonexistent iris characteristic region;
recording the trend relating pupil constriction or dilation to the change of the iris feature, wherein the trend of change includes: the iris feature decreasing as the pupil constricts, the iris feature increasing as the pupil dilates, the iris feature increasing as the pupil constricts, or the iris feature decreasing as the pupil dilates;
acquiring the objectively existing but unstable feature regions in the M iris code images, wherein the objectively existing but unstable feature regions are acquired as follows: a feature whose change trend is consistent with the pupil constriction or dilation trend but which does not appear in every coded image is judged to be an objectively existing but unstable feature region;
obtaining the result of comparing each pair of iris code images, wherein the comparison result is obtained by excluding from the M iris code images the corresponding objectively non-existent iris feature regions and objectively existing but unstable feature regions, then comparing the iris code images pairwise and taking the intersection;
screening out, from the pairwise intersection results, the comparison result with the most feature regions, where the most feature regions are the feature regions that, relative to the other pairwise intersection results, contain the greatest number of features and have the largest total area;
and taking the comparison result as the stable feature region.
4. An identification device based on a dynamic iris image, comprising:
the iris image acquisition module is used for acquiring N iris images, wherein N is an integer greater than or equal to 2;
the first iris image preprocessing module is used for screening M iris images meeting a preset rule from the N iris images, wherein M is an integer less than or equal to N;
the second iris image preprocessing module is used for preprocessing the M iris images, generating corresponding M iris code images and recording pupil size change values corresponding to each iris code image;
the effective region data acquisition module is used for comparing the M iris code images and acquiring effective regions appearing in all the iris code images;
a stable region marking module, configured to mark the effective region as a stable region, generate a stable region feature sequence, and use the stable region feature sequence as a search code;
the change region data acquisition module is used for comparing the M iris code images and acquiring a change region which appears and/or disappears along with the change of the pupil in the iris code images;
a reference region marking module, configured to use the change region as a reference region, generate a reference region feature sequence, and use the reference region feature sequence as a reference code;
wherein the mark stable region module then further comprises:
the data comparison module is used for comparing the M iris code images meeting the preset requirements, wherein the M iris code images meeting the preset requirements are compared, and the data comparison module comprises: eliminating marked iris characteristic regions which do not exist objectively, and then solving a union set; all iris code images in the union set are used as target iris code images; searching for a repeated region by comparing the M iris code images, marking all the repeated regions and the repeated probability, and obtaining a compared iris feature union set;
a comprehensive template generation module, configured to obtain the compared iris feature union set, generate a feature sequence, and use the feature sequence as a comprehensive template;
the database module is used for associating and storing the retrieval codes, the reference codes and the comprehensive template;
wherein, the database module later still includes:
the characteristic sequence acquisition module is used for comparing the retrieval code and the reference code with a comparison code stored in a database in advance to acquire a comprehensive template meeting the preset similarity;
the characteristic sequence comparison module is used for comparing the comprehensive template with a comprehensive template stored in a database in advance to obtain a comparison result;
wherein the second iris image preprocessing module is further configured to obtain the iris feature region that does not objectively exist by:
determining a feature quantity that does not change correspondingly with the contraction or dilation of the pupil to be the objectively non-existent iris feature region, and marking the feature quantity as the objectively non-existent iris feature region.
5. The apparatus of claim 4, wherein the second iris image preprocessing module comprises:
the first sub-module is used for comparing the M iris code images and, by combining the change of pupil and iris characteristics, objectively acquiring the stable feature regions present in all the iris images;
a second sub-module for generating a stable region feature sequence based on the stable feature region, wherein the stable feature region represents a region of the iris image from which the same feature appearing due to external interference is removed.
6. The apparatus of claim 5, wherein the first sub-module is specifically configured to:
sequencing the M iris code images according to the size of the pupil size change value;
comparing the M iris code images to obtain a characteristic quantity of inconsistent change between the M iris code images and the pupil, wherein the comparing the M iris code images comprises: comparing the change of the characteristics in the M iris code images with the change of pupil contraction or expansion, wherein the characteristic quantity with inconsistent change is the characteristic quantity which does not generate corresponding change along with the contraction or expansion change of the pupil;
labeling the feature quantity as an objectively absent iris feature region, wherein the labeling the feature quantity as an objectively absent iris feature region includes: judging the characteristic quantity which does not change correspondingly with the contraction or expansion of the pupil as the objective nonexistent characteristic area, and marking the objective nonexistent characteristic area;
recording the trend relating pupil constriction or dilation to the change of the iris feature, wherein the trend of change includes: the iris feature decreasing as the pupil constricts, the iris feature increasing as the pupil dilates, the iris feature increasing as the pupil constricts, or the iris feature decreasing as the pupil dilates;
acquiring the objectively existing but unstable feature regions in the M iris code images, wherein the objectively existing but unstable feature regions are acquired as follows: a feature whose change trend is consistent with the pupil constriction or dilation trend but which does not appear in every coded image is judged to be an objectively existing but unstable feature region;
obtaining the result of comparing each pair of iris code images, wherein the comparison result is obtained by excluding from the M iris code images the corresponding objectively non-existent iris feature regions and objectively existing but unstable feature regions, then comparing the iris code images pairwise and taking the intersection;
screening out, from the pairwise intersection results, the comparison result with the most feature regions, where the most feature regions are the feature regions that, relative to the other pairwise intersection results, contain the greatest number of features and have the largest total area;
and taking the comparison result as the stable feature region.
CN201711236717.6A 2017-11-28 2017-11-28 Identification method and device based on dynamic iris image Active CN109840461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711236717.6A CN109840461B (en) 2017-11-28 2017-11-28 Identification method and device based on dynamic iris image

Publications (2)

Publication Number Publication Date
CN109840461A CN109840461A (en) 2019-06-04
CN109840461B true CN109840461B (en) 2021-05-25

Family

ID=66882658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711236717.6A Active CN109840461B (en) 2017-11-28 2017-11-28 Identification method and device based on dynamic iris image

Country Status (1)

Country Link
CN (1) CN109840461B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905816A (en) * 2021-03-19 2021-06-04 上海聚虹光电科技有限公司 Iris search identification method, iris search identification device, iris search identification processor and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102725765A (en) * 2010-01-22 2012-10-10 虹膜技术公司 Device and method for iris recognition using a plurality of iris images having different iris sizes
CN103544420A (en) * 2013-08-15 2014-01-29 马建 Anti-fake iris identity authentication method used for intelligent glasses
CN103577813A (en) * 2013-11-25 2014-02-12 中国科学院自动化研究所 Information fusion method for heterogeneous iris recognition
CN105354475A (en) * 2015-11-30 2016-02-24 贵州大学 Pupil identification based man-machine interaction identification method and system
CN106250810A (en) * 2015-06-15 2016-12-21 摩福公司 By iris identification, individuality is identified and/or the method for certification
CN106575357A (en) * 2014-07-24 2017-04-19 微软技术许可有限责任公司 Pupil detection


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Iris recognition algorithm based on split-and-merge; Yuan Weiqi et al.; Microcomputer Information; Sept. 30, 2010; Vol. 26, No. 25; pp. 35-37 *
Iris feature extraction and classification based on image sequences; Tian Qichuan et al.; Journal of Tianjin University; Dec. 31, 2007; Vol. 40, No. 12; pp. 1441-1446 *
Defocused iris recognition based on fusion of stable spatial-domain and frequency-domain features; Yuan Weiqi et al.; Chinese Journal of Scientific Instrument; Oct. 31, 2013; Vol. 34, No. 10; pp. 2300-2308 *
Research on texture feature extraction and recognition methods for blurred iris images; Zhang Kaiying; China Masters' Theses Full-text Database, Information Science and Technology; Aug. 15, 2011; No. 08; p. 30 Section 5.1, pp. 35-36 Sections 5.2.3-5.2.4, Fig. 5.4 *

Also Published As

Publication number Publication date
CN109840461A (en) 2019-06-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant