CN112905816A - Iris search identification method, iris search identification device, iris search identification processor and electronic device - Google Patents

Iris search identification method, iris search identification device, iris search identification processor and electronic device

Info

Publication number
CN112905816A
CN112905816A (application number CN202110296964.5A)
Authority
CN
China
Prior art keywords
iris, image, feature, search, compared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110296964.5A
Other languages
Chinese (zh)
Other versions
CN112905816B (en)
Inventor
陈园园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Irisian Optronics Technology Co ltd
Original Assignee
Shanghai Irisian Optronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Irisian Optronics Technology Co ltd filed Critical Shanghai Irisian Optronics Technology Co ltd
Priority to CN202110296964.5A priority Critical patent/CN112905816B/en
Priority claimed from CN202110296964.5A external-priority patent/CN112905816B/en
Publication of CN112905816A publication Critical patent/CN112905816A/en
Application granted granted Critical
Publication of CN112905816B publication Critical patent/CN112905816B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53Querying
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55Clustering; Classification

Abstract

The invention discloses an iris search recognition method, an iris search recognition device, a processor and an electronic device. The method comprises the following steps: performing feature encoding on iris images to be put in storage in a target scene and creating an iris database, wherein the iris database stores the feature codes of the iris images to be put in storage in groups according to different iris texture classifications; performing feature encoding on an iris image to be recognized to obtain a feature code to be compared; determining the iris texture classification corresponding to the feature code to be compared; and searching and comparing the feature code to be compared against the feature codes stored in the iris database based on that iris texture classification to obtain a recognition result, wherein the recognition result indicates whether a feature code matching the feature code to be compared exists in the iris database. The invention solves the technical problem, found in the related art, that during feature comparison in the iris 1:N recognition mode the search time grows exponentially as the data volume of the iris database increases.

Description

Iris search identification method, iris search identification device, iris search identification processor and electronic device
Technical Field
The invention relates to the field of iris recognition, and in particular to an iris search recognition method, an iris search recognition device, a processor, and an electronic device.
Background
Currently, with the rapid development of iris recognition technology, iris recognition can be divided into a one-to-one (1:1) recognition mode and a one-to-many (1:N) recognition mode according to the application scenario. The 1:1 recognition mode is essentially a process in which a computer compares the features of the current iris data with those of known iris data to determine whether they match; it is generally applied to authentication and verification, i.e., proving whether the current iris data and the known iris data come from the same iris. The 1:N recognition mode is essentially a matching process in which a computer compares the current iris data one by one against all iris data contained in a massive iris database to find the target iris that matches the current iris data, so that the identity of the current subject can be determined from the matching result; it is generally applied to settings such as access control and attendance check-in. In recent years, with the nationwide centralized collection of iris features organized by the Ministry of Public Security, iris features, like fingerprints and faces, are used in scenarios such as suspect tracking and personnel queries. The iris databases in these scenarios are enormous, so a method for completing iris recognition quickly has become an urgent technical problem to be solved.
Disclosure of Invention
At least some embodiments of the invention provide an iris search recognition method, device, processor and electronic device, so as to solve at least the technical problem, found in the related art, that during feature comparison in the iris 1:N recognition mode the search time grows exponentially as the data volume of the iris database increases.
According to an embodiment of the present invention, there is provided an iris search recognition method, including:
performing feature encoding on iris images to be put in storage in a target scene, and creating an iris database, wherein the iris database stores the feature codes of the iris images to be put in storage in groups according to different iris texture classifications; performing feature encoding on the iris image to be recognized to obtain a feature code to be compared; determining the iris texture classification corresponding to the feature code to be compared; and searching and comparing the feature code to be compared against the feature codes stored in the iris database based on that iris texture classification to obtain a recognition result, wherein the recognition result indicates whether a feature code matching the feature code to be compared exists in the iris database.
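The grouped storage that these steps rely on can be sketched as a small data structure. The following Python is a hypothetical illustration only — the bucket count, the descriptor range, and all names are assumptions, not details given in the patent:

```python
def bucket_index(con, con_min, con_max, n_buckets):
    """Map a CON texture descriptor onto one of n_buckets texture classes."""
    span = con_max - con_min
    idx = int((con - con_min) / span * n_buckets)
    return min(max(idx, 0), n_buckets - 1)  # clamp descriptors at the edges

class IrisDatabase:
    """Feature codes stored in groups keyed by iris-texture classification."""
    def __init__(self, con_min, con_max, n_buckets=8):
        self.con_min, self.con_max, self.n_buckets = con_min, con_max, n_buckets
        self.buckets = {i: [] for i in range(n_buckets)}

    def enroll(self, subject_id, feature_code, con):
        """Store a feature code in the group selected by its CON descriptor."""
        b = bucket_index(con, self.con_min, self.con_max, self.n_buckets)
        self.buckets[b].append((subject_id, feature_code))
        return b
```

Because each code is written to exactly one group, a later 1:N search can start from the group matching the probe's descriptor instead of scanning the whole database.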
Optionally, performing feature encoding on the iris images to be put in storage in the target scene and creating the iris database includes: acquiring an iris image to be put in storage in the target scene by using an iris acquisition device; segmenting the iris image to be put in storage to obtain a first eyelid eyelash area mask image; expanding the iris image to be put in storage and the first eyelid eyelash area mask image to obtain a first iris expanded image; and performing feature encoding on the first iris expanded image to create the iris database.
Optionally, segmenting the iris image to be put in storage to obtain the first eyelid eyelash area mask image includes: performing circle fitting on the boundary line between the pupil and the iris in the iris image to be put in storage to obtain inner circle information, wherein the inner circle information comprises the position of the center of the inner circle and the radius of the inner circle; performing circle fitting on the boundary line between the iris and the sclera in the iris image to be put in storage to obtain outer circle information, wherein the outer circle information comprises the position of the center of the outer circle and the radius of the outer circle; and segmenting the eyelids and eyelashes in the iris image to be put in storage to obtain the first eyelid eyelash area mask image.
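The patent does not specify which circle-fitting algorithm is used; as one possibility, the inner and outer boundaries can be fitted from extracted edge points with the Kasa least-squares method, sketched below:

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa least-squares circle fit: solve x^2 + y^2 = c*x + d*y + e,
    then recover the center (c/2, d/2) and radius sqrt(e + cx^2 + cy^2).
    xs, ys are arrays of boundary-point coordinates."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    c, d, e = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = c / 2.0, d / 2.0
    return cx, cy, np.sqrt(e + cx ** 2 + cy ** 2)
```

The fit is run twice — once on pupil–iris boundary points for the inner circle, once on iris–sclera boundary points for the outer circle.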
Optionally, expanding the iris image to be put in storage and the first eyelid eyelash area mask image to obtain the first iris expanded image includes: expanding the iris image to be put in storage and the first eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain the first iris expanded image.
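The expansion step maps the annulus between the fitted inner and outer circles onto a fixed-size rectangle; Daugman's rubber-sheet model is the usual technique for this, though the patent does not name one. A numpy sketch (the output size and nearest-neighbour sampling are assumptions) that applies the same mapping to the mask image:

```python
import numpy as np

def unwrap_iris(image, mask, inner, outer, out_h=32, out_w=256):
    """Sample the iris annulus onto an out_h x out_w rectangle.
    inner and outer are (cx, cy, r) circle parameters; the identical
    sampling grid is applied to the eyelid/eyelash mask image."""
    (icx, icy, ir), (ocx, ocy, orr) = inner, outer
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0.0, 1.0, out_h)[:, None]  # 0 = pupil edge, 1 = limbus
    # corresponding points on the inner and outer boundaries for each angle
    xi, yi = icx + ir * np.cos(thetas), icy + ir * np.sin(thetas)
    xo, yo = ocx + orr * np.cos(thetas), ocy + orr * np.sin(thetas)
    xs = (1.0 - radii) * xi + radii * xo  # interpolate between the two circles
    ys = (1.0 - radii) * yi + radii * yo
    xs = np.clip(np.rint(xs).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.rint(ys).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs], mask[ys, xs]
```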
Optionally, performing feature encoding on the first iris expanded image and creating the iris database includes: performing feature encoding on the first iris expanded image to obtain a feature code of a preset length; performing gray level co-occurrence matrix statistics on the area of the first iris expanded image not covered by the first eyelid eyelash area mask image to obtain a first co-occurrence matrix, and calculating a first content descriptor corresponding to the first co-occurrence matrix, wherein the first content descriptor reflects the depth of the iris texture; and establishing a plurality of sequences according to the value distribution of the first content descriptor, and storing the feature code of the preset length into a first target sequence according to the value of the first content descriptor to obtain the iris database, wherein the plurality of sequences are used to determine a plurality of iris texture classifications, and the first target sequence is a sequence selected from the plurality of sequences.
Optionally, establishing the plurality of sequences according to the value distribution of the first content descriptor includes: selecting a second minimum value and a second maximum value of the first content descriptor according to the value distribution of the first content descriptor; calculating a value interval based on the second minimum value and the second maximum value; and establishing the plurality of sequences by using the second minimum value, the second maximum value and the value interval.
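A minimal sketch of this construction — the second-smallest and second-largest descriptor values (presumably chosen to discount outliers) bound a uniform value interval; the number of sequences is an assumed parameter:

```python
def build_sequences(con_values, n_sequences):
    """Build n_sequences value ranges from the observed CON distribution,
    bounded by the second-smallest and second-largest values."""
    vals = sorted(con_values)
    lo, hi = vals[1], vals[-2]          # second minimum and second maximum
    step = (hi - lo) / n_sequences      # the value interval
    return [(lo + i * step, lo + (i + 1) * step) for i in range(n_sequences)]
```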
Optionally, performing feature encoding on the iris image to be recognized to obtain the feature code to be compared includes: acquiring the iris image to be recognized in the target scene by using an iris acquisition device; segmenting the iris image to be recognized to obtain a second eyelid eyelash area mask image; expanding the iris image to be recognized and the second eyelid eyelash area mask image to obtain a second iris expanded image; and performing feature encoding on the second iris expanded image to obtain the feature code to be compared.
Optionally, segmenting the iris image to be recognized to obtain the second eyelid eyelash area mask image includes: performing circle fitting on the boundary line between the pupil and the iris in the iris image to be recognized to obtain inner circle information, wherein the inner circle information comprises the position of the center of the inner circle and the radius of the inner circle; performing circle fitting on the boundary line between the iris and the sclera in the iris image to be recognized to obtain outer circle information, wherein the outer circle information comprises the position of the center of the outer circle and the radius of the outer circle; and segmenting the eyelids and eyelashes in the iris image to be recognized to obtain the second eyelid eyelash area mask image.
Optionally, expanding the iris image to be recognized and the second eyelid eyelash area mask image to obtain the second iris expanded image includes: expanding the iris image to be recognized and the second eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain the second iris expanded image.
Optionally, determining the iris texture classification corresponding to the feature code to be compared includes: performing gray level co-occurrence matrix statistics on the area of the second iris expansion image not covered by the second eyelid eyelash area mask image to obtain a second co-occurrence matrix, and calculating a second content descriptor corresponding to the second co-occurrence matrix, wherein the second content descriptor reflects the depth of the iris texture; and determining the iris texture classification corresponding to the feature code to be compared based on the second content descriptor.
Optionally, searching and comparing the feature code to be compared against the feature codes stored in the iris database based on the iris texture classification to obtain the recognition result includes: selecting a second target sequence corresponding to the distribution in which the value of the second content descriptor lies; searching and comparing the feature code to be compared against the feature codes stored in the second target sequence to obtain a comparison result; obtaining the recognition result when the comparison result meets a preset condition; and when the comparison result does not meet the preset condition, continuing to search and compare the feature code to be compared against the feature codes stored in the neighborhood sequences of the second target sequence until the comparison succeeds and the recognition result is obtained, or until the feature code to be compared has been compared against all feature codes stored in the iris database.
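The widening search order described above can be sketched as follows. The fractional Hamming distance and the 0.32 match threshold are assumptions commonly used for binary iris codes, not values given in the patent:

```python
import numpy as np

def hamming_distance(a, b):
    """Fraction of differing bits between two binary feature codes."""
    return np.count_nonzero(a != b) / a.size

def search(buckets, probe_code, target, threshold=0.32):
    """Compare against the target sequence first, then its neighbours
    (target+/-1, target+/-2, ...) until a match is found or every stored
    code in the database has been compared."""
    order = [target]
    for d in range(1, len(buckets)):
        if target - d >= 0:
            order.append(target - d)
        if target + d < len(buckets):
            order.append(target + d)
    for b in order:
        for subject_id, code in buckets[b]:
            if hamming_distance(probe_code, code) <= threshold:
                return subject_id
    return None  # no match anywhere in the database
```

Starting from the probe's own texture group means most queries terminate after scanning a small fraction of the database, while the fallback preserves the exhaustive 1:N behaviour.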
Optionally, performing gray level co-occurrence matrix statistics on the area of the first iris expansion image not covered by the first eyelid eyelash area mask image to obtain the first co-occurrence matrix includes: dividing the gray scale into K levels; and performing neighborhood gray value change statistics in multiple directions to obtain the first co-occurrence matrix, wherein the first co-occurrence matrix has a size of K × K, and K is a positive integer.
Optionally, the following formula is adopted to calculate the first content descriptor corresponding to the first co-occurrence matrix:

CON = Σ_{i=1}^{K} Σ_{j=1}^{K} (i − j)² · P(i, j)

where P(i, j) denotes the normalized first co-occurrence matrix.
The CON values calculated from two images of the same iris are more strongly correlated than the CON values calculated from two images of different irises.
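The statistics of the two preceding paragraphs can be sketched end to end: quantize the expanded image to K gray levels, accumulate a K × K co-occurrence matrix in several neighbourhood directions over the unoccluded area, normalize it, and compute the CON (contrast) descriptor. Here `valid` is True for pixels not covered by the eyelid eyelash mask; K = 16 and the four offsets are assumptions:

```python
import numpy as np

def glcm_con(gray, valid, K=16, offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Gray level co-occurrence statistics restricted to valid pixels,
    followed by the contrast descriptor CON = sum (i-j)^2 * P(i, j)."""
    q = np.clip((gray.astype(np.float64) / 256.0 * K).astype(int), 0, K - 1)
    P = np.zeros((K, K))
    h, w = q.shape
    for dy, dx in offsets:  # one direction of neighbourhood change at a time
        for y in range(h):
            for x in range(w):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w and valid[y, x] and valid[y2, x2]:
                    P[q[y, x], q[y2, x2]] += 1
    P /= max(P.sum(), 1.0)  # normalize counts into probabilities
    i, j = np.indices((K, K))
    return float(((i - j) ** 2 * P).sum())
```

A flat region yields CON = 0, while sharply varying (deep) texture yields a large CON, which is what makes the descriptor usable as a texture-classification key.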
According to an embodiment of the present invention, there is also provided an iris search recognition apparatus including:
the device comprises a creating module, an encoding module, a determining module and an identification module, wherein the creating module is configured to perform feature encoding on iris images to be put in storage in a target scene and create an iris database, and the iris database stores the feature codes of the iris images to be put in storage in groups according to different iris texture classifications; the encoding module is configured to perform feature encoding on the iris image to be recognized to obtain a feature code to be compared; the determining module is configured to determine the iris texture classification corresponding to the feature code to be compared; and the identification module is configured to search and compare the feature code to be compared against the feature codes stored in the iris database based on that iris texture classification to obtain a recognition result, wherein the recognition result indicates whether a feature code matching the feature code to be compared exists in the iris database.
Optionally, the creating module is configured to acquire an iris image to be put in storage in the target scene by using an iris acquisition device; segment the iris image to be put in storage to obtain a first eyelid eyelash area mask image; expand the iris image to be put in storage and the first eyelid eyelash area mask image to obtain a first iris expanded image; and perform feature encoding on the first iris expanded image to create the iris database.
Optionally, the creating module is configured to perform circle fitting on the boundary line between the pupil and the iris in the iris image to be put in storage to obtain inner circle information, wherein the inner circle information comprises the position of the center of the inner circle and the radius of the inner circle; perform circle fitting on the boundary line between the iris and the sclera in the iris image to be put in storage to obtain outer circle information, wherein the outer circle information comprises the position of the center of the outer circle and the radius of the outer circle; and segment the eyelids and eyelashes in the iris image to be put in storage to obtain the first eyelid eyelash area mask image.
Optionally, the creating module is configured to expand the iris image to be put in storage and the first eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain the first iris expanded image.
Optionally, the creating module is configured to perform feature encoding on the first iris expansion image to obtain a feature code of a preset length; perform gray level co-occurrence matrix statistics on the area of the first iris expansion image not covered by the first eyelid eyelash area mask image to obtain a first co-occurrence matrix, and calculate a first content descriptor corresponding to the first co-occurrence matrix, wherein the first content descriptor reflects the depth of the iris texture; and establish a plurality of sequences according to the value distribution of the first content descriptor, and store the feature code of the preset length into a first target sequence according to the value of the first content descriptor to obtain the iris database, wherein the plurality of sequences are used to determine a plurality of iris texture classifications, and the first target sequence is a sequence selected from the plurality of sequences.
Optionally, the creating module is configured to select a second minimum value and a second maximum value of the first content descriptor according to the value distribution of the first content descriptor; calculate a value interval based on the second minimum value and the second maximum value; and establish the plurality of sequences by using the second minimum value, the second maximum value and the value interval.
Optionally, the encoding module is configured to acquire the iris image to be recognized in the target scene by using an iris acquisition device; segment the iris image to be recognized to obtain a second eyelid eyelash area mask image; expand the iris image to be recognized and the second eyelid eyelash area mask image to obtain a second iris expanded image; and perform feature encoding on the second iris expanded image to obtain the feature code to be compared.
Optionally, the encoding module is configured to perform circle fitting on the boundary line between the pupil and the iris in the iris image to be recognized to obtain inner circle information, wherein the inner circle information comprises the position of the center of the inner circle and the radius of the inner circle; perform circle fitting on the boundary line between the iris and the sclera in the iris image to be recognized to obtain outer circle information, wherein the outer circle information comprises the position of the center of the outer circle and the radius of the outer circle; and segment the eyelids and eyelashes in the iris image to be recognized to obtain the second eyelid eyelash area mask image.
Optionally, the encoding module is configured to expand the iris image to be recognized and the second eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain the second iris expanded image.
Optionally, the determining module is configured to perform gray level co-occurrence matrix statistics on the area of the second iris expansion image not covered by the second eyelid eyelash area mask image to obtain a second co-occurrence matrix, and calculate a second content descriptor corresponding to the second co-occurrence matrix, wherein the second content descriptor reflects the depth of the iris texture.
Optionally, the identification module is configured to select a second target sequence corresponding to the distribution in which the value of the second content descriptor lies; search and compare the feature code to be compared against the feature codes stored in the second target sequence to obtain a comparison result; obtain the recognition result when the comparison result meets a preset condition; and when the comparison result does not meet the preset condition, continue to search and compare the feature code to be compared against the feature codes stored in the neighborhood sequences of the second target sequence until the comparison succeeds and the recognition result is obtained, or until the feature code to be compared has been compared against all feature codes stored in the iris database.
Optionally, the creating module is configured to divide the gray scale into K levels, and perform neighborhood gray value change statistics in multiple directions to obtain the first co-occurrence matrix, wherein the first co-occurrence matrix has a size of K × K, and K is a positive integer.
Optionally, the creating module is configured to calculate the first content descriptor corresponding to the first co-occurrence matrix using the following formula:

CON = Σ_{i=1}^{K} Σ_{j=1}^{K} (i − j)² · P(i, j)

where P(i, j) denotes the normalized first co-occurrence matrix.
The CON values calculated from two images of the same iris are more strongly correlated than the CON values calculated from two images of different irises.
According to an embodiment of the present invention, there is further provided a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured, when executed, to perform any one of the above iris search recognition methods.
There is further provided, according to an embodiment of the present invention, a processor for executing a program, wherein the program, when executed, performs any one of the above iris search recognition methods.
There is further provided, according to an embodiment of the present invention, an electronic device including a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform any one of the above iris search recognition methods.
In at least some embodiments of the invention, the iris database is created by performing feature encoding on the iris images to be put in storage in a target scene, with the feature codes of those iris images stored in groups according to different iris texture classifications; feature encoding is performed on the iris image to be recognized to obtain a feature code to be compared; the iris texture classification corresponding to the feature code to be compared is determined; and the feature code to be compared is searched and compared against the feature codes stored in the iris database based on that iris texture classification to determine whether a matching feature code exists in the iris database. Classifying iris textures thus effectively narrows the search range of the iris data, which significantly reduces the search time of the iris 1:N recognition mode and improves its search efficiency, thereby solving the technical problem, found in the related art, that during feature comparison in the iris 1:N recognition mode the search time grows exponentially as the data volume of the iris database increases.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of an iris search recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of an iris expansion image according to an alternative embodiment of the present invention;
fig. 3 is a block diagram illustrating an iris search recognition apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Iris recognition typically includes two parts, which are: feature encoding and feature comparison. In general, the feature coding of a single iris image data is stable, and the feature coding does not increase along with the increase of the data volume of the iris database. However, in the process of feature comparison, the increase of the data volume of the iris database will lead to exponential increase of the search time of the iris 1: N recognition mode. Considering that different iris data have obvious texture difference and the texture features of the same iris are stable, the search range of the iris data can be effectively reduced by classifying the iris textures.
In accordance with one embodiment of the present invention, an embodiment of an iris search recognition method is provided. It should be noted that the steps illustrated in the flowchart of the accompanying drawings may be executed in a computer system, such as one running a set of computer-executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be executed in an order different from that shown.
The method embodiments may be performed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, the mobile terminal may be a device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, or the like. The mobile terminal may include one or more processors (which may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural network processor (NPU), a tensor processor (TPU), an artificial intelligence (AI) processor, etc.) and a memory for storing data. Optionally, the mobile terminal may further include a transmission device for communication, input/output devices, a display device, and an iris acquisition device. It will be understood by those skilled in the art that the foregoing description is only illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than described above, or have a different configuration.
The memory may be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the iris search recognition method in the embodiment of the present invention, and the processor executes various functional applications and data processing by running the computer programs stored in the memory, that is, implements the iris search recognition method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used to receive or transmit data via a network. Specific examples of the network may include a wireless network provided by the communication provider of the mobile terminal. In one example, the transmission device includes a network interface controller (NIC) that can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device may be a radio frequency (RF) module, which communicates with the Internet wirelessly.
The display device may be, for example, a touch-screen liquid crystal display (LCD) or a touch display (also referred to as a "touch screen"). The liquid crystal display enables a user to interact with the user interface of the mobile terminal. In some embodiments, the mobile terminal has a graphical user interface (GUI) with which the user can interact through finger contacts and/or gestures on a touch-sensitive surface. The human-machine interaction functions optionally include: creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing. The executable instructions for performing these human-machine interaction functions are configured/stored in one or more processor-executable computer program products or readable storage media.
Iris acquisition devices may include, but are not limited to, iris cameras, infrared lamps, and infrared band pass filters. Types of iris acquisition devices may include, but are not limited to: iris acquisition equipment adopting active visual feedback, iris acquisition equipment adopting distance measurement feedback, iris acquisition equipment expanding acquisition distance, iris acquisition equipment based on a double telecentric lens and the like.
In this embodiment, an iris search recognition method running on the above mobile terminal is provided, which is used to accelerate the iris 1:N recognition mode. Fig. 1 is a flowchart of an iris search recognition method according to an embodiment of the present invention; as shown in fig. 1, the method includes the following steps:
step S10, performing feature encoding processing on the iris images to be put in storage in a target scene and creating an iris database, wherein the iris database is used for grouping and storing the feature codes of the iris images to be put in storage according to different iris texture classifications;
step S12, carrying out feature coding processing on the iris image to be recognized to obtain a feature code to be compared;
step S14, determining iris texture classification corresponding to the feature code to be compared;
step S16, searching and comparing the feature code to be compared with the feature codes stored in the iris database based on the iris texture classification to obtain a recognition result, wherein the recognition result is used for indicating whether a feature code matching the feature code to be compared exists in the iris database.
Through the above steps, an iris database can be created by performing feature encoding processing on the iris images to be put in storage in the target scene, with the feature codes of those images grouped and stored according to different iris texture classifications. Feature encoding processing is then performed on the iris image to be recognized to obtain a feature code to be compared, the iris texture classification corresponding to that feature code is determined, and the feature code to be compared is searched and compared against the feature codes stored in the iris database based on the iris texture classification, so as to determine whether a feature code matching the feature code to be compared exists in the iris database. This achieves the aim of effectively narrowing the search range of the iris data through iris texture classification, significantly reduces the search duration of the iris 1:N recognition mode, and improves its search efficiency. It thereby solves the technical problem in the related art that, during feature comparison in the iris 1:N recognition mode, the search time grows rapidly as the data volume of the iris database increases.
The target scene may include, but is not limited to, practical application scenes such as entrance guard and attendance checking. The iris images to be put in storage in the target scene may generally include: iris images of all workers in a particular work area (e.g., a company's work park). The iris image to be recognized may generally include: the iris image of the current identity verifier. The iris 1:N recognition mode thus verifies whether an iris image matching the iris image to be recognized exists in the iris database.
In the iris 1: N recognition mode provided by the related art, if an iris image matched with an iris image to be recognized is desired to be found from an iris database, the iris image to be recognized and N iris images stored in the iris database need to be compared one by one until the matched iris image is found. However, by adopting the technical scheme provided by the embodiment of the invention, the iris database created by grouping and storing the feature codes of the iris images to be stored in the database according to different iris texture classifications is adopted, and the iris images to be recognized only need to be compared with the iris images contained in the corresponding iris texture classifications, so that the comparison times of the iris images are obviously reduced, and the comparison efficiency of the iris images is improved.
For example, in the iris 1:N recognition mode provided by the related art, comparing the iris image to be recognized one by one with the N iris images stored in the iris database requires at most N comparisons. With the technical solution provided by the embodiment of the present invention, however, assuming the N iris images are divided into M groups according to different iris texture classifications, the iris image to be recognized only needs to be compared one by one with roughly N/M iris images stored in the iris database, which significantly reduces the number of comparisons and improves the comparison efficiency.
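The effect of grouping on the comparison count can be sketched with a short calculation (the values of N and M below are illustrative, not figures from the embodiment):

```python
# Illustrative worst-case comparison counts for 1:N search, with and
# without grouping by iris texture classification.

def worst_case_comparisons(n_templates: int, n_groups: int = 1) -> int:
    """Worst-case one-by-one comparisons when templates are split
    evenly into n_groups and only the matching group is searched."""
    return -(-n_templates // n_groups)  # ceiling division

N = 100_000          # templates enrolled in the iris database (example)
M = 20               # number of iris texture classifications (example)

print(worst_case_comparisons(N))     # ungrouped: 100000
print(worst_case_comparisons(N, M))  # grouped:   5000
```

In practice the fallback search over neighbouring segments (described later) can raise this bound, but the common case touches only one segment.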
Optionally, in step S10, performing feature encoding processing on the iris images to be put in storage in the target scene and creating the iris database may include the following steps:
s100, acquiring an iris image to be put in a warehouse in a target scene by using iris acquisition equipment;
step S101, segmenting an iris image to be put in storage to obtain a first eyelid eyelash area mask image;
step S102, expanding the iris image to be put in storage and the first eyelid eyelash area mask image to obtain a first iris expanded image;
step S103, performing feature encoding processing on the first iris expanded image to create the iris database.
The iris searching and identifying method mainly comprises an iris database creating process and an iris data identifying process.
In the process of creating the iris database, iris images are collected for all personnel in the target application scene, and the iris image data are feature-encoded and stored by classification. The iris collecting device is responsible for collecting the iris images to be put in storage; its image size can generally be set to 640 × 480. Specifically, in the process of performing feature encoding processing on an iris image to be put in storage in the target scene: first, an iris acquisition device can be used to collect the iris images (i.e., the iris images to be put in storage) of all workers in a particular work area in practical application scenes such as entrance guard and attendance checking; second, the iris image to be put in storage can be segmented to obtain a first eyelid eyelash area mask image; then, the iris image to be put in storage and the first eyelid eyelash area mask image are expanded to obtain a first iris expanded image; finally, the iris database is created by performing feature encoding processing on the first iris expanded image.
Optionally, in step S101, performing segmentation processing on the iris image to be put in storage to obtain a first eyelid eyelash region mask image may include the following steps:
step S1010, performing circle fitting on a boundary line between a pupil and an iris in the iris image to be put in storage to obtain inner circle information, wherein the inner circle information comprises: the position of the center of the inner circle and the radius of the inner circle;
step S1011, performing circle fitting on the boundary line between the iris and the sclera in the iris image to be put in storage to obtain excircle information, wherein the excircle information comprises: the position of the center of the excircle and the radius of the excircle;
step S1012, segmenting eyelids and eyelashes in the iris image to be put in storage to obtain a first eyelid and eyelash region mask image.
The process of segmenting the iris image to be put in storage mainly comprises the following aspects:
(1) segmenting pupils in the iris image to be put in storage, and realizing circle fitting at a boundary line between the pupils and the iris to obtain corresponding inner circle information, wherein the inner circle information comprises: the position (pupilX, pupilY) of the center of the inner circle and the inner circle radius pupilR.
(2) Circle fitting is performed at the boundary line between the iris and the sclera in the iris image to be put in storage to obtain the corresponding outer circle information, wherein the outer circle information includes: the position (irisX, irisY) of the center of the outer circle and the outer circle radius irisR.
(3) Eyelids and eyelashes in the iris image to be put in storage are segmented to obtain the corresponding eyelid and eyelash region mask image.
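The circle fitting in steps (1) and (2) can be realized in several ways; below is a minimal sketch using an algebraic (Kåsa) least-squares fit, assuming boundary points have already been extracted by some edge detector. The sample points and image coordinates are hypothetical, not from the embodiment:

```python
import numpy as np

def fit_circle(points: np.ndarray):
    """Least-squares (Kasa) circle fit to boundary samples.
    points: (n, 2) array of (x, y) points on the pupil-iris or
    iris-sclera boundary line. Returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    # Solve 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense,
    # where c = r^2 - cx^2 - cy^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Hypothetical boundary samples on a circle centred at (320, 240), r = 60
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([320 + 60 * np.cos(t), 240 + 60 * np.sin(t)])
cx, cy, r = fit_circle(pts)   # -> approx (320.0, 240.0, 60.0)
```

The same routine serves both the inner circle (pupilX, pupilY, pupilR) and the outer circle (irisX, irisY, irisR), given the respective boundary points.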
Optionally, in step S102, performing expansion processing on the iris image to be put in storage and the first eyelid eyelash region mask image to obtain a first iris expanded image may include the following steps:
step S1020, performing expansion processing on the iris image to be put in storage and the first eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain a first iris expanded image.
After the inner circle information and the outer circle information are obtained, the iris image to be put in storage and the first eyelid eyelash area mask image can be unfolded using the position (pupilX, pupilY) of the center of the inner circle, the inner circle radius pupilR, the position (irisX, irisY) of the center of the outer circle, and the outer circle radius irisR, so as to obtain the first iris expanded image. The unfolding process generally computes pixel coordinate positions in the image from polar coordinates, where the polar radius corresponds to the height of the unfolded image and the polar angle to its width. That is, a number of rays are emitted outward from the pupil center at different angles (0–360°), and the pixel values on each ray are sampled at different polar radii (maximum irisR, minimum pupilR, with a fixed sampling interval in between) and stored at the corresponding position coordinates of the expanded image, completing the expansion sampling of the iris area.
Fig. 2 is a schematic diagram of an iris expanded image according to an alternative embodiment of the present invention. As shown in fig. 2, a rectangular iris expanded image may be obtained by expanding the original iris image using the position (pupilX, pupilY) of the center of the inner circle, the inner circle radius pupilR, the position (irisX, irisY) of the center of the outer circle, and the outer circle radius irisR; the pixels at the positions corresponding to the eyelid eyelash mask in the rectangular expanded image are then set to 0, i.e., those positions become a black area.
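A minimal sketch of this polar expansion, using a common rubber-sheet style linear interpolation between the two circles with nearest-neighbour sampling (when the circles are concentric this coincides with the ray-based description above; the 640 × 480 capture size follows the text, while the 64 × 360 strip size is an illustrative choice):

```python
import numpy as np

def unwrap_iris(image, pupil, iris, height=64, width=360):
    """Sample the annulus between the inner (pupil) and outer (iris)
    circles onto a rectangle: polar radius -> rows, polar angle
    (0..360 deg) -> columns. A sketch, not the exact embodiment."""
    px, py, pr = pupil      # inner circle: centre and radius
    ix, iy, ir = iris       # outer circle: centre and radius
    out = np.zeros((height, width), dtype=image.dtype)
    for row in range(height):
        frac = row / (height - 1)            # 0 at pupil, 1 at iris rim
        for col in range(width):
            theta = 2 * np.pi * col / width
            # interpolate sampling centre and radius between the circles
            cx = px + frac * (ix - px)
            cy = py + frac * (iy - py)
            rad = pr + frac * (ir - pr)
            x = int(round(cx + rad * np.cos(theta)))
            y = int(round(cy + rad * np.sin(theta)))
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                out[row, col] = image[y, x]
    return out

img = np.zeros((480, 640), dtype=np.uint8)   # 640x480 capture, as in the text
img[240, 370] = 7                            # mark one iris pixel
strip = unwrap_iris(img, (320, 240, 50), (320, 240, 100))
```

Setting the eyelid-eyelash mask positions in the resulting strip to 0 (the black area of fig. 2) would be done with the same sampling applied to the mask image.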
Optionally, in step S103, performing feature encoding processing on the first iris expansion image, and creating the iris database may include performing the following steps:
step S1030, performing feature coding processing on the first iris expansion image to obtain a feature code with a preset length;
step S1031, carrying out gray level co-occurrence matrix statistics on the coverage area which is not covered by the first eyelid eyelash area mask image in the first iris expansion image to obtain a first co-occurrence matrix, and calculating a first content descriptor corresponding to the first co-occurrence matrix, wherein the first content descriptor is used for reflecting the depth degree of the iris texture;
step S1032, a plurality of sequences are established according to the value distribution of the first content descriptor, and the feature codes with preset lengths are stored in a first target sequence according to the value size of the first content descriptor to obtain an iris database, wherein the plurality of sequences are used for storing a plurality of iris texture classifications, and the first target sequence is a sequence selected from the plurality of sequences.
In an alternative embodiment, a feature code with a preset length may be obtained by performing feature encoding processing on the first iris expansion image, for example: a feature code of length L. Gray level co-occurrence matrix statistics are performed on the area of the first iris expansion image not covered by the first eyelid eyelash area mask image (i.e., the areas other than the black area): the gray scale is divided into K levels, and neighborhood gray value variation statistics are performed in several directions (for example, the four directions of horizontal 0°, vertical 90°, diagonal 45°, and diagonal 135°) to obtain a co-occurrence matrix G of size K × K; the content descriptor CON characterizing the co-occurrence matrix G is then calculated with the following formula:
CON = Σ_{i=0}^{K−1} Σ_{j=0}^{K−1} (i − j)² · G(i, j)
The value of CON reflects the depth of the iris texture, so the CON values calculated from two images of the same iris have a relatively high correlation, while the CON values calculated from two images of different irises have a relatively low correlation.
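A sketch of the co-occurrence statistics and the CON descriptor, assuming K = 8 gray levels, a boolean per-pixel mask, and probability-normalised counts (the exact quantization and normalization of the embodiment are not specified in the text):

```python
import numpy as np

def glcm_contrast(gray, mask, levels=8):
    """Co-occurrence statistics over the unmasked iris area and the
    contrast-style content descriptor CON = sum (i-j)^2 * G(i, j).
    Directions: 0, 45, 90, 135 degrees, as in the text."""
    # quantise 8-bit gray values into `levels` (K) levels
    q = (gray.astype(np.int64) * levels // 256).clip(0, levels - 1)
    G = np.zeros((levels, levels), dtype=np.float64)
    offsets = [(0, 1), (1, 1), (1, 0), (1, -1)]   # 0 / 45 / 90 / 135 deg
    h, w = q.shape
    for dy, dx in offsets:
        for y in range(h - dy):
            for x in range(max(0, -dx), w - max(0, dx)):
                # count a pair only if both pixels are outside the mask's
                # eyelid/eyelash (black) area
                if mask[y, x] and mask[y + dy, x + dx]:
                    G[q[y, x], q[y + dy, x + dx]] += 1
    if G.sum() > 0:
        G /= G.sum()                              # normalise to probabilities
    i, j = np.indices(G.shape)
    return float(((i - j) ** 2 * G).sum())
```

A perfectly uniform region yields CON = 0, while strongly varying texture yields a large CON, matching the "depth of iris texture" interpretation above.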
In an optional embodiment, in order to establish the plurality of sequences according to the value distribution of the first content descriptor, the second-smallest and second-largest values of the first content descriptor are selected according to its value distribution, a value interval is calculated based on these two values, and the plurality of sequences are then established using the second-smallest value, the second-largest value, and the value interval.
The above steps are repeated for each iris image in the template iris library, and the CON values are calculated to obtain C = {c1, c2, …, cn}. N groups of sequences (i.e., N distribution segments) are then established according to the distribution of the CON values, and the feature code of each iris image to be put in storage is stored in the corresponding group sequence (equivalent to the first target sequence) according to the size of its CON value. The grouping is as follows:
(1) Select the second-smallest value Cmin and the second-largest value Cmax from the calculated C = {c1, c2, …, cn}, and calculate the average value interval delta = (Cmax − Cmin)/N;
(2) Calculate the CON value distribution segments based on Cmin, delta, and Cmax, i.e.,
{[0, Cmin), [Cmin, Cmin + delta), …, [Cmin + (N − 1)·delta, Cmax), [Cmax, ∞)}.
Finally, the above steps are repeated for the iris images registered in the actual application scene, and each registered iris image is stored, according to the distribution segment its CON value belongs to, into the corresponding sub-database of the registered iris database D = {d1, d2, …, dN}, where d1, d2, …, dN are the sub-databases corresponding to the respective distribution segments.
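The grouping rule can be sketched as follows; the CON values and segment count are illustrative, and the catch-all end segments follow the interval list in step (2):

```python
# Sketch of the distribution-segment construction from CON values.

def build_segments(con_values, n_segments):
    """Return the lower bounds of the segments {[0, Cmin),
    [Cmin, Cmin+delta), ..., [Cmin+(N-1)*delta, Cmax), [Cmax, inf)},
    with Cmin/Cmax the second-smallest/second-largest CON values."""
    s = sorted(con_values)
    cmin, cmax = s[1], s[-2]          # second-smallest, second-largest
    delta = (cmax - cmin) / n_segments
    return [0.0] + [cmin + k * delta for k in range(n_segments)] + [cmax]

def segment_index(con, edges):
    """Index of the distribution segment a CON value falls into."""
    idx = 0
    for k, lo in enumerate(edges):
        if con >= lo:
            idx = k
    return idx

edges = build_segments([0.5, 1.0, 2.0, 3.0, 4.0, 4.5], n_segments=3)
# edges -> [0.0, 1.0, 2.0, 3.0, 4.0]
```

Each registered feature code would then be appended to the sub-database d[segment_index(con, edges)].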
Optionally, in step S12, performing feature encoding processing on the iris image to be recognized to obtain a feature code to be compared may include the following steps:
step S120, collecting an iris image to be recognized in a target scene by using iris collecting equipment;
step S121, segmenting an iris image to be recognized to obtain a second eyelid eyelash area mask image;
step S122, expanding the iris image to be recognized and the second eyelid eyelash area mask image to obtain a second iris expanded image;
step S123, performing feature encoding processing on the second iris expanded image to obtain the feature code to be compared.
In the iris data recognition process, the iris acquisition device is responsible for acquiring an iris image to be recognized, and the image size of the iris acquisition device can be generally set to 640 × 480. Specifically, in the process of performing feature coding processing on an iris image to be recognized, firstly, iris acquisition equipment can be used for acquiring an iris image (namely the iris image to be recognized) of an identity verifier in practical application scenes such as entrance guard, attendance card punching and the like; secondly, segmentation processing can be carried out on the iris image to be recognized to obtain a second eyelid eyelash area mask image; then, expanding the iris image to be recognized and the second eyelid eyelash area mask image to obtain a second iris expanded image; and finally, carrying out characteristic coding processing on the second iris expansion image to obtain a characteristic code to be compared.
Alternatively, in step S121, performing segmentation processing on the iris image to be recognized to obtain a second eyelid eyelash region mask image may include the following steps:
step S1210, performing circle fitting on a boundary line between a pupil and an iris in an iris image to be recognized to obtain inner circle information, wherein the inner circle information comprises: the position of the center of the inner circle and the radius of the inner circle;
step S1211, performing circle fitting on a boundary line between the iris and the sclera in the iris image to be recognized to obtain excircle information, wherein the excircle information comprises: the position of the center of the excircle and the radius of the excircle;
step S1212, segmenting eyelids and eyelashes in the iris image to be recognized, to obtain a second eyelid and eyelash region mask image.
Similar to the segmentation process of the iris image to be put in storage, the process of segmenting the iris image to be recognized mainly comprises the following aspects:
(1) segmenting pupils in the iris image to be recognized, and realizing circle fitting at a boundary line between the pupils and the irises to obtain corresponding inner circle information, wherein the inner circle information comprises: the position (pupilX, pupilY) of the center of the inner circle and the inner circle radius pupilR.
(2) Circle fitting is performed at the boundary line between the iris and the sclera in the iris image to be recognized to obtain the corresponding outer circle information, wherein the outer circle information includes: the position (irisX, irisY) of the center of the outer circle and the outer circle radius irisR.
(3) Eyelids and eyelashes in the iris image to be recognized are segmented to obtain the corresponding eyelid and eyelash region mask image.
Optionally, in step S122, performing expansion processing on the iris image to be recognized and the second eyelid eyelash region mask image to obtain a second iris expanded image may include the following steps:
step S1220, performing expansion processing on the iris image to be recognized and the second eyelid eyelash region mask image according to the inner circle information and the outer circle information to obtain a second iris expanded image.
After the inner circle information and the outer circle information are obtained, the iris image to be recognized and the second eyelid eyelash area mask image can be unfolded using the position (pupilX, pupilY) of the center of the inner circle, the inner circle radius pupilR, the position (irisX, irisY) of the center of the outer circle, and the outer circle radius irisR, so as to obtain the second iris expanded image.
Optionally, in step S14, determining the iris texture classification corresponding to the feature codes to be compared may include the following steps:
step S140, performing gray level co-occurrence matrix statistics on the area of the second iris expansion image not covered by the second eyelid eyelash area mask image to obtain a second co-occurrence matrix, and calculating a second content descriptor corresponding to the second co-occurrence matrix, wherein the second content descriptor is used for reflecting the depth degree of the iris texture;
step S141, determining iris texture classification corresponding to the feature code to be compared based on the second content descriptor.
In an alternative embodiment, a feature code with a preset length may be obtained by performing feature encoding processing on the second iris expansion image, for example: a feature code of length L, denoted code_pre. Gray level co-occurrence matrix statistics are performed on the area of the second iris expansion image not covered by the second eyelid eyelash area mask image to obtain a corresponding co-occurrence matrix G_pre of size K × K, and the content descriptor CON_pre characterizing the co-occurrence matrix G_pre is calculated using the formula above.
Optionally, in step S16, performing a search and comparison process on the feature codes to be compared and the feature codes stored in the iris database based on the iris texture classification, and obtaining the identification result may include the following steps:
step S160, selecting a second target sequence corresponding to the distribution of the value of the second content descriptor;
step S161, searching and comparing the feature code to be compared with the feature code stored in the second target sequence to obtain a comparison result;
step S162, obtaining an identification result when the comparison result meets a preset condition; and when the comparison result does not meet the preset condition, continuously searching and comparing the feature code to be compared with the feature code stored in the neighborhood sequence of the second target sequence until the comparison is successful to obtain an identification result, or completely comparing the feature code to be compared with the feature code stored in the iris database.
In the process of searching and comparing the feature code to be compared with the feature codes stored in the iris database based on the iris texture classification, the distribution segment to which the value of the content descriptor CON_pre belongs (equivalent to the second target sequence) can be selected first. The feature code code_pre is then compared one by one with all feature codes in the sub-database of that distribution segment; the corresponding score values are retained, and the maximum value S and its corresponding identifier are selected. If S is larger than a preset threshold T (equivalent to the preset condition), the matching succeeds, i.e., the iris image to be recognized matches the identifier corresponding to the maximum value S. If S is smaller than or equal to the preset threshold T, the search and comparison are repeated in the neighborhood sub-databases of that distribution segment's sub-database (equivalent to the neighborhood sequences of the second target sequence) until matching succeeds or the feature code to be recognized has been compared with all feature codes of the entire iris database.
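The per-segment search with the threshold T can be sketched as follows. The similarity function is an assumption (iris systems commonly score with one minus a normalised Hamming distance), and the identifiers and bit strings are hypothetical:

```python
def search_segment(probe_code, segment, score_fn, threshold):
    """Compare the probe against every (identity, code) in one
    segment's sub-database; return (identity, score) if the best
    score S exceeds the threshold T, else None."""
    best_id, best_score = None, float("-inf")
    for identity, code in segment:
        s = score_fn(probe_code, code)
        if s > best_score:
            best_id, best_score = identity, s
    if best_score > threshold:
        return best_id, best_score
    return None

def hamming_similarity(a: str, b: str) -> float:
    """Toy similarity on equal-length bit strings (assumed scorer)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

seg = [("alice", "10110100"), ("bob", "11110000")]   # hypothetical sub-database
hit = search_segment("11110001", seg, hamming_similarity, threshold=0.8)
# hit -> ("bob", 0.875)
```

A `None` result triggers the neighborhood-segment fallback described next.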
In an alternative embodiment, the search comparison may be performed in the neighborhood sub-database of the distributed segment sub-database as follows:
Suppose the distribution segment to which the value of CON_pre belongs is the Nth segment. If matching does not succeed in the Nth segment, matching analysis is performed on the (N−1)th and (N+1)th segments simultaneously; if matching does not succeed in the (N−1)th and (N+1)th segments, matching analysis is performed on the (N−2)th and (N+2)th segments, and so on, until matching succeeds or the feature code to be recognized has been compared with all feature codes of the entire database.
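The outward expansion order of this fallback search can be sketched as:

```python
def neighbourhood_order(start, n_segments):
    """Order in which distribution segments are searched: the start
    segment first, then its neighbours expanding outwards
    (N-1, N+1, N-2, N+2, ...), clipped to valid indices."""
    order = [start]
    step = 1
    while len(order) < n_segments:
        for cand in (start - step, start + step):
            if 0 <= cand < n_segments:
                order.append(cand)
        step += 1
    return order

print(neighbourhood_order(3, 6))   # -> [3, 2, 4, 1, 5, 0]
```

Searching the segments in this order and stopping at the first score above T reproduces the "expand until matched or exhausted" behaviour described above.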
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, an iris search and recognition apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and the details already described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 3 is a block diagram illustrating an iris search recognition apparatus according to an embodiment of the present invention, as shown in fig. 3, the apparatus including: the creating module 10 is used for performing feature coding processing on the iris images to be stored in a storage in a target scene and creating an iris database, wherein the iris database is used for performing grouping storage on feature codes of the iris images to be stored in the storage according to different iris texture classifications; the encoding module 20 is used for performing feature encoding processing on the iris image to be identified to obtain a feature code to be compared; a determining module 30, configured to determine an iris texture classification corresponding to the feature code to be compared; and the identification module 40 is used for searching and comparing the feature codes to be compared with the feature codes stored in the iris database based on the iris texture classification to obtain an identification result, wherein the identification result is used for indicating whether the feature codes matched with the feature codes to be compared exist in the iris database.
Optionally, the creating module 10 is configured to collect an iris image to be put in storage in a target scene by using an iris collecting device; segmenting the iris image to be put in storage to obtain a first eyelid eyelash area mask image; expanding the iris image to be put in storage and the first eyelid eyelash area mask image to obtain a first iris expanded image; and performing characteristic coding processing on the first iris expansion image to create an iris database.
Optionally, the creating module 10 is configured to perform circle fitting on the boundary line between the pupil and the iris in the iris image to be put in storage to obtain inner circle information, wherein the inner circle information includes: the position of the center of the inner circle and the radius of the inner circle; perform circle fitting on the boundary line between the iris and the sclera in the iris image to be put in storage to obtain outer circle information, wherein the outer circle information includes: the position of the center of the outer circle and the radius of the outer circle; and segment the eyelids and eyelashes in the iris image to be put in storage to obtain the first eyelid eyelash region mask image.
Optionally, the creating module 10 is configured to perform expansion processing on the to-be-put iris image and the first eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain a first iris expanded image.
Optionally, the creating module 10 is configured to perform feature coding processing on the first iris expansion image to obtain a feature code with a preset length; carrying out gray level co-occurrence matrix statistics on a coverage area which is not covered by a first eyelid eyelash area mask image in the first iris expansion image to obtain a first co-occurrence matrix, and calculating a first content descriptor corresponding to the first co-occurrence matrix, wherein the first content descriptor is used for reflecting the depth degree of iris textures; and establishing a plurality of sequences according to the value distribution of the first content descriptor, and storing the feature codes with preset lengths into a first target sequence according to the value size of the first content descriptor to obtain an iris database, wherein the plurality of sequences are used for storing a plurality of iris texture classifications, and the first target sequence is a sequence selected from the plurality of sequences.
Optionally, the creating module 10 is configured to select the second-smallest value and the second-largest value of the first content descriptor according to the value distribution of the first content descriptor; calculate a value interval based on the second-smallest and second-largest values; and establish the plurality of sequences using the second-smallest value, the second-largest value, and the value interval.
Optionally, the encoding module 20 is configured to acquire an iris image to be recognized in a target scene by using an iris acquisition device; segmenting the iris image to be recognized to obtain a second eyelid eyelash area mask image; the iris image to be recognized and the second eyelid eyelash area mask image are unfolded to obtain a second iris unfolding image; and carrying out characteristic coding processing on the second iris expansion image to obtain a characteristic code to be compared.
Optionally, the encoding module 20 is configured to perform circle fitting on a boundary line between a pupil and an iris in the iris image to be recognized, so as to obtain inner circle information, where the inner circle information includes: the position of the center of the inner circle and the radius of the inner circle; and performing circle fitting on the boundary line between the iris and the sclera in the iris image to be recognized to obtain excircle information, wherein the excircle information comprises: the position of the center of the excircle and the radius of the excircle; and segmenting eyelids and eyelashes in the iris image to be recognized to obtain a second eyelid and eyelash region mask image.
Optionally, the encoding module 20 is configured to perform expansion processing on the iris image to be recognized and the second eyelid eyelash region mask image according to the inner circle information and the outer circle information to obtain a second iris expanded image.
Optionally, the determining module 30 is configured to perform gray level co-occurrence matrix statistics on an area, which is not covered by the second eyelid eyelash area mask image, in the second iris expansion image to obtain a second co-occurrence matrix, and calculate a second content descriptor corresponding to the second co-occurrence matrix, where the second content descriptor is used to reflect the depth of the iris texture.
Optionally, the identifying module 40 is configured to select a second target sequence corresponding to distribution where the value of the second content descriptor is located; searching and comparing the feature codes to be compared with the feature codes stored in the second target sequence to obtain a comparison result; when the comparison result meets a preset condition, obtaining an identification result; and when the comparison result does not meet the preset condition, continuously searching and comparing the feature code to be compared with the feature code stored in the neighborhood sequence of the second target sequence until the comparison is successful to obtain an identification result, or completely comparing the feature code to be compared with the feature code stored in the iris database.
It should be noted that the above modules may be implemented by software or by hardware; in the latter case, the implementation may take, but is not limited to, the following forms: the modules are all located in the same processor, or the modules are distributed, in any combination, across different processors.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Optionally, in the present embodiment, the above-mentioned non-volatile storage medium may be configured to store a computer program for executing the following steps:
S1, performing feature coding processing on the iris images to be put in storage in the target scene, and creating an iris database, wherein the iris database is used for grouping and storing the feature codes of the iris images to be put in storage according to different iris texture classifications;
S2, performing feature coding processing on the iris image to be recognized to obtain a feature code to be compared;
S3, determining the iris texture classification corresponding to the feature code to be compared;
and S4, searching and comparing the feature codes to be compared with the feature codes stored in the iris database based on the iris texture classification to obtain a recognition result, wherein the recognition result is used for indicating whether the feature codes matched with the feature codes to be compared exist in the iris database.
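Steps S1 to S4 can be sketched end to end as follows; `encode`, `describe`, `match`, the bucket count, and the descriptor range are placeholders for the patent's feature coding, content descriptor, and comparison routines, not its actual implementation.

```python
def build_database(enroll_images, encode, describe,
                   n_buckets=16, lo=0.0, hi=1.0):
    """S1: encode each enrolment image and group the feature codes into
    buckets (iris texture classifications) by descriptor value."""
    step = (hi - lo) / n_buckets
    db = [[] for _ in range(n_buckets)]
    for img in enroll_images:
        idx = min(n_buckets - 1, max(0, int((describe(img) - lo) / step)))
        db[idx].append(encode(img))
    return db

def identify(img, db, encode, describe, match,
             n_buckets=16, lo=0.0, hi=1.0):
    """S2-S4: encode the probe, pick its bucket from the descriptor,
    and compare only within that bucket."""
    step = (hi - lo) / n_buckets
    idx = min(n_buckets - 1, max(0, int((describe(img) - lo) / step)))
    probe = encode(img)
    return any(match(probe, stored) for stored in db[idx])
```

The neighborhood-fallback refinement described later would extend `identify` to visit adjacent buckets before giving up, rather than stopping at the probe's own bucket.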
Optionally, in this embodiment, the non-volatile storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing a computer program.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, both of which are connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, performing feature coding processing on the iris images to be put in storage in the target scene, and creating an iris database, wherein the iris database is used for grouping and storing the feature codes of the iris images to be put in storage according to different iris texture classifications;
S2, performing feature coding processing on the iris image to be recognized to obtain a feature code to be compared;
S3, determining the iris texture classification corresponding to the feature code to be compared;
and S4, searching and comparing the feature codes to be compared with the feature codes stored in the iris database based on the iris texture classification to obtain a recognition result, wherein the recognition result is used for indicating whether the feature codes matched with the feature codes to be compared exist in the iris database.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, and details are not repeated here.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (29)

1. An iris search recognition method, comprising:
carrying out characteristic coding processing on iris images to be put in storage in a target scene, and creating an iris database, wherein the iris database is used for carrying out grouping storage on characteristic codes of the iris images to be put in storage according to different iris texture classifications;
carrying out feature coding processing on the iris image to be recognized to obtain a feature code to be compared;
determining the iris texture classification corresponding to the feature code to be compared;
and searching and comparing the feature codes to be compared with the feature codes stored in the iris database based on the iris texture classification to obtain a recognition result, wherein the recognition result is used for indicating whether the feature codes matched with the feature codes to be compared exist in the iris database or not.
2. The iris search recognition method according to claim 1, wherein the performing of feature coding processing on the iris image to be put in storage in the target scene and the creating of the iris database comprise:
acquiring the iris image to be put in storage in the target scene by using iris acquisition equipment;
segmenting the iris image to be put in storage to obtain a first eyelid eyelash area mask image;
expanding the iris image to be put in storage and the first eyelid eyelash area mask image to obtain a first iris expanded image;
and performing characteristic coding processing on the first iris expansion image to create the iris database.
3. The iris search and recognition method of claim 2, wherein the step of segmenting the iris image to be put in storage to obtain the first eyelid eyelash region mask image comprises:
performing circle fitting on a boundary line between the pupil and the iris in the iris image to be put in storage to obtain inner circle information, wherein the inner circle information comprises: the position of the center of the inner circle and the radius of the inner circle;
performing circle fitting on a boundary line between the iris and the sclera in the iris image to be put in storage to obtain outer circle information, wherein the outer circle information comprises: the position of the center of the outer circle and the radius of the outer circle;
and segmenting eyelids and eyelashes in the iris image to be put in storage to obtain the first eyelid and eyelash region mask image.
4. The iris search and recognition method according to claim 3, wherein the expanding of the iris image to be put in storage and the first eyelid eyelash region mask image to obtain the first iris expanded image comprises:
and expanding the iris image to be put in storage and the first eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain the first iris expanded image.
5. The iris search recognition method as claimed in claim 4, wherein the performing of the feature encoding process on the first iris expansion image and the creating of the iris database comprise:
performing characteristic coding processing on the first iris expansion image to obtain a characteristic code with a preset length;
performing gray level co-occurrence matrix statistics on the area of the first iris expansion image that is not covered by the first eyelid eyelash area mask image to obtain a first co-occurrence matrix, and calculating a first content descriptor corresponding to the first co-occurrence matrix, wherein the first content descriptor is used for reflecting the depth of the iris texture;
and establishing a plurality of sequences according to the value distribution of the first content descriptor, and storing the feature codes with preset lengths into a first target sequence according to the value size of the first content descriptor to obtain the iris database, wherein the plurality of sequences are used for determining a plurality of iris texture classifications, and the first target sequence is a sequence selected from the plurality of sequences.
6. The iris search recognition method as claimed in claim 5, wherein the establishing of the plurality of sequences according to the distribution of values of the first content descriptor comprises:
selecting the next-smallest value and the next-largest value of the first content descriptor according to the value distribution of the first content descriptor;
calculating a value interval based on the next-smallest value and the next-largest value;
and establishing the plurality of sequences by using the next-smallest value, the next-largest value and the value interval.
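For illustration only (not part of the claims), the sequence construction of claim 6 might look as follows: taking the next-smallest and next-largest observed descriptor values discards one outlier at each end, and the value interval then splits the remaining range into equal-width sequences. The function names and the clamping of out-of-range values are assumptions of this sketch.

```python
def build_sequences(values, n_sequences=16):
    """Derive sequence boundaries from the next-smallest and
    next-largest observed descriptor values (robust to a single
    outlier at each end) and split the range into equal-width bins."""
    v = sorted(values)
    lo, hi = v[1], v[-2]                  # next-smallest / next-largest
    step = (hi - lo) / n_sequences        # the value interval
    return [lo + k * step for k in range(n_sequences + 1)]

def sequence_index(value, edges):
    """Map a descriptor value to its sequence; values outside the
    range are clamped into the first or last sequence."""
    lo, hi = edges[0], edges[-1]
    n = len(edges) - 1
    step = (hi - lo) / n
    return min(n - 1, max(0, int((value - lo) / step)))
```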
7. The iris search recognition method of claim 1, wherein the feature coding processing of the iris image to be recognized to obtain the feature code to be compared comprises:
acquiring the iris image to be recognized in the target scene by using iris acquisition equipment;
segmenting the iris image to be identified to obtain a second eyelid eyelash area mask image;
expanding the iris image to be recognized and the second eyelid eyelash area mask image to obtain a second iris expanded image;
and carrying out characteristic coding processing on the second iris expansion image to obtain the characteristic code to be compared.
8. The iris search recognition method as claimed in claim 7, wherein the segmenting process of the iris image to be recognized to obtain the second eyelid eyelash region mask image comprises:
performing circle fitting on a boundary line between the pupil and the iris in the iris image to be recognized to obtain inner circle information, wherein the inner circle information comprises: the position of the center of the inner circle and the radius of the inner circle;
performing circle fitting on the boundary line between the iris and the sclera in the iris image to be recognized to obtain outer circle information, wherein the outer circle information comprises: the position of the center of the outer circle and the radius of the outer circle;
and segmenting eyelids and eyelashes in the iris image to be identified to obtain a second eyelid and eyelash region mask image.
9. The iris search recognition method as claimed in claim 8, wherein the expanding process is performed on the iris image to be recognized and the second eyelid eyelash region mask image, and obtaining the second iris expanded image comprises:
and unfolding the iris image to be recognized and the second eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain a second iris unfolded image.
10. The iris search and identification method as claimed in claim 9, wherein determining the iris texture classification corresponding to the feature code to be compared comprises:
carrying out gray level co-occurrence matrix statistics on an area, which is not covered by the second eyelid eyelash area mask image, in the second iris expansion image to obtain a second co-occurrence matrix, and calculating a second content descriptor corresponding to the second co-occurrence matrix, wherein the second content descriptor is used for reflecting the depth degree of the iris texture;
and determining the iris texture classification corresponding to the feature codes to be compared based on the second content descriptor.
11. The iris search and recognition method of claim 10, wherein the searching and comparing the feature code to be compared with the feature codes stored in the iris database based on the iris texture classification to obtain the recognition result comprises:
selecting a second target sequence corresponding to the value distribution interval in which the second content descriptor falls;
searching and comparing the feature codes to be compared with the feature codes stored in the second target sequence to obtain a comparison result;
when the comparison result meets a preset condition, obtaining the identification result; and when the comparison result does not meet the preset condition, continuing to search and compare the feature code to be compared with the feature code stored in the neighborhood sequence of the second target sequence until the comparison is successful to obtain the identification result, or completely comparing the feature code to be compared with the feature code stored in the iris database.
12. The iris search recognition method of claim 5, wherein performing gray level co-occurrence matrix statistics on the area of the first iris expansion image not covered by the first eyelid eyelash area mask image to obtain the first co-occurrence matrix comprises:
dividing the gray scale into K levels;
and performing neighborhood gray value change statistics in multiple directions respectively to obtain the first co-occurrence matrix, wherein the size of the first co-occurrence matrix is K × K, and K is a positive integer.
13. The iris search recognition method as claimed in claim 12, wherein the first content descriptor corresponding to the first co-occurrence matrix is calculated by using the following formula:
CON = Σ_{i=0}^{K-1} Σ_{j=0}^{K-1} (i - j)^2 · P(i, j)
wherein CON is the first content descriptor, P(i, j) is the normalized entry of the first co-occurrence matrix, the value of CON is used for reflecting the depth of the iris texture, and the CON values calculated from two images of the same iris are more strongly correlated than the CON values calculated from images of different irises.
14. An iris search recognition apparatus comprising:
the device comprises a creating module and a storage module, wherein the creating module is used for carrying out characteristic coding processing on iris images to be stored in a warehouse in a target scene and creating an iris database, and the iris database is used for grouping and storing the characteristic codes of the iris images to be stored in the warehouse according to different iris texture classifications;
the encoding module is used for carrying out feature encoding processing on the iris image to be identified to obtain a feature code to be compared;
the determining module is used for determining the iris texture classification corresponding to the feature code to be compared;
and the identification module is used for searching and comparing the feature codes to be compared with the feature codes stored in the iris database based on the iris texture classification to obtain an identification result, wherein the identification result is used for indicating whether the feature codes matched with the feature codes to be compared exist in the iris database or not.
15. The iris search recognition device according to claim 14, wherein the creating module is configured to collect the iris image to be put in storage in the target scene by using an iris acquisition device; segment the iris image to be put in storage to obtain a first eyelid eyelash area mask image; expand the iris image to be put in storage and the first eyelid eyelash area mask image to obtain a first iris expanded image; and perform feature coding processing on the first iris expansion image to create the iris database.
16. The iris search and recognition device of claim 15, wherein the creating module is configured to perform circle fitting on a boundary line between a pupil and an iris in the iris image to be put in storage to obtain inner circle information, where the inner circle information includes: the position of the center of the inner circle and the radius of the inner circle; perform circle fitting on a boundary line between the iris and the sclera in the iris image to be put in storage to obtain outer circle information, wherein the outer circle information comprises: the position of the center of the outer circle and the radius of the outer circle; and segment the eyelids and eyelashes in the iris image to be put in storage to obtain the first eyelid and eyelash region mask image.
17. The iris search recognition device as claimed in claim 16, wherein the creating module is configured to perform expansion processing on the iris image to be put in storage and the first eyelid eyelash region mask image according to the inner circle information and the outer circle information to obtain the first iris expanded image.
18. The iris search and recognition apparatus of claim 17, wherein the creating module is configured to perform feature coding processing on the first iris expansion image to obtain a feature code with a preset length; perform gray level co-occurrence matrix statistics on the area of the first iris expansion image that is not covered by the first eyelid eyelash area mask image to obtain a first co-occurrence matrix, and calculate a first content descriptor corresponding to the first co-occurrence matrix, wherein the first content descriptor is used for reflecting the depth of the iris texture; and establish a plurality of sequences according to the value distribution of the first content descriptor, and store the feature codes with preset lengths into a first target sequence according to the value of the first content descriptor to obtain the iris database, wherein the plurality of sequences are used for determining a plurality of iris texture classifications, and the first target sequence is a sequence selected from the plurality of sequences.
19. The iris search and recognition apparatus of claim 18, wherein the creating module is configured to select the next-smallest value and the next-largest value of the first content descriptor according to the distribution of values of the first content descriptor; calculate a value interval based on the next-smallest value and the next-largest value; and establish the plurality of sequences by using the next-smallest value, the next-largest value and the value interval.
20. An iris search recognition device according to claim 14, wherein said coding module is configured to capture the iris image to be recognized in the target scene by using an iris capture device; segmenting the iris image to be identified to obtain a second eyelid eyelash area mask image; expanding the iris image to be recognized and the second eyelid eyelash area mask image to obtain a second iris expanded image; and carrying out characteristic coding processing on the second iris expansion image to obtain the characteristic code to be compared.
21. The iris search and recognition device as claimed in claim 20, wherein the encoding module is configured to perform circle fitting on a boundary line between a pupil and an iris in the iris image to be recognized to obtain inner circle information, where the inner circle information includes: the position of the center of the inner circle and the radius of the inner circle; perform circle fitting on the boundary line between the iris and the sclera in the iris image to be recognized to obtain outer circle information, wherein the outer circle information comprises: the position of the center of the outer circle and the radius of the outer circle; and segment the eyelids and eyelashes in the iris image to be recognized to obtain a second eyelid and eyelash region mask image.
22. An iris search recognition device as claimed in claim 21, wherein the encoding module is configured to perform expansion processing on the iris image to be recognized and the second eyelid eyelash area mask image according to the inner circle information and the outer circle information to obtain the second iris expansion image.
23. The apparatus according to claim 22, wherein the determining module is configured to perform gray level co-occurrence matrix statistics on an area of the second iris expansion image not covered by the second eyelid eyelash area mask image to obtain a second co-occurrence matrix, and calculate a second content descriptor corresponding to the second co-occurrence matrix, wherein the second content descriptor is used for reflecting a depth of an iris texture.
24. The iris search and recognition apparatus of claim 23, wherein the recognition module is configured to select a second target sequence corresponding to distribution of values of the second content descriptor; searching and comparing the feature codes to be compared with the feature codes stored in the second target sequence to obtain a comparison result; when the comparison result meets a preset condition, obtaining the identification result; and when the comparison result does not meet the preset condition, continuing to search and compare the feature code to be compared with the feature code stored in the neighborhood sequence of the second target sequence until the comparison is successful to obtain the identification result, or completely comparing the feature code to be compared with the feature code stored in the iris database.
25. The iris search recognition apparatus as claimed in claim 18, wherein the creating module is configured to divide the gray scale into K levels, and perform neighborhood gray value change statistics in multiple directions respectively to obtain the first co-occurrence matrix, wherein the size of the first co-occurrence matrix is K × K, and K is a positive integer.
26. The iris search recognition apparatus of claim 25, wherein the creating module is configured to calculate the first content descriptor corresponding to the first co-occurrence matrix by using the following formula:
CON = Σ_{i=0}^{K-1} Σ_{j=0}^{K-1} (i - j)^2 · P(i, j)
wherein CON is the first content descriptor, P(i, j) is the normalized entry of the first co-occurrence matrix, the value of CON is used for reflecting the depth of the iris texture, and the CON values calculated from two images of the same iris are more strongly correlated than the CON values calculated from images of different irises.
27. A non-volatile storage medium, in which a computer program is stored, wherein the computer program is configured to execute the iris search recognition method according to any one of claims 1 to 13 when the computer program runs.
28. A processor for executing a program, wherein the program is configured to execute the iris search recognition method according to any one of claims 1 to 13.
29. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the iris search recognition method according to any one of claims 1 to 13.
CN202110296964.5A 2021-03-19 Iris search recognition method and device, processor and electronic device Active CN112905816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110296964.5A CN112905816B (en) 2021-03-19 Iris search recognition method and device, processor and electronic device

Publications (2)

Publication Number Publication Date
CN112905816A (en) 2021-06-04
CN112905816B (en) 2024-05-17


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435416A (en) * 2021-08-25 2021-09-24 北京万里红科技股份有限公司 Iris searching method and computing device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004206444A (en) * 2002-12-25 2004-07-22 Matsushita Electric Ind Co Ltd Individual authentication method and iris authentication device
US20070140531A1 (en) * 2005-01-26 2007-06-21 Honeywell International Inc. standoff iris recognition system
CN103544420A (en) * 2013-08-15 2014-01-29 马建 Anti-fake iris identity authentication method used for intelligent glasses
CN107292242A (en) * 2017-05-31 2017-10-24 华为技术有限公司 A kind of iris identification method and terminal
CN109840461A (en) * 2017-11-28 2019-06-04 武汉真元生物数据有限公司 A kind of recognition methods and device based on dynamic iris image
CN110059586A (en) * 2019-03-29 2019-07-26 电子科技大学 A kind of Iris Location segmenting system based on empty residual error attention structure
CN111144413A (en) * 2019-12-30 2020-05-12 福建天晴数码有限公司 Iris positioning method and computer readable storage medium
CN111914585A (en) * 2018-07-03 2020-11-10 上海斐讯数据通信技术有限公司 Iris identification method and system
CN111950403A (en) * 2020-07-28 2020-11-17 武汉虹识技术有限公司 Iris classification method and system, electronic device and storage medium
CN112270271A (en) * 2020-10-31 2021-01-26 重庆商务职业学院 Iris identification method based on wavelet packet decomposition

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FEI YAN et al.: "Iris Segmentation Using Watershed and Region Merging", 2014 9th IEEE Conference on Industrial Electronics and Applications, pages 1 - 6 *
QIN Wumin et al.: "Research on fast iris detection and precise localization algorithms", Foreign Electronic Measurement Technology, vol. 36, no. 4, pages 25 - 28 *
LUO Zhongliang et al.: "Detection of eyelids and eyelashes in iris segmentation", Journal of Shihezi University (Natural Science Edition), vol. 27, no. 3, pages 379 - 382 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant