CN111783581A - Iris identification method and related device - Google Patents
Iris identification method and related device
- Publication number
- CN111783581A (application number CN202010568634.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- iris
- target object
- image processing
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Abstract
The application discloses an iris recognition method and a related device, relates to the technical field of biometric identification, and is used for solving the problem that iris recognition is difficult to popularize because images captured in different environments are difficult to match with iris recognition algorithms. In the embodiments of the application, parallel execution, for example parallel image acquisition, parallel image processing and parallel iris feature comparison, allows more images to be processed per unit time, so that the false rejection rate is reduced at the cost of a slight increase in the false recognition rate and the iris features of the user are better recognized, thereby improving the user experience. Based on the improvement of the overall accuracy of the iris comparison result, the iris identification method becomes more widely applicable and easier to popularize.
Description
Technical Field
The present application relates to the field of biometric identification technologies, and in particular, to an iris identification method and a related apparatus.
Background
Identifying a person according to his or her unique biological and behavioral characteristics offers good security and confidentiality. Biological features may include facial features, fingerprint features and iris features, and compared with the other biological features, iris features have advantages such as high stability, high reliability and strong anti-counterfeiting performance. However, because iris recognition places strict requirements on the image, putting iris recognition into wide application has long been a major problem in the industry.
Disclosure of Invention
The embodiments of the application provide an iris identification method and a related device, which are used for solving the problem that applications of iris identification in the related art are difficult to popularize.
In a first aspect, an embodiment of the present application provides an iris identification method, where the method includes:
simultaneously acquiring at least one image of the target object;
performing quality analysis on each image to obtain an image meeting the quality requirement;
processing each image that meets the quality requirement using at least one image processing mode to obtain an image for feature extraction, wherein, when a plurality of images of the target object are acquired, at least one image processing mode is used;
respectively extracting the features of each image for feature extraction to obtain iris feature information;
and comparing the iris characteristic information with the characteristics in a pre-stored characteristic library to obtain the iris identification result of the target object.
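The claimed flow of the first aspect can be sketched end to end. The following Python sketch is purely illustrative and is not part of the claimed subject matter: the toy "images" (lists of gray values), the quality measure, the normalization step, the binary feature code and the Hamming-distance threshold are all hypothetical stand-ins for the actual acquisition, processing, extraction and comparison stages.

```python
def assess_quality(image):
    # toy quality analysis: fraction of pixels within a usable illumination range
    usable = [p for p in image if 30 <= p <= 220]
    return len(usable) / len(image)

def normalize(image):
    # one hypothetical "image processing mode": min-max normalization to [0, 1]
    lo, hi = min(image), max(image)
    return [(p - lo) / (hi - lo or 1) for p in image]

def extract_features(image):
    # crude stand-in for iris feature extraction: threshold each pixel at the mean
    mean = sum(image) / len(image)
    return tuple(int(p > mean) for p in image)

def hamming(a, b):
    # number of differing bits between two feature codes
    return sum(x != y for x, y in zip(a, b))

def recognize(images, library, quality_threshold=0.8, max_distance=1):
    """Sequential sketch of the claimed pipeline: quality analysis ->
    image processing -> feature extraction -> feature comparison."""
    for image in images:
        if assess_quality(image) < quality_threshold:
            continue  # discard images that fail the quality analysis
        code = extract_features(normalize(image))
        for identity, stored in library.items():
            if hamming(code, stored) <= max_distance:
                return identity  # any passing comparison passes overall
    return None
```

For instance, enrolling a code built from one image and then presenting the same image returns the enrolled identity, while an image failing the quality gate yields no match.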
In one embodiment, the acquiring at least one image of the target object comprises:
controlling at least one image acquisition device in the image acquisition device array to respectively adopt at least one set of pre-configured acquisition parameters to carry out image acquisition on the target object to obtain at least one image of the target object; the arrangement mode of the image acquisition devices in the array can be customized.
In one embodiment, when the image capturing device is controlled to capture an image by using a plurality of sets of capturing parameters, the plurality of sets of capturing parameters include capturing parameters suitable for different shooting environments, wherein each shooting environment corresponds to one or more sets of capturing parameters.
In one embodiment, in the array of image capture devices:
the plurality of image acquisition devices with the same focal length are arranged in parallel to increase the shooting range;
and/or,
a plurality of image acquisition devices with different focal lengths are arranged in parallel to increase the shooting depth.
In one embodiment, the at least one image processing mode includes: and image processing modes corresponding to different image processing algorithms, wherein each image processing algorithm adopts at least one configuration parameter for image processing.
In one embodiment, the comparing the iris feature information with features in a pre-stored feature library to obtain the iris recognition result of the target object includes:
caching the acquired plurality of iris feature information; and,
and comparing the plurality of iris characteristic information with the characteristics in the pre-stored characteristic library in a parallel comparison mode to obtain the iris identification result of the target object.
In an embodiment, the comparing the plurality of iris feature information with the features in the pre-stored feature library in a parallel comparison manner includes:
and comparing the plurality of iris feature information with the features in the pre-stored feature library using a high-bit-width, high-frequency method.
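As a rough software analogue of this embodiment (the application contemplates a high-bit-width, high-clock-frequency device; a thread pool is only an illustration, and all names here are hypothetical), the cached iris feature codes can be compared against the feature library concurrently, with the overall result passing if any single comparison passes:

```python
from concurrent.futures import ThreadPoolExecutor

def hamming(a, b):
    # number of differing bits between two feature codes
    return sum(x != y for x, y in zip(a, b))

def compare_parallel(feature_codes, library, max_distance=1):
    """Compare several cached iris codes against the feature library
    concurrently; return the first matched identity, else None."""
    def best_match(code):
        for identity, stored in library.items():
            if hamming(code, stored) <= max_distance:
                return identity
        return None

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(best_match, feature_codes))
    # overall acceptance if any one comparison accepted
    return next((r for r in results if r is not None), None)
```

A dedicated device would perform these comparisons as wide bitwise operations per clock cycle; the thread pool merely mimics the "many comparisons per unit time" behavior in software.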
In a second aspect, an embodiment of the present application provides an iris recognition apparatus, including:
the image acquisition module is used for simultaneously acquiring at least one image of the target object;
the quality evaluation module is used for carrying out quality analysis on each image to obtain an image meeting the quality requirement;
the image processing module is used for processing each image meeting the quality requirement by adopting at least one image processing mode to obtain an image for feature extraction; when a plurality of images of the target object are acquired, the image processing mode comprises at least one;
the characteristic extraction module is used for respectively extracting the characteristics of each image for characteristic extraction to obtain iris characteristic information;
and the characteristic comparison module is used for comparing the iris characteristic information with the characteristics in a pre-stored characteristic library to obtain the iris identification result of the target object.
In one embodiment, the image acquisition module is configured to:
controlling at least one image acquisition device in the image acquisition device array to respectively adopt at least one set of pre-configured acquisition parameters to carry out image acquisition on the target object to obtain at least one image of the target object; the arrangement mode of the image acquisition devices in the array can be customized.
In an embodiment, when the image acquisition module controls the image acquisition device to capture images using multiple sets of acquisition parameters, the multiple sets of acquisition parameters include acquisition parameters suitable for different shooting environments, where each shooting environment corresponds to one or more sets of acquisition parameters.
In one embodiment, in the array of image capture devices:
the plurality of image acquisition devices with the same focal length are arranged in parallel to increase the shooting range;
and/or,
a plurality of image acquisition devices with different focal lengths are arranged in parallel to increase the shooting depth.
In one embodiment, the at least one image processing mode includes: and image processing modes corresponding to different image processing algorithms, wherein each image processing algorithm adopts at least one configuration parameter for image processing.
In one embodiment, the feature alignment module is configured to:
caching the acquired plurality of iris feature information; and,
and comparing the plurality of iris characteristic information with the characteristics in the pre-stored characteristic library in a parallel comparison mode to obtain the iris identification result of the target object.
In an embodiment, the feature comparison module, when performing comparison between the plurality of iris feature information and features in the pre-stored feature library in a parallel comparison manner, is configured to:
and comparing the plurality of iris characteristic information with the characteristics in the pre-stored characteristic library by adopting a high-bit-width high-frequency device.
In a third aspect, another embodiment of the present application also provides a computing device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any of the iris recognition methods provided by the embodiments of the present application.
In a fourth aspect, another embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program is configured to cause a computer to execute any one of the iris identification methods in the embodiments of the present application.
Therefore, through parallel execution, such as parallel image acquisition, parallel image processing and parallel iris feature comparison, more images can be processed per unit time, and a slight increase in the false recognition rate is traded for a large decrease in the false rejection rate, so that the iris features of the user are better recognized and the user experience is improved. Based on the improvement of the overall accuracy of the iris comparison result, the iris identification method becomes more widely applicable and easier to popularize.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic illustration of an application environment according to one embodiment of the present application;
FIG. 2 is a schematic overall flow chart of an iris identification method;
FIG. 3 is a schematic flow chart of a method of iris recognition according to an embodiment of the present application;
FIGS. 4-6 are schematic diagrams of array arrangements for parallel image acquisition according to an embodiment of the present application;
FIG. 7 is a schematic overall flow chart of an iris identification method according to an embodiment of the present application;
FIG. 8 is a second schematic overall flowchart of an iris recognition method according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an iris recognition apparatus according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a computing device according to one embodiment of the present application.
Detailed Description
In order to solve the problem that iris recognition is difficult to popularize in the related art, the embodiments of the present application provide an iris recognition method intended to improve the applicability of iris recognition, so that applications of iris recognition can be popularized more widely.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
FIG. 1 is a schematic diagram of an application environment according to one embodiment of the present application.
As shown in fig. 1, the application environment may include at least one server 20 and a plurality of terminal devices 10. A terminal device 10 may exchange information with the server 20 via the network 40: for example, the terminal device may collect an image of a user for iris recognition, feed back the result of iris recognition, and provide an administrator with configuration parameters, such as the acquisition parameters used when collecting images or the configuration parameters of the iris recognition algorithms. In short, the terminal device, as a human-computer interaction portal, can access and configure the whole iris recognition process in response to user operations.
The server 20 may access the database 30 to obtain the content required by the terminal device 10, for example, the database may store iris features, and the terminal device may obtain the iris features in the authority from the server according to its authority so as to perform feature comparison with the iris features in the captured image. Terminal devices (e.g., between 10_1 and 10_2 or 10_ N) may also communicate with each other via network 40, if desired. For example, different departments of the same enterprise may share some iris feature information.
Network 40 may be a network for information transfer in a broad sense and may include one or more communication networks such as a wireless communication network, the internet, a private network, a local area network, a metropolitan area network, a wide area network, or a cellular data network, among others.
It should be noted that the underlying concepts of the example embodiments of the present application may not be altered if additional modules are added or removed from the illustrated environments. In addition, although a bidirectional arrow from the database 30 to the server 20 is shown in the figure for convenience of explanation, it will be understood by those skilled in the art that the above-described data transmission and reception may be realized through the network 40.
In the application environment shown in fig. 1, terminal device 10 is any suitable electronic device that may be used for network access, including but not limited to a computer, a smart phone, a tablet, a smart agent, or other type of terminal or client. The server 20 is any server accessible via a network that provides information required for interactive services, such as take-away services, door-to-door services, car-calling services, etc., based on location technology. One or a part of the terminal devices will be selected for description in the following description (for example, the terminal device 10-1).
It will be understood by those skilled in the art that the above-mentioned 1 … N terminal devices are intended to represent the vast number of terminals present in a real network, and that the single server 20 and database 30 shown are intended to represent that the solution of the present application may involve the operation of both the server and the database. The detailed description of the specifically numbered terminals and individual servers and databases is for convenience of description at least and does not imply limitations on the types or locations of the terminals and servers or other information.
For the understanding of the embodiments of the present application, some terms involved are explained first:
the false rejection rate: it is understood that true is considered as false, i.e. the iris information of the true user is not successfully identified, for example, the iris information of the employee of the enterprise is processed and then is considered as the non-employee of the enterprise. For another example, the registered user is a true user, the unregistered user is a false user, and iris recognition is performed on the registered employee of the enterprise to determine that the registered employee is a false user. The true rate can be considered as the probability of identifying true as false.
False recognition rate: treating false as true. For example, a person who is not an employee of the enterprise is recognized as an employee of the enterprise. The false recognition rate can be understood as the probability of identifying false as true.
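A toy numerical illustration of these two rates (the counts below are invented for illustration and do not come from the application):

```python
def false_rejection_rate(genuine_attempts, genuine_rejected):
    # true treated as false: genuine users the system failed to accept
    return genuine_rejected / genuine_attempts

def false_recognition_rate(impostor_attempts, impostors_accepted):
    # false treated as true: impostors the system wrongly accepted
    return impostors_accepted / impostor_attempts

# e.g. 1000 genuine attempts with 100 rejections, and
# 100000 impostor attempts with 1 wrongful acceptance
frr = false_rejection_rate(1000, 100)       # 0.1
far = false_recognition_rate(100000, 1)     # 1e-05
```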
Having explained these terms, the concept of the technical solution proposed by the embodiments of the present application is described below.
The inventors have found through research that one important reason the application of iris recognition is difficult to popularize is that iris recognition places strict requirements on the image: if the image does not meet the requirements, iris feature recognition is difficult to perform. The requirements on the image are reflected in iris recognition metrics such as the false rejection rate. Because of the strict requirements on images, the false rejection rate of iris recognition is difficult to reduce, the iris of the user is difficult to recognize well, the user experience is poor, and iris recognition methods are therefore difficult to popularize.
In view of this, the embodiments of the present application start from reducing the false rejection rate per unit time, thereby improving the accuracy of iris recognition and the user experience. The iris identification method provided by the embodiments of the application can be implemented on high-performance, high-speed processing platforms such as an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array), a DSP (digital signal processor) or a PC (personal computer). Per unit time, a slight increase in the false recognition rate is exchanged for a great decrease in the false rejection rate, so that the overall user experience of the iris recognition system is improved.
As shown in fig. 2, the entire process of iris recognition may include a plurality of stages, each of which may include, in processing order:
1. Image acquisition: acquiring images of the user (hereinafter referred to as the target object) who needs iris recognition;
2. Quality evaluation: because iris recognition has certain requirements on images, quality evaluation of the acquired images is usually required before iris features are extracted, and only images satisfying the quality requirement continue through the subsequent processing flow. The parameters for quality evaluation may include: illumination range, focus sharpness, image sharpness, pupil zoom frequency, oblique-eye (gaze) angle, etc. When the oblique-eye angle is used for quality evaluation: if the angle is too large (e.g., larger than a preset oblique-eye angle value), the relative position of the pupil and the iris changes greatly, the image quality is considered poor, and the image can be discarded; if the angle is not large, stretching of different strengths can be applied during iris normalization according to the oblique-eye angle.
3. Image processing: in order to meet the requirements of iris feature extraction, the image often needs further processing before the iris features are extracted.
4. Feature extraction: extracting the iris features.
5. Feature comparison: comparing the extracted iris features with the features in the feature library to complete iris recognition.
6. Feature storage: storing the extracted features, as the name implies. Feature storage and feature comparison may be performed in parallel, which is not limited in this application.
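The quality-evaluation stage (stage 2 above) can be pictured as a single gate over the listed parameters. The thresholds, units and field names below are illustrative assumptions, not values from the application:

```python
from dataclasses import dataclass

@dataclass
class QualityMetrics:
    illumination: float    # e.g. mean gray level of the eye region
    sharpness: float       # e.g. focus measure normalized to [0, 1]
    gaze_angle_deg: float  # oblique-eye angle in degrees

def passes_quality(m, illum_range=(60.0, 200.0), min_sharpness=0.5,
                   max_gaze_deg=20.0):
    """Discard the image if the oblique-eye angle exceeds the preset
    limit, or illumination / sharpness fall outside the configured
    bounds; only passing images continue to image processing."""
    lo, hi = illum_range
    return (lo <= m.illumination <= hi
            and m.sharpness >= min_sharpness
            and abs(m.gaze_angle_deg) <= max_gaze_deg)
```

A real implementation would also apply the angle-dependent stretching during normalization for images that pass with a moderate gaze angle.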
In the related art, the above stages are executed serially, so the number of images that can be processed per unit time is limited. In order to reduce the false rejection rate, the embodiments of the present application propose executing stages such as image acquisition, image processing and feature comparison in parallel. Because of the parallel execution, feature vectors of iris features can be obtained in more situations for feature comparison, and the number of feature comparisons increases greatly; although the false recognition rate rises slightly as a result, a genuine user passes authentication with higher probability, so the false rejection rate falls.
For example, let the false recognition rate of a single comparison be P and the false rejection rate be Q, and let the number of recognitions per unit time be N. Then the false recognition rate per unit time is approximately P × N, and the false rejection rate per unit time is Q^N. With the serial scheme of the related art, the number of recognitions per unit time is 1; with P = 1/100000 and Q = 0.1, the per-unit-time rates are simply 1/100000 and 0.1. With the parallel scheme provided by the embodiments of the application and 3 recognitions per unit time, the false recognition rate per unit time is 3/100000, while the corresponding false rejection rate is 0.1^3 = 0.001, which is far less than 0.1.
As another example, assume a total of 3 comparisons per unit time: the overall result passes if any one of the 3 comparisons passes, and fails only if all 3 comparisons fail. Let the false recognition rate of a single comparison be P. With only 1 comparison, the false recognition rate is P; with 2 comparisons, the overall result is falsely recognized as true if either of the 2 comparisons falsely accepts, so the false recognition rate is approximately P + P = 2P; with N comparisons, the false recognition rate is approximately N × P.
The false rejection rate concerns judging true as false. Let the false rejection rate of a single comparison be Q. With 1 comparison, the false rejection rate is Q; with 2 comparisons, the overall result is rejected only if both comparisons reject, so the false rejection rate is Q^2; with N comparisons, the overall result is rejected only if all N comparisons reject, so the false rejection rate is Q^N.
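The arithmetic in the preceding paragraphs can be checked directly: with single-comparison false recognition rate P and false rejection rate Q, and N parallel comparisons where any single pass suffices for overall acceptance, the combined false recognition rate is approximately N × P (a union bound, valid for small P) and the combined false rejection rate is Q^N. A small illustrative calculation:

```python
def combined_rates(P, Q, N):
    """Combined error rates for N parallel comparisons where one
    passing comparison suffices for overall acceptance."""
    far = min(1.0, N * P)  # approximately N * P for small P
    frr = Q ** N           # rejected only if all N comparisons reject
    return far, frr

far, frr = combined_rates(P=1 / 100000, Q=0.1, N=3)
# far is about 3/100000 and frr is about 0.001, matching the example above
```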
Therefore, with the parallel execution scheme, the user experience can be improved by reducing the false rejection rate per unit time.
After understanding the technical concept of the embodiments of the present application, the iris identification method provided by the embodiments of the present application is further described below with reference to the accompanying drawings.
Fig. 3 is a schematic flow chart of an iris identification method according to an embodiment of the present application, including:
when acquiring an image, multiple images may be acquired for any target object, for example, in step 301, at least one image of the target object may be acquired at the same time; then, when analyzing the images, in step 302, quality analysis may be performed on each image to obtain an image meeting quality requirements; then, in step 303, for each image that meets the quality requirement, at least one image processing method is respectively adopted to process the image to obtain an image for feature extraction. After the processed images are obtained, in step 304, feature extraction may be performed on each image for feature extraction to obtain iris feature information; then, in step 305, comparing the iris feature information with features in a pre-stored feature library to obtain an iris identification result of the target object.
When one image of the target object is acquired, at least two image processing modes are used; when a plurality of images of the target object are acquired, at least one image processing mode is used. That is, parallel image acquisition refers to acquiring multiple images of the same target object at the same time, and parallel image processing refers to processing images of the same target object with different image processing modes. Either parallel image acquisition or parallel image processing can reduce the false rejection rate per unit time, so in implementation one parallel mode or several parallel modes may be adopted.
In addition to parallel execution in the image acquisition and image processing stages, the feature comparison stage may also be executed in parallel. These parallel modes can be implemented as follows:
1. Parallel execution of image acquisition:
Image acquisition can be parallelized in the following two ways:
a) Acquiring images of the same target object with different acquisition parameters. For example, accurately performing iris feature recognition in different environments, such as outdoor, indoor, daytime or night, imposes certain requirements on the acquisition parameters, so different acquisition parameters can be configured to better acquire images meeting the quality requirements.
Therefore, different acquisition parameters adapt to different environments, so that the iris identification method maintains stable performance in different environments.
b) Obtaining a suitable, larger acquisition area through different lens arrangements. An acquisition array can be formed from a plurality of image acquisition devices to increase the shooting range and/or the shooting depth. The shooting range refers to the larger planar area that can be captured at a given distance from the lens; the shooting depth refers to the larger distance range within which a clear image can be acquired.
For example, as shown in fig. 4, the array of nine circles on the left is an image acquisition device array. When images of the target object are acquired, the array yields multiple images of the target object. Fig. 4 shows lenses with the same focal length acquiring images in parallel, which increases the acquisition breadth, so that multiple images of the same target object are obtained.
The image acquisition device array may also be configured in a manner that increases the acquisition depth. As shown in fig. 5, arranging lenses with different focal lengths in parallel can increase the acquisition depth.
When the modes of figs. 4 and 5 are combined, images with increased breadth and depth can be obtained. The result is shown in fig. 6: the left side of fig. 6 shows the coverage at a single breadth and depth, while the right side shows the shooting range covered after both the shooting breadth and the shooting depth are increased.
In addition, the images captured by different image acquisition devices in the array may differ in size.
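The breadth/depth effect of figs. 4-6 can be modeled numerically. All positions, fields of view, and focus intervals below are invented for illustration:

```python
from dataclasses import dataclass

# Toy coverage model for the acquisition array: each lens contributes a lateral
# strip (breadth) and an in-focus distance interval (depth). Values are invented.
@dataclass
class Lens:
    x_offset_cm: float    # lateral position within the array
    fov_cm: float         # breadth covered at the working distance
    focus_near_cm: float  # near edge of acceptable sharpness
    focus_far_cm: float   # far edge of acceptable sharpness

array = [
    Lens(-10, 12, 25, 35),  # same focal length, shifted laterally (fig. 4)
    Lens(0, 12, 25, 35),
    Lens(10, 12, 25, 35),
    Lens(0, 12, 35, 50),    # a longer focal length extends the depth (fig. 5)
]

def total_breadth(lenses):
    # Width of the union of the lateral strips.
    left = min(l.x_offset_cm - l.fov_cm / 2 for l in lenses)
    right = max(l.x_offset_cm + l.fov_cm / 2 for l in lenses)
    return right - left

def total_depth(lenses):
    # Near/far limits of the union of the focus intervals.
    return min(l.focus_near_cm for l in lenses), max(l.focus_far_cm for l in lenses)

# A single lens here covers a 12 cm strip and a 10-15 cm depth slice; the
# array covers 32 cm of breadth and the 25-50 cm depth range combined.
```

The numbers only make the qualitative claim concrete: laterally shifted same-focal-length lenses widen the breadth, and a differently focused lens stretches the depth.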
One of the main reasons iris recognition sees limited application is poor user experience. The poor experience stems from the high demands that iris image acquisition places on the user: a clear image can only be captured within a small area in front of the lens. Increasing the breadth and depth of iris acquisition therefore greatly improves the user experience, since the user's image can be acquired within a relatively loose distance range and the subsequent iris recognition completed. Intuitively, when multiple images of the same user at different breadths and depths are acquired, one or more suitable images can always be found to complete the subsequent recognition process.
Building on parallel image acquisition, various image processing algorithms can further be applied to the acquired images, with a suitable algorithm processing each image of the same user, in order to extract iris features and perform feature comparison.
2. Parallel execution of image processing:
Similarly, parallel execution of image processing may take the following two forms:
Mode 1): multiple image processing algorithms run in parallel. That is, different image processing algorithms process the same image, or different images are each processed by a different algorithm, so that the algorithms can be matched to different images.
Mode 2): multiple configuration parameters run in parallel. Within the same image processing algorithm, different configuration parameters produce different processing results. Therefore, to extract iris features better, the same algorithm can be run with different configuration parameters. For example, in iris localization, some configuration parameters suit strong illumination while others suit weak illumination; running both parameter sets in parallel allows localization under either condition.
Thus, different image processing modes can adapt to images of different sizes and gray levels, increasing the likelihood of finding the iris region in an image.
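Mode 2) can be sketched as two parameter sets for one localization routine run side by side. The parameter names, values, and the toy success criterion are all assumptions, not the patent's actual localization algorithm:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical localization parameter sets, one tuned per illumination regime.
PARAM_SETS = {
    "bright": {"pupil_threshold": 60, "min_radius": 20},
    "dim":    {"pupil_threshold": 25, "min_radius": 20},
}

def locate_iris(image_gray_mean, params):
    # Toy stand-in for real localization: it "succeeds" when the image
    # brightness matches the regime this parameter set was tuned for.
    ok = (image_gray_mean >= 100) == (params["pupil_threshold"] >= 50)
    return {"found": ok, "params": params}

def locate_parallel(image_gray_mean):
    # Run both parameter sets concurrently over the same image.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(
            lambda kv: (kv[0], locate_iris(image_gray_mean, kv[1])),
            PARAM_SETS.items()))
    # Keep whichever configuration found an iris region.
    return [name for name, r in results if r["found"]]

# A bright image is handled by the "bright" parameter set and a dim one by
# "dim", so localization succeeds in either illumination condition.
```

The design point is that neither parameter set needs to cover both regimes; running them in parallel is what makes the overall stage robust.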
Combined with the parallel image acquisition of point 1: first, one image can be processed by several image processing methods, so that at least one of them may succeed on it. Second, with increased shooting range and depth, the same user yields images at different depths and ranges; that is, multiple images are available for iris recognition, so that one or more of them may complete it. Third, these two aspects can be combined, further improving the applicability of iris recognition.
3. Parallel execution of feature comparison:
In implementation, a high-bit-width, high-frequency device can be used to compare the plurality of iris feature information with the features in the pre-stored feature library.
For example, when multiple pieces of iris feature information arrive at the same time, they can be cached and then compared in parallel, completing comparison operations over a large amount of iris feature information.
In one embodiment, multiple comparison spaces may be opened so that feature vectors obtained at the same time can be compared with the feature library simultaneously. For example, if 5 image processing algorithms yield 5 feature vectors, all 5 can be compared with the vectors in the feature library at the same time.
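The cached-then-parallel comparison can be sketched as follows. Iris codes are modeled as random bit lists; all sizes, the fractional-Hamming measure, and the 0.32 acceptance threshold are illustrative assumptions, not values from the patent:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical parallel comparison stage: each cached piece of iris feature
# information gets its own "comparison space" and is matched against the
# whole pre-stored library concurrently.
random.seed(0)
CODE_BITS = 2048
library = [[random.getrandbits(1) for _ in range(CODE_BITS)] for _ in range(200)]
queries = [
    library[42][:],                                      # an enrolled user's code
    [random.getrandbits(1) for _ in range(CODE_BITS)],   # an impostor's code
]

def hamming_fraction(a, b):
    # Fractional Hamming distance, a common iris-code dissimilarity measure.
    return sum(x != y for x, y in zip(a, b)) / len(a)

def best_match(query, threshold=0.32):
    # Exhaustive comparison against the library; accept only below threshold.
    idx, dist = min(
        enumerate(hamming_fraction(query, code) for code in library),
        key=lambda pair: pair[1],
    )
    return idx if dist <= threshold else None

# One worker per cached feature vector: the comparisons run side by side.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(best_match, queries))
# results: [42, None] -- the enrolled code matches identity 42,
# the impostor code is rejected.
```

A hardware realization (the high-bit-width, high-frequency device of the text) would replace the thread pool with wide comparators, but the caching-then-parallel structure is the same.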
To sum up, in the embodiment of the present application the overall processing flow may be as shown in fig. 7. In the image acquisition stage, parallelism is achieved by configuring multiple sets of acquisition parameters and by laying out an image acquisition device array; the quality of the acquired images is then evaluated. In the image processing stage, different image processing algorithms or different processing parameters enable parallel execution, and in the feature extraction stage, iris feature information is extracted from the different images so that the subsequent feature comparison can run in parallel. The feature storage in fig. 7 can be understood as storing iris feature information so that feature comparison can be performed in parallel.
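The staged flow of fig. 7 can be condensed into a sketch with placeholder stage functions; every name and value below is illustrative, not taken from the patent:

```python
# Placeholder stages for the fig. 7 flow: acquisition -> quality evaluation ->
# image processing -> feature extraction -> feature comparison.
def acquire():
    # Parallel acquisition would yield several candidate images.
    return ["img_a", "img_b", "blurry"]

def quality_ok(img):
    # Quality evaluation drops images that fail the quality requirement.
    return img != "blurry"

def process(img):
    # Each surviving image may be handled by several processing modes.
    return [f"{img}:algo1", f"{img}:algo2"]

def extract(processed):
    # Feature extraction turns each processed image into feature information.
    return f"feat({processed})"

def compare(features, library=frozenset({"feat(img_a:algo1)"})):
    # Parallel comparison succeeds if any feature matches the stored library.
    return any(f in library for f in features)

def recognize():
    images = [img for img in acquire() if quality_ok(img)]
    processed = [p for img in images for p in process(img)]
    features = [extract(p) for p in processed]
    return compare(features)
```

Here only one of the four acquisition/processing paths matches the library, yet recognition still succeeds, which is the point of running many paths in parallel.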
Thus, the iris identification method of the embodiment of the present application can be represented as shown in fig. 8: images arriving along different paths enter the processing flow in a time-shared manner, improving the overall efficiency of iris recognition. The different paths represent different attempts, not limited to images from different image acquisition devices; they may also involve different parameters (such as image acquisition parameters and image processing parameters), different image processing algorithms, and different iris feature extraction methods.
Of course, in the embodiment of the present application and based on the same inventive concept, the image quality evaluation and iris feature extraction stages may also be executed in parallel. This reduces the false rejection rate per unit time, so that the iris identification method improves the user experience and lends itself to wider adoption.
Based on the same inventive concept, an embodiment of the present application further provides an iris recognition apparatus, as shown in fig. 9, including:
an image collecting module 901, configured to collect at least one image of a target object at the same time;
a quality evaluation module 902, configured to perform quality analysis on each image to obtain an image meeting quality requirements;
an image processing module 903, configured to process each image meeting the quality requirement using at least one image processing mode to obtain images for feature extraction, wherein, when a plurality of images of the target object are acquired, the image processing modes include at least one mode;
a feature extraction module 904, configured to perform feature extraction on each image for feature extraction, respectively, to obtain iris feature information;
the feature comparison module 905 is configured to compare the iris feature information with features in a pre-stored feature library to obtain an iris identification result of the target object.
In one embodiment, the image acquisition module is configured to:
controlling at least one image acquisition device in the image acquisition device array to respectively adopt at least one set of pre-configured acquisition parameters to carry out image acquisition on the target object to obtain at least one image of the target object; the arrangement mode of the image acquisition devices in the array can be customized.
In one embodiment, when the image acquisition device is controlled to capture images using multiple sets of acquisition parameters, the image acquisition module is configured such that the sets include acquisition parameters suitable for different shooting environments, each shooting environment corresponding to one or more sets of acquisition parameters.
In one embodiment, in the array of image capture devices:
the plurality of image acquisition devices with the same focal length are arranged in parallel to increase the shooting range;
and/or,
a plurality of image acquisition devices with different focal lengths are arranged in parallel to increase the shooting depth.
In one embodiment, the at least one image processing mode includes: and image processing modes corresponding to different image processing algorithms, wherein each image processing algorithm adopts at least one configuration parameter for image processing.
In one embodiment, the feature alignment module is configured to:
caching the acquired plurality of iris feature information; and
comparing the plurality of iris feature information with the features in the pre-stored feature library in a parallel comparison manner to obtain the iris recognition result of the target object.
In an embodiment, the feature comparison module, when performing comparison between the plurality of iris feature information and features in the pre-stored feature library in a parallel comparison manner, is configured to:
and comparing the plurality of iris characteristic information with the characteristics in the pre-stored characteristic library by adopting a high-bit-width high-frequency device.
For the specific functional implementation and beneficial effects of the iris recognition apparatus, reference may be made to the description above in connection with figs. 1 to 8; details are not repeated here.
Having described an iris recognition method and apparatus according to an exemplary embodiment of the present application, a computing device according to another exemplary embodiment of the present application will be described.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, a computing device according to the present application may include at least one processor and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to perform the steps of the iris recognition method according to the various exemplary embodiments of the present application described above in this specification.
The computing device 130 according to this embodiment of the present application is described below with reference to fig. 10. The computing device 130 shown in fig. 10 is only an example and should not bring any limitations to the functionality or scope of use of the embodiments of the present application.
As shown in fig. 10, computing device 130 is embodied in the form of a general purpose computing device. Components of computing device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
In some possible embodiments, aspects of an iris recognition method provided herein may also be implemented in the form of a program product including program code for causing a computer device to perform steps of an iris recognition method according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for application to iris recognition of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device over any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (16)
1. An iris identification method, comprising:
simultaneously acquiring at least one image of the target object;
performing quality analysis on each image to obtain an image meeting the quality requirement;
processing each image meeting the quality requirement by adopting at least one image processing mode to obtain an image for feature extraction, wherein, when a plurality of images of the target object are acquired, the image processing modes include at least one mode;
respectively extracting the features of each image for feature extraction to obtain iris feature information;
and comparing the iris characteristic information with the characteristics in a pre-stored characteristic library to obtain the iris identification result of the target object.
2. The method of claim 1, wherein the acquiring at least one image of a target object comprises:
controlling at least one image acquisition device in the image acquisition device array to respectively adopt at least one set of pre-configured acquisition parameters to carry out image acquisition on the target object to obtain at least one image of the target object; the arrangement mode of the image acquisition devices in the array can be customized.
3. The method according to claim 2, wherein when controlling the image capturing device to capture images using multiple sets of capture parameters, the multiple sets of capture parameters include capture parameters suitable for different shooting environments, wherein each shooting environment corresponds to one or more sets of capture parameters.
4. The method of claim 2, wherein, in the array of image acquisition devices:
the plurality of image acquisition devices with the same focal length are arranged in parallel to increase the shooting range;
and/or,
a plurality of image acquisition devices with different focal lengths are arranged in parallel to increase the shooting depth.
5. The method according to any one of claims 1-4, wherein the at least one image processing mode comprises: and image processing modes corresponding to different image processing algorithms, wherein each image processing algorithm adopts at least one configuration parameter for image processing.
6. The method according to claim 1, wherein the comparing the iris feature information with features in a pre-stored feature library to obtain the iris recognition result of the target object comprises:
caching the acquired plurality of iris feature information; and
comparing the plurality of iris feature information with the features in the pre-stored feature library in a parallel comparison manner to obtain the iris recognition result of the target object.
7. The method according to claim 6, wherein the comparing the plurality of iris feature information with the features in the pre-stored feature library in a parallel comparison manner comprises:
and comparing the plurality of iris feature information with the features in the pre-stored feature library by adopting a high-bit-width high-frequency method.
8. An iris recognition apparatus, comprising:
the image acquisition module is used for simultaneously acquiring at least one image of the target object;
the quality evaluation module is used for carrying out quality analysis on each image to obtain an image meeting the quality requirement;
the image processing module is used for processing each image meeting the quality requirement by adopting at least one image processing mode to obtain an image for feature extraction, wherein, when a plurality of images of the target object are acquired, the image processing modes include at least one mode;
the characteristic extraction module is used for respectively extracting the characteristics of each image for characteristic extraction to obtain iris characteristic information;
and the characteristic comparison module is used for comparing the iris characteristic information with the characteristics in a pre-stored characteristic library to obtain the iris identification result of the target object.
9. The apparatus of claim 8, wherein the image acquisition module is configured to:
controlling at least one image acquisition device in the image acquisition device array to respectively adopt at least one set of pre-configured acquisition parameters to carry out image acquisition on the target object to obtain at least one image of the target object; the arrangement mode of the image acquisition devices in the array can be customized.
10. The apparatus according to claim 9, wherein the image capturing module is configured to, when the image capturing apparatus is controlled to capture an image by using a plurality of sets of capturing parameters, include capturing parameters suitable for different shooting environments, where each shooting environment corresponds to one or more sets of capturing parameters.
11. The apparatus of claim 2, wherein in the array of image capture devices:
the plurality of image acquisition devices with the same focal length are arranged in parallel to increase the shooting range;
and/or,
a plurality of image acquisition devices with different focal lengths are arranged in parallel to increase the shooting depth.
12. The apparatus according to any one of claims 8-11, wherein the at least one image processing mode comprises: and image processing modes corresponding to different image processing algorithms, wherein each image processing algorithm adopts at least one configuration parameter for image processing.
13. The apparatus of claim 8, wherein the feature comparison module is configured to:
caching the acquired plurality of iris feature information; and
comparing the plurality of iris feature information with the features in the pre-stored feature library in a parallel comparison manner to obtain the iris recognition result of the target object.
14. The apparatus according to claim 6, wherein the feature comparison module, when performing the comparison of the plurality of iris feature information with the features in the pre-stored feature library in a parallel comparison manner, is configured to:
and comparing the plurality of iris characteristic information with the characteristics in the pre-stored characteristic library by adopting a high-bit-width high-frequency device.
15. A computing device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A computer storage medium, characterized in that the computer storage medium stores a computer program for causing a computer to perform the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010568634.2A CN111783581A (en) | 2020-06-19 | 2020-06-19 | Iris identification method and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010568634.2A CN111783581A (en) | 2020-06-19 | 2020-06-19 | Iris identification method and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111783581A true CN111783581A (en) | 2020-10-16 |
Family
ID=72757539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010568634.2A Pending CN111783581A (en) | 2020-06-19 | 2020-06-19 | Iris identification method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111783581A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598794A (en) * | 2015-01-21 | 2015-05-06 | 北京天诚盛业科技有限公司 | Method, device and system for quickly processing iris information |
CN106203297A (en) * | 2016-06-30 | 2016-12-07 | 北京七鑫易维信息技术有限公司 | A kind of personal identification method and device |
CN107358183A (en) * | 2017-06-30 | 2017-11-17 | 广东欧珀移动通信有限公司 | Living iris detection method and Related product |
CN107403148A (en) * | 2017-07-14 | 2017-11-28 | 广东欧珀移动通信有限公司 | Iris identification method and related product |
CN108133187A (en) * | 2017-12-22 | 2018-06-08 | 吉林大学 | Dimensional variation invariant feature and the one-to-one iris identification method of more algorithms voting |
CN108388858A (en) * | 2018-02-11 | 2018-08-10 | 北京京东金融科技控股有限公司 | Iris method for anti-counterfeit and device |
WO2019011099A1 (en) * | 2017-07-14 | 2019-01-17 | Oppo广东移动通信有限公司 | Iris living-body detection method and related product |
WO2019024717A1 (en) * | 2017-07-29 | 2019-02-07 | Oppo广东移动通信有限公司 | Anti-counterfeiting processing method and related product |
CN109376725A (en) * | 2018-12-21 | 2019-02-22 | 北京无线电计量测试研究所 | A kind of identification check method and apparatus based on iris recognition |
CN109740501A (en) * | 2018-12-28 | 2019-05-10 | 广东亿迅科技有限公司 | A kind of Work attendance method and device of recognition of face |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3506589B1 (en) | User identity verification method, apparatus and system | |
WO2019119505A1 (en) | Face recognition method and device, computer device and storage medium | |
CN110414373B (en) | Deep learning palm vein recognition system and method based on cloud edge-side cooperative computing | |
KR101629224B1 (en) | Authentication method, device and system based on biological characteristics | |
CN111886842B (en) | Remote user authentication using threshold-based matching | |
US8218828B2 (en) | Systems and methods for biometric information automation | |
CN110889009B (en) | Voiceprint clustering method, voiceprint clustering device, voiceprint processing equipment and computer storage medium | |
US10943098B2 (en) | Automated and unsupervised curation of image datasets | |
CN113949577A (en) | Data attack analysis method applied to cloud service and server | |
CN107679457A (en) | User identity method of calibration and device | |
CN110008892A (en) | A kind of fingerprint verification method and device even referring to fingerprint image acquisition based on four | |
CN109614780B (en) | Biological information authentication method and device, storage medium and electronic equipment | |
CN103984415B (en) | A kind of information processing method and electronic equipment | |
US20150120837A1 (en) | Method for Presenting Schedule Reminder Information, Terminal Device, and Cloud Server | |
US11822587B2 (en) | Server and method for classifying entities of a query | |
Mukherjee et al. | Energy efficient face recognition in mobile-fog environment | |
CN111783581A (en) | Iris identification method and related device | |
CN115984977A (en) | Living body detection method and system | |
CN113011301A (en) | Living body identification method and device and electronic equipment | |
CN111079704A (en) | Face recognition method and device based on quantum computation | |
CN117333926B (en) | Picture aggregation method and device, electronic equipment and readable storage medium | |
CN103886660A (en) | Network finger vein access control system and control method thereof | |
CN113111726B (en) | Vibration motor equipment fingerprint extraction and identification method based on homologous signals | |
CN112115446B (en) | Skyline query biological feature-based identity authentication method and system | |
CN114519882A (en) | Face recognition method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2021-10-19
Address after: 102300 Longyue Chang'an Oxford Park, Mentougou District, Beijing
Applicant after: Wang Yingzhe
Address before: 102300 Longyue Chang'an Oxford Park, Yongding town, Mentougou District, Beijing
Applicant before: He Jinrong
TA01 | Transfer of patent application right |