CN116740796A - Iris recognition method and system - Google Patents

Iris recognition method and system Download PDF

Info

Publication number
CN116740796A (application number CN202210569746.9A)
Authority
CN
China
Prior art keywords
iris, gray image, relative position, pixel point
Prior art date
Legal status: Pending (assumed by Google; not a legal conclusion)
Application number
CN202210569746.9A
Other languages
Chinese (zh)
Inventor
袁钢 (Yuan Gang)
Current Assignee (as listed; not verified by legal analysis): Hunan Kingcome Optoelectronics Co ltd
Original Assignee
Hunan Kingcome Optoelectronics Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Hunan Kingcome Optoelectronics Co ltd filed Critical Hunan Kingcome Optoelectronics Co ltd
Priority to CN202210569746.9A priority Critical patent/CN116740796A/en
Publication of CN116740796A publication Critical patent/CN116740796A/en
Pending legal-status Critical Current

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/193 — Eye characteristics, e.g. of the iris: preprocessing; feature extraction
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/54 — Extraction of image or video features relating to texture
    • G06V10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V40/197 — Eye characteristics, e.g. of the iris: matching; classification


Abstract

The application relates to an iris recognition method and system. An eye image is acquired directly, the pupil contour is extracted from it, and the pupil center serves as the reference point during recognition; iris features are then extracted from the whole eye image, and the relative position between each iris feature and the pupil center is used as the recognition feature. Because the relative position data do not depend on the contour of the sclera, recognition remains accurate when the upper and lower eyelids occlude the scleral edge. At the same time, because the relative position data are unique, the iris image can, to a certain extent, be recognized from only part of the iris features. This overcomes the problem of incorrect recognition results caused by iris feature information missing due to occlusion by the upper and lower eyelids.

Description

Iris recognition method and system
Technical Field
The application relates to the technical field of biological recognition, in particular to an iris recognition method and system.
Background
Iris recognition is regarded in industry as a biometric technique with a very high security level. It is typically applied in scenarios with high security requirements, such as access control and security systems in the financial industry or government departments. With the development of electronic technology, iris recognition has also gradually been applied to some terminal devices, such as mobile phones and computers.
In current iris recognition, a camera collects a facial image of the user, an eye image is extracted from the facial image, and the iris is then recognized. This existing approach has the following shortcomings:
(1) When the eye image is acquired, occlusion by the upper and lower eyelids makes it difficult to obtain the complete sclera boundary, which reduces the recognition accuracy of existing methods;
(2) Occlusion by the upper and lower eyelids also causes iris feature information to be missing, which leads to erroneous recognition results with a certain probability.
Disclosure of Invention
In view of the above, the present application aims to provide an iris recognition method and system that eliminate the problems of insufficient recognition accuracy and recognition errors caused by occlusion from the upper and lower eyelids.
To achieve the above purpose, the present application adopts the following technical solution:
the application discloses an iris recognition method, which comprises the following steps:
acquiring an eye image and converting the eye image into a gray image;
extracting the contour of the gray level image to obtain the pupil contour in the gray level image, and positioning the pupil contour to obtain the pupil center;
extracting texture features in the gray level image to obtain iris features, and obtaining relative position data between the pupil center and the iris features;
and comparing the relative position data with the relative position template data pre-stored in a database, and completing iris recognition based on a comparison result.
Optionally, the step of converting the eye image into a gray image comprises:
acquiring the RGB value of each pixel point A_i in the eye image, and converting the RGB value of each pixel point A_i into a GRAY value GRAY(A_i) to obtain the gray image; the mathematical expression for the conversion is:

GRAY(A_i) = (R^2.2 × 0.2937 + G^2.2 × 0.6274 + B^2.2 × 0.0753)^(1/2.2)

wherein R is the red value of pixel point A_i, G is the green value of pixel point A_i, and B is the blue value of pixel point A_i.
Optionally, the step of extracting the contour from the gray image to obtain the pupil contour comprises:
copying the gray image, setting a first gray threshold for the copied image, and binarizing the copied image according to the first gray threshold to obtain a black-and-white image;
taking any pixel point A_i and judging whether an adjacent pixel point with a GRAY value different from GRAY(A_i) exists; if so, retaining the GRAY value GRAY(A_i) of pixel point A_i; if not, reassigning pixel point A_i to obtain an updated GRAY value GRAY'(A_i) = 255;
from the pixel points whose gray values were retained, extracting the subset that conforms to a complete circular shape and mapping those pixel points back onto the gray image at their original positions to obtain the pupil contour.
Optionally, the step of locating the pupil contour to obtain the pupil center comprises:
setting a plurality of mutually non-parallel tangent line groups, wherein each group comprises a first parallel line S1 and a second parallel line S2;
moving the first parallel line S1 and the second parallel line S2 until both are tangent to the pupil contour and S1 does not coincide with S2;
obtaining a third parallel line S3 from the positions of S1 and S2, wherein S3 lies midway between S1 and S2 and is parallel to both;
acquiring the intersection points O_1–O_k where the third parallel lines S3 of the tangent groups cross, randomly selecting a plurality of pixel points A_i1–A_ij on the pupil contour, and calculating the variance of the distances between each intersection point and the pixel points A_i1–A_ij; the mathematical expression is:

σ_k = (1/j) × Σ_{m=1}^{j} ( f(A_im, O_k) − L(O_k) )²

wherein σ_k is the variance of the distances between intersection point O_k and pixel points A_i1–A_ij, f(A_im, O_k) is the distance between intersection point O_k and pixel point A_im, and L(O_k) is the average distance between intersection point O_k and pixel points A_i1–A_ij;
and selecting the intersection point with the smallest variance as the pupil center.
Optionally, the step of extracting texture features from the gray image to obtain iris features comprises:
removing the pixel points inside the pupil contour to obtain an iris region image;
setting a second gray threshold and binarizing the iris region image according to it;
taking any pixel point A_i in the binarized iris region image and judging whether an adjacent pixel point with a GRAY value different from GRAY(A_i) exists; if so, retaining GRAY(A_i); if not, reassigning pixel point A_i so that GRAY'(A_i) = 255, thereby obtaining a texture image;
building a training data set comprising a plurality of texture images, and training a preset artificial neural network on it to obtain a recognition model;
and recognizing the texture image with the recognition model to obtain the iris features.
Optionally, the step of obtaining relative position data between the pupil center and the iris features comprises:
extracting the centroid of each iris feature, wherein the iris features include spot features, filament features, coronal features, stripe features and crypt features;
obtaining the line connecting the centroid of each iris feature with the pupil center, calculating the length of each line and the included angle between lines, and producing a three-dimensional vector (X_n, Y_n, Z), wherein X_n is the length of line n, Y_n is the included angle between line n and line n+1, and Z is the type of the iris feature;
and generating the relative position data from all the three-dimensional vectors.
Optionally, the step of comparing the relative position data with relative position template data pre-stored in a database and completing iris recognition based on the comparison result comprises:
extracting a plurality of three-dimensional vectors to be identified from the relative position data;
when the number of three-dimensional vectors to be identified exceeds a first threshold, or the ratio of that number to the number of three-dimensional vectors in the relative position template data is greater than a second threshold, comparing the three-dimensional vectors to be identified with the three-dimensional vectors in the relative position template data;
when the number of three-dimensional vectors to be identified that are contained in the relative position template data exceeds a third threshold, judging that the three-dimensional vectors to be identified are pre-stored in the database, and outputting the identity information corresponding to that relative position data, thereby completing recognition of the eye image.
The application also provides an iris recognition system, comprising:
the acquisition module is used for acquiring an eye image and converting the eye image into a gray image;
the operation module is used for extracting the outline of the gray level image to obtain the pupil outline in the gray level image, and positioning the pupil outline to obtain the pupil center; extracting texture features in the gray level image to obtain iris features, and obtaining relative position data between the pupil center and the iris features;
and the identification module is used for comparing the relative position data with the relative position template data pre-stored in a database, and completing iris recognition based on the comparison result.
The present application also provides a storage medium in which a computer program is stored which, when loaded and executed by a processor, implements an iris recognition method as described above.
The present application also provides an electronic device including: a processor and a memory; wherein the memory is used for storing a computer program; the processor is configured to load and execute the computer program to cause the electronic device to perform an iris recognition method as described above.
The beneficial effects of the application are as follows: in the iris recognition method and system, the eye image is acquired directly, the pupil contour is extracted from it, and the pupil center serves as the reference point during recognition; iris features are extracted from the whole eye image, and the relative position between each iris feature and the pupil center is used as the recognition feature to complete iris recognition. Because the relative position data do not depend on the contour of the sclera, recognition remains accurate when the upper and lower eyelids occlude the scleral edge. At the same time, because the relative position data are unique, the iris image can, to a certain extent, be recognized from only part of the iris features. This overcomes the problem of incorrect recognition results caused by iris feature information missing due to occlusion by the upper and lower eyelids.
Drawings
The application is further described below with reference to the accompanying drawings and examples:
FIG. 1 is a flow chart of an iris recognition method according to an embodiment of the application;
fig. 2 is a block diagram of an iris recognition system according to an embodiment of the present application.
Detailed Description
Other advantages and effects of the present application will readily become apparent to those skilled in the art from the following disclosure, which describes embodiments of the application with reference to specific examples. The application may also be practiced or carried out in other embodiments, and the details in this description may be modified or varied without departing from the spirit and scope of the application. It should be noted that, absent any conflict, the following embodiments and the features within them may be combined with each other.
It should be noted that the illustrations provided with the following embodiments merely illustrate the basic concept of the application in a schematic way: the drawings show only the components related to the application rather than the actual number, shape and size of components in an implementation, and the form, quantity, proportion and layout of components in an actual implementation may vary arbitrarily and may be more complex.
In the following description, numerous details are discussed to provide a thorough explanation of the embodiments of the present application; however, it will be apparent to those skilled in the art that the embodiments may be practiced without these specific details.
The iris recognition method and system of the present application are applied in the field of biometric recognition; the executing entity is a PC host, a mobile terminal, or a server.
As shown in fig. 1: the iris recognition method of the embodiment comprises the following steps:
s1, acquiring an eye image, and converting the eye image into a gray image;
s2, extracting the outline of the gray level image to obtain the pupil outline in the gray level image, and positioning the pupil outline to obtain the pupil center;
s3, extracting texture features in the gray level image to obtain iris features, and obtaining relative position data between the pupil center and the iris features;
s4, comparing the relative position data with the relative position template data pre-stored in the database, and completing iris recognition based on a comparison result.
The pupil contour is circular, and after graying the gray value of the pupil is clearly smaller than that of other regions, which makes the pupil contour easy to extract. The surrounding iris features can then be extracted using the pupil contour as a base point. Iris features are rich enough that recognition can be performed with only one relatively complete iris region; the features in the iris include spot features, filament features, coronal features, stripe features, crypt features, etc. Iris features are extracted from partial regions, recognition is completed through the relative position features among them, and at the same time the iris features are digitized, reducing the amount of computation and storage.
In this embodiment, the step of converting the eye image into the gray image comprises:
S101, acquiring the RGB value of each pixel point A_i in the eye image, and converting the RGB value of each pixel point A_i into a GRAY value GRAY(A_i) to obtain the gray image; the mathematical expression for the conversion is:

GRAY(A_i) = (R^2.2 × 0.2937 + G^2.2 × 0.6274 + B^2.2 × 0.0753)^(1/2.2)

wherein R is the red value of pixel point A_i, G is the green value of pixel point A_i, and B is the blue value of pixel point A_i.
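As an illustration only (not code from the patent), the gamma-weighted conversion above can be sketched in Python; the function name `to_gray` and the NumPy vectorization are assumptions:

```python
import numpy as np

def to_gray(rgb):
    """Gamma-corrected grayscale conversion per the formula in the text.

    rgb: array-like of shape (..., 3) with channel values in [0, 255].
    Weights (0.2937, 0.6274, 0.0753) and exponent 2.2 come from the patent.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lin = (r ** 2.2) * 0.2937 + (g ** 2.2) * 0.6274 + (b ** 2.2) * 0.0753
    return lin ** (1.0 / 2.2)
```

Applying powers channel-wise before weighting (then taking the 1/2.2 root) approximates averaging in linear-light space, which weights bright pixels more faithfully than a plain weighted sum of gamma-encoded values.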
In some embodiments, the step of extracting the contour from the gray image to obtain the pupil contour comprises:
S201, copying the gray image, setting a first gray threshold for the copied image, and binarizing the copied image according to the first gray threshold to obtain a black-and-white image;
S202, taking any pixel point A_i and judging whether an adjacent pixel point with a GRAY value different from GRAY(A_i) exists; if so, retaining the GRAY value GRAY(A_i) of pixel point A_i; if not, reassigning pixel point A_i to obtain the updated GRAY value GRAY'(A_i) = 255. If a pixel is a contour pixel, some adjacent pixel has a gray value different from its own, and its color is retained; if it is not a contour pixel, it is changed to white. The pupil contour is then the set of retained pixel points that conforms to a standard circular shape;
S203, from the pixel points whose gray values were retained, extracting the subset that conforms to a complete circular shape and mapping those pixel points back onto the gray image at their original positions to obtain the pupil contour.
In some embodiments, the step of locating the pupil contour to obtain the pupil center comprises:
S204, setting a plurality of mutually non-parallel tangent line groups, each comprising a first parallel line S1 and a second parallel line S2;
S205, moving S1 and S2 until both are tangent to the pupil contour and S1 does not coincide with S2;
Specifically, because the pupil is circular, determining its center is simple, and only 3–6 groups of parallel lines are needed;
S206, obtaining a third parallel line S3 from the positions of S1 and S2, wherein S3 lies midway between S1 and S2 and is parallel to both;
S207, acquiring the intersection points O_1–O_k where the third parallel lines S3 of the tangent groups cross. Taking three groups of parallel lines as an example, at most 3 intersection points are obtained and the amount of computation is small; if a large number of parallel lines are used to compute the center, a K-means clustering method can be chosen. In this embodiment, the final midpoint is obtained by a variance calculation;
S208, randomly selecting a plurality of pixel points A_i1–A_ij on the pupil contour and calculating the variance of the distances between each intersection point O_1–O_k and the pixel points A_i1–A_ij; the mathematical expression is:

σ_k = (1/j) × Σ_{m=1}^{j} ( f(A_im, O_k) − L(O_k) )²

wherein σ_k is the variance of the distances between intersection point O_k and pixel points A_i1–A_ij, f(A_im, O_k) is the distance between intersection point O_k and pixel point A_im, and L(O_k) is the average distance between intersection point O_k and pixel points A_i1–A_ij;
S209, selecting the intersection point with the smallest variance as the pupil center.
In some embodiments, the step of extracting texture features from the gray image to obtain iris features comprises:
S301, removing the pixel points inside the pupil contour to obtain an iris region image; since the pupil center has already been obtained, the pixel points can be removed directly without further computation on the pupil image, reducing the amount of computation;
S302, setting a second gray threshold and binarizing the iris region image according to it. The purpose of the binarization is to extract the iris features that stand out after processing, specifically spot features, filament features, coronal features, stripe features, crypt features, etc.; after binarization the iris region image becomes a black-and-white image, which reduces the amount of computation;
S303, taking any pixel point A_i in the binarized iris region image and judging whether an adjacent pixel point with a GRAY value different from GRAY(A_i) exists; if so, retaining GRAY(A_i); if not, reassigning pixel point A_i so that GRAY'(A_i) = 255, thereby obtaining a texture image;
S304, building a training data set comprising a plurality of texture images, and training a preset artificial neural network on it to obtain a recognition model;
S305, recognizing the texture image with the recognition model to obtain the iris features.
Binarization highlights the spot, filament, coronal, stripe and crypt features in the iris image. If a single binarization cannot capture all of these features at once, the second gray threshold can be modified and the extraction repeated several times so that no fewer than 20 iris features are acquired. The contours of all iris features are then obtained, a recognition model is obtained by training a neural network, and the types of the iris features are identified.
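The repeated-thresholding idea above (re-binarize with a modified second gray threshold until enough candidate features appear) can be sketched as below. This is a simplification under stated assumptions: blobs stand in for classified iris features, the blob counter is a plain 4-connected flood fill, and all names (`count_blobs`, `extract_features`) are hypothetical:

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected foreground components in a boolean mask."""
    mask = mask.copy()
    h, w = mask.shape
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                n += 1
                stack = [(y, x)]
                mask[y, x] = False
                while stack:  # flood-fill this component
                    cy, cx = stack.pop()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                            mask[ny, nx] = False
                            stack.append((ny, nx))
    return n

def extract_features(iris_gray, thresholds, target=20):
    """Try each candidate second gray threshold until at least `target`
    dark blobs (candidate iris features) appear; return (threshold, count)."""
    for t in thresholds:
        n = count_blobs(iris_gray < t)
        if n >= target:
            return t, n
    return None, 0
```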
In some embodiments, the step of obtaining relative position data between the pupil center and the iris features comprises:
S306, extracting the centroid of each iris feature, wherein the iris features include spot features, filament features, coronal features, stripe features and crypt features. Because the iris features have already been processed above and only their contours remain, each can be treated as a closed shape, and the centroid inside each iris feature contour can be obtained directly with OpenCV (an image processing library);
S307, obtaining the line connecting the centroid of each iris feature with the pupil center, calculating the length of each line and the included angle between lines, and producing a three-dimensional vector (X_n, Y_n, Z), wherein X_n is the length of line n, Y_n is the included angle between line n and line n+1, and Z is the type of the iris feature;
S308, generating the relative position data from all the three-dimensional vectors.
Through the above steps, each iris image is converted into relative position data that represents it; the relative position data contains at least 20 three-dimensional vectors. The iris images stored in the database were processed and extracted in advance from complete images. During comparison, the iris image acquired on site must likewise be converted into relative position data, and its three-dimensional vectors compared with those in the database.
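Building the (X_n, Y_n, Z) vectors of S306–S308 can be sketched as below. This is illustrative only: the function name, the angle convention (counter-clockwise difference between consecutive centroid lines, wrapping from the last line back to the first), and the feature labels are assumptions:

```python
import math

def relative_position_data(pupil_center, features):
    """features: list of ((cx, cy), kind) pairs — centroid and feature type.
    Returns (X_n, Y_n, Z) triples: distance to the pupil center, included
    angle between line n and line n+1, and the feature type."""
    px, py = pupil_center
    angles = [math.atan2(fy - py, fx - px) for (fx, fy), _ in features]
    data = []
    for n, ((fx, fy), kind) in enumerate(features):
        length = math.hypot(fx - px, fy - py)             # X_n
        nxt = angles[(n + 1) % len(angles)]               # wrap to first line
        delta = (nxt - angles[n]) % (2 * math.pi)         # Y_n
        data.append((length, delta, kind))                # Z = kind
    return data
```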
In some embodiments, the step of comparing the relative position data with the relative position template data pre-stored in the database and completing iris recognition based on the comparison result comprises:
S401, extracting a plurality of three-dimensional vectors to be identified from the relative position data; each three-dimensional vector represents the relative position, including distance and angle, of an iris feature with respect to the pupil center in the iris image;
S402, when the number of three-dimensional vectors to be identified exceeds a first threshold, or the ratio of that number to the number of three-dimensional vectors in the relative position template data is greater than a second threshold, comparing the three-dimensional vectors to be identified with the three-dimensional vectors in the relative position template data. The first threshold is 20, meaning that only when the relative position data contains at least 20 three-dimensional vectors can it be judged to adequately describe the iris features; the second threshold is 30%, meaning that if the number of three-dimensional vectors in the relative position data reaches 30% of the average number of three-dimensional vectors in the relative position template data in the database, the next comparison can proceed;
S403, when the number of three-dimensional vectors to be identified that are contained in the relative position template data exceeds a third threshold, judging that the three-dimensional vectors to be identified are pre-stored in the database, and outputting the identity information corresponding to that relative position data, thereby completing recognition of the eye image.
When a fixed number is used for comparison, the third threshold is set to 19, meaning that recognition is complete when at least 19 iris features in the iris image have distances from the pupil center and relative angles between features consistent with the pre-stored information.
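The threshold logic of S402–S403 can be sketched as below with the example values from the text (first threshold 20, second 30%, third 19). This is a simplified sketch: the `match` name is an assumption, and vectors are compared by exact equality, whereas a real system would match distances and angles within a tolerance:

```python
def match(query, template, first=20, second=0.30, third=19):
    """query/template: lists of (X_n, Y_n, Z) triples.
    Returns True when enough query vectors are found in the template."""
    n = len(query)
    # S402 gate: enough vectors, or a large enough fraction of the template
    if n < first and n / max(len(template), 1) <= second:
        return False
    tpl = set(template)
    hits = sum(1 for v in query if v in tpl)  # S403 count of matched vectors
    return hits >= third
```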
In the iris recognition method above, the eye image is acquired directly, the pupil contour is extracted from it, and the pupil center serves as the reference point during recognition; iris features are extracted from the whole eye image, and the iris is recognized using the relative position between each iris feature and the pupil center as the recognition feature. Because the relative position data do not depend on the contour of the sclera, recognition remains accurate when the upper and lower eyelids occlude the scleral edge. At the same time, because the relative position data are unique, the iris image can, to a certain extent, be recognized from only part of the iris features. This overcomes the problem of incorrect recognition results caused by iris feature information missing due to occlusion by the upper and lower eyelids.
The application also provides an iris recognition system, comprising:
the acquisition module is used for acquiring an eye image and converting the eye image into a gray image;
the operation module is used for extracting the outline of the gray level image to obtain the pupil outline in the gray level image, and positioning the pupil outline to obtain the pupil center; extracting texture features in the gray level image to obtain iris features and obtain relative position data between the pupil center and the iris features;
the identification module is used for comparing the relative position data with various relative position data pre-stored in the database and completing iris identification based on the comparison result.
In the iris recognition system above, the eye image is acquired directly, the pupil contour is extracted from it, and the pupil center serves as the reference point during recognition; iris features are extracted from the whole eye image, and the iris is recognized using the relative position between each iris feature and the pupil center as the recognition feature. Because the relative position data do not depend on the contour of the sclera, recognition remains accurate when the upper and lower eyelids occlude the scleral edge. At the same time, because the relative position data are unique, the iris image can, to a certain extent, be recognized from only part of the iris features. This overcomes the problem of incorrect recognition results caused by iris feature information missing due to occlusion by the upper and lower eyelids.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods of the present embodiments.
The embodiment also provides an electronic terminal, including: a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the terminal executes any one of the methods in the embodiment.
Regarding the computer-readable storage medium in this embodiment, those of ordinary skill in the art will appreciate that all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with a computer program. The computer program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the method embodiments described above. The storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
The electronic terminal provided in this embodiment includes a processor, a memory, a transceiver, and a communication interface. The memory and the communication interface are connected to the processor and the transceiver and communicate with one another; the memory is used to store a computer program, the communication interface is used to perform communication, and the processor and the transceiver are used to run the computer program so that the electronic terminal performs the steps of the above method.
In this embodiment, the memory may include a random access memory (Random Access Memory, abbreviated as RAM), and may further include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), and the like; it may also be a digital signal processor (Digital Signal Processor, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
While the present application has been described above in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those skilled in the art in light of the foregoing description. The embodiments of the application are intended to embrace all such alternatives, modifications, and variations that fall within the scope of the appended claims.
The above embodiments are merely illustrative of the principles of the present application and its effects, and are not intended to limit the application. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and variations that a person of ordinary skill in the art can complete without departing from the spirit and technical ideas disclosed in the present application shall be covered by the claims of the application.

Claims (10)

1. An iris recognition method, characterized by comprising the following steps:
acquiring an eye image and converting the eye image into a gray image;
extracting the contour of the gray level image to obtain the pupil contour in the gray level image, and positioning the pupil contour to obtain the pupil center;
extracting texture features in the gray level image to obtain iris features, and obtaining relative position data between the pupil center and the iris features;
and comparing the relative position data with the relative position template data pre-stored in a database, and completing iris recognition based on a comparison result.
2. An iris recognition method as claimed in claim 1, wherein the step of converting the eye image into a gray image comprises:
acquiring the RGB values of each pixel point Ai in the eye image, and converting the RGB values of each pixel point Ai into a gray value GRAY(Ai) to obtain a gray image; the mathematical expression for the conversion into gray values is:
GRAY(Ai) = (R^2.2 × 0.2937 + G^2.2 × 0.6274 + B^2.2 × 0.0753)^(1/2.2)
wherein R is the red value of pixel point Ai, G is the green value of pixel point Ai, and B is the blue value of pixel point Ai.
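The gamma-weighted conversion of claim 2 can be sketched as follows. This is an illustrative implementation, not the patented code; channel values are assumed to lie in the 0–255 range, and the function and variable names are chosen here for readability:

```python
def gray_value(r, g, b):
    """Gamma-corrected gray value per the formula in claim 2:
    GRAY = (R^2.2*0.2937 + G^2.2*0.6274 + B^2.2*0.0753)^(1/2.2)."""
    return (r ** 2.2 * 0.2937 + g ** 2.2 * 0.6274 + b ** 2.2 * 0.0753) ** (1 / 2.2)

def to_gray(image):
    """Convert a 2-D grid of (R, G, B) tuples into a gray image."""
    return [[gray_value(*px) for px in row] for row in image]
```

Because the three weights sum to just under 1.0, a pure white pixel maps to a gray value slightly below 255, and the 2.2 exponent weights bright channels in linear-light space before converting back.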
3. An iris recognition method as claimed in claim 1, wherein: the step of extracting the contour of the gray level image to obtain the pupil contour in the gray level image comprises the following steps:
copying a gray level image, setting a first gray level threshold value in the copied image, and performing binarization processing on the copied image according to the first gray level threshold value to obtain a black-and-white image;
taking any one pixel point Ai, and judging whether there exists an adjacent pixel point whose gray value differs from the gray value GRAY(Ai) of the pixel point Ai; if yes, retaining the gray value GRAY(Ai) of the pixel point Ai; if not, assigning the pixel point Ai an updated gray value GRAY'(Ai) = 255;
extracting, from the pixel points whose gray values were retained, the pixel points that conform to a complete circle, and mapping these pixel points onto the gray image at their original positions to obtain the pupil contour.
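A minimal sketch of the thresholding and edge-retention steps of claim 3, assuming the gray image is a list of lists and adjacency means 4-connected neighbors (the neighborhood definition is an assumption; the claim does not fix it):

```python
def binarize(gray, threshold):
    """Binarize: pixels at or above the first gray threshold become 255, the rest 0."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray]

def keep_boundaries(bw):
    """Keep a pixel's value only if some 4-connected neighbor differs from it;
    otherwise assign the updated value 255, as in claim 3."""
    h, w = len(bw), len(bw[0])
    out = [[255] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(0 <= ny < h and 0 <= nx < w and bw[ny][nx] != bw[y][x]
                   for ny, nx in neighbors):
                out[y][x] = bw[y][x]
    return out
```

After this pass only black/white boundary pixels survive; a circle-fitting step (not shown) would then select the pixels conforming to a complete circle.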
4. An iris recognition method as claimed in claim 3, wherein the step of positioning the pupil contour to obtain the pupil center comprises:
setting a plurality of mutually non-parallel tangent line groups, wherein each tangent line group comprises a first parallel line S1 and a second parallel line S2;
moving the first parallel line S1 and the second parallel line S2 until both the first parallel line S1 and the second parallel line S2 are tangent to the pupil contour, and the first parallel line S1 does not coincide with the second parallel line S2;
obtaining a third parallel line S3 according to the positions of the first parallel line S1 and the second parallel line S2, wherein the third parallel line S3 is positioned at the center position of the first parallel line S1 and the second parallel line S2, and the third parallel line S3 is parallel to the first parallel line S1 and the second parallel line S2;
acquiring the plurality of intersection points O1-Ok at which the third parallel lines S3 of the plurality of tangent line groups intersect, randomly selecting a plurality of pixel points Ai1-Aij on the pupil contour, and calculating the variance of the distances between each intersection point O1-Ok and the pixel points Ai1-Aij; the mathematical expression is:
σk = (1/j) × Σ_{m=1}^{j} (f(Aim, Ok) − L(Ok))²
wherein σk represents the variance of the distances between the intersection point Ok and the pixel points Ai1-Aij, f(Aim, Ok) represents the distance between the intersection point Ok and the pixel point Aim, and L(Ok) represents the average of the distances between the intersection point Ok and the pixel points Ai1-Aij;
and selecting the intersection point with the smallest variance as the pupil center.
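The variance test of claim 4 can be sketched as below. Candidate centers and contour samples are plain (x, y) tuples, and the formula is taken as the ordinary population variance of the center-to-contour distances, an assumption consistent with the symbols defined in the claim:

```python
import math

def distance_variance(center, contour_points):
    """Variance of distances from one candidate center to sampled contour points."""
    dists = [math.dist(center, p) for p in contour_points]
    mean = sum(dists) / len(dists)
    return sum((d - mean) ** 2 for d in dists) / len(dists)

def pick_pupil_center(candidates, contour_points):
    """Select the candidate intersection point whose distance variance is smallest."""
    return min(candidates, key=lambda c: distance_variance(c, contour_points))
```

For a true circle center the distances to the contour are all equal to the radius, so the variance is zero; off-center candidates score higher, which is why the minimum-variance intersection point is selected.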
5. An iris recognition method as claimed in claim 1, wherein: the step of extracting texture features in the gray level image to obtain iris features comprises the following steps:
removing pixel points in the pupil outline to obtain an iris region image;
setting a second gray level threshold value, and performing binarization processing on the iris region image according to the second gray level threshold value;
taking any one pixel point Ai in the binarized iris region image, and judging whether there exists an adjacent pixel point whose gray value differs from the gray value GRAY(Ai) of the pixel point Ai; if yes, retaining the gray value GRAY(Ai) of the pixel point Ai; if not, assigning the pixel point Ai an updated gray value GRAY'(Ai) = 255, thereby obtaining a texture image;
building a training data set comprising a plurality of texture images, and training a preset artificial neural network through the training data set to obtain an identification model;
and identifying the texture image through the identification model to obtain the iris characteristic.
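Claim 5 reuses the thresholding and edge-retention operations of claim 3 on the iris region. Removing the pixels inside the pupil contour can be sketched as masking a circular region; the circular-mask simplification and the fill value are assumptions, since the claim only requires removing the pixels inside the contour:

```python
import math

def remove_pupil(gray, center, radius, fill=255):
    """Overwrite pixels inside the pupil circle so only the iris region remains."""
    cx, cy = center
    return [[fill if math.hypot(x - cx, y - cy) <= radius else px
             for x, px in enumerate(row)]
            for y, row in enumerate(gray)]
```

The resulting iris region image is then binarized against the second gray threshold and boundary-filtered as in claim 3 before the texture image is passed to the trained recognition model.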
6. An iris recognition method as claimed in claim 1, wherein: the step of obtaining relative position data between the pupil center and the iris feature comprises:
extracting the centroid of the iris feature, wherein the iris feature comprises a spot feature, a filament feature, a coronal feature, a stripe feature and a crypt feature;
acquiring a connecting line between the centroid of each iris feature and the pupil center, calculating the length of each connecting line and the included angles between the connecting lines, and producing a three-dimensional vector (Xn, Yn, Z), wherein Xn is the length of connecting line n, Yn is the included angle between connecting line n and connecting line n+1, and Z is the type of the corresponding iris feature;
and generating the relative position data according to all the three-dimensional vectors.
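The relative-position encoding of claim 6 can be sketched as follows, assuming each iris feature is given as a (centroid, type) pair and that the last connecting line is paired back to the first when forming the included angle Yn (a choice the claim leaves open):

```python
import math

def relative_position_vectors(pupil_center, features):
    """Build (Xn, Yn, Z) triples: line length, angle to the next line, feature type."""
    cx, cy = pupil_center
    angles = [math.atan2(y - cy, x - cx) for (x, y), _ in features]
    vectors = []
    for n, ((x, y), ftype) in enumerate(features):
        length = math.hypot(x - cx, y - cy)           # Xn: length of line n
        nxt = angles[(n + 1) % len(angles)]
        dang = abs(nxt - angles[n]) % (2 * math.pi)   # Yn: included angle to line n+1
        dang = min(dang, 2 * math.pi - dang)
        vectors.append((length, dang, ftype))         # Z: feature type
    return vectors
```

Because lengths and included angles are measured relative to the pupil center rather than the scleral boundary, the encoding survives eyelid occlusion of the scleral edge, as the description emphasizes.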
7. An iris recognition method as claimed in claim 6, wherein the step of comparing the relative position data with the relative position template data pre-stored in a database and completing iris recognition based on the comparison result comprises:
extracting a plurality of three-dimensional vectors to be identified from the relative position data;
when the number of the three-dimensional vectors to be identified exceeds a first threshold value or the ratio of the number of the three-dimensional vectors to be identified to the number of the three-dimensional vectors in the relative position template data is larger than a second threshold value, comparing the three-dimensional vectors to be identified with the three-dimensional vectors in the relative position template data;
when more than a third threshold number of the three-dimensional vectors to be identified are contained in the relative position template data, determining that the relative position data is pre-stored in the database, and outputting the identification information corresponding to the relative position data, thereby completing the identification of the eye image.
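A sketch of the matching logic in claim 7. The tolerance-based vector comparison and the reading of the third threshold as a match count are assumptions, since the claim does not specify how two three-dimensional vectors are deemed equal:

```python
def vectors_match(v1, v2, tol=1e-6):
    """Two (Xn, Yn, Z) vectors match if the types agree and numeric parts are close."""
    return v1[2] == v2[2] and abs(v1[0] - v2[0]) <= tol and abs(v1[1] - v2[1]) <= tol

def identify(query_vectors, template_vectors, count_threshold, ratio_threshold,
             match_threshold):
    """Return True when enough vectors were extracted (first or second threshold)
    and more than match_threshold of them appear in the template data."""
    enough = (len(query_vectors) > count_threshold or
              len(query_vectors) / len(template_vectors) > ratio_threshold)
    if not enough:
        return False
    matched = sum(any(vectors_match(q, t) for t in template_vectors)
                  for q in query_vectors)
    return matched > match_threshold
```

The two-stage gate mirrors the claim: the count/ratio check ensures enough of the iris is visible before comparison, and the match count allows identification from a partial set of features.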
8. An iris recognition system, characterized by comprising:
the acquisition module is used for acquiring an eye image and converting the eye image into a gray image;
the operation module is used for extracting the outline of the gray level image to obtain the pupil outline in the gray level image, and positioning the pupil outline to obtain the pupil center; extracting texture features in the gray level image to obtain iris features, and obtaining relative position data between the pupil center and the iris features;
and the identification module is used for comparing the relative position data with various relative position data pre-stored in a database, and completing iris identification based on a comparison result.
9. A storage medium in which a computer program is stored, characterized in that the computer program, when loaded and executed by a processor, implements an iris recognition method as claimed in any one of claims 1 to 7.
10. An electronic device, comprising: a processor and a memory; wherein the memory is used for storing a computer program; the processor is configured to load and execute the computer program to cause the electronic device to perform an iris recognition method as claimed in any one of claims 1 to 7.
CN202210569746.9A 2022-05-24 2022-05-24 Iris recognition method and system Pending CN116740796A (en)


Publications (1)

Publication Number Publication Date
CN116740796A true CN116740796A (en) 2023-09-12

Family

ID=87905012


Country Status (1)

Country Link
CN (1) CN116740796A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination