CN112001244B - Computer-aided iris comparison method and device - Google Patents

Computer-aided iris comparison method and device

Info

Publication number
CN112001244B
Authority
CN
China
Prior art keywords
iris
image
iris image
normalization
positioning
Prior art date
Legal status
Active
Application number
CN202010694591.2A
Other languages
Chinese (zh)
Other versions
CN112001244A (en)
Inventor
陈子龙
马力
王子政
苗迪
胡文锋
邱显超
秦旗
刘京
刘寰
王玥
苗振民
Current Assignee
Beijing Irisking Co ltd
Institute of Forensic Science Ministry of Public Security PRC
Original Assignee
Beijing Irisking Co ltd
Institute of Forensic Science Ministry of Public Security PRC
Priority date
Filing date
Publication date
Application filed by Beijing Irisking Co Ltd and the Institute of Forensic Science, Ministry of Public Security, PRC
Priority to CN202010694591.2A
Publication of CN112001244A
Application granted
Publication of CN112001244B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/197: Matching; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction

Abstract

The invention provides a computer-aided iris comparison method and device. The method comprises the following steps: acquiring a first iris image and a second iris image; rotating the irises in the two images to the same angle; performing iris positioning and normalization on the rotated images, where the normalization comprises one or more of iris radius normalization, pupil radius normalization, and rectangular expansion normalization; calculating attribute information of the iris texture features at corresponding positions in the two normalized images; outputting the attribute information for display, so that the texture consistency of corresponding regions in the two images can be judged; and, if the iris texture features judged consistent meet the identification requirement, confirming that the first iris image and the second iris image come from the same iris. The scheme fills the gap in the field of manual iris comparison.

Description

Computer-aided iris comparison method and device
Technical Field
The invention relates to the technical field of iris recognition, in particular to a computer-aided iris comparison method and device.
Background
Iris recognition is regarded as a highly secure biometric method because of the high stability, uniqueness, and anti-counterfeiting properties of the iris, and it is increasingly used in applications that demand high recognition accuracy. Existing iris recognition methods are fully automatic: a computer collects the user's iris image, extracts iris features, compares them against iris feature templates in a database, and thereby identifies the user.
However, automatic iris recognition cannot yet process all iris images accurately. For example, recognition fails for iris images showing mydriasis, irregular irises, and similar conditions; likewise, when iris recognition is used to confirm the identity of a cadaver, identification fails because the cadaver's iris is distorted. In addition, in forensic examination, biometric evidence must be examined by a human expert who issues the conclusion, so the result of computer-only biometric recognition cannot serve as a basis for forensic identification. This leaves the field of forensic iris identification blank.
Disclosure of Invention
In view of the above, the invention provides a computer-aided iris comparison method and device, so as to realize iris comparison with human participation, fill the gap in the field of manual iris comparison, improve the iris recognition success rate, and meet the requirements of forensic iris identification.
In order to achieve the above purpose, the invention is realized by adopting the following scheme:
According to one aspect of the embodiments of the present invention, there is provided a computer-aided iris comparison method including:
acquiring a first iris image and a second iris image to be compared;
rotating the iris in the first iris image and the iris in the second iris image to the same angle;
performing iris positioning on the rotated first iris image and the rotated second iris image;
normalizing the first iris image after iris positioning and the second iris image after iris positioning; the normalization operation comprises one or more of iris radius normalization, pupil radius normalization and rectangular expansion normalization;
calculating attribute information of iris texture features of corresponding positions in the first iris image subjected to normalization operation and the second iris image subjected to normalization operation;
outputting attribute information of the iris texture features of the corresponding positions to display so as to obtain a consistency comparison result of textures of corresponding areas in the first iris image and the second iris image;
and, when the consistency comparison result shows that at least a set number of corresponding-position iris texture features are consistent, confirming that the irises in the first iris image and the second iris image come from the same iris.
In some embodiments, before rotating the iris in the first iris image and the iris in the second iris image to the same angle, the method further comprises: receiving a judgment result confirming that both the first iris image and the second iris image meet the iris image data quality requirements.
In some embodiments, rotating the iris in the first iris image and the iris in the second iris image to the same angle includes: obtaining the straight line through the inner and outer eye corners corresponding to the iris in the first iris image and the straight line through the inner and outer eye corners corresponding to the iris in the second iris image; and rotating the iris images so that the two straight lines point in the same direction.
In some embodiments, acquiring the first and second iris images to be compared includes acquiring and displaying them. Obtaining the two canthus lines then includes: receiving an inner-canthus click command and an outer-canthus click command for the iris in the first iris image, and generating from them the straight line through the inner and outer eye corners corresponding to that iris; and receiving an inner-canthus click command and an outer-canthus click command for the iris in the second iris image, and generating from them the straight line through the inner and outer eye corners corresponding to that iris.
In some embodiments, performing iris positioning on the rotated first and second iris images includes: performing iris positioning on each rotated image with an iris positioning neural network model to obtain the iris boundary position in each image. The iris positioning neural network model comprises a deep convolutional shared layer, a keypoint regression network layer, an iris segmentation network layer, and an output layer; the output of the shared layer is connected to the inputs of the keypoint regression network layer and the iris segmentation network layer, and the outputs of those two layers are connected to the input of the output layer. The shared layer converts an input iris image into a shared feature map; the keypoint regression network layer extracts from the shared feature map the position information of the keypoints of the iris region in the image; the iris segmentation network layer produces from the shared feature map a segmented image whose pixels correspond one-to-one to the pixels of the input image, the value of each pixel identifying whether the corresponding pixel belongs to the iris region; and the output layer computes an initial iris boundary from the segmented image and the input iris image, and combines the initial boundary with the keypoint position information to obtain the iris boundary position.
In some embodiments, the computer-aided iris comparison method further comprises: training the iris positioning neural network model based on an overall loss function;
wherein the overall loss function is expressed as:
Loss = λ_reg × Loss_reg + λ_cls × Loss_cls
where Loss denotes the overall loss function; Loss_reg denotes the keypoint regression loss function of the keypoint regression network layer; Loss_cls denotes the average segmentation error of the iris segmentation network layer; λ_reg and λ_cls denote the weights of the two terms; N denotes the number of training samples and M the number of keypoints in one iris image; i and j are positive integers indexing training samples and keypoints respectively, with 1 ≤ i ≤ N and 1 ≤ j ≤ M; x_ij and y_ij denote the row and column coordinates of a predicted keypoint, and x̂_ij and ŷ_ij the row and column coordinates of the corresponding labeled keypoint; i′ and j′ denote the row and column indices of a pixel, and H and W the height and width of the iris image; G(i′, j′) denotes the pixel value of the labeled segmented image indicating whether the pixel at row i′, column j′ belongs to the iris region, and M(i′, j′) denotes the prediction of whether that pixel belongs to the iris region.
In some embodiments, iris positioning the rotated first iris image and the rotated second iris image further comprises: outputting and displaying the iris boundary position in the rotated first iris image and the iris boundary position in the rotated second iris image to obtain an instruction for manually adjusting key points of an iris region and/or an instruction for adjusting iris boundaries; and adjusting the iris boundary position of the corresponding iris image according to the instruction of manually adjusting the key points of the iris region and/or the instruction of adjusting the iris boundary.
In some embodiments, the normalization operation includes iris radius normalization and pupil radius normalization, and normalizing the first and second iris images after iris positioning includes: performing a scaling operation and an affine transformation on the two iris-positioned images to realize iris radius normalization and pupil radius normalization.
In some embodiments, this includes: scaling the two iris-positioned images so that their irises have the same iris radius, thereby realizing iris radius normalization; and applying an affine transformation to the two iris-radius-normalized images so that their irises also have the same pupil radius, thereby realizing pupil radius normalization.
In some embodiments, the normalization operation further comprises rectangular expansion normalization, and normalizing the two images further includes: performing rectangular expansion normalization on the irises in the first and second iris images after the irises have been rotated to the same angle. The images so expanded may be the iris-positioned first and second iris images, the iris-radius-normalized first and second iris images, or the pupil-radius-normalized first and second iris images.
In some embodiments, the normalized first iris image includes one or more of: the iris-radius-normalized first iris image, the pupil-radius-normalized first iris image, and the rectangularly expanded first iris image; the normalized second iris image likewise includes one or more of the corresponding second iris images.
In some embodiments, calculating attribute information of iris texture features at corresponding positions in the normalized first and second iris images includes: receiving a marking instruction for an iris texture feature in an image selected from the normalized images; obtaining, from the mark position information of that instruction, the iris texture features at the corresponding positions in the remaining normalized images; and calculating attribute information of the iris texture feature at the marked position in each normalized image.
In some embodiments, the texture consistency comparison result of the corresponding regions in the first and second iris images is the synthesis of the consistency comparison results for the iris texture features at corresponding positions in each pair of images formed by pairing the normalized first iris images with the normalized second iris images one by one; the consistency comparison result for each such pair is obtained by combining the per-attribute comparison results of the attribute information of the iris texture features.
In some embodiments, the attribute information of an iris texture feature includes iris texture feature position information and iris texture feature image information.
In some embodiments, the iris texture feature position information comprises one or more of: the center; the center of gravity; the average gray level after image gray normalization; the center and the center of gravity in polar coordinates with the pupil center as origin; and the center and the center of gravity in polar coordinates with the iris center as origin. The iris texture feature image information comprises the average gray level of the iris texture feature and/or its average gray level after image gray normalization.
According to another aspect of an embodiment of the present invention, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to any of the embodiments described above when the program is executed.
According to a further aspect of embodiments of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of any of the embodiments described above.
The computer-aided iris comparison method, electronic device, and computer-readable storage medium realize iris comparison with human participation, fill the gap in the field of manual iris comparison, improve the iris recognition success rate, meet the requirements of forensic iris identification, and lay a technical and methodological foundation for applying the iris in the public security and judicial fields.
Drawings
In order to more clearly illustrate the embodiments of the invention and the technical solutions in the prior art, the drawings required in the description are briefly introduced below. The drawings described below illustrate only some embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort. In the drawings:
FIG. 1 is a flowchart of a computer-aided iris comparison method according to an embodiment of the invention;
FIG. 2 is a flowchart of a computer-aided iris comparison method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an iris radius normalization process for iris images in accordance with an embodiment of the invention;
FIG. 4 is a schematic diagram of a rectangular expansion normalization process for iris images according to an embodiment of the invention;
FIG. 5 is a flowchart of an iris positioning method according to an embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings. The exemplary embodiments of the present invention and their descriptions herein are for the purpose of explaining the present invention, but are not to be construed as limiting the invention.
At present, iris recognition methods are all fully automatic and cannot incorporate human experience. Therefore, where automatic recognition cannot process iris images with perfect accuracy, its errors cannot be reduced through human judgment. Moreover, obtaining iris identification results by manually comparing irises remains a blank field, so no method can serve as an operational basis in the field of forensic iris identification.
To solve these problems, an embodiment of the invention provides a computer-aided iris comparison method that improves the iris recognition success rate and/or meets forensic identification requirements, helps advance the field of iris identification (for example as a basis for forensic identification), and provides a technical and methodological basis for iris identification work.
FIG. 1 is a flowchart of a computer-aided iris comparison method according to an embodiment of the invention. As shown in FIG. 1, the computer-aided iris comparison method of this embodiment includes the following steps S110 to S170, whose specific embodiments are described in detail below.
Step S110: and acquiring a first iris image and a second iris image to be compared.
In step S110, the acquired iris image may be a human eye image; that is, it may include not only the iris but also surrounding parts such as the eyelids. For simplicity, the comparison method of the invention is described with two iris images as an example, but acquiring and comparing more than two iris images pairwise is not excluded. In practice, one of the two images may be acquired on site and the other taken from a database or another historical source; alternatively, both may be database images, or both acquired on site. The executing device may acquire an iris image by loading an image stored on, or transmitted to, the device.
To obtain a better comparison result and better meet the requirements of iris identification, after the iris images to be compared are acquired, a human examiner may judge whether the two images meet the manual comparison requirements before subsequent processing (steps S120 to S170). Only iris images that meet the comparison requirements are used for comparison.
Illustratively, after step S110 (and before step S120 described later), the computer-aided iris comparison method shown in FIG. 1 may further include: receiving a judgment result confirming that both the first iris image and the second iris image meet the iris image data quality requirements.
The iris image data quality requirements may include the national standard of the People's Republic of China, "Information technology - Biometric sample quality - Part 6: Iris image data", and the public safety industry standard of the People's Republic of China, "Technical requirements for iris images in security iris recognition applications". Whether the first and second iris images meet the manual comparison requirements can be judged manually against these standards, and the judgment result (for example, yes or no) is input; if yes, the executing device continues with the subsequent steps, and if no, a prompt may be given.
Step S120: and rotating the iris in the first iris image and the iris in the second iris image to the same angle.
When a human eye image is captured, the shooting angle or the pose of the face may cause a circumferential deviation between the irises in the two images to be compared; for example, the lines connecting the inner and outer eye corners of the eyes to which the irises belong may point in inconsistent directions, such as deviating differently from the horizontal axis. This condition may be called angular inconsistency of the irises. Step S120 eliminates such inconsistency between the first and second iris images, thereby improving the accuracy of the comparison result.
Specifically, in step S120, the current angle of the iris in an iris image may be determined from the eye corners, after which the rotation is applied. The current angle can be determined from the inner and outer corners of the eye to which the iris belongs in the same image. When an iris image contains two eyes, it is of course not excluded to use the corners of both eyes to determine the current angles of the two irises, as long as the angle of each iris to be compared is determined in the same way.
Illustratively, step S120, rotating the iris in the first iris image and the iris in the second iris image to the same angle, may include: S121, obtaining the straight line through the inner and outer eye corners corresponding to the iris in the first iris image and the straight line through the inner and outer eye corners corresponding to the iris in the second iris image; and S122, rotating the iris images so that the two straight lines point in the same direction.
Step S121 may be performed by the executing device, which automatically recognizes the positions of the inner and outer eye corners corresponding to an iris and generates the connecting line to obtain the straight line. In some iris images the eye corners are hard to recognize automatically, for example because the eye is not complete enough; to let a person participate in locating the eye corners and determine the iris angle more accurately, steps S121 and S122 are also allowed to be completed manually.
For example, to facilitate manual viewing of the iris images, step S110, acquiring the first and second iris images to be compared, may include acquiring and displaying them. Step S121 may then include: S1211, receiving an inner-canthus click command and an outer-canthus click command for the iris in the first iris image, and generating from them the straight line through the inner and outer eye corners corresponding to that iris; and S1212, receiving an inner-canthus click command and an outer-canthus click command for the iris in the second iris image, and generating from them the straight line through the inner and outer eye corners corresponding to that iris.
In step S1211, for example, the inner and outer canthi may each be selected with a single click, and the connecting line is generated between them. In step S122, the canthus lines of the two iris images may be manually adjusted to the same direction, for example the horizontal direction; other directions, such as the vertical direction, are equally possible, and the lines may also be adjusted automatically to a standard direction.
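As an illustrative sketch of steps S121/S122 (not the claimed implementation), the canthus line angle can be measured and removed as follows; the canthus coordinates are assumed to come from the click commands of steps S1211/S1212, and the function name is hypothetical:

```python
import cv2
import numpy as np

def rotate_to_canthus_line(image, inner_canthus, outer_canthus, target_deg=0.0):
    """Rotate `image` so the inner-outer canthus line points at `target_deg`
    (0 degrees = horizontal). Canthus points are (x, y) pixel coordinates,
    e.g. taken from the click commands of steps S1211/S1212."""
    (x1, y1), (x2, y2) = inner_canthus, outer_canthus
    current_deg = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    h, w = image.shape[:2]
    # Rotate about the image center by the difference to the target direction.
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), current_deg - target_deg, 1.0)
    return cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)
```

Applying this to both eye images with the same target direction realizes "rotating the irises to the same angle".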
Step S130: and iris positioning is carried out on the rotated first iris image and the rotated second iris image.
In specific implementations, iris positioning can be performed on both images by a trained neural network model, which may be an existing model or an improved model as described below.
For example, step S130, performing iris positioning on the rotated first and second iris images, may include: S131, performing iris positioning on each rotated image with an iris positioning neural network model to obtain the iris boundary position in the rotated first iris image and the iris boundary position in the rotated second iris image. The iris positioning neural network model comprises a deep convolutional shared layer, a keypoint regression network layer, an iris segmentation network layer, and an output layer; the output of the shared layer is connected to the inputs of the keypoint regression network layer and the iris segmentation network layer, and the outputs of those two layers are connected to the input of the output layer. The shared layer converts an input iris image into a shared feature map; the keypoint regression network layer extracts from the shared feature map the position information of the keypoints of the iris region; the iris segmentation network layer produces a segmented image whose pixels correspond one-to-one to the pixels of the input image, the value of each pixel identifying whether the corresponding pixel belongs to the iris region; and the output layer computes an initial iris boundary from the segmented image and the input iris image and combines it with the keypoint position information to obtain the iris boundary position.
In step S131, the rotated first iris image is input into the iris positioning neural network model to obtain its iris boundary position, and likewise for the rotated second iris image. The iris boundary position means the position of the boundary of the iris in the image; it may include the positions of the inner and outer circles of the iris and, when part of the iris is covered by the eyelid, the boundary of the covering eyelid part. The eyelid boundary over the iris region can be obtained by polynomial fitting to the keypoints of the iris region.
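The polynomial fitting of the eyelid boundary mentioned above could look like this sketch, assuming the iris-region keypoints lying on the eyelid are available as (x, y) pixel pairs; the polynomial degree is an assumption:

```python
import numpy as np

def fit_eyelid_boundary(eyelid_keypoints, degree=2):
    """Fit a polynomial y = f(x) through the iris-region keypoints that lie on
    the eyelid, giving the boundary of the eyelid part covering the iris."""
    pts = np.asarray(eyelid_keypoints, dtype=float)  # shape (K, 2), columns x, y
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=degree)
    return np.poly1d(coeffs)  # callable: boundary_y = f(x)
```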
In addition, after the iris image passes through the deep convolutional shared layer, a multi-channel feature map is obtained; because this feature map serves as the input to both the keypoint regression network layer and the iris segmentation network layer, it is called the shared feature map. The keypoint regression network layer may be a regression network; the keypoints are keypoints of the iris region in the iris image, and their position information may be row and column coordinates. The iris segmentation network layer may be an iris segmentation network: the shared feature map is input to it and undergoes multi-scale transformation, yielding a segmented image whose pixels correspond one-to-one to the pixels of the iris image. Each pixel value in the segmented image identifies whether the pixel belongs to the iris region, for example a value of 1 for iris and 0 for non-iris. The output layer performs further computation on the input iris image, the output of the iris segmentation network layer, and the output of the keypoint regression network layer to obtain the required result; for example, it can refine the iris boundary by operations such as an integro-differential operator applied to the segmented image and the iris image to output an initial iris boundary, and then combine the initial boundary with the iris region keypoints under some strategy, for example adjusting the initial boundary according to the keypoint positions, to output the final iris boundary positioning result.
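A minimal PyTorch sketch of such a shared-backbone, two-head layout follows; the channel counts, layer sizes, and class name are illustrative assumptions rather than the patented architecture, and the non-trainable output layer (integro-differential refinement) is omitted:

```python
import torch
import torch.nn as nn

class IrisLocalizationNet(nn.Module):
    """Shared deep convolutional layer with a keypoint regression head and an
    iris segmentation head, as described above (illustrative dimensions)."""
    def __init__(self, num_keypoints=32):
        super().__init__()
        self.shared = nn.Sequential(            # deep convolutional shared layer
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.keypoint_head = nn.Sequential(     # keypoint regression network layer
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, num_keypoints * 2),  # (row, col) per keypoint
        )
        self.seg_head = nn.Sequential(          # iris segmentation network layer
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                # per-pixel iris / non-iris logit
        )

    def forward(self, x):
        feat = self.shared(x)                   # shared feature map
        keypoints = self.keypoint_head(feat)    # iris-region keypoint coordinates
        seg_logits = self.seg_head(feat)        # same spatial size as the input
        return keypoints, seg_logits
```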
Further, in some embodiments, the method may include training the model. For example, step S130, performing iris positioning on the rotated first and second iris images, may further include the step: S132, training the iris positioning neural network model based on an overall loss function.
Wherein the overall loss function is expressed as:
Loss = λ_reg × Loss_reg + λ_cls × Loss_cls
where Loss denotes the overall loss function; Loss_reg denotes the keypoint regression loss function of the keypoint regression network layer; Loss_cls denotes the average segmentation error of the iris segmentation network layer; λ_reg and λ_cls denote the weights of the two terms; N denotes the number of training samples and M the number of keypoints in one iris image; i and j are positive integers indexing training samples and keypoints respectively, with 1 ≤ i ≤ N and 1 ≤ j ≤ M; x_ij and y_ij denote the row and column coordinates of a predicted keypoint, and x̂_ij and ŷ_ij the row and column coordinates of the corresponding labeled keypoint; i′ and j′ denote the row and column indices of a pixel, and H and W the height and width of the iris image; G(i′, j′) denotes the pixel value of the labeled segmented image indicating whether the pixel at row i′, column j′ belongs to the iris region, and M(i′, j′) denotes the prediction of whether that pixel belongs to the iris region.
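Given only the symbol definitions above, one plausible reading of the two terms is mean-squared error over keypoint coordinates and mean absolute per-pixel segmentation error; the following sketch assembles the overall loss under that assumption (function and argument names are hypothetical):

```python
import torch
import torch.nn.functional as F

def overall_loss(pred_kpts, gt_kpts, pred_mask, gt_mask,
                 lambda_reg=1.0, lambda_cls=1.0):
    """Loss = lambda_reg * Loss_reg + lambda_cls * Loss_cls.
    pred_kpts/gt_kpts: (N, M, 2) row/column keypoint coordinates.
    pred_mask/gt_mask: (N, 1, H, W) iris-region maps with values in [0, 1]."""
    loss_reg = F.mse_loss(pred_kpts, gt_kpts)               # keypoint regression loss
    loss_cls = torch.mean(torch.abs(pred_mask - gt_mask))   # average segmentation error
    return lambda_reg * loss_reg + lambda_cls * loss_cls
```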
More specifically, step S132, training the iris positioning neural network model based on the overall loss function (the model training method), may include the steps of:
S1321, inputting the iris image of each training sample into the deep convolutional shared layer to obtain the corresponding shared feature map;
S1322, inputting the shared feature map into the keypoint regression network layer to obtain the position information of the keypoints of the corresponding iris region, and calculating the value of the regression network loss function from the position information of all keypoints of the iris images of the training samples;
S1323, inputting the shared feature map into the iris segmentation network layer to obtain a segmented image whose pixels correspond one-to-one to the pixels of the corresponding iris image, where each pixel value identifies whether the pixel belongs to the iris region (1 for iris, 0 for non-iris), and calculating the value of the segmentation network loss function from the pixel values of all segmented images and the iris regions labeled in the corresponding training samples;
S1324, weighting the values of the regression network loss function and the segmentation network loss function, back-propagating through the network model comprising the deep convolutional shared layer, the keypoint regression network layer, and the iris segmentation network layer, and training iteratively until the set requirement is met, for example until the overall loss function reaches a set value or a set number of training iterations is reached, thereby obtaining the trained network model;
S1325, connecting the trained network model to the output layer to obtain the iris positioning neural network model; the output layer obtains an initial iris boundary by applying operations including the integro-differential operator to an iris image to be compared and its segmented image, and obtains the iris boundary position from the initial boundary and the position information of the iris region keypoints of that image.
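Steps S1321 to S1324 amount to an ordinary supervised joint training loop. A hedged sketch follows, reusing the overall_loss function sketched above; the optimizer, learning rate, and stopping values are assumptions:

```python
import torch

def train(model, loader, epochs=50, lambda_reg=1.0, lambda_cls=1.0, target=1e-3):
    """Jointly train the shared layer and both heads (S1321-S1324)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(epochs):                      # or stop when loss <= target
        for images, gt_kpts, gt_masks in loader:     # S1321: shared features inside model
            pred_kpts, seg_logits = model(images)    # S1322/S1323: the two heads
            pred_masks = torch.sigmoid(seg_logits)
            loss = overall_loss(pred_kpts.view_as(gt_kpts), gt_kpts,
                                pred_masks, gt_masks, lambda_reg, lambda_cls)
            opt.zero_grad()
            loss.backward()                          # S1324: weighted joint update
            opt.step()
        if loss.item() <= target:
            break
    return model
```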
Further, in other embodiments, step S130 may include, in addition to step S131, the steps of: S133, outputting and displaying the iris boundary positions in the rotated first and second iris images, so as to receive an instruction for manually adjusting the iris region keypoints and/or an instruction for adjusting the iris boundary; and S134, adjusting the iris boundary position of the corresponding iris image according to those instructions.
Step S131 yields the iris boundary positions in the two rotated images, and step S133 displays them for manual inspection. An examiner can judge visually whether the boundary found in each image deviates and, if so, adjust it. For example, the keypoints found automatically in the image can be added, deleted, or moved by dragging; the automatically found iris boundary can also be adjusted, for example locally deformed, shrunk as a whole, or enlarged as a whole. In step S134, if an instruction to add a keypoint is received, a keypoint is added at the selected position; if an instruction to delete a keypoint is received, the selected keypoint is deleted; if an instruction to select and drag a keypoint is received, the position of that automatically found keypoint is changed. Likewise, on receiving an instruction to shrink, enlarge, or deform the iris boundary, the automatically found boundary is shrunk, enlarged, or deformed accordingly. In this way, manual participation improves the accuracy of iris boundary positioning.
Step S140: normalizing the first iris image after iris positioning and the second iris image after iris positioning; wherein the normalization operation includes one or more of iris radius normalization, pupil radius normalization, and rectangular expansion normalization.
The normalization operation may process an iris image in any one of the three modes separately, in two of them, or in all three; the images obtained by normalization may therefore include images produced by several different processing modes. The first and second iris images may be processed in exactly the same way.
Iris radius normalization and pupil radius normalization together produce a circular normalized iris image (the inner-circle radii of the two irises match, and so do the outer-circle radii); the order of the two normalizations, and which of the inner and outer circles each acts on, can vary. Rectangular expansion normalization unfolds the annular iris into a rectangle, which facilitates computer processing. The image used for rectangular expansion may also have undergone other processing, such as iris radius normalization and/or pupil radius normalization, as long as the irises have been rotated into angular agreement.
In some embodiments, the normalization operation in step S140 includes iris radius normalization and pupil radius normalization. In this case step S140, normalizing the first and second iris images after iris positioning, may include: S141, performing a scaling operation and an affine transformation on the two iris-positioned images to realize iris radius normalization and pupil radius normalization.
Scaling the iris image can make one of the inner and outer circles of the two irises coincide, and the affine transformation can make the other coincide. The affine transformation can also adjust the inner or outer circle of the iris according to the actual curvature of the iris.
More specifically, step S141 may include: S1411, scaling the iris-positioned first and second iris images so that the irises in the two images have the same iris radius, thereby normalizing the iris radius; and S1412, applying an affine transformation to the two iris-radius-normalized images so that the irises also have the same pupil radius, thereby normalizing the pupil radius and completing both normalizations.
In this embodiment, step S1411 first scales the images so that the iris radii (the outer-circle radii) of the two irises agree; this can be achieved by scaling one image or both. Step S1412 then transforms the iris-radius-normalized images so that the pupil radii (the inner-circle radii) also agree; likewise, this may transform one image or both.
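As an illustration of steps S1411 and S1412, the sketch below scales the image to a target iris radius and then remaps the annulus to a target pupil radius. The text above specifies an affine transformation for the pupil step; since its exact form is not given, the sketch substitutes a radial remapping that keeps the outer boundary fixed, a stand-in rather than the claimed transformation. Circle parameters are assumed to come from the iris positioning step:

```python
import cv2
import numpy as np

def normalize_iris_radius(image, iris_circle, target_iris_r):
    """S1411: scale the whole image so the iris (outer circle) radius becomes
    target_iris_r. iris_circle = (cx, cy, r) from iris positioning."""
    cx, cy, r = iris_circle
    s = target_iris_r / float(r)
    scaled = cv2.resize(image, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
    return scaled, (cx * s, cy * s, float(target_iris_r))

def normalize_pupil_radius(image, center, pupil_r, iris_r, target_pupil_r):
    """S1412 stand-in: remap the annulus so the pupil boundary moves to
    target_pupil_r while the iris (outer) boundary stays fixed."""
    h, w = image.shape[:2]
    cx, cy = center
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    r_out = np.hypot(dx, dy)
    # Output radii [target_pupil_r, iris_r] sample source radii [pupil_r, iris_r].
    t = np.clip((r_out - target_pupil_r) / max(iris_r - target_pupil_r, 1e-6), 0.0, 1.0)
    r_src = pupil_r + t * (iris_r - pupil_r)
    scale = np.where(r_out > 1e-6, r_src / np.maximum(r_out, 1e-6), 1.0)
    scale = np.where(r_out >= iris_r, 1.0, scale)   # leave pixels outside the iris alone
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```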
In other embodiments, the normalization operation in step S140 may further include rectangular expansion normalization in addition to iris radius normalization and pupil radius normalization. In this case step S140 may further include, in addition to step S141: S142, performing rectangular expansion normalization on the irises in the first and second iris images after the irises have rotated to the same angle. The images so expanded may be the iris-positioned first and second iris images, the iris-radius-normalized first and second iris images, or the pupil-radius-normalized first and second iris images.
By performing rectangular expansion normalization on the iris in an iris image, a rectangular normalized image is generated. Specifically, for example, taking the center of the pupil or of the iris as the origin and starting from the horizontal axis, one rotates a full turn in a fixed direction (clockwise or counterclockwise) and, at each fixed angle, records the pixels on the normal (radial) line within the iris region into the corresponding column of the normalized rectangular region, which completes the rectangular normalization of the iris region.
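The rotation-and-normal-line procedure just described corresponds to a standard polar unwrapping; a sketch under the simplifying assumption of concentric inner and outer circles (resolution parameters are assumptions):

```python
import cv2
import numpy as np

def unwrap_to_rectangle(image, center, pupil_r, iris_r,
                        n_angles=360, n_radii=64, clockwise=True):
    """Rectangular expansion normalization: for each fixed angle, the pixels on
    the radial line between the pupil and iris boundaries are written into the
    corresponding column of the normalized rectangle."""
    cx, cy = center
    sign = -1.0 if clockwise else 1.0
    thetas = sign * np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(pupil_r, iris_r, n_radii)
    # Sampling grids: rows index radius, columns index angle (from the horizontal axis).
    map_x = (cx + np.outer(radii, np.cos(thetas))).astype(np.float32)
    map_y = (cy + np.outer(radii, np.sin(thetas))).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```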
Further, the normalized first iris image may include one or more of: the iris-radius-normalized first iris image, the pupil-radius-normalized first iris image, and the rectangularly expanded first iris image; the normalized second iris image may likewise include one or more of the corresponding second iris images.
In these embodiments, the iris image used for rectangular expansion normalization may be any iris image whose iris has been rotated into angular agreement; it need not have undergone iris radius normalization and pupil radius normalization, and may have undergone one, both, or neither. For example, it may be the first and second iris images after step S120, after step S1411, or after step S1412. The first and second iris images should undergo the same type of operations before each is rectangularly expanded.
Step S150: calculating attribute information of iris texture features of corresponding positions in the first iris image subjected to normalization operation and the second iris image subjected to normalization operation.
There may be one or more normalized first iris images, which form a set of images for the first iris image; for example, the set may include one or more of the first iris image after step S1411, after step S1412, and after step S142. Similarly, the normalized second iris images form another set of images for the second iris image.
Any image from the set for the first iris image and any image from the set for the second iris image can form a pair, so one or more pairs of images can be formed; for example, two iris images that have undergone the same processing can be paired. Various attribute information of the iris texture features in the corresponding position regions of each pair can then be extracted. Pairing images that underwent different processing, for example an iris-radius-normalized first iris image with a pupil-radius-normalized second iris image, is not excluded, because the executing device can record the correspondence between positions in the iris images produced by the various operations.
The attribute information of an iris texture feature may include iris texture feature position information and iris texture feature image information. Specifically, the position information may include: the center and the center of gravity of the iris texture feature; the average gray level after image gray normalization; and the center and the center of gravity in polar coordinates with the pupil center or the iris center as origin. The image information may include the average gray level of the iris texture feature and/or its average gray level after image gray normalization, and may further include the iris texture feature (patch) image itself. Such attribute information can be computed by the executing device.
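For a marked texture patch represented as a binary mask, the listed attributes could be computed as below. The interpretation of "center" as the unweighted coordinate mean and "center of gravity" as the gray-weighted mean, and the min-max gray normalization, are assumptions:

```python
import numpy as np

def patch_attributes(gray_image, patch_mask, pupil_center):
    """Compute attribute information for one iris texture patch.
    gray_image: 2-D grayscale image; patch_mask: boolean mask of the patch."""
    rows, cols = np.nonzero(patch_mask)
    ys, xs = rows.astype(float), cols.astype(float)
    vals = gray_image[rows, cols].astype(float)

    center = (xs.mean(), ys.mean())                          # unweighted center
    weights = vals / max(vals.sum(), 1e-9)
    centroid = ((xs * weights).sum(), (ys * weights).sum())  # gray-weighted center of gravity

    norm = (gray_image - gray_image.min()) / max(np.ptp(gray_image), 1e-9)
    mean_gray = vals.mean()                                  # average gray of the patch
    mean_gray_norm = norm[rows, cols].mean()                 # after gray normalization

    # Center in polar coordinates with the pupil center as origin.
    dx, dy = center[0] - pupil_center[0], center[1] - pupil_center[1]
    polar_center = (np.hypot(dx, dy), np.arctan2(dy, dx))

    return {"center": center, "centroid": centroid,
            "mean_gray": mean_gray, "mean_gray_normalized": mean_gray_norm,
            "polar_center_pupil_origin": polar_center}
```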
In some embodiments, step S150, calculating the attribute information of iris texture features at corresponding positions in the normalized first and second iris images, may include: S151, receiving a marking instruction for an iris texture feature in an image selected from the normalized images; S152, obtaining, from the mark position information of that instruction, the iris texture features at the corresponding positions in the remaining normalized images; and S153, calculating attribute information of the iris texture feature at the marked position in each normalized image.
In step S151, a clear iris texture feature (patch) can be found manually in one iris image and marked, so that the executing device receives the marking instruction, including the marked position information. In step S152, since there may be more than one other iris image, the texture at the position corresponding to the marked position is found in all of them. In step S153, the executing device computes the attribute information of the same corresponding position in all iris images. Repeating these steps in turn finds the attribute information of patches at multiple corresponding positions in each iris image.
Step S160: and outputting attribute information of the iris texture features of the corresponding positions to display so as to obtain a consistency comparison result of textures of corresponding areas in the first iris image and the second iris image.
The attribute information of the iris texture features in each iris image may be displayed on the executing device or output through it to another device. After examining the attribute information, an examiner can make judgments based on experience, so that manual judgment enters into the iris comparison result.
The consistency of iris texture features can be judged manually from their attribute information. Specifically, when comparing the images of each pair (for example, the pair of iris-radius-normalized images from step S1411, the pair of pupil-radius-normalized images from step S1412, and the pair of rectangularly expanded images from step S142), the methods for judging whether two iris texture patches are the same patch may include, but are not limited to, one or more of the following: (1) judging whether the line from the patch center (or center of gravity) to the pupil (or iris) center makes the same angle with the coordinate axes in both images; (2) judging whether the gray levels of the two patches are similar; (3) judging whether the contrast between each patch and its surrounding non-patch area is similar; (4) judging whether the boundary shapes of the two patches are similar; (5) after superimposing the two images according to a given rule, computing the ratio of the pixel count of the intersection of the two patches to the pixel count of their union and comparing it with a preset threshold, judging the patches similar if the ratio exceeds the threshold and dissimilar otherwise; (6) judging whether the differences between the two patches have an explainable cause.
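Way (5) in the list above is an intersection-over-union test; a direct sketch (the 0.5 threshold is an illustrative assumption):

```python
import numpy as np

def patches_consistent_iou(mask_a, mask_b, threshold=0.5):
    """Way (5): after the two images are superimposed, compare the ratio of the
    intersection pixel count to the union pixel count against a threshold."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    iou = inter / union if union else 0.0
    return iou >= threshold, iou
```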
In step S160, the consistency comparison result of the textures of the corresponding areas in the first and second iris images may be a comprehensive result obtained by combining the consistency comparison results of the iris texture features at corresponding positions in each pair of images formed one-to-one from the normalized first iris image and the normalized second iris image. The consistency comparison result for a pair of images can in turn be obtained by combining the attribute information of the iris texture features with the itemized comparison results of that attribute information. For example, the group of images derived from the first iris image and the group derived from the second iris image can form three pairs of images, and the comparison results of the three pairs can be combined into the final consistency comparison result of the iris texture features.
Step S170: when the number of corresponding positions whose iris texture features are judged consistent in the consistency comparison result is greater than or equal to a set number, confirming that the irises in the first iris image and the second iris image come from the same iris.
Iris texture features at a plurality of corresponding positions in each image can be compared sequentially or in parallel; the more corresponding positions whose texture features agree, the higher the probability that the irises in the first and second iris images are the same. The inventors found empirically that with, for example, 5 or 6 or more identical iris texture patches, the probability that the irises are identical already reaches a very high level, so the set number may be, for example, 4, 5 or 6.
According to the computer-aided iris comparison method above, attribute information of iris texture features that can be used for manual judgment is obtained through rotation, positioning, normalization and attribute extraction, and the judgment made manually on the basis of that information is then received, so that the manual assessment of iris texture features is incorporated into the comparison result. This avoids comparison failures caused by insufficient computer accuracy and makes it easier to satisfy the requirements of iris identification.
In addition, the embodiment of the invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps of the method in any embodiment when executing the program.
The present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of any of the above embodiments.
In order that those skilled in the art will better understand the present invention, embodiments of the present invention will be described below with specific examples.
At present, there is no implementation scheme for manually comparing and identifying iris images in pairs. To solve this problem, in a specific embodiment the proposed computer-aided manual iris comparison method judges, based on computer-aided imaging and image attribute calculation of iris images captured by acquisition equipment, whether two iris images to be compared belong to the same iris through manual pairwise comparison of the iris images.
Fig. 2 is a schematic flow chart of a computer-aided iris comparison method according to an embodiment of the present invention. Referring to Fig. 2, the computer-aided manual iris comparison method of this embodiment may include the following steps:
S1, judging whether the two iris images meet the requirements for manual comparison;
S2, marking the left/right eye and the inner and outer boundaries of the iris on each image; the marking can be completed manually or automatically by a computer;
S3, rotating the two images to the same orientation using the left and right eye corners, for example so that the line between the eye corners is parallel to the horizontal axis;
S4, scaling the two images respectively so that the iris radii in the two images are the same, obtaining images A_1 and B_1 correspondingly;
S5, generating circular normalized images: applying an affine transformation to the iris and pupil areas of images A_1 and B_1 so that the pupil radii and iris radii in the two transformed images are the same, obtaining images A_2 and B_2;
S6, generating rectangular normalized images: taking the positive x-axis direction from the pupil center as the starting point, normalizing and expanding the iris areas of images A_1 and B_1 into rectangles along the tangential direction of the annular iris; the normalized images are denoted A_3 and B_3;
S7, manually marking an iris texture feature clearly visible in any one of images A_1, A_2, A_3, B_1, B_2 and B_3;
S8, the computer automatically depicts, one by one on the other unmarked images, the positions corresponding to the iris texture feature marked in step S7;
S9, computing with computer assistance the position information and image information of the marked positions in all the images, including but not limited to the center, the center of gravity, the average gray level of the marked patch, the average gray level after image gray normalization, and the center and center of gravity in polar coordinates with the pupil or iris center as origin;
S10, combining the comparison results of the 3 pairs of images (image A_1 with image B_1, image A_2 with image B_2, and image A_3 with image B_3) obtained from the computed position information and image information, and judging whether there is an obvious unexplainable difference; if so, discarding the current patch and repeating steps S7, S8 and S9; otherwise, judging that the currently marked patches belong to the same texture patch;
S11, repeating the marking steps S7, S8, S9 and S10 until the number of identical texture patches is not less than N (a positive integer), or no clearly visible iris texture feature can be found in any of the current images;
S12, if after manual comparison the number of identical texture patches is not less than N, determining that the two iris images belong to the same iris and making an "identity" judgment; otherwise, it cannot be determined that the two iris images come from the same iris.
Among these, steps S1, S2, S3, S7, S10, S11 and S12 can be completed manually; steps S4, S5, S6, S8 and S9 can be performed with computer assistance.
In step S1, it is judged whether the two images meet the comparison requirements. The reference standards may be the national standard of the People's Republic of China "Information technology - Biometric sample quality - Part 6: Iris image data" and the public safety industry standard of the People's Republic of China on technical requirements for iris images in security iris recognition applications. That is, the images may be required to meet the iris image data quality requirements specified in these standards.
In step S2, the eye corners may be marked manually or automatically by a computer. Automatic marking of the eye corners can be realized with an existing key point detection model. Fig. 5 is a flowchart of an iris positioning method according to an embodiment of the invention; referring to Fig. 5, the method for automatic calibration by a computer may include the following steps (a code sketch of the network structure is given after the list):
S2.1, inputting the iris image into a deep convolutional network shared layer and outputting a shared feature map; the shared feature map is used for the subsequent key point regression prediction and iris region segmentation;
S2.2, inputting the shared feature map into a key point regression network and outputting a number of key point coordinate pairs $(x_k, y_k)$, where $x_k$ and $y_k$ are the row and column coordinates of the k-th key point;
S2.3, inputting the shared feature map into an iris segmentation network, performing multi-scale transformation, and outputting a segmented image whose pixels correspond one-to-one to those of the iris image, each pixel value being 1 or 0 to indicate whether the pixel belongs to the iris region of the iris image;
S2.4, performing integro-differential and similar operations on the iris boundary according to the iris segmentation image and the iris image, and outputting a precise positioning result of the iris boundary;
S2.5, comprehensively processing the outputs of steps S2.2 and S2.4 according to a set strategy, and outputting the final iris boundary positioning result.
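For illustration, a minimal PyTorch sketch of the shared-layer, two-branch structure of steps S2.1-S2.3 follows. The layer sizes, key point count and upsampling factor are assumptions; the patent does not specify these architecture details.

```python
import torch
import torch.nn as nn

class IrisLocNet(nn.Module):
    """Sketch of the two-branch localization network of steps S2.1-S2.3.
    Layer sizes and the number of key points are illustrative only."""
    def __init__(self, num_keypoints=16):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(                 # S2.1: shared feature map
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.keypoint_head = nn.Sequential(            # S2.2: key point regression
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_keypoints * 2),          # (x_k, y_k) pairs
        )
        self.seg_head = nn.Sequential(                 # S2.3: iris segmentation
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),         # per-pixel iris probability
        )

    def forward(self, x):                              # x: (B, 1, H, W) iris image
        feat = self.backbone(x)
        kpts = self.keypoint_head(feat).view(-1, self.num_keypoints, 2)
        mask = self.seg_head(feat)                     # threshold at 0.5 for a 0/1 map
        return kpts, mask

# e.g. kpts, mask = IrisLocNet()(torch.randn(1, 1, 128, 128))
```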
In the training process of the neural network model, key point marks and iris region marks are annotated in advance on the iris images in the training samples. To handle this multi-task problem, the loss function used in training the model may comprise two parts, corresponding to the key point regression task and the iris segmentation task respectively. The overall loss function Loss can be written as:
$$\mathrm{Loss}=\lambda_{reg}\times \mathrm{Loss}_{reg}+\lambda_{cls}\times \mathrm{Loss}_{cls}$$

where $\lambda_{reg}$ and $\lambda_{cls}$ are the weights of the key point regression loss and the iris segmentation loss respectively, and $\mathrm{Loss}_{reg}$ and $\mathrm{Loss}_{cls}$ are the key point regression loss and the average segmentation error of iris segmentation respectively.
The regression loss $\mathrm{Loss}_{reg}$ can be expressed as:

$$\mathrm{Loss}_{reg}=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}\left[\left(x_{ij}-\hat{x}_{ij}\right)^{2}+\left(y_{ij}-\hat{y}_{ij}\right)^{2}\right]$$

where $N$ is the number of training samples in a batch, $M$ is the number of key points on an image, $x_{ij}$ and $y_{ij}$ are the predicted row and column coordinates of the $j$-th key point of the $i$-th sample, and $\hat{x}_{ij}$ and $\hat{y}_{ij}$ are the row and column coordinates of the corresponding marked key point.
The average segmentation error $\mathrm{Loss}_{cls}$ for iris segmentation is:

$$\mathrm{Loss}_{cls}=\frac{1}{N\,H\,W}\sum_{n=1}^{N}\sum_{i'=1}^{H}\sum_{j'=1}^{W}\left|G_{n}(i',j')-M_{n}(i',j')\right|$$

where $N$ is the number of samples in a training batch, $H$ and $W$ are the height and width of the iris image, $G_{n}(i',j')$ indicates whether the pixel at row $i'$, column $j'$ of the $n$-th image is labeled as iris region, and $M_{n}(i',j')$ is the model's prediction of whether that pixel is iris region.
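A minimal PyTorch sketch of the combined loss under the formulas above; the tensor shapes, the normalization over key points, and the weight values are illustrative assumptions.

```python
import torch

def total_loss(kpt_pred, kpt_true, mask_pred, mask_true,
               lambda_reg=1.0, lambda_cls=1.0):
    """Sketch of Loss = lambda_reg * Loss_reg + lambda_cls * Loss_cls.
    kpt_*: (N, M, 2) key point coordinates; mask_*: (N, 1, H, W) maps.
    Weight values of 1.0 are illustrative, not from the patent."""
    # squared coordinate error, averaged over samples and key points
    loss_reg = ((kpt_pred - kpt_true) ** 2).sum(dim=-1).mean()
    # mean per-pixel segmentation error
    loss_cls = (mask_pred - mask_true).abs().mean()
    return lambda_reg * loss_reg + lambda_cls * loss_cls
```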
After the model is trained, the eyelid positions can be drawn by polynomial fitting from the key point results and fused with the precise iris positioning result, so that the final iris positioning output is obtained comprehensively.
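A minimal sketch of the eyelid polynomial fitting, assuming the eyelid key points are given as pixel coordinates and that a quadratic curve is sufficient (the degree is an assumption):

```python
import numpy as np

def fit_eyelid(xs, ys, degree=2):
    """Fit an eyelid curve through detected key points by polynomial
    fitting, as in the post-processing described above."""
    coeffs = np.polyfit(xs, ys, degree)   # least-squares polynomial fit
    return np.poly1d(coeffs)              # callable curve y = f(x)

# e.g. eyelid = fit_eyelid([10, 60, 120], [40, 25, 42]); y80 = eyelid(80)
```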
Further, the positioning result can be adjusted through human-computer interaction: after automatic calibration by the computer, the positioning result can be fine-tuned interactively, for example by manually revising the iris center or dragging key points to correct their positions, so that an accurate iris positioning result is obtained.
In step S3, the operation may be performed manually, or the two images may be rotated automatically after computation by the computer. If automatic rotation is used, human-computer interaction can follow: the rotation of the two images is fine-tuned manually to ensure that the textures of the two images overlap as much as possible.
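A minimal Python sketch of this rotation, assuming the eye-corner coordinates are already known (for example from the marking in step S2); the function name and the choice of rotating about the image center are assumptions for illustration:

```python
import cv2
import numpy as np

def rotate_to_horizontal(img, inner_corner, outer_corner):
    """Rotate an eye image so the inner/outer eye-corner line becomes
    horizontal (step S3). Corner points are (x, y) pixel coordinates."""
    dx = outer_corner[0] - inner_corner[0]
    dy = outer_corner[1] - inner_corner[1]
    angle = np.degrees(np.arctan2(dy, dx))       # current angle of the corner line
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    return cv2.warpAffine(img, M, (w, h))
```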
In step S5, the affine transformation normalizing the iris to a specific inner radius is shown in Fig. 3. The image between the inner and outer edges of the iris is transformed by simulating the stretching of the iris muscle texture, so that the iris texture changes along with the stretching of the inner edge of the iris.
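As an illustration of this radial stretching, the following Python sketch normalizes the pupil radius to a target value while keeping the iris radius fixed, using inverse mapping with bilinear sampling. The function name, parameters and the linear radial model are assumptions; the transformation of the patent's Fig. 3 may differ in detail.

```python
import numpy as np
import cv2

def normalize_pupil_radius(img, center, pupil_r, iris_r, target_pupil_r):
    """Radially stretch the iris annulus so the pupil radius becomes
    target_pupil_r while the iris (outer) radius stays fixed.
    Pixels outside the annulus are left unchanged in this sketch."""
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - center[0], ys - center[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)
    # destination radii [target_pupil_r, iris_r] sample source radii
    # [pupil_r, iris_r] linearly
    t = (r - target_pupil_r) / max(iris_r - target_pupil_r, 1e-6)
    r_src = pupil_r + t * (iris_r - pupil_r)
    inside = (r >= target_pupil_r) & (r <= iris_r)
    map_x = np.where(inside, center[0] + r_src * np.cos(theta), xs)
    map_y = np.where(inside, center[1] + r_src * np.sin(theta), ys)
    return cv2.remap(img, map_x.astype(np.float32),
                     map_y.astype(np.float32), cv2.INTER_LINEAR)
```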
In step S6, the transformation generating the rectangular normalized image is shown in Fig. 4. Specifically, taking the center of the pupil or iris as the origin and starting from the horizontal axis, one rotates a full circle in a fixed direction (clockwise or counterclockwise), recording the image along the normal line within the iris region at each fixed angle into the corresponding column of the normalized rectangular region, thereby completing the rectangular normalization of the iris region.
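The following Python sketch performs such a rectangular (rubber-sheet style) expansion; the rectangle size, sampling density and direction are illustrative assumptions.

```python
import numpy as np
import cv2

def unwrap_to_rect(img, center, pupil_r, iris_r, rect_h=64, rect_w=512):
    """Unwrap the iris annulus into a rect_h x rect_w rectangle (step S6):
    columns sweep the angle from the positive x-axis, rows sample along the
    radial normal from the pupil boundary to the iris boundary."""
    thetas = np.linspace(0, 2 * np.pi, rect_w, endpoint=False)
    radii = np.linspace(pupil_r, iris_r, rect_h)
    rr, tt = np.meshgrid(radii, thetas, indexing='ij')   # (rect_h, rect_w)
    map_x = (center[0] + rr * np.cos(tt)).astype(np.float32)
    map_y = (center[1] + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```

Each column then corresponds to a fixed angle and each row to a fixed relative radius, so marked patches at the same angular position line up between the two unwrapped images.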
In step S10, the following methods may be used to manually judge, when the three pairs of images (image A_1 with image B_1, image A_2 with image B_2, and image A_3 with image B_3) are compared in pairs, whether the corresponding iris texture patches are the same texture patch; the methods include, but are not limited to, the following:
S10_1, judging whether the length of the line connecting the center/center of gravity of each patch to the center of the pupil/iris, and its direction relative to the coordinate axes, are the same;
S10_2, judging whether the gray levels of the two patches are similar;
S10_3, judging whether the contrast between each patch and its surrounding non-patch area is similar;
S10_4, judging whether the boundary shapes of the two patches are similar;
S10_5, after overlaying the two images according to a given rule, calculating the ratio of the number of pixels in the intersection of the two patches to the number of pixels in their union and comparing it with a preset threshold: if the ratio is above the threshold, the patches are judged similar; otherwise, dissimilar;
S10_6, judging whether the differences between the two patches are explainable changes.
In step S11, the number N of identical texture patches is determined as follows. The inventors found that the probability of identical iris textures appearing at the same position in images of different irises is not higher than 5%. Therefore, when the number of identical iris texture patch pairs in two iris images is N, the upper bound P(N) on the probability that the two images do not come from the same iris satisfies $P(N) \leq 0.05^{N}$. When N = 6, $P(N) \leq 1.5625 \times 10^{-8}$; that is, when two iris images contain 6 identical iris texture patches, the probability that they come from different irises does not exceed about two in a hundred million, which satisfies the condition for giving an iris "identity" determination.
In this embodiment, the computer-aided manual iris comparison method uses the original iris image, the circularly normalized iris image and the rectangularly normalized iris image as the data basis for manual iris comparison. On the basis of computer-aided imaging and image attribute calculation, whether two compared iris images belong to the same iris is judged through manual pairwise comparison, so that manual comparison of iris textures is incorporated into the comparison process and pairwise comparison and identification of iris images is realized manually.
In summary, the computer-aided iris comparison method, electronic device and computer-readable storage medium provided by the embodiments of the invention realize iris comparison with human participation, thereby improving the iris recognition success rate and meeting the requirements of iris identification.
In the description of the present specification, reference to the terms "one embodiment," "one particular embodiment," "some embodiments," "for example," "an example," "a particular example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The order of steps involved in the embodiments is illustrative of the practice of the invention, and is not limited and may be suitably modified as desired.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of the embodiments is provided to illustrate the general principles of the invention and is not intended to limit its scope or to restrict the invention to the particular embodiments; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (17)

1. A computer-aided iris comparison method, comprising:
acquiring a first iris image and a second iris image to be compared;
rotating the iris in the first iris image and the iris in the second iris image to the same angle;
iris positioning is carried out on the rotated first iris image and the rotated second iris image;
normalizing the first iris image after iris positioning and the second iris image after iris positioning; the normalization operation comprises one or more of iris radius normalization, pupil radius normalization and rectangular expansion normalization;
calculating attribute information of iris texture features of corresponding positions in the first iris image subjected to normalization operation and the second iris image subjected to normalization operation;
outputting attribute information of the iris texture features of the corresponding positions to display so as to obtain a consistency comparison result of textures of corresponding areas in the first iris image and the second iris image;
and under the condition that the iris texture features of the corresponding positions with the quantity larger than or equal to the set quantity are consistent in the consistency comparison result, confirming that the irises in the first iris image and the second iris image are both from the same iris.
2. The computer-aided iris comparison method of claim 1, further comprising, before rotating the iris in the first iris image and the iris in the second iris image to the same angle:
and receiving a judging result for confirming that the first iris image and the second iris image both accord with the iris image data quality requirement.
3. The computer-aided iris comparison method of claim 1, wherein rotating the iris in the first iris image and the iris in the second iris image to the same angle comprises:
acquiring a straight line where an inner and outer corner connecting line corresponding to the iris in the first iris image is located and a straight line where an inner and outer corner connecting line corresponding to the iris in the second iris image is located;
and rotating the iris image to enable the straight line where the inner and outer eye corner connecting lines corresponding to the iris in the first iris image are located to be consistent with the direction of the straight line where the inner and outer eye corner connecting lines corresponding to the iris in the second iris image are located.
4. The computer-aided iris comparison method of claim 3, wherein:
the method for acquiring the first iris image and the second iris image to be compared comprises the following steps:
Acquiring and displaying a first iris image and a second iris image to be compared;
the method for obtaining the straight line of the inner and outer corner connecting lines corresponding to the irises in the first iris image and the straight line of the inner and outer corner connecting lines corresponding to the irises in the second iris image comprises the following steps:
receiving an inner canthus click command and an outer canthus click command aiming at an iris in the first iris image, and generating a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the first iris image are located according to the inner canthus click command and the outer canthus click command aiming at the iris in the first iris image;
and receiving an inner canthus click command and an outer canthus click command aiming at the iris in the second iris image, and generating a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the second iris image are located according to the inner canthus click command and the outer canthus click command aiming at the iris in the second iris image.
5. The computer-aided iris comparison method of claim 1, wherein iris positioning the rotated first iris image and the rotated second iris image comprises:
respectively carrying out iris positioning on the rotated first iris image and the rotated second iris image by utilizing an iris positioning neural network model to obtain an iris boundary position in the rotated first iris image and an iris boundary position in the rotated second iris image;
The iris positioning neural network model comprises a deep convolution network sharing layer, a key point regression network layer, an iris segmentation network layer and an output layer; the output of the depth convolution network sharing layer is connected with the input of the key point regression network layer and the input of the iris segmentation network layer, and the output of the iris segmentation network layer and the output of the key point regression network layer are connected with the input of the output layer; the depth convolution network sharing layer is used for converting an input iris image into a sharing feature map; the key point regression network layer is used for extracting the position information of the key points of the iris region in the iris image from the shared feature map; the iris segmentation network layer is used for segmenting the shared feature image into segmented images corresponding to each pixel of the shared feature image, and the pixel value of each pixel in the segmented images is used for identifying whether the corresponding pixel belongs to an iris region in the iris image; the output layer is used for calculating to obtain an initial iris boundary in the iris image according to the segmented image and the input iris image, and combining the position information of the initial iris boundary and the key points of the iris region to obtain the iris boundary position in the iris image.
6. The computer-aided iris comparison method of claim 5, further comprising:
training based on a total loss function to obtain the iris positioning neural network model;
wherein the overall loss function is expressed as:
$$\mathrm{Loss}=\lambda_{reg}\times \mathrm{Loss}_{reg}+\lambda_{cls}\times \mathrm{Loss}_{cls}$$
wherein $\mathrm{Loss}$ represents the overall loss function; $\mathrm{Loss}_{reg}$ represents the key point regression loss function corresponding to the key point regression network layer; $\mathrm{Loss}_{cls}$ represents the average segmentation error of iris segmentation corresponding to the iris segmentation network layer; $\lambda_{reg}$ represents the weight of the key point regression loss function; $\lambda_{cls}$ represents the weight of the average segmentation error of iris segmentation; $N$ represents the number of training samples; $M$ represents the number of key points in an iris image; $i$ represents the sequence number of a training sample and $j$ the sequence number of a key point in the iris image, $i$ and $j$ being positive integers with $1 \leq i \leq N$ and $1 \leq j \leq M$; $x_{ij}$, $y_{ij}$ represent the row and column coordinates of a predicted key point; $\hat{x}_{ij}$, $\hat{y}_{ij}$ represent the row and column coordinates of the marked key point; $i'$, $j'$ represent the row and column numbers of a pixel; $H$, $W$ represent the height and width of the iris image respectively; $G(i',j')$ represents the pixel value of the segmented image indicating whether the pixel at row $i'$, column $j'$ is an iris region; and $M(i',j')$ represents the prediction result of whether the pixel at row $i'$, column $j'$ is an iris region.
7. The computer-aided iris comparison method of claim 5, wherein iris positioning the rotated first iris image and the rotated second iris image further comprises:
outputting and displaying the iris boundary position in the rotated first iris image and the iris boundary position in the rotated second iris image to obtain an instruction for manually adjusting key points of an iris region and/or an instruction for adjusting iris boundaries;
and adjusting the iris boundary position of the corresponding iris image according to the instruction of manually adjusting the key points of the iris region and/or the instruction of adjusting the iris boundary.
8. The computer-aided iris comparison method of claim 1, wherein the normalization operation comprises iris radius normalization and pupil radius normalization; and normalizing the first iris image after iris positioning and the second iris image after iris positioning comprises:
and performing scaling operation and affine transformation on the first iris image after iris positioning and the second iris image after iris positioning to realize iris radius normalization and pupil radius normalization on the first iris image after iris positioning and the second iris image after iris positioning.
9. The computer-aided iris comparison method of claim 8, wherein performing iris radius normalization and pupil radius normalization on the first iris image after iris positioning and the second iris image after iris positioning, by performing scaling operation and affine transformation on the first iris image after iris positioning and the second iris image after iris positioning, comprises:
the iris radius normalization is carried out on the first iris image after iris positioning and the second iris image after iris positioning by carrying out scaling operation on the first iris image after iris positioning and the second iris image after iris positioning, so that the iris in the first iris image after iris positioning and the iris in the second iris image after iris positioning have the same iris radius;
and carrying out affine transformation on the first iris image subjected to iris radius normalization and the second iris image subjected to iris radius normalization, so that the iris in the first iris image subjected to iris radius normalization and the iris in the second iris image subjected to iris radius normalization have the same pupil radius, and pupil radius normalization is carried out on the first iris image subjected to iris radius normalization and the second iris image subjected to iris radius normalization, so that iris radius normalization and pupil radius normalization are carried out on the first iris image subjected to iris positioning and the second iris image subjected to iris positioning.
10. The computer-aided iris comparison method of claim 8 or 9, wherein the normalization operation further comprises rectangular expansion normalization; and normalizing the first iris image after iris positioning and the second iris image after iris positioning further comprises:
respectively performing rectangular unfolding normalization on the irises in the first iris image and the second iris image after the irises rotate to the same angle; wherein the first iris image and the second iris image after the iris thereof rotates to the same angle include: the first iris image after iris positioning and the second iris image after iris positioning, the first iris image after iris radius normalization and the second iris image after iris radius normalization, or the first iris image after pupil radius normalization and the second iris image after pupil radius normalization.
11. The computer-aided iris comparison method of claim 10, wherein the normalized first iris image comprises: one or more of the first iris image subjected to iris radius normalization, the first iris image subjected to pupil radius normalization, and the first iris image subjected to rectangular expansion normalization; and the normalized second iris image comprises: one or more of the second iris image subjected to iris radius normalization, the second iris image subjected to pupil radius normalization, and the second iris image subjected to rectangular expansion normalization.
12. The computer-aided iris comparison method of claim 1 or 11, wherein calculating attribute information of iris texture features of corresponding positions in the normalized first iris image and the normalized second iris image includes:
receiving a marking instruction for an iris texture feature in an image selected from the first iris image subjected to normalization operation and the second iris image subjected to normalization operation;
according to the mark position information corresponding to the mark instruction aiming at the iris texture feature in the selected image, obtaining iris texture features of corresponding positions in the rest images in the first iris image subjected to normalization operation and the second iris image subjected to normalization operation;
and calculating attribute information of iris texture features corresponding to the mark position information in each image of the first iris image subjected to normalization operation and the second iris image subjected to normalization operation.
13. The computer-aided iris comparison method of claim 1 or 11, wherein the consistency comparison result of the textures of the corresponding regions in the first iris image and the second iris image is a comprehensive result of the consistency comparison results of the iris texture features at corresponding positions in each pair of images obtained by combining the normalized first iris image and the normalized second iris image one by one; and the consistency comparison result of the iris texture features at corresponding positions in a pair of images is obtained by combining the attribute information of the iris texture features with the itemized comparison results of that attribute information.
14. The computer-aided iris comparison method of claim 1, wherein the attribute information of the iris texture feature comprises iris texture feature position information and iris texture feature image information.
15. The computer-aided iris comparison method of claim 14, wherein the iris texture feature position information comprises one or more of: the center; the center of gravity; the average gray level after image gray normalization; the center in polar coordinates with the pupil center as origin; the center of gravity in polar coordinates with the pupil center as origin; the center in polar coordinates with the iris center as origin; and the center of gravity in polar coordinates with the iris center as origin; and the iris texture feature image information comprises: the average gray level of the iris texture feature and/or the average gray level after image gray normalization.
16. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 15 when the program is executed.
17. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 15.
CN202010694591.2A 2020-07-17 2020-07-17 Computer-aided iris comparison method and device Active CN112001244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010694591.2A CN112001244B (en) 2020-07-17 2020-07-17 Computer-aided iris comparison method and device


Publications (2)

Publication Number Publication Date
CN112001244A CN112001244A (en) 2020-11-27
CN112001244B true CN112001244B (en) 2023-11-03

Family

ID=73466480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010694591.2A Active CN112001244B (en) 2020-07-17 2020-07-17 Computer-aided iris comparison method and device

Country Status (1)

Country Link
CN (1) CN112001244B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949518B (en) * 2021-03-09 2024-04-05 上海聚虹光电科技有限公司 Iris image processing method, device, equipment and storage medium
CN113688874A (en) * 2021-07-29 2021-11-23 天津中科智能识别产业技术研究院有限公司 Method and system for automatically segmenting iris region in human eye iris image
CN113780239B (en) * 2021-09-27 2024-03-12 上海聚虹光电科技有限公司 Iris recognition method, iris recognition device, electronic device and computer readable medium
CN113837117B (en) * 2021-09-28 2024-05-07 上海电力大学 Iris coding method based on novel normalization and depth neural network
CN115083006A (en) * 2022-08-11 2022-09-20 北京万里红科技有限公司 Iris recognition model training method, iris recognition method and iris recognition device

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111161276A (en) * 2019-11-27 2020-05-15 天津中科智能识别产业技术研究院有限公司 Iris normalized image forming method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7756301B2 (en) * 2005-01-26 2010-07-13 Honeywell International Inc. Iris recognition system and method


Non-Patent Citations (1)

Title
Yuan Weiqi; Bai Xiaoguang; Feng Qi. A novel iris image preprocessing method. Opto-Electronic Engineering, 2009, (04), full text. *

Also Published As

Publication number Publication date
CN112001244A (en) 2020-11-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant